Xen NUMA Roadmap
What's this page about
Of course, Xen already runs quite effectively on NUMA machines (for instance, look at cpupools), but there are spots where we can do better, introducing both interface and performance improvements. The purpose of this page is, in fact, to act as a collection point for all the new features and improvements needed to make Xen better support complex virtualization workloads on NUMA servers and hosts.
It also indicates which features are under development, and who in the Xen community is working on them, in contrast with the ones that are still pending and have not been claimed by any developer yet. Finally, for the items that are under development, this page will (try to) track their progress.
The ultimate purpose of this page is to provide detailed context and to ease collaboration among the people involved in NUMA-related development on Xen.
Related pages
For an introduction to NUMA and Xen, check this page: Xen NUMA Introduction. Also, just for reference, some good documentation about what other virtualization solutions do about NUMA can be found here: "Using NUMA Systems with ESX/ESXi", and here: "Performance Evaluation of HPC Benchmarks on VMware’s ESXi Server".
The complete list of open development tasks and projects for Xen is hosted on this page: Xen Development Projects. The features/items from this page that are formulated in such a way that they constitute a well-formed development project (and that, of course, are not being taken care of by anyone yet) are also part of that page. So, in summary: if you want to start developing on Xen and are looking for some random place from which to start, look there. If you are aiming at the same, but with a particular interest in NUMA, or even if you are just interested in how well Xen supports NUMA machines and how things will likely evolve in the future, look here!
Another feature-oriented service provided by the Xen community is our uservoice. Go there to see which features most Xen users (at least, the ones that use that service) would like to see implemented, and in which order of priority. That is not strictly related to this page, but if you think there is some NUMA-related quirk we're missing, feel free to add it to both.
Updating this page
Given the Wiki nature of this page, anything that looks wrong or incomplete can well be fixed directly by whoever notices it. If any kind of clarification is needed, feel free to contact the person in charge of the item in question and/or Dario Faggioli. However, since we are talking about development plans, it would be nice if any doubt and/or modification resulted in a conversation on the xen-devel mailing list, where the real development of Xen takes place. For some guidance on how to interact with that list, have a look here.
Improved NUMA support TODO list
Just notice that each assigned work item contains the name and e-mail of the person working on it. In that context, WORKING means work has already started, and patches may have already been submitted or will be shortly. On the other hand, PLANNED means the person is keen on doing the job, but no code has been written yet (if you want to take over, dropping them a note could be useful, as there could be interesting information they can share). If there is no name at all, the item is something identified as useful, but still unclaimed. Finally, barred items are to be considered done (with the name telling who did the job).
Automatic VM placement
Description
This is about picking a NUMA node (or a set of NUMA nodes) on which a newly created VM would best execute, in order to maximize its own and the system's overall performance.
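Just to illustrate the kind of reasoning involved, here is a minimal sketch of a greedy "best fit" heuristic: among the nodes with enough free memory, pick the one with the smallest VCPUs-per-PCPU ratio. All the names (node_info, pick_node, etc.) are made up for the example; this is not the actual libxl algorithm:

    /* Illustrative sketch of a greedy "best fit" NUMA placement
     * heuristic. All names here are hypothetical, not Xen APIs. */
    #include <stdint.h>

    struct node_info {
        uint64_t free_memkb;   /* free memory on the node */
        unsigned int nr_vcpus; /* VCPUs of the VMs already placed here */
        unsigned int nr_pcpus; /* physical CPUs in the node */
    };

    /* Return the id of the node where the VM fits best, or -1 if no
     * node fits. Preference: enough free memory first, then the least
     * loaded node, i.e., the smallest VCPUs-per-PCPU ratio (compared
     * by cross-multiplication to avoid floating point). */
    static int pick_node(const struct node_info *nodes, int nr_nodes,
                         uint64_t vm_memkb, unsigned int vm_vcpus)
    {
        int best = -1;

        for (int i = 0; i < nr_nodes; i++) {
            if (nodes[i].free_memkb < vm_memkb)
                continue; /* the VM's memory does not fit here */

            if (best == -1 ||
                (nodes[i].nr_vcpus + vm_vcpus) * nodes[best].nr_pcpus <
                (nodes[best].nr_vcpus + vm_vcpus) * nodes[i].nr_pcpus)
                best = i;
        }

        return best;
    }

A real implementation would of course also deal with node distances, multi-node candidates and ties; the sketch only shows the basic "fits in memory, least loaded wins" idea.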
Status
Basics are there. The old xm/XenD toolstack had a placement logic implemented (in XenD) which was not included in the new xl/libxl toolstack in the first place. That has been fixed recently, and automatic placement is now available to xl/libxl users (starting from Xen 4.2). That being said, there is still a lot of room for improvement in making the placement algorithm more advanced and powerful.
Work items
- Dario Faggioli (<dario.faggioli@citrix.com>): at VM creation time, choose a node (or a set of nodes) where the VM fits (memory- and VCPU-wise) and pin the VM's VCPUs to the node's PCPUs. Some of the relevant c/sets: f4b5a21f93ad, 4165d71479f9.
- Dario Faggioli (<dario.faggioli@citrix.com>), WORKING: allow the user to control the placement algorithm by specifying some of the parameters it uses, instead of always determining them implicitly. Last round of patches here (needs reposting).
- Dario Faggioli (<dario.faggioli@citrix.com>), WORKING: enhance the placement algorithm to take latencies between nodes (node distances) into account. Discussion here (needs rethinking before reposting).
- Dario Faggioli (<dario.faggioli@citrix.com>), PLANNED: provide aids to enable easy verification and testing of the placement (stressing it by generating synthetic placement request). Discussion here.
- Dario Faggioli (<dario.faggioli@citrix.com>), PLANNED: enhance the placement algorithm to take some more sophisticated measure of node load into account (not before having defined it!).
- (Semi-)Automatic placement for Dom0. Discussion here.
NUMA aware scheduling
Description
This is about not statically pinning VCPUs to nodes' PCPUs, and instead just having them prefer to run on the nodes where their memory resides. NUMA-awareness for the credit scheduler is one of the key features planned for Xen 4.3.
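The basic idea can be sketched as a two-step CPU pick: first try the idle PCPUs of the nodes holding the VCPU's memory, then fall back to any idle PCPU. A minimal sketch with hypothetical types and names (this is not the actual credit scheduler code):

    /* Sketch of node-affinity-aware CPU selection: prefer an idle PCPU
     * on the nodes where the VCPU's memory lives, fall back to any
     * idle PCPU. Types and helpers are hypothetical, not Xen's. */
    #include <stdbool.h>

    #define NR_PCPUS 64

    struct vcpu_sched_info {
        bool node_affinity[NR_PCPUS]; /* PCPUs on the VCPU's "home" nodes */
        bool hard_affinity[NR_PCPUS]; /* PCPUs the VCPU may run on at all */
    };

    static int pick_cpu(const struct vcpu_sched_info *v,
                        const bool idle[NR_PCPUS])
    {
        /* Step 1: an idle PCPU that is both allowed and node-local. */
        for (int c = 0; c < NR_PCPUS; c++)
            if (idle[c] && v->hard_affinity[c] && v->node_affinity[c])
                return c;

        /* Step 2: any idle PCPU the VCPU is allowed to run on. */
        for (int c = 0; c < NR_PCPUS; c++)
            if (idle[c] && v->hard_affinity[c])
                return c;

        return -1; /* nothing idle: leave it to later load balancing */
    }

Note that node affinity stays a preference, not a constraint: if no node-local PCPU is free, the VCPU still runs, just remotely.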
Status
Patches posted and reviewed. Reposting took quite a while because of the extensive benchmarking performed between the two releases of the patch series, and because some scheduling anomalies were discovered in the credit scheduler along the way and needed fixing.
Work items
- Dario Faggioli (<dario.faggioli@citrix.com>), WORKING: NUMA-awareness in credit. Patch series: v1, v2 (latest).
- NUMA-awareness in credit2.
- NUMA-awareness in SEDF.
Dynamic memory migration
Description
This is about moving a VM's memory between different nodes of one host, either upon user request or automatically, as a form of load balancing (similar to what happens on the CPUs with the NUMA-aware scheduler). Memory migration is one of the features desirable for Xen 4.3.
Status
Started, but not yet ready to leave some developer's private patch queue on their dev-box. The need to support both HVM and PV guests complicates things quite a bit. Xenbus, qemu, and a lot of inherent characteristics of the Xen architecture get in the way of having it done simply within the hypervisor (as happens for NUMA-aware scheduling). The current idea being pursued is for it to happen at the low toolstack level (perhaps with the hypervisor exporting statistics that will help toolstacks and users make proper decisions), sort of mimicking a suspend-resume cycle.
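Purely as an illustration of what such a toolstack-level, chunked, suspend-resume-like approach might look like (all the helpers here, vm_pause(), move_pages_to_node(), etc., are hypothetical placeholders, not Xen or libxl APIs):

    /* Sketch of toolstack-driven, chunked memory migration between
     * nodes, pausing the VM only for the duration of each chunk. */

    /* Hypothetical helpers, declared only to make the sketch compile. */
    void vm_pause(int domid);
    void vm_unpause(int domid);
    void move_pages_to_node(int domid, unsigned long first,
                            unsigned long count, int node);
    void update_p2m_mappings(int domid, unsigned long first,
                             unsigned long count);

    #define CHUNK_PAGES 256 /* a few MB at a time bounds the pause time */

    void migrate_vm_memory(int domid, int target_node,
                           unsigned long nr_pages)
    {
        for (unsigned long p = 0; p < nr_pages; p += CHUNK_PAGES) {
            unsigned long n = nr_pages - p < CHUNK_PAGES
                              ? nr_pages - p : CHUNK_PAGES;

            vm_pause(domid);                  /* brief, per-chunk pause */
            move_pages_to_node(domid, p, n, target_node);
            update_p2m_mappings(domid, p, n); /* fix guest mappings */
            vm_unpause(domid);                /* VM keeps running */
        }
    }

The point of the chunking is that each pause is short enough that, ideally, the guest perceives no downtime, much like live migration's iterative approach.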
Work item
- Dario Faggioli (<dario.faggioli@citrix.com>), WORKING: enable moving memory from one node to another (on the same host) upon user request, doing that in small chunks so that (ideally) no downtime is perceived by the VM.
- Dario Faggioli (<dario.faggioli@citrix.com>), PLANNED: track how much non-node-local memory is being accessed, and by whom. Report it from the hypervisor to the upper layers, so that users or toolstacks can properly consume it.
- Make sure everything, not only VMs' pages, is allocated on the proper nodes (see alloc_{dom,xen}_heap_page()) and, if that is the case, properly moved to another one (e.g., the per-VCPU stacks and data segments).
IONUMA support
Description
If not only memory, but also I/O controllers are attached to specific nodes, you will end up with devices which are better used by VMs running on those nodes (or, vice versa, VMs that are better run on the proper node if/when they want to use a specific device). Yang Zhang did some previous investigation of this situation, which, BTW, goes under the name IONUMA, and the result is this XenSummit 2011 presentation: "I/O Scalability in Xen".
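As an aside on collecting this kind of information from Dom0: on Linux, the node a PCI device is attached to is already exposed via the sysfs numa_node attribute (which reads -1 when the node is unknown). A minimal sketch (the BDF used below is just an example):

    /* Dom0-side sketch: read the NUMA node a PCI device is attached
     * to, via the Linux sysfs "numa_node" attribute. */
    #include <stdio.h>

    int pci_device_node(const char *bdf)
    {
        char path[256];
        int node = -1;

        snprintf(path, sizeof(path),
                 "/sys/bus/pci/devices/%s/numa_node", bdf);

        FILE *f = fopen(path, "r");
        if (!f)
            return -1;               /* no such device, or no sysfs */
        if (fscanf(f, "%d", &node) != 1)
            node = -1;               /* attribute present but unreadable */
        fclose(f);

        return node;                 /* -1 means "node unknown" */
    }

    int main(void)
    {
        printf("node: %d\n", pci_device_node("0000:00:1f.2"));
        return 0;
    }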
Status
Looks like The Right Thing^TM can be made to happen by acting at both the Dom0 and the hypervisor level. The hypervisor level, which looks preferable for a number of reasons, would mean instrumenting XENMEM_exchange a little bit, but not before having verified where all the information we need to understand the IONUMA characteristics of the host (which device is where?) is, and how to get at it. It is also important to investigate, on actual IONUMA-enabled hardware, how big of an issue neglecting it is.
Work item
- Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: collecting IONUMA information. Where is the information about which device is attached to which controller on which node? When does it become available?
- Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: export IONUMA information to the user. As we currently do with things like xl info -n, which tells the user which PCPUs are part of which node, allow for something similar wrt device-to-node mappings.
- Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: performance assessment. What happens, from an I/O throughput perspective, if we give memory to, and run, a VM as far as possible from the node where the device is attached? How bad is that? What happens in the opposite (best) case?
- Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: Dom0/Driver IONUMA. Devices should have their DMA buffers allocated on (or as close as possible to) the node to which their controllers are attached.
- Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: guest IONUMA. When guest boots with some passed-through devices, we should try to allocate the memory from the node where the device resides and, especially for multi-node guests, let the guest itself know the IONUMA topology.
Virtual NUMA topology exposure to guests
Description
If a guest ends up on more than one node, make sure it knows it is running on a NUMA platform (smaller than the actual host, but still NUMA). This is very important for some specific kinds of workloads, for instance HPC ones. In fact, if the guest OS (and its applications) has any NUMA support, exporting a virtual topology to the guest is the only way to make that support effective, and perhaps to fill, at least to some extent, the gap introduced by the need to distribute the guest over more than one node. Under the name vNUMA, this is one of the key and most advertised features of VMware vSphere 5 ("vNUMA: what it is and why it matters").
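For reference, a Linux guest can easily check what (virtual) NUMA topology it actually sees, since the kernel exposes one sysfs directory per node. A minimal sketch, assuming a Linux guest with sysfs mounted:

    /* Guest-side sketch (Linux): list the NUMA nodes the guest sees,
     * which is where a virtual topology exposed by the hypervisor
     * would show up. */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>

    int main(void)
    {
        DIR *d = opendir("/sys/devices/system/node");
        struct dirent *e;

        if (!d) {
            puts("no NUMA topology exposed (or no sysfs)");
            return 0;
        }

        /* Entries named "node<N>" are the visible NUMA nodes. */
        while ((e = readdir(d)) != NULL)
            if (strncmp(e->d_name, "node", 4) == 0 &&
                e->d_name[4] >= '0' && e->d_name[4] <= '9')
                printf("guest sees NUMA %s\n", e->d_name);

        closedir(d);
        return 0;
    }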
Status
First of all, it must be considered that this interacts with some of the above points:
- automatic placement for resuming/migrating domains: if they have a virtual topology, better not to change it;
- memory migration: it can change the actual topology (should we update it on-line or disable memory migration?)
Discussion on whether or not this is something worthwhile to have, and (if yes) how to deal with it, happened here.
NUMA and memory over-committing
Description
Xen offers a set of different mechanisms for over-committing the host memory: ballooning, Tmem, paging, sharing, etc. (not all of them are available for the same type of guest, or at the same time). All of these interact with NUMA. For instance, sharing pages between guests residing on different nodes might not be a good idea. Also, ballooning and automatic placement should cooperate, or it is not entirely possible to tell how much free memory there is on each node of the host at any given time. And more than that...
Status
Nothing being done yet.
Inter-VM dependencies and communication issues
Description
If a workload is made up of more than just one VM on the same NUMA host, it might be best to have them share the nodes as much as possible, or perhaps to do exactly the opposite, depending on the specific characteristics of the workload itself. This might be worth considering during placement, memory migration and perhaps scheduling.
Status
A huge amount of work is being done at the Computer Laboratory of the University of Cambridge, by Anil Madhavapeddy and Malte Schwarzkopf. Details are here, while this is the page through which everyone can contribute to that project by submitting their own results. That being said, nothing is being done (yet) to somehow integrate and/or take advantage of this in Xen.
Benchmarking and performances evaluation
Description
Performance evaluation is extremely important here. We need both to:
- agree on a set of meaningful benchmarks, for answering questions like 'are we actually improving performance for the right workloads?', 'aren't we introducing any performance regressions?', etc.;
- figure out how to automatically run them concurrently, in a varying number of VMs and, possibly, on different hosts (i.e., NUMA hosts with different characteristics).
Status
Benchmarks are being run in parallel within a varying number of VMs, via a custom set of scripts that needs to be polished and shared, to make the whole thing reproducible. The benchmarks considered up to now are the following:
- SpecJBB2005: really looks like a good one, with very stable and consistent results.
- sysbench (for memory and CPU): not so good; it is hard to establish a baseline... results vary a lot even if nothing changes (or at least, nothing that we can control).
- lmbench (some of the lat_*): seems good, but needs more investigation.
- stream: should be considered, but seems to have issues running with more than one thread (needs more investigation).
Work item
- Use the Xen Open Source testing infrastructure to automate benchmarking.