Xen NUMA Roadmap
Improving Performance for NUMA Machines
This page is meant to contain the list of items that should be considered in order to improve NUMA support in Xen. Of course, Xen can already run on NUMA machines, but there are spots where we can do better and bring significant performance improvements.
Outstanding development projects and tasks are collected in the Xen Development Projects page, so have a look there (too) if you are looking for something to do. This page provides both more details and context about the NUMA-related projects and the items currently being worked on, to help track progress and ease collaboration.
The list below is not necessarily complete or representative of the actual work ongoing on the specific items (although it tries to be!). This means that if something looks wrong or missing, feel free to update it and/or ask Dario for clarification or, better, ask on the xen-devel mailing list, which is where all the real development takes place!
Automatic placement at guest creation time
The basics are there (shipping with Xen 4.2, as far as xl is concerned). However, a lot of other things are still missing and/or can be improved, for instance:
- automated verification and testing of the placement (Dario: working on it);
- benchmarking and tweaking the placement heuristic (Dario: working on it); a sketch of the kind of node-scoring logic involved is shown right after this list;
- choosing/building up some measure of node load (more accurate than just counting vcpus) on which to rely during placement (Dario: working on it);
- consider IONUMA during placement. In some more detail:
- Dom0 IONUMA, i.e., devices should get their DMA buffers from the node to which their controllers are attached (currently Dom0 allocates these buffers without taking NUMA into account at all) (Yang Zhang: working on it?)
- Guest IONUMA, i.e., when a guest boots with some passed-through devices, we need to allocate its memory from the node where the device resides, as well as let the guest know the IONUMA topology (topic mentioned during Xen Summit 2011 in this presentation: I/O Scalability in Xen) (Yang Zhang: working on it?)
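To make the heuristic items above more concrete, here is a minimal, self-contained sketch of the kind of node-scoring logic placement relies on: candidate node sets with enough free memory for the new guest are compared, preferring smaller sets, then less loaded ones, then the ones with more free memory. The struct fields, the ordering of the criteria and the function names are illustrative only, not libxl's actual algorithm or API.

/*
 * Hypothetical sketch of a placement heuristic: candidate node sets with
 * enough free memory for the new guest are compared, preferring smaller
 * sets, then less loaded ones, then more free memory.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct candidate {
    unsigned int nr_nodes;   /* how many nodes are in this candidate set */
    uint64_t free_memkb;     /* free memory summed over the set */
    unsigned int nr_vcpus;   /* vcpus of existing domains already placed here */
};

/* Return true if 'a' is a better placement than 'b'. */
static bool candidate_better(const struct candidate *a,
                             const struct candidate *b)
{
    if (a->nr_nodes != b->nr_nodes)
        return a->nr_nodes < b->nr_nodes;   /* smaller node sets first */
    if (a->nr_vcpus != b->nr_vcpus)
        return a->nr_vcpus < b->nr_vcpus;   /* less loaded sets next */
    return a->free_memkb > b->free_memkb;   /* then more free memory */
}

/* Pick the best candidate that can host the guest's memory, or -1. */
static int pick_candidate(const struct candidate *cands, size_t n,
                          uint64_t guest_memkb)
{
    int best = -1;

    for (size_t i = 0; i < n; i++) {
        if (cands[i].free_memkb < guest_memkb)
            continue;   /* not enough free memory on this set */
        if (best < 0 || candidate_better(&cands[i], &cands[best]))
            best = (int)i;
    }

    return best;
}

Tweaking the heuristic largely means experimenting with the comparison order above and with better load metrics than the raw vcpu count.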
Placement in Xen internals
- Automatic placement of Dom0, if possible. See here (current code only affects DomUs);
- placing internal Xen items, such as the per-cpu stacks and data areas, on the local NUMA node, rather than unconditionally on node 0 as is the case at the moment. That might mean changes to alloc_{dom,xen}heap_page(), or perhaps just using them (properly) in more places (Andrew/XS team will work on it). A rough sketch of the idea follows.
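The sketch below shows the shape of such a change, assuming Xen's alloc_xenheap_pages(order, memflags) / MEMF_node() allocator interface; the exact names and signatures should be checked against the current tree, and the caller would look the node up via the cpu-to-node map.

/*
 * Rough sketch only (not actual Xen code): allocate a per-cpu area from a
 * given node, falling back to "any node" if that one is exhausted.
 */
#include <xen/mm.h>   /* alloc_xenheap_pages(), MEMF_node() -- assumed */

static void *alloc_percpu_area_on_node(unsigned int node, unsigned int order)
{
    /* Prefer pages from the cpu's own node... */
    void *p = alloc_xenheap_pages(order, MEMF_node(node));

    /* ...but don't fail outright if that node has no free memory left. */
    if ( p == NULL )
        p = alloc_xenheap_pages(order, 0);

    return p;
}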
NUMA aware scheduling
Don't pin vcpus to the pcpus of specific nodes; just have them _prefer_ running on the nodes where their memory is (Dario: working on it, re-posting patches soon).
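As a toy illustration of the "prefer, don't pin" idea, the sketch below picks a pcpu for a vcpu in two steps: first among idle pcpus on the node(s) holding its memory, then among any idle pcpu it is allowed on. The flat 64-bit masks and the function name are simplifications (Xen uses cpumask_t and the scheduler's own load-balancing paths), not the actual patches.

/*
 * Toy sketch of "prefer, don't pin" pcpu selection.
 */
#include <stdint.h>

/* Returns a pcpu index, or -1 if no suitable pcpu is idle. */
static int pick_pcpu(uint64_t idle_mask,      /* currently idle pcpus       */
                     uint64_t hard_affinity,  /* pcpus the vcpu may run on  */
                     uint64_t node_affinity)  /* pcpus of its home node(s)  */
{
    /* Step 1: idle pcpus that are both allowed and node-local. */
    uint64_t preferred = idle_mask & hard_affinity & node_affinity;
    /* Step 2: any allowed idle pcpu (memory accesses will be remote). */
    uint64_t fallback  = idle_mask & hard_affinity;

    uint64_t pick = preferred ? preferred : fallback;
    if (!pick)
        return -1;

    /* Lowest set bit, i.e. the first suitable pcpu. */
    return __builtin_ctzll(pick);
}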
Dynamic memory migration
Between different nodes of the host, as the counterpart of the NUMA-aware scheduler. See here (Dario: working on it).
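For context, here is a purely conceptual sketch of what moving a single guest page between nodes involves. Every helper in it is a hypothetical placeholder, and a real implementation also has to quiesce or trap accesses to the page while it is copied and fix up all existing mappings (p2m, grant mappings, etc.).

/*
 * Conceptual sketch of migrating one guest page between host nodes.  All
 * helpers (alloc_page_on_node(), map_page(), unmap_page(),
 * guest_physmap_replace(), free_page()) are hypothetical placeholders.
 */
#include <string.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-ins for hypervisor primitives. */
void *alloc_page_on_node(unsigned int node);
void *map_page(void *page);
void unmap_page(void *va);
int guest_physmap_replace(unsigned long gpfn, void *old_page, void *new_page);
void free_page(void *page);

static int migrate_guest_page(unsigned long gpfn, void *old_page,
                              unsigned int target_node)
{
    void *new_page = alloc_page_on_node(target_node);
    if (!new_page)
        return -1;                       /* target node is out of memory */

    void *src = map_page(old_page);
    void *dst = map_page(new_page);
    memcpy(dst, src, PAGE_SIZE);         /* copy the contents over */
    unmap_page(src);
    unmap_page(dst);

    /* Point the guest's pseudo-physical frame at the new page... */
    if (guest_physmap_replace(gpfn, old_page, new_page)) {
        free_page(new_page);
        return -1;
    }

    /* ...and release the old one on the source node. */
    free_page(old_page);
    return 0;
}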
Virtual NUMA topology exposure to guests
A.k.a. guest-numa. If a guest ends up on more than one node, make sure it knows it's running on a NUMA platform (smaller than the actual host, but still NUMA). This interacts with some of the points above:
- consider this during automatic placement for resuming/migrating domains (if they have a virtual topology, better not to change it);
- consider this during memory migration (it can change the actual topology; should we update it on-line, or disable memory migration?)
Discussion on whether or not this is something worthwhile to have and, if yes, how to deal with it is going on here.
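For illustration, the structures below show the kind of information a virtual NUMA topology would need to carry from the toolstack to the guest (e.g. via ACPI SRAT/SLIT for HVM guests, or a PV interface): which vcpus and which guest memory ranges belong to which virtual node, which physical node currently backs each virtual node, and the inter-node distances. They are purely illustrative, not an existing Xen interface.

/*
 * Illustrative only: information describing a guest's virtual NUMA
 * topology.  Not an existing Xen interface.
 */
#include <stdint.h>

#define MAX_VNODES 8

struct vnode {
    uint64_t start_gpfn, end_gpfn;  /* guest memory range of this vnode     */
    uint64_t vcpu_mask;             /* vcpus assigned to it (toy: max 64)   */
    unsigned int pnode;             /* physical node currently backing it   */
};

struct vnuma_topology {
    unsigned int nr_vnodes;
    struct vnode vnode[MAX_VNODES];
    /* vnode-to-vnode distances, as in the ACPI SLIT table. */
    uint8_t distance[MAX_VNODES][MAX_VNODES];
};

Keeping something like the pnode field up to date is exactly where this interacts with memory migration and with placement of resumed/migrated domains.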
NUMA and memory shrinking and sharing
In some more detail:
- page sharing on NUMA boxes: it's probably sane to make it possible to disable sharing pages across nodes;
- ballooning and its interaction with placement (races, the amount of memory needed and reported being different at different times, etc.);
- transcendent memory (Tmem) as a mechanism for discriminating between frequently and infrequently used data, and thus helping allocate it properly. In fact (Dan Magenheimer: can't work on it, but can provide mentoring to anyone interested):
- Tmem very nicely separates infrequently-used data from frequently-used data (the API/ABI is now fully in upstream Linux);
- add to Xen something like "alloc_page_on_any_node_but_the_current_one()" (or "any_node_except_this_guests_node_set" for multinode guests) and have Xen's tmem implementation use it (especially in combination with selfballooning). This could solve a significant part of the NUMA problem when running tmem-enabled guests.
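As a very rough sketch of what such a helper could look like (all names here are made up; alloc_page_on_node() stands in for a node-exact allocation through Xen's heap allocator):

/*
 * Illustrative sketch of "alloc_page_on_any_node_but_the_current_one()":
 * walk the online nodes, skipping those in 'excluded_nodes' (e.g. the
 * guest's own node set), and return the first page that can be allocated
 * elsewhere.  alloc_page_on_node() is a hypothetical stand-in, not a
 * real Xen function.
 */
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in: allocate one page strictly on 'node', or NULL. */
void *alloc_page_on_node(unsigned int node);

static void *alloc_page_on_any_node_but(uint64_t excluded_nodes,
                                        unsigned int nr_online_nodes)
{
    for (unsigned int node = 0; node < nr_online_nodes; node++) {
        if (excluded_nodes & (1ULL << node))
            continue;                    /* keep this node for hot data */
        void *p = alloc_page_on_node(node);
        if (p)
            return p;                    /* cold (tmem) data lands here */
    }
    return NULL;                         /* every other node is full too */
}

Combined with selfballooning, infrequently-used (tmem) data would then tend to accumulate on remote nodes, leaving local memory free for frequently-used guest pages.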
Inter-VM dependencies and communication issues
If a workload is made up of more than just one VM and they all share the same (NUMA) host, it might be best to have them share the nodes as much as possible, or perhaps do exactly the opposite, depending on the specific characteristics of the workload itself. This might be considered during placement, memory migration and perhaps scheduling.
A huge amount of work is being done here, at the Computer Lab of the University of Cambridge, by Anil Madhavapeddy and Malte Schwarzkopf. This is the page through which everyone can contribute to that project by submitting their own results.
Benchmarking and performance evaluation
In general, this means both agreeing on a (set of) relevant workload(s) and on how to extract meaningful performance data from them (and maybe how to do that automatically?).