Xen NUMA Roadmap

Improving Performance for NUMA Machines

This page is meant to contain the list of items that should be considered in order to improve the NUMA support in Xen. Of course, Xen can already run on NUMA machines, but there are spots where we can do better, bringing significant performance improvements.

The list below is not necessarily complete or representative of any actual work ongoing on the specific items (although it tries to be!). This means that if something looks wrong or missing, feel free to update it and/or ask for clarification on the xen-devel mailing list, which is where all the real development takes place!

Automatic placement at guest creation time

The basics are there (shipping with Xen 4.2, as far as xl is concerned). However, a lot of other things are still missing and/or can be improved, for instance:

  • automated verification and testing of the placement (Dario: working on it);
  • benchmarking and tweaking the placement heuristic (Dario: working on it);
  • choosing/building up some measure of node load (more accurate than just counting vcpus) on which to rely during placement (Dario: working on it; see the sketch after this list);
  • considering IONUMA during placement;
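As a purely illustrative example, here is a minimal, self-contained C sketch of what a node "load" metric of that kind could look like, combining free memory and a vcpu-per-pcpu ratio rather than a bare vcpu count. All names in it (struct node_info, node_score()) are hypothetical, not the actual libxl placement code:

  /* Illustrative sketch only: a possible node "load" metric combining free
   * memory and the number of vcpus already placed there.  All struct and
   * function names here are hypothetical, not the actual libxl code. */
  #include <stdio.h>

  struct node_info {
      int           node_id;
      unsigned long free_kb;     /* free memory on the node */
      unsigned int  nr_vcpus;    /* vcpus of domains already placed here */
      unsigned int  nr_pcpus;    /* physical cpus belonging to the node */
  };

  /* Lower is better: a node is preferred if it has enough free memory and
   * a low vcpu-per-pcpu ratio, instead of just counting vcpus. */
  static double node_score(const struct node_info *n, unsigned long need_kb)
  {
      if (n->free_kb < need_kb || n->nr_pcpus == 0)
          return -1.0;                          /* unsuitable */
      return (double)n->nr_vcpus / n->nr_pcpus; /* load proxy */
  }

  int main(void)
  {
      struct node_info nodes[] = {
          { 0, 8 << 20, 12, 8 },   /* 8 GiB free, 12 vcpus, 8 pcpus */
          { 1, 4 << 20,  4, 8 },   /* 4 GiB free,  4 vcpus, 8 pcpus */
      };
      unsigned long need_kb = 2 << 20;          /* 2 GiB guest */
      int best = -1;
      double best_score = 0;

      for (unsigned int i = 0; i < 2; i++) {
          double s = node_score(&nodes[i], need_kb);
          if (s >= 0 && (best < 0 || s < best_score)) {
              best = nodes[i].node_id;
              best_score = s;
          }
      }
      printf("candidate node: %d\n", best);
      return 0;
  }

The only point here is that such a metric normalises by pcpu count and rules out nodes without enough free memory, which is roughly the kind of refinement the item above asks for.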

Placement in Xen internals

  • automatic placement of Dom0, if possible (the current code only affects DomUs);
  • placing internal Xen items, such as the per-cpu stacks and data area, on the local NUMA node, rather than unconditionally on node 0 as happens at the moment. As part of this, alloc_{dom,xen}heap_page() will be changed to allow specifying which node(s) to allocate memory from (Andrew/XS team will work on it; see the sketch after this list).
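A minimal sketch of the intended direction, assuming a node-aware allocation interface. Nothing below is actual Xen code: model_alloc_on_node() and pick_node_for_cpu() are hypothetical stand-ins for a node-parameterised alloc_{dom,xen}heap_page():

  /* Sketch of node-local allocation for per-cpu areas, standalone model only:
   * the helpers (pick_node_for_cpu(), model_alloc_on_node()) are hypothetical
   * stand-ins for a node-aware alloc_{dom,xen}heap_page() interface. */
  #include <stdio.h>
  #include <stdlib.h>

  #define NR_CPUS   8
  #define NR_NODES  2

  /* In this model each cpu belongs to a node; in Xen this mapping would
   * come from the firmware-derived cpu-to-node information. */
  static int cpu_to_node[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

  static int pick_node_for_cpu(int cpu)
  {
      return cpu_to_node[cpu];
  }

  /* Stand-in for an allocator that takes an explicit node preference and
   * falls back to "any node" if the preferred one has no free memory. */
  static void *model_alloc_on_node(size_t size, int node)
  {
      void *p = malloc(size);
      if (p)
          printf("allocated %zu bytes, preferred node %d\n", size, node);
      return p;
  }

  int main(void)
  {
      void *percpu_area[NR_CPUS];

      /* Today the per-cpu stacks/data effectively end up on node 0; the
       * roadmap item is to pass the cpu's own node down to the allocator. */
      for (int cpu = 0; cpu < NR_CPUS; cpu++)
          percpu_area[cpu] = model_alloc_on_node(4096, pick_node_for_cpu(cpu));

      for (int cpu = 0; cpu < NR_CPUS; cpu++)
          free(percpu_area[cpu]);
      return 0;
  }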

NUMA aware scheduling

Don't pin vcpus to the pcpus of their node(s); just have them prefer running on the nodes where their memory is (Dario: working on it, re-posting patches soon).
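A minimal sketch of the "prefer, don't pin" idea, assuming a simple bitmask representation of pcpus; the names and the two-step pick are illustrative, not the credit scheduler's actual code:

  /* Sketch only: a vcpu keeps its full cpu affinity, but idle pcpus on the
   * nodes holding its memory (the node affinity) are tried first. */
  #include <stdio.h>

  typedef unsigned long long cpumask_t;   /* one bit per pcpu, up to 64 pcpus */

  static int first_idle_in(cpumask_t candidates, cpumask_t idle)
  {
      cpumask_t m = candidates & idle;
      for (int cpu = 0; cpu < 64; cpu++)
          if (m & (1ULL << cpu))
              return cpu;
      return -1;
  }

  /* Two-step pick: node-affine idle pcpus first, then any allowed idle pcpu. */
  static int pick_pcpu(cpumask_t cpu_affinity, cpumask_t node_affinity,
                       cpumask_t idle)
  {
      int cpu = first_idle_in(cpu_affinity & node_affinity, idle);
      if (cpu < 0)
          cpu = first_idle_in(cpu_affinity, idle);
      return cpu;
  }

  int main(void)
  {
      cpumask_t cpu_affinity  = 0xffULL;  /* vcpu may run on pcpus 0-7 */
      cpumask_t node_affinity = 0xf0ULL;  /* its memory is on pcpus 4-7's node */
      cpumask_t idle          = 0x13ULL;  /* pcpus 0, 1 and 4 are idle */

      printf("picked pcpu %d\n", pick_pcpu(cpu_affinity, node_affinity, idle));
      return 0;
  }

Here the vcpu lands on pcpu 4 (node-local) even though pcpus 0 and 1 are also idle, yet it would still fall back to them if no node-local pcpu were free.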

Dynamic memory migration

Migrate guest memory between different nodes of the host, as the counterpart of the NUMA-aware scheduler (Dario: working on it).
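A conceptual, self-contained model of what migrating a single guest page between nodes involves (allocate on the destination node, copy, switch the mapping, free the original); the toy p2m and helpers below are hypothetical, not Xen's real p2m or heap interfaces:

  /* Conceptual model of migrating one guest page between nodes. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define PAGE_SIZE 4096

  struct model_page {
      int  node;                 /* node the backing memory lives on */
      char data[PAGE_SIZE];
  };

  /* Toy p2m: guest frame number -> backing page. */
  static struct model_page *p2m[16];

  static struct model_page *alloc_page_on_node(int node)
  {
      struct model_page *pg = calloc(1, sizeof(*pg));
      if (pg)
          pg->node = node;
      return pg;
  }

  /* Move gfn's backing page to dst_node.  The guest must not be able to
   * write the page while it is copied (pause the vcpus or write-protect it). */
  static int migrate_gfn(unsigned int gfn, int dst_node)
  {
      struct model_page *old = p2m[gfn], *new;

      if (!old || old->node == dst_node)
          return 0;                       /* nothing to do */
      new = alloc_page_on_node(dst_node);
      if (!new)
          return -1;
      memcpy(new->data, old->data, PAGE_SIZE);
      p2m[gfn] = new;                     /* single switch of the mapping */
      free(old);
      return 0;
  }

  int main(void)
  {
      p2m[3] = alloc_page_on_node(0);
      strcpy(p2m[3]->data, "guest data");
      migrate_gfn(3, 1);
      printf("gfn 3 now on node %d: %s\n", p2m[3]->node, p2m[3]->data);
      return 0;
  }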

Virtual NUMA topology exposure to guests

A.k.a. guest NUMA. If a guest ends up on more than one node, make sure it knows it's running on a NUMA platform (smaller than the actual host, but still NUMA). This interacts with some of the above points:

  • consider this during automatic placement for resuming/migrating domains (if they have a virtual topology, better not to change it);
  • consider this during memory migration (it can change the actual topology; should we update it on-line, or disable memory migration?)
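For illustration, a virtual NUMA topology description on the toolstack side could look something like the sketch below; the struct and its fields are hypothetical, not an actual Xen or libxl interface, and the guest would ultimately see the information via something like ACPI SRAT/SLIT tables or a PV equivalent:

  /* Hypothetical sketch of a virtual NUMA topology description. */
  #include <stdio.h>

  #define VNUMA_MAX_NODES 4
  #define MAX_VCPUS       16

  struct vnuma_topology {
      unsigned int  nr_vnodes;
      unsigned long mem_mb[VNUMA_MAX_NODES];          /* memory per vnode */
      unsigned int  vcpu_to_vnode[MAX_VCPUS];         /* vcpu placement */
      unsigned int  distance[VNUMA_MAX_NODES][VNUMA_MAX_NODES];
  };

  int main(void)
  {
      /* A 4-vcpu, 4 GiB guest split across two virtual nodes. */
      struct vnuma_topology t = {
          .nr_vnodes     = 2,
          .mem_mb        = { 2048, 2048 },
          .vcpu_to_vnode = { 0, 0, 1, 1 },
          .distance      = { { 10, 20 }, { 20, 10 } },
      };

      for (unsigned int v = 0; v < 4; v++)
          printf("vcpu %u -> vnode %u\n", v, t.vcpu_to_vnode[v]);
      printf("vnode distances: local %u, remote %u\n",
             t.distance[0][0], t.distance[0][1]);
      return 0;
  }

A description like this is also what the two bullets above would have to preserve (during placement of resumed/migrated domains) or update (if memory migration changes which host node backs a vnode).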

NUMA and memory shrinking and sharing

In some more detail:

  • page sharing on NUMA boxes: it's probably sane to make it possible to disable sharing pages across nodes (see the sketch after this list);
  • ballooning and its interaction with placement (races, the amount of memory needed and reported being different at different times, etc.).
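A tiny sketch of what a "disable sharing across nodes" knob could look like, with hypothetical names rather than the actual memory-sharing code:

  /* Sketch of a "don't share across nodes" check: two candidate pages are
   * only deduplicated if their backing frames live on the same node. */
  #include <stdbool.h>
  #include <stdio.h>

  struct frame_info {
      unsigned long mfn;
      int           node;    /* node owning the frame */
  };

  static bool sharing_allowed(const struct frame_info *a,
                              const struct frame_info *b,
                              bool allow_cross_node)
  {
      return allow_cross_node || a->node == b->node;
  }

  int main(void)
  {
      struct frame_info f1 = { .mfn = 100, .node = 0 };
      struct frame_info f2 = { .mfn = 200, .node = 1 };

      printf("share (cross-node disabled): %s\n",
             sharing_allowed(&f1, &f2, false) ? "yes" : "no");
      printf("share (cross-node enabled):  %s\n",
             sharing_allowed(&f1, &f2, true) ? "yes" : "no");
      return 0;
  }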

Inter-VM dependencies and communication issues

If a workload is made up of more than one VM and they all share the same (NUMA) host, it might be best to have them share the nodes as much as possible, or perhaps to do just the opposite, depending on the specific characteristics of the workload itself. This might be considered during placement, memory migration and perhaps scheduling.

Benchmarking and performance evaluation

In general, this means both agreeing on a (set of) relevant workload(s) and on how to extract meaningful performance data from them (and maybe how to do that automatically?).