Xen NUMA Roadmap

Revision as of 15:14, 21 February 2014

About This Page

This page acts as a collection point for all NUMA-related features. The idea is to use this space to summarize the status of each of them and track their progress, in the hope of facilitating collaboration between the various community members as much as possible, and of limiting the risk of duplicating efforts.

For more general information about NUMA on Xen, check this page.

Updating this page

This is a Wiki, so please go ahead and update/fix things (if you are not a Wiki editor, see this). The maintainer of this page is Dario, so also feel free to contact him about anything you think you need. Even better, especially if it is about the actual development of one of the features, start a conversation on the xen-devel mailing list (but in that case, be sure you follow this).

Legend

In the list below, each Work item contains the name and e-mail address of the person working on it. In that context, WORKING means work has already started, and patches may already have been submitted or will be shortly.

PLANNED means the person is keen on doing the job, but no code has been written yet. If you want to help or take over, consider dropping that person a note.

If there is no name at all, the item is something identified as useful, but still unclaimed.

Finally, a barred work item means it is done (and the name tells who did it).

NUMA Features

Automatic VM placement

Description

This is about picking a NUMA node (or a set of NUMA nodes) on which a newly created VM would best execute, in order to maximize both its own and the overall system performance.

Status

Basics are there. The old xm/XenD toolstack had placement logic implemented (in XenD) which was not included in the new xl/libxl toolstack in the first place. That has been fixed recently, and now (starting from Xen 4.2) automatic placement is available to xl/libxl users. That being said, there is still a lot of room for improvement in making the placement algorithm more advanced and powerful.
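
Just to give an idea of the kind of heuristic involved, here is a minimal, self-contained sketch of a greedy "pick the node where the VM fits best" selection. It is not the actual libxl code: the structures, field names and tie-breaking rule are made up purely for illustration.

  /*
   * Hypothetical sketch of a greedy node-selection heuristic, loosely in
   * the spirit of what a placement algorithm could do; it is NOT the
   * actual libxl implementation.  All data and names are made up.
   */
  #include <stdio.h>

  struct node_info {
      int node_id;
      unsigned long free_mem_kb;   /* free memory on the node */
      int free_cpus;               /* pCPUs not fully committed */
  };

  /* Pick a node with enough free memory and CPUs for the VM; among the
   * candidates, prefer the one with the most free memory (ties broken by
   * free CPUs), so that VMs spread out rather than pile up. */
  static int pick_node(const struct node_info *nodes, int nr_nodes,
                       unsigned long vm_mem_kb, int vm_vcpus)
  {
      int best = -1;

      for (int i = 0; i < nr_nodes; i++) {
          if (nodes[i].free_mem_kb < vm_mem_kb || nodes[i].free_cpus < vm_vcpus)
              continue;
          if (best < 0 ||
              nodes[i].free_mem_kb > nodes[best].free_mem_kb ||
              (nodes[i].free_mem_kb == nodes[best].free_mem_kb &&
               nodes[i].free_cpus > nodes[best].free_cpus))
              best = i;
      }
      return best < 0 ? -1 : nodes[best].node_id;
  }

  int main(void)
  {
      struct node_info host[] = {
          { 0, 8 * 1024 * 1024, 4 },   /* node 0: 8 GiB free, 4 free pCPUs */
          { 1, 2 * 1024 * 1024, 8 },   /* node 1: 2 GiB free, 8 free pCPUs */
      };
      int node = pick_node(host, 2, 4 * 1024 * 1024, 2);  /* 4 GiB, 2 vCPUs */

      printf("place VM on node %d\n", node);
      return 0;
  }

The real algorithm obviously has to deal with more than this (e.g., candidates spanning multiple nodes, and the node distances mentioned in the work items below).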

Work items

  • Dario Faggioli (<dario.faggioli@citrix.com>): at VM creation time, choose a node or a set of nodes where the VM fits (memory- and vCPU-wise) and pin the VM's vCPUs to those nodes' pCPUs (done). Relevant changesets: f4b5a21f93ad (http://xenbits.xen.org/hg/staging/xen-unstable.hg/rev/f4b5a21f93ad), 4165d71479f9 (http://xenbits.xen.org/hg/staging/xen-unstable.hg/rev/4165d71479f9).
  • Dario Faggioli (<dario.faggioli@citrix.com>), WORKING: allow the user to control the placement algorithm by specifying some of the parameters it uses, instead of always determining them implicitly. Patch series: v1 (http://lists.xen.org/archives/html/xen-devel/2012-10/msg01243.html), v2 (http://lists.xen.org/archives/html/xen-devel/2012-10/msg01586.html); needs reposting.
  • Dario Faggioli (<dario.faggioli@citrix.com>), WORKING: enhance the placement algorithm to take latencies between nodes (node distances) into account. Patch series: v1 (http://lists.xen.org/archives/html/xen-devel/2012-10/msg01243.html), v2 (http://lists.xen.org/archives/html/xen-devel/2012-10/msg01586.html), but too much computational complexity was being introduced; needs rethinking.

NUMA aware scheduling

Description

Instead of statically pinning a VM's vCPUs to the pCPUs of the node(s) where its memory resides, just have them prefer to run there. If considered independently from NUMA, this feature can be seen as giving vCPUs a sort of soft affinity (i.e., a set of pCPUs where they prefer to run), in addition to their hard affinity (i.e., pinning).
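
As a rough illustration of how the two kinds of affinity interact when picking a pCPU for a vCPU, here is a toy sketch (not the actual credit scheduler code; the bitmask type and helpers are simplified stand-ins): try an idle pCPU within the soft affinity first, and fall back to the hard affinity if none is available.

  /*
   * Illustrative sketch of the soft-vs-hard affinity idea; NOT the actual
   * Xen scheduler code.  One bit per pCPU, up to 64 pCPUs.
   */
  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t cpumask_t;

  static int first_cpu(cpumask_t mask)
  {
      for (int cpu = 0; cpu < 64; cpu++)
          if (mask & (1ULL << cpu))
              return cpu;
      return -1;
  }

  /* Hard affinity is a strict constraint; soft affinity is a preference. */
  static int pick_cpu(cpumask_t hard, cpumask_t soft, cpumask_t idle)
  {
      cpumask_t preferred = hard & soft & idle;  /* ideal: idle, on the preferred node */
      cpumask_t allowed   = hard & idle;         /* fallback: any idle, allowed pCPU */

      if (preferred)
          return first_cpu(preferred);
      if (allowed)
          return first_cpu(allowed);
      return first_cpu(hard);                    /* last resort: queue on an allowed pCPU */
  }

  int main(void)
  {
      cpumask_t hard = 0xFF;   /* vCPU may run on pCPUs 0-7 */
      cpumask_t soft = 0x0F;   /* but prefers pCPUs 0-3 (its home node) */
      cpumask_t idle = 0x30;   /* only pCPUs 4 and 5 are idle right now */

      printf("run vCPU on pCPU %d\n", pick_cpu(hard, soft, idle));
      return 0;
  }

The real schedulers have to balance load, credits and so on on top of this, but the soft-then-hard fallback is the core of the idea.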

Status

For credit1, it's done (see below for patches and changesets). We are now concentrating on making node affinity (soft affinity) per-vCPU, rather than per-domain. That work, still concentrating on credit1, was almost ready to go into Xen 4.4 but, because of some last-minute issues, we decided it could wait for 4.5.

For credit2, some work has started, although it is quite complicated, as credit2 still lacks even pinning (hard affinity).

Work items

Virtual NUMA topology exposure to guests

Description

If a guest ends up on more than one node, make sure it knows it's running on a NUMA platform (smaller than the actual host, but still NUMA). This is very important for specific kinds of workloads, for instance HPC ones. In fact, if the guest OS (and application) has any NUMA support, exporting a virtual topology to the guest is the only way to make that support effective, and perhaps to fill, at least to some extent, the gap introduced by the need to distribute guests across more than one node. Under the name of vNUMA, this is one of the key and most advertised features of VMware vSphere 5 ("vNUMA: what it is and why it matters").
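
To make the idea a bit more concrete, this is a rough sketch of the kind of information a virtual NUMA topology would have to carry for a guest; the struct and field names here are hypothetical and do not correspond to any actual Xen interface.

  /*
   * Hypothetical sketch of a virtual NUMA topology description; the names
   * are made up and do not match any real Xen structure or hypercall.
   */
  #include <stdint.h>
  #include <stdio.h>

  #define VNUMA_MAX_NODES 8
  #define MAX_VCPUS       16

  struct vnuma_topology {
      unsigned int nr_vnodes;                         /* virtual nodes seen by the guest */
      uint64_t mem_mb[VNUMA_MAX_NODES];               /* memory of each virtual node */
      unsigned int vcpu_to_vnode[MAX_VCPUS];          /* home vnode of each vCPU */
      unsigned int vnode_to_pnode[VNUMA_MAX_NODES];   /* physical node backing each vnode */
      uint8_t distance[VNUMA_MAX_NODES][VNUMA_MAX_NODES]; /* SLIT-style node distances */
  };

  int main(void)
  {
      /* A 4-vCPU, 2-vnode guest, each vnode backed by a different physical node. */
      struct vnuma_topology t = {
          .nr_vnodes      = 2,
          .mem_mb         = { 2048, 2048 },
          .vcpu_to_vnode  = { 0, 0, 1, 1 },
          .vnode_to_pnode = { 0, 1 },
          .distance       = { { 10, 20 }, { 20, 10 } },
      };

      for (unsigned int v = 0; v < 4; v++)
          printf("vCPU %u -> vnode %u (pnode %u)\n",
                 v, t.vcpu_to_vnode[v], t.vnode_to_pnode[t.vcpu_to_vnode[v]]);
      return 0;
  }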

Status

First of all, it must be considered that this interacts with some of the other items on this page:

  • automatic placement for resuming/migrating domains: if they have a virtual topology, better not to change it;
  • memory migration: it can change the actual topology (should we update it on-line or disable memory migration?)

Dynamic memory migration

Description

This is about moving a VM's memory between different nodes of the same host, either upon user request or automatically, as a form of load balancing (similar to what the NUMA-aware scheduler does on the CPU side). Memory migration is one of the features considered desirable for Xen 4.3.

Status

Started, but not yet ready to leave some developer's private patch queue on their dev-box. The need to support both HVM and PV guests complicates things quite a bit. Xenbus, QEMU and a lot of inherent characteristics of the Xen architecture get in the way of doing it simply within the hypervisor (as happens for NUMA-aware scheduling). The current idea being pursued is for it to happen at a low toolstack level (perhaps with the hypervisor exporting statistics that will help toolstacks and users make proper decisions), sort of mimicking a suspend/resume cycle.
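
The following is only a conceptual sketch of the "small chunks" idea from the first work item below. The placeholder functions (vm_pause(), move_page_to_node(), ...) are hypothetical stand-ins for whatever toolstack/hypervisor mechanism ends up being used; they are not existing Xen hypercalls or libxl functions.

  /*
   * Toy sketch: migrate a VM's memory between nodes a small batch of pages
   * at a time, so that the VM only ever pauses briefly.  Everything here is
   * a made-up stand-in, not a real Xen interface.
   */
  #include <stdio.h>

  #define CHUNK_PAGES 256UL   /* 1 MiB chunks for 4 KiB pages */

  static void vm_pause(int domid)   { (void)domid; /* would pause the domain */ }
  static void vm_unpause(int domid) { (void)domid; /* would resume the domain */ }

  static int move_page_to_node(int domid, unsigned long gfn, int node)
  {
      /* would allocate a page on 'node', copy the contents, and remap 'gfn' */
      (void)domid; (void)gfn; (void)node;
      return 0;
  }

  /* Migrate [0, nr_pages) of a domain's memory, one short pause per chunk. */
  static int migrate_memory(int domid, unsigned long nr_pages, int target_node)
  {
      for (unsigned long gfn = 0; gfn < nr_pages; gfn += CHUNK_PAGES) {
          unsigned long end = gfn + CHUNK_PAGES < nr_pages ? gfn + CHUNK_PAGES
                                                           : nr_pages;
          vm_pause(domid);                  /* keep the downtime per chunk tiny */
          for (unsigned long g = gfn; g < end; g++)
              if (move_page_to_node(domid, g, target_node) < 0) {
                  vm_unpause(domid);
                  return -1;                /* abort, leave the rest in place */
              }
          vm_unpause(domid);                /* let the VM run before the next chunk */
      }
      return 0;
  }

  int main(void)
  {
      printf("migration %s\n", migrate_memory(1, 1024, 1) == 0 ? "ok" : "failed");
      return 0;
  }

Whether the pause/copy/remap steps would live in the hypervisor, in libxl, or in both is exactly the open question discussed above.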

Work items

  • Dario Faggioli (<dario.faggioli@citrix.com>), WORKING: enable moving memory from one node to another (on the same host) upon user request, doing it in small chunks so that (ideally) no downtime is perceived by the VM.
  • Dario Faggioli (<dario.faggioli@citrix.com>), PLANNED: track how much non-node-local memory is being accessed, and by whom. Report it from the hypervisor to the upper layers, so that the user or the toolstack can properly consume it.
  • Make sure everything, not only the VMs' pages, is allocated on the proper node (see alloc_{dom,xen}_heap_page()) and, if need be, properly moved to another one (e.g., the per-vCPU stacks and data segments).

IONUMA support

Description

If not only the memory, but also the I/O controllers are attached to specific nodes, you'll end up with devices which are better used by VMs running on those nodes (or, vice versa, VMs that are better run on the proper node if/when they want to use a specific device). Yang Zhang did some previous investigation of this situation, which, BTW, goes under the name of IONUMA, and the result is this XenSummit 2011 presentation: "I/O Scalability in Xen".

Status

It looks like The Right Thing™ can be made to happen by acting at either the Dom0 or the hypervisor level. The hypervisor level, which looks preferable for a number of reasons, would mean instrumenting XENMEM_exchange a little bit, but not before having verified where all the information we need to understand the IONUMA characteristics of the host (which device is where?) lives, and how to get at it. It is also important to investigate, on actual IONUMA-enabled hardware, how big of an issue neglecting it really is.
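
As one example of where such information can already be found, bare-metal Linux exposes the NUMA node of each PCI device via sysfs (/sys/bus/pci/devices/<BDF>/numa_node, with -1 meaning unknown); whether and where the same information is available to a Xen Dom0 or to the hypervisor is part of what needs investigating. The standalone program below, purely as an illustration, just reads that sysfs attribute.

  /*
   * Illustrative only: read the NUMA node of a PCI device from Linux sysfs.
   * This says nothing about how Xen itself would gather IONUMA data.
   */
  #include <stdio.h>

  /* Return the NUMA node of a PCI device (e.g. "0000:03:00.0"), or -1. */
  static int pci_device_node(const char *bdf)
  {
      char path[128];
      int node = -1;
      FILE *f;

      snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/numa_node", bdf);
      f = fopen(path, "r");
      if (!f)
          return -1;
      if (fscanf(f, "%d", &node) != 1)
          node = -1;
      fclose(f);
      return node;
  }

  int main(int argc, char *argv[])
  {
      const char *bdf = argc > 1 ? argv[1] : "0000:03:00.0";

      printf("device %s is on NUMA node %d\n", bdf, pci_device_node(bdf));
      return 0;
  }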

Work items

  • Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: collecting IONUMA information. Where is the information about which device is attached to which controller on which node? When does it become available?
  • Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: export IONUMA information to the user. As we currently do with things like xl info -n, which tells the user which pCPUs are part of which node, allow for something similar with respect to device-to-node mappings.
  • Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: performance assessment. What happens, from an I/O throughput perspective, if we give a VM memory, and run it, as far as possible from the node where its device is attached? How bad is that? What happens in the opposite (best) case?
  • Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: Dom0/Driver IONUMA. Devices should have their DMA buffers allocated on (or as close as possible to) the node to which their controllers are attached.
  • Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: guest IONUMA. When guest boots with some passed-through devices, we should try to allocate the memory from the node where the device resides and, especially for multi-node guests, let the guest itself know the IONUMA topology.

Discussion on whether or not this is something worthwhile to have and, if yes, how to deal with it, happened here.