Xen NUMA Roadmap

= About This Page =

This page acts as a collection point for all NUMA-related features. The idea is to use this space to summarize the status of each of them and to track their progress, with the hope of facilitating collaboration between the various community members as much as possible, and of limiting the risk of duplicating efforts.

For more general information about NUMA on Xen, check the dedicated page on this Wiki.

For details on the individual features (e.g., who is in charge of them, what their status is, what release they are targeting, etc.), see the specific feature pages (TODO: break the list below down into individual feature pages). Note that the complete list of open, '''not currently owned''', development tasks for the Xen Project is hosted on the [[Xen_Development_Projects|Xen Development Projects]] page. Some items there map to some of the features listed here (or to parts of them). If you are aiming at getting started with some Xen development, be sure to look there.

== Updating this page ==

This is a Wiki, so please go ahead and update/fix things (if you are not a Wiki editor, see [http://xenproject.org/component/content/article/100-misc/145-request-to-be-made-a-wiki-editor.html this page]). The maintainer of this page is [[User:Dariof|Dario]], so feel free to contact him about anything you think you need. Even better, especially if it is about the actual development of one of the features, start a conversation on the [http://lists.xen.org/mailman/listinfo/xen-devel xen-devel] mailing list (in that case, be sure you follow [[Asking_Xen_Devel_Questions|these guidelines]]).

== NUMA in Other Virtualization Platforms ==

Some information on how NUMA is handled in VMware virtualization solutions can be found here:

* [http://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.resourcemanagement.doc_41/using_numa_systems_with_esx_esxi/c_vmware_numa_optimization_algorithms.html "Using NUMA Systems with ESX/ESXi"]
* [http://labs.vmware.com/publications/performance-evaluation-of-hpc-benchmarks-on-vmwares-esxi-server "Performance Evaluation of HPC Benchmarks on VMware’s ESXi Server"]

And some on NUMA in Linux:

* [https://lwn.net/Articles/568870/ NUMA scheduling progress]
* [https://lwn.net/Articles/524977/ NUMA in a hurry]
* [http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=10fc05d0e551146ad6feb0ab8902d28a2d3c5624 Document automatic NUMA balancing sysctls]

== Legend ==

In the lists below, each ''Work item'' contains the name and e-mail address of the person working on it. In that context, '''WORKING''' means work has already started, and patches may already have been submitted or will be shortly.

'''PLANNED''' means the person is keen on doing the job, but no code has been written yet. If you want to help or take over, consider dropping that person a note.

If there is no name at all, the item has been identified as useful, but is still unclaimed.

Finally, a '''<del>barred</del>''' work item means it is done (and the name tells who did it).

= NUMA Features =

== Automatic VM placement ==

=== Description ===

This is about picking a NUMA node (or a set of NUMA nodes) on which a newly created VM will best execute, in order to maximize both its own and the overall system performance.

Check out the [[Xen 4.2 Automatic NUMA Placement|Automatic NUMA Placement]] page.

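The placement logic lives in the toolstack (xl/libxl). As a rough illustration of the kind of "best fit" trade-off such an algorithm has to make (the VM's memory must fit on a node, and among the nodes where it fits the least loaded one is preferred), here is a minimal sketch. It is '''not''' the actual libxl code: all names and data structures are made up for the example, and a real implementation also considers sets of nodes and node distances.

<syntaxhighlight lang="c">
/*
 * Illustrative sketch only: a minimal "best fit" node-selection
 * heuristic, NOT the actual libxl placement algorithm.
 */
#include <stdint.h>

struct node_info {
    uint64_t free_memkb;    /* free memory on the node                     */
    unsigned int nr_vcpus;  /* vCPUs of VMs already placed on this node    */
    unsigned int nr_pcpus;  /* physical CPUs belonging to the node         */
};

/*
 * Return the index of the node that can host the new VM and is the
 * least loaded (fewest vCPUs per pCPU), or -1 if no single node fits.
 */
int pick_node(const struct node_info *nodes, int nr_nodes,
              uint64_t vm_memkb, unsigned int vm_vcpus)
{
    int best = -1;
    double best_load = 0.0;

    for (int i = 0; i < nr_nodes; i++) {
        if (nodes[i].free_memkb < vm_memkb || nodes[i].nr_pcpus == 0)
            continue; /* the VM's memory would not fit on this node */

        double load = (double)(nodes[i].nr_vcpus + vm_vcpus) /
                      nodes[i].nr_pcpus;
        if (best == -1 || load < best_load) {
            best = i;
            best_load = load;
        }
    }
    return best;
}
</syntaxhighlight>
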
=== Status ===

Basics are there. The old [[XEND]] toolstack had a placement logic, which initially did not make it into [[XL]]. It is now there too, starting from Xen 4.2. That being said, there is still a lot of room for improving the placement algorithm and making it more advanced and powerful.

=== Work items ===

* [[User:Dariof|Dario]] (<[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>): <del>at VM creation time, choose a node or a set of nodes where the VM fits (memory- and vCPU-wise) and pin the VM's vCPUs to the nodes' pCPUs</del>. Patch series: v1, [http://lists.xen.org/archives/html/xen-devel/2012-06/msg00978.html v2], [http://lists.xen.org/archives/html/xen-devel/2012-07/msg00231.html v3], [http://lists.xen.org/archives/html/xen-devel/2012-07/msg00475.html v4], [http://lists.xen.org/archives/html/xen-devel/2012-07/msg00682.html v5], [http://lists.xen.org/archives/html/xen-devel/2012-07/msg01017.html v6], [http://lists.xen.org/archives/html/xen-devel/2012-07/msg01256.html v7], [http://lists.xen.org/archives/html/xen-devel/2012-07/msg01493.html v8], [http://lists.xen.org/archives/html/xen-devel/2012-07/msg01514.html v9]. Relevant changesets: [http://xenbits.xen.org/hg/staging/xen-unstable.hg/rev/f4b5a21f93ad f4b5a21f93ad], [http://xenbits.xen.org/hg/staging/xen-unstable.hg/rev/4165d71479f9 4165d71479f9].
* [[User:Dariof|Dario]] (<[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>), WORKING: ''allow the user to control the placement algorithm by specifying some of the parameters it uses, instead of always determining them implicitly''. Patch series: [http://lists.xen.org/archives/html/xen-devel/2012-10/msg01243.html v1], [http://lists.xen.org/archives/html/xen-devel/2012-10/msg01586.html v2]; needs reposting.
* [[User:Dariof|Dario]] (<[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>), WORKING: ''enhance the placement algorithm to take latencies between nodes (node distances) into account''. Patch series: [http://lists.xen.org/archives/html/xen-devel/2012-10/msg01243.html v1], [http://lists.xen.org/archives/html/xen-devel/2012-10/msg01586.html v2]; however, too much computational complexity was being introduced, so this needs rethinking.
* [[User:Dariof|Dario]] (<[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>), PLANNED: ''provide aids to enable easy verification and testing of the placement (stressing it by generating synthetic placement requests)''. Discussion: [http://lists.xen.org/archives/html/xen-devel/2012-07/msg00904.html 1].
* [[User:Dariof|Dario]] (<[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>), PLANNED: ''enhance the placement algorithm to take a more sophisticated measure of node load into account''.
* ''(Semi-)Automatic placement for Dom0''. Discussion: [http://lists.xen.org/archives/html/xen-devel/2012-08/msg00345.html 1].

== NUMA aware scheduling ==

=== Description ===

Instead of statically pinning a VM's vCPUs to its nodes' pCPUs, just have them '''prefer''' to run on the nodes where their memory resides. If considered independently from NUMA, this feature can be seen as giving vCPUs a sort of ''soft affinity'' (i.e., a set of pCPUs where they prefer to run), in addition to their ''hard affinity'' (i.e., pinning).

Check out the [[Xen_4.3_NUMA_Aware_Scheduling|NUMA Aware Scheduling]] page.

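As a rough illustration of what ''soft'' versus ''hard'' affinity means for a scheduler, here is a minimal sketch of how a pCPU could be picked for a vCPU: first try the preferred (soft) pCPUs, then fall back to the merely allowed (hard) ones. This is '''not''' actual Xen scheduler code; the fixed-size mask representation and all the names are simplifications made up for the example (the real code uses cpumasks of arbitrary size).

<syntaxhighlight lang="c">
/*
 * Illustrative sketch only: combining hard and soft affinity when
 * choosing where to run a vCPU. NOT actual Xen scheduler code.
 */
#include <stdint.h>

typedef uint64_t cpumask_t; /* assumption: at most 64 pCPUs */

struct vcpu_affinity {
    cpumask_t hard; /* pCPUs the vCPU is allowed to run on (pinning)  */
    cpumask_t soft; /* pCPUs the vCPU prefers (e.g., its NUMA node)   */
};

/*
 * Pick a pCPU for the vCPU among the currently idle ones:
 *  1. try idle pCPUs that are in both the hard and the soft affinity;
 *  2. if there are none, fall back to idle pCPUs in the hard affinity.
 * Returns the pCPU index, or -1 if no allowed pCPU is idle.
 */
int pick_pcpu(const struct vcpu_affinity *aff, cpumask_t idle_pcpus)
{
    cpumask_t preferred  = aff->hard & aff->soft & idle_pcpus;
    cpumask_t allowed    = aff->hard & idle_pcpus;
    cpumask_t candidates = preferred ? preferred : allowed;

    if (!candidates)
        return -1;

    /* Pick the lowest-numbered candidate pCPU. */
    return __builtin_ctzll(candidates);
}
</syntaxhighlight>
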
=== Status ===

For credit1, it is done (see below for patches and changesets). We are now concentrating on making node affinity (soft affinity) per-vCPU, instead of per-domain. That work, still concentrating on credit1, was almost ready to go into Xen 4.4, but, because of some last minute issues, we decided it could wait for 4.5. Patches have been posted and reviewed; reposting took quite a while because of the extensive benchmarking performed between the two releases of the patch series, along with the discovery of, and the need to fix, some [http://lists.xen.org/archives/html/xen-devel/2012-10/msg01732.html scheduling anomalies] in the Credit scheduler.

For credit2, some work has started, although it is quite complicated, as credit2 lacks pinning (hard affinity) too.

=== Work items ===

* [[User:Dariof|Dario]] (<[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>): <del>NUMA aware scheduling for credit</del>. Some related discussion (and patches): [http://lists.xen.org/archives/html/xen-devel/2012-10/msg01732.html 1]. Patch series: [http://lists.xen.org/archives/html/xen-devel/2012-10/msg00569.html v1], [http://lists.xen.org/archives/html/xen-devel/2012-12/msg01555.html v2], [http://lists.xen.org/archives/html/xen-devel/2013-02/msg00009.html v3], [http://lists.xen.org/archives/html/xen-devel/2013-03/msg01217.html v4], [http://www.gossamer-threads.com/lists/xen/devel/277277?do=post_view_threaded#277277 v5], [http://lists.xen.org/archives/html/xen-devel/2013-04/msg01568.html v6]. Relevant changesets: [http://xenbits.xen.org/hg/staging/xen-unstable.hg/rev/8bf04f2ed8de 8bf04f2ed8de], [http://xenbits.xen.org/hg/staging/xen-unstable.hg/rev/6a8c84c8e25f 6a8c84c8e25f].
* [[User:Dariof|Dario]] (<[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>), WORKING: ''per-vCPU soft affinity in credit''. Patch series: [http://lists.xen.org/archives/html/xen-devel/2013-10/msg00164.html v1], [http://lists.xenproject.org/archives/html/xen-devel/2013-11/msg00468.html v1-resend], [http://lists.xenproject.org/archives/html/xen-devel/2013-11/msg01953.html v2], [http://lists.xenproject.org/archives/html/xen-devel/2013-11/msg02513.html v3], [http://lists.xenproject.org/archives/html/xen-devel/2013-11/msg03364.html v4], [http://lists.xen.org/archives/html/xen-devel/2013-12/msg00257.html v5], [http://lists.xen.org/archives/html/xen-devel/2013-12/msg01155.html v5-resend]. v6 is in [http://xenbits.xen.org/gitweb/?p=people/dariof/xen.git;a=shortlog;h=refs/heads/numa/per-vcpu-affinity-v6 this git branch], waiting to be rebased and reposted as soon as the Xen 4.5 development cycle opens.
* Justin ([mailto:jtweaver@hawaii.edu jtweaver@hawaii.edu]), WORKING: ''hard and soft affinity for credit2''. Discussion: [http://lists.xen.org/archives/html/xen-devel/2013-12/msg01391.html 1]. Patch series: [http://lists.xen.org/archives/html/xen-devel/2013-12/msg02190.html v1], [http://www.gossamer-threads.com/lists/xen/devel/311062 v2]. While working on this, a bug in how credit2 handles multiple runqueues was found. Here are the attempts to fix it, as preliminary work: [http://www.gossamer-threads.com/lists/xen/devel/316253?do=post_view_threaded v1], [http://comments.gmane.org/gmane.comp.emulators.xen.devel/187692 v2], [http://www.gossamer-threads.com/lists/xen/devel/316598?page=last v3].
* NUMA-awareness in credit2.
* NUMA-awareness in SEDF.

== Virtual NUMA (support for NUMA guests) ==

=== Description ===

If a guest ends up spanning more than one node, make sure it knows it is running on a NUMA platform (smaller than the actual host, but still NUMA). This is very important for some specific kinds of workloads, for instance HPC ones. In fact, if the guest OS (and application) has any NUMA support, exporting a virtual topology to the guest is the only way to make that support effective, and perhaps to fill, at least to some extent, the gap introduced by the need to distribute the guest over more than one node. Under the name of vNUMA, this is one of the key and most advertised features of VMware vSphere 5 ("vNUMA: what it is and why it matters").

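To give an idea of what "exporting a virtual topology" means in practice, here is a sketch of the kind of information a vNUMA description has to carry: number of virtual nodes, memory and vCPUs per node, the mapping of virtual nodes onto physical nodes, and inter-node distances. The structures below are made up for illustration; they are '''not''' the actual Xen/libxl vNUMA interface being upstreamed.

<syntaxhighlight lang="c">
/*
 * Illustrative sketch only: information a virtual NUMA topology has
 * to convey. NOT the actual Xen/libxl vNUMA interface.
 */
#include <stdint.h>

#define VNUMA_MAX_NODES 8
#define VNUMA_MAX_VCPUS 128

struct vnuma_topology {
    unsigned int nr_vnodes;                       /* number of virtual nodes   */
    uint64_t memkb[VNUMA_MAX_NODES];              /* memory per virtual node   */
    unsigned int vcpu_to_vnode[VNUMA_MAX_VCPUS];  /* vnode each vCPU belongs to */
    unsigned int vnode_to_pnode[VNUMA_MAX_NODES]; /* backing physical node     */
    /* distances[i][j]: relative access cost from vnode i to vnode j,
     * SLIT-style (10 = local access). */
    uint8_t distances[VNUMA_MAX_NODES][VNUMA_MAX_NODES];
};

/* Fill in a simple, symmetric two-node topology for a guest with
 * 'memkb' of memory and 'vcpus' vCPUs, backed by pnode0 and pnode1. */
void vnuma_two_nodes(struct vnuma_topology *t, uint64_t memkb,
                     unsigned int vcpus, unsigned int pnode0,
                     unsigned int pnode1)
{
    t->nr_vnodes = 2;
    t->memkb[0] = memkb / 2;
    t->memkb[1] = memkb - t->memkb[0];
    t->vnode_to_pnode[0] = pnode0;
    t->vnode_to_pnode[1] = pnode1;
    for (unsigned int v = 0; v < vcpus && v < VNUMA_MAX_VCPUS; v++)
        t->vcpu_to_vnode[v] = v % 2;            /* spread vCPUs evenly */
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            t->distances[i][j] = (i == j) ? 10 : 20;
}
</syntaxhighlight>
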
=== Status ===

For PV guests, most of the work is done (by [http://hellokernel.blogspot.it/ Elena], while participating in [https://wiki.gnome.org/OutreachProgramForWomen/2013/JuneSeptember#Linux_Kernel Round 6 of OPW]), although it still needs to be properly upstreamed. Various patch series have been submitted over that period; here are the most relevant ones: first RFC [http://lists.xen.org/archives/html/xen-devel/2013-08/msg02533.html for Xen], [http://lists.xen.org/archives/html/xen-devel/2013-08/msg02555.html for Linux]; second RFC [http://lists.xen.org/archives/html/xen-devel/2013-09/msg01337.html for Xen], [http://lists.xen.org/archives/html/xen-devel/2013-09/msg01721.html for Linux]; actual v1 [http://lists.xen.org/archives/html/xen-devel/2013-10/msg01376.html for Xen]; v2 [http://lists.xenproject.org/archives/html/xen-devel/2013-11/msg01999.html for Xen], [http://lists.xenproject.org/archives/html/xen-devel/2013-11/msg02565.html for Linux]; v3 [http://osdir.com/ml/general/2013-11/msg30648.html for Xen]; v4 [http://lists.xen.org/archives/html/xen-devel/2013-12/msg00625.html for Xen].

Having vNUMA in both Dom0 and DomU will enable some potentially relevant optimizations, e.g., with respect to the split driver model Xen supports: making sure to run the backend and the frontend on the same NUMA node, and/or to run the backend on the same node where the I/O device is attached (see also IONUMA below). Some thoughts about this [http://www.gossamer-threads.com/lists/xen/devel/283037?do=post_view_threaded here].

=== Work items ===

* Elena (<[mailto:ufimtseva@gmail.com ufimtseva@gmail.com]>), WORKING: ''upstream PV vNUMA in both Xen and Linux''.
* Matt (<[mailto:msw@linux.com msw@linux.com]>), ''send in an RFC''.
* automatic placement for resuming/migrating domains: if they have a virtual topology, better not to change it;
* memory migration: it can change the actual topology (should we update it on-line, or disable memory migration?)

== Dynamic memory migration ==

=== Description ===

This is about moving a domain's memory between different nodes of one host, either upon user request or automatically, as a form of load balancing (similar to what happens for the CPUs with the NUMA-aware scheduler). Some development on this feature happened during the Xen 4.3 window, but then stalled. It is expected to resume during the 4.5 development window.

=== Status ===

Started, but not yet ready to leave some developer's private patch queue on their dev-box. The need to support both HVM and PV guests complicates things quite a bit. Xenbus, QEMU and a lot of inherent characteristics of the Xen architecture get in the way of doing it simply within the hypervisor (as happens for NUMA aware scheduling). The current idea being pursued is for it to happen at the low toolstack level (perhaps with the hypervisor exporting statistics that help toolstacks and users make proper decisions), sort of mimicking a suspend-resume cycle.

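As a purely conceptual illustration of what moving a domain's memory across nodes involves (and of why the approach described above resembles a suspend/resume cycle), here is a sketch of migrating a single guest page. All the functions used are hypothetical placeholders, '''not''' Xen code.

<syntaxhighlight lang="c">
/*
 * Illustrative sketch only: the conceptual steps involved in moving a
 * single guest page to a different NUMA node. NOT Xen code; every
 * function below is a hypothetical placeholder.
 */
typedef unsigned long gfn_t;   /* guest frame number   */
typedef unsigned long mfn_t;   /* machine frame number */

/* Hypothetical helpers a real implementation would need. */
extern mfn_t alloc_page_on_node(int node);
extern mfn_t lookup_p2m(int domid, gfn_t gfn);
extern void  copy_page(mfn_t dst, mfn_t src);
extern void  update_p2m(int domid, gfn_t gfn, mfn_t new_mfn);
extern void  free_page(mfn_t mfn);

/*
 * Move one guest page to 'target_node'. The guest must be paused (or
 * the page otherwise kept stable) while this happens, which is one
 * reason a suspend/resume-like flow is being considered.
 */
int migrate_guest_page(int domid, gfn_t gfn, int target_node)
{
    mfn_t old_mfn = lookup_p2m(domid, gfn);
    mfn_t new_mfn = alloc_page_on_node(target_node);

    if (!new_mfn)
        return -1;                      /* no free memory on target node */

    copy_page(new_mfn, old_mfn);        /* copy the page contents        */
    update_p2m(domid, gfn, new_mfn);    /* remap gfn -> new mfn          */
    free_page(old_mfn);                 /* release the old frame         */
    return 0;
}
</syntaxhighlight>
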
=== Work items ===

* [[User:Dariof|Dario]] (<[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>), WORKING: ''enable moving memory from one node to another (on the same host)''.
* [[User:Dariof|Dario]] (<[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>), PLANNED: ''track how much non node-local memory is being accessed, and by whom''.
* Make sure that everything, not only the VMs' pages, is allocated on the proper node (see <code>alloc_{dom,xen}_heap_page()</code>) and, where applicable, properly moved to another one (e.g., the per-vCPU stacks and data segments).

== IONUMA support ==

=== Description ===

If not only memory but also I/O controllers are attached to specific nodes, you end up with devices which are better used by VMs running on those nodes (or, vice versa, with VMs that are better run on a specific node if/when they want to use a specific device).

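As an example of where this information can come from, a Linux Dom0 already exposes the NUMA node a PCI device is attached to through the standard sysfs attribute <code>/sys/bus/pci/devices/''BDF''/numa_node</code> (which reads -1 if the platform did not report a node). The small program below is only an illustration of that, not part of any Xen tool; the device address used in <code>main()</code> is made up for the example.

<syntaxhighlight lang="c">
/*
 * Minimal illustration: read the NUMA node a PCI device is attached
 * to from the Linux sysfs attribute numa_node. Not part of any Xen tool.
 */
#include <stdio.h>

int pci_device_node(const char *bdf /* e.g. "0000:3b:00.0" */)
{
    char path[256];
    int node = -1;

    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/numa_node", bdf);

    FILE *f = fopen(path, "r");
    if (!f)
        return -1;                 /* device not present (or no sysfs) */
    if (fscanf(f, "%d", &node) != 1)
        node = -1;                 /* attribute unreadable             */
    fclose(f);
    return node;                   /* -1 means "unknown"               */
}

int main(void)
{
    /* Hypothetical device address, just for the example. */
    printf("node: %d\n", pci_device_node("0000:3b:00.0"));
    return 0;
}
</syntaxhighlight>
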
=== Status ===

Yang Zhang did some previous investigation on this topic (which, BTW, goes under the name of IONUMA); the result is the presentation [http://xen.org/files/xensummit_seoul11/nov2/5_XSAsia11_KTian_IO_Scalability_in_Xen.pdf I/O Scalability in Xen] from Xen Summit 2011.

It looks like ''The Right Thing™'' can be made to happen by acting at both the Dom0 and the hypervisor level. The hypervisor level, which looks preferable for a number of reasons, would mean instrumenting <code>XENMEM_exchange</code> a little bit, but not before having verified where all the information needed to understand the IONUMA characteristics of the host (which device is where?) lives, and how to get at it. It is also important to investigate, on actual IONUMA-enabled hardware, how big of an issue it is to neglect it.

Apart from that, only some discussion has happened on xen-devel: [http://lists.xen.org/archives/html/xen-devel/2012-08/msg00161.html 1], [http://lists.xen.org/archives/html/xen-devel/2012-06/msg01683.html 2], [http://lists.xen.org/archives/html/xen-devel/2013-05/msg01984.html 3].

Some more discussion, related to this, happened more recently in these two threads: [http://lists.xen.org/archives/html/xen-devel/2014-02/msg01154.html 1] (about PCIe proximity domains) and [http://lists.xen.org/archives/html/xen-devel/2014-01/msg00098.html 2] (about introducing some [http://www.open-mpi.org/projects/hwloc/ hwloc] support for a Xen host).

=== Work items ===

* Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: ''collecting IONUMA information''. Where is the information about which device is attached to which controller on which node, and when does it become available?
* ''export IONUMA information to the user'': as we currently do for the NUMA topology with <code>xl info -n</code>, allow for something similar for device-to-node mappings;
* Neo Jia (<cjia_AT_nvidia_DOT_com>), PLANNED: ''performance assessment''. What happens, from an I/O throughput perspective, if we give a VM its memory, and run it, as far as possible from the node where the device is attached? How bad is that? And what happens in the opposite (best) case?
* ''IONUMA and automatic placement'': as said in the description, IONUMA information (once available) should bias the automatic placement decisions;
* ''Dom0/Driver IONUMA'': devices should have the DMA buffers used by their backends allocated on (or as close as possible to) the node where their I/O controller is attached;
* ''guest IONUMA'': devices passed through to guests should have their DMA buffers allocated on (or as close as possible to) the node where their I/O controller is attached and, especially for multi-node guests, the guest itself should be made aware of the IONUMA topology.

[[Category:Performance]]
[[Category:Transient]]
[[Category:Roadmap]]
[[Category:Xen 4.2]]
[[Category:Xen 4.3]]
[[Category:Xen 4.4]]
[[Category:Resource Management]]