Xen NUMA Roadmap
About This Page
This page acts as a collection point for all NUMA related features. The idea is to use this space to summarize the status of each of them and track their progress, in the hope of facilitating collaboration between the various community members as much as possible, and of limiting the risk of duplicated effort.
For more general information about NUMA on Xen, check this page.
Updating this page
This is a Wiki, so please go ahead and update/fix things (if you are not a Wiki editor, see this). The maintainer of this page is Dario, so also feel free to contact him about anything you think you need. Even better, especially if it is about the actual development of one of the features, start a conversation on the xen-devel mailing list (but in that case, be sure you follow this).
Legend
In the list below, each Work item contains the name and e-mail of the person working on it. In that context, WORKING means work has already started, and patches may already have been submitted or will be shortly.
PLANNED means the person is keen on doing the job, but no code has been written yet. If you want to help or take over, consider dropping that person a note.
If there is no name at all, the item is something identified as useful, but still unclaimed.
Finally, a barred work item means it is done (and the name tells who did it).
NUMA Features
Automatic VM placement
Description
This is about picking a NUMA node (or a set of NUMA nodes) where a newly created VM would best execute, in order to maximize its own and the overall system performance.
Check out the Automatic NUMA Placement page.
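To make this a bit more concrete, here is a minimal, self-contained sketch of a greedy placement heuristic: among the nodes with enough free memory and enough pCPUs for the VM, pick the one with the most free memory. It is only an illustration of the concept; all names and types are invented for the example, and the real algorithm (implemented in libxl) weighs more factors and also considers candidates made of several nodes.

```c
/* Illustrative sketch only: a naive greedy NUMA placement heuristic.
 * All types and names here are invented for the example; the real
 * logic lives in libxl and is considerably more sophisticated. */
#include <stdio.h>
#include <stdint.h>

struct node_info {
    int      id;
    uint64_t free_mem_kb;   /* free memory on the node */
    int      free_pcpus;    /* pCPUs available on the node */
};

/* Return the id of the best node for a VM needing vm_mem_kb of memory and
 * vm_vcpus vCPUs, or -1 if no single node can host it. */
static int pick_node(const struct node_info *nodes, int nr_nodes,
                     uint64_t vm_mem_kb, int vm_vcpus)
{
    int best = -1;

    for (int i = 0; i < nr_nodes; i++) {
        if (nodes[i].free_mem_kb < vm_mem_kb || nodes[i].free_pcpus < vm_vcpus)
            continue;   /* the VM does not fit on this node */
        if (best < 0 || nodes[i].free_mem_kb > nodes[best].free_mem_kb)
            best = i;   /* prefer the candidate with most free memory */
    }
    return best < 0 ? -1 : nodes[best].id;
}

int main(void)
{
    struct node_info nodes[] = {
        { .id = 0, .free_mem_kb = 8u << 20,  .free_pcpus = 4 },
        { .id = 1, .free_mem_kb = 16u << 20, .free_pcpus = 8 },
    };

    /* A 4 GiB, 4-vCPU guest fits on both nodes; node 1 wins here,
     * since it has more free memory. */
    printf("chosen node: %d\n", pick_node(nodes, 2, 4u << 20, 4));
    return 0;
}
```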
Status
Basics are there. The old XEND toolstack had placement logic, which initially did not go into XL. It is now there too, starting from Xen 4.2. That being said, there is still a lot of room for improvement, and for making the placement algorithm more advanced and powerful.
Work items
- Dario (<dario.faggioli@citrix.com>): at VM creation time, choose a node or a set of nodes where the VM fits (memory- and VCPU-wise) and pin the VM's VCPUs to the node's PCPUs. Patch series: v1, v2, v3, v4, v5, v6, v7, v8, v9. Relevant changesets: f4b5a21f93ad, 4165d71479f9.
- Dario (<dario.faggioli@citrix.com>), WORKING: allow the user to control the placement algorithm by specifying some of the parameters it uses, instead of always determining them implicitly. Patch series: v1, v2; needs reposting.
- Dario (<dario.faggioli@citrix.com>), WORKING: enhance the placement algorithm to take latencies between nodes (node distances) into account. Patch series: v1, v2, but too much computational complexity was being introduced; needs rethinking.
- Dario (<dario.faggioli@citrix.com>), PLANNED: provide aids to enable easy verification and testing of the placement (stressing it by generating synthetic placement requests). Discussion: 1.
- Dario (<dario.faggioli@citrix.com>), PLANNED: enhance the placement algorithm to take some more sophisticated measure of node load into account.
- (Semi-)Automatic placement for Dom0. Discussion: 1.
NUMA aware scheduling
Description
Instead of statically pinning vCPUs to the pCPUs of the node(s) holding their memory, just have them prefer running there. If considered independently from NUMA, this feature can be seen as giving vCPUs a sort of soft affinity (i.e., a set of pCPUs where they will prefer to run), in addition to their hard affinity (i.e., pinning).
Check out the NUMA Aware Scheduling page.
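As an illustration of the soft/hard affinity idea (this is not the credit scheduler code; all names and types below are made up for the example), a scheduler picking a pCPU for a vCPU would first look at the intersection of the vCPU's hard and soft affinity masks, and only fall back to the hard affinity alone when nothing in that intersection is available:

```c
/* Illustrative sketch of the soft/hard affinity idea; this is not the
 * Xen credit scheduler, and all names are invented for the example. */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t cpumask_t;   /* one bit per pCPU, up to 64 pCPUs */

/* Pick a pCPU for a vCPU: prefer idle pCPUs in (hard & soft) affinity,
 * then fall back to any idle pCPU in the hard affinity.  Returns -1 if
 * nothing in the hard affinity is idle. */
static int pick_pcpu(cpumask_t hard, cpumask_t soft, cpumask_t idle)
{
    cpumask_t preferred = hard & soft & idle;   /* soft-affinity step */
    cpumask_t fallback  = hard & idle;          /* pinning only */
    cpumask_t pick = preferred ? preferred : fallback;

    if (!pick)
        return -1;
    return __builtin_ctzll(pick);   /* lowest set bit = chosen pCPU */
}

int main(void)
{
    cpumask_t hard = 0xFF;   /* the vCPU may run on pCPUs 0-7... */
    cpumask_t soft = 0x0F;   /* ...but prefers pCPUs 0-3 (its NUMA node) */

    /* pCPUs 2 and 6 are idle: soft affinity wins and pCPU 2 is chosen. */
    printf("picked pCPU %d\n", pick_pcpu(hard, soft, (1 << 2) | (1 << 6)));
    /* Only pCPU 6 is idle: fall back to the hard affinity, pCPU 6. */
    printf("picked pCPU %d\n", pick_pcpu(hard, soft, 1 << 6));
    return 0;
}
```

With such a scheme, pinning remains a hard guarantee, while NUMA node affinity merely biases the choice.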
Status
For credit1, it's done (see below for patches and changesets). We are now concentrating on making node affinity (soft affinity) per-vCPU, rather than per-domain. That work, still focused on credit1, was almost ready to go into Xen 4.4 but, because of some last-minute issues, we decided it could wait for 4.5.
For credit2, some work has started, although it is quite complicated, as credit2 lacks pinning (hard affinity) too.
Work items
- Dario (<dario.faggioli@citrix.com>), NUMA aware scheduling for credit. Some related discussion (and patches): 1. Patch series: v1, v2, v3, v4, v5, v6. Relevant changesets: 8bf04f2ed8de, 6a8c84c8e25f.
- Dario (<dario.faggioli@citrix.com>), WORKING: per-vCPU soft affinity in credit. Patch series: v1, v1-resend, v2, v3, v4, v5, v5-resend. v6 is in this git branch, waiting to be rebased and reposted as soon as the Xen 4.5 development cycle opens.
- Justin (jtweaver@hawaii.edu), WORKING: Hard and soft affinity for credit2. Discussion: 1. Patch series: v1, v2. While working on this, a bug in how credit2 handles multiple runqueues was found. Here are the attempts to fix that, as preliminary work: v1, v2, v3
Virtual NUMA (support for NUMA guests)
Description
If a guest ends up on more than one node, make sure it knows it is running on a NUMA platform (smaller than the actual host, but still NUMA). This is very important for some specific kinds of workloads, for instance HPC ones. In fact, if the guest OS (and application) has any NUMA support, exporting a virtual topology to the guest is the only way to make that effective, and perhaps to fill, at least to some extent, the gap introduced by the need to distribute the guest across more than one node. Under the name of vNUMA, this is one of the key and most advertised features of VMware vSphere 5 ("vNUMA: what it is and why it matters").
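To give a flavour of what exporting a virtual topology involves, the sketch below lists the kind of information a vNUMA description has to carry: memory per virtual node, a vCPU-to-node map, a SLIT-like distance matrix and, on the host side, which physical node backs each virtual one. This is only an illustrative data structure, invented for the example; it is not the interface introduced by the patch series referenced in the status below.

```c
/* Illustrative sketch of the information a virtual NUMA (vNUMA) topology
 * has to convey.  This is NOT the interface implemented by the patch
 * series referenced on this page; names and layout are invented. */
#include <stdint.h>
#include <stdio.h>

#define VNUMA_MAX_NODES 8
#define VNUMA_MAX_VCPUS 128

struct vnuma_topology {
    unsigned int nr_vnodes;                        /* virtual nodes seen by the guest */
    uint64_t     vnode_mem_mb[VNUMA_MAX_NODES];    /* memory per virtual node         */
    unsigned int vcpu_to_vnode[VNUMA_MAX_VCPUS];   /* which vnode each vCPU sits on   */
    uint8_t      vdistance[VNUMA_MAX_NODES][VNUMA_MAX_NODES]; /* SLIT-like distances  */
    unsigned int vnode_to_pnode[VNUMA_MAX_NODES];  /* backing physical node (host side) */
};

/* Example: a 2-vnode, 4-vCPU guest with 2 GB per virtual node and the
 * usual ACPI SLIT convention of 10 for local, 20 for remote distance. */
static const struct vnuma_topology example = {
    .nr_vnodes      = 2,
    .vnode_mem_mb   = { 2048, 2048 },
    .vcpu_to_vnode  = { 0, 0, 1, 1 },
    .vdistance      = { { 10, 20 }, { 20, 10 } },
    .vnode_to_pnode = { 0, 1 },
};

int main(void)
{
    printf("guest sees %u virtual NUMA nodes\n", example.nr_vnodes);
    return 0;
}
```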
Status
For PV guests, most of the work is done (by Elena, while participating in Round 6 of OPW), although it still needs to be properly upstreamed. Various patch series have been submitted over that period; here are the most relevant ones: first RFC for Xen, for Linux; second RFC for Xen, for Linux; actual v1 for Xen; v2 for Xen, for Linux; v3 for Xen; v4 for Xen.
Having vNUMA in both Dom0 and DomU will enable some potentially relevant optimizations, e.g., with respect to the split driver model Xen supports: making sure to run the backend and the frontend on the same NUMA node, and/or to run the backend on the same node where the IO device is attached (see also IONUMA below). Some thoughts about this here.
Work Items
- Elena (<ufimtseva@gmail.com>), WORKING: upstream PV vNUMA in both Xen and Linux.
- Matt (<msw@linux.com>), send in an RFC.
- automatic placement for resuming/migrating domains: if they have a virtual topology, better not to change it;
- memory migration: it can change the actual topology (should we update it on-line or disable memory migration?)
Dynamic memory migration
Description
This is about moving a guest's memory between different nodes of one host, either upon user request or automatically, as a form of load balancing (similar to what happens on the CPU side with the NUMA-aware scheduler). Some development for this feature happened during the Xen 4.3 window, but then got stalled. It is expected to resume during the 4.5 development window.
Status
Started, but not yet ready to leave some developers' private patch queues on their dev-boxes. The need to support both HVM and PV guests complicates things quite a bit. Xenbus, QEMU and a lot of inherent characteristics of the Xen architecture get in the way of having it done simply within the hypervisor (as happens for NUMA aware scheduling). The current idea being pursued is for it to happen at a low level in the toolstack (perhaps with the hypervisor exporting statistics that will help toolstacks and users make proper decisions), sort of mimicking a suspend-resume cycle.
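Purely as an illustration of one way such a toolstack-level mechanism could look (this is not the code sitting in the patch queues mentioned above, and every function below is a made-up stub rather than a real Xen or libxl call), memory could be moved in small, bounded batches, pausing the domain only around each batch so that the perceived downtime stays small:

```c
/* Illustrative sketch of migrating a domain's memory between NUMA nodes
 * in small batches.  Everything here (types, move_page_to_node, the
 * pause/unpause stubs) is hypothetical and only stands in for whatever
 * hypervisor/toolstack primitives a real implementation would use. */
#include <stdio.h>

#define BATCH_PAGES 256   /* pages moved per pause window (example value) */

/* Hypothetical stubs standing in for real primitives. */
static void pause_domain(int domid)   { (void)domid; }
static void unpause_domain(int domid) { (void)domid; }
static void move_page_to_node(int domid, unsigned long pfn, int node)
{
    (void)domid; (void)pfn; (void)node;
}

/* Move pages [0, nr_pages) of domid's memory to target_node, at most
 * BATCH_PAGES per pause window, so the domain only ever stops briefly. */
static void migrate_memory(int domid, unsigned long nr_pages, int target_node)
{
    for (unsigned long pfn = 0; pfn < nr_pages; ) {
        pause_domain(domid);
        for (int i = 0; i < BATCH_PAGES && pfn < nr_pages; i++, pfn++)
            move_page_to_node(domid, pfn, target_node);
        unpause_domain(domid);
        /* The domain runs again here, between batches. */
    }
    printf("moved %lu pages of domain %d to node %d\n",
           nr_pages, domid, target_node);
}

int main(void)
{
    migrate_memory(1 /* domid */, 4096 /* pages */, 1 /* target node */);
    return 0;
}
```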
Work item
- Dario (<dario.faggioli@citrix.com>), WORKING: enable moving memory from one node to another (on the same host).
- Dario (<dario.faggioli@citrix.com>), PLANNED: track how much non node-local memory is being accessed, and by whom.
IONUMA support
Description
If not only memory but also I/O controllers are attached to specific nodes, you end up with devices which are better used by VMs running on those nodes (or, vice versa, VMs that are better run on a particular node if/when they want to use a specific device).
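As a toy example of how such information could be consumed (all data and names below are invented; this is not Xen code), a placement decision could prefer the node a passed-through device is attached to and, failing that, the candidate node closest to it according to a SLIT-style distance table:

```c
/* Toy illustration of IONUMA-aware placement: prefer the NUMA node a
 * device is attached to, else the candidate node closest to it by
 * SLIT-style distance.  All data and names are invented for the example. */
#include <stdio.h>

#define NR_NODES 4

/* Hypothetical node distance matrix (10 = local, larger = farther). */
static const int distance[NR_NODES][NR_NODES] = {
    { 10, 16, 22, 22 },
    { 16, 10, 22, 22 },
    { 22, 22, 10, 16 },
    { 22, 22, 16, 10 },
};

/* Pick a node for a VM that uses a device attached to dev_node.
 * candidate[] flags the nodes with enough free memory/pCPUs for the VM. */
static int pick_node_near_device(int dev_node, const int candidate[NR_NODES])
{
    int best = -1;

    for (int n = 0; n < NR_NODES; n++) {
        if (!candidate[n])
            continue;
        if (best < 0 || distance[dev_node][n] < distance[dev_node][best])
            best = n;
    }
    return best;   /* dev_node itself wins if it is a candidate */
}

int main(void)
{
    /* Device on node 2; node 2 is full, nodes 0 and 3 could host the VM:
     * node 3 is chosen, since it is closer to the device than node 0. */
    const int candidate[NR_NODES] = { 1, 0, 0, 1 };
    printf("place VM on node %d\n", pick_node_near_device(2, candidate));
    return 0;
}
```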
Status
Yang Zhang did some previous investigation of this topic (which, by the way, goes under the name of IONUMA), and the result is the presentation I/O Scalability in Xen at Xen Summit 2011.
Apart from that, only some discussion has happened on xen-devel: 1, 2, 3.
Some more discussion, related to this, happened recently in these two threads: 1 (about PCIe proximity domains), 2 (about introducing some hwloc support for a Xen host).
Work item
- export IONUMA information to the user, as we currently do for the NUMA topology (in that case, with xl info -n);
- IONUMA and automatic placement: as said in the description, IONUMA info (once available) should bias the automatic placement decisions;
- Dom0/Driver IONUMA: devices should have their DMA buffers for the backends allocated on (or as close as possible to) the node where their IO controller is attached;
- guest IONUMA: devices passed through to guests should have their DMA buffers allocated on (or as close as possible to) the node where their IO controller is attached.