Xen 4.3 NUMA Aware Scheduling
'''Note:''' Originally posted on [http://blog.xenproject.org/index.php/2013/03/14/numa-aware-scheduling-development-report/ blog.xenproject.org]

__TOC__

When dealing with NUMA machines, it is (among other things) very important that we:
# achieve a good initial placement, when creating a new VM;
# have a solution that is both flexible and effective enough to take advantage of that placement during the whole VM lifetime.
The former basically means: <em><<When starting a new Virtual Machine, to which NUMA node should I "associate" it?>></em>. The latter is more about: <em><<How strictly should the VM be associated to that NUMA node? Could it, perhaps temporarily, run elsewhere?>></em>, and is what is usually called ''NUMA aware scheduling''.

This document describes what was included, regarding NUMA aware scheduling, in <em>Xen 4.3</em>. You can find other articles about NUMA in the [[:Category:NUMA|NUMA category]].
= Preliminary/Exploratory Work =
Suppose we have a VM with all its memory allocated on NODE#0 and NODE#2 of our NUMA host. One may think that the best thing to do would be to pin the VM’s vCPUs on the pCPUs related to the two nodes. However, pinning is quite inflexible: what if those pCPUs get very busy while there are completely idle pCPUs on other nodes? It will depend on the workload, but it is not hard to imagine that having some chance to run --even if on a remote node-- would be better than not running at all.
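To make this concrete, here is a small, purely illustrative sketch of what static pinning looks like from the toolstack side. The domain name and pCPU numbers are made up; on a real host, the node-to-pCPU mapping is part of the output of ''xl info -n''.

<pre>
# Show the host topology, including the NUMA node information
# (how much memory each node has, and which pCPUs belong to it).
xl info -n

# Hypothetical example: assume NODE#0 owns pCPUs 0-7 and NODE#2 owns
# pCPUs 16-23. Statically pinning all the vCPUs of the VM to those
# pCPUs means they can never run anywhere else, even if those pCPUs
# are busy and other pCPUs are idle.
xl vcpu-pin myvm all 0-7,16-23

# Check the resulting (hard) affinity of each vCPU.
xl vcpu-list myvm
</pre>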
The idea is, then, to give the scheduler some hints about where a VM’s vCPUs should be executed (this preference, in this context, will be called ''NUMA affinity'' from now on). The scheduler can then try its best to honor these suggestions of ours, but not at the cost of subverting its own algorithm. Here are some early experimental results for this idea (dating back to [http://lists.xen.org/archives/html/xen-devel/2012-04/msg00732.html this patchset]). The various curves in the graph below represent the throughput achieved by one VM when it is (a brief configuration sketch for these cases follows the list):
* scheduled without any pinning or NUMA affinity, i.e., ''cpus="all"'' in the config file (the red line);
* pinned on NODE#0, so that all its memory accesses are local (the green line);
* scheduled with NUMA affinity set to NODE#0, and no pinning (the blue line).
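As a rough guide to how the first two of these cases map onto an xl domain configuration (a hedged sketch; the pCPU numbers are hypothetical and assume NODE#0 owns pCPUs 0-7):

<pre>
# Case 1: no pinning and no NUMA affinity at all.
cpus = "all"

# Case 2: statically pinned to the pCPUs of NODE#0 (assumed here to be 0-7).
cpus = "0-7"
</pre>

The third case, NUMA affinity without pinning, is what the experimental patchset linked above provided; in current Xen it corresponds to the soft affinity described in the last section of this page.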
The plot shows the percent increase of each configuration with respect to the worst possible case (i.e., when all memory accesses are remote).

http://xenbits.xen.org/people/dariof/images/blog/NUMA_2/kernbench_avg2.png
It appears quite clear that introducing NUMA affinity increases performance by ~12% to ~18% over the worst case. It also enables up to ~8% higher performance compared to the unpinned behavior, and the higher the load on the host, the bigger the benefit.
The full set of results for these early benchmarks is available [http://xenbits.xen.org/people/dariof/benchmarks/specjbb2005-numa/ here]. There was a blog post about this, and it is still online at [http://blog.xen.org/index.php/2012/05/16/numa-and-xen-part-ii-scheduling-and-placement/ this address].
= The Actual Solution in Xen 4.3 =
Automatic placement made it into [[Xen_4.2_Feature_List|Xen 4.2]], and that meant that, when a VM is created, a (set of) NUMA node(s) is picked to store its memory, and its vCPUs are <strong>statically pinned</strong> to the pCPUs of such node(s). With NUMA aware scheduling, which was included in [[Xen_4.3_Feature_List|Xen 4.3]], the latter is no longer the case. In fact, instead of using pinning, the vCPUs <strong>strongly prefer</strong> to run on the pCPUs of the NUMA node(s), but they can run somewhere else as well.
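As an illustration of the difference, here is a minimal domain configuration sketch (the name is made up; the memory and vCPU values are the ones used by the benchmark VMs described below). Leaving ''cpus='' out lets libxl's automatic placement pick the NUMA node(s); with Xen 4.3, that choice becomes the domain's NUMA affinity rather than a hard pinning:

<pre>
name   = "numa-test-vm"   # hypothetical name
memory = 960
vcpus  = 2
# No "cpus=" line: libxl picks the NUMA node(s) automatically and, from
# Xen 4.3 onwards, the vCPUs only *prefer* the pCPUs of those node(s).
# cpus = "0-7"            # uncommenting this would force static pinning instead
</pre>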
During development, more benchmarks were run; for example, the following ones:
* SpecJBB: this is all about throughput, thus pinning is likely the ideal solution;
* Sysbench-memory: this is the time it takes for writing a fixed amount of memory (and then it is the throughput that is measured). What we expect is locality to be important, but at the same time the potential imbalances due to pinning could have a say in it;
* LMBench-proc: this is the time it takes for a process to fork a fixed number of children. This is much more about latency than throughput, with locality of memory accesses playing a smaller role and, again, imbalances due to pinning being a potential issue.
The host was a 2-node NUMA box, where 2 to 10 VMs (2 vCPUs and 960 MB of RAM each) were executing the various benchmarks concurrently. The results look as follows:
<pre>
----------------------------------------------------
| SpecJBB2005, throughput (the higher the better)  |
----------------------------------------------------
| #VMs | No affinity |  Pinning  | NUMA scheduling |
|   2  |  43318.613  | 49715.158 |    49822.545    |
|   6  |  29587.838  | 33560.944 |    33739.412    |
|  10  |  19223.962  | 21860.794 |    20089.602    |
----------------------------------------------------
| Sysbench memory, throughput (the higher the better)
----------------------------------------------------
| #VMs | No affinity |  Pinning  | NUMA scheduling |
|   2  |  469.37667  | 534.03167 |    555.09500    |
|   6  |  411.45056  | 437.02333 |    463.53389    |
|  10  |  292.79400  | 309.63800 |    305.55167    |
----------------------------------------------------
| LMBench proc, latency (the lower the better)     |
----------------------------------------------------
| #VMs | No affinity |  Pinning  | NUMA scheduling |
----------------------------------------------------
|   2  |  788.06613  | 753.78508 |    750.07010    |
|   6  |  986.44955  | 1076.7447 |    900.21504    |
|  10  |  1211.2434  | 1371.6014 |    1285.5947    |
----------------------------------------------------
</pre>
Which, reasoning in terms of %-performance increase/decrease, means NUMA aware scheduling does as follows, as compared to no affinity at all and to static pinning:

<pre>
----------------------------------
|    SpecJBB2005 (throughput)    |
----------------------------------
| #VMs | No affinity |  Pinning  |
|   2  |   +13.05%   |  +0.21%   |
|   6  |   +12.30%   |  +0.53%   |
|  10  |    +4.31%   |  -8.82%   |
----------------------------------
|  Sysbench memory (throughput)  |
----------------------------------
| #VMs | No affinity |  Pinning  |
|   2  |   +15.44%   |  +3.79%   |
|   6  |   +11.24%   |  +5.72%   |
|  10  |    +4.18%   |  -1.34%   |
----------------------------------
|    LMBench proc (latency)      |
|   NOTICE: -x.xx% = GOOD here   |
----------------------------------
| #VMs | No affinity |  Pinning  |
----------------------------------
|   2  |    -5.66%   |  -0.50%   |
|   6  |    -9.58%   | -19.61%   |
|  10  |    +5.78%   |  -6.69%   |
----------------------------------
</pre>
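As a reading aid, each entry in this second set of tables appears to be the gap between the named configuration and NUMA aware scheduling, expressed relative to the NUMA aware scheduling result from the tables above (the exact rounding of the published figures may differ slightly). For example, for Sysbench memory with 2 VMs:

<pre>
(555.09500 - 469.37667) / 555.09500 = 0.1544  ->  "+15.44%" vs. no affinity
</pre>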
The tables show how, when not in overload (where overload = 'more vCPUs than pCPUs'), NUMA aware scheduling is <strong>the absolute best</strong>. In fact, not only does it do a lot better than no pinning on the throughput biased benchmarks, and a lot better than pinning on the latency biased benchmark (especially with 6 VMs), it also equals or beats both under adverse circumstances (adverse to NUMA aware scheduling, i.e., it beats/equals pinning in the throughput benchmarks, and beats/equals no-affinity on the latency benchmark).
When the system is overloaded, NUMA aware scheduling scores in the middle, as could have been expected. It must also be noticed that, when it brings benefits, they are not as huge as in the non-overloaded case (which probably means there is still room for some more optimization). In particular, the current way a pCPU is selected when a vCPU wakes up couples particularly badly with the new concept of NUMA affinity. Changing this is not trivial, because it involves rearranging some locks inside the scheduler code, but it can be done, if deemed worthwhile.
The [http://xenbits.xen.org/docs/unstable/misc/xl-numa-placement.html in-tree documentation] has some more details about NUMA aware scheduling, and the interactions it has with [[Xen_4.2_Automatic_NUMA_Placement|automatic NUMA placement]]. There was also a blog post about this topic, available [http://blog.xen.org/index.php/2013/03/14/numa-aware-scheduling-development-report/ here].
= Soft Scheduling Affinity =
Starting from Xen 4.5, credit1 supports two forms of affinity: hard and soft, both on a per-vCPU basis. This means each vCPU can have its own soft affinity, stating where such a vCPU prefers to execute. This is less strict than what (also starting from Xen 4.5) is called hard affinity, as the vCPU can potentially run everywhere; it just prefers some pCPUs over others. In Xen 4.5, therefore, NUMA-aware scheduling is achieved by matching the soft affinity of the vCPUs of a domain with its node-affinity.
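In terms of configuration, this is a minimal sketch of how hard and soft affinity can be expressed (assuming Xen 4.5 or later; the domain name and pCPU numbers are made up, and we assume pCPUs 0-7 form NUMA node 0 on this host):

<pre>
# In the domain config file: "cpus=" sets the hard affinity (pinning),
# "cpus_soft=" sets the soft affinity (preference).
cpus      = "all"    # may run on any pCPU...
cpus_soft = "0-7"    # ...but prefers the pCPUs assumed to form node 0

# At runtime, xl vcpu-pin takes a hard affinity and, from Xen 4.5,
# an optional soft affinity as a further argument:
xl vcpu-pin myvm all all 0-7

# xl vcpu-list shows the affinities of each vCPU.
xl vcpu-list myvm
</pre>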
In fact, as it was for 4.3, if all the pCPUs in a vCPU's soft affinity are busy, it is possible for the domain to run outside of it. The idea is that slower execution (due to remote memory accesses) is still better than no execution at all (as would happen with pinning). For this reason, NUMA aware scheduling has the potential of bringing substantial performance benefits, although this will depend on the workload.
Therefore, for each vCPU, the following three scenarios are possible (a configuration sketch follows the list):
* a vCPU is ''pinned'' to some pCPUs and ''does not have any soft affinity''. In this case, the vCPU is always scheduled on one of the pCPUs to which it is pinned, without any specific preference among them.
* a vCPU ''has'' its own soft affinity and ''is not pinned'' to any particular pCPU. In this case, the vCPU can run on every pCPU. Nevertheless, the scheduler will try to have it running on one of the pCPUs in its soft affinity;
* a vCPU ''has'' its own soft affinity and ''is also pinned'' to some pCPUs. In this case, the vCPU is always scheduled on one of the pCPUs onto which it is pinned, with, among them, a preference for the ones that also form its soft affinity. In case pinning and soft affinity form two disjoint sets of pCPUs, pinning "wins", and the soft affinity is just ignored.
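In configuration terms, the three scenarios roughly correspond to the following fragments (an illustrative sketch; the pCPU numbers are arbitrary):

<pre>
# 1) pinning only: always runs on pCPUs 0-3, with no preference among them.
cpus      = "0-3"

# 2) soft affinity only: may run on any pCPU, but prefers 0-3.
cpus_soft = "0-3"

# 3) both: always runs on pCPUs 0-7, preferring 0-3 among them.
#    If the two sets were disjoint, the pinning would win and the
#    soft affinity would simply be ignored.
cpus      = "0-7"
cpus_soft = "0-3"
</pre>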
Finally, soft affinity is not necessarily related to the NUMA characteristics of the host, and can be tweaked independently to achieve arbitrary results. See [[Tuning_Xen_for_Performance#vCPU_Soft_Affinity_for_guests|here]] for more details about it.
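For instance, assuming a running domain called "myvm" with 2 vCPUs, the two vCPUs can be given different, NUMA-unrelated soft affinities at runtime (again an illustrative sketch, using the Xen 4.5 syntax of xl vcpu-pin):

<pre>
# vCPU 0: no pinning ("all"), but prefer pCPUs 0-3.
xl vcpu-pin myvm 0 all 0-3
# vCPU 1: no pinning, but prefer pCPUs 4-7.
xl vcpu-pin myvm 1 all 4-7
</pre>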
[[Category:Xen]]
[[Category:Xen 4.3]]
[[Category:NUMA]]
[[Category:Performance]]
[[Category:Developers]]
[[Category:Resource Management]]