== Tuning your Xen installation: recommended settings ==

=== Storage options ===

There are several choices for storage; however, it is important to understand that the IO performance inside the guest depends greatly on the storage option used:

* LVM: this is probably the simplest way to obtain good storage IO performance on Linux without much hassle (see the example below).

* ZFS ZVOLs: this is a more advanced configuration, and should provide better performance if configured properly. ZFS has some advanced features like ARC, L2ARC and ZIL that can provide much better performance than plain LVM volumes if properly configured and tuned. Please note that, due to ZFS memory requirements, in this case the Dom0/driver domain should be given at least 4GB of RAM (or even more in order to increase performance).

* iSCSI: the default toolstack in Xen supports using iSCSI disks as storage backends for guests. The performance of iSCSI greatly depends on the capabilities of the server and the network components, but if configured properly it should provide performance similar to LVM or ZFS.

* Files: using files as backends for guest storage is not recommended for performance reasons, but it has several benefits in terms of features, like being able to use the raw, qcow, qcow2 or vhd formats to store guest disks.

See [[Storage_options]] for more details.

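For example, with LVM a guest disk is typically a logical volume exported directly to the guest. A minimal sketch, assuming a volume group called <tt>vg0</tt> and a guest named <tt>guest1</tt> (both hypothetical names):

<pre>
# create a 20GB logical volume for the guest (assumes vg0 already exists)
lvcreate -L 20G -n guest1-disk vg0
</pre>

and then in the guest config file:

<pre>
disk = [ "/dev/vg0/guest1-disk,raw,xvda,w" ]
</pre>
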
=== Memory ===

If the host has more memory than a typical laptop/desktop system, then do not rely on dom0 ballooning. Instead set the dom0 memory to something between 1 and 4GB by adding dom0_mem=1024M to the Xen command line.

1GB is enough for a pretty large host; more will be needed if you expect your users to use advanced storage types such as ZFS or distributed filesystems.

Dedicating a fixed amount of memory to dom0 is good for two reasons:

* First, the (dom0) Linux kernel calculates various network-related parameters based on the amount of memory available at boot time.

* Second, Linux needs memory to store the memory metadata (per-page info structures), and this allocation is also sized based on the boot-time amount of memory.

If you instead boot the system with all memory visible to dom0 and then balloon dom0 down every time you start a new guest, dom0 ends up with only a small fraction of its original (boot-time) memory. The parameters calculated at boot are then no longer correct, and you waste a lot of memory on metadata for memory dom0 no longer has. Ballooning down a busy dom0 can also have bad side effects.

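As a sketch of how this is typically configured, assuming a GRUB 2 based distro (the exact file and variable may differ on your system), the limit can be set on the hypervisor command line and capped with the <tt>max:</tt> option so dom0 never balloons above it:

<pre>
# /etc/default/grub (hypothetical location, Debian/Ubuntu style)
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M,max:1024M"
</pre>

Then regenerate the boot configuration (e.g. with update-grub) and reboot.
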
=== Dom0 vCPUs ===

By default Dom0 gets as many vCPUs as there are CPUs on the physical host. This might be fine if your host only has 4 CPUs, but as systems get bigger there is no reason to assign that many vCPUs to Dom0, so reducing the number to something sensible can help performance. The right number of vCPUs for Dom0 greatly depends on the host workload. For example, running HVM domains without stubdomains means you can end up with a lot of QEMU instances in Dom0 that may use quite a lot of CPU, so in that case you should make sure Dom0 has enough vCPUs assigned. In general you should not assign fewer than 4 vCPUs to Dom0, and you should then keep an eye on the load in Dom0 to make sure it can sustain the workload with the current assignment, or your guests will start to suffer performance degradation.

Another interesting approach is pinning Dom0 vCPUs to physical CPUs; this can be done by adding dom0_vcpus_pin to the Xen command line. Once Dom0 has booted you can see which CPUs its vCPUs have been pinned to and exclude other domains from running on those CPUs. In this example the command line "dom0_max_vcpus=4 dom0_vcpus_pin" was used:

<pre>
# xl vcpu-list
Name                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0             0     0    0   -b-      28.8  0
Domain-0             0     1    1   -b-      22.0  1
Domain-0             0     2    2   r--      22.0  2
Domain-0             0     3    3   -b-      22.2  3
</pre>

Now we have to prevent other domains from using those CPUs (0 to 3), so that Dom0 never has to be scheduled out. This is done by adding the following to the guest configuration file:

<pre>
cpus="all,^0-3"
</pre>

This will make the domain use all available CPUs except the ones currently pinned to Dom0:

<pre>
# xl vcpu-list
Name                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0             0     0    0   -b-      30.4  0
Domain-0             0     1    1   r--      24.2  1
Domain-0             0     2    2   -b-      23.4  2
Domain-0             0     3    3   -b-      22.8  3
guest                2     0    7   -b-       0.4  4-7
guest                2     1    4   -b-       1.4  4-7
</pre>

Another option, for those who don't want to pin Dom0 to specific CPUs, is to increase the relative weight of Dom0 so that it gets scheduled more often than unprivileged domains. By default all guests in Xen (including Dom0) have a weight of 256. This can be a problem if all domains rely on Dom0 for IO, since Dom0 can easily become a bottleneck.

<pre>
# xl sched-credit
Cpupool Pool-0: tslice=30ms ratelimit=1000us
Name                ID Weight  Cap
Domain-0             0    256    0
</pre>

An easy solution to this is to increase the weight of Dom0, while leaving the other domains with the default weight:

<pre>
# xl sched-credit -d 0 -w 512
# xl sched-credit
Cpupool Pool-0: tslice=30ms ratelimit=1000us
Name                ID Weight  Cap
Domain-0             0    512    0
guest                3    256    0
</pre>

In this case Dom0 will get twice as much CPU time as a normal guest. See [[Credit_Scheduler]] for more information.

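If you prefer to make a guest's weight persistent rather than setting it at runtime, the credit scheduler weight can also be set in the guest config file; a minimal sketch (check xl.cfg(5) on your Xen version for the exact option name):

<pre>
# give this guest the default weight explicitly (half of a Dom0 set to 512)
cpu_weight = 256
</pre>
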
== Tuning your Xen installation: advanced settings ==

=== HAP vs. shadow ===

HAP stands for hardware assisted paging and requires a CPU feature called EPT by Intel and RVI by AMD. It is used to manage the guest's MMU. The alternative is shadow paging, completely managed in software by Xen. With HAP, TLB misses are expensive, so if your workload does very random memory accesses HAP can be costly; with shadow paging, page table updates are expensive.

HAP is enabled by default (and it is the recommended setting) but can be disabled by passing '''hap=0''' in the VM config file.

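For instance, to force shadow paging for a single guest you would add the following line to that guest's config file (a minimal sketch; all other settings stay unchanged):

<pre>
# disable hardware assisted paging for this guest only
hap = 0
</pre>
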
=== PV vs PV on HVM ===

Linux, NetBSD, FreeBSD and Solaris can run as PV or PV on HVM guests. Memory-intensive workloads that involve the continuous creation and destruction of page tables can perform better when run in a PV on HVM guest; examples are kernbench and sql-bench. On the other hand, memory workloads that run on a quasi-static set of page tables run better in a PV guest; an example of this kind of workload is specjbb. See [[Xen_Linux_PV_on_HVM_drivers#Performance_Tradeoffs|Performance Tradeoffs]] for more details.

A basic PV guest config file looks like the following:

<pre>
bootloader = "/usr/bin/pygrub"
memory = 1024
name = "linux"
vif = [ "bridge=xenbr0" ]
disk = [ "/root/images/debian_squeeze_amd64_standard.raw,raw,xvda,w" ]
root = "/dev/xvda1"
</pre>

You can also specify a kernel and ramdisk path in the dom0 filesystem directly in the VM config file, to be used for the guest:

<pre>
kernel = "/boot/vmlinuz"
ramdisk = "/boot/initrd"
memory = 1024
name = "linux"
vif = [ "bridge=xenbr0" ]
disk = [ "/images/debian_squeeze_amd64_standard.raw,raw,xvda,w" ]
root = "/dev/xvda1"
</pre>

See this page for instructions on how to install a Debian PV guest.

HVM guests run in a fully emulated environment that looks like a normal PC from the inside. As a consequence an HVM config file is a bit different and cannot specify a kernel and a ramdisk. On the other hand, it is possible to perform an HVM installation from an emulated CD-ROM, using the ISO of your preferred distro. It is also possible to PXE boot the VM. See the following very basic example:

<pre>
builder="hvm"
memory=1024
name = "linuxhvm"
vif = [ "type=ioemu, bridge=xenbr0" ]
disk = [ "/images/debian_squeeze_amd64_standard.raw,raw,hda,w", "/images/debian-6.0.5-amd64-netinst.iso,raw,hdc:cdrom,r" ]
serial="pty"
boot = "dc"
</pre>

See [[Xen_Linux_PV_on_HVM_drivers#Example_HVM_guest_configuration_file_for_PVHVM_use | this page]] for a more detailed example PV on HVM config file.

=== vCPU Pinning for guests ===

You can dedicate a physical CPU to a particular virtual CPU or a set of virtual CPUs. If you have enough physical CPUs for all your guests, including dom0, you can make sure that the scheduler won't get in your way. Even if you don't have enough physical CPUs for everybody, you can still use this technique to ensure that a particular guest always has CPU time.

<pre>
xl vcpu-pin Domain-name1 0 0
xl vcpu-pin Domain-name1 1 1
</pre>

These two commands pin vcpus 0 and 1 of Domain-name1 to physical CPUs 0 and 1. However, they do not prevent other vcpus from running on pcpu 0 and pcpu 1: you need to plan in advance and pin the vcpus of all your guests so that they won't be running on pcpu 0 and 1. For example:

<pre>
xl vcpu-pin Domain-name2 all 2-6
</pre>

This command forces all the vcpus of Domain-name2 to run only on physical CPUs 2 to 6, leaving pcpu 0 and 1 to Domain-name1. You can also add the following line to the config file of the VM to automatically pin the vcpus to a set of pcpus at boot time:

<pre>
cpus="2-6"
</pre>

{{Warning|Pinning can have unexpected negative effects just as often as beneficial ones. Before using pinning in a production situation, test it on your workload to prove it will be beneficial for you. If you are unsure, the default decision should be to ''not pin''.}}

=== vCPU Soft Affinity for guests ===

Starting from Xen 4.5, each vcpu has:

* a hard affinity, also known as pinning (see the paragraph above): the list of pcpus where a vcpu is allowed to run;

* a soft affinity, introduced in Xen 4.5: the list of pcpus where a vcpu '''prefers''' to run.

This helps in all the situations where it is preferable for (some of) the vcpus of a VM to execute on a given set of the host's pcpus, while still allowing them to run somewhere else if all the pcpus in that ''preferred set'' are busy. A typical use case is NUMA machines, where the soft affinity of a VM's vcpus should be set equal to the pcpus of the NUMA node where the VM has been placed (xl and libxl will do this automatically, if not instructed otherwise).

To control soft affinity, the command is still <tt>xl vcpu-pin</tt>. Starting from 4.5, soft affinity is listed next to hard affinity:

<pre>
xl vcpu-list 1
Name                    ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
debian.guest.osstest     1     0   12   -b-       5.2  all / all
debian.guest.osstest     1     1   14   -b-       3.3  all / all
</pre>

To alter it, use the 4th parameter of <tt>xl vcpu-pin</tt>. It is possible to set the soft affinity without changing the vcpu pinning by using "<tt>-</tt>" as the 3rd parameter:

<pre>
xl vcpu-pin 1 all - 16-18
xl vcpu-list 1
Name                    ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
debian.guest.osstest     1     0   19   -b-       5.3  all / 16-18
debian.guest.osstest     1     1   18   -b-       3.3  all / 16-18
</pre>

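Soft affinity can also be set in the guest config file so that it is applied at boot; a minimal sketch (assuming Xen 4.5 or newer; check xl.cfg(5) on your version for the exact option names):

<pre>
# hard affinity: any pcpu; soft affinity (preference): pcpus 16-18
cpus = "all"
cpus_soft = "16-18"
</pre>
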
=== TCP Small Queue ===

Since Linux 3.19, the kernel uses TCP Small Queues, which have less than optimal performance on Xen. You can switch back to the older single-flow throughput behaviour and improve network performance by increasing tcp_limit_output_bytes:

<pre>
echo 1048576 > /proc/sys/net/ipv4/tcp_limit_output_bytes
</pre>

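Note that a value written to /proc this way does not survive a reboot. To make it persistent you can set the corresponding sysctl, for example (assuming a distro that reads /etc/sysctl.d/; the file name is arbitrary):

<pre>
# /etc/sysctl.d/60-xen-tcp.conf
net.ipv4.tcp_limit_output_bytes = 1048576
</pre>
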
=== NUMA ===

A NUMA machine is typically a multi-socket machine built in such a way that processors have their own local memory. A group of processors connected to the same memory controller is usually called a ''node''. Accessing memory from remote nodes is always possible, but it is usually very slow. Since VMs are usually small (both in number of vcpus and amount of memory), it should be possible to avoid remote memory access altogether. Both XenD and xl (starting from Xen 4.2) try to automatically make that happen '''by default''': they will allocate the vcpus and memory of your VMs taking the NUMA topology of the underlying host into account, if no vcpu pinning or cpupools are specified (see right below). Check out [[Xen_NUMA_Introduction|this article]] for some more details.

However, if one wants to manually control from which node(s) the vcpus and the memory of a VM should come, the following mechanisms are available:

* vcpu pinning: if you use the '''cpus''' setting in the VM config file (as described in the previous chapter) to assign all the vcpus of a VM to the pcpus of a single NUMA node, all the memory of the VM will be allocated locally to that node too: no remote memory access will occur (this is available in xl starting from Xen 4.2). To figure out which physical cpus belong to which NUMA node, you can use the following command:

<pre>
xl info -n
</pre>

* [[Xen_4.2:_cpupools|cpupools]]: using the command '''xl cpupool-numa-split''' (see [[Cpupools_Howto#Using_cpupool-numa-split|here]]) you can split your physical cpus and memory into pools according to the NUMA topology. You'll end up with one cpupool per NUMA node: use '''xl cpupool-list''' to see the available cpupools. Then you can assign each VM to a different cpupool by adding the following to the VM config file (see also the sketch after this list):

<pre>
pool="Pool-node0"
</pre>

* [[NUMA Aware Scheduling]]: for achieving the best possible locality while the VM is running. Starting from Xen 4.5, NUMA aware scheduling is implemented by means of scheduling soft affinity.

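As a quick sketch of the cpupool approach described above (the pool names depend on your host's NUMA topology):

<pre>
# create one cpupool per NUMA node and list the result
xl cpupool-numa-split
xl cpupool-list
</pre>

and then bind each VM to one of the resulting pools in its config file, e.g. pool="Pool-node0".
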
To find out more about NUMA within this Wiki, check out the various pages from the proper category: [[:Category:NUMA]]

== Performance bugs ==

In Xen 4.5 we tried to determine whether the reported performance issue with Hyperthreading in the Credit1 scheduler was a regression or had always existed. See [http://www.gossamer-threads.com/lists/xen/devel/339409 Virt overhead with HT (was: Re: Xen 4.5 development update)] for the gory details.

The brief summary is that Xen Credit1 shows a 7.9% performance drop when using SMT with the kernbench workload. That is something that will be looked at in the future.

We originally thought it was a regression, but it is inherent to the way credit1 has been implemented and has been present since credit1 was introduced.

The workload makes a difference: while kernbench stresses a multitude of paths, other workloads will work just fine.

Please note that in the most optimal case HT gives a 30% boost. Accounting for the 7.9% drop, that means you can still get roughly a 20% performance gain.

Please note also that earlier versions of the Linux kernel did not have PV-aware spinlocks, which would contribute to this. With Linux 3.11 booting as PVHVM (with CONFIG_PARAVIRT_SPINLOCK enabled) and with Linux 2.6.32 booting as PV (also with CONFIG_PARAVIRT_SPINLOCK enabled), the lock contention leading to abysmal performance when CPUs are oversubscribed has been ameliorated.

[[Category:Xen]]
[[Category:Performance]]
[[Category:NUMA]]
[[Category:Resource Management]]