= Network Throughput and Performance Guide =
 
If you would like to contribute to this guide, please submit your feedback to [mailto:rok.strnisa@citrix.com Rok Strniša], or get an account and edit the page yourself.

If you would like to be notified about updates to this guide, please create an account and "Watch" this page.

== Scenarios ==

There are many possible scenarios where network throughput can be relevant. The major ones that we have identified are:
* '''dom0 throughput''' The traffic is sent/received directly by <tt>dom0</tt>.
* '''single-VM throughput''' The traffic is sent/received by a single VM.
* '''multi-VM throughput''' The traffic is sent/received by multiple VMs, concurrently. Here, we are interested in aggregate network throughput.
* '''single-VCPU VM throughput''' The traffic is sent/received by single-VCPU VMs.
* '''single-VCPU single-TCP-thread VM throughput''' The traffic is sent/received by a single TCP thread in single-VCPU VMs.
* '''multi-VCPU VM throughput''' The traffic is sent/received by multi-VCPU VMs.
* '''network throughput for storage''' The traffic sent/received originates from/is stored on a storage device.
   
 
== Technical Overview ==

[[File:Network_Throughput_Guide.png]]

Therefore, when a process in a VM, e.g. a VM with <tt>domID</tt> equal to <tt>X</tt>, wants to send a network packet, the following occurs:
# A process in the VM generates a network packet '''P''', and sends it to the VM's virtual network interface (VIF), e.g. <tt>ethY_n</tt> for some network <tt>Y</tt> and some connection <tt>n</tt>.
# The driver for that VIF, the <tt>netfront</tt> driver, then shares the memory page (which contains the packet '''P''') with the backend domain by establishing a new grant entry. A grant reference is part of the request pushed onto the transmit shared ring (<tt>Tx Ring</tt>).
# <tt>netfront</tt> then notifies, via an event channel (not on the diagram), one of the <tt>netback</tt> threads in <tt>dom0</tt> (the one responsible for <tt>ethY_n</tt>) where in the shared pages the packet '''P''' is stored. ([[XenStore]] is used to set up the initial connection between the front-end and the back-end, deciding on what event channel to use, and where the shared rings are.)
# <tt>netback</tt> (in <tt>dom0</tt>) fetches '''P''', processes it, and forwards it to <tt>vifX.Y_n</tt>.
# The packet is then handed to the back-end network stack, where it is treated according to its configuration just like any other packet arriving on a network device.
 
When a VM is to receive a packet, the process is almost the reverse of the above. The key difference is that on receive there is a copy being made: it happens in <tt>dom0</tt>, and is a copy from back-end owned memory into a <tt>Tx Buf</tt>, which the guest has granted to the back-end domain. The grant references to these buffers are in the request on the <tt>Rx Ring</tt> (not <tt>Tx Ring</tt>).
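
If you want to see these pieces on a live host, a couple of <tt>dom0</tt> commands give a quick, illustrative look; the XenStore path layout and thread names can differ between releases, and <tt>X</tt>/<tt>Y</tt> below are placeholders for a real domID and VIF index:

<pre>
# List the netback threads running in dom0 (thread names vary between kernel versions):
ps ax | grep [n]etback

# Show the XenStore entries describing VIF Y of the domain with domID X
# (replace X and Y with real numbers, e.g. taken from "xe vm-list params=dom-id"):
xenstore-ls /local/domain/X/device/vif/Y
</pre>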
 
   
 
== Symptoms, probable causes, and advice ==

Below is a list of common symptoms (together with their probable causes and advice):
   
* I/O is extremely slow on my Hardware Virtualised Machine (HVM), e.g. a Windows VM.
** '''Verifying the symptom''': Compare the results of an I/O speed test on the problem VM and a healthy VM; they should be at least an order of magnitude different.
** '''Probable cause''': The HVM does not have PV drivers installed.
** '''Background''': With PV drivers, an HVM can make direct use of some of the underlying hardware, leading to better performance.
** '''Recommendation''': Install PV drivers.
 
 
* VM's VCPU is fully utilised.
** '''Verifying the symptom''': Run <tt>xentop</tt> in <tt>dom0</tt> --- this should give a fairly good estimate of aggregate usage for all VCPUs of a VM; pressing '''V''' reveals how many seconds were spent in which VM's VCPU. Running VCPU measurement tools inside the VM ''does not'' give reliable results; they can only be used to find rough relative usage between applications in a VM.
** '''Background''': When a VM sends or receives network traffic, it needs to do some basic packet processing.
** '''Probable cause''': There is too much traffic for that VCPU to handle.
*** '''Recommendation 1''': Try enabling NIC offloading --- see Tweaks (below) on how to do this.
*** '''Recommendation 2''': Try running the application that does the sending/receiving of network traffic with multiple threads. This will give the OS a chance to distribute the workload over all available VCPUs.
 
   
 
* HVM VM's first (and possibly only) VCPU is fully utilised.
** '''Verifying the symptom''': ''Same as above.''
** '''Background''': Currently, only a VM's first VCPU can process interrupt requests.
** '''Probable cause''': The VM is receiving too many packets for its current setup.
*** '''Recommendation 1''': If the VM has multiple VCPUs, try to associate application processing with non-first VCPUs.
*** '''Recommendation 2''': Use more (1 VCPU) VMs to handle receive traffic, and a workload balancer in front of them.
*** '''Recommendation 3''': If the VM has multiple VCPUs and there's no definite need for it to have multiple VCPUs, create multiple 1-VCPU VMs instead (see '''Recommendation 2''').
** '''Plans for improvement''': The underlying architecture needs to be improved so that a VM's non-first VCPUs can also process interrupt requests.
 
   
* In <tt>dom0</tt>, a high percentage of a single VCPU is spent processing system interrupts.
** '''Verifying the symptom''': Run <tt>top</tt> in <tt>dom0</tt>, then press <tt>z</tt> (for colours) and <tt>1</tt> (to show the VCPU breakdown). Check if there is a high value for <tt>si</tt> for a single VCPU.
** '''Background''': When packets are sent to a VM on a host, its <tt>dom0</tt> needs to process interrupt requests associated with the interrupt queues that correspond to the device the packets arrived on.
** '''Probable cause''': <tt>dom0</tt> is set up to process all interrupt requests for a specific device on a specific <tt>dom0</tt> VCPU.
*** '''Recommendation 1''': Check in <tt>/proc/interrupts</tt> whether your device exposes multiple interrupt queues. If the device supports this feature, make sure that it is enabled.
*** '''Recommendation 2''': If the device supports multiple interrupt queues, distribute the processing of them either automatically (by using the <tt>irqbalance</tt> daemon), or manually (by setting <tt>/proc/irq/<irq-no>/smp_affinity</tt>), over all (or a subset of) <tt>dom0</tt> VCPUs.
*** '''Recommendation 3''': Otherwise, make sure that an otherwise relatively-idle <tt>dom0</tt> VCPU is set to process the interrupt queue (by manually setting the appropriate <tt>/proc/irq/<irq-no>/smp_affinity</tt>).
   
* In <tt>dom0</tt>, a VCPU is fully occupied with a <tt>netback</tt> process.
** '''Verifying the symptom''': Run <tt>top</tt> in <tt>dom0</tt>. Check if there is a <tt>netback</tt> process which appears to be taking almost 100%. Then, run <tt>xentop</tt> in <tt>dom0</tt>, and check VCPU usage for <tt>dom0</tt>: if it reads about 120% +/- 20% when there is no other significant process in <tt>dom0</tt>, then there's a high chance that you have confirmed the symptom.
** '''Background''': When packets are sent from or to a VM on a host, the packets are processed by a <tt>netback</tt> process, which is <tt>dom0</tt>'s side of the VM network driver (the VM's side is called <tt>netfront</tt>).
** '''General Recommendation''': Try enabling NIC offloading --- see Tweaks (below) on how to do this.
** '''Possible cause 1''': The VMs' VIFs are not correctly distributed over the available <tt>netback</tt> threads.
*** '''Recommendation''': Read the [http://support.citrix.com/article/CTX127970 related KB article].
** '''Possible cause 2''': Too much traffic is being sent over a single VIF.
*** '''Recommendation''': Create another VIF for the corresponding VM, and set up the application(s) within the VM to send/receive traffic over both VIFs. Since each VIF should be associated with a different <tt>netback</tt> process (each of which is linked to a different <tt>dom0</tt> VCPU), this should remove the associated <tt>dom0</tt> bottleneck. If every <tt>dom0</tt> <tt>netback</tt> thread is taking 100% of a <tt>dom0</tt> VCPU, increase the number of <tt>dom0</tt> VCPUs and <tt>netback</tt> threads first --- see Tweaks (below) on how to do this.

* In <tt>dom0</tt>, most/all VCPUs are fully occupied with <tt>netback</tt> processes.
** '''Verifying the symptom''': Same as above, except that it is true for all <tt>dom0</tt> VCPUs.
** '''Background''': Same as above.
** '''General Recommendation 1''': Pin <tt>dom0</tt> VCPUs to physical CPUs, making sure that no user domains are using the same physical CPUs. The Tweaks section describes how to pin VCPUs.
** '''General Recommendation 2''': If you have a powerful host and spare CPU capacity, give more VCPUs to <tt>dom0</tt>, increase the number of <tt>netback</tt> threads, and restart your VMs (to force re-allocation of VIFs to <tt>netback</tt> threads). The Tweaks section describes how you can increase the number of <tt>dom0</tt> VCPUs and <tt>netback</tt> threads.
** '''General Recommendation 3''': If your host has no spare CPU capacity, try decreasing the load by putting fewer VMs on the host and/or removing VCPUs from the VMs.
   
* There is a VCPU bottleneck either in a <tt>dom0</tt> or in a VM, and I have control over both the sending and the receiving side of the network connection.
** '''Verifying the symptom''': (See notes about <tt>xentop</tt> and <tt>top</tt> above.)
** '''Background''': (Roughly) Each packet generates an interrupt request, and each interrupt request requires some VCPU capacity.
** '''Recommendation''': Enable Jumbo Frames (see Tweaks (below) for more information) for the whole connection. This should decrease the number of interrupts, and therefore decrease the load on the associated VCPUs (for a specific amount of network traffic).

* There is obviously no VCPU bottleneck either in a <tt>dom0</tt> or in a VM --- why is the framework not making use of the spare capacity?
** '''Verifying the symptom''': (See notes about <tt>xentop</tt> and <tt>top</tt> above.)
** '''Background''': There are ''many'' factors involved in network performance, and many more when using virtual machines.
** '''Possible cause 1''': Part of the connection has reached its physical throughput limit.
*** '''Recommendation 1''': Verify that all network components in the connection path physically support the desired network throughput.
*** '''Recommendation 2''': If a physical limit has been reached for the connection, add another network path, set up appropriate PIFs and VIFs, and configure the application(s) to use both/all paths.
** '''Possible cause 2''': Some parts of the software associated with network processing might not be completely parallelisable, or the hardware cannot make use of its parallelisation capabilities if the software doesn't follow certain patterns of behaviour.
*** '''Recommendation 1''': Set up the application used for sending or receiving network traffic to use multiple threads. Experiment with the number of threads.
*** '''Recommendation 2''': Experiment with the TCP parameters, e.g. window size and message size --- see Tweaks (below) for recommended values.
*** '''Recommendation 3''': If IOMMU is enabled on your system, try disabling it. See Tweaks for a section on how to disable IOMMU.
*** '''Recommendation 4''': Try switching the network backend. See the Tweaks section on how to do that.

* Since switching to XenServer 6.0.0 or XCP 1.5 (or later), aggregate network throughput has decreased for my Windows VMs.
** '''Verifying the symptom''': Compare performance on the old system with performance on the new system. (See the section on [http://wiki.xenproject.org/wiki/Network_Throughput_Guide#Making_throughput_measurements how to make measurements].)
** '''Possible cause 1''': With XenServer 6.0.0 or XCP 1.5, the RSC (receive-side copying) feature is enabled by default. This feature moves some work that is otherwise done in the control domain into user domains. RSC can cause lower aggregate network throughput.
*** '''Recommendation 1''': Try switching RSC off in all Windows VMs. See the Tweaks section on how to do that.
   
 
== Making throughput measurements ==

This section describes the installation and usage of some of the more common network performance tools.

=== Our helper tools ===

We created an open-source repository for various tools that can help with performance analysis: [https://github.com/perf101/scripts https://github.com/perf101/scripts]

Please visit the link above for an overview of the scripts available, and for downloading any of them.

If you notice any bugs, if you have any improvement suggestions, or if you would like to add your own script to our repository, please let us know.
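
For example, assuming <tt>git</tt> is installed, all of the scripts can be fetched in one go (the clone URL is simply the repository linked above):

<pre>
git clone https://github.com/perf101/scripts.git perf101-scripts
</pre>
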
=== Iperf ===

==== Installation ====

===== Linux =====
Make sure the following packages are installed on your system: <tt>gcc</tt>, <tt>g++</tt>, <tt>make</tt>, and <tt>subversion</tt>.

Iperf can be installed from Iperf's SVN repository:

<pre>
svn co https://iperf.svn.sourceforge.net/svnroot/iperf iperf
cd iperf/trunk
./configure
make
make install
cd
iperf --version # should mention pthreads
</pre>

You might also be able to install it via a package manager, e.g.:

<pre>
apt-get install iperf
</pre>

When using the <tt>yum</tt> package manager, you can install it via [http://wiki.centos.org/AdditionalResources/Repositories/RPMForge RPMForge].
   
 
===== Windows =====

You can use one of the following executables:
* Iperf 1.7.0 (win32 threads): [http://downloads.xenproject.org/Wiki/Network_Throughput_Guide/iperf.exe iperf-170-win32-threads.exe]
* Iperf 2.0.5 (pthreads): [http://downloads.xenproject.org/XCP/iperf205.zip iperf-205-pthreads.zip]

Note that we are ''not'' the authors of the above executables. Please use your anti-virus software to scan the files before using them.

Although we haven't done many measurements with Iperf 2.0.5 (pthreads) on Windows yet, it appears to perform better than Iperf 1.7.0 (win32 threads) in most circumstances. The most noticeable difference is that one can achieve roughly the same aggregate throughput with fewer Iperf threads when using Iperf 2.0.5 --- often one thread suffices. If you notice any specific scenarios where one performs better than the other (irrespective of the number of Iperf threads used), please let us know, and we will update this guide accordingly.
   
 
==== Usage ====

We recommend the following usage of <tt>iperf</tt>:
* Make sure that the firewall is disabled or allows <tt>iperf</tt> traffic.
* Tell <tt>iperf</tt> what units to report the results in, e.g. by using <tt>-f m</tt> --- if not set explicitly, <tt>iperf</tt> will change units based on the result.
* An <tt>iperf</tt> test should last at least 20 seconds, e.g. <tt>-t 20</tt>.
* You can see interim measurements via <tt>-i 1</tt>.
* Experiment with multiple communication threads, e.g. <tt>-P 4</tt>.
* Repeat a test in a specific context at least 5 times, calculating an average, and making notes of any anomalies.
* Experiment with TCP window size and buffer size settings. Initially, you should run Iperf without setting these parameters --- this is because Iperf can, on some systems, pick up good default values. When doing measurements on Windows VMs, we found that it is normally a good idea to use <tt>-w 256K -l 256K</tt> for both the receiver and the sender.
* Use a shell/batch script to start multiple <tt>iperf</tt> processes simultaneously (if required), and possibly to automate the whole testing process.
* When running <tt>iperf</tt> on a Windows VM:
** Run it in non-daemon mode on the receiver, since daemon mode tends to (it's still unclear as to when exactly) create a service. Having an <tt>iperf</tt> service is undesirable, since one cannot as easily control which VCPU it executes on, and with what priority. Also, you cannot have multiple receivers with a service running (in case you wanted to experiment with them).
** Run <tt>iperf</tt> with "realtime" priority, and on a non-first VCPU (if you are executing on a multi-VCPU VM), for reasons explained in the section above.
 
   
 
Here are the simplest commands to execute on the receiver, and then the sender:

<pre>
# on receiver
iperf -s -f m # add "-w 256K -l 256K" when sender or receiver is a Windows VM

# on sender
iperf -c <receiver-IP> -f m -t 20 # add "-w 256K -l 256K" when sender or receiver is a Windows VM
</pre>
 
 
To measure aggregate receive throughput of multiple VMs where the data is sent from a single source (e.g. a different physical machine), a script along these lines can be used:

<pre>
#!/bin/bash

VMS=$1
THREADS=$2
TIME=$3
TMP=`mktemp`

for i in `seq $VMS`; do
  VM_IP="192.168.1.$i"  # use your IP scheme here
  echo "Starting iperf for $VM_IP ..."
  iperf -c $VM_IP -w 256K -l 256K -t $TIME -f m -P $THREADS | grep -o "[0-9]\+ Mbits/sec" | awk -vn=$i '{print n, $1}' >> $TMP &
done

sleep $((TIME + 3))
cat $TMP | sort
cat $TMP | awk '{sum+=$2}END{print "Aggregate: ", sum}'
rm -f $TMP
</pre>
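
For example, if the script above is saved as <tt>aggregate-iperf.sh</tt> (the file name is just an illustration), the following run measures 8 VMs, with 2 <tt>iperf</tt> threads per VM, for 20 seconds:

<pre>
# arguments: <number of VMs> <iperf threads per VM> <test duration in seconds>
chmod +x aggregate-iperf.sh
./aggregate-iperf.sh 8 2 20
</pre>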
 
   
For running parallel <tt>iperf</tt> sessions to multiple destinations, use the [https://github.com/perf101/scripts/blob/master/multi-iperf.sh multi-iperf.sh] script located in the [https://github.com/perf101/scripts scripts repository] on the [https://github.com/perf101 perf101 Github account].
   
 
=== Netperf 2.5.0 ===

<tt>netperf</tt>'s <tt>TCP_STREAM</tt> test also tends to give reliable results. However, since this version (the only version we recommend using) does not automatically parallelise over the available VCPUs, such parallelisation needs to be done manually (see Usage below).

==== Installation ====
   
 
===== Linux =====
Make sure the following packages are installed on your system: <tt>gcc</tt>, <tt>g++</tt>, <tt>make</tt>, and <tt>wget</tt>.

Then run the following commands:

<pre>
wget ftp://ftp.netperf.org/netperf/netperf-2.5.0.tar.gz
tar xzf netperf-2.5.0.tar.gz
cd netperf-2.5.0
./configure
make
make check
make install
</pre>

The receiver side can then be started manually with <tt>netserver</tt>, or you can configure it as a service:

<pre>
# these commands may differ depending on your OS
echo "netperf 12865/tcp" >> /etc/services
echo "netperf stream tcp nowait root /usr/local/bin/netserver netserver" >> /etc/inetd.conf
/etc/init.d/openbsd-inetd restart
</pre>
 
   
 
===== Windows =====

You can use the following executables:
* [http://downloads.xenproject.org/Wiki/Network_Throughput_Guide/netserver.exe netserver.exe]
* [http://downloads.xenproject.org/Wiki/Network_Throughput_Guide/netclient.exe netclient.exe]

Note that we are ''not'' the authors of the above executables. Please use your anti-virus software to scan the files before using them.

==== Usage ====
   
 
Here, we describe the usage of the Linux version of Netperf. The syntax for the
 
Here, we describe the usage of the Linux version of Netperf. The syntax for the
Windows version is sometimes different; please see `netclient.exe -h` for more
+
Windows version is sometimes different; please see <tt>netclient.exe -h</tt> for more
 
information.
 
information.
   
With `netperf` installed on both sides, the following script can be used on
+
With <tt>netperf</tt> installed on both sides, the following script can be used on
 
either side to determine network throughput for transmitting traffic:
 
either side to determine network throughput for transmitting traffic:
   
<pre><nowiki>
+
<pre>
 
#!/bin/bash
 
#!/bin/bash
   
Line 370: Line 282:
 
cat $TMP | awk '{sum+=$5}END{print sum}'
 
cat $TMP | awk '{sum+=$5}END{print sum}'
 
rm $TMP
 
rm $TMP
</nowiki></pre>
+
</pre>
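
For a single, non-parallel measurement, the basic invocation is shown below (a minimal sketch; <tt><receiver-IP></tt> is a placeholder for the address of the machine running <tt>netserver</tt>):

<pre>
# 20-second TCP_STREAM test against the receiver:
netperf -H <receiver-IP> -t TCP_STREAM -l 20
</pre>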
 
   
 
=== NTttcp (Windows only) ===

The program can be installed by running this installer: [http://downloads.xenproject.org/Wiki/Network_Throughput_Guide/NTttcp.msi NTttcp.msi]

Note that we are ''not'' the authors of the above installer. Please use your anti-virus software to scan the file before using it.

After completing the installation, go to the installation directory, and make two copies of <tt>ntttcp.exe</tt>:
* <tt>ntttcpr.exe</tt> --- use for receiving traffic
* <tt>ntttcps.exe</tt> --- use for sending traffic

For usage guidelines, please refer to the guide in the installation directory.

== Diagnostic tools ==
 
There are many diagnostic tools one can use:
* Performance tab in the VM's Task Manager;
* Performance tab for the VM in [[Using_XenCenter_to_manage_XCP|XenCenter]];
* Performance tab for the VM's host in [[Using_XenCenter_to_manage_XCP|XenCenter]];
* <tt>top</tt> (with '''z''' and '''1''' pressed) in the VM's host's <tt>dom0</tt>; and,
* <tt>xentop</tt> in the VM's host's <tt>dom0</tt>.

It is sometimes also worth observing <tt>/proc/interrupts</tt> in <tt>dom0</tt>, as well as <tt>/proc/irq/<irqno>/smp_affinity</tt>.
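
As a quick illustration of the last two items (the IRQ number <tt>1272</tt> is only an example; use the numbers that <tt>/proc/interrupts</tt> reports for your device):

<pre>
# Per-VCPU interrupt counts in dom0; run it twice to see which counters are growing:
cat /proc/interrupts

# The dom0 VCPU affinity mask of one interrupt queue:
cat /proc/irq/1272/smp_affinity
</pre>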
   
 
== Recommended configurations ==

All network throughput tests were, in the end, bottlenecked by VCPU capacity. This means that machines with better physical CPUs are expected to achieve higher network throughputs for both <tt>dom0</tt> and VM tests.
   
 
=== Number of VM pairs and threads ===

The ideal number of sender/receiver VM pairs, and of <tt>iperf</tt> threads per pair, differs between OS implementations, so some experimentation is recommended --- finding a good balance can have a drastic effect on network performance (mainly due to better VCPU utilisation). Our research shows that 8 pairs with 2 <tt>iperf</tt> threads per pair works well for Debian-based Linux, while 4 pairs with 8 <tt>iperf</tt> threads per pair works well for Windows 7.
   
=== Allocation of VIFs over <tt>netback</tt> threads ===

All results above assume equal distribution of the used VIFs over the available <tt>netback</tt> threads, which may not always be possible --- see [http://support.citrix.com/article/CTX127970 a KB article] for more information. For VM network throughput, it is important to get as close as possible to such an equal distribution.
 
=== Using irqbalance ===

The <tt>irqbalance</tt> daemon is enabled by default. It has been observed that this daemon can improve VM network performance by about 16% --- note that this is much less than the potential gain of getting the other points described in this section right. The reason why <tt>irqbalance</tt> can help is that it distributes the processing of <tt>dom0</tt>-level interrupts across all available <tt>dom0</tt> VCPUs, not just the first one.
   
 
It appears that Xen currently feeds all interrupts for a guest to the guest's first VCPU, i.e. <tt>VCPU0</tt>. Initial observations show that more CPU cycles are spent processing the interrupt requests than actually processing the received data (assuming there is no disk I/O, which is slow). This means that, on a Windows VM with 2 VCPUs, all processing of the received data should be done on the second VCPU, i.e. <tt>VCPU1</tt>: ''Task Manager > Processes > Select Process > Set CPU affinity > 1'' --- in this case, <tt>VCPU0</tt> will be fully used, whereas <tt>VCPU1</tt> will probably have some spare cycles. While this is acceptable, it is more efficient to use 2 guests (1 VCPU each), which makes full use of both VCPUs. Therefore, to avoid this bottleneck altogether, one should probably use "<tt><number of host CPUs> - 4</tt>" VMs, each with 1 VCPU, and combine their capabilities with a [[NetScaler]] Appliance.

If you do not use any applications that rely on checksums being correct, disable checksumming. This should substantially decrease VCPU usage of the VM.

If you are using Windows VMs on top of XenServer 6.0.0 or XCP 1.5 (or later), consider turning off RSC within the VMs (see the Tweaks section on how to do that).
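
As a sketch of the checksumming advice above (the interface name <tt>eth0</tt> is an example, and the exact offload options available depend on your PV driver version), checksumming can be switched off inside a Linux VM with <tt>ethtool</tt>:

<pre>
# Inside the VM: show the current offload/checksum settings, then disable TX/RX checksumming:
ethtool -k eth0
ethtool -K eth0 tx off rx off
</pre>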
   
 
=== Offloading some network processing to NICs ===

Offloading is not reliable with every NIC/driver combination, so check that it works for your NIC+driver before using it in a production environment.

If performing mainly <tt>dom0</tt>-to-<tt>dom0</tt> network traffic, turning on the GRO setting for the NICs involved can be highly beneficial when combined with the <tt>irqbalance</tt> daemon (see above). This configuration can easily be combined with Open vSwitch (the default option), since the performance is either equal to or faster than with a Linux Bridge. Turning on the ''Large Receive Offload'' (LRO) setting tends to, in general, decrease <tt>dom0</tt> network throughput.

Our experiments indicate that turning on either of the two offload settings (GRO or LRO) in <tt>dom0</tt> can give mixed VM-level throughput results, based on the context. Feel free to experiment and let us know your findings.
   
 
=== Jumbo frames ===

In the various tests that we performed, we observed no statistically significant difference in network performance for dom0-to-dom0 traffic. We observed from 3% (Linux PV guests, no <tt>irqbalance</tt>) to about 10% (Windows HVM guests, with <tt>irqbalance</tt>) worse performance for VM-to-VM traffic.
   
 
=== TCP settings ===

Our experiments show that tweaking TCP settings inside the <tt>VM</tt>(s) can lead to substantial network performance improvements. The main reason for this is that most systems are still by default configured to work well on 100Mb/s or 1Gb/s networks, not 10Gb/s ones.

=== SR-IOV ===

With SR-IOV enabled, VMs can (together with the usual NIC offloading features enabled) saturate (or nearly saturate) a 10Gbps connection when ''receiving traffic''. Furthermore, the impact on <tt>dom0</tt> is negligible.

The Tweaks section below contains a section about how to enable SR-IOV.

=== NUMA Hosts ===

Non-Uniform Memory Access is becoming more commonplace and more pronounced in modern machines. To reach optimum efficiency, we have to put processes that often interact "close-by" in terms of NUMA-ness, i.e. on the same CPU node, but not on the same logical CPU (i.e. not on the same CPU core, unless they run on different hyper-threads).

In our case, the two main processes we are concerned about are the <tt>netfront</tt> (PV drivers in the user domain) and the corresponding <tt>netback</tt> (network processing in the control domain).

{{mbox | text =
As was mentioned before, a <tt>netfront</tt> is currently allocated in a round-robin fashion to the available <tt>netback</tt>s. At the moment, it is not easy to determine which <tt>netback</tt> a <tt>netfront</tt> is linked to --- this can, for example, be done by sending some traffic over the <tt>netfront</tt> and observing which <tt>netback</tt> is being used (by looking at <tt>top</tt> in the control domain). It is expected that this will be made much easier in future versions of the product.
}}

By default, the control domain uses 4 VCPUs, which are mapped to 4 (by default randomly-chosen) PCPUs. Similarly, the VCPUs of any VM installed will be (by default) randomly allocated to the most-free PCPUs. Moreover, the Xen Scheduler prefers to put all VMs (including the control domain) as far away from each other in terms of NUMA-ness as possible. In general, this is a good rule, since each VM then has a large cache, and cache-misses are minimised. However, as explained above, this rule is not great for network performance.

For example, suppose we have a 2-node host, each node with 12 logical CPUs (Physical CPUs/PCPUs), and we install a single 1-VCPU VM. The VM will be put on a different CPU node than the control domain, which means that the communication between the VM's <tt>netfront</tt> and the control domain's <tt>netback</tt> will not be efficient. Therefore, in scenarios where network performance is of great importance, we should pin the VCPUs of the control domain and any user domains explicitly, and close-by in terms of NUMA-ness. The pinning should be performed before the VM starts, using <tt>vcpu-params:mask</tt> --- see [http://support.citrix.com/article/CTX117960 a related article] for more information.

In the scenario described, we could pin the control domain VCPUs to the first four PCPUs, and the VM's VCPU to the fifth PCPU; if we installed any more VMs for which network performance is not critical, we could easily pin them to the second node, i.e. PCPUs 12-23. The tool <tt>xenpm get-cpu-topology</tt> is useful here for obtaining the CPU topology of the host.
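
As a quick, illustrative way of inspecting and trying out the placement described above, the commands below pin VCPUs at run time (the domain name and PCPU numbers are placeholders; for persistent pinning that takes effect before the VM starts, use the <tt>vcpu-params:mask</tt> approach mentioned above):

<pre>
# Show which PCPUs belong to which core, socket and NUMA node:
xenpm get-cpu-topology

# Pin dom0's four VCPUs to PCPUs 0-3 on the first node (use xm instead of xl on older releases):
for v in 0 1 2 3; do xl vcpu-pin Domain-0 $v $v; done

# Pin the example VM's single VCPU to PCPU 4 on the same node:
xl vcpu-pin my-vm 0 4

# Verify the resulting placement:
xl vcpu-list
</pre>
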
=== Few fast VIFs ===

For a particular host, if the number of VM network connections (VIFs) that are required to be fast is less than around 2/3 of the number of <tt>netback</tt> threads in <tt>dom0</tt>, a further performance boost can be achieved for such connections.

{{mbox | text = Note that [[#Changing_the_Number_of_Netback_Threads_in_Dom0|the number of <tt>netback</tt> threads can be increased]].}}

<tt>irqbalance</tt>, which is enabled by default in later releases, tries to set up interrupts on VCPUs that are already busy, but not too busy. On bare metal machines, this approach works well in terms of performance and power saving (leaving some cores in lower power states if possible). In a virtualised environment, however, the cost of context switching is higher, which means that it is better for performance to process interrupts on non-busy VCPUs. Therefore, we can disable <tt>irqbalance</tt>, and perform [[#Manual_IRQ_Balancing_in_Dom0|manual IRQ balancing]] to that effect. For best results, this approach should be combined [[#NUMA_Hosts|with manual pinning of VMs]].

{{mbox | text = In a virtualised environment, information gathered by <tt>irqbalance</tt> about CPUs is currently not completely correct, which means that the tool is currently not as effective as it is on bare metal. A future release of the tool will hopefully fix this problem.}}

=== Queue Length ===

Our experiments show that increasing the Send Queue Length (<tt>txqueuelen</tt>) can increase network performance by a few percent. See Tweaks on how to increase the queue length.
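
A minimal sketch of how this can be done in <tt>dom0</tt> (the interface <tt>eth6</tt> and the value <tt>10000</tt> are only examples; the usual default is 1000):

<pre>
# Check the current send queue length:
ip link show eth6 | grep -o "qlen [0-9]*"

# Increase it:
ifconfig eth6 txqueuelen 10000
</pre>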
   
 
== Tweaks ==

=== Automatic IRQ Balancing in Dom0 ===

<tt>irqbalance</tt> is enabled by default.

If the IRQ balancing service is already installed, you can enable it by running:

<pre>
service irqbalance start
</pre>

Otherwise, you need to install it first with:

<pre>
yum --disablerepo=citrix --enablerepo=base,updates install -y irqbalance
</pre>
 
   
 
=== Manual IRQ Balancing in Dom0 ===

While <tt>irqbalance</tt> does the job in most situations, manual IRQ balancing can prove better in some cases. If we have a <tt>dom0</tt> with 4 VCPUs, the following script disables <tt>irqbalance</tt>, and evenly distributes specific interrupt queues (1272--1279) among the available VCPUs:

<pre>
service irqbalance stop
for i in `seq 0 7`; do
  queue=$((1272 + i));      # interrupt queues 1272--1279
  aff=$((1 << (i % 4)));    # round-robin affinity mask over the 4 dom0 VCPUs
  printf "%x" $aff > /proc/irq/$queue/smp_affinity;
done
</pre>
   
To find out how many <tt>dom0</tt> VCPUs a host has, use <tt>cat /proc/cpuinfo</tt>. To find out which interrupt queues correspond to which interface, use <tt>cat /proc/interrupts</tt>.
 
=== Changing the Number of Dom0 VCPUs ===

To check the current number of <tt>dom0</tt> VCPUs, run <tt>cat /proc/cpuinfo</tt>.

On newer systems, the number of <tt>dom0</tt> VCPUs can be changed as follows:
<pre>
NUM=8
echo "NR_DOMAIN0_VCPUS=${NUM}" > /etc/sysconfig/unplug-vcpus
/opt/xensource/libexec/xen-cmdline --set-xen dom0_max_vcpus=${NUM}
reboot
</pre>

On older systems, this can be done by setting the desired number in <tt>/etc/sysconfig/unplug-vcpus</tt> and restarting the host (or, in the case where the number of VCPUs in <tt>dom0</tt> is decreasing, running <tt>/etc/init.d/unplug-vcpus start</tt>).
   
 
=== Changing the Number of Netback Threads in Dom0 ===

By default, the number of netback threads in <tt>dom0</tt> equals <tt>min(4,<number_of_vcpus_in_dom0>)</tt>. Therefore, increasing the number of <tt>dom0</tt> VCPUs above 4 will, by default, not increase the number of netback threads.

To increase the threshold number of netback threads to 12, write <tt>xen-netback.netback_max_groups=12</tt> into <tt>/boot/extlinux.conf</tt>, under the section labelled <tt>xe-serial</tt>, just after the assignment <tt>console=hvc0</tt>.
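
After editing the file and rebooting, the change can be sanity-checked along the following lines (an illustrative check only; thread names depend on the kernel version):

<pre>
# Confirm that the option is present on the xe-serial kernel command line:
grep netback_max_groups /boot/extlinux.conf

# Count the netback threads currently running in dom0:
ps ax | grep -c [n]etback
</pre>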
   
 
=== Enabling NIC Offloading ===

Please see the "Offloading some network processing to NICs" section above.

You can use <tt>ethtool</tt> to enable/disable NIC offloading.

<pre>
ETH=eth6 # the conn. for which you want to enable offloading
ethtool -k $ETH # check what is currently enabled/disabled
ethtool -K $ETH gro on # enable GRO
</pre>

Note that changing offload settings directly via <tt>ethtool</tt> will not persist the configuration through host reboots; to do that, use <tt>other-config</tt> of the <tt>xe</tt> command:

<pre>
xe pif-param-set uuid=<pif_uuid> other-config:ethtool-gro=on
</pre>
</pre>
 
   
 
=== Enabling Jumbo Frames ===
 
=== Enabling Jumbo Frames ===
   
Suppose `eth6` and `xenbr6` are the device and the bridge corresponding to the
+
Suppose <tt>eth6</tt> and <tt>xenbr6</tt> are the device and the bridge corresponding to the
 
10 GiB/sec connection used.
 
10 GiB/sec connection used.
   
 
Shut down user domains:
 
Shut down user domains:
   
<pre><nowiki>
+
<pre>
 
VMs=$(xe vm-list is-control-domain=false params=uuid --minimal | sed 's/,/ /g')
 
VMs=$(xe vm-list is-control-domain=false params=uuid --minimal | sed 's/,/ /g')
 
for uuid in $VMs; do xe vm-shutdown uuid=$uuid; done
 
for uuid in $VMs; do xe vm-shutdown uuid=$uuid; done
</nowiki></pre>
+
</pre>
 
   
 
Set network MTU to 9000, and re-plug relevant PIFs:
 
Set network MTU to 9000, and re-plug relevant PIFs:
   
<pre><nowiki>
+
<pre>
 
net_uuid=`xe network-list bridge=xenbr6 params=uuid --minimal`
 
net_uuid=`xe network-list bridge=xenbr6 params=uuid --minimal`
 
xe network-param-set uuid=$net_uuid MTU=9000
 
xe network-param-set uuid=$net_uuid MTU=9000
 
PIFs=$(xe pif-list network-uuid=$net_uuid --minimal | sed 's/,/ /g')
 
PIFs=$(xe pif-list network-uuid=$net_uuid --minimal | sed 's/,/ /g')
 
for uuid in $PIFs; do xe pif-unplug uuid=$uuid; xe pif-plug uuid=$uuid; done
 
for uuid in $PIFs; do xe pif-unplug uuid=$uuid; xe pif-plug uuid=$uuid; done
</nowiki></pre>
+
</pre>
 
   
 
Start user domains (you might want to make sure that VMs are started one after
 
Start user domains (you might want to make sure that VMs are started one after
 
another to avoid potential VIF static allocation problems):
 
another to avoid potential VIF static allocation problems):
   
<pre><nowiki>
+
<pre>
 
VMs=$(xe vm-list is-control-domain=false params=uuid --minimal | sed 's/,/ /g')
 
VMs=$(xe vm-list is-control-domain=false params=uuid --minimal | sed 's/,/ /g')
 
for uuid in $VMs; do xe vm-start uuid=$uuid; done
 
for uuid in $VMs; do xe vm-start uuid=$uuid; done
</nowiki></pre>
+
</pre>
 
   
 
Set up the connections you will use inside the user domains to use MTU 9000. For
 
Set up the connections you will use inside the user domains to use MTU 9000. For
 
Linux VMs, this is done with:
 
Linux VMs, this is done with:
   
<pre><nowiki>
+
<pre>
 
ETH=eth1 # the user domain connection you are concerned with
 
ETH=eth1 # the user domain connection you are concerned with
 
ifconfig $ETH mtu 9000 up
 
ifconfig $ETH mtu 9000 up
</nowiki></pre>
+
</pre>
 
   
 
Verifying:
 
Verifying:
   
<pre><nowiki>
+
<pre>
 
xe vif-list network-uuid=$net_uuid params=MTU --minimal
 
xe vif-list network-uuid=$net_uuid params=MTU --minimal
</nowiki></pre>
+
</pre>
 
   
 
=== Linux TCP parameter settings ===
 
=== Linux TCP parameter settings ===
Line 670: Line 614:
 
==== Default in Dom0 ====
 
==== Default in Dom0 ====
   
  +
<pre>
 
<pre><nowiki>
 
 
ETH=eth6 # the connection you are concerned with
 
ETH=eth6 # the connection you are concerned with
 
sysctl -w net.core.rmem_max=131071
 
sysctl -w net.core.rmem_max=131071
Line 684: Line 627:
 
sysctl -w net.ipv4.tcp_sack=1
 
sysctl -w net.ipv4.tcp_sack=1
 
sysctl -w net.ipv4.tcp_fin_timeout=60
 
sysctl -w net.ipv4.tcp_fin_timeout=60
</nowiki></pre>
+
</pre>
 
   
 
==== Default for a Demo Etch Linux VM ====
 
==== Default for a Demo Etch Linux VM ====
   
  +
<pre>
 
<pre><nowiki>
 
 
ETH=eth1 # the connection you are concerned with
 
ETH=eth1 # the connection you are concerned with
 
sysctl -w net.core.rmem_max=109568
 
sysctl -w net.core.rmem_max=109568
Line 703: Line 644:
 
sysctl -w net.ipv4.tcp_sack=1
 
sysctl -w net.ipv4.tcp_sack=1
 
sysctl -w net.ipv4.tcp_fin_timeout=60
 
sysctl -w net.ipv4.tcp_fin_timeout=60
</nowiki></pre>
+
</pre>
 
   
 
==== Recommended TCP settings for Dom0 ====
 
==== Recommended TCP settings for Dom0 ====
   
 
Changing these settings in only relevant if you want to optimise network
 
Changing these settings in only relevant if you want to optimise network
connections for which one of the end-points is `dom0` (not a user domain). Using
+
connections for which one of the end-points is <tt>dom0</tt> (not a user domain). Using
settings recommended for a user domain (VM) will work well for `dom0` as well.
+
settings recommended for a user domain (VM) will work well for <tt>dom0</tt> as well.
   
 
==== Recommended TCP settings for a VM ====
 
==== Recommended TCP settings for a VM ====
   
  +
<pre>
 
<pre><nowiki>
 
 
Bandwidth Delay Product (BDP) = Route Trip Time (RTT) * Theoretical Bandwidth Limit
 
Bandwidth Delay Product (BDP) = Route Trip Time (RTT) * Theoretical Bandwidth Limit
</nowiki></pre>
+
</pre>
 
   
 
For example, if RTT = 100ms = .1s, and theoretical bandwidth is 10Gbit/s, then:
 
For example, if RTT = 100ms = .1s, and theoretical bandwidth is 10Gbit/s, then:
   
<pre><nowiki>
+
<pre>
 
BDP = (.1s) * (10 * 10^9 bit/s) = 10^9 bit = 1 Gbit ~= 2^30 bit = 134217728 B
 
BDP = (.1s) * (10 * 10^9 bit/s) = 10^9 bit = 1 Gbit ~= 2^30 bit = 134217728 B
</nowiki></pre>
+
</pre>
   
  +
<pre>
 
 
<pre><nowiki>
 
 
ETH=eth6
 
ETH=eth6
 
# ESSENTIAL (large benefit)
 
# ESSENTIAL (large benefit)
Line 744: Line 680:
 
sysctl -w net.ipv4.tcp_fin_timeout=15 # claim resources sooner
 
sysctl -w net.ipv4.tcp_fin_timeout=15 # claim resources sooner
 
sysctl -w net.ipv4.tcp_timestamps=0 # does not work with GRO on in dom0
 
sysctl -w net.ipv4.tcp_timestamps=0 # does not work with GRO on in dom0
</nowiki></pre>
+
</pre>
 
   
 
Checking existing settings:
 
Checking existing settings:
   
<pre><nowiki>
+
<pre>
 
ETH=eth6
 
ETH=eth6
 
sysctl net.core.rmem_max
 
sysctl net.core.rmem_max
Line 762: Line 697:
 
sysctl net.ipv4.tcp_sack
 
sysctl net.ipv4.tcp_sack
 
sysctl net.ipv4.tcp_fin_timeout
 
sysctl net.ipv4.tcp_fin_timeout
</nowiki></pre>
+
</pre>
 
   
 
=== Pinning a VM to specific CPUs ===
 
=== Pinning a VM to specific CPUs ===
Line 769: Line 703:
 
While this does not necessarily improve performance (it can easily make
 
While this does not necessarily improve performance (it can easily make
 
performance worse, in fact), it is useful when debugging CPU usage of a VM. To
 
performance worse, in fact), it is useful when debugging CPU usage of a VM. To
assign a VM to CPUs 3 and 4, run the following in `dom0`:
+
assign a VM to CPUs 3 and 4, run the following in <tt>dom0</tt>:
   
<pre><nowiki>
+
<pre>
 
xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=3,4
 
xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=3,4
</nowiki></pre>
+
</pre>
   
  +
In later releases, this can easily be done with <tt>xl vcpu-list</tt> and <tt>xl vcpu-pin</tt>.
   
 
=== Switching between Linux Bridge and Open VSwitch ===
 
=== Switching between Linux Bridge and Open VSwitch ===
   
  +
Note: Open vSwitch has been the default network backend since XenServer 6.0.0 and XCP 1.5. If you switched to using Linux Bridge and this proved beneficial to you, please let us know.
To see what network backend you are currently using, run in `dom0`:
 
   
  +
To see what network backend you are currently using, run in <tt>dom0</tt>:
<pre><nowiki>
 
  +
  +
<pre>
 
cat /etc/xensource/network.conf
 
cat /etc/xensource/network.conf
</nowiki></pre>
+
</pre>
   
  +
To switch to using the Linux Bridge network backend, run in <tt>dom0</tt>:
   
  +
<pre>
To switch to using the Linux Bridge network backend, run in `dom0`:
 
 
<pre><nowiki>
 
 
xe-switch-network-backend bridge
 
xe-switch-network-backend bridge
</nowiki></pre>
+
</pre>
 
   
To switch to using Open VSwtich network backend, run in `dom0`:
+
To switch to using [[Open vSwitch]] network backend, run in <tt>dom0</tt>:
   
<pre><nowiki>
+
<pre>
 
xe-switch-network-backend openvswitch
 
xe-switch-network-backend openvswitch
</nowiki></pre>
+
</pre>
 
   
 
=== Enabling/disabling IOMMU ===
 
=== Enabling/disabling IOMMU ===
Line 804: Line 738:
   
 
Some versions of Xen have IOMMU enabled by default. If disabled, you can enable
 
Some versions of Xen have IOMMU enabled by default. If disabled, you can enable
it by editing `/boot/extlinux.conf`, and adding `iommu=1` to Xen parameters
+
it by editing <tt>/boot/extlinux.conf</tt>, and adding <tt>iommu=1</tt> to Xen parameters
(i.e. just before the first `---` of your active configuration). If enabled by
+
(i.e. just before the first <tt>---</tt> of your active configuration). If enabled by
default, you can disable it by using `iommu=0`, instead.
+
default, you can disable it by using <tt>iommu=0</tt>, instead.
   
 
=== Enabling SR-IOV ===
 
=== Enabling SR-IOV ===
Line 813: Line 747:
 
see section above.
 
see section above.
   
In `dom0`, use `lspci` to display a list of Virtual Functions (VFs). For
+
In <tt>dom0</tt>, use <tt>lspci</tt> to display a list of Virtual Functions (VFs). For
 
example,
 
example,
   
  +
<pre>
 
<pre><nowiki>
 
 
07:10.0 Ethernet controller: Intel Corporation 82559 Ethernet Controller Virtual Function (rev 01)
 
07:10.0 Ethernet controller: Intel Corporation 82559 Ethernet Controller Virtual Function (rev 01)
</nowiki></pre>
+
</pre>
   
  +
In the example above, <tt>07:10.0</tt> is the <tt>bus:device.function</tt> address of the VF.
 
In the example above, `07:10.0` is the `bus:device.function` address of the VF.
 
   
 
Assign a free (non-assigned) VF to the target VM by running:
 
Assign a free (non-assigned) VF to the target VM by running:
   
<pre><nowiki>
+
<pre>
 
xe vm-param-set other-config:pci=0/0000:<bus:device.function> uuid=<vm-uuid>
 
xe vm-param-set other-config:pci=0/0000:<bus:device.function> uuid=<vm-uuid>
</nowiki></pre>
+
</pre>
 
   
 
(Re-)Start the VM, and install the appropriate VF driver (inside your VM) for
 
(Re-)Start the VM, and install the appropriate VF driver (inside your VM) for
Line 836: Line 767:
 
You can assign multiple VFs to a single VM; however, the same VF cannot be
 
You can assign multiple VFs to a single VM; however, the same VF cannot be
 
shared across multiple VMs.
 
shared across multiple VMs.
  +
  +
=== Switching RSC off/on ===
  +
  +
RSC stands for receive-side copying. When RSC is on, some work that is otherwise done in the control domain (by <tt>netback</tt> threads) is instead performed by user domains (their <tt>netfront</tt>s). This feature is present and turned on by default with XenServer 6.0.0 and XCP 1.5. The feature only applies to Windows VMs.
  +
  +
To turn RSC off for a particular Windows VM:
  +
* Create a new registry key within the guest: <tt>\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\xenvif\Parameters</tt>.
  +
* Create a new DWORD named <tt>ReceiverMaximumProtocol</tt> within the above key, setting its value to <tt>0</tt>.
  +
* Restart the guest.
  +
  +
To turn RSC back on:
  +
* Set the value of <tt>ReceiverMaximumProtocol</tt> to <tt>1</tt>.
  +
* Restart the guest.
  +
  +
When starting a guest with RSC enabled, <tt>/var/log/messages</tt> (in the control domain) will note:
  +
<pre>
  +
... XENVIF Notice RingConnect: Protocol 1
  +
</pre>
  +
  +
Whereas, when starting a guest with RSC disabled, <tt>/var/log/messages</tt> (in the control domain) will note:
  +
<pre>
  +
... XENVIF Notice RingConnect: Protocol 0
  +
</pre>
  +
  +
A convenient way to search for the line(s) above is by running the following command in the control domain just before (re-)starting the VM(s): <tt>tail -f /var/log/messages | grep XENVIF | grep Protocol </tt>
  +
  +
=== Increasing Send Queue Length ===
  +
  +
If the ID of the guest is X, and the device number of the VIF of X for which you want to increase the send queue length is Y (see output of <tt>xe vif-list</tt> for information), run the following command in the host's control domain:
  +
<pre>
  +
ifconfig vif<X>.<Y> txqueuelen 1024
  +
</pre>
   
 
== Acknowledgements ==
 
== Acknowledgements ==
   
While this guide was mostly written by Rok Strniša, it could not have been
+
While this guide was mostly written by Rok Strniša, it could not have been
 
nearly as good without the help and advice from many of his colleagues,
 
nearly as good without the help and advice from many of his colleagues,
 
including (in alphabetic order) Alex Zeffertt, Dave Scott, George Dunlap,
 
including (in alphabetic order) Alex Zeffertt, Dave Scott, George Dunlap,
Ian Campbell, James Bulpin, Jonathan Davies, Lawrence Simpson, Marcus Granado,
+
Ian Campbell, James Bulpin, Jonathan Davies, Lawrence Simpson, Malcolm Crossley,
Mike Bursell, Paul Durrant, Rob Hoes, Sally Neale, and Simon Rowe.
+
Marcus Granado, Mike Bursell, Paul Durrant, Rob Hoes, Sally Neale, and Simon Rowe.
   
[[Category:XCP]]
+
[[Category:XAPI]]
 
[[Category:Tutorial]]
 
[[Category:Tutorial]]
 
[[Category:Users]]
 
[[Category:Users]]
 
[[Category:Developers]]
 
[[Category:Developers]]
  +
[[Category:Performance]]
  +
[[Category:Networking]]

Latest revision as of 14:21, 30 April 2014


Introduction

Setting up an efficient network in the world of virtual machines can be a daunting task. Hopefully, this guide will be of some help, and allow you to make good use of your network resources.

The guide applies to XCP 1.0 and later, and to XenServer 5.6 FP1 and later. Much of it is applicable to earlier versions, too.

For a general guide on XenServer network configurations, see Designing XenServer Network Configurations.

Contributing

If you would like to contribute to this guide, please submit your feedback to Rok Strniša, or get an account and edit the page yourself.

If you would like to be notified about updates to this guide, please "Create account" and "Watch" to this page.

Scenarios

There are many possible scenarios where network throughput can be relevant. The major ones that we have identified are:

  • dom0 throughput The traffic is sent/received directly by dom0.
  • single-VM throughput The traffic is sent/received by a single VM.
  • multi-VM throughput The traffic is sent/received by multiple VMs, concurrently. Here, we are interested in aggregate network throughput.
  • single-VCPU VM throughput The traffic is sent/received by single-VCPU VMs.
  • single-VCPU single-TCP-thread VM throughput The traffic is sent/received by a single TCP thread in single-VCPU VMs.
  • multi-VCPU VM throughput The traffic is sent/received by multi-VCPU VMs.
  • network throughput for storage The traffic sent/received originates from/is stored on a storage device.

Technical Overview

Sending network traffic to and from a VM is a fairly complex process. The figure applies to PV guests, and to HVM guests with PV drivers.

Network Throughput Guide.png

Therefore, when a process in a VM, e.g. a VM with domID equal to X, wants to send a network packet, the following occurs:

  1. A process in the VM generates a network packet P, and sends it to a VM's virtual network interface (VIF), e.g. ethY_n for some network Y and some connection n.
  2. The driver for that VIF, netfront driver, then shares the memory page (which contains the packet P) with the backend domain by establishing a new grant entry. A grant reference is part of the request pushed onto the transmit shared ring (Tx Ring).
  3. netfront then notifies, via an event channel (not on the diagram), one of netback threads in dom0 (the one responsible for ethY_n) where in the shared pages the packet P is stored. (XenStore is used to setup the initial connection between the front-end and the back-end, deciding on what event channel to use, and where the shared rings are.)
  4. netback (in dom0) fetches P, processes it, and forwards it to vifX.Y_n;
  5. The packet is then handed to the back-end network stack, where it is treated according to its configuration just like any other packet arriving on a network device.

When a VM is to receive a packet, the process is almost the reverse of the above. The key difference is that on receive there is a copy being made: it happens in dom0, and is a copy from back-end owned memory into a Tx Buf, which the guest has granted to the back-end domain. The grant references to these buffers are in the request on the Rx Ring (not Tx Ring).

Symptoms, probable causes, and advice

There are many potential bottlenecks. Here is a list of symptoms (and associated probable causes and advice):

  • I/O is extremely slow on my Hardware Virtualised Machine (HVM), e.g. a Windows VM.
    • Verifying the symptom: Compare the results of an I/O speed test on the problem VM and a healthy VM; they should be at least an order of magnitude different.
    • Probable cause: The HVM does not have PV drivers installed.
    • Background: With PV drivers, an HVM can make direct use of some of the underlying hardware, leading to better performance.
    • Recommendation: Install PV drivers.
  • VM's VCPU is fully utilised.
    • Verifying the symptom: Run xentop in dom0 --- this should give a fairly good estimate of aggregate usage for all VCPUs of a VM; pressing V reveals how many seconds were spent in which VM's VCPU. Running VCPU measurement tools inside the VM does not give reliable results; they can only be used to find rough relative usage between applications in a VM.
    • Background: When a VM sends or receives network traffic, it needs to do some basic packet processing.
    • Probable cause: There is too much traffic for that VCPU to handle.
      • Recommendation 1: Try enabling NIC offloading --- see Tweaks (below) on how to do this.
      • Recommendation 2: Try running the application that does the sending/receiving of network traffic with multiple threads. This will give the OS a chance to distribute the workload over all available VCPUs.
  • HVM VM's first (and possibly only) VCPU is fully utilised.
    • Verifying the symptom: Same as above.
    • Background: Currently, only VM's first VCPU can process the handling of interrupt requests.
    • Probable cause: The VM is receiving too many packets for its current setup.
      • Recommendation 1: If the VM has multiple VCPUs, try to associate application processing with non-first VCPUs.
      • Recommendation 2: Use more (1 VCPU) VMs to handle receive traffic, and a workload balancer in front of them.
      • Recommendation 3: If the VM has multiple VCPUs and there's no definite need for it to have multiple VCPUs, create multiple 1-VCPU VMs instead (see Recommendation 2).
    • Plans for improvement: Underlying architecture needs to be improved so that VM's non-first VCPUs can process interrupt requests.
  • In dom0, a high percentage of a single VCPU is spent processing system interrupts.
    • Verifying the symptom: Run top in dom0, then press z (for colours) and 1 (to show VCPU breakdown). Check if there is a high value for si for a single VCPU.
    • Background: When packets are sent to a VM on a host, its dom0 needs to process interrupt requests associated with the interrupt queues that correspond to the device the packets arrived on.
    • Probable cause: dom0 is set up to process all interrupt requests for a specific device on a specific dom0 VCPU.
      • Recommendation 1: Check in /proc/interrupts whether your device exposes multiple interrupt queues. If the device supports this feature, make sure that it is enabled.
      • Recommendation 2: If the device supports multiple interrupt queues, distribute the processing of them either automatically (by using irqbalance daemon), or manually (by setting /proc/irq/<irq-no>/smp_affinity) to all (or a subset of) dom0 VCPUs.
      • Recommendation 3: Otherwise, make sure that an otherwise relatively-idle dom0 VCPU is set to process the interrupt queue (by manually setting the appropriate /proc/irq/<irq-no>/smp_affinity).
  • In dom0, a VCPU is fully occupied with a netback process.
    • Verifying the symptom: Run top in dom0. Check if there is a netback process, which appears to be taking almost 100%. Then, run xentop in dom0, and check VCPU usage for dom0: if it reads about 120% +/- 20% when there is no other significant process in dom0, then there's a high chance that you have confirmed the symptom.
    • Background: When packets are sent from or to a VM on a host, the packets are processed by a netback process, which is dom0's side of VM network driver (VM's side is called netfront).
    • General Recommendation: Try enabling NIC offloading --- see Tweaks (below) on how to do this.
    • Possible cause 1: VMs' VIFs are not correctly distributed over the available netback threads.
    • Possible cause 2: Too much traffic is being sent over a single VIF.
      • Recommendation: Create another VIF for the corresponding VM, and setup the application(s) within the VM to send/receive traffic over both VIFs. Since each VIF should be associated with a different netback process (each of which is linked to a different dom0 VCPU), this should remove the associated dom0 bottleneck. If every dom0 netback thread is taking 100% of a dom0 VCPU, increase the number of dom0 VCPUs and netback threads first --- see Tweaks (below) on how to do this.
  • In dom0, most/all VCPUs are fully occupied with netback processes.
    • Verifying the symptom: Same as above, except that it is true for all dom0 VCPUs.
    • Background: Same as above.
    • General Recommendation 1: Pin dom0 VCPUs to physical CPUs, making sure that no user domains are using the same physical CPUs. The Tweaks section describes how to pin VCPUs.
    • General Recommendation 2: If you have a powerful host and spare CPU capacity, give more VCPUs to dom0, increase the number of netback threads, and restart your VMs (to force re-allocation of VIFs to netback threads). The Tweaks section describes how you can increase the number of dom0 VCPUs and netback threads.
    • General Recommendation 3: If your host has no spare CPU capacity, try decreasing the load by putting fewer VMs on the host and/or removing VCPUs from the VMs.
  • There is a VCPU bottleneck either in a dom0 or in a VM, and I have control over both the sending and the receiving side of the network connection.
    • Verifying the symptom: (See notes about xentop and top above.)
    • Background: (Roughly) Each packet generates an interrupt request, and each interrupt request requires some VCPU capacity.
    • Recommendation: Enable Jumbo Frames (see Tweaks (below) for more information) for the whole connection. This should decrease the number of interrupts, and therefore decrease the load on the associated VCPUs (for a specific amount of network traffic).
  • There is obviously no VCPU bottleneck either in a dom0 or in a VM --- why is the framework not making use of the spare capacity?
    • Verifying the symptom: (See notes about xentop and top above.)
    • Background: There are many factors involved when doing network performance, and many more when using virtual machines.
    • Possible cause 1: Part of the connection has reached its physical throughput limit.
      • Recommendation 1: Verify that all network components in the connection path physically support the desired network throughput.
      • Recommendation 2: If a physical limit has been reached for the connection, add another network path, setup appropriate PIFs and VIFs, and configure the application(s) to use both/all paths.
    • Possible cause 2: Some parts of the software associated with network processing might not be completely parallelisable, or the hardware cannot make use of its parallelisation capabilities if the software doesn't follow certain patterns of behaviour.
      • Recommendation 1: Setup the application used for sending or receiving network traffic to use multiple threads. Experiment with the number of threads.
      • Recommendation 2: Experiment with the TCP parameters, e.g. window size and message size --- see Tweaks (below) for recommended values.
      • Recommendation 3: If IOMMU is enabled on your system, try disabling it. See Tweaks for a section on how to disable IOMMU.
      • Recommendation 4: Try switching the network backend. See the Tweaks section on how to do that.
  • Since switching to XenServer 6.0.0 or XCP 1.5 (or later), aggregate network throughput has decreased for my Windows VMs.
    • Verifying the symptom: Compare performance on the old system with performance on the new system. (See the Making throughput measurements section below.)
    • Possible cause 1: With XenServer 6.0.0 or XCP 1.5, the RSC (receive-side copying) feature is enabled by default. This feature moves some work that is otherwise done in the control domain into user domains. RSC can cause lower aggregate network throughput.
      • Recommendation 1: Try switching RSC off in all Windows VMs. See the Tweaks section on how to do that.

Making throughput measurements

When making throughput measurements, it is a good idea to start with a simple environment. For example, if testing VM-level receive throughput, try sending traffic from a bare-metal (Linux) host to VM(s) on another (XCP/XenServer) host, and vice-versa when testing VM-level transmit throughput. Transmitting traffic is less demanding on the resources, and is therefore expected to produce substantially better results.

The following sub-sections provide more information about how to use some of the more common network performance tools.

Our helper tools

We created an open-source repository for various tools that can help with performance analysis: https://github.com/perf101/scripts

Please visit the link above for an overview of the scripts available, and for downloading any of them.

If you notice any bugs, if you have any improvement suggestions, or if you would like to add your own script to our repository, please let us know.

Iperf

Installation

Linux

Make sure the following packages are installed on your system: gcc, g++, make, and subversion.

Iperf can be installed from Iperf's SVN repository:

svn co https://iperf.svn.sourceforge.net/svnroot/iperf iperf
cd iperf/trunk
./configure
make
make install
cd
iperf --version # should mention pthreads

You might also be able to install it via a package manager, e.g.:

apt-get install iperf

When using the yum package manager, you can install it via RPMForge.
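
If the RPMForge repository has already been configured on your yum-based system, the install is then typically a one-liner (the repository id rpmforge is an assumption; check your repository configuration):

yum --enablerepo=rpmforge install -y iperf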

Windows

Note that we are not the authors of the above executables. Please use your anti-virus software to scan the files before using them.

Although we haven't done many measurements with Iperf 2.0.5 (pthreads) on Windows yet, it appears to perform better than Iperf 1.7.0 (win32 threads) in most circumstances. The most noticeable difference is that one can achieve roughly the same aggregate throughput with fewer Iperf threads when using Iperf 2.0.5 --- often one thread suffices. If you notice any specific scenarios where one performs better than the other (irrespective of the number of Iperf threads used), please let us know, and we will update this guide accordingly.

Usage

We recommend the following usage of iperf:

  • Make sure that the firewall is disabled or allows iperf traffic.
  • Tell iperf which units to report the results in, e.g. by using -f m --- if not set explicitly, iperf will change units based on the result.
  • An iperf test should last at least 20 seconds, e.g. -t 20.
  • You can see interim measurements via -i 1.
  • Experiment with multiple communication threads, e.g. -P 4.
  • Repeat a test in a specific context at least 5 times, calculating an average, and making notes of any anomalies.
  • Experiment with TCP window size and buffer size settings. Initially, you should run Iperf without setting these parameters --- this is because Iperf can, on some systems, pick up good default values. When doing measurements on Windows VMs, we found that it is normally a good idea to use -w 256K -l 256K for both the receiver and the sender.
  • Use a shell/batch script to start multiple iperf processes simultaneously (if required), and possibly to automate the whole testing process.
  • When running iperf on a Windows VM:
    • Run it in non-daemon mode on the receiver, since daemon mode tends to create a service (it is still unclear exactly when this happens). Having an iperf service is undesirable, since one cannot as easily control which VCPU it executes on, and with what priority. Also, you cannot have multiple receivers with a service running (in case you wanted to experiment with them).
    • Run iperf with "realtime" priority, and on a non-first VCPU (if you are executing on a multi-VCPU VM), for reasons explained in the section above.

Here are the simplest commands to execute on the receiver, and then the sender:

# on receiver
iperf -s -f m     # add "-w 256K -l 256K" when sender or receiver is a Windows VM

# on sender
iperf -c <receiver-IP> -f m -t 20    # add "-w 256K -l 256K" when sender or receiver is a Windows VM
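
Putting several of the recommendations above together, a fuller invocation might look like the following sketch (the window/buffer sizes are the ones suggested above for Windows VMs; drop or adjust them for other setups):

# on receiver
iperf -s -f m -i 1 -w 256K -l 256K

# on sender: 4 parallel threads, 20-second run, interim reports every second
iperf -c <receiver-IP> -f m -t 20 -i 1 -P 4 -w 256K -l 256K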

For running parallel iperf sessions to multiple destinations, use the multi-iperf.sh script located in the scripts repository on perf101 Github account.
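
That script is the recommended route; purely as an illustration of the idea, a minimal sketch (hypothetical file name run-iperf.sh) that starts one client per destination in parallel could look like this:

#!/bin/bash
# usage: ./run-iperf.sh <threads> <seconds> <dst1> [<dst2> ...]
THREADS=$1; TIME=$2; shift 2
for DST in "$@"; do
  iperf -c $DST -f m -t $TIME -P $THREADS &   # one backgrounded client per destination
done
wait                                          # wait for all clients to finish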

Netperf 2.5.0

netperf's TCP_STREAM test also tends to give reliable results. However, since this version (the only version we recommend using) does not automatically parallelise over the available VCPUs, such parallelisation needs to be done manually in order to make better use of the available VCPU capacity.

Installation

Linux

Make sure the following packages are installed on your system: gcc, g++, make, and wget.

Then run the following commands:

wget ftp://ftp.netperf.org/netperf/netperf-2.5.0.tar.gz
tar xzf netperf-2.5.0.tar.gz
cd netperf-2.5.0
./configure
make
make check
make install

The receiver side can then be started manually with netserver, or you can configure it as a service:

# these commands may differ depending on your OS
echo "netperf         12865/tcp" >> /etc/services
echo "netperf stream tcp nowait root /usr/local/bin/netserver netserver" >> /etc/inetd.conf
/etc/init.d/openbsd-inetd restart

Windows

You can use the following executables:

Note that we are not the authors of the above executables. Please use your anti-virus software to scan the files before using them.

Usage

Here, we describe the usage of the Linux version of Netperf. The syntax for the Windows version is sometimes different; please see netclient.exe -h for more information.

With netperf installed on both sides, the following script can be used on either side to determine network throughput for transmitting traffic:

#!/bin/bash

THREADS=$1
TIME=$2
DST=$3
TMP=`mktemp`

for i in `seq $THREADS`; do
  netperf -H $DST -t TCP_STREAM -P 0 -c -l $TIME >> $TMP &
done

sleep $((TIME + 3))
cat $TMP | awk '{sum+=$5}END{print sum}'
rm $TMP
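
Assuming the script above is saved as, say, netperf-tx.sh (the name is arbitrary), a 20-second test with 4 threads against a given receiver would be run as follows; the number printed is the aggregate throughput in 10^6 bits/s (netperf's default unit):

chmod +x netperf-tx.sh
./netperf-tx.sh 4 20 <receiver-IP>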

NTttcp (Windows only)

The program can be installed by running this installer: NTttcp.msi

Note that we are not the authors of the above installer. Please use your anti-virus software to scan the file before using it.

After completing the installation, go to the installation directory, and make two copies of ntttcp.exe:

  • ntttcpr.exe --- use for receiving traffic
  • ntttcps.exe --- use for sending traffic

For usage guidelines, please refer to the guide in the installation directory.

Diagnostic tools

There are many diagnostic tools one can use:

  • Performance tab in VM's Task Manager;
  • Performance tab for the VM in XenCenter;
  • Performance tab for the VM's host in XenCenter;
  • top (with z and 1 pressed) in VM's host's dom0; and,
  • xentop in VM's host's dom0.

It is sometimes also worth observing /proc/interrupts in dom0, as well as /proc/irq/<irqno>/smp_affinity.
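
For example (eth6 and IRQ 1272 are placeholders; substitute the interface and IRQ numbers reported on your own host):

grep eth6 /proc/interrupts              # interrupt queues for the device, with per-VCPU counts
cat /proc/irq/1272/smp_affinity         # bitmask of dom0 VCPUs allowed to service IRQ 1272
watch -n1 "grep eth6 /proc/interrupts"  # observe on which VCPUs the counts are growing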

Recommended configurations

When reading this section, please see the Tweaks below it for reference.

CPU bottleneck

All network throughput tests were, in the end, bottlenecked by VCPU capacity. This means that machines with better physical CPUs are expected to achieve higher network throughputs for both dom0 and VM tests.

Number of VM pairs and threads

If one is interested in achieving a high aggregate network throughput of VMs on a host, it is crucial to consider both the number of VM pairs and the number of network transmitting/receiving threads in each VM. Ideal values for these numbers vary from OS to OS due to different networking stack implementations, so some experimentation is recommended --- finding a good balance can have a drastic effect on network performance (mainly due to better VCPU utilisation). Our research shows that 8 pairs with 2 iperf threads per pair works well for Debian-based Linux, while 4 pairs with 8 iperf threads per pair works well for Windows 7.

Allocation of VIFs over netback threads

All results above assume equal distribution of used VIFs over available netback threads, which may not always be possible --- see a KB article for more information. For VM network throughput, it is important to get as close as possible to equal distribution in order to make efficient use of the available VCPUs.

Using irqbalance

The irqbalance daemon is enabled by default. It has been observed that this daemon can improve VM network performance by about 16% --- note that this is much less than the potential gain of getting the other points described in this section right. The reason why irqbalance can help is that it distributes the processing of dom0-level interrupts across all available dom0 VCPUs, not just the first one.

Optimising Windows VMs (and other HVM guests)

It appears that Xen currently feeds all interrupts for a guest to the guest's first VCPU, i.e. VCPU0. Initial observations show that more CPU cycles are spent processing the interrupt requests than actually processing the received data (assuming there is no disk I/O, which is slow). This means that, on a Windows VM with 2 VCPUs, all processing of the received data should be done on the second VCPU, i.e. VCPU1: Task Manager > Processes > Select Process > Set CPU affinity > 1 --- in this case, VCPU0 will be fully used, whereas VCPU1 will probably have some spare cycles. While this is acceptable, it is more efficient to use 2 guests (1 VCPU each), which makes full use of both VCPUs. Therefore, to avoid this bottleneck altogether, one should probably use "<number of host CPUs> - 4" VMs, each with 1 VCPU, and combine their capabilities with a NetScaler Appliance.

If you do not use any applications that rely on checksums to be correct, disable checksumming. This should substantially decrease vCPU usage of the VM.

If you are using Windows VMs on top of XenServer 6.0.0 or XCP 1.5 (or later), consider turning off RSC within the VMs (see the Tweaks section on how to do that).

Offloading some network processing to NICs

Network offloading is not officially supported, since there are known issues with some drivers. That said, if your NIC supports offloading, try to use it, especially Generic Receive Offload (GRO). However, please verify carefully that it works for your NIC+driver before using it in a production environment.

If performing mainly dom0-to-dom0 network traffic, turning on the GRO setting for the NICs involved can be highly beneficial when combined with the irqbalance daemon (see above). This configuration can easily be combined with Open vSwitch (the default option), since its performance is equal to or better than that of a Linux Bridge. Turning on the Large Receive Offload (LRO) setting, in contrast, tends to decrease dom0 network throughput.

Our experiments indicate that turning on either of the two offload settings (GRO or LRO) in dom0 can give mixed VM-level throughput results, based on the context. Feel free to experiment and let us know your findings.

Jumbo frames

Note that jumbo frames for the connection from A to B only work when every part of the connection supports (and has enabled) MTU 9000. See the Tweaks section below for information on how to enable this in some contexts.
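
A quick way to check that a path really carries 9000-byte frames end-to-end is to send a non-fragmentable ICMP payload of 8972 bytes (9000 minus 20 bytes of IP header and 8 bytes of ICMP header) from a Linux endpoint; if any hop on the path is limited to a smaller MTU, the ping fails:

ping -M do -s 8972 -c 3 <destination-IP>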

We have observed network performance gains for VM-to-VM traffic (where VMs are on different hosts). Where the VMs were Linux PV guests, we were able to enable GRO offloading in hosts' dom0, which provided a further speedup.

Open vSwitch

In the various tests that we performed, we observed no statistically significant difference in network performance for dom0-to-dom0 traffic. We observed from 3% (Linux PV guests, no irqbalance) to about 10% (Windows HVM guests, with irqbalance) worse performance for VM-to-VM traffic.

TCP settings

Our experiments show that tweaking TCP settings inside the VM(s) can lead to substantial network performance improvements. The main reason for this is that most systems are still by default configured to work well on 100Mb/s or 1Gb/s, not 10Gb/s, NICs. The Tweaks section below contains a section about the recommended TCP settings for a VM.

Using SR-IOV

SR-IOV is currently a double-edged sword. This section explains what SR-IOV is, what are its down sides, and what its benefits.

Single Root I/O Virtualisation (SR-IOV) is a PCI device virtualisation technology that allows a single PCI device to appear as multiple PCI devices on the physical PCI bus. The actual physical device is known as a Physical Function (PF) while the others are known as Virtual Functions (VF). The purpose of this is for the hypervisor to directly assign one or more of these VFs to a Virtual Machine (VM) using SR-IOV technology: the guest can then use the VF as any other directly assigned PCI device. Assigning one or more VFs to a VM allows the VM to directly exploit the hardware. When configured, each VM behaves as though it is using the NIC directly, reducing processing overhead and improving performance.

SR-IOV can be used only with architectures that support IOMMU and NICs that support SR-IOV; there could be further compatibility constraints by the architecture or the NIC. Please contact support or ask on forums about recommended/officially supported configurations.

If your VM has an SR-IOV VF, functions that require VM mobility, for example, Live Migration, Workload Balancing, Rolling Pool Upgrade, High Availability and Disaster Recovery, are not possible. This is because the VM is directly tied to the physical SR-IOV enabled NIC VF. In addition, VM network traffic sent via an SR-IOV VF bypasses the vSwitch, so it is not possible to create ACLs or view QoS.

Our experiments show that a single-VCPU VM using SR-IOV on a modern system can (together with the usual NIC offloading features enabled) saturate (or nearly saturate) a 10Gbps connection when receiving traffic. Furthermore, the impact on dom0 is negligible.

The Tweaks section below contains a section about how to enable SR-IOV.

NUMA Hosts

Non-Uniform Memory Access (NUMA) is becoming more commonplace and more pronounced in modern machines. To reach optimum efficiency, we have to put processes that often interact "close-by" in terms of NUMA-ness, i.e. on the same CPU node, but not on the same logical CPU (i.e. not on the same CPU core, unless they run on different hyper-threads).

In our case, the main two processes we are concerned about are the netfront (PV drivers in the user domain) and the corresponding netback (network processing in the control domain).

By default, the control domain uses 4 VCPUs, which are mapped to 4 (by default randomly-chosen) PCPUs. Similarly, VCPUs of any VM installed will be (by default) randomly allocated to most-free PCPUs. Moreover, Xen Scheduler prefers to put all VMs (including the control domain) as far away from each other in terms of NUMA-ness as possible. In general, this is a good rule, since each VM then has a large cache, and cache-misses are minimised. However, as explained above, this rule is not great for network performance.

For example, suppose we have a 2-node host, each node with 12 logical CPUs (Physical CPUs/PCPUs), and we install a single 1-VCPU VM. The VM will be put on a different CPU node than the control domain, which means that the communication between the VM's netfront and the control domain's netback will not be efficient. Therefore, in scenarios where network performance is of great importance, we should pin the VCPUs of the control domain and any user domains explicitly, and close-by in terms of NUMA-ness. The pinning should be performed before the VM starts, using VCPUs-params:mask --- see a related article for more information.

In the scenario described, we could pin the control domain VCPUs to the first four PCPUs, and the VM's VCPU to the fifth PCPU; if we installed any more VMs for which network performance is not critical, we can easily pin them to the second node, i.e. PCPUs 12-23. The tool xenpm get-cpu-topology is useful here for obtaining CPU topology of the host.
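
As a rough sketch of the scenario above (the CPU numbers are illustrative, and xl may not be available on older releases), the control domain and the network-critical VM could be kept on node 0 like this:

xenpm get-cpu-topology                                # check which PCPUs belong to node 0
xl vcpu-pin Domain-0 all 0-3                          # keep dom0's four VCPUs on PCPUs 0-3
xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=4    # pin the VM's VCPU to PCPU 4 (before the VM starts)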

Few fast VIFs

For a particular host, if the number of VM network connections (VIFs) that are required to be fast is less than around 2/3 of netback threads in dom0, a further performance boost can be achieved for such connections.

irqbalance, which is enabled by default in later releases, tries to set up interrupts on VCPUs that are already busy, but not too busy. On bare metal machines, this approach works well in terms of performance and power saving (leaving some cores in lower power states if possible). In a virtualised environment, however, the cost of context switching is higher, which means that it is better for performance to process interrupts on non-busy VCPUs. Therefore, we can disable irqbalance, and perform manual IRQ balancing to that effect. For best results, this approach should be combined with manual pinning of VMs.

Queue Length

Our experiments show that increasing Send Queue Length (txqueuelen) can increase network performance a few percent. See Tweaks on how to increase the queue length.

Tweaks

Automatic IRQ Balancing in Dom0

irqbalance is enabled by default.

If IRQ balancing service is already installed, you can enable it by running:

service irqbalance start

Otherwise, you need to install it first with:

yum --disablerepo=citrix --enablerepo=base,updates install -y irqbalance
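
To have the service also start automatically on boot (assuming the standard SysV chkconfig tooling present in dom0), you can additionally run:

chkconfig irqbalance on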

Manual IRQ Balancing in Dom0

While irqbalance does the job in most situations, manual IRQ balancing can prove better in some situations. If we have a dom0 with 4 VCPUs, the following script disables irqbalance, and evenly distributes specific interrupt queues (1272--1279) among the available VCPUs:

service irqbalance stop
for i in `seq 0 7`; do
  queue=$((1272 + i));
  aff=$((1 << i % 4));
  printf "%x" $aff > /proc/irq/$queue/smp_affinity;
done

To find out how many dom0 VCPUs a host has, use cat /proc/cpuinfo. To find out which interrupt queues correspond to which interface, use cat /proc/interrupts.
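
For instance (eth6 is a placeholder for your interface):

grep -c ^processor /proc/cpuinfo    # number of dom0 VCPUs
grep eth6 /proc/interrupts          # the first column lists the IRQ numbers of eth6's queues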

Changing the Number of Dom0 VCPUs

To check the current number of dom0 VCPUs, run cat /proc/cpuinfo.

On newer systems, this can be done as follows:

NUM=8
echo "NR_DOMAIN0_VCPUS=${NUM}" > /etc/sysconfig/unplug-vcpus
/opt/xensource/libexec/xen-cmdline --set-xen dom0_max_vcpus=${NUM}
reboot

On older systems, this can be done by setting /etc/sysconfig/unplug-vcpus, and restarting the host (or, in the case where the number of VCPUs in dom0 is decreasing, running /etc/init.d/unplug-vcpus start.)

Changing the Number of Netback Threads in Dom0

By default, the number of netback threads in dom0 equals min(4, <number_of_vcpus_in_dom0>). Therefore, increasing the number of dom0 VCPUs above 4 will not, by default, increase the number of netback threads.

To increase the threshold number of netback threads to 12, write xen-netback.netback_max_groups=12 into /boot/extlinux.conf under section labelled xe-serial just after the assignment console=hvc0.
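
On hosts that ship the xen-cmdline helper used elsewhere in this guide, the same dom0 kernel parameter can usually be set without editing the file by hand; this is an assumption rather than a documented interface, so verify the flag on your release:

/opt/xensource/libexec/xen-cmdline --set-dom0 xen-netback.netback_max_groups=12
reboot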

Enabling NIC Offloading

Please see the "Offloading some network processing to NICs" section above.

You can use ethtool to enable/disable NIC offloading.

ETH=eth6                # the conn. for which you want to enable offloading
ethtool -k $ETH         # check what is currently enabled/disabled
ethtool -K $ETH gro on  # enable GRO

Note that changing offload settings directly via ethtool will not persist the configuration through host reboots; to do that, use other-config of the xe command.

xe pif-param-set uuid=<pif_uuid> other-config:ethtool-gro=on

Enabling Jumbo Frames

Suppose eth6 and xenbr6 are the device and the bridge corresponding to the 10 Gbit/s connection used.

Shut down user domains:

VMs=$(xe vm-list is-control-domain=false params=uuid --minimal | sed 's/,/ /g')
for uuid in $VMs; do xe vm-shutdown uuid=$uuid; done

Set network MTU to 9000, and re-plug relevant PIFs:

net_uuid=`xe network-list bridge=xenbr6 params=uuid --minimal`
xe network-param-set uuid=$net_uuid MTU=9000
PIFs=$(xe pif-list network-uuid=$net_uuid --minimal | sed 's/,/ /g')
for uuid in $PIFs; do xe pif-unplug uuid=$uuid; xe pif-plug uuid=$uuid; done

Start user domains (you might want to make sure that VMs are started one after another to avoid potential VIF static allocation problems):

VMs=$(xe vm-list is-control-domain=false params=uuid --minimal | sed 's/,/ /g')
for uuid in $VMs; do xe vm-start uuid=$uuid; done

Set up the connections you will use inside the user domains to use MTU 9000. For Linux VMs, this is done with:

ETH=eth1   # the user domain connection you are concerned with
ifconfig $ETH mtu 9000 up

Verifying:

xe vif-list network-uuid=$net_uuid params=MTU --minimal
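
You can also sanity-check the MTU on the dom0 side and inside the guest (the interface names are the ones used in this example):

ifconfig eth6 | grep -i mtu      # in dom0: the PIF should report MTU:9000
ifconfig xenbr6 | grep -i mtu    # in dom0: so should the bridge
ifconfig eth1 | grep -i mtu      # inside the Linux VM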

Linux TCP parameter settings

Default in Dom0

ETH=eth6   # the connection you are concerned with
sysctl -w net.core.rmem_max=131071
sysctl -w net.core.wmem_max=131071
sysctl -w net.ipv4.tcp_rmem="4096 87380 3080192"
sysctl -w net.ipv4.tcp_wmem="4096 16384 3080192"
sysctl -w net.core.netdev_max_backlog=1000
sysctl -w net.ipv4.tcp_congestion_control=reno
ifconfig $ETH txqueuelen 1000
ethtool -K $ETH gro off
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_sack=1
sysctl -w net.ipv4.tcp_fin_timeout=60

Default for a Demo Etch Linux VM

ETH=eth1   # the connection you are concerned with
sysctl -w net.core.rmem_max=109568
sysctl -w net.core.wmem_max=109568
sysctl -w net.ipv4.tcp_rmem="4096 87380 262144"
sysctl -w net.ipv4.tcp_wmem="4096 16384 262144"
sysctl -w net.core.netdev_max_backlog=1000
sysctl -w net.ipv4.tcp_congestion_control=bic
ifconfig $ETH txqueuelen 1000
ethtool -K $ETH gso off
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_sack=1
sysctl -w net.ipv4.tcp_fin_timeout=60

Recommended TCP settings for Dom0

Changing these settings is only relevant if you want to optimise network connections for which one of the end-points is dom0 (not a user domain). Using the settings recommended for a user domain (VM) will work well for dom0 as well.

Recommended TCP settings for a VM

Bandwidth Delay Product (BDP) = Round Trip Time (RTT) * Theoretical Bandwidth Limit

For example, if RTT = 100ms = .1s, and theoretical bandwidth is 10Gbit/s, then:

BDP = (.1 s) * (10 * 10^9 bit/s) = 10^9 bit = 1 Gbit ~= 2^30 bit = 2^27 B = 134217728 B

ETH=eth6
# ESSENTIAL (large benefit)
sysctl -w net.core.rmem_max=134217728              # BDP
sysctl -w net.core.wmem_max=134217728              # BDP
sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728" # _ _ BDP
sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728" # _ _ BDP
sysctl -w net.core.netdev_max_backlog=300000
modprobe tcp_cubic
sysctl -w net.ipv4.tcp_congestion_control=cubic
ifconfig $ETH txqueuelen 300000
# OPTIONAL (small benefit)
ethtool -K $ETH gso on
sysctl -w net.ipv4.tcp_sack=0                      # for reliable networks only
sysctl -w net.ipv4.tcp_fin_timeout=15              # claim resources sooner
sysctl -w net.ipv4.tcp_timestamps=0                # does not work with GRO on in dom0

Checking existing settings:

ETH=eth6
sysctl net.core.rmem_max
sysctl net.core.wmem_max
sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem
sysctl net.core.netdev_max_backlog
sysctl net.ipv4.tcp_congestion_control
ifconfig $ETH | grep -o "txqueuelen:[0-9]\+"
ethtool -k $ETH 2> /dev/null | grep "generic.segmentation.offload"
sysctl net.ipv4.tcp_timestamps
sysctl net.ipv4.tcp_sack
sysctl net.ipv4.tcp_fin_timeout

Pinning a VM to specific CPUs

While this does not necessarily improve performance (it can easily make performance worse, in fact), it is useful when debugging CPU usage of a VM. To assign a VM to CPUs 3 and 4, run the following in dom0:

xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=3,4

In later releases, this can easily be done with xl vcpu-list and xl vcpu-pin.
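
For example (the domain ID 5 and the CPU numbers are illustrative; xl list shows the IDs):

xl vcpu-list 5         # current VCPU-to-PCPU placement for domain 5
xl vcpu-pin 5 0 3      # pin VCPU 0 of domain 5 to physical CPU 3
xl vcpu-pin 5 all 3-4  # or pin all of its VCPUs to physical CPUs 3 and 4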

Switching between Linux Bridge and Open VSwitch

Note: Open vSwitch has been the default network backend since XenServer 6.0.0 and XCP 1.5. If you switched to using Linux Bridge and this proved beneficial to you, please let us know.

To see what network backend you are currently using, run in dom0:

cat /etc/xensource/network.conf

To switch to using the Linux Bridge network backend, run in dom0:

xe-switch-network-backend bridge

To switch to using Open vSwitch network backend, run in dom0:

xe-switch-network-backend openvswitch

Enabling/disabling IOMMU

This is, in fact, not a tweak, but a requirement when using SR-IOV (see below).

Some versions of Xen have IOMMU enabled by default. If disabled, you can enable it by editing /boot/extlinux.conf, and adding iommu=1 to Xen parameters (i.e. just before the first --- of your active configuration). If enabled by default, you can disable it by using iommu=0, instead.
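
On hosts that provide the xen-cmdline helper shown earlier, the same Xen parameter can be added without hand-editing the file:

/opt/xensource/libexec/xen-cmdline --set-xen iommu=1    # use iommu=0 to disable instead
reboot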

Enabling SR-IOV

Make sure that IOMMU is enabled in the version of Xen that you are running --- see section above.

In dom0, use lspci to display a list of Virtual Functions (VFs). For example,

07:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

In the example above, 07:10.0 is the bus:device.function address of the VF.
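
A quick way to list only the VFs is to filter the lspci output:

lspci | grep -i "Virtual Function"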

Assign a free (non-assigned) VF to the target VM by running:

xe vm-param-set other-config:pci=0/0000:<bus:device.function> uuid=<vm-uuid>

(Re-)Start the VM, and install the appropriate VF driver (inside your VM) for your specific NIC.

You can assign multiple VFs to a single VM; however, the same VF cannot be shared across multiple VMs.

Switching RSC off/on

RSC stands for receive-side copying. When RSC is on, some work that is otherwise done in the control domain (by netback threads) is instead performed by user domains (their netfronts). This feature is present and turned on by default with XenServer 6.0.0 and XCP 1.5. The feature only applies to Windows VMs.

To turn RSC off for a particular Windows VM:

  • Create a new registry key within the guest: \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\xenvif\Parameters.
  • Create a new DWORD named ReceiverMaximumProtocol within the above key, setting its value to 0.
  • Restart the guest.

To turn RSC back on:

  • Set the value of ReceiverMaximumProtocol to 1.
  • Restart the guest.

When starting a guest with RSC enabled, /var/log/messages (in the control domain) will note:

... XENVIF   Notice   RingConnect: Protocol 1

Whereas, when starting a guest with RSC disabled, /var/log/messages (in the control domain) will note:

... XENVIF   Notice   RingConnect: Protocol 0

A convenient way to search for the line(s) above is by running the following command in the control domain just before (re-)starting the VM(s): tail -f /var/log/messages | grep XENVIF | grep Protocol

Increasing Send Queue Length

If the ID of the guest is X, and the device number of the VIF of X for which you want to increase the send queue length is Y (see output of xe vif-list for information), run the following command in the host's control domain:

ifconfig vif<X>.<Y> txqueuelen 1024
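
To confirm the new value, you can reuse the grep pattern shown in the TCP settings section above:

ifconfig vif<X>.<Y> | grep -o "txqueuelen:[0-9]\+"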

Acknowledgements

While this guide was mostly written by Rok Strniša, it could not have been nearly as good without the help and advice from many of his colleagues, including (in alphabetic order) Alex Zeffertt, Dave Scott, George Dunlap, Ian Campbell, James Bulpin, Jonathan Davies, Lawrence Simpson, Malcolm Crossley, Marcus Granado, Mike Bursell, Paul Durrant, Rob Hoes, Sally Neale, and Simon Rowe.