Performance of Xen VCPU Scheduling

<under construction>

Matthew Portas

Marcus Granado

== Introduction ==

This page evaluates the performance of the Xen VCPU schedulers with different parameters and patches and under different guest loads. The motivation was the observation that pinning the dom0 vcpus to a set of pcpus in the host using dom0_vcpus_pin, while preventing the guests from running on those dom0 pcpus (here called "exclusively-pinned dom0 vcpus"), increased the general performance of the guests on hosts with around 24 pcpus or more. We wanted to understand why this improved behaviour was not present in the default, non-pinned state, and whether the extra performance could be obtained in the default state by changing a scheduler parameter or patching the Xen VCPU scheduler.

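The exclusively-pinned configuration referred to above can be reproduced roughly as follows. This is only a minimal sketch for an xl-based host: the dom0 vCPU count, the pCPU numbers and the domain name are illustrative assumptions, not the exact configuration used for these tests.

<pre>
# Xen boot command line (assumption: dom0 gets 4 vCPUs, pinned 1:1 onto pCPUs 0-3):
#   dom0_max_vcpus=4 dom0_vcpus_pin

# Keep the guest vCPUs off the dom0 pCPUs by restricting their hard affinity to
# the remaining pCPUs (here 4-23, assuming a 24-pCPU host):
xl vcpu-pin <guest-domain> all 4-23

# Verify the resulting vCPU-to-pCPU placement and affinities:
xl vcpu-list
</pre>
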
== T1. Check that the pinning/unpinning issue is present ==

== T2. Experiment with scheduler parameters ==

== T3. Understand if there is a latency problem ==

Accumulated login time for the VMLogin events, with xentrace logs taken at the 20th and 40th VM start. Exclusive pinning (blue) vs. no pinning (yellow). All the VMs are executing a CPU loop in order to maximize pCPU usage.

[[File:Harusplutter.t3.60vmlogin.nopin-vs-xpin.png|Accumulated login time for the VMLogin events with xentrace logs taken at the 20th and 40th VM start. Exclusive pinning (blue) vs. no pinning (yellow). All the VMs are executing a CPU loop in order to maximize pCPU usage.]]

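For reference, the xentrace logs mentioned in the caption can be captured along these lines. This is a hedged sketch: the event mask, the file names and the location of the formats file are assumptions and may need adjusting for the Xen build in use.

<pre>
# Record hypervisor trace records to a binary file; -D discards stale trace
# buffers and -e selects the event classes to record (0x0002f000 is assumed
# here to be the scheduler event class; check trace.h for your Xen version).
xentrace -D -e 0x0002f000 /root/sched-trace-20vms.bin
# ... start the 20th VM, then stop xentrace with Ctrl-C ...

# Convert the binary trace into readable per-event text using the formats file
# shipped with the Xen tools (path is an assumption):
xentrace_format /usr/share/xen/xentrace/formats < /root/sched-trace-20vms.bin > sched-trace-20vms.txt
</pre>
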
* At the 20th VM there are more vCPUs than pCPUs on the host.

* With no pinning, the Xen scheduler tends to group the new VM's vCPU running events into larger chunks on the same pCPU.

* With exclusive pinning, the Xen scheduler tends to interleave the new VM's vCPU running events with other VMs' vCPU running events.

* With exclusive pinning, the new VM's vCPU (black triangle) is believed to be doing IO at boot time and, after about 10us, yields its vCPU to Xen so that this IO request can be handled. Xen then schedules a different VM's vCPU (green square), which runs (in its CPU loop) for a timeslice of about 1ms. Only then is the new VM given its vCPU back to handle its next IO request.

(ToDo: run the same test but with a maximum Xen timeslice of less than 1ms, so that the new VM's IO requests are blocked for only a short period of time. A sketch of how this might be done with the xl toolstack follows this list.)

* With no pinning, when the new VM's vCPU yields, Xen does not schedule another VM's vCPU on the same pCPU. Therefore, once the IO request has been handled, the VM's vCPU can be rescheduled and can process this IO request immediately.

* This highlights a preference of the scheduler for vCPUs which are running the CPU loops, but only in the exclusively-pinned case.

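As flagged in the ToDo above, one way to experiment with shorter scheduling intervals is to change the credit scheduler's system-wide timeslice and ratelimit at runtime with the xl toolstack. The values below are illustrative assumptions, and the minimum values actually accepted depend on the Xen version and scheduler in use.

<pre>
# Show the current system-wide credit scheduler parameters (timeslice, ratelimit):
xl sched-credit -s

# Lower the ratelimit to 100us (illustrative value) so that a vCPU spinning in a
# CPU loop can be preempted well before the default 1ms once another vCPU
# (e.g. one whose IO request has completed) becomes runnable:
xl sched-credit -s -r 100

# The timeslice is set in milliseconds; 1ms is the smallest value commonly
# accepted, which is why the ratelimit is the more interesting knob here:
xl sched-credit -s -t 1
</pre>
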
Accumulated login time for the VMStart events, with xentrace logs taken at the 20th and 40th VM start. Exclusive pinning (blue) vs. no pinning (yellow). All the VMs are executing a CPU loop in order to maximize pCPU usage.

[[File:Harusplutter.t3.60vmstart.nopin-vs-xpin.png|Accumulated login time for the VMStart events with xentrace logs taken at the 20th and 40th VM start. Exclusive pinning (blue) vs. no pinning (yellow). All the VMs are executing a CPU loop in order to maximize pCPU usage.]]

''sched_ratelimit_us:''

The Xen parameter sched_ratelimit_us sets the minimum amount of time for which a VM is allowed to run without being preempted. The default value is 1000 (1ms).

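A minimal sketch of how this parameter might be set, assuming a credit-scheduler host (the bootloader details depend on the installation):

<pre>
# At boot: append the option to the Xen (hypervisor) command line in the
# bootloader configuration, e.g.:
#   sched_ratelimit_us=0

# At runtime, for the credit scheduler, the equivalent system-wide setting is:
xl sched-credit -s -r 0
</pre>
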
Setting sched_ratelimit_us=0:

* For VMLogin with cpu_loop as the VM load, the boot time decreased dramatically for exclusive pinning, resulting in a time similar to that of no pinning. http://perf/?t=577

* For VMStart with cpu_loop as the VM load, the boot time decreased dramatically for both exclusive pinning and no pinning. http://perf/?t=580

* However, when performing the LoginVSI tests, no improvement in score was achieved; instead the score was slightly worse. http://perf/?t=583

* When no load is running in the VM, there is barely any difference for VMStart. http://perf/?t=585

* However, for VMLogin with no load in the VM, this parameter gives better results if pinning is not used; if exclusive pinning is used, it actually performs much worse. http://perf/?t=593

=== Conclusion ===

* The results show that sched_ratelimit_us can have a big effect on bootstorm performance. However, the effect is very dependent on the load being run in the VMs.

* If Xen were able to categorise the type of work a VM is doing, then the sched_ratelimit_us parameter could be adjusted automatically.

* A prototype of this would be a suitable improvement to implement as part of this TDP. A very rough illustration is sketched below.

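As a very rough illustration only, such a prototype could start life as a dom0 script that periodically checks whether the guest vCPUs look CPU-bound or IO-bound and switches the ratelimit accordingly. Everything below (the polling interval, the threshold, and the choice of ratelimit values) is an assumption, and a real implementation would more likely live inside the scheduler itself.

<pre>
#!/bin/bash
# Illustrative sketch: drop the ratelimit to 0 when most vCPUs are blocked
# (typically IO-bound, e.g. during a bootstorm) and restore the 1ms default
# when most vCPUs are running (typically CPU-bound).
# The state column of 'xl vcpu-list' contains 'r' for running and 'b' for blocked.
while true; do
    running=$(xl vcpu-list | awk '$5 ~ /r/' | wc -l)
    blocked=$(xl vcpu-list | awk '$5 ~ /b/' | wc -l)
    if [ "$blocked" -gt "$running" ]; then
        xl sched-credit -s -r 0       # mostly IO-bound: allow immediate preemption
    else
        xl sched-credit -s -r 1000    # mostly CPU-bound: keep the default ratelimit
    fi
    sleep 5
done
</pre>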