<!-- Original date: Sat Oct 15 15:26:00 2011 (1318692360000000) -->

= Best Practices for Xen Project =

This wiki page lists various best practices for running the Xen Project hypervisor.

Information here applies to the open-source Xen Project software from XenProject.org, not necessarily to [[XenServer]] or XCP.

== Paravirtualization (PV) or hardware-assisted virtualization (HVM) ==

[https://wiki.xen.org/wiki/Paravirtualization_(PV) Paravirtualization] used to be the recommended choice: it required no hardware support, involved less emulation of things like interrupt, disk, and network controllers, and could run on any CPU. But with [https://wiki.xen.org/wiki/PV_on_HVM PVHVM] (hardware-assisted virtualization with paravirtualized drivers in the guest), the already diminishing advantages of PV no longer outweigh its disadvantages: more difficult management (such as OS bootstrapping) and weaker Spectre/Meltdown security. As described in the [https://xenproject.org/2018/01/22/xen-project-spectre-meltdown-faq-jan-22-update/ Xen Project Spectre / Meltdown FAQ (Jan 22 update)], a strategy to combat Meltdown that does not require a non-standard release of Xen is to switch guests to HVM.

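For illustration, a minimal [[XL]] (Xen Project 4.1+) configuration for an HVM guest might look like the sketch below; the guest name, disk path, and bridge name are placeholders you would adapt to your system:

<pre>
# /etc/xen/example-hvm.cfg -- illustrative sketch, adapt names/paths/sizes
name    = "example-hvm"
builder = "hvm"       # hardware-assisted virtualization instead of PV
memory  = 2048
vcpus   = 2
disk    = [ 'phy:/dev/vg0/example-hvm,xvda,w' ]   # xvda assumes PV drivers in the guest
vif     = [ 'bridge=xenbr0' ]
</pre>

With PV drivers installed in the guest (PVHVM), disk and network IO still use the fast paravirtualized paths.
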
== Xen Project dom0 dedicated memory and preventing dom0 memory ballooning ==

You should always dedicate a fixed amount of RAM for the Xen Project dom0. To plan how much you need, take into account what you run in dom0 (<code>rsync</code> processes for backups, <code>partclone</code> or similar tools for domU imaging, etc.), plus one QEMU device model per HVM guest. 1024 MB is a good starting point, but if you run dozens of domUs, you may need more.

----

For '''GRUB1''':

This can be done by specifying "<code>dom0_mem=1024M,max:1024M</code>" for the Xen Project hypervisor (usually <code>xen.gz</code>) in the <code>/boot/grub/grub.conf</code> or <code>/boot/grub/menu.lst</code> file. This makes sure that the initial amount of memory allocated to dom0 is 1024 MB ('''note:''' replace this value with the amount of memory you want to allocate to dom0) and leaves the rest of the host system's RAM available for other guests.

Example:

<pre>
title Xen 4.1.0 / pv_ops dom0 kernel 2.6.32.36
root (hd0,0)
kernel /xen-4.1.0.gz dom0_mem=1024M,max:1024M loglvl=all guest_loglvl=all
module /vmlinuz-2.6.32.36 ro root=/dev/sda2 console=hvc0 earlyprintk=xen nomodeset
module /initrd-2.6.32.36.img
</pre>

----

For '''GRUB2''':

Add or edit the following line in the <code>/etc/default/grub</code> file:

<pre>
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M,max:1024M"
</pre>

then run <code>update-grub</code> and reboot.

----

The next step is to configure the toolstack to make sure dom0 memory is never ballooned down while starting new guests:

* If you are using the [[XL]] toolstack, this can be done by editing <code>/etc/xen/xl.conf</code> and setting <code>autoballoon=0</code>. This will prevent [[XL]] from ever automatically adjusting the amount of memory assigned to dom0 (see the snippet after this list).
* If you are using the xend toolstack, this can be done by editing <code>/etc/xen/xend-config.sxp</code>, changing the "<code>dom0-min-mem</code>" option (to "<code>dom0-min-mem 1024</code>") and the "<code>enable-dom0-ballooning</code>" option (to "<code>enable-dom0-ballooning no</code>"). These options make sure <code>xend</code> never takes any memory away from dom0.

After making these changes to <code>grub.conf</code> and to <code>xend-config.sxp</code>/<code>xl.conf</code>, reboot the system. After the reboot you will notice dom0 has only 1024 MB of memory, and the rest of the RAM is available to the Xen Project hypervisor as free memory. You can run "<code>xl list</code>" (or "<code>xm list</code>") to verify the amount of memory dom0 has, and "<code>xl info</code>" (or "<code>xm info</code>") to verify the amount of free memory in the Xen Project hypervisor.

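For example, to check the numbers after the reboot:

<pre>
xl list                                       # dom0's memory column should show ~1024
xl info | grep -E 'total_memory|free_memory'  # free memory held by the hypervisor
</pre>
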
== Why should I dedicate a fixed amount of memory for Xen Project dom0? ==

Dedicating a fixed amount of memory for dom0 is good for two reasons:

First of all, the (dom0) Linux kernel calculates various network-related parameters based on the amount of memory available at boot time.

The second reason is that Linux needs memory to store the memory metadata (per-page info structures), and this allocation is also based on the boot-time amount of memory.

Now, if you boot the system with all of the host's memory visible to dom0 and then balloon down dom0's memory every time you start a new guest, you end up with only a small fraction of the original (boot-time) amount of memory available in dom0. This means the calculated parameters are no longer correct, and you waste a lot of memory on metadata for memory you no longer have. Ballooning down a busy dom0 can also have bad side effects.

== Xen Project credit scheduler domain weights and making sure dom0 gets enough CPU time to serve IO requests (disk/net) ==

For smooth operation and good guest performance you need to make sure that dom0 always gets enough CPU time to process and serve the IO requests of guest VMs. This can be done by setting up the Xen Project credit scheduler domain weights and caps. See the [[CreditScheduler]] wiki page for more information.

Some background: by default, Xen Project gives every guest (including dom0) a weight of 256. This means all guests (including dom0) are equal and get the same amount of CPU time. This can be bad for dom0, since it needs to be able to serve and process the IO requests of the other guests. You should give dom0 more weight so that it gets more CPU time than the guests when it needs it.

Example commands:

* use "xm sched-credit -d Domain-0" to check the current Xen credit scheduler parameters for dom0. |
* use "<code>xm sched-credit -d Domain-0</code>" to check the current Xen Project credit scheduler parameters for dom0. |
||
* use "xm sched-credit -d Domain-0 -w 512" to give dom0 weight of 512, giving it more (up to twice as much) CPU time than the guests. |
* use "<code>xm sched-credit -d Domain-0 -w 512</code>" to give dom0 weight of 512, giving it more (up to twice as much) CPU time than the guests. |
||
Note that you need to apply this setting after every reboot! It's not a persistent setting. You can place the "<code>xm sched-credit</code>" command in <code>rc.local</code> or another script that is executed late in the boot process, as in the sketch below. Make sure the command is executed after <code>xend</code> is started, since the <code>xm</code> command needs to talk to <code>xend</code>.

For domUs, just use the default weight (256), unless you have a reason to tweak it.

== Dedicating a CPU core (or cores) only for dom0 ==

If you're running IO-intensive guests or workloads in the VMs, it might be a good idea to dedicate (pin) a CPU core for dom0's use only. Please see the [[XenCommonProblems]] wiki page section "Can I dedicate a cpu core (or cores) only for dom0?" for more information.

== dom0 network configuration ==

You should configure and set up networking on the Xen Project dom0 using the networking scripts provided by your dom0 distribution (e.g., "<code>/etc/network/interfaces</code>" on Debian and Ubuntu, and "<code>/etc/sysconfig/network-scripts/ifcfg*</code>" on RHEL/CentOS/Fedora). Using the networking scripts provided by your distribution is much better than using the Xen "<code>network-bridge</code>" script, which is troublesome in many configurations due to the interface-renaming tricks it uses.

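For example, a simple bridge on Debian/Ubuntu might look like the sketch below (requires the <code>bridge-utils</code> package; the interface and bridge names are illustrative):

<pre>
# /etc/network/interfaces
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
    bridge_stp   off
    bridge_fd    0
</pre>
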
If using the XL / libxl toolstack (Xen Project 4.1+), you're actually required to set up the networking using the distro network scripts!

To use distro networking scripts/tools:

* If using the xm/xend toolstack, disable the Xen Project network-script in "<code>/etc/xen/xend-config.sxp</code>", i.e., comment out the "<code>network-script</code>" line, or make it point to "<code>/bin/true</code>" (see the sketch after this list).
* Configure network settings (bridges, VLANs, etc.) using the networking scripts available on your dom0 distribution. See the documentation of your distro for more help.
* Reboot.

You can still use the default Xen Project vif-bridge script to attach VM vifs to the bridges you have configured using the distro networking tools, so that no changes are required to the domU configuration files in "<code>/etc/xen/<domU></code>".

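For example, a domU vif line that attaches the guest to a distro-managed bridge (the bridge name is illustrative):

<pre>
vif = [ 'bridge=xenbr0' ]
</pre>
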
See the [[Network Configuration Examples (Xen 4.1+)]] wiki page for examples of how to use the distro network scripts.

== Common problems with Xen Project ==

Also check the [[XenCommonProblems]] wiki page for answers to many common problems related to using/running Xen Project software.

[[Category:Xen]]