Setting boot order for domUs
How to change the boot order (boot sequence) in Xen PV and HVM guests.
== HVM guests ==
Use the boot parameter:
 # boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
 # default: hard disk, cd-rom, floppy
 boot="cda"
Put the letters in order from left to right: the drive corresponding to the first (leftmost) letter boots first. For example, if you want to boot from the CD-ROM first, then from the network and lastly from the hard disk, set your boot parameter to:
 boot="dnc"
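For context, here is a minimal sketch of an HVM guest config with the boot order set. The guest name, memory size, disk image, ISO path and bridge are made-up example values; only the '''boot''' line is the point of the example:

 # hypothetical /etc/xen/hvm-guest.cfg -- illustrative values only
 name    = "hvm-guest"
 builder = "hvm"          # newer xl versions use type = "hvm" instead
 memory  = 1024
 disk    = [ '/srv/xen/hvm-guest.img,raw,hda,rw',
             'format=raw, vdev=hdc, access=ro, devtype=cdrom, target=/srv/iso/install.iso' ]
 vif     = [ 'bridge=xenbr0' ]
 # try CD-ROM first, then network, then the hard disk
 boot    = "dnc"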
== PV guests ==
PV guests don't run in a fully emulated environment like HVM guests. As a consequence they don't have a BIOS, so they cannot boot from the network (PXE) or from a CD-ROM without special configuration.

They can however boot from a user-provided kernel or from one of the kernels installed in the VM disk image.

In many cases they can still boot from the network or a CD-ROM using the workarounds described below.
=== boot from a user specified kernel ===
You just need to pass the right kernel and initrd parameters:
 # Kernel image to boot
 kernel = "/boot/vmlinuz"
 # Ramdisk (optional)
 ramdisk = "/boot/initrd.gz"
You can also pass the extra parameter to specify the kernel command line, which allows you, for example, to choose the root disk:
 # Kernel command line options
 extra = "root=/dev/xvda1"
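Put together, the relevant part of such a config might look like this. The disk image path is an example value, and the rest of the guest config (name, memory, network) is omitted:

 # Boot a PV guest directly from a kernel stored in dom0 -- example paths
 kernel  = "/boot/vmlinuz"
 ramdisk = "/boot/initrd.gz"
 extra   = "root=/dev/xvda1"
 disk    = [ '/srv/xen/pv-guest.img,raw,xvda,rw' ]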
=== boot from a kernel installed in the VM disk ===
You need to add the following two lines to the VM config file:
 bootloader = '/usr/bin/pygrub'
 extra = "(hd0)/boot/grub/menu.lst"
Make sure you pass '''-c''' to '''xl create''' to connect to the guest's console right away.
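For example, a complete pygrub-based config could look roughly like this; the guest name, memory size, disk image and bridge are placeholders:

 # hypothetical /etc/xen/pv-guest.cfg
 name       = "pv-guest"
 memory     = 512
 vcpus      = 1
 disk       = [ '/srv/xen/pv-guest.img,raw,xvda,rw' ]
 vif        = [ 'bridge=xenbr0' ]
 bootloader = '/usr/bin/pygrub'
 extra      = "(hd0)/boot/grub/menu.lst"

Then start it with the console attached:

 xl create -c /etc/xen/pv-guest.cfg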
=== network boot solutions and workarounds ===
==== Xenpvnetboot ====
You can use [[Xenpvnetboot_:_A_network_bootloader_for_Xen_PV_guest|Xenpvnetboot]].
It supports many schemes for fetching the kernel.
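The exact invocation depends on the version you have installed, but a setup like this is typically wired up through the generic bootloader hooks in the guest config. '''bootloader''' and '''bootloader_args''' are standard xl.cfg options; the script path and the --location argument below are assumptions, so check the Xenpvnetboot page for the syntax your version expects:

 # Assumed install path and arguments -- verify against your Xenpvnetboot installation
 bootloader      = "/usr/bin/xenpvnetboot"
 bootloader_args = [ "--location=http://bootserver.example/pv-guest/" ]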
==== pypxeboot ====
The older solution for this is pypxeboot, which can be found for example at https://github.com/blamarvt/pypxeboot, with documentation at http://zhigang.org/files/docbook/xen-pxeboot.html.
This only works with the deprecated xm toolstack and should not be used for newer systems.
It looks for a DHCP server, requests kernel information via PXE and then launches the VM with the kernel it received. If the kernel from PXE is invalid, or none was offered, it falls back to a local boot using pygrub.
Its advantages were PXE compatibility and fall-through to the local disk, which means it behaves like a normal BIOS boot order.
==== xl create wrapper ====
Otherwise you can write yourself a simple wrapper script that:
* gets the kernel and initrd from the network;
* updates the '''kernel''' and '''ramdisk''' parameters in the VM config file;
* calls '''xl create config-file'''.
Note this will '''not''' fetch a new kernel / command line on reboot. You may need to use '''on_reboot = destroy''' and handle VM restarts from your script. If you're going that route, expect some extra work and consider generating the VM configs dynamically every time; a rough sketch of such a wrapper follows below.
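The sketch below is only an illustration of the three steps above. The boot server URL, file paths, guest name and config file name are all assumptions, it expects the '''kernel''' and '''ramdisk''' lines to already exist in the config, and error handling is deliberately minimal:

 #!/bin/sh
 # Hypothetical wrapper: fetch boot artefacts, patch the config, start the guest.
 set -e
 
 GUEST=pv-guest
 BOOT_URL=http://bootserver.example/xen/${GUEST}   # assumed location of kernel/initrd
 CFG=/etc/xen/${GUEST}.cfg
 KERNEL=/var/lib/xen/boot/${GUEST}-vmlinuz
 RAMDISK=/var/lib/xen/boot/${GUEST}-initrd
 
 # 1. get the kernel and initrd from the network
 wget -q -O "$KERNEL"  "${BOOT_URL}/vmlinuz"
 wget -q -O "$RAMDISK" "${BOOT_URL}/initrd.gz"
 
 # 2. update the kernel and ramdisk parameters in the VM config file
 sed -i -e "s|^kernel *=.*|kernel = \"$KERNEL\"|" \
        -e "s|^ramdisk *=.*|ramdisk = \"$RAMDISK\"|" "$CFG"
 
 # 3. start the guest
 xl create "$CFG"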