Nested Virtualization in Xen
== Introduction ==
Nested virtualization is the ability to run a hypervisor inside a virtual machine. The hypervisor that runs on the real hardware is called the ''level 0'' or ''L0'' hypervisor; a hypervisor that runs as a guest on L0 is called ''level 1'' or ''L1''; a guest that runs on an L1 hypervisor is called ''level 2'' or ''L2''.

Nested virtualization has several potential applications:
* '''End-user virtualization for guests'''. With nested virtualization, users can run Windows 7's XP compatibility mode (VirtualPC), or install Bromium in guests.
* '''Development'''. Many developers have found testing and debugging hypervisor and dom0 code much easier when it is run inside a virtual machine.
* '''Deployment testing'''. When experimenting with the deployment of large clouds, admins can create "virtual clouds", comprising dozens of virtual machines, to test how cloud orchestration software or other coordination and deployment layers will work at scale, without needing a large number of dedicated physical machines.
Running PV guests as an L2 has been supported in Xen since the introduction of HVM guests in Xen 3.0. Support for HVM guests as L2 guests is heavily dependent on architecture-specific support. Nested HVM on AMD CPUs is considered "experimental". Nested HVM on Intel CPUs, as of Xen 4.4, is considered "tech preview". For many common cases, it should work reliably and with low overhead. However, there are some important limitations, and we do not recommend that it be used in a production environment at this time.
Only 64-bit hypervisors are supported at this time.
See below for more details on tested hypervisor/guest combinations and known issues on Intel CPUs.
== Quick-start guide ==
* Make sure you have the right support:
** Xen 4.4 or later
** an Intel CPU with EPT support
* Add the following to your guest config file:
<pre><nowiki>
hap=1
nestedhvm=1
</nowiki></pre>
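For context, here is a minimal sketch of a complete xl configuration for such an L1 guest. Only <code>hap=1</code> and <code>nestedhvm=1</code> are required for nesting; the name, memory size, disk path, and network settings below are hypothetical placeholders to adapt to your setup.

<pre><nowiki>
# Hypothetical L1 guest config -- only hap/nestedhvm are required
# for nesting; every other value here is an example.
builder = "hvm"        # the L1 hypervisor runs as an HVM guest
name    = "l1-hypervisor"
memory  = 4096
maxmem  = 4096         # keep equal to memory: populate-on-demand
                       # is a known issue with nesting (see below)
vcpus   = 4
disk    = ['phy:/dev/vg0/l1-disk,xvda,w']
vif     = ['bridge=xenbr0']
vnc     = 1

# The two lines that enable nested virtualization:
hap       = 1
nestedhvm = 1
</nowiki></pre>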
== Abbreviations ==
{| border="1"
! scope="col" | Abbreviation
! scope="col" | Description
|-
| L0 || The Xen hypervisor running on real hardware
|-
| L1 || A first-level VM, able to start second-level VMs
|-
| L2 || A second-level VM, booted by an L1 hypervisor
|-
| XP_mode || Windows 7 VirtualPC XP mode in an L1 guest on L0 Xen
|-
| V || Verified to work
|-
| V* || Works, but with known bugs
|-
| X || Does not work
|}
== How to use nested virtualization ==
===== Booting Xen: =====
:Nested virtualization support was added to Xen long ago, but it has only been really ready to use since last year, so it is best to use the latest Xen:
<pre><nowiki>
git clone git://xenbits.xen.org/xen.git xen.git
</nowiki></pre>
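:From there, a typical build-and-install sequence looks like the sketch below. Treat it as an outline rather than exact steps: build prerequisites and install paths vary by distribution.

<pre><nowiki>
# Sketch of a from-source build; prerequisite packages vary by distro.
cd xen.git
./configure
make dist          # build the hypervisor and tools
sudo make install
sudo reboot        # reboot into the newly installed Xen
</nowiki></pre>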
:NB: At the time this wiki was written, one patch needed to boot VMware and Hyper-V as L1 guests was still missing from upstream Xen; it must be obtained and applied separately.
===== Booting Dom0: =====
:There are no special requirements for Dom0; any stable kernel is sufficient.
===== QEMU: =====
:Both qemu-xen and qemu-traditional work.
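:If you want to pin a particular device model, the <code>device_model_version</code> option in the guest config selects one explicitly; for example:

<pre><nowiki>
# Optional: choose the device model for the L1 guest explicitly.
device_model_version = "qemu-xen"               # upstream QEMU (the default)
# device_model_version = "qemu-xen-traditional" # the older, traditional QEMU
</nowiki></pre>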
===== L1 configuration: =====
:To use nested virtualization, you need to add the following two lines to your guest configuration file:

<pre><nowiki>
hap=1
nestedhvm=1
</nowiki></pre>
:NB: L1 guests in shadow paging mode are not supported: the performance is very poor, and there is a known issue where nesting fails to work with L1 shadow mode. Enable EPT (<code>hap=1</code>) in the L1 guest.
:To boot VMware ESX, VMware Workstation, or Windows 8 Hyper-V as an L1 guest, you need to mask a CPUID bit:

<pre><nowiki>
cpuid = ['0x1:ecx=0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx']
</nowiki></pre>
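:In xl's <code>cpuid</code> syntax, the value after <code>ecx=</code> is one character per bit, leftmost being bit 31: here bit 31 of CPUID leaf 1's ECX, conventionally the "hypervisor present" bit, is forced to 0, while the <code>x</code> characters leave all other bits at their defaults. Hiding this bit is what lets VMware and Hyper-V believe they are running on bare metal. Once the L1 guest is up, you can verify from inside it that VMX is exposed; on a Linux L1, for example:

<pre><nowiki>
# Inside the L1 Linux guest: confirm the vmx flag is visible to it.
grep -o vmx /proc/cpuinfo | head -1   # should print "vmx"
</nowiki></pre>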
== Current status ==
==== Testing environment: ====
{| border="1"
! scope="col" |
! scope="col" | Version
|-
| L1 XEN || git commit:e423b5cd60ff95ba3680e2e4a8440d4d19b2b13e
|-
| L1 KVM || 3.12-rc2
|-
| L1 VMware Workstation || 10.0
|-
| L1 VMware ESX || 5.1
|-
| L1 Hyperv || Windows 8 with the Hyper-V feature enabled
|}
==== L1 configuration: ====

<pre><nowiki>
hap=1
nestedhvm=1
cpuid = ['0x1:ecx=0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx']
</nowiki></pre>
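:Assuming this configuration is saved as, say, <code>l1.cfg</code> (a hypothetical file name), the L1 guest is then started with the usual xl commands:

<pre><nowiki>
xl create l1.cfg   # start the L1 guest
xl list            # confirm it is running
</nowiki></pre>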
==== Result with xen commit:58f5bcaf as L0: ====
{| border="1" cellpadding="2"
! scope="col" width="150px" | L1\L2
! scope="col" | XP_x86
! scope="col" | XP_x64
! scope="col" | RHEL6U4_x86
! scope="col" | RHEL6U4_x64
! scope="col" | WIN7_x86
! scope="col" | WIN7_x64
! scope="col" | WIN8_x86
! scope="col" | WIN8_x64
|-
|XEN||V||V||V||V||V||V||V||V
|-
|KVM||V||V||V||V||V||V||V||V
|-
|VMware-ESX||V||V||V||V||V||V||V||V
|-
|VMware-Workstation||V*||V||V||V||V*||V||V||V
|-
|Hyperv||V||V||V||V||V||V||V||V
|}
:V means the guest boots well
:V* means it worked before, but is buggy with the latest Xen
==== Result with xen 4.5.0-rc4 as L0: ====

{| border="1" cellpadding="2"
! scope="col" width="150px" | L1\L2
! scope="col" | XP_x86
! scope="col" | XP_x64
! scope="col" | RHEL6U4_x86
! scope="col" | RHEL6U4_x64
! scope="col" | WIN7_x86
! scope="col" | WIN7_x64
! scope="col" | WIN8_x86
! scope="col" | WIN8_x64
|-
|XEN||V||V||V||V||V||V||V||V
|-
|KVM||V||V||V||V||V||V||V||V
|-
|VMware-ESX||X blue screen||X blue screen||V||V||V||X blue screen||X blue screen||X blue screen
|-
|VMware-Workstation||X blue screen||X blue screen||X panic||X panic||X blue screen||X blue screen||X blue screen||X blue screen
|-
|Hyperv||X||X||X||X||X||X||X||X
|}
==== Result with xen 4.6.0-release as L0: ====

{| border="1" cellpadding="2"
! scope="col" width="150px" | L1\L2
! scope="col" | XP_x64
! scope="col" | RHEL6U4_x64
! scope="col" | WIN7_x64
! scope="col" | WIN8_x64
|-
|XEN||V||V||V||V
|-
|KVM||V||V||V||V
|-
|VMware-Workstation||X blue screen||X panic||X blue screen||X blue screen
|-
|Hyperv||X||X||X||X
|}
==== Result with xen 4.7-rc4 as L0: ====

{| border="1" cellpadding="2"
! scope="col" width="150px" | L1\L2
! scope="col" | XP_x64
! scope="col" | RHEL6U4_x64
! scope="col" | WIN7_x64
! scope="col" | WIN8_x64
! scope="col" | WIN8.1_x64
|-
|XEN||V||V||V||V||V
|-
|KVM||V||V||V||V||V
|-
|VMware-Workstation||X blue screen||X panic||X blue screen||X blue screen||X blue screen
|-
|Hyperv||X||X||X||X||X
|}
==== Result with xen 4.8-rc2 as L0: ====

{| border="1" cellpadding="2"
! scope="col" width="150px" | L1\L2
! scope="col" | RHEL7U2_x64
! scope="col" | WIN7_x64
! scope="col" | WIN8_x64
! scope="col" | WIN8.1_x64
|-
|XEN||V||X||X||X
|-
|KVM||X||X||X||X
|-
|VMware-Workstation||X panic||X||X||X
|-
|Hyperv||X||X||X||X
|}
==== Result with xen 4.9.0 as L0: ====

{| border="1" cellpadding="2"
! scope="col" width="150px" | L1\L2
! scope="col" | RHEL7U3_x64
! scope="col" | WIN7_x64
! scope="col" | WIN8_x64
! scope="col" | WIN8.1_x64
|-
|XEN||X||X||X||X
|-
|KVM||X||X||X||X
|-
|VMware-Workstation||X panic||X||X||X
|-
|Hyperv||X||X||X||X
|}
==== Result with xen 4.10.0 as L0: ====

{| border="1" cellpadding="2"
! scope="col" width="150px" | L1\L2
! scope="col" | RHEL7U4_x64
! scope="col" | WIN7_x64
! scope="col" | WIN8_x64
! scope="col" | WIN8.1_x64
|-
|XEN||X||X||X||X
|-
|KVM||X||N/A||X||X
|-
|VMware-Workstation||X panic||X||X||X
|-
|Hyperv||X||X||X||X
|}
==== Result with xen 4.11-rc5 as L0: ====

{| border="1" cellpadding="2"
! scope="col" width="150px" | L1\L2
! scope="col" | RHEL7U5_x64
! scope="col" | WIN7_x64
! scope="col" | WIN8_x64
! scope="col" | WIN8.1_x64
|-
|XEN||X||X||X blue screen||X blue screen
|-
|KVM||X||N/A||X||X
|-
|VMware-Workstation||X panic||X||X||X
|-
|Hyperv||X||X||X||X
|}
== Known Issues ==
:1. Only L2 EPT/shadow on L1 EPT works; L2 EPT/shadow on L1 shadow is not supported.
:2. VirtualBox fails to boot on top of Xen (L1 panics while booting L2).
:3. Hyper-V screen refresh issue:
::Currently the screen does not refresh correctly after booting Hyper-V. Two workarounds:
::use remote desktop to connect to the guest,
::or set full update mode in QEMU.
:4. Using populate-on-demand (memory != maxmem) or guest paging in an L1 hypervisor for an L2 guest may deadlock the L0 hypervisor.

{{Warning|This means an L1 admin can DOS the L0 hypervisor. This is a potential security issue; for this reason, we do not recommend running nested virtualization in production yet.}}

::For this reason, Xen nested virtualization is still considered "tech preview" and unsuitable for production systems.
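:To steer clear of the populate-on-demand issue, keep <code>memory</code> and <code>maxmem</code> equal in the L1 guest configuration (or simply omit <code>maxmem</code>, which then defaults to <code>memory</code>), for example:

<pre><nowiki>
# Avoid populate-on-demand in the L1 guest: memory == maxmem.
memory = 4096
maxmem = 4096   # equal to memory, so PoD is never engaged
</nowiki></pre>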
== Not Tested ==
:1. L1 stress test, performance test
:2. L2 stress test, performance test
:3. L1 save/restore, live migration
:4. L2 save/restore, live migration
== Reference Documents ==
:1. Nested Virtualization on Xen: http://www-archive.xenproject.org/files/xensummit_intel09/xensummit-nested-virt.pdf
:2. Nested Virtualization Update from Intel: http://www.slideshare.net/xen_com_mgr/nested-virtualization-update-from-intel

[[Category:HowTo]]
[[Category:Overview]]
[[Category:Manual]]