Xen Project 4.4 Feature List

__TOC__
 
= High Level Features =
 
See [[Xen Release Features|this table]] for a comparison of the feature sets of different Xen Project releases. Compatibility information can be found in the following two tables: [[:Category:Host Install|Host Operating Systems]] and [[:Category:Guest Install|Guest Operating Systems]].
 
<em>Note that Linux distributions and other operating systems will upgrade to Xen Project 4.4 according to their own release schedules.</em>
 
== Improved Flexibility in Driver Domains ==
 
Linux driver domains used to rely on udev events to launch backends for guests. With Xen Project 4.4, the dependency on udev is replaced with a custom daemon built on top of libxl, which provides greater flexibility to run user-space backends inside driver domains. For example, this allows driver domains to use Qdisk backends, which was not possible with udev.
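 
As an illustration of what this enables, here is a minimal sketch of a toolstack client using the libxl C API to attach a disk whose Qdisk backend is served from a driver domain rather than dom0. This is a hedged example against the 4.4-era API: the domain IDs, image path and device name are hypothetical, and field usage may differ in detail.
 
<source lang="c">
/*
 * Illustrative sketch only: attach a qcow2-backed disk to a running
 * guest, with the Qdisk backend served from a driver domain instead
 * of dom0.  Modelled on the Xen 4.4-era libxl C API; the domain IDs
 * and image path are hypothetical.
 */
#include <stdint.h>
#include <string.h>
#include <libxl.h>

int attach_disk_from_driver_domain(libxl_ctx *ctx, uint32_t guest_domid,
                                   uint32_t driver_domid)
{
    libxl_device_disk disk;
    libxl_device_disk_init(&disk);

    disk.backend_domid = driver_domid;        /* backend runs in the driver domain */
    disk.backend = LIBXL_DISK_BACKEND_QDISK;  /* user-space (qemu) backend */
    disk.format = LIBXL_DISK_FORMAT_QCOW2;
    disk.pdev_path = strdup("/images/guest.qcow2");  /* hypothetical image path */
    disk.vdev = strdup("xvdb");               /* device name seen by the guest */
    disk.readwrite = 1;

    int rc = libxl_device_disk_add(ctx, guest_domid, &disk, NULL);
    libxl_device_disk_dispose(&disk);
    return rc;
}
</source>
 
At the xl level, the corresponding disk specification key for naming the backend domain is ''backend=''.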
   
== Event Channel Scalability Improvements ==
 
Event channels are para-virtualized interrupts. They were previously limited to either 1024 or 4096 channels per domain. Domain 0 needs several event channels for each guest VM (for network and disk backends, QEMU, and so on), which limited the total number of VMs to around 300-500, depending on VM configuration.
 
The new FIFO-based event channel ABI allows for over 100,000 event channels, with improved fairness and support for multiple priorities. The increased limit allows for more VMs, which benefits large systems and cloud operating systems such as MirageOS, ErlangOnXen, OSv and HaLVM. It also benefits disaggregated systems, in which drivers and services (e.g. QEMU) that would normally run in Domain 0 run in separate VMs instead.
 
The new ABI requires guest support, which will be available in Linux 3.14.
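 
For example, a guest kernel that supports the FIFO ABI can assign per-channel priorities through the new EVTCHNOP_set_priority operation. The sketch below is illustrative rather than a complete driver; it assumes guest kernel context and uses the Linux-style hypercall wrapper name.
 
<source lang="c">
/*
 * Illustrative sketch: raise a FIFO event channel to the highest
 * priority from guest kernel code.  EVTCHNOP_set_priority is part of
 * the FIFO-based ABI.
 */
#include <xen/interface/event_channel.h>
#include <asm/xen/hypercall.h>  /* HYPERVISOR_event_channel_op (Linux wrapper) */

static int make_port_high_priority(evtchn_port_t port)
{
    struct evtchn_set_priority op = {
        .port     = port,
        .priority = 0,  /* FIFO ABI: 0 is the highest of 16 priorities */
    };

    return HYPERVISOR_event_channel_op(EVTCHNOP_set_priority, &op);
}
</source>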
 
   
== Experimental Support for ParaVirtualization Hardware (PVH) Mode for Guests ==
 
PVH mode combines the best elements of HVM and PV into a mode which allows the hypervisor to take advantage of the hardware virtualization features in contemporary hardware, without the overhead of emulating the devices of a physical computer.
 
This will allow for increased efficiency, as well as a reduced footprint, in Linux and FreeBSD going forward.
 
More information on PVH: see https://www.linux.com/news/enterprise/systems-management/658784-the-spectrum-of-paravirtualization-with-xen-part-2 and http://blog.xenproject.org/index.php/2014/01/31/linux-3-14-and-pvh/
   
== Intel Nested Virtualization declared "Tech Preview" ==
 
Nested virtualization provides virtualized hardware virtualization extensions to guests. This allows you to run Xen Project, KVM, VMware or Hyper-V inside a guest for debugging or deployment testing. It also enables Windows 7 "XP compatibility mode". Nested virtualization is not yet ready for production use, but it has made significant gains in functionality and reliability, and is now ready to be declared a "tech preview". Please try it out and report any issues you find.
 
More information on nested virtualization: see [[Xen nested]]
 
== Improved Support for SPICE ==
 
[[SPICE support in Xen|SPICE]] is a protocol for virtual desktops which allows a much richer connection than display-only protocols like VNC. Xen Project 4.4 adds support for additional SPICE functionality, including vdagent, clipboard sharing, and USB redirection.
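 
These features are exposed through corresponding xl configuration options, and at the libxl level they appear as new knobs in the SPICE build information. The following sketch, hedged against the 4.4-era libxl field names, shows them being enabled for an HVM guest; the port number is hypothetical.
 
<source lang="c">
/*
 * Illustrative sketch: switch on the new SPICE features when building
 * an HVM guest via libxl.  Field names follow the Xen 4.4-era
 * libxl_spice_info and may differ in detail; the port is hypothetical.
 */
#include <stdbool.h>
#include <libxl.h>

static void enable_spice_features(libxl_domain_build_info *b_info)
{
    libxl_spice_info *spice = &b_info->u.hvm.spice;

    libxl_defbool_set(&spice->enable, true);
    spice->port = 6000;                                  /* hypothetical port */
    libxl_defbool_set(&spice->vdagent, true);            /* guest agent channel */
    libxl_defbool_set(&spice->clipboard_sharing, true);  /* host/guest clipboard */
    spice->usbredirection = 4;                           /* 4 redirected USB channels */
}
</source>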
 
== GRUB 2 Support for Xen Project PV Images (External) ==
 
In the past, Xen Project software required a custom implementation of GRUB called pvgrub. The upstream GRUB 2 project (see http://www.gnu.org/software/grub/) now has a build target which constructs a bootable PV Xen image. This ensures 100% GRUB 2 compatibility for pvgrub going forward.
   
== Indirect Descriptors for Block PV Protocol (Linux) ==
 
Modern storage devices work much better with larger chunks of data. Indirect descriptors have allowed the size of each individual request to triple, greatly improving I/O performance when running on fast storage technologies like SSD and RAID. This support is available in any guest running Linux 3.11 or higher (regardless of Xen Project version).
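 
To see where the "triple" comes from: a classic block request embeds at most 11 segments (44 KiB with 4 KiB pages), while an indirect request carries only grant references to pages that are themselves packed with segment descriptors, so the Linux frontend's default of 32 segments (128 KiB, roughly three times more) still fits in a single ring slot. The struct below is a sketch modelled on the Linux 3.11-era blkif protocol header; names and layout are illustrative rather than authoritative.
 
<source lang="c">
/*
 * Sketch of the indirect request layout, modelled on the Linux
 * 3.11-era Xen block (blkif) protocol header; illustrative only.
 */
#include <stdint.h>

typedef uint32_t grant_ref_t;
typedef uint64_t blkif_sector_t;
typedef uint16_t blkif_vdev_t;

#define BLKIF_OP_INDIRECT                    6
#define BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST 8

struct blkif_request_indirect {
    uint8_t        operation;     /* BLKIF_OP_INDIRECT */
    uint8_t        indirect_op;   /* the real op, e.g. BLKIF_OP_READ */
    uint16_t       nr_segments;   /* segments across all indirect pages */
    uint64_t       id;            /* echoed back in the response */
    blkif_sector_t sector_number; /* first sector of the request */
    blkif_vdev_t   handle;
    uint16_t       _pad;
    /*
     * Each grant reference maps one page full of segment descriptors,
     * so one ring slot can describe far more data than the classic
     * 11-segment request.
     */
    grant_ref_t    indirect_grefs[BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST];
} __attribute__((__packed__));
</source>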
   
== Improved kexec Support ==
 
kexec allows a running Xen Project host to be replaced with another OS without rebooting. It is primarily used to execute a crash environment that collects information after a Xen Project hypervisor or dom0 crash, so that developers can diagnose and fix the root cause.
 
The existing functionality has been extended to:
 
* Allow tools to load images without requiring dom0 kernel support (which does not exist in upstream kernels).
* Improve reliability when used from a 32-bit dom0.
 
kexec-tools 2.0.5 or later is required.
   
 
== Improved XAPI and Mirage OS Support in the Xen Project Environment ==
 
XAPI and Mirage OS are sub-projects within the Xen Project written in OCaml. Both are also used in XenServer (see http://xenserver.org/) and rely on the Xen Project OCaml language bindings to operate well. These language bindings have had a major overhaul, resulting in much better compatibility between XAPI, Mirage OS and Linux distributions going forward.
   
== Experimental Support for Guest EFI Boot ==
 
EFI is the new booting standard that is replacing BIOS. Some operating systems only boot with EFI, and some features, like SecureBoot, only work with EFI.
 
== Improved Integration between GlusterFS and Xen Project Software ==
 
You can find a blog post describing how to set up an iSCSI target on the Gluster blog [http://www.gluster.org/2013/11/a-gluster-block-interface-performance-and-configuration/ here].
   
 
== Improved ARM Support in the Xen Project Hypervisor ==
 
A number of new features have been implemented:
 
* 64-bit Xen on ARM now supports booting guests.
* Physical disk partitions and LVM volumes can now be used to store guest images, via the paravirtualized block backend (xen-blkback).
* Significant stability improvements across the board.
* ARM/multiboot booting protocol design and implementation in Xen Project.
* PSCI support in Xen Project.
* DMA in Dom0 works even without a hardware IOMMU, since Dom0 memory is mapped 1:1 to machine addresses.
* The ARM and ARM64 ABIs in Xen Project are declared stable and will be maintained for backwards compatibility.
* Significant usability improvements, such as automatic creation of guest device trees and improved handling of host DTBs.
* Adding new hardware platforms to Xen Project on ARM has been vastly improved, making it easier for hardware and embedded vendors to port Xen on ARM to their boards.
* Xen on ARM now supports the Arndale board, Calxeda ECX-2000 (aka Midway), Applied Micro X-Gene Storm, TI OMAP5 and Allwinner A20/A30 boards.
* ARM server class hardware (Calxeda Midway) has been introduced into the Xen Project OSSTest automated testing framework.
   
== Early microcode loading ==
 
The hypervisor can now update the CPU microcode in the early phase of boot. The microcode binary blob can be supplied either as a standalone multiboot payload or as part of the initial kernel (dom0) ramdisk (initrd). To take advantage of this, use the latest version of ''dracut'' with the ''--early-microcode'' parameter, and specify ''ucode=scan'' on the Xen command line. For details, see the ''dracut'' manpage and http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
   
== Updated Components ==
 
* Updated qemu to 1.6
* Updated SeaBIOS to 1.7.3.1
 
= Documentation =
 
You can find Xen Project 4.4 documentation in the following locations:
* [[Xen_4.4_Release_Notes|Xen Project 4.4 Release Notes]]
* [[Xen_4.4_Man_Pages|Xen Project 4.4 Man Pages]]
* [[:Category:Xen 4.4|Articles and tutorials related to new functionality in Xen Project 4.4]]
   
 
= Acknowledgements =
 
We want to thank the many contributors to Xen Project 4.4: for a complete list of contributions, see the [[Xen_4.4_Acknowledgements|Xen Project 4.4 Acknowledgements]].
 
= Downloads =
 
Xen Project 4.4 (and update releases) can be downloaded from the [http://xenproject.org/downloads/xen-archives/xen-44-series.html 4.4 Download Archives].
 
[[Category:Xen]]
