Xen Project 4.9 Feature List


New Features

Boot Xen on EFI platforms using GRUB2 (x86): from Xen Project 4.9 and GRUB2 2.02 onwards, the Xen Project Hypervisor can be booted using the multiboot2 protocol on legacy BIOS and EFI x86 platforms. Partial support for the multiboot2 protocol was also introduced into network boot firmware (iPXE). This makes the Xen Project boot process much more flexible: boot configurations can be changed directly from within the bootloader (without having to use text editors) and are more portable across different platforms.
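
As an illustrative sketch (file names, versions and command-line options below are placeholders, not taken from the release notes), a GRUB2 menu entry booting Xen via multiboot2 looks roughly like:

 menuentry 'Xen 4.9' {
     multiboot2 /boot/xen.gz dom0_mem=1024M
     module2    /boot/vmlinuz root=/dev/sda1 ro console=hvc0
     module2    /boot/initrd.img
 }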

Near native latency for embedded and automotive environments: The new "null" scheduler enables use cases in which every virtual CPU is statically assigned to a physical CPU (a configuration commonly needed in embedded and automotive environments), removing almost all scheduling overhead and yielding significantly lower latency and more predictable performance. The new vwfi parameter for ARM (virtual Wait For Interrupt) allows fine-grained control of how the Xen Project Hypervisor handles WFI instructions. Setting vwfi to "native" reduces interrupt latency by approximately 60%: benchmarks on Xilinx Zynq Ultrascale+ MPSoCs have shown a maximum interrupt latency of less than 2 microseconds, which is extremely close to the hardware limit and should be small enough for the vast majority of embedded use cases.
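
Both behaviours are selected on the hypervisor command line; a sketch of the relevant boot options:

 sched=null    # use the "null" scheduler: static vCPU-to-pCPU assignment
 vwfi=native   # do not trap WFI/WFE instructions: lowest interrupt latency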

Xen 4.9 includes new standard ABIs for sharing devices between virtual machines (including reference implementations) for a number of embedded, automotive and cloud native computing use-cases.

For embedded/automotive, a virtual sound ABI was added, implementing audio playback and capture as well as volume control and the ability to mute and unmute audio sources. In addition, a new virtual display ABI has been added for complex display devices exposing multiple framebuffers and displays. Multi-touch support has been added to the virtual keyboard/mouse protocol (enabling touch screens).

Xen 4.9 also introduces a Xen transport for 9pfs, a remote filesystem protocol originally written for Plan 9. During the Xen 4.9 release cycle, a Xen 9pfs frontend was upstreamed in the Linux kernel and a backend in QEMU. It is now possible to share a filesystem (not necessarily a block device) from one virtual machine to another, which is a requirement for adding Xen support to many container engines, such as CoreOS rkt (see the sketch below). Finally, the PV Calls ABI has been introduced to allow forwarding POSIX requests across guests: a POSIX function call originating from an app in a DomU can be forwarded to and implemented in Dom0. For example, guest networking socket calls can be forwarded to Dom0, enabling a new networking model which is a natural fit for cloud-native apps.
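
As a sketch of how a 9pfs share is wired up (xl key names following xl.cfg; the tag and path are placeholders): the backend exports a directory, and the guest mounts it through the Xen 9p transport.

 # In the guest's xl configuration file:
 p9 = [ "tag=share, security_model=none, path=/srv/guest-share" ]

 # Inside the guest:
 mount -t 9p -o trans=xen share /mnt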

Improvements to Existing Functionality

Xenstored optimisations: xenstore daemons allow Dom0 and guests to access system configuration information. The scalability limits of C-xenstored have been raised to allow large hosts (roughly 1000 domains and beyond) to run efficiently. Transaction handling has been improved for better performance, a smaller memory footprint and fewer transaction conflicts. Dynamic debugging capabilities have been added.
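
For illustration, the store can be inspected and modified from Dom0 with the standard command-line clients (the paths below are examples):

 xenstore-ls /local/domain/0                         # list a domain's subtree
 xenstore-read /local/domain/0/name                  # read a single key
 xenstore-write /local/domain/0/data/example value   # write a key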

DMOP (Device Model Operation Hypercall): In Xen 4.9 the interface between Xen and QEMU was completely re-worked and consolidated. There is now only a single hypercall in Xen (the DMOP hypercall) which is carefully designed to allow the privcmd driver to audit any QEMU memory ranges and parameters that are passed to Xen via DMOP. The Linux privcmd driver enables DMOP auditing, which significantly limits the capability of a compromised QEMU to attack the hypervisor.
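
The key design point is that a DMOP carries all of its indirect parameters in an explicit array of buffer descriptors, so the privcmd driver can inspect and range-check every guest-memory reference before forwarding the hypercall. A simplified sketch of the public interface (following xen/include/public/hvm/dm_op.h; details elided):

 struct xen_dm_op_buf {
     XEN_GUEST_HANDLE(void) h;   /* address of the buffer */
     xen_ulong_t size;           /* size of the buffer in bytes */
 };

 /* All data passed to the hypervisor flows through 'bufs', which the
  * privcmd driver can audit before issuing the hypercall on QEMU's behalf. */
 rc = HYPERVISOR_dm_op(domid, nr_bufs, bufs);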

Alternative runtime patching and GICv3 support for ARM32: Alternative runtime patching, which enables the hypervisor to apply workarounds for errata affecting the processor and to apply CPU-specific optimizations, as well as GICv3 support, were extended to 32-bit ARM platforms, bringing this functionality to embedded use-cases.

System Error Detection (ARM): While waiting for ARMv8.2 for full RAS support, Xen on ARM took a step forward in reliability and serviceability with the introduction of System Error (SError) detection and reporting, a key feature for customers with highly available systems.
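
The reporting behaviour is controlled from the hypervisor command line; the option and values below are as listed in the Xen command-line documentation for this release (semantics are paraphrased here, so consult the documentation for the exact trade-offs):

 serrors=diverse|forward|panic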

GCOV support: We removed the old GCOV implementation and replaced it with an updated version that supports more formats and exposes a more generic interface.
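
A rough sketch of the workflow (tool and option names as found in the Xen tools tree; exact usage may differ): build the hypervisor with CONFIG_COVERAGE enabled, then manage the counters from Dom0 with the xencov utility.

 # Enable CONFIG_COVERAGE in the hypervisor configuration, then rebuild:
 make -C xen menuconfig
 make xen

 # At runtime, from Dom0:
 xencov reset                 # zero the coverage counters
 xencov read > coverage.dat   # dump the counters gathered by the hypervisor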

Re-work and hardening of x86 emulation code for security: hardware-assisted virtualisation provides hypervisors with the ability to execute most privileged instructions natively and securely. For some boundary cases, however, it is still necessary to emulate x86 instructions in software. In Xen 4.9, the project completely re-worked the x86 emulation code, added support for new instructions, audited the code against security vulnerabilities and created AFL-based fuzzing tests that are regularly run against the emulator.
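
As a sketch of how the fuzzer is driven (the harness lives under tools/fuzz/x86_instruction_emulator in the source tree; the build target and paths here are illustrative):

 cd tools/fuzz/x86_instruction_emulator
 make afl                                          # build the AFL harness
 afl-fuzz -i testcases -o findings -- ./afl-harness @@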

Credit2: The Credit2 scheduler has seen significant improvements in this release.
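
Credit2 can be selected and tuned as follows (a sketch; the domain name and weight are placeholders):

 # Hypervisor command line: make Credit2 the default scheduler
 sched=credit2

 # From Dom0, inspect or adjust per-domain scheduling weights:
 xl sched-credit2
 xl sched-credit2 -d mydomain -w 512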

Updated support for Microsoft's Hyper-V Hypervisor Top-Level Functional Specification (also known as Viridian Enlightenments): Xen implements a subset of version 5.0 of the Hyper-V Hypervisor TLFS, which enables Xen to run Windows guests with performance similar to what they would achieve on Hyper-V. In addition, this work lays the groundwork for running Hyper-V within Xen in the future, using nested virtualization.
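
The set of enlightenments exposed to a guest is selected in its xl configuration; a minimal sketch (group names as documented in xl.cfg(5); the available set varies by release):

 viridian = [ "base", "time" ]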

Multi-Release Long-Term Development

This section covers development that spans several release cycles.

Transition from PVHv1 to PVHv2: Xen 4.8 laid the groundwork for re-architecting and simplifying PVH, focusing on the DomU guest ABI, which enabled guest operating system developers to start porting their OSes to this mode. Support for FreeBSD is in progress, while support for Linux is committed. Xen 4.9 added Dom0 builder support and support for multiple virtual Intel I/O Advanced Programmable Interrupt Controllers (vIO APICs). Support for interrupt routing and PCI emulation is currently being peer reviewed and can be expected early in the Xen 4.10 release cycle; this lays the groundwork for a PVHv2 Dom0. For PVHv2 DomU support, work on PCI passthrough and a major re-work of the xl/libxl and libvirt user interfaces for PVH has been started. Support for PVHv1 has been removed from the Xen codebase.
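
For illustration only, a minimal PVH DomU configuration sketch (using the xl syntax that stabilised in later releases; during the 4.9 cycle the exact configuration knobs were still in flux):

 name   = "pvh-guest"
 type   = "pvh"
 kernel = "/boot/vmlinuz"
 memory = 1024
 vcpus  = 2
 disk   = [ "format=raw, vdev=xvda, target=/var/lib/xen/pvh.img" ]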

Reworking the Xen-QEMU integration to protect against QEMU security vulnerabilities: In Xen 4.8 we embarked on an effort to re-work the Xen-QEMU integration, which amounts to sandboxing QEMU within Dom0. Significant progress was made towards this goal in Xen 4.9 with the implementation of DMOP. Other changes, such as de-privileging QEMU in Dom0 and changes to the Linux privcmd driver, have been mostly completed in Xen 4.9. Changes currently being designed include the necessary modifications to libxl and to QEMU's usage of XenStore.

Hypervisor Changelog

To Do:

Add for final RC