Xen Project 4.7 Feature List

To make it easier to understand the major changes during this release cycle, I’ve grouped them below into several categories:

  • Security Features
  • Live Migration Support
  • Performance and Workloads
  • Usability Improvements
  • New Hardware Support

Security Features

Reboot-free Live Patching: Xen Project Hypervisor 4.7 comes equipped with Live Patching, a technology that enables reboot-free deployment of security patches, minimizing disruption and downtime during security upgrades for system administrators and DevOps practitioners. Xen Project 4.7 implements version 1 of the Xen Project’s Live Patching specification, which is designed to encode the vast majority of security patches (approximately 90%) as Live Patching payloads. This version ships with a Live Patching-enabled hypervisor and payload deployment tools, and is available as a technology preview.
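For illustration, deploying a payload with the xen-livepatch tool might look like the following sketch (the payload name and path are hypothetical, and exact sub-commands may vary with the tooling version):

  # Upload a payload to the running hypervisor, apply it, and check its state
  xen-livepatch upload myfix /usr/lib/xen/myfix.livepatch
  xen-livepatch apply myfix
  xen-livepatch list           # show uploaded payloads and whether they are applied
  xen-livepatch revert myfix   # back the patch out again if necessary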

KCONFIG support: For security, embedded, automotive and IoT use cases, Xen Project introduced the ability to remove core Xen Hypervisor features at compile time via KCONFIG. This creates a more lightweight hypervisor and removes extra attack surface, which is beneficial in security-first environments, microservice architectures and environments with heavy compliance and certification needs, such as automotive.
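As a rough sketch, trimming the hypervisor follows the usual Kconfig workflow (assuming an in-tree build; deselecting core features may additionally require the XEN_CONFIG_EXPERT=y make variable):

  # From the top of the Xen source tree:
  make -C xen menuconfig XEN_CONFIG_EXPERT=y   # deselect unneeded subsystems
  make xen                                     # rebuild the slimmed-down hypervisor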

Improvements to the Virtual Machine Introspection (VMI) subsystem: A number of performance, scalability, robustness and interface improvements have been made to the Virtual Machine Introspection subsystem, which was introduced in Xen 4.5. In addition, Bitdefender Hypervisor Introspection, which leverages Xen Project Virtual Machine Introspection, has recently been released as a new enterprise security solution to discover and remedy deep threats that remain hidden from traditional endpoint security tools.

Foundation work to tolerate a restartable Dom0: Several key components in a Xen Project system run in Dom0, making Dom0 a single point of failure. Since Xen Project version 4.2, it has been possible to run xenstored, the daemon for managing the hypervisor’s central settings repository on a Xen Project host, in a sandboxed virtual machine called a xenstored stub domain. In Xen 4.7, we have made it easier to build xenstored stub domains and enabled them to tolerate a Dom0 restart. This makes Dom0 less critical to a Xen Project system and helps us move towards a more robust and secure architecture in the future. More work in this area is expected in subsequent releases.

Live Migration Support

Improved Live Migration support: CPUID levelling enables migration of VMs across a wider range of non-identical hosts than previously supported.
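The administrator-facing workflow is unchanged; with levelling, an invocation such as the following (hostname hypothetical) can now succeed between hosts whose CPUs do not advertise identical feature sets:

  xl migrate vm1 root@desthost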

Fault Tolerance / Coarse-grained Lock-stepping (COLO): Xen 4.5 laid the foundation for COLO while improving the Xen Project Hypervisor’s Live Migration and Remus High Availability support. The COLO Manager, which introduces a relaxed approach to checkpointing that avoids unnecessary checkpoints and enables near-native performance for many workloads, has been fully integrated as an experimental feature into Xen 4.7. Note that the COLO Block Replication and COLO Proxy components, both of which are QEMU components, are currently still under review by the QEMU community. Both components are available as out-of-tree add-ons to the Xen Project Hypervisor until fully integrated into QEMU.

Performance and Workloads

Support for a wider range of workloads and applications: The 512 GB limit for PV guests has been removed, allowing the creation of huge PV domains in the TB range. TB-sized VMs, coupled with Xen Project’s existing support for 512 vCPUs per VM, enable execution of extremely memory- and compute-intensive workloads such as big data analytics and in-memory databases.
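As a minimal sketch, a TB-range PV guest configuration differs from a small one only in its memory and vCPU settings (all names and paths below are illustrative):

  # xl guest configuration for a large PV domain
  name   = "bigpv"
  kernel = "/boot/vmlinuz-guest"            # PV guest kernel
  memory = 1048576                          # 1 TiB, expressed in MiB
  vcpus  = 128
  disk   = [ 'phy:/dev/vg0/bigpv,xvda,w' ]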

Improved Credit2 scheduler: The Credit2 scheduler is one (big) step closer to being ready for production use. It is now possible to instruct the scheduler to organize its runqueues and perform load balancing at core, socket or NUMA node granularity. More fine-grained (per-core) configurations deliver more aggressive load balancing and are best suited to medium-sized systems; this configuration has been shown to deliver very good performance, especially when Hyper-Threading is present.

Coarser-grained configurations entail less overhead and are better suited to larger servers or hosts without Hyper-Threading. In addition, Credit2 has been extended to allow pinning of vCPUs to pCPUs (also known as “hard affinity”), letting system administrators configure the system exactly as they want and achieve the best setup for a given workload (for instance, guaranteeing that a certain subset of vCPUs is always able to run when it needs to run).
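For example, runqueue granularity is selected on the hypervisor command line, while hard affinity is set per vCPU with xl (boot-line placement and domain names are illustrative):

  # Hypervisor boot parameters: use Credit2 with per-socket runqueues
  sched=credit2 credit2_runqueue=socket

  # Pin vCPU 0 of vm1 to pCPUs 0-3 (hard affinity)
  xl vcpu-pin vm1 0 0-3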

Improved RTDS scheduler: The RTDS scheduler is a real-time CPU scheduler built to provide guaranteed CPU capacity to guest VMs on SMP hosts, primarily targeting embedded, real-time and low-latency workloads. In Xen Project 4.7, the scheduling model has been changed from quantum-driven to event-driven, which reduces scheduling overhead and thus improves scalability and performance for embedded and real-time workloads. In addition, per-vCPU parameter configuration has been added to allow finer scheduler control for specialised workloads.
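Per-vCPU parameters are managed through xl sched-rtds; a sketch of the new interface, assuming the period and budget values shown (in microseconds) suit the workload:

  # Give vCPU 0 of vm1 a 4000us budget in every 10000us period
  xl sched-rtds -d vm1 -v 0 -p 10000 -b 4000

  # Show the per-vCPU parameters of vm1
  xl sched-rtds -d vm1 -v all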

Per-CPU reader-writer lock: This new infrastructure gives the read-lock fast path very low overhead, requiring only that a per-CPU variable be set or cleared to take and release the read lock. After converting various hypervisor locks to this infrastructure, VM-to-VM network transfer with 16 queues jumped from 15 Gbit/s to 48 Gbit/s on a 2-socket Haswell-EP host.

Usability Improvements

PVUSB Support: Xen Project 4.7 introduces a new xl command-line interface for managing PVUSB devices in PV guests. Both the in-kernel PVUSB backend and the QEMU backend are supported.
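A typical hot-plug session might look like this sketch (domain name, bus and device numbers are illustrative):

  # Create a PVUSB controller in vm1, then pass through host USB device 2.3
  xl usbctrl-attach vm1 version=2 ports=4
  xl usbdev-attach vm1 hostbus=2 hostaddr=3
  xl usb-list vm1                # inspect controllers and attached devices
  xl usbdev-detach vm1 0 1       # detach the device on controller 0, port 1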

Hot plugging of QEMU disk backends: Xen now enables hot-plugging of USB devices, as well as QEMU disk backends such as drbd and iscsi, in HVM guests. This new feature allows users to add and remove disk backends from virtual machines without needing to reboot the guest.
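For instance, an extra disk can be attached to and detached from a running guest without a reboot (paths and device names illustrative):

  # Hot-plug a raw image as xvdb, list it, and remove it again
  xl block-attach vm1 'format=raw, vdev=xvdb, access=rw, target=/srv/vm1-extra.img'
  xl block-list vm1
  xl block-detach vm1 xvdb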

Soft-reset: The soft-reset feature allows an HVM guest to perform a more graceful shutdown and restart.
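In the guest’s xl configuration, the behaviour is selected via the on_soft_reset action (a sketch, assuming the default action names):

  # Handle a guest-initiated soft reset in place rather than as a full reboot
  on_soft_reset = "soft-reset"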

New Hardware Support

ARM Specific

SBBR Compliance: Xen Project now supports booting on hosts that expose ACPI 6.0 (and later) information. The ARM Server Base Boot Requirements (SBBR) stipulate that compliant systems must express hardware resources with ACPI, so this support is particularly useful for ARM servers. This effort was carried out by Shannon Zhao of Linaro with minor patches from Julien Grall of ARM.

PSCI 1.0 Compatibility: PSCI 1.0 compatibility allows Xen Project software to operate on systems that expose PSCI 1.0 methods; all 1.x versions of PSCI are now compatible with Xen Project software. More information on the Power State Coordination Interface can be found in ARM’s PSCI specification. This effort was also carried out by Julien Grall, with a patch from Dirk Behme of Bosch.

vGIC-v3: The virtual Generic Interrupt Controller version 3 has been reworked to be spec-compliant and optimised in some code paths.

Wallclock support: ARM guests can now get wallclock time directly from the hypervisor via the shared info page.

Features specific to Intel® Xeon® processor product family

Improved Interrupt Efficiency: Xen Project 4.7 supports VT-d Posted Interrupts, which provide hardware-level acceleration to increase interrupt virtualization efficiency. This reduces latency and improves user experience through performance improvements, especially for interrupt-intensive front-end workloads such as web servers.
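In 4.7 this support is not enabled by default; a sketch of opting in, assuming the iommu= sub-option documented for this release:

  # Hypervisor boot parameter enabling VT-d posted interrupts
  iommu=intpost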

Code and Data Prioritization: Xen Project 4.7 is the first release to include Code and Data Prioritization (CDP), part of the Intel® Resource Director Technology (RDT) framework and an extension of Cache Allocation Technology (CAT), which was first introduced in Xen Project 4.6. CDP allows isolation of code and data within the shared L3 cache in multi-tenant environments, reducing contention and improving performance.
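A sketch of driving CDP from xl, assuming the psr= boot option and the code/data flags documented for this release (mask values are illustrative):

  # Enable CDP on the hypervisor command line
  psr=cdp

  # Give vm1 separate code and data L3 cache masks, then inspect the result
  xl psr-cat-cbm-set -c vm1 0xf0
  xl psr-cat-cbm-set -d vm1 0x0f
  xl psr-cat-show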

Other Intel Features: Additional features specific to the Intel Xeon processor family in Xen Project 4.7 include VMX TSC Scaling, which allows for easier migration between machines with different CPU frequencies, and support for Memory Protection Keys, a new security feature for hardening the software stack.