Xen 4.1 Release Notes
Xen 4.1.0 release announcement:
- Xen hypervisor 4.1.0 was released on 25th March 2011.
- Official release announcement email from the xen-devel mailing list: mailing list announcement
- blog.xen.org Xen 4.1.0 release announcement
- Download url: http://xen.org/download/index_4.1.0.html
Xen 4.1.1 release announcement:
- Xen hypervisor 4.1.1 was released on 15 June 2011.
- Official release announcement email from the xen-devel mailing list: mailing list announcement
- blog.xen.org Xen 4.1.1 release announcement
- Download url: http://xen.org/products/xen_source.html
For a full list of available Xen hypervisor downloads see: http://xen.org/products/xen_archives.html .
Overview of Xen 4.1
Xen 4.1 is the core hypervisor with basic command line management tools. You can use it with your favourite Linux distribution to build your own custom, secure, high performance virtualization solution. Some users add additional third party management tools and interfaces to create their own virtualization platform. More information about the available external 3rd party management tools and (web) interfaces can be found on the XenManagementTools wiki page.
If you're looking for an easy-to-get-started, "appliance" type version of Xen, shipped as an ISO image, take a look at Xen Cloud Platform (XCP): http://www.xen.org/products/cloudxen.html . XCP is an integrated, well-tested, ready-made and dedicated open source virtualization platform offering remote management APIs (it is not a generic Linux distribution).
Xen 4.1 new features
See the Xen4.0 wiki page for the features in the previous Xen 4.0 release.
Changes since Xen 4.0:
- A re-architected and improved XL toolstack that is functionally nearly equivalent to XM/XEND.
- Prototype credit2 scheduler designed for latency-sensitive workloads and very large systems.
- CPU Pools for advanced partitioning (see the example sketch after this list).
- Support for large systems (>255 processors and 1GB/2MB super page support).
- Support for x86 Advanced Vector Extensions (AVX).
- New Memory Access API enabling integration of 3rd party security solutions into Xen virtualized environments.
- Optimizations for Linux HVM guest IRQ delivery when running PV-on-HVM drivers.
- Fixes to xenpaging and memory sharing, but they are still considered 'Tech Previews'.
- Tmem fixes, but tmem is still disabled by default (enable it with a hypervisor command line option).
- Remus FT (Fault Tolerance) fixes.
- Documentation additions (vbd-interface.txt, vbd numbering and naming, xl disk configuration syntax, etc).
- Fixes to properly support jumbo frames (mtu 9000) with vif-bridge script.
- Many IOMMU fixes (both Intel VT-d IOMMU and AMD IOMMU).
- Many toolstack and buildsystem fixes for Linux and NetBSD hosts.
- Third-party libs: the libvirt driver for libxl has been merged into upstream libvirt.
- HVM guest PXE boot enhancements, replacing gPXE with iPXE.
- Interrupt (IRQ) delivery fixes, fixing keyboard/mouse on some laptops.
- Xentrace and xenoprofile fixes for analysing the hypervisor and VMs.
- Userspace qemu-based block device backend driver for use when the dom0 kernel does not have the kernel-based xen-blkback driver available (upstream Linux 3.0 was the first version to include the kernel xen-blkback driver, so older dom0 kernels benefit from the userspace qemu block backend).
- Even better stability through our new automated regression tests.
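To give a rough idea of the CPU Pools feature mentioned in the list above, the sketch below creates a second pool and moves a guest into it using the xl cpupool-* subcommands. The pool name, CPU numbers, domain name and config file path are made-up example values, and the exact config file keys may differ slightly; check the cpupool example and documentation shipped with your Xen 4.1 tree before relying on this.
# List the existing CPU pools; by default all CPUs live in the pool "Pool-0"
xl cpupool-list
# Free two CPUs from the default pool so they can be given to a new pool
xl cpupool-cpu-remove Pool-0 2
xl cpupool-cpu-remove Pool-0 3
# Create the new pool from a small config file (example values only)
cat > /etc/xen/cpupool-test.cfg <<'EOF'
name  = "testpool"
sched = "credit"
cpus  = ["2", "3"]
EOF
xl cpupool-create /etc/xen/cpupool-test.cfg
# Move an already running guest into the new pool
xl cpupool-migrate myguest testpool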
See the release announcements above for more information about some of the new feature highlights in Xen 4.1. For a full list of changes see the mercurial history/log at: http://xenbits.xen.org/xen-4.1-testing.hg .
Requirements for compiling Xen 4.1 from source
All the requirements and instructions for building Xen 4.0 also apply to Xen 4.1, so please check the Xen4.0 wiki page for more information about required packages/dependencies, installation instructions etc.
Additional requirements in Xen 4.1, on top of those for Xen 4.0:
- If you have ocaml installed, you also need to install ocaml-findlib; otherwise building Xen 4.1 fails (see the example below).
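For example, on a Debian or Ubuntu build host the extra dependency can be pulled in and the tree built roughly as follows; the package name and the "make world" / "make install" targets are the usual ones documented for the Xen 4.0/4.1 build system, but check the README in your source tree for the exact steps and version number.
# Only needed if ocaml is installed; package name as on Debian/Ubuntu
apt-get install ocaml-findlib
# Build and install Xen and its tools from the unpacked source tree
cd xen-4.1.1
make world
make install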
Known issues in Xen 4.1
- PVGRUB (based on MiniOS) seems broken for 32-bit PV domUs, but works OK for 64-bit PV domUs. This is a regression, since PVGRUB works in Xen 4.0.x for both 32-bit and 64-bit domUs.
- If Xen 4.1.1 is compiled with gcc 4.6 (Ubuntu 11.10+, Debian testing/unstable, Fedora 15+), hvmloader is miscompiled and crashes on start when trying to run Xen HVM guests. This bug has been fixed in the xen-4.1-testing.hg repository, so the fix will be included in the upcoming Xen 4.1.2 and later releases. See this email for more info about the bugfix patches: http://lists.xensource.com/archives/html/xen-devel/2011-07/msg00922.html
Toolstacks
Xen 4.1 still includes the old xm/xend toolstack, but the xl/libxl toolstack is now considered the primary toolstack. All new development should be done against xl/libxl. The old xm/xend toolstack will probably be removed during the upcoming Xen 4.2 development phase.
The xl/libxl toolstack is the new lightweight Xen management toolstack. It is written in C, which makes it fast and compact; the old xm/xend toolstack was written in Python. libxl provides all the common low-level functionality so that it can be shared by the higher-level toolstacks, such as the XCP XAPI and libvirt, avoiding code duplication, making Xen hypervisor management more robust and making integration with other programming languages easier.
Presentation about libxl (libxenlight) from Xen Summit 2010 North America: http://www.slideshare.net/xen_com_mgr/xen-summit-amd2010v3 .
Migrating from xm/xend toolstack to xl/libxl
Please check the MigrationGuideToXen4.1+ wiki page for instructions on how to migrate from the xm/xend toolstack to xl/libxl.
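Because xl keeps the xm command line syntax, everyday commands translate one to one; a few typical examples are shown below (the guest name and config file path are just placeholders):
xm list                         # becomes:  xl list
xm create /etc/xen/guest.cfg    # becomes:  xl create /etc/xen/guest.cfg
xm console guest                # becomes:  xl console guest
xm shutdown guest               # becomes:  xl shutdown guest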
Features in xl/libxl
The Xen 4.1 xl/libxl toolstack has most of the features of xm/xend. xl is intended as a drop-in replacement for xm: the command line syntax and the domain configuration file syntax of xl are the same as xm's.
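As an illustration, a minimal PV guest configuration file such as the one below works unchanged with both toolstacks; all names, paths and sizes are example values:
# /etc/xen/guest.cfg - example PV domain configuration (same syntax for xm and xl)
name    = "guest"
memory  = 1024
vcpus   = 2
kernel  = "/boot/vmlinuz-guest"
ramdisk = "/boot/initrd-guest"
root    = "/dev/xvda1 ro"
disk    = [ "phy:/dev/vg0/guest-disk,xvda1,w" ]
vif     = [ "bridge=xenbr0" ]
The guest is then started with "xl create /etc/xen/guest.cfg" (or "xm create guest.cfg" with the old toolstack).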
List of missing features from xl/libxl in Xen 4.1:
- PVUSB (for more info see PVUSB sections of: XenUSBPassthrough)
- PVSCSI (for more info see: XenPVSCSI)
- Remus FT (Fault Tolerance)
- VGA graphics card passthrough
- NUMA-aware memory allocation for VMs: xl in Xen 4.1 allocates an equal amount of memory from every NUMA node for the VM, while xm/xend allocates all the memory from a single NUMA node.
Xen 4.1 and dom0 network configuration
In Xen 4.1 with the xl/libxl toolstack you need to set up the network bridges manually, using the networking scripts or configuration files provided by your dom0 distribution. Even if you are still using the old xm/xend toolstack, using the Xen "network-bridge" network script is NOT recommended, because it is known to be problematic on many systems, especially with more custom setups such as IPv6. It has long been best practice to use the Linux distribution's default networking scripts and/or configuration files to set up the Xen dom0 (host) networking and bridges.
See the HostConfiguration/Networking wiki page for more information on how to set up the dom0 networking/bridges in the most commonly used Linux distributions. It is also worth reading the XenBestPractices wiki page.
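For example, on a Debian or Ubuntu dom0 a bridge can be defined with the distribution's own /etc/network/interfaces file roughly as below; the bridge name xenbr0, the physical interface eth0 and the use of DHCP are just example choices, and the bridge-utils package is assumed to be installed:
# /etc/network/interfaces (Debian/Ubuntu example)
auto lo
iface lo inet loopback
# Enslave the physical NIC to a bridge and let dom0 take its address on the bridge
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
Guest network interfaces are then attached to this bridge with vif = [ "bridge=xenbr0" ] in the domain configuration file.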
Xen 4.1 init scripts on Debian/Ubuntu
When installing Xen from source (.tar.gz) on Debian/Ubuntu, this is how you can enable automatic start of Xen related services on system startup:
update-rc.d xencommons defaults 19 18
update-rc.d xend defaults 20 21
update-rc.d xendomains defaults 21 20
update-rc.d xen-watchdog defaults 22 23
After making those changes reboot the system.
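After the reboot you can quickly verify that the toolstack can talk to the hypervisor, for example:
xl list     # should show at least Domain-0 running
xl info     # prints the Xen version, total memory and other host details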