Xen ARM with Virtualization Extensions whitepaper

== What is Xen? ==
[http://www.xenproject.org/developers/teams/hypervisor.html Xen] is a lightweight, high-performance, Open Source hypervisor. Xen has a very low footprint: the ARM port amounts to less than 90K lines of code. Xen is licensed [http://www.gnu.org/licenses/gpl-2.0.html GPLv2] and has a healthy and diverse community that supports it and funds its development. Xen is hosted by the [http://www.xenproject.org Linux Foundation], which provides [http://www.xenproject.org/join.html stewardship] for the project.


== The Xen Architecture ==
Xen is a type-1 hypervisor: it runs directly on the hardware, and everything else in the system runs as a virtual machine on top of Xen, including Dom0, the first virtual machine. Dom0 is created by Xen; it is privileged and drives the devices on the platform.
Xen virtualizes CPU, memory, interrupts and timers, providing virtual machines with one or more virtual CPUs, a fraction of the memory of the system, a virtual interrupt controller and a virtual timer. Xen assigns devices such as SATA controllers and network cards to Dom0, taking care of remapping MMIO regions and IRQs. Dom0 (typically Linux, but it could also be FreeBSD or another operating system) runs the same device drivers for these devices that it would use when running natively.

Dom0 also runs a set of drivers called ''paravirtualized backends'' to give the other unprivileged virtual machines access to disk, network, etc. The operating system running as a DomU (unprivileged guest in Xen terminology) gets access to a set of generic virtual devices by running the corresponding ''paravirtualized frontend'' drivers. A single backend services multiple frontends. A pair of paravirtualized drivers exists for all the most common classes of devices: disk, network, console, framebuffer, mouse, keyboard, etc. They usually live in the operating system kernel, e.g. Linux. A few PV backends can also run in userspace in QEMU. The frontends connect to the backends using a simple ring protocol over a shared page in memory. Xen provides all the tools for discovery and to set up the initial communication. Xen also provides a mechanism for the frontend and the backend to share additional pages and notify each other via software interrupts.
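To give an idea of how the ring protocol works, the snippet below sketches a much simplified request ring in C. It is only an illustration: the real interface is defined by Xen's public/io/ring.h macros and the per-device protocol headers, and all names below are made up.
<pre>
/* Illustrative sketch of a PV shared ring: a single page shared between
 * frontend and backend, holding producer/consumer indices and circular
 * arrays of request/response slots. All names are invented for this
 * example; the real protocol lives in Xen's public/io/ring.h. */
#include <stdint.h>

#define RING_SLOTS 32

struct demo_request  { uint64_t id; uint64_t sector; uint32_t op; };
struct demo_response { uint64_t id; int32_t  status; };

struct demo_ring {
    uint32_t req_prod, req_cons;        /* updated by frontend / backend */
    uint32_t rsp_prod, rsp_cons;        /* updated by backend / frontend */
    struct demo_request  req[RING_SLOTS];
    struct demo_response rsp[RING_SLOTS];
};

/* Frontend side: queue a request; in real code the frontend then notifies
 * the backend via an event channel so it knows there is work to do. */
static int demo_ring_put_request(struct demo_ring *r,
                                 const struct demo_request *req)
{
    if (r->req_prod - r->req_cons == RING_SLOTS)
        return -1;                       /* ring full */
    r->req[r->req_prod % RING_SLOTS] = *req;
    /* A real implementation issues a write barrier here so the backend
     * sees the request body before the updated producer index. */
    r->req_prod++;
    return 0;
}
</pre>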
[[File:Xen arch1.png|600px|center|frameless]]


Even though it is the most common configuration, there is no reason to run all the device drivers and all the paravirtualized backends in Dom0. The Xen architecture allows ''driver domains'': unprivileged virtual machines whose only purpose is to run the driver and the paravirtualized backend for one class of devices. For example you can have a disk driver domain, with the SATA controller assigned, running the driver for it and the disk paravirtualized backend. You can have a network driver domain with the network card assigned, running the driver for it and the network paravirtualized backend. As driver domains are regular unprivileged guests, they make the system more ''secure'' because they allow large pieces of code, such as the entire network stack, to run unprivileged. Even if a malicious guest manages to take over the paravirtualized network backend and the network driver domain, it would not be able to take over the entire system. Driver domains also improve ''isolation'' and ''resilience'': the network driver domain is fully isolated from the disk driver domain and Dom0. If the network driver crashes it cannot take down the entire system, only the network. It is possible to reboot just the network driver domain while everything else remains online.
Finally, driver domains allow Xen users to ''disaggregate'' and ''componentize'' the system in ways that would not be possible otherwise. For example, they allow users to run a real-time operating system alongside the main OS to drive a device that has real-time constraints. They allow users to run a legacy OS to drive old devices that do not have drivers in modern operating systems. They allow users to separate and isolate critical functionalities from less critical ones: for example, running an OS such as QNX to drive most devices on the platform alongside Android for the user interface.
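As an illustration, this is roughly how a disk driver domain is used from xl. The domain names, kernel and paths are hypothetical, and the exact set of supported keys (driver_domain, backend=) should be checked against the xl.cfg and xl-disk-configuration documentation for the Xen version in use.
<pre>
# xl configuration fragment for the storage driver domain itself
# (illustrative names; driver_domain marks the domain as a backend provider)
name          = "diskdom"
kernel        = "/boot/vmlinuz-diskdom"
memory        = 512
vcpus         = 1
driver_domain = 1

# xl configuration fragment for a guest whose virtual disk is served by
# the driver domain rather than by Dom0 (backend= selects the backend domain)
name = "guest1"
disk = [ 'format=raw, vdev=xvda, access=rw, backendtype=phy, target=/dev/vg/guest1, backend=diskdom' ]
</pre>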
[[File:Xen arch2.png|630px|center|frameless]]


== Xen on ARM: a cleaner architecture ==
Xen on ARM is not just a straight 1:1 port of x86 Xen. We exploited the opportunity to clean up the architecture and get rid of the cruft that accumulated during the many years of x86 development. Firstly, ''we removed any need for emulation''. Emulated interfaces are slow and insecure. QEMU, used for emulation on x86 Xen, is a well maintained Open Source project, but it is big both in terms of binary size and lines of source code. ''The smaller, the simpler, the better''. Xen on ARM does not need QEMU because it does not do any emulation. It accomplishes this by exploiting virtualization support in hardware as much as possible and by using paravirtualized interfaces for IO. As a result Xen on ARM is faster and more secure.


On x86 two different kinds of Xen guest coexist: PV guests, such as Linux and other Open Source OSes, and HVM guests, usually Microsoft Windows, although any OS can run as an HVM guest. PV and HVM guests are quite different from the hypervisor point of view. The difference is exposed all the way up to the user, who needs to choose how to run the guest by setting a line in the VM config file. On ARM we did not want to introduce this differentiation: we felt it was artificial and confusing. ''Xen on ARM only supports one kind of guest that is the best of both worlds'': it does not need any emulation and relies on paravirtualized interfaces for IO as early as possible in the boot sequence, like x86 PV guests. It exploits virtualization support in hardware as much as possible and does not require invasive changes to the guest operating system kernel in order to run, like x86 HVM guests.


''The new architecture designed for Xen on ARM is much cleaner and simpler, and it turned out to be a very good match for the hardware''.


== Xen on ARM: virtualization extensions ==
ARM virtualization extensions provide three levels of execution: EL0 (user mode), EL1 (kernel mode) and EL2 (hypervisor mode). They introduce a new instruction, HVC, to switch between kernel mode and hypervisor mode. The MMU supports two stages of translation. The generic timers and the GIC interrupt controller are virtualization aware.
[[File:Xen arm arch1.png|750px|center|frameless]]
ARM virtualization extensions are a great fit for the Xen architecture:
* Xen runs entirely and only in hypervisor mode <br />Xen leaves kernel mode for the guest operating system kernel and EL0 for guest user space applications. Type-2 hypervisors need to switch frequently between hypervisor mode and kernel mode. By running entirely in EL2, Xen significantly reduces the number of context switches required.
* HVC, the new instruction, is used by the guest kernel to issue hypercalls to Xen (see the sketch after this list)
* Xen uses 2-stage translation in the MMU to assign memory to virtual machines
* Xen uses the generic timers to receive timer interrupts, to inject timer interrupts into virtual machines and to expose the counter to them
* Xen uses the GIC to receive interrupts and to inject interrupts into guests
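To make the hypercall interface concrete, the sketch below shows the shape of a hypercall stub as a 64-bit guest kernel might implement it. It assumes the calling convention described in Xen's public/arch-arm.h header (hypercall number in x16, arguments in x0-x4, the immediate 0xEA1 marking a Xen hypercall); it is a simplified illustration, not a replacement for the stubs that Linux already ships.
<pre>
/* Sketch of a two-argument Xen hypercall stub for an AArch64 guest kernel.
 * Assumes the ABI documented in Xen's public/arch-arm.h: hypercall number
 * in x16, arguments in x0..x4, "hvc #0xEA1" traps into Xen (EL2), and the
 * return value comes back in x0. */
static inline long xen_hypercall_2(unsigned long op,
                                   unsigned long arg1, unsigned long arg2)
{
    register unsigned long x16 asm("x16") = op;    /* hypercall number */
    register unsigned long x0  asm("x0")  = arg1;  /* first argument   */
    register unsigned long x1  asm("x1")  = arg2;  /* second argument  */

    asm volatile("hvc #0xEA1"                      /* enter Xen at EL2 */
                 : "+r" (x0), "+r" (x1)
                 : "r" (x16)
                 : "memory");

    return x0;                                     /* hypercall result */
}
</pre>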
[[File:Xen arm arch2.png|750px|center|frameless]]
Xen discovers the hardware via device tree. It assigns all the devices that it does not use to Dom0 by remapping the corresponding MMIO regions and interrupts. It generates a flattened device tree binary for Dom0 that describes exactly the environment exposed to it. Dom0's device tree contains:
* the exact number of virtual cpus that Xen created for it (possibly fewer than the number of physical cpus on the platform)
* the exact amount of memory that Xen gave to it (always less than the amount of physical memory available)
* the devices that Xen re-assigned to it and no more (not all devices are assigned to Dom0; at the very least one UART is not)
* a hypervisor node to advertise the presence of Xen on the platform
Dom0 boots exactly the same way it would boot natively. By using device tree to discover the hardware, Dom0 finds out what is available and loads the drivers for it. It does not try to access interfaces that are not present and therefore ''Xen does not need to do any emulation''. By finding the Xen hypervisor node, Dom0 knows that it is running on Xen and can therefore initialize the paravirtualized backends. Other DomUs load the paravirtualized frontends instead.
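As a reference, the hypervisor node that Xen adds to Dom0's device tree looks roughly like the snippet below. The compatible strings follow the "xen,xen" binding; the memory region and interrupt specifier are illustrative, since Xen chooses them at boot.
<pre>
hypervisor {
	compatible = "xen,xen-4.4", "xen,xen";
	/* Memory region reserved for mapping the grant table */
	reg = <0xb0000000 0x20000>;
	/* PPI used by Xen to notify the guest of pending event channels */
	interrupts = <1 15 0xf08>;
};
</pre>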


== Xen on ARM: code size ==
We wrote previously that the new architecture turned out to be a very good match for the hardware. The code size proves it: the smaller, the better. ''Xen on ARM is about 1/6 of the code size of x86_64 Xen'', while still providing a similar level of features.
In Xen 4.4.0:
{| class="wikitable" style="margin-left: auto; margin-right: auto; text-align: center; border-color: black; border-style: solid; border-width: 1px 1px 1px 1px; padding: 5;"
{| class="wikitable" style="margin-left: auto; margin-right: auto; text-align: center; border-color: black; border-style: solid; border-width: 1px 1px 1px 1px; padding: 5;"
|-
|-
Line 39: Line 51:
|-
|-
|style="text-align:left;"|xen/arch/arm
|style="text-align:left;"|xen/arch/arm
|5,122
|11,767
|1,969
|3,503
|1,812
|821
|17,082
|7912
|-
|-
|style="text-align:center;"|C
|style="text-align:center;"|C
|5,023
|11,587
|406
|954
|344
|813
|5,773
|13,354
|-
|-
|style="text-align:center;"|ASM
|style="text-align:center;"|ASM
|99
|180
|1,563
|2,549
|477
|999
|2,139
|3,728
|-
|-
|style="text-align:left;"|xen/include/asm-arm
|style="text-align:left;"|xen/include/asm-arm
|2,315
|4,786
|563
|984
|1,050
|666
|3,544
|6,820
|-
|-
|style="background-color:LightBlue; text-align:left;"|Total ARM
|style="background-color:LightBlue; text-align:left;"|Total ARM
|style="background-color:LightBlue;"|7,437
|style="background-color:LightBlue;"|16,553
|style="background-color:LightBlue;"|2,532
|style="background-color:LightBlue;"|4,487
|style="background-color:LightBlue;"|1,487
|style="background-color:LightBlue;"|2,862
|style="background-color:LightBlue;"|11,456
|style="background-color:LightBlue;"|23,902
|-
|-
|colspan="4" style="background-color:Tomato; text-align:center;"|x86_64
|colspan="4" style="background-color:Tomato; text-align:center;"|x86_64
Line 72: Line 84:
|-
|-
|colspan="4" style="text-align:left;"|xen/arch/x86
|colspan="4" style="text-align:left;"|xen/arch/x86
|124,851
|124,615
|-
|-
|colspan="4" style="text-align:left;"|xen/include/asm-x86
|colspan="4" style="text-align:left;"|xen/include/asm-x86
|18,542
|18,530
|-
|-
|colspan="4" style="background-color:Tomato; text-align:left;"|Total x86_64
|colspan="4" style="background-color:Tomato; text-align:left;"|Total x86_64
|style="background-color:Tomato;"|143,393
|style="background-color:Tomato;"|143,145
|}
|}


== Porting Xen to a new SoC ==
Assuming that you already have a functional Dom0 kernel (usually Linux) for your SoC, porting Xen to it is a very simple task.
In fact in terms of devices, Xen only uses:
* GIC
* generic timers
* SMMU
* one UART for debugging

Therefore the porting effort is limited to writing a new UART driver for Xen (if the SoC comes with an unsupported UART) and the code to bring up secondary CPUs (if the platform does not support PSCI, for which Xen already has a driver).
See for example the [http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/drivers/char/exynos4210-uart.c Exynos 4210 Xen driver] and the [http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/arch/arm/platforms/exynos5.c Exynos5 platform code].
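For a new SoC the platform glue is usually a small file under xen/arch/arm/platforms/. The sketch below shows the general shape, loosely modeled on the Exynos5 code linked above; the macro and hook names are indicative and may differ between Xen releases, so treat it as an outline rather than a reference implementation.
<pre>
/* Outline of a Xen platform description for a hypothetical "acme" SoC,
 * loosely modeled on xen/arch/arm/platforms/exynos5.c. Hook and macro
 * names should be checked against the Xen tree in use. */
static const char * const acme_dt_compat[] __initconst =
{
    "acme,acme-soc",          /* top-level compatible string of the SoC */
    NULL
};

static int __init acme_smp_init(void)
{
    /* Without PSCI firmware, tell secondary CPUs where to jump: typically
     * write the address of Xen's secondary entry point into a SoC-specific
     * register or mailbox and wake the CPUs up. */
    return 0;
}

PLATFORM_START(acme, "ACME SOC")
    .compatible = acme_dt_compat,
    .smp_init   = acme_smp_init,
PLATFORM_END
</pre>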

If you need to debug the interrupts, have a look at the function do_IRQ() in Xen. All interrupts are taken by Xen through the GIC and routed to do_IRQ(), which then dispatches each IRQ either to a guest or to a Xen-specific handler. Xen itself handles only a limited number of interrupts: the timers, the UART and the SMMU. The rest are either routed to guests or blacklisted by Xen.


== Porting an operating system to Xen on ARM ==
Porting an OS to Xen on ARM is easy: it does not require any changes to the operating system kernel, only a few new drivers to get the paravirtualized frontends running and to obtain access to network, disk, console, etc.
The paravirtualized frontends rely on the following building blocks (a short sketch of how a frontend uses them follows the list):
* grant table for page sharing ([https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/xen/grant-table.c linux version], [http://svnweb.freebsd.org/base/head/sys/xen/gnttab.c?view=co freebsd version])
* xenbus for discovery ([https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/xen/xenbus linux version], [http://svnweb.freebsd.org/base/head/sys/xen/xenbus/ freebsd version])
* event channels for notifications ([https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/xen/events linux version], [http://svnweb.freebsd.org/base/head/sys/x86/xen/xen_intr.c freebsd version])
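The snippet below sketches how a Linux frontend typically uses these primitives during setup: grant the backend access to a shared page, allocate an event channel for notifications, and advertise both through xenbus. The function names come from the Linux Xen support code, but the sequence is simplified and error handling is omitted, so treat it as an outline rather than working driver code.
<pre>
/* Simplified outline of frontend setup in a Linux PV driver: share one
 * page with the backend via the grant table, allocate an event channel
 * for notifications, and publish both references on xenbus so the
 * backend can find them. Error handling and teardown are omitted. */
#include <xen/grant_table.h>
#include <xen/events.h>
#include <xen/xenbus.h>
#include <xen/page.h>

static int demo_front_setup(struct xenbus_device *dev, void *ring_page)
{
    grant_ref_t ring_ref;
    evtchn_port_t evtchn;
    struct xenbus_transaction xbt;

    /* Grant the backend domain access to the shared ring page */
    ring_ref = gnttab_grant_foreign_access(dev->otherend_id,
                                           virt_to_gfn(ring_page), 0);

    /* Allocate an inter-domain event channel for notifications */
    xenbus_alloc_evtchn(dev, &evtchn);

    /* Publish the grant reference and event channel on xenbus so the
     * backend can map the ring and bind the other end of the channel */
    xenbus_transaction_start(&xbt);
    xenbus_printf(xbt, dev->nodename, "ring-ref", "%u", ring_ref);
    xenbus_printf(xbt, dev->nodename, "event-channel", "%u", evtchn);
    xenbus_transaction_end(xbt, 0);

    return 0;
}
</pre>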
Once the OS has support for the basic building blocks, the next step is introducing the paravirtualized frontend drivers. You are likely to be able to reuse the existing ones:

* network ([https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/xen-netfront.c linux version], [http://svnweb.freebsd.org/base/head/sys/dev/xen/netfront/ freebsd version])
* block ([https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/block/xen-blkfront.c linux version], [http://svnweb.freebsd.org/base/head/sys/dev/xen/blkfront/ freebsd version])
* console ([https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/tty/hvc/hvc_xen.c linux version], [http://svnweb.freebsd.org/base/head/sys/dev/xen/console/ freebsd version])
* framebuffer ([https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/video/xen-fbfront.c linux version])
* keyboard and mouse ([https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/input/misc/xen-kbdfront.c linux version])

== Mobile platforms and new PV protocols ==
Virtualizing a modern mobile platform involves dealing with devices such as the camera, compass, GPS, etc, for which PV frontend and backend drivers do not exist today.
If only one VM needs access to one of these devices at a time, you can simply assign the device to that VM, remapping the corresponding MMIO regions and interrupts (see the sketch at the end of this section).
If multiple VMs need access to the device simultaneously, you have to write a new pair of PV frontend and backend drivers. Fortunately many open source implementations of PV frontends and backends for different classes of devices already exist in Linux and other operating systems, so something similar is likely to exist already. The difficulty of writing a new pair of PV frontends and backends increases with the complexity of the device you are trying to share. If the device is simple, such as the compass, writing the new pair of drivers is going to be very easy. If the device is complex, such as a 3D graphics accelerator, writing the new pair of frontends and backends is going to be difficult.
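For the single-VM case, device assignment is expressed in the guest's xl configuration. The fragment below is illustrative: the device tree path, MMIO range and interrupt number are made up, and the keys (dtdev, iomem, irqs) should be checked against the xl.cfg documentation for the Xen version in use.
<pre>
# Illustrative xl fragment assigning a platform device to a single guest:
# pass through its device tree node, MMIO region and interrupt.
dtdev = [ '/soc/gps@2b000000' ]   # device tree node of the device
iomem = [ '2b000,1' ]             # MMIO: start (in 4K pages), number of pages
irqs  = [ 112 ]                   # physical IRQ routed to the guest
</pre>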

[[Category:XenARM]]
[[Category:Developers]]
[[Category:OpenEmbedded]]
[[Category:Xen 4.3]]
[[Category:Xen 4.4]]
