Driver Domain





Xen Driver Domain

A driver domain is an unprivileged Xen domain that has been given responsibility for a particular piece of hardware. It runs a minimal kernel containing only the driver for that hardware and the backend driver for that device class. Thus, if the hardware driver fails, the other domains (including dom0) will survive and, once the driver domain is restarted, will be able to use the hardware again.

Benefits

  • Performance: eliminates the dom0 bottleneck. When every device backend runs in dom0, dom0 becomes a point of contention and response latency suffers.
  • Enhanced security: Hardware drivers are the most failure-prone part of an operating system. Isolating a driver from the rest of the system means that, when it fails, it can simply be restarted without affecting the rest of the machine.

Setup

Software requirements of the driver domain:

  • The driver for the passed-through hardware.
  • The backend driver.
  • The hotplug scripts.

However, PCI passthrough is optional. For example, you can set up a driver domain that only does network packet filtering and passes the packets on to dom0 for transmission.
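
A guest is pointed at such a driver domain through the backend= key of its vif configuration line. This is a minimal sketch assuming the xm toolstack; the driver domain name netdom and the bridge name xenbr0 are placeholders:

vif = [ 'bridge=xenbr0, backend=netdom' ]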

To pass a device through to a domain, see: http://zhigang.org/wiki/XenPCIPassthrough.
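
As a rough sketch of the usual procedure on a classic xm-based system: the device is unbound from its dom0 driver and handed to pciback (this assumes the pciback driver is present in dom0; the PCI address 0000:03:00.0 is purely an example):

# echo -n 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
# echo -n 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
# echo -n 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind

The device is then listed in the driver domain's configuration file:

pci = [ '0000:03:00.0' ]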

To start using the driver domain as a block device backend:

# xm block-attach <Domain> file:/disks/disk1.img xvdc w <DriverDomain>
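
Here the final argument names the driver domain that will serve as the backend for the device. Assuming the xm toolstack, the attachment can afterwards be verified from dom0:

# xm block-list <Domain>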


The resulting xenstore entries look like this (domain 3 is the driver domain, domain 4 is the guest using the device):

...
/local/domain/3/backend/vbd/4/51744=""
/local/domain/3/backend/vbd/4/51744/domain="OVM_EL5U3_X86_PVM_4GB"
/local/domain/3/backend/vbd/4/51744/frontend="/local/domain/4/device/vbd/51744"
/local/domain/3/backend/vbd/4/51744/uuid="1c955e9d-c6a8-dcee-dfb2-fcbe1e836479"
/local/domain/3/backend/vbd/4/51744/bootable="0"
/local/domain/3/backend/vbd/4/51744/dev="xvdc"
/local/domain/3/backend/vbd/4/51744/state="1"
/local/domain/3/backend/vbd/4/51744/params="/share/vm/disks/disk18.img"
/local/domain/3/backend/vbd/4/51744/mode="w"
/local/domain/3/backend/vbd/4/51744/online="1"
/local/domain/3/backend/vbd/4/51744/frontend-id="4"
/local/domain/3/backend/vbd/4/51744/type="file"
/local/domain/3/backend/vbd/4/51744/node="/dev/loop0"
/local/domain/3/backend/vbd/4/51744/physical-device="7:0"
/local/domain/3/backend/vbd/4/51744/hotplug-status="connected"
...
/local/domain/4/device/vbd/51744=""
/local/domain/4/device/vbd/51744/virtual-device="51744"
/local/domain/4/device/vbd/51744/device-type="disk"
/local/domain/4/device/vbd/51744/protocol="x86_32-abi"
/local/domain/4/device/vbd/51744/backend-id="3"
/local/domain/4/device/vbd/51744/state="3"
/local/domain/4/device/vbd/51744/backend="/local/domain/3/backend/vbd/4/51744"
/local/domain/4/device/vbd/51744/ring-ref="836"
/local/domain/4/device/vbd/51744/event-channel="11"
...
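
These entries can be inspected at runtime from dom0 with the xenstore tools; for example, to dump the backend subtree shown above:

# xenstore-ls /local/domain/3/backend/vbd/4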


Limitations

Due to architectural limitations of most PC hardware, driver domains with direct hardware access cannot be fully isolated. In effect, any domain that has direct hardware access has to be considered “trusted”.

The reason is that all IO is done using physical addresses.

Consider the following case:

  • domain A is mapped in 0-2GB of physical memory
  • domain B is mapped in 2-4GB of physical memory
  • domain A has direct access to a PCI NIC
  • domain A programs the NIC to DMA into the 2-4GB physical memory range, overwriting domain B’s memory. Oops!

The solution is a hardware unit known as an “IOMMU” (IO Memory Management Unit), which translates and restricts the addresses a device may use for DMA, so that a driver domain's device can only reach memory belonging to that domain.
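
Whether the hypervisor found and enabled an IOMMU (Intel VT-d or AMD-Vi) can be checked in the Xen boot log; the exact message text varies between Xen versions:

# xm dmesg | grep -i 'iommu\|vt-d'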
