Windows PV Drivers Presentation

Notes:

Slide 0

Hi, I’m Paul Durrant. I’m a principal engineer in the XenServer group at Citrix and I’m project lead for the XenProject Windows PV Drivers.

Slide 1

In this presentation I’m going to be giving an overview of the drivers.

We’ll start with the origins of the drivers, and the journey from the original XenServer-specific closed-source ‘Legacy’ drivers, through the open source XenServer drivers (dubbed the ‘Standard’ drivers in Citrix and available on GitHub), to the current generic XenProject drivers, the source of which is now hosted on Xenbits.

I’ll then move on to the way that functionality is broken down into interfaces, how they are provided and consumed, and how compatibility is maintained as they evolve.

And finally I’ll give a brief overview of what you need to do to build and install the drivers, and contribute to the project.

Slide 2

To start with I need to introduce some Windows driver terminology and some conventions I’ve used in the diagrams in this presentation.

Windows devices are organized into a tree, or a set of trees, rooted at what’s called a Physical Device Object or PDO. In my view of the world, trees grow downwards, so I put PDOs at the top.

Normally a PDO just represents a piece of hardware, which is not that useful unless you have some code to talk to it. That code is called a Function Driver and when a function driver attaches to a PDO it creates a corresponding Function Device Object or FDO.
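
To make that concrete, here is a rough sketch (in C, and not code from any of the drivers discussed in this presentation) of the AddDevice routine a WDM function driver supplies: the PnP manager hands it the PDO, it creates an FDO with IoCreateDevice, and then layers that FDO on top of the PDO with IoAttachDeviceToDeviceStack. The driver and extension names are made up for the example.

#include <ntddk.h>

typedef struct _SAMPLE_FDO_EXTENSION {
    PDEVICE_OBJECT  Pdo;        // the PDO this FDO is bound to
    PDEVICE_OBJECT  LowerDo;    // next object down the stack, used when passing IRPs on
} SAMPLE_FDO_EXTENSION, *PSAMPLE_FDO_EXTENSION;

NTSTATUS
SampleAddDevice(
    IN PDRIVER_OBJECT  DriverObject,
    IN PDEVICE_OBJECT  PhysicalDeviceObject    // the PDO enumerated by the parent bus driver
    )
{
    PDEVICE_OBJECT          Fdo;
    PSAMPLE_FDO_EXTENSION   Fdx;
    NTSTATUS                status;

    // Create the Function Device Object representing this driver's view of the device.
    status = IoCreateDevice(DriverObject,
                            sizeof(SAMPLE_FDO_EXTENSION),
                            NULL,                   // unnamed
                            FILE_DEVICE_UNKNOWN,
                            FILE_DEVICE_SECURE_OPEN,
                            FALSE,
                            &Fdo);
    if (!NT_SUCCESS(status))
        return status;

    Fdx = (PSAMPLE_FDO_EXTENSION)Fdo->DeviceExtension;
    Fdx->Pdo = PhysicalDeviceObject;

    // Layer the FDO on top of the PDO so that IRPs can be passed down the stack.
    Fdx->LowerDo = IoAttachDeviceToDeviceStack(Fdo, PhysicalDeviceObject);
    if (Fdx->LowerDo == NULL) {
        IoDeleteDevice(Fdo);
        return STATUS_NO_SUCH_DEVICE;
    }

    Fdo->Flags &= ~DO_DEVICE_INITIALIZING;
    return STATUS_SUCCESS;
}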

Unlike some OSes, such as Linux, Windows has a concept of demand-loading drivers. Hence function drivers do not contain code to discover their hardware. Instead they are part of a package described by what is called an INF file. In that INF file there are entries to tell Windows which PDO ‘names’ a particular function driver will ‘bind’ to. So, as Windows builds its device tree, it can look at the names of newly created PDOs and determine which function drivers to load.
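
The ‘names’ in question are the hardware IDs the bus driver reports for each PDO, and the INF lists the IDs its driver package matches. As a hedged sketch (again not code from any real driver, and with a made-up ID string), this is roughly how a bus driver answers IRP_MN_QUERY_ID for one of its child PDOs; the caller is assumed to complete the IRP with the returned status.

#include <ntddk.h>

// Illustrative only: the ID string is invented for the example. The INF that
// 'binds' to this PDO would list the same string.
static NTSTATUS
SamplePdoQueryId(
    IN PDEVICE_OBJECT  Pdo,
    IN PIRP            Irp
    )
{
    PIO_STACK_LOCATION  Stack = IoGetCurrentIrpStackLocation(Irp);
    static const WCHAR  HardwareIds[] = L"SAMPLEBUS\\SampleDevice\0";  // MULTI_SZ: double-NUL terminated
    PWCHAR              Buffer;

    UNREFERENCED_PARAMETER(Pdo);

    switch (Stack->Parameters.QueryId.IdType) {
    case BusQueryHardwareIDs:
        // These strings are the PDO 'names' that Windows matches against INFs.
        Buffer = ExAllocatePoolWithTag(PagedPool, sizeof(HardwareIds), 'dIwH');
        if (Buffer == NULL)
            return STATUS_INSUFFICIENT_RESOURCES;

        RtlCopyMemory(Buffer, HardwareIds, sizeof(HardwareIds));
        Irp->IoStatus.Information = (ULONG_PTR)Buffer;
        return STATUS_SUCCESS;

    default:
        // Other ID types (instance ID, compatible IDs, ...) omitted from this sketch.
        return Irp->IoStatus.Status;
    }
}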

A Function Driver can also be what’s called a Bus Driver. That means that, having created its FDO, it can also create PDOs. For example, the root PCI driver binds to a PDO created by the ACPI driver (which parses the DSDT): it creates an FDO for that PDO, enumerates the root bus (using PCI config cycles) and creates a PDO for each unique bus/device/function that it finds.
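
The enumeration side can be sketched in the same hedged way: when the PnP manager sends IRP_MN_QUERY_DEVICE_RELATIONS (BusRelations) to the bus driver’s FDO, the driver hands back the child PDOs it has created (each created earlier with IoCreateDevice). The single-child extension here is a simplification; a real bus driver keeps a list of children and merges with any relations already in the IRP.

#include <ntddk.h>

typedef struct _SAMPLE_BUS_FDO_EXTENSION {
    PDEVICE_OBJECT  ChildPdo;   // hypothetical: one child PDO created earlier with IoCreateDevice
} SAMPLE_BUS_FDO_EXTENSION, *PSAMPLE_BUS_FDO_EXTENSION;

// Handle IRP_MN_QUERY_DEVICE_RELATIONS (BusRelations) sent to the bus FDO.
// As above, the caller is assumed to complete the IRP with the returned status.
static NTSTATUS
SampleBusQueryBusRelations(
    IN PDEVICE_OBJECT  Fdo,
    IN PIRP            Irp
    )
{
    PSAMPLE_BUS_FDO_EXTENSION  Fdx = (PSAMPLE_BUS_FDO_EXTENSION)Fdo->DeviceExtension;
    PDEVICE_RELATIONS          Relations;

    // DEVICE_RELATIONS already has room for one PDO; size the allocation up for more children.
    Relations = ExAllocatePoolWithTag(PagedPool, sizeof(DEVICE_RELATIONS), 'leRB');
    if (Relations == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    Relations->Count = 1;
    Relations->Objects[0] = Fdx->ChildPdo;

    // The PnP manager dereferences every PDO handed back to it, so take a reference here.
    ObReferenceObject(Fdx->ChildPdo);

    Irp->IoStatus.Information = (ULONG_PTR)Relations;
    return STATUS_SUCCESS;
}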

Slide 3

The first set of drivers we’ll mention are the closed source ‘Legacy’ drivers.

Before XenServer 6.1 was released, these were the only PV drivers and they were getting pretty long in the tooth. I believe they were originally written to support Windows 2000 HVM guests on the first version of XenServer (or possibly even XenEnterprise?).

They are still used in XenServer today, but only for Windows Server 2003 (and XP before it went EOL).

Citrix have never provided source for these drivers, and that is mainly because there is code in them that is of unknown origin. Also, there is less and less point in doing so as time goes by. Server 2003 will be EOL this year (2015), at which point these drivers will finally be consigned to history.

Slide 4

To give you an idea of why Citrix made these drivers ‘Legacy’ and replaced them with a new set for Vista onwards, let’s take a look at the structure of the driver packages and how they (just about) hang together…

The first thing you’ll notice is there are essentially two ‘root’ PDOs. The one on the right is the Xen Platform PCI device, created by QEMU, and a key part of any HVM guest running on pretty much any Xen distribution. The one on the left, however, is synthesized by a driver installer package.

The main virtual bus driver is called XENEVTCHN (I don’t know why) and that, along with the export driver XENUTIL (an export driver is like a kernel DLL), is where most of the code that talks to Xen lives. XENEVTCHN is the ultimate parent of the PV network devices, but not the storage devices. Those are dealt with by XENVBD, which binds directly to the PCI device, but uses code in XENUTIL to co-ordinate with XENEVTCHN.

The XENVBD package also installs a filter driver, SCSIFILT. The reason for this driver is that XENVBD, because it needs to work on versions of Windows older than Vista, uses a very old Windows storage driver API called SCSIPORT, and SCSIPORT has very poor locking semantics and only a single request queue per HBA. This makes it very slow. SCSIFILT is designed to sit between the generic Windows DISK driver and XENVBD and intercept storage requests. Being a filter driver, it is not bound by any logo requirement to use a standard Windows storage API, so it bypasses the whole SCSIPORT queuing and locking framework and talks directly to the PV backends, which is a lot faster.

Back over on the left, you can see the XENNET driver for PV network devices, but in between that and XENEVTCHN is another driver, XENVIF. Because the legacy drivers were used for versions of Windows all the way from Windows 2000 through to Windows 7 and Server 2008 R2, they actually had to have two distinct versions of the XENNET driver. Between releasing Server 2003 and Vista, Microsoft changed the NDIS API in an incompatible way, so anyone writing Windows network drivers needed to fork their code. Server 2003 and before uses NDIS version 5.x and Vista onwards uses version 6.x.

The original code had both these flavours of XENNET, but there was a lot of code duplicated between them, and when bugs cropped up it was easy to end up applying a fix to one driver that really should have been applied to both. I therefore re-wrote the drivers, moving all the common code into a driver called XENVIF, which I also made the parent of all XENNETs to allow for dynamic interface discovery, something we’ll come onto later.

Slide 5

So this rather complex structure causes some problems…

SCSIFILT, whilst working round the deficiencies of SCSIPORT, brings problems of its own. There are utilities which directly open storage devices (SCSIPORT allows this) and send read and write requests. Those requests, because they do not come from the DISK driver, bypass SCSIFILT, and so XENVBD has to have a very odd ‘loopback’ path where it injects the requests into the storage stack as if they had come from the DISK driver, allowing them to be intercepted by SCSIFILT. Also, because there are some circumstances where SCSIFILT is not loaded (e.g. if a disk is disabled in Device Manager), both XENVBD and SCSIFILT must have code to deal with the PV state modes, for purposes of VBD unplug… which is more code duplication.

Cross-package linkage dependencies (generally on XENUTIL) are a massive problem. There never was a defined ABI, so it was very easy for packages to become binary incompatible, leading to very odd BSODs during upgrade. Really there is no safe way to upgrade the legacy drivers… it is best to remove the old set before adding the new set. But that requires two reboots.

The two root nodes also cause a big problem. Initialization of the PV interfaces to Xen needs to be done before either XENEVTCHN or XENVBD can fully function, but you never know which one is going to come up first, and worse… a resource rebalance (something Windows may need to do to redistribute interrupts, for example) means either one can be unloaded and reloaded at any time. This makes the initialization code very complicated, non-obvious and fragile.

Finally, the use of a synthetic root node completely precludes deployment via Windows Update as those nodes can only be created by a driver installer.

Windows Update deployment has always been a goal for Citrix and so that final point is really a showstopper for these drivers.

Slide 6

Slide 7

Slide 8

Slide 9

Slide 10

Slide 11

Slide 12

Slide 13

Slide 14

Slide 15

Slide 16

Slide 17

Slide 18

Slide 19

Slide 20

Slide 21

Slide 22