Inter-domain communication for XAPI
In the future we will have:
- storage driver domains
- network driver domains
- qemu stub domain(s)
- ... various other helper domains (e.g. possibly a domain running pygrub)
The XCP toolstack already has several pieces including:
- xapi: handling resource pools
- squeezed: applying ballooning policy
- xenopsd: (new) "babysitting" running VMs on a host
- perfmon: raises alerts based on performance measurements
We currently use a set of ad-hoc protocols for inter-process communication, including:
- XMLRPC
- xenstore "rpc"
- JSONRPC
We would like to standardise on a single messaging/RPC system which:
- is available everywhere and very easy to use from many different source languages
- promotes location-transparency (so we don't suffer when a service is moved from dom0 to a domU)
Proposal: DBUS
[DBUS|http://en.wikipedia.org/wiki/D-Bus] is a simple IPC system originally designed for graphical desktop environments. It supports
- a message broker (daemon) which can buffer messages and start services on demand ("activation")
- an IDL supporting both basic types (string, int etc) and structured types (arrays, structs)
- bindings for many languages, including C, python and ocaml
- location-transparency via a notion of a "well-known bus address" (like org.xen.foo) used to identify services
It is also the system used by XCI.
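To give a feel for the client-side API, here is a minimal python sketch using the dbus-python bindings (the choice of bus, the bus name org.xen.foo and the object path are placeholders, not real XCP services):
{code}
import dbus

# Connect to a bus; the session bus is used here purely for illustration
# (the proposal below uses a dedicated bus instead).
bus = dbus.SessionBus()

# Address the service only by its well-known name; the bus routes the call
# to whichever connection currently owns "org.xen.foo", so the caller does
# not care which domain or process the service runs in.
proxy = bus.get_object("org.xen.foo", "/org/xen/foo/Manager")

# Most services implement the standard Introspectable interface, so we can
# ask the service to describe its own interfaces and methods.
print(proxy.Introspect(dbus_interface="org.freedesktop.DBus.Introspectable"))
{code}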
Useful links:
- [d-feet|http://live.gnome.org/DFeet]: a pygtk-based object inspector
- [D-Bus Overview|http://wiki.meego.com/D-Bus/Overview]: an overview, including instructions for using d-feet
Concrete example
Consider a simplified XCP system containing the following services:
- xapi: running in a domU, handling XenAPI calls from clients
- xenopsd: running in dom0, performing start/shutdown/... on running VMs
- storage: running in a domU i.e. a storage driver domain
We would create a dedicated bus for our communication (we would not use the default system or session buses)
Each service would own the well-known bus name: org.xen.xcp.servicename
Each service would export its objects under the path: /org/xen/xcp/servicename/objectname (as sketched below)
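As an illustrative sketch only (the bus socket address, interface name, object path and method below are hypothetical, and the real daemons are OCaml rather than python), a xenopsd-like service might claim its name and export an object as follows:
{code}
import dbus
import dbus.service
import dbus.mainloop.glib
from gi.repository import GLib

# Integrate dbus-python with the GLib main loop and connect to the
# dedicated XCP bus (the socket path is hypothetical).
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
bus = dbus.bus.BusConnection("unix:path=/var/run/xcp/dbus-socket")

class Xenopsd(dbus.service.Object):
    """Top-level xenopsd object exported on the XCP bus."""

    @dbus.service.method(dbus_interface="org.xen.xcp.xenopsd.VM",
                         in_signature="s", out_signature="")
    def Start(self, vm_uuid):
        # Placeholder: the real implementation would build and boot the domain.
        print("starting VM %s" % vm_uuid)

# Claim the well-known bus name and export the object at the agreed path.
bus_name = dbus.service.BusName("org.xen.xcp.xenopsd", bus)
Xenopsd(bus, "/org/xen/xcp/xenopsd/Xenopsd")

GLib.MainLoop().run()
{code}
A client in any domain would then call bus.get_object("org.xen.xcp.xenopsd", "/org/xen/xcp/xenopsd/Xenopsd") and invoke Start(vm_uuid) without knowing where xenopsd runs; per-VM sub-objects (relevant to the question below) could additionally be exported at paths like /org/xen/xcp/xenopsd/vm/<uuid>.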
TBD: Would we want to expose internal application "objects" (e.g. the VMs being managed by xenopsd or the VDIs within a storage driver domain) as individual objects, so they are introspectable? Or would that require that we broadcast too much internal state?
We would want to specify the IDL for each interface up-front (even though D-Bus can also be used dynamically, e.g. from python)
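For illustration, the hypothetical interface from the sketch above could be specified up-front in the standard D-Bus introspection/IDL format; type codes such as "s" (string) come from the D-Bus type system:
{code}
<node>
  <interface name="org.xen.xcp.xenopsd.VM">
    <!-- Start the VM identified by its uuid -->
    <method name="Start">
      <arg name="vm_uuid" type="s" direction="in"/>
    </method>
    <!-- Cleanly shut down the VM identified by its uuid -->
    <method name="Shutdown">
      <arg name="vm_uuid" type="s" direction="in"/>
    </method>
  </interface>
</node>
{code}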
TBD: Are signals useful for us?
TBD: Are there any default interfaces for heartbeating and diagnostics?
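One partial answer to the last question: D-Bus defines standard interfaces that most services export automatically, notably org.freedesktop.DBus.Peer (whose Ping method can serve as a basic liveness check) and org.freedesktop.DBus.Introspectable (which returns interface descriptions like the one above). A sketch of using Ping as a heartbeat probe, reusing the hypothetical bus address and names from the earlier sketches:
{code}
import dbus

bus = dbus.bus.BusConnection("unix:path=/var/run/xcp/dbus-socket")
proxy = bus.get_object("org.xen.xcp.xenopsd", "/org/xen/xcp/xenopsd/Xenopsd")
peer = dbus.Interface(proxy, dbus_interface="org.freedesktop.DBus.Peer")

try:
    # Round-trips a message to the service; raises DBusException on
    # timeout or if the name is no longer owned.
    peer.Ping()
    print("xenopsd is alive")
except dbus.exceptions.DBusException:
    print("xenopsd is not responding")
{code}
Anything beyond basic liveness (e.g. per-service diagnostics) would still need a custom interface.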