Ceph and libvirt technology preview

This "technology preview" is an experimental set of software packages which allows you to experiment and play with XenServer (formerly XCP) + CentOS 6.4 + ceph storage + libvirt. Obviously the packages should not be used in production! However, now is a good time to play, discover issues, suggest things, build things and generally get involved.

= Pre-requisites =

You need a CentOS 6.4 x86_64 physical host.

= Installing the tech preview =

The tech preview consists of an RPM repo for CentOS 6.4 which contains the experimental software. It can be installed by first adding the repository:

 rpm -ihv http://xenbits.xen.org/djs/xenserver-tech-preview-release-0.3.0-0.x86_64.rpm

and then installing the xenserver-core metapackage:

 yum install -y xenserver-core

Once the packages have been installed, you can create a basic system configuration by running:

 xenserver-install-wizard

Note that the wizard works best in a 'clean' environment (i.e. just after a fresh CentOS install). If it fails, either correct the problem and re-run it, or inspect what the script is doing and adapt it to your environment. Once the basic configuration is complete, you must reboot your system. Note: it should be possible to polish the script so that a reboot becomes unnecessary.

Once the machine reboots, you should be able to connect XenCenter to the management IP, or log in to dom0 and use the 'xe' CLI (with tab completion).

Observe that:

# We're installing the tech preview XenServer via a normal-looking package repository rather than via a custom .iso. Hopefully we'll be able to use 'yum' more in future.
# We're installing on top of a vanilla 64-bit CentOS 6.4, rather than a 32-bit CentOS 5.7. Hopefully we'll be able to track the underlying distro more closely in future.

= Connecting to ceph storage =
Next you must connect your CentOS host to an existing ceph cluster. Typically this involves writing a /etc/ceph/ceph.conf and exchanging keys. Have a look at the [http://ceph.com/docs/master/rados/operations/authentication/#add-a-key Ceph documentation].
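
For reference, a minimal /etc/ceph/ceph.conf might look like this (a sketch, assuming a single monitor at the same address used in sr.xml below, and cephx authentication):

 [global]
 mon host = 10.80.237.208
 auth cluster required = cephx
 auth service required = cephx
 auth client required = cephx

and, for a quick test setup, you could simply copy the cluster's admin keyring from a monitor (assuming that's acceptable for your environment):

 scp root@10.80.237.208:/etc/ceph/ceph.client.admin.keyring /etc/ceph/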

You can check to see if your connection is working by running:

 # ceph osd lspools
 0 data,1 metadata,2 rbd,3 libvirt-pool,
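
If your cluster doesn't have a spare pool to play with, you can create one (a sketch; the pool name and placement-group count are arbitrary choices):

 ceph osd pool create libvirt-pool 128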

Before you can run VMs from ceph storage, you must create a XenServer "Storage Repository" (libvirt calls this a "storage pool") to represent the ceph pool. Create a small text file describing the ceph storage, as if you were going to run "virsh pool-create" on it. My sr.xml, which describes a ceph pool named "ceph", looks like this:

 <pool type='rbd'>
   <name>ceph</name>
   <source>
     <name>rbd</name>
     <host name='10.80.237.208' port='6789'/>
   </source>
 </pool>
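
If you like, you can sanity-check the XML against libvirt directly before handing it to xe (a hypothetical session; pool-create builds a transient pool, which pool-destroy removes again):

 virsh pool-create sr.xml
 virsh pool-info ceph
 virsh pool-destroy ceph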

Then type

 xe sr-create type=libvirt name-label=ceph device-config:xml-filename=sr.xml

You should now be able to manage your ceph SR via XenCenter and the 'xe' CLI.
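
As a quick smoke test, you could create a disk on the new SR (a sketch, untested against this preview; the name-label and size are arbitrary):

 SR=$(xe sr-list name-label=ceph params=uuid --minimal)
 xe vdi-create sr-uuid=$SR name-label=test-disk virtual-size=8GiB type=user

This exercises the volume-create path which xapi-libvirt-storage (see the Architecture section below) translates into libvirt calls.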


= Installing a Windows VM =

== Using XenCenter ==

There is a small bug in XenCenter triggered when a system has no .isos at all. I created a simple [https://github.com/djs55/xenadmin/tree/hyperspace-fixes branch on github] with a fix for this. I've also uploaded a XenCenter binary to http://xenbits.xen.org/djs/XenCenterMain.exe

This is how I installed Windows from an .iso on an NFS server (a CLI equivalent for creating the ISO SR is sketched after the steps):

# Press the 'new storage' button to create an 'NFS ISO' storage repository.
# Press the 'new VM' button to use the wizard to create a VM from a template and start it installing.
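
For reference, the same kind of 'NFS ISO' storage repository can be created from the CLI (a sketch; nfs-server:/export/isos is a placeholder for your own server and path):

 xe sr-create type=iso content-type=iso shared=true name-label=nfs-isos device-config:location=nfs-server:/export/isos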

== Using the CLI ==

This is how I installed Windows from an .iso I copied onto the CentOS box:

 # copy the installer .iso into the directory backing the local ISO SR
 scp windows.iso root@centos:/usr/share/xapi/packages/iso
 # rescan the SR so xapi notices the new .iso
 SR=$(xe sr-list name-label=XenServer\ Tools params=uuid --minimal)
 xe sr-scan uuid=$SR
 # create a VM from the Windows 7 template, insert the .iso and start the install
 VM=$(xe vm-install template=Windows\ 7\ \(64-bit\) new-name-label=win7)
 xe vm-cd-add vm=$VM cd-name=windows.iso device=1
 xe vm-start vm=$VM


= Installing a PV linux VM =


== Using XenCenter ==

This should just work as normal, but I've not managed to test it.

== Using the CLI ==

 # create a VM from the Debian template and point the installer at a mirror
 VM=$(xe vm-install template=Debian\ Squeeze\ 6.0\ \(64-bit\) new-name-label=debian)
 xe vm-param-set uuid=$VM other-config:install-repository=http://ftp.uk.debian.org/debian
 # attach a network interface to the host bridge (brem1 on my host)
 NET=$(xe network-list bridge=brem1 params=uuid --minimal)
 xe vif-create vm-uuid=$VM network-uuid=$NET device=0
 xe vm-start vm=$VM

= Architecture =

This tech preview relies on the following third-party code:

# CentOS 6.4: for a 64-bit domain 0
# xen 4.2.2
# linux 3.4
# libxl: the toolstack utility library used by default in xen 4.2+
# the "upstream" qemu: a modern qemu with new virtual hardware and new storage protocols (like ceph's RBD)
# libvirt 0.10.2.4: used primarily for storage control plane operations

The experimental xapi toolstack version has the following components:

# message-switch: a system of message passing over named queues. This hides all the details of inter-service communication and makes moving services into driver domains easier.
# xenopsd-xenlight: a local domain manager which uses libxl (thanks to Rob Hoes)
# xapi-libvirt-storage: a storage 'adapter' which transforms the xapi storage interface into the libvirt one. This allows us to invoke control-plane operations on libvirt storage (e.g. volume create/destroy)
# xcp-networkd: manages host network configuration
# xcp-rrdd: samples performance data and produces statistics
# squeezed: manages host memory through ballooning
# xapi: exports the XenAPI for remote management (see the example below)
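
For example, once xapi is up you can drive the host remotely with the same 'xe' CLI used in the examples above (the management IP and password here are hypothetical placeholders):

 xe -s 192.0.2.10 -u root -pw secret vm-list params=name-label,power-state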

[[File:fusion-block-diagram.svg|Block diagram showing system architecture]]


= Known issues =

Keyboard input to HVM guests doesn't work, although the framebuffer itself and the mouse are OK. Either enable direct remote access to your guest (ssh, RDP) or try [http://www.gossamer-threads.com/lists/xen/api/288089 xen vnc proxy].

XenCenter's new VM wizard will crash if there are 0 .isos on the XenServer. Try the modified binary mentioned above.

Localhost migrate will definitely deadlock; don't try it. Cross-host migrate is much more interesting to test.

The only storage types present are: local .vhd, local or remote .iso, and ceph via libvirt. No other storage type will work.

This tech preview uses libxl, and its error reporting is not very good. When it goes wrong, expect a nondescript general 'failure' error and expect to have to look in /var/log/messages for the interesting stuff.

This tech preview uses xen 4.2.2 and the "upstream" qemu. The old "traditional" qemu is still the normal default in xen 4.2. The upstream qemu has become the default in xen 4.3, so we should definitely upgrade as soon as possible.

The amount of memory reported by xapi is hardcoded to a fixed value for now. This will probably prevent density experiments on very big hosts.

Disk metrics are definitely missing because the xcp-rrdd statistics collector is reading the wrong paths in /sys.


= Getting involved =

First, join the xs-devel mailing list: http://xenserver.org/discuss-virtualization/mailing-lists.html

and hang out on IRC: chat.freenode.net #xen-api

Second, note that you can run:

 yum install yum-utils -y
 yum-builddep xapi -y

This installs the build dependencies, so you can clone repositories like this one:

 git clone git://github.com/xapi-project/xenopsd

and then build them:

 cd xenopsd
 ./configure
 make

This is a good way to compile and test fixes and new things.

[[Category:XAPI Devel]]
[[Category:XAPI Users]]
[[Category:Ceph]]
[[Category:Libvirt]]
[[Category:CentOS]]
