Xen via libvirt for OpenStack juno
This document describes the steps I took to set up a compute node based on Ubuntu 14.04 for OpenStack "juno", using the Xen Project via libvirt approach. OpenStack does not support this approach well: it sits in Group C of the OpenStack hypervisor support matrix, and tutorials describing it are hard to find online; this may well be the first. Let's get started!
Prerequisite
Follow "OpenStack Installation Guide for Ubuntu 14.04" to setup the control node and network node, following the three-node architecture with OpenStack Networking (neutron). This involves lots of configurations and could take a day or two. Check that the control node and network node is working.
Steps
NOTE: Steps 3, 4, and 5 are workarounds for bugs present in Ubuntu 14.04 (and probably in other Debian derivatives of that era). Future releases of Ubuntu may not require these workarounds.
1. Add the OpenStack "juno" cloud archive repository:
apt-get update
apt-get install software-properties-common
add-apt-repository cloud-archive:juno
apt-get update
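To double-check that the juno packages will be used, you can ask apt where the candidate version comes from (an optional check, not part of the original steps):

apt-cache policy nova-compute-xen    # the candidate version should come from the cloud archive juno repository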
2. Install nova-compute-xen, sysfsutils and python-novaclient:
apt-get install nova-compute-xen sysfsutils python-novaclient
3. Install qemu-2.0.2 with a patch that fixes unmapping of persistent grants. Current qemu releases (including 2.0.2, 2.1.2 and 2.2.0-rc1) do not include this patch, and without it the Dom0 kernel crashes when creating a Xen Project DomU from the OpenStack GUI (horizon). I have applied the patch and made the modified qemu available on GitHub:
wget https://github.com/xinglin/qemu-2.0.2/archive/master.zip
unzip master.zip
cd qemu-2.0.2-master/
apt-get build-dep qemu
./configure
make -j16
make install
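With the default ./configure prefix, make install places the binaries under /usr/local/bin. Assuming /usr/local/bin comes before /usr/bin in your PATH, you can verify that the patched build is the one that will be picked up:

which qemu-system-i386                     # should print /usr/local/bin/qemu-system-i386
/usr/local/bin/qemu-system-i386 --version  # should report version 2.0.2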
4. Apply a patch to /etc/init.d/xen to start the qemu process during startup:
--- /etc/init.d/xen.bak 2014-11-18 20:53:14.804107463 -0700
+++ /etc/init.d/xen     2014-11-18 20:54:10.788457049 -0700
@@ -228,6 +228,9 @@ case "$1" in
     *) log_end_msg 1; exit ;;
     esac
     log_end_msg 0
+    /usr/local/bin/qemu-system-i386 -xen-domid 0 -xen-attach -name dom0 -nographic -M xenpv -daemonize \
+      -monitor /dev/null -serial /dev/null -parallel /dev/null \
+      -pidfile /var/run/qemu-xen-dom0.pid
     ;;
   stop)
     capability_check
5. Create a link at /usr/bin for pygrub:
ln -s /usr/lib/xen-4.4/bin/pygrub /usr/bin/pygrub
6. Reboot the machine and boot into Xen Project Dom0.
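After the reboot, it is worth checking that you really booted into Dom0 and that the qemu process added in step 4 is running. A short sanity check I would run:

xl info | grep xen_version        # confirms the Xen Project hypervisor is active
xl list                           # Domain-0 should be listed
pgrep -af qemu-system-i386        # the dom0 qemu process started by /etc/init.d/xen
cat /var/run/qemu-xen-dom0.pid    # pidfile written via the -pidfile option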
7. Edit /etc/nova/nova.conf to configure the nova service. You can also follow the steps in the OpenStack installation guide to configure nova as a compute node.
- In the [DEFAULT] section, add the following:

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = MANAGEMENT_INTERFACE_IP_ADDRESS
novncproxy_base_url = http://controller:6080/vnc_auto.html
verbose = True
MANAGEMENT_INTERFACE_IP_ADDRESS is the IP address of the management network interface for this compute node, typically 10.0.0.31 for the first compute node.
- Add a [keystone_authtoken] section:

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS
- Add a [glance] section:

[glance]
host = controller
8. Verify that the content of /etc/nova/nova-compute.conf is as follows:

[DEFAULT]
compute_driver=libvirt.LibvirtDriver

[libvirt]
virt_type=xen
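At this point I would restart the compute service and confirm from the controller that the new node registered itself (the usual juno-era commands; adjust if your service names differ):

# On the compute node
service nova-compute restart

# On the controller, with admin credentials loaded
nova service-list        # a nova-compute entry for this host should appear with state "up"
nova hypervisor-list     # the Xen compute node should be listed as a hypervisor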
9. Install and configure the networking component on the compute node: follow the steps outlined in this OpenStack install guide for neutron compute nodes. Note that in /etc/neutron/neutron.conf I did not set "allow_overlapping_ips = True" in the [DEFAULT] section, because that option is supposed to be False when the neutron and nova security groups are used together.
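To confirm that the neutron agent on the compute node came up properly, restart it and check the agent list from the controller (assuming the standard Ubuntu service names used by the install guide):

# On the compute node
service openvswitch-switch restart
service neutron-plugin-openvswitch-agent restart

# On the controller
neutron agent-list       # an Open vSwitch agent entry for this compute node should show up as alive (:-))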
10. Final step: you should now be able to launch an instance from horizon. In my case, I launched an instance running cirros-0.3.3-x86_64. When I log in to the compute node, I can see the instance running with virsh:
# virsh --connect=xen:///
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id    Name                           State
----------------------------------------------------
 1     instance-0000003b              running
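The same instance should also be visible through the Xen Project toolstack directly (the instance name below is from my run and will differ on yours):

xl list        # Domain-0 and the instance (here instance-0000003b) should both be listed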
References
- Compile and Install QEMU
- Compile and Install Xen Project 4.4 in Ubuntu 14.04
- Bugs that needed to be worked around in steps 3-5 have been filed with Debian. See the email describing this, with pointers to the bug reports. A patch for the bug worked around in step 5 has already been submitted to libvirt.org, but it will take time before it is generally available. Check the bugs to see whether the workarounds are still needed in future releases.