COLO - Coarse Grain Lock Stepping

COLO or Coarse Grain Lock Stepping is a High Availability solution that builds on top of Remus. Remus was prepared for use with COLO in Xen 4.5. The COLO Manager component is part of Xen 4.7, while the other components will eventually be part of QEMU.

Background

The COLO FT/HA (COarse-grain LOck-stepping Virtual Machines for Non-stop Service) project is a high availability solution. The primary VM (PVM) and the secondary VM (SVM) run in parallel: they receive the same requests from clients and generate responses in parallel. If the response packets from the PVM and the SVM are identical, they are released immediately. Otherwise, a VM checkpoint is performed on demand. The idea was presented at Xen Summit 2012 and 2013, and in an academic paper at SOCC 2013 (http://www.socc2013.org/home/program/a3-dong.pdf?attredirects=0).

Components

  • COLO Manager:
COLO Checkpoint/Failover Controller. It modifies the save/restore flow to realize continuous migration, making sure that the state of the VM on the Secondary side is always consistent with the VM on the Primary side.
  • COLO Block Replication (please refer to http://wiki.qemu.org/Features/BlockReplication):
When the primary VM writes data into its image, the COLO disk manager captures this data and sends it to the secondary VM, which makes sure the content of the secondary VM's image stays consistent with the content of the primary VM's image.
  • COLO Proxy:
A module that compares the packets returned by the Primary VM and the Secondary VM and decides whether to start a checkpoint according to some rules.
In previous versions, the COLO Proxy was a Linux kernel module on the host. Because this module was not accepted by the kernel community, you had to apply some patches to the Linux kernel and compile it yourself; the kernelspace COLO Proxy is no longer supported.
In the current version, the COLO Proxy is part of QEMU and all of its code runs in userspace. The filter-redirector, filter-mirror, colo-compare and filter-rewriter components compose the COLO Proxy, and performance is better with its help. Compared with the previous kernelspace COLO Proxy there is no additional work to do; you just need to make sure your QEMU version supports the COLO Proxy.
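
That last point can be checked directly against a QEMU binary. The following is a minimal sketch, assuming a QEMU new enough to accept "-object help" for listing user-creatable object types (if yours does not, query "qom-list-types" over QMP instead), and with the binary path as a placeholder:

# All four proxy object types should be listed if this build supports the
# userspace COLO Proxy (replace the path with your qemu-system-x86_64):
path/to/qemu-system-x86_64 -object help 2>&1 | \
    grep -E 'colo-compare|filter-(mirror|redirector|rewriter)'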

Current Status

COLO (based on xm) has been in development for over three years, and a paper was published in 2013. Since Xen has deprecated xm in favour of xl, COLO is now implemented on top of xl. The overall status of COLO:

  • COLO Manager (merged into Xen since 4.7.0-rc1)
  • COLO Block Replication (merged into QEMU)
  • COLO Proxy (the userspace COLO Proxy has been merged into QEMU)
The legacy kernelspace COLO Proxy is still available at https://github.com/zhangckid/colo-proxy (View on Github), but is unsupported.

Requirements

Hardware requirements

There must be at least one directly connected NIC to forward the network requests from the client to the secondary VM. This directly connected NIC must not be used for any other purpose. If your guest has more than one NIC, you should have a directly connected NIC for each guest NIC. If you don't have enough directly connected NICs, you can use VLANs (see the sketch below).
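
A minimal sketch of carving an extra forward link out of one physical NIC with a VLAN, assuming iproute2 is available; "eth1" and VLAN ID 100 are example values:

# Create a VLAN sub-interface on the directly connected NIC and bring it up;
# use one such sub-interface per additional guest NIC you need to forward.
ip link add link eth1 name eth1.100 type vlan id 100
ip link set eth1.100 up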

Dom0 requirements

  1. Kernel with dom0 support
  2. If your host OS has OEM-released Xen tools, please uninstall them first (a quick check and removal sketch follows this list)
  3. Use the latest QEMU for Xen
(Because some COLO module communication related patches have not been merged yet, you can use the internal version: https://github.com/zhangckid/qemu/tree/qemu-colo-for-xen )
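
A minimal sketch for points 1 and 2, assuming an RPM- or APT-based dom0; the package names are examples only and differ between distributions:

# Check that the running kernel is acting as Xen dom0:
cat /sys/hypervisor/type          # should print "xen"
cat /proc/xen/capabilities        # should contain "control_d" in dom0
# Remove OEM-packaged Xen tools before installing the ones built from source,
# for example (package names are examples, adjust for your distribution):
#   zypper remove xen-tools           (SUSE)
#   apt-get remove xen-utils-common   (Debian/Ubuntu)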

Guest requirements

Only HVM guests (without PV extensions) are supported for now. If you want to use an OEM-released guest OS, please use SUSE. Red Hat and Ubuntu are not supported at the moment because there is no known way to disable the PV extensions in their stock kernels. If you want to use Red Hat or Ubuntu, you need to build a recent kernel which supports the xen_nopv parameter (see the sketch below).
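
A minimal sketch of enabling that parameter inside a Red Hat or Ubuntu guest, assuming a grub2 setup; the file path and the mkconfig command name vary by distribution:

# Append xen_nopv to the guest kernel command line and regenerate grub.cfg:
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 xen_nopv"/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg    # on Ubuntu: update-grub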

Setup COLO environment

Network link topology

=================================normal ======================================
                                +--------+
                                |client  |
         master                 +----+---+                    slave
-------------------------+           |            + -------------------------+
   PVM                   |           +            |                          |
+-------+         +----[eth0]-----[switch]-----[eth0]---------+              |
|guest  |     +---+-+    |                        |       +---+-+            |
|     [tap0]--+ br0 |    |                        |       | br0 |            |
|       |     +-----+  [eth1]-----[forward]----[eth1]--+  +-----+     SVM    |
+-------+                |                        |    |            +-------+|
                         |                        |    |  +-----+   | guest ||
                       [eth2]---[checkpoint]---[eth2]  +--+br1  |-[tap0]    ||
                         |                        |       +-----+   |       ||
                         |                        |                 +-------+|
-------------------------+                        +--------------------------+
e.g.
master:
br0: 192.168.2.98
eth1: 192.168.1.33
eth2: 192.168.3.1

slave:
br0: 192.168.2.99
br1: no ip address
eth1: 192.168.1.88
eth2: 192.168.3.2
===========================after failover=====================================
                                +--------+
                                |client  |
    master (dead)               +----+---+                 slave (alive)
-------------------------+           |            ---------------------------+
  PVM                    |           +            |                          |
+-------+         +----[eth0]-----[switch]-----[eth0]-------+                |
|guest  |     +---+-+    |                        |     +---+-+              |
|     [tap0]--+ br0 |    |                        |     | br0 +--+           |
|       |     +-----+  [eth1]-----[forward]----[eth1]   +-----+  |     SVM   |
+-------+                |                        |              |  +-------+|
                         |                        |     +-----+  |  | guest ||
                       [eth2]---[checkpoint]---[eth2]   |br1  |  +[tap0]    ||
                         |                        |     +-----+     |       ||
                         |                        |                 +-------+|
-------------------------+                        +--------------------------+
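
The sketch below shows how the example addresses above map onto bridges and NICs using iproute2; it is illustrative only (many installs already have br0 managed by the distribution's network tools), and assumes /24 networks:

# On the master (primary) host:
ip link add name br0 type bridge          # bridge for the guest's tap0
ip link set eth0 master br0
ip addr add 192.168.2.98/24 dev br0
ip link set br0 up
ip addr add 192.168.1.33/24 dev eth1      # forward link (directly connected)
ip addr add 192.168.3.1/24 dev eth2       # checkpoint link (directly connected)

# The slave (secondary) host mirrors this: br0 gets 192.168.2.99, eth1 gets
# 192.168.1.88, eth2 gets 192.168.3.2, plus an extra bridge br1 with no IP
# address, which the SVM's tap device joins:
ip link add name br1 type bridge
ip link set br1 up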

Test environment preparation

On both Primary/Secondary hosts:

  • Check out the necessary repos:
# cd ~
# git clone https://github.com/xen-project/xen.git
# git clone https://github.com/zhangckid/qemu.git
  • Build and install xen
# cd ~/xen 
# ./autogen.sh; ./configure --enable-debug
# make dist-xen; make install-xen
# make dist-tools; make install-tools
  • Build qemu
# cd ~/qemu
# git checkout qemu-colo-for-xen
# cd ~/xen/tools/qemu-xen-dir
# ./configure --enable-xen --target-list=x86_64-softmmu \
              --extra-cflags="-I$HOME/xen/tools/include -I$HOME/xen/tools/libxc -I$HOME/xen/tools/xenstore" \
              --extra-ldflags="-L$HOME/xen/tools/libxc -L$HOME/xen/tools/xenstore"
# make -j$(grep -c processor /proc/cpuinfo)

Note: You must use the QEMU tree we provide; qemu-xen and qemu-xen-traditional are not supported.
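
A quick sanity check on the freshly built binary, a sketch that assumes the build directory used above; a QEMU configured with --enable-xen registers the Xen machine types (xenfv/xenpv), so they should show up here:

# cd ~/xen/tools/qemu-xen-dir
# ./x86_64-softmmu/qemu-system-x86_64 -machine help | grep -i xen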


On Primary host:

  • guest config
Add "xen_platform_pci=0" and below disk/net config into the guest configfile.
disk = [ 'format=raw,devtype=disk,access=w,backendtype=qdisk,vdev=hda,colo,colo-host=192.168.3.2,colo-port=9001,colo-export=qdisk1,active-disk=/mnt/ramfs/active_disk.img,hidden-disk=/mnt/ramfs/hidden_disk.img,target=/home/zhangchen/suse-64hvm-zc.img']
vif = [ 'mac=00:16:4f:00:00:11, bridge=br0, model=rtl8139, colo_sock_sec_redirector0_id=red0,colo_sock_sec_redirector0_ip=192.168.2.98,colo_sock_sec_redirector0_port=9003,
colo_sock_sec_redirector1_id=red1,colo_sock_sec_redirector1_ip=192.168.2.98,colo_sock_sec_redirector1_port=9004,
colo_filter_sec_redirector0_queue=tx,colo_filter_sec_redirector0_indev=red0,colo_filter_sec_redirector1_queue=rx,
colo_filter_sec_redirector1_outdev=red1,colo_filter_sec_rewriter0_queue=all,colo_sock_mirror_id=mirror0,colo_sock_mirror_ip=192.168.2.98,
colo_sock_mirror_port=9003,colo_sock_compare_pri_in_id=compare0,colo_sock_compare_pri_in_ip=192.168.2.98,colo_sock_compare_pri_in_port=9021,
colo_sock_compare_sec_in_id=compare1,colo_sock_compare_sec_in_ip=192.168.2.98,colo_sock_compare_sec_in_port=9004,colo_sock_compare_notify_id=not1,
colo_sock_compare_notify_ip=192.168.2.98,colo_sock_compare_notify_port=9998,colo_sock_redirector0_id=compare_out,
colo_sock_redirector0_ip=192.168.2.98,colo_sock_redirector0_port=9005,colo_sock_redirector1_id=compare0-0,colo_sock_redirector1_ip=192.168.2.98,
colo_sock_redirector1_port=9021,colo_sock_redirector2_id=compare_out0,colo_sock_redirector2_ip=192.168.2.98,colo_sock_redirector2_port=9005,
colo_filter_mirror_queue=tx,colo_filter_mirror_outdev=mirror0,colo_filter_redirector0_queue=rx,colo_filter_redirector0_indev=compare_out,
colo_filter_redirector1_queue=rx,colo_filter_redirector1_outdev=compare0,colo_compare_pri_in=compare0-0,colo_compare_sec_in=compare1,
colo_compare_out=compare_out0,colo_compare_notify_dev=not1,colo_checkpoint_host=192.168.2.98,colo_checkpoint_port=9998' ]
  • Copy the guest disk image from the Primary to the Secondary host, and make sure its absolute path is the same on both (see the example below)
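
For example (the path and the address come from the sample config and topology above; adjust them to your setup):

# scp /home/zhangchen/suse-64hvm-zc.img 192.168.3.2:/home/zhangchen/suse-64hvm-zc.img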

Note: colo-host is the secondary host's IP, colo-port is the secondary host's NBD server port, and forwarddev is the directly connected NIC.

Run COLO

On both Primary/Secondary hosts:

# service xencommons start

On Secondary host, execute the following script:

#! /bin/bash

active_disk=/mnt/ramfs/active_disk.img
hidden_disk=/mnt/ramfs/hidden_disk.img
local_img=~/suse-64hvm.img
tmp_disk_size=`~/qemu/qemu-img info $local_img |grep 'virtual size' |awk '{print $3}'`

function create_image()
{
    ~/qemu/qemu-img create -f qcow2 $1 $tmp_disk_size 
}

function prepare_temp_images()
{
    grep -q "^none /mnt/ramfs ramfs" /proc/mounts
    if [[ $? -ne 0 ]]; then
        mount -t ramfs none /mnt/ramfs/ -o size=2G
    fi

    if [[ ! -e $active_disk ]]; then
        create_image $active_disk      
    fi

    if [[ ! -e $hidden_disk ]]; then
        create_image $hidden_disk
    fi
}

prepare_temp_images

Note: It is recommended to put the active disk and the hidden disk on a ramdisk.

On Primary host:

# xl create -p <domconfig>
# xl pause <domconfig>
# xl remus -c -p -u <domconfig> 192.168.3.2
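
Once xl remus is running, a simple check is to list the domains on each host; the guest domain should appear on both the primary and the secondary:

# xl list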

Known problems

  • The secondary VM may crash due to a triple fault.

Note: this problem does not happen every time, so you can run COLO again if it occurs.

Troubleshooting

  • If an error happens when starting COLO, you can:
  1. Make sure you have all the necessary modules that Dom0 needs on both sides.
  2. Make sure you have followed all the instructions on this page.
  3. Try to reboot both the primary and the secondary host.
  4. If you still have problems, collect the error logs (see the sketch below) and contact Zhang Chen (zhangckid@gmail.com), Xie Changlong (xiecl.fnst@cn.fujitsu.com), Wen Congyang (wency@cn.fujitsu.com), or Yang Hongyang (imhy.yang@gmail.com) for help.
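
A minimal sketch for collecting the relevant logs on each host, assuming the default Xen log location /var/log/xen (adjust if your installation logs elsewhere):

#! /bin/bash
# Gather hypervisor and toolstack logs into one tarball for a bug report.
mkdir -p ~/colo-logs
cp /var/log/xen/*.log ~/colo-logs/ 2>/dev/null
xl dmesg > ~/colo-logs/xl-dmesg.txt        # hypervisor console messages
dmesg > ~/colo-logs/dom0-dmesg.txt         # dom0 kernel messages
tar czf ~/colo-logs.tar.gz -C ~ colo-logs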

Example

If you use SLES 11 SP3, you can get the detailed steps from the wiki page Setup COLO on SLES11 SP3 (COLO - Coarse Grain Lock Stepping SLES).

An example guest config:

builder='hvm'
memory='2048'
vcpus=2
cpus=['2','3']

name='hvm_nopv_colo'
device_model_version='qemu-xen'
device_model_override='/home/zhangckid/qemu/x86_64-softmmu/qemu-system-x86_64'

disk = [ 'format=raw,devtype=disk,access=w,backendtype=qdisk,vdev=hda,colo,colo-host=192.168.3.2,colo-port=9001,colo-export=qdisk1,active-disk=/mnt/ramfs/active_disk.img,hidden-disk=/mnt/ramfs/hidden_disk.img,target=/home/zhangchen/suse-64hvm-zc.img']
vif = [ 'mac=00:16:4f:00:00:11, bridge=br0, model=rtl8139, colo_sock_sec_redirector0_id=red0,colo_sock_sec_redirector0_ip=192.168.2.98,colo_sock_sec_redirector0_port=9003,
colo_sock_sec_redirector1_id=red1,colo_sock_sec_redirector1_ip=192.168.2.98,colo_sock_sec_redirector1_port=9004,
colo_filter_sec_redirector0_queue=tx,colo_filter_sec_redirector0_indev=red0,colo_filter_sec_redirector1_queue=rx,
colo_filter_sec_redirector1_outdev=red1,colo_filter_sec_rewriter0_queue=all,colo_sock_mirror_id=mirror0,colo_sock_mirror_ip=192.168.2.98,
colo_sock_mirror_port=9003,colo_sock_compare_pri_in_id=compare0,colo_sock_compare_pri_in_ip=192.168.2.98,colo_sock_compare_pri_in_port=9021,
colo_sock_compare_sec_in_id=compare1,colo_sock_compare_sec_in_ip=192.168.2.98,colo_sock_compare_sec_in_port=9004,colo_sock_compare_notify_id=not1,
colo_sock_compare_notify_ip=192.168.2.98,colo_sock_compare_notify_port=9998,colo_sock_redirector0_id=compare_out,
colo_sock_redirector0_ip=192.168.2.98,colo_sock_redirector0_port=9005,colo_sock_redirector1_id=compare0-0,colo_sock_redirector1_ip=192.168.2.98,
colo_sock_redirector1_port=9021,colo_sock_redirector2_id=compare_out0,colo_sock_redirector2_ip=192.168.2.98,colo_sock_redirector2_port=9005,
colo_filter_mirror_queue=tx,colo_filter_mirror_outdev=mirror0,colo_filter_redirector0_queue=rx,colo_filter_redirector0_indev=compare_out,
colo_filter_redirector1_queue=rx,colo_filter_redirector1_outdev=compare0,colo_compare_pri_in=compare0-0,colo_compare_sec_in=compare1,
colo_compare_out=compare_out0,colo_compare_notify_dev=not1,colo_checkpoint_host=192.168.2.98,colo_checkpoint_port=9998' ]

#-----------------------------------------------------------------------------
# boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d) 
# default: hard disk, cd-rom, floppy

boot='c'
sdl=0
vnc=1
vnclisten=''
stdvga = 0 
serial='pty'
apic=1
acpi=1
pae=1
extid=0
keymap='en-us'
localtime=1
hpet=1
usbdevice='tablet'
xen_platform_pci = 0 
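
Assuming the config above is saved as /etc/xen/hvm_nopv_colo.cfg on the primary host (the path is just an example), it is started exactly as in the Run COLO section; note that xl create takes the config file, while xl pause and xl remus take the domain name:

# xl create -p /etc/xen/hvm_nopv_colo.cfg
# xl pause hvm_nopv_colo
# xl remus -c -p -u hvm_nopv_colo 192.168.3.2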

Man Pages

  • xl(1), search for Remus: http://xenbits.xen.org/docs/unstable/man/xl.1.html#DOMAIN-SUBCOMMANDS
  • xl.conf(5), search for colo.default.proxyscript: http://xenbits.xen.org/docs/unstable/man/xl.conf.5.html

Links

For more information see:

  • COLO Intro (Long), 2012: http://www.slideshare.net/xen_com_mgr/colo-coarsegrain-lockstepping-virtual-machines-for-nonstop-service
  • COLO Intro (Short), 2012: http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/09/2012-lpc-virt-COLO-xen-dong.pdf
  • COLO on QEMU/KVM: http://wiki.qemu.org/Features/COLO