COLO - Coarse Grain Lock Stepping
COLO, or Coarse Grain Lock Stepping, is a High Availability solution that builds on top of Remus. It is one of the features that was updated for Xen 4.5, but it has not been integrated into the Xen hypervisor code base (in other words, the code is out of tree). However, COLO works with Remus for xl as developed in Xen 4.5 or newer.
Background
The COLO FT/HA (COarse-grain LOck-stepping Virtual Machines for Non-stop Service) project is a high availability solution. Both the primary VM (PVM) and the secondary VM (SVM) run in parallel: they receive the same requests from the client and generate responses in parallel too. If the response packets from the PVM and the SVM are identical, they are released immediately. Otherwise, a VM checkpoint (on demand) is conducted. The idea was presented at Xen Summit 2012 and 2013, and in an academic paper at SOCC 2013.
Components
- COLO Manager:
  - COLO Checkpoint/Failover Controller
  - Modifications of the save/restore flow to realize continuous migration, making sure the state of the VM on the Secondary side is always consistent with the VM on the Primary side.
- COLO Block Replication (please refer to BlockReplication):
  - When the primary VM writes data into its image, the COLO disk manager captures this data and sends it to the secondary VM, which makes sure the content of the secondary VM's image stays consistent with the content of the primary VM's image.
- COLO Proxy:
  - A module that compares the packets returned by the Primary VM and the Secondary VM, and decides whether to start a checkpoint according to some rules. It is a Linux kernel module for the host.
Current Status
COLO (based on xm) has already been in development for over three years; a paper was published in 2013. Since Xen has deprecated xm in favour of xl, we are now implementing COLO on xl. The overall status of COLO:
- COLO Manager (merged since 4.7.0-rc1)
- COLO Block Replication (view on GitHub temporarily)
- COLO Proxy (view on GitHub temporarily)
Note: COLO Block Replication and COLO Proxy are not merged yet; these two parts are implemented based on QEMU.
Requirements
Hardware requirements
There must be at least one directly connected NIC to forward the network requests from the client to the secondary VM. The directly connected NIC must not be used for any other purpose. If your guest has more than one NIC, you need a directly connected NIC for each guest NIC. If you don't have enough directly connected NICs, you can use a VLAN, as sketched below.
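For example, if you have only one spare NIC (say eth1) but need two forwarding links, you could split it into VLAN interfaces with iproute2. This is a minimal sketch, not part of the original instructions; the interface name and VLAN IDs are placeholders:

# ip link add link eth1 name eth1.10 type vlan id 10
# ip link add link eth1 name eth1.20 type vlan id 20
# ip link set eth1.10 up
# ip link set eth1.20 up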
Dom0 requirements
- Kernel with dom0 support
- Kernel modules (see the check after this list):
  - nf_conntrack
  - nf_conntrack_ipv4
  - nf_nat
- libnl-tools >= 3.0
- If your host OS has OEM-released Xen tools, please uninstall them first.
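As a quick sanity check (not part of the original instructions), you can load the required modules by hand and confirm they are present:

# modprobe nf_conntrack
# modprobe nf_conntrack_ipv4
# modprobe nf_nat
# lsmod | grep -E 'nf_conntrack|nf_nat'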
Guest requirements
Only HVM guests (without PV extensions) are supported now. If you want to use an OEM-released guest OS, please use SUSE. Red Hat and Ubuntu are not supported now because we have not found a way to disable PV extensions on them. If you want to use Red Hat or Ubuntu, you need to build a recent kernel that supports the xen_nopv parameter.
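As an illustration, on a guest kernel that supports it you can add xen_nopv to the kernel command line via GRUB. This is a sketch; the exact file locations and command vary by distribution (the paths below assume a SUSE-style GRUB2 setup). Add xen_nopv to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub inside the guest, then regenerate the configuration and reboot:

# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot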
Setup COLO environment
Network link topology
=================================normal ======================================
                                +--------+
                                |client  |
     master                     +----+---+                    slave
-------------------------+           |            +-------------------------+
   PVM                   |           +            |                         |
+-------+         +----[eth0]-----[switch]-----[eth0]---------+             |
|guest  |     +---+-+    |                        |       +---+-+           |
|     [tap0]--+ br0 |    |                        |       | br0 |           |
|       |     +-----+  [eth1]-----[forward]----[eth1]--+  +-----+     SVM   |
+-------+                |                        |    |          +-------+ |
                         |                        |    |  +-----+ | guest | |
                       [eth2]---[checkpoint]---[eth2]  +--+br1  |-[tap0]  | |
                         |                        |       +-----+ |       | |
                         |                        |               +-------+ |
-------------------------+                        +-------------------------+

e.g.
master:
  br0:  192.168.0.33
  eth1: 192.168.1.33
  eth2: 192.168.2.33

slave:
  br0:  192.168.0.88
  br1:  no ip address
  eth1: 192.168.1.88
  eth2: 192.168.2.88
===========================after failover=====================================
                                +--------+
                                |client  |
     master (dead)              +----+---+                 slave (alive)
-------------------------+           |            --------------------------+
   PVM                   |           +            |                         |
+-------+         +----[eth0]-----[switch]-----[eth0]-------+               |
|guest  |     +---+-+    |                        |      +---+-+            |
|     [tap0]--+ br0 |    |                        |      | br0 +--+         |
|       |     +-----+  [eth1]-----[forward]----[eth1]    +-----+  |   SVM   |
+-------+                |                        |               | +-------+|
                         |                        |      +-----+  | | guest ||
                       [eth2]---[checkpoint]---[eth2]    |br1  |  +[tap0]  ||
                         |                        |      +-----+   |       ||
                         |                        |                +-------+|
-------------------------+                       +-------------------------+
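If you replicate the example addresses above, the host interfaces could be configured with iproute2 as follows (a sketch; the /24 prefix length is an assumption). On the master:

# ip addr add 192.168.0.33/24 dev br0
# ip addr add 192.168.1.33/24 dev eth1
# ip addr add 192.168.2.33/24 dev eth2

And on the slave (br1 keeps no IP address):

# ip addr add 192.168.0.88/24 dev br0
# ip addr add 192.168.1.88/24 dev eth1
# ip addr add 192.168.2.88/24 dev eth2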
Test environment preparation
On both Primary/Secondary hosts:
- Check out the necessary repos:
# cd ~
# git clone https://github.com/Pating/colo-proxy
# git clone https://github.com/Pating/iptables
# git clone https://github.com/Pating/xen
# git clone https://github.com/Pating/qemu/
# git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
- Prepare the host kernel
  - The colo-proxy kernel module needs to cooperate with the Linux kernel.
  - You should patch the kernel with ~/colo-proxy/colo-patch-for-kernel.patch.
  - Then compile and install the new kernel (kernel-v4.0 is recommended), as sketched below.
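Concretely, the patch/build/install step could look like the following. This is a sketch: it assumes the kernel tree cloned above lives in ~/linux and that your distribution supports the standard make install flow:

# cd ~/linux
# git checkout -b colo v4.0
# patch -p1 < ~/colo-proxy/colo-patch-for-kernel.patch
# make olddefconfig
# make -j$(nproc)
# make modules_install && make install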
- Proxy module
  - The proxy module is used for network packet comparison.
# cd ~/colo-proxy
# make
# make install
- Modified iptables
  - We have added a new rule to the iptables command.
# cd ~/iptables
# ./autogen.sh && ./configure
# make && make install
- Build and install xen
# cd ~/xen
# git checkout -b colo_v6
# make dist-xen; make install-xen
# make dist-tools; make install-tools
- Apply the patch for qemu xen, and rebuild xen tools:
# cd ~/xen/tools/qemu-xen-dir
# git am ~/ColoPatchForQemu/*.patch
# cd ~/xen
# make dist-tools && make install-tools
Note: You must use qemu-xen. qemu-xen-traditional is not supported.
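If you want to make the device model explicit, xl's guest configuration accepts a device_model_version option; setting it is optional for COLO but documents the requirement in the config itself:

device_model_version = 'qemu-xen'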
On Primary host:
- Install the guest
- Add "xen_platform_pci=0" into the guest configfile
- If you use suse, please select physical machine
- copy the disk image to the other host
- Update your guest config file for COLO:
- disk
disk = [ 'format=raw,devtype=disk,access=w,vdev=hda,backendtype=qdisk,colo,colo-params=192.168.2.88:9000:exportname=qdisk1,active-disk=/mnt/ramfs/active_disk.img,hidden-disk=/mnt/ramfs/hidden_disk.img,target=/home/yanghy/colo-hvm-suse64.img' ]
- nic
vif = [ 'mac=00:16:4f:00:00:11, bridge=br0, model=e1000, forwarddev=eth1' ]
NOTE: The IP/port in colo-params is the secondary host's IP address and listening port. forwarddev is the directly connected NIC.
Run COLO
On both Primary/Secondary hosts:
# modprobe nf_conntrack_colo
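To verify the module actually loaded (a quick check, not part of the original instructions):

# lsmod | grep nf_conntrack_colo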
On Secondary host: execute the following script:
#! /bin/bash

active_disk=/mnt/ramfs/active_disk.img
hidden_disk=/mnt/ramfs/hidden_disk.img

function create_image() {
    /home/yanghy/client-xen/tools/qemu-xen-dir/qemu-img create -f qcow2 $1 10G
}

function prepare_temp_images() {
    grep -q "^none /mnt/ramfs ramfs" /proc/mounts
    if [[ $? -ne 0 ]]; then
        mount -t ramfs none /mnt/ramfs/ -o size=2G
    fi
    if [[ ! -e $active_disk ]]; then
        create_image $active_disk
    fi
    if [[ ! -e $hidden_disk ]]; then
        create_image $hidden_disk
    fi
}

prepare_temp_images
Note: It is recommended to put the active disk and hidden disk on a ramdisk.
On Primary host:
# xl create -p <domconfig>
# xl remus -c -u <dom> 192.168.2.88
Known problems
- The secondary VM may crash due to a triple fault.
NOTE: This problem doesn't happen every time, so you can run COLO again to work around it.
Troubleshooting
If an error happens when starting COLO, you can:
- Make sure you have all the necessary modules that Dom0 needs on both sides.
- Make sure you have followed all the instructions in this README.
- Try to reboot both primary and secondary host.
- If you still have problems, collect the error logs and contact Yang Hongyang (yanghy@cn.fujitsu.com) or Wen Congyang (wency@cn.fujitsu.com) for help.
Example
If you use SLES11.3, you can get the detailed steps from the wiki: Setup COLO on SLES11 SP3
An example guest config:
builder='hvm'
memory = 512
vcpus=2
cpus=["2", "3"]
name = "hvm_nopv_colo"
disk = [ 'format=raw,devtype=disk,access=w,vdev=hda,backendtype=qdisk,colo,colo-params=192.168.2.88:9000:exportname=qdisk1,active-disk=/mnt/ramfs/active_disk.img,hidden-disk=/mnt/ramfs/hidden_disk.img,target=/home/test/colo-hvm-suse64.img' ]
vif = [ 'mac=00:16:4f:00:00:11, bridge=xenbr0, model=virtio-net, forwarddev=eth1' ]
#-----------------------------------------------------------------------------
# boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
boot="c"
sdl=0
vnc=1
vnclisten='0.0.0.0'
vncunused = 1
stdvga = 0
serial='pty'
apic=1
acpi=1
pae=1
extid=0
keymap="en-us"
localtime=1
hpet=1
usbdevice='tablet'
xen_platform_pci=0
Links
For more information see: