COLO - Coarse Grain Lock Stepping

COLO, or Coarse Grain Lock Stepping, is a High Availability solution that builds on top of Remus. It is one of the features that was updated in Xen 4.5, but it has not been integrated into the Xen hypervisor code base (in other words, the code is out-of-tree). However, COLO works with Remus and xl as delivered in Xen 4.5.

= Background =

The COLO FT/HA (COarse-grain LOck-stepping Virtual Machines for Non-stop Service) project is a high availability solution. The primary VM (PVM) and the secondary VM (SVM) run in parallel: they receive the same requests from the client and generate responses in parallel. If the response packets from the PVM and the SVM are identical, they are released immediately; otherwise, a VM checkpoint is taken on demand. The idea was presented at Xen Summit 2012 and 2013, and in an academic paper at SOCC 2013.

= Components =

* COLO Manager:
** COLO Checkpoint/Failover Controller - modifies the save/restore flow to realize continuous migration, making sure the state of the VM on the Secondary side is always consistent with the VM on the Primary side.
** COLO Block Replication (please refer to [http://wiki.qemu.org/Features/BlockReplication BlockReplication]) - when the primary VM writes data into its image, the COLO disk manager captures this data and sends it to the secondary VM, which keeps the content of the secondary VM's image consistent with the content of the primary VM's image.
* COLO Proxy:
** A module that compares the packets returned by the Primary VM and the Secondary VM, and decides whether to start a checkpoint according to some rules. It is a Linux kernel module for the host.

= Current Status =

COLO (based on xm) has been in development for over three years, and a paper was published in 2013. Since Xen has deprecated xm in favour of xl, we are now implementing COLO on xl.

The overall status of COLO:
* COLO Manager ([https://github.com/macrosheep/xen/tree/colo-v6 View on Github])
* COLO Block replication ([https://github.com/wencongyang/qemu-colo View on Github])
* COLO Proxy ([https://github.com/macrosheep/colo-ft-proxy View on Github])

= Requirements =

== Hardware requirements ==

At least one directly connected NIC is needed to forward the network requests from the client to the secondary VM. The directly connected NIC must not be used for any other purpose. If your guest has more than one NIC, you should have a directly connected NIC for each guest NIC. If you don't have enough directly connected NICs, you can use VLANs.
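If you are short on physical NICs, a VLAN sub-interface can stand in for a directly connected link. A minimal sketch with iproute2 (the device name eth1 and VLAN ID 100 are illustrative, not from this guide):

<pre>
# ip link add link eth1 name eth1.100 type vlan id 100    # extra "directly connected" link
# ip link set eth1.100 up
</pre>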

== Dom0 requirements ==

# Kernel with dom0 support
# Kernel modules (a load/check sketch follows this list):
## nf_conntrack
## nf_conntrack_ipv4
## nf_nat
# libnl-tools >= 3.0
# If your host OS has OEM-released Xen tools, please uninstall them first.
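To sanity-check these requirements, you can load the modules by hand; a minimal sketch (the module names are the ones listed above, everything else is illustrative):

<pre>
# modprobe nf_conntrack
# modprobe nf_conntrack_ipv4
# modprobe nf_nat
# lsmod | grep -E 'nf_conntrack|nf_nat'    # verify the modules are loaded
# command -v nl-qdisc-list                 # provided by libnl-tools
</pre>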

== Guest requirements ==

Only HVM guests (without PV extensions) are supported now. If you want to use an OEM-released guest OS, please use SUSE. Red Hat and Ubuntu are not supported at the moment because we haven't found a way to disable their PV extensions. If you want to use Red Hat or Ubuntu, you need to build a recent kernel which supports the xen_nopv kernel parameter.
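For Red Hat or Ubuntu guests, a hedged sketch of enabling this via GRUB inside the guest (assuming a kernel new enough to understand xen_nopv; the paths are standard GRUB 2 defaults):

<pre>
# sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&xen_nopv /' /etc/default/grub
# update-grub    # on Red Hat: grub2-mkconfig -o /boot/grub2/grub.cfg
</pre>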

= Setup COLO environment =

== Network link topology ==

<pre>
=================================normal ======================================
                                +--------+
                                |client  |
         master                 +----+---+                    slave
-------------------------+           |            + -------------------------+
   PVM                   |           +            |                          |
+-------+         +----[eth0]-----[switch]-----[eth0]---------+              |
|guest  |     +---+-+    |                        |       +---+-+            |
|     [tap0]--+ br0 |    |                        |       | br0 |            |
|       |     +-----+  [eth1]-----[forward]----[eth1]--+  +-----+     SVM    |
+-------+                |                        |    |            +-------+|
                         |                        |    |  +-----+   | guest ||
                       [eth2]---[checkpoint]---[eth2]  +--+br1  |-[tap0]    ||
                         |                        |       +-----+   |       ||
                         |                        |                 +-------+|
-------------------------+                        +--------------------------+
e.g.
master:
br0: 192.168.0.33
eth1: 192.168.1.33
eth2: 192.168.2.33

slave:
br0: 192.168.0.88
br1: no ip address
eth1: 192.168.1.88
eth2: 192.168.2.88
</pre>
<pre>
===========================after failover=====================================
                                +--------+
                                |client  |
    master (dead)               +----+---+                 slave (alive)
-------------------------+           |            ---------------------------+
  PVM                    |           +            |                          |
+-------+         +----[eth0]-----[switch]-----[eth0]-------+                |
|guest  |     +---+-+    |                        |     +---+-+              |
|     [tap0]--+ br0 |    |                        |     | br0 +--+           |
|       |     +-----+  [eth1]-----[forward]----[eth1]   +-----+  |     SVM   |
+-------+                |                        |              |  +-------+|
                         |                        |     +-----+  |  | guest ||
                       [eth2]---[checkpoint]---[eth2]   |br1  |  +[tap0]    ||
                         |                        |     +-----+     |       ||
                         |                        |                 +-------+|
-------------------------+                        +--------------------------+
</pre>
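As a hedged sketch, the master's addresses from the example above can be brought up with iproute2 (device names and addresses are the ones in the diagram; adapt them to your hardware):

<pre>
# ip addr add 192.168.0.33/24 dev br0     # bridge to the client network
# ip addr add 192.168.1.33/24 dev eth1    # forward link to the slave
# ip addr add 192.168.2.33/24 dev eth2    # checkpoint link to the slave
# ip link set eth1 up
# ip link set eth2 up
</pre>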

== Test environment preparation ==

On both Primary/Secondary hosts:

* Check out the necessary repos:
<pre>
# cd ~
# git clone https://github.com/macrosheep/colo-ft-proxy
# git clone https://github.com/macrosheep/iptables
# git clone https://github.com/macrosheep/ColoPatchForQemu
# git clone https://github.com/macrosheep/xen
</pre>
* Prepare the host kernel
: The colo-proxy kernel module needs to cooperate with the Linux kernel, so you must patch the kernel with ~/colo-ft-proxy/colo-patch-for-kernel.patch, then compile and install the patched kernel ([https://www.kernel.org/ kernel-3.18.10] is recommended; see the sketch below).
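A minimal sketch of patching and building the host kernel, assuming the 3.18.10 sources are unpacked in ~/linux-3.18.10 (that path is an assumption, not part of this guide):

<pre>
# cd ~/linux-3.18.10
# patch -p1 < ~/colo-ft-proxy/colo-patch-for-kernel.patch
# make olddefconfig        # start from the running kernel's config
# make -j$(nproc)
# make modules_install && make install
</pre>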
* Proxy module
** The proxy module is used to compare network packets.
<pre>
# cd ~/colo-ft-proxy
# make
# make install
</pre>
* Modified [https://github.com/macrosheep/iptables iptables]
** We have added a new rule to the iptables command.
<pre>
# cd ~/iptables
# ./autogen.sh && ./configure
# make && make install
</pre>
* Build and install Xen
<pre>
# cd xen
# git checkout -b colo_v6
# make dist-xen; make install-xen
# make dist-tools; make install-tools
</pre>
* Apply the patch for qemu-xen and rebuild the Xen tools:
<pre>
# cd ~/xen/tools/qemu-xen-dir
# git am ~/ColoPatchForQemu/*.patch
# cd ~/xen
# make dist-tools && make install-tools
</pre>
Note: You must use qemu-xen; qemu-xen-traditional is not supported.

On Primary host:

* Install the guest
: Add "xen_platform_pci=0" to the guest config file.
: If you use SUSE, please select "physical machine" during installation.
: Copy the disk image to the other host (a hedged scp example follows below).
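A hedged example of the copy step, using the image path from the disk line below and the slave's checkpoint address from the topology example:

<pre>
# scp /home/yanghy/colo-hvm-suse64.img 192.168.2.88:/home/yanghy/
</pre>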
* Update your guest config file for COLO:
** disk
<pre>
disk = [ 'format=raw,devtype=disk,access=w,vdev=hda,backendtype=qdisk,colo,colo-params=192.168.2.88:9000:exportname=qdisk1,active-disk=/mnt/ramfs/active_disk.img,hidden-disk=/mnt/ramfs/hidden_disk.img,target=/home/yanghy/colo-hvm-suse64.img' ]
</pre>
** nic
<pre>
vif = [ 'mac=00:16:4f:00:00:11, bridge=br0, model=e1000, forwarddev=eth1' ]
</pre>
NOTE: The IP/port in colo-params is the secondary host's IP; forwarddev is the directly connected NIC.

= Run COLO =

On both Primary/Secondary hosts:
<pre>
# modprobe nf_conntrack_colo
</pre>

On Secondary host, execute the following script:

<pre>
#! /bin/bash

active_disk=/mnt/ramfs/active_disk.img
hidden_disk=/mnt/ramfs/hidden_disk.img

# Create a 10G qcow2 image with the qemu-img built from the patched qemu-xen
function create_image()
{
    /home/yanghy/client-xen/tools/qemu-xen-dir/qemu-img create -f qcow2 "$1" 10G
}

function prepare_temp_images()
{
    # Mount a 2G ramfs for the temporary disks if it is not mounted yet
    if ! grep -q "^none /mnt/ramfs ramfs" /proc/mounts; then
        mount -t ramfs none /mnt/ramfs/ -o size=2G
    fi

    # Create the active/hidden disks used by COLO block replication
    if [[ ! -e "$active_disk" ]]; then
        create_image "$active_disk"
    fi

    if [[ ! -e "$hidden_disk" ]]; then
        create_image "$hidden_disk"
    fi
}

prepare_temp_images
</pre>

Note: It is recommended to put the active disk and hidden disk on a ramdisk.

On Primary host:
<pre>
xl remus -c -u <domname> 192.168.2.88
</pre>
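Once COLO is running, a quick sanity check is that the guest shows up on both hosts (the domain name is whatever your config uses):

<pre>
# xl list
</pre>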

= Known problems =

# The secondary VM may crash due to a triple fault.
<pre>
NOTE: Problem 1 doesn't happen every time, so you can run COLO again to
work around it.
</pre>

= Troubleshooting =

If an error happens when starting COLO, you can:

# Make sure you have all the necessary modules that Dom0 needs on both sides.
# Make sure you have followed all the instructions in this README.
# Try rebooting both the primary and the secondary host.
# If you still have problems, collect the error logs (a hedged sketch of which logs to gather follows this list) and contact Yang Hongyang (yanghy@cn.fujitsu.com) or Wen Congyang (wency@cn.fujitsu.com) for help.
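A hedged sketch of the logs worth gathering (standard Xen locations; the output file names are illustrative):

<pre>
# xl dmesg > xl-dmesg.log                  # hypervisor console log
# dmesg > host-dmesg.log                   # host kernel log
# tar czf xen-logs.tar.gz /var/log/xen     # toolstack and qemu logs
</pre>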

= Example =

If you use SLES11.3, you can get the detailed steps from the wiki: [[COLO_-_Coarse_Grain_Lock_Stepping_SLES|Setup COLO on SLES11 SP3]]

An example guest config:

<pre>
builder='hvm'

memory = 512
vcpus=2
cpus=["2", "3"]

name = "hvm_nopv_colo"

disk = [ 'format=raw,devtype=disk,access=w,vdev=hda,backendtype=qdisk,colo,colo-params=192.168.2.88:9000:exportname=qdisk1,active-disk=/mnt/ramfs/active_disk.img,hidden-disk=/mnt/ramfs/hidden_disk.img,target=/home/test/colo-hvm-suse64.img' ]

vif = [ 'mac=00:16:4f:00:00:11, bridge=xenbr0, model=virtio-net, forwarddev=eth1' ]

#-----------------------------------------------------------------------------
# boot on floppy (a), hard disk (c), Network (n) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
boot="c"

sdl=0
vnc=1
vnclisten='0.0.0.0'
vncunused=1
stdvga=0
serial='pty'

apic=1
acpi=1
pae=1

extid=0
keymap="en-us"
localtime=1
hpet=1

usbdevice='tablet'

xen_platform_pci=0
</pre>
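Before starting COLO, you can let xl parse the config without creating the domain (the config file path here is illustrative):

<pre>
# xl create -n /etc/xen/hvm_nopv_colo.cfg
</pre>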

= Links =

For more information see:

* [http://www.slideshare.net/xen_com_mgr/colo-coarsegrain-lockstepping-virtual-machines-for-nonstop-service COLO Intro (Long), 2012]
* [http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/09/2012-lpc-virt-COLO-xen-dong.pdf COLO Intro (Short), 2012]
* [http://wiki.qemu.org/Features/COLO COLO on QEMU/KVM]

[[Category:High Availability]]