CrossPoolMigrationv3
This page describes a possible design for cross-pool migration (which also works for within-pool migration, with and without shared storage).
This design has the following features:
- disks are replicated between the two sites/SRs using a "replication service" which is aware of the underlying disk structure (e.g. in the case of .vhd it can use the sparseness information to speed up the copying)
- the mirror is made synchronous by using the *tapdisk* "mirror" plugin (the same as used by the existing disk caching feature)
- the pool-level VM metadata is exported and imported by xapi
- the domain-level VM metadata is exported and imported by xenopsd
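To make the division of responsibility concrete, here is a rough transmit-side sketch in Python-style pseudocode. Every function below (disks_of, replicate, start_synchronous_mirror, export_pool_metadata, migrate_domain) is a hypothetical placeholder standing in for the component named in the comment, not a real xapi, tapdisk or xenopsd entry point.

    def transmit(vm, receiver):
        # 1. Replication service: copy each disk to the destination SR.
        #    The service understands the disk format (e.g. .vhd), so it can
        #    use sparseness information to avoid copying empty blocks.
        for vdi in disks_of(vm):
            replicate(vdi, receiver)
        # 2. tapdisk "mirror" plugin: from this point every guest write goes
        #    to both the local disk and the remote copy, so the two sides
        #    stay synchronised without lengthening the blackout period.
        for vdi in disks_of(vm):
            start_synchronous_mirror(vdi, receiver)
        # 3. xapi: export the pool-level VM metadata and import it on the
        #    receiving pool.
        export_pool_metadata(vm, receiver)
        # 4. xenopsd: export/import the domain-level VM metadata and copy
        #    the running memory image.
        migrate_domain(vm, receiver)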
This design has the following advantages:
- by separating the act of mirroring the disks (like a storage array would do) from the act of copying a running memory image, we don't need to hack libxenguest. There is a clean division of responsibility between managing storage and managing running VMs.
- by creating a synchronous mirror, we don't increase the migration blackout time
- we can re-use the disk replication service to do efficient cross-site backup/restore (i.e. a periodic VM snapshot-and-archive job can use incremental archives)
Proposed APIs
- VM.migrate_receive(Host host, SR sr, Map(String,String) options): Map(String,String)
  - host: the host to move the running VM to
  - sr: the SR to replicate the VM's disks to
  - options: reserved for future advanced options
  - the return value should be treated as an abstract token identifying the receiver; this token is passed to the transmitter
- VM.migrate(VM vm, Map(String,String) dest, Bool live, Map(String,String) options)
  - vm: the running VM to migrate
  - dest: the result of a previous VM.migrate_receive call
  - live: if true, this is a "live" migration
  - options: reserved for future advanced options
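Assuming the two proposed calls were exposed through the standard XenAPI Python bindings with exactly the signatures above, a client would drive a migration in two steps: first ask the destination pool for a receiver token, then hand that token to the source pool. The addresses, credentials and the host/SR choices below are illustrative placeholders only.

    import XenAPI

    # Step 1: on the destination pool, prepare a receiver and obtain the
    # opaque token that identifies it.
    dest = XenAPI.Session("https://dest-pool-master")    # placeholder address
    dest.xenapi.login_with_password("root", "password")  # placeholder credentials
    try:
        host = dest.xenapi.host.get_all()[0]             # host to run the VM on
        pool = dest.xenapi.pool.get_all()[0]
        sr = dest.xenapi.pool.get_default_SR(pool)       # SR for the disk replicas
        token = dest.xenapi.VM.migrate_receive(host, sr, {})
    finally:
        dest.xenapi.session.logout()

    # Step 2: on the source pool, start a live migration, passing the token
    # from migrate_receive as the 'dest' argument.
    src = XenAPI.Session("https://source-pool-master")   # placeholder address
    src.xenapi.login_with_password("root", "password")   # placeholder credentials
    try:
        vm = src.xenapi.VM.get_by_name_label("my-vm")[0]
        src.xenapi.VM.migrate(vm, token, True, {})
    finally:
        src.xenapi.session.logout()

Because the return value of VM.migrate_receive is treated as an abstract token, the transmitter never interprets it; it simply forwards whatever the receiver produced.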