CrossPoolMigrationv1
This page describes a possible design for cross-pool migration (which also works for within-pool migration, both with and without shared storage). It is an alternative to the design described in CrossPoolMigration.
This design has the following features:
- DRBD is used to replicate disks on demand
- this is storage-format agnostic: it doesn't require .vhd
- the SM on one host can generate URIs which can be used by other SMs to bootstrap the disk mirroring process
- a single codepath is used in xapi, making testing easier
- this replaces the existing migration API
- a simple RESTful API makes the whole thing easy to test and prod by hand (see the example after this list)
- the xapi pieces and the SM pieces can be developed and tested independently and integrated at the end
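As a sketch of how the bootstrap might look on the wire (the URI layout, HTTP verb and JSON fields below are illustrative guesses, not a defined interface): the source SM dereferences a URI generated by the destination SM and receives back the parameters it needs to configure mirroring.

 PUT /services/SM/mirror/<vdi-uuid> HTTP/1.1
 Host: destination-host
 Content-Length: 0

 HTTP/1.1 200 OK
 Content-Type: application/json

 { "device": "/dev/drbd0", "port": 7789 }

Everything the source side needs in order to write a matching DRBD configuration and start replicating would then be in the response body.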
Pros/Cons of DRBD vs snapshot/copy
- Pro: by separating storage replication from memory transfer we can switch easily to "libxl" without modifications. Otherwise we would need callbacks from libxl to synchronise the disk copy with the memory copy.
- Pro: we don't need to write a snapshot/copy loop
- Con: we do need to write something to manage a DRBD instance, which may be on a shared host (e.g. a vanilla Debian dom0); see the sample configuration after this list
- Pro: migration downtime is expected to be lower, since replication cost is spread over each I/O request, rather than being bursty
- Con: continuous replication might slow down some workloads more than snapshot/copy
- Con: DRBD is Linux-only: it's not clear how this would work with (e.g.) a FreeBSD storage driver domain.
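To make the "manage a DRBD instance" point concrete, below is a minimal sketch of the per-disk drbd.conf resource stanza such a service would have to generate; the host names, IP addresses, ports and device paths are placeholders:

 resource vdi-mirror {
   protocol C;    # synchronous: a write completes only when it is on both hosts
   on source-host {
     device    /dev/drbd0;
     disk      /dev/VG/<vdi-uuid>;   # local block device backing the VDI (any format)
     address   10.0.0.1:7789;
     meta-disk internal;
   }
   on destination-host {
     device    /dev/drbd0;
     disk      /dev/VG/<vdi-uuid>;
     address   10.0.0.2:7789;
     meta-disk internal;
   }
 }

Protocol C is what spreads the replication cost over each I/O request, as noted above: every write is acknowledged only once both hosts have it.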
Component diagram
Migration sequence
Proposed milestones and task list
The following milestones are proposed:
Milestone number |
1 |
2 |
3 |
4 |
The following tasks are proposed:
Task | Status |
Create sample drbd.conf files | Completed |
Implement a DRBD service in dom0 | In progress |
Implement xapi HTTP operations | |
Implement SMAPI VDI.replicate_to, VDI.get_replication_target (see the sketch below) | |
Implement xapi HTTP heartbeat | |
Implement XenAPI VM.migrate, VM.receive | |
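The two new SMAPI operations in the task list could take roughly the following shape. This is a sketch in OCaml (xapi's implementation language): only the operation names come from this page, and the parameter and return types are assumptions.

 (* Hypothetical signatures for the proposed SMAPI extensions. *)
 module type VDI_REPLICATION = sig
   (* Called on the destination SM: prepare a local disk of a suitable size
      and return a URI which the source SM can use to bootstrap mirroring. *)
   val get_replication_target : sr:string -> vdi:string -> string

   (* Called on the source SM: mirror all writes to [vdi] to the disk behind
      [uri] (e.g. by generating a drbd.conf stanza and driving drbdadm),
      returning once the initial synchronisation is complete. *)
   val replicate_to : vdi:string -> uri:string -> unit
 end

Given these, xapi's single migration codepath would presumably call get_replication_target on the destination, hand the returned URI to replicate_to on the source, and then transfer the VM's memory while DRBD keeps the disks synchronised.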