CrossPoolMigrationv3

{{TODO|Has this been implemented? If so, it should be moved to Designs in the XAPI Devel Index}}

This page describes '''the chosen design''' for [[CrossPoolMigration|Cross-pool migration]] (which also works for within-pool migration with and without shared storage).

= Summary =
This design has the following features:
# disks are replicated between the two sites/SRs using a "replication service" which is aware of the underlying disk structure (e.g. in the case of .vhd it can use the sparseness information to speed up the copying)
# the mirror is made synchronous by using the ''tapdisk'' "mirror" plugin (the same as used by the existing disk caching feature)
# the pool-level VM metadata is exported/imported by xapi
# the domain-level VM metadata is exported/imported by xenopsd
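
The following Python-style sketch shows the intended ordering of these pieces. It is illustrative only: the helper names (replicate_disks, start_tapdisk_mirror, and so on) are hypothetical and are not part of any real xapi or xenopsd interface.
<pre>
# Hypothetical outline only: none of these helpers are real xapi or xenopsd
# APIs; the stubs simply name the four features listed above.

def replicate_disks(vm, source_sr, dest_sr):
    # Bulk-copy the VM's disks with the vhd-aware replication service,
    # using sparseness information to skip unallocated blocks.
    pass

def start_tapdisk_mirror(vm, source_sr, dest_sr):
    # Attach the tapdisk "mirror" plugin so that subsequent writes are
    # applied synchronously to both copies.
    pass

def move_pool_level_metadata(vm, dest_host):
    # xapi exports the pool-level VM metadata and imports it on the
    # receiving pool.
    pass

def move_domain_level_metadata(vm, dest_host):
    # xenopsd exports the domain-level VM metadata and imports it on the
    # destination host.
    pass

def cross_pool_migrate(vm, source_sr, dest_sr, dest_host):
    replicate_disks(vm, source_sr, dest_sr)       # feature 1
    start_tapdisk_mirror(vm, source_sr, dest_sr)  # feature 2
    move_pool_level_metadata(vm, dest_host)       # feature 3
    move_domain_level_metadata(vm, dest_host)     # feature 4
</pre>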

See the following diagram:


[[File:components3.png]]

= Advantages =

This design has the following advantages:
# by separating the act of mirroring the disks (like a storage array would do) from the act of copying a running memory image, we don't need to hack libxenguest. There is a clean division of responsibility between managing storage and managing running VMs.
# by creating a synchronous mirror, we don't increase the migration blackout time
# we can re-use the disk replication service to do efficient cross-site backup/restore (i.e. to make periodic VM snapshot-and-archive operations use an incremental archive)

= Proposed APIs =

The following APIs are proposed:

# VM.migrate_receive(Host host, SR sr, Map(String,String) options): Map(String, String)
## host: the host to move the running VM to
## sr: the SR to replicate the VM's disks to
## options: for future advanced options
## the return value should be considered an abstract token, identifying the receiver. The token should be passed to the transmitter.
# VM.migrate(VM vm, Map(String, String) dest, Bool live, Map(String, String) options)
## vm: the running VM to migrate
## dest: the result of a previous VM.migrate_receive call
## live: if true this is a "live" migration
## options: for future advanced options

A client will first authenticate to the receiver and call "VM.migrate_receive". The client will then authenticate to the sender and call "VM.migrate", passing along the data provided by the receiving pool. If used this way then only the client has credentials for both pools. The receiving pool should take care to only include data in the "dest" result that can be used for a migrate; it should not include (for example) username and password information or a session_id. This avoids granting the transmitting pool too much access to the receiving pool.

The VM.migrate operation should be cancellable.

The APIs should work within a pool as well as between two pools.

If the destination host is the host the VM is '''already''' running on, then no domain memory copy will be performed unless the option "force-memory-copy" has value "true". This allows "VM.migrate" to be used to perform a live storage migration, without artificially needing more RAM. The "force-memory-copy" option is provided for testing purposes only and will be undocumented.
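
For example, a client that wanted to force the memory copy during an on-host storage migration could pass the option in the final argument to VM.migrate. The snippet below reuses the variable names from the Usage example further down:
<pre>
# "force-memory-copy" is the testing-only option described above; the
# variable names come from the Usage example below.
task = local.Async.VM.migrate(local_session_id, local_vm, dest, True,
                              {"force-memory-copy": "true"})["Value"]
</pre>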

If the destination SR is the same SR which already contains the VM's disks then no disk copy will be performed and the operation will behave like a regular VM migrate.

== Usage ==

A client wishing to perform a storage migration could do something like this:
<pre>
import xmlrpclib

# 1. Log into the remote pool
remote = xmlrpclib.Server("https://my.remote.pool/")
remote_session_id = remote.session.login_with_password("root", "password")["Value"]

# 2. Decide where you want the VM to go
remote_sr = remote.SR.get_by_name_label(remote_session_id, "my favourite storage")["Value"]
remote_host = remote.Host.get_by_name_label(remote_session_id, "my favourite host")["Value"]

# 3. Generate a token representing this destination
dest = remote.VM.migrate_receive(remote_session_id, remote_host, remote_sr, {})["Value"]

# Note: We don't log out of the remote session because it would invalidate 'dest'

# 4. Log into the local pool
local = xmlrpclib.Server("https://my.local.pool/")
local_session_id = local.session.login_with_password("root", "password")["Value"]

# 5. Decide which VM we want to move
local_vm = local.VM.get_by_name_label(local_session_id, "my lovely VM")["Value"]

# 6. Migrate the VM:
task = local.Async.VM.migrate(local_session_id, local_vm, dest, True, {})["Value"]

# Monitor the task for success/failure

# 7. Clean up
remote.session.logout(remote_session_id)
local.session.logout(local_session_id)
</pre>

== Using the xe command-line ==

I propose to overload the existing "xe vm-migrate" CLI command as follows:
<pre>
xe vm-migrate vm=... remote-address=... remote-username=... remote-password=... [destination-sr-uuid=...]
</pre>
where the arguments are as follows:
# remote-address: the IPv4 or IPv6 address of the remote pool
# remote-username: names a user with sufficient privileges to receive a migration
# remote-password: the password for remote-username
# destination-sr-uuid: optionally specifies an SR to contain the disks. If no SR is provided then the pool's default SR will be chosen. If there is neither a provided SR nor a default then an error will be returned.
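
For illustration only, a complete invocation might look like this (the address, credentials and SR UUID are placeholders, not real values):
<pre>
xe vm-migrate vm=my-lovely-VM remote-address=192.0.2.10 remote-username=root remote-password=password destination-sr-uuid=<uuid-of-destination-SR>
</pre>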

Note the slight semantic difference between using the two API calls and the CLI command: by design the CLI sends the command as a whole to the transmitting pool, which means the transmitting pool will process credentials for the receiving pool. Users who care about this should use the raw API to write their own clients instead.

[[Category:XAPI Devel]]
