Storage XenMotion

What is Storage XenMotion?

Storage XenMotion (SXM) is an extension of the existing XenMotion live VM migration feature, which allows VMs to be migrated between XCP/XenServer hosts in a resource pool. SXM removes the restriction that a VM can only migrate within its current resource pool and adds the option to live migrate a VM’s disks along with the VM itself: it is now possible to migrate a VM from one resource pool to another, to migrate a VM whose disks are on local storage, or to migrate a VM’s disks from one storage repository to another, all while the VM is running.

What can I do with this feature?

With Storage XenMotion, system administrators now have the ability to upgrade storage arrays, without VM downtime, by migrating a VM’s disks from one array to another. This same operation can be used to provide customers with a tiered storage solution, allowing operators to charge customers different rates for the use of different classes of storage hardware, and then allow customers to upgrade or downgrade between classes with no VM downtime. SXM also supports multiple storage repository types, including Local Ext, Local LVM, NFS, iSCSI, and Fibre Channel, meaning that it is possible to move a VM’s disks between different storage repository types. It is even possible to convert a thick-provisioned disk into a thin-provisioned disk by migrating it to a thin-provisioning storage repository.

Now that XCP no longer restricts VM migrations to hosts in the same resource pool as the source host, it is much easier to rebalance VM workloads between different pools. This is especially useful in cloud environments, and our Cloud team is currently in the process of integrating SXM with the CloudStack and OpenStack open-source cloud orchestration frameworks.

How does it work?

Storage XenMotion works by moving a VM’s virtual disks prior to performing a traditional XenMotion migration. To support this, we have introduced a new internal operation: snapshot and mirror. Each of the VM’s disks is snapshotted, and from the point of the snapshot onwards, all of that disk’s writes are synchronously mirrored to the destination storage repository. In the background, the snapshotted disk is copied to the destination location. Once a snapshot has finished copying, the next disk to be migrated is snapshotted and mirrored in the same way. This is repeated until all of the VM’s disks are in the process of being synchronously mirrored.
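
To make the sequencing concrete, here is a minimal Python sketch of the snapshot-and-mirror phase described above. It is purely illustrative: the helper functions are hypothetical stand-ins for operations that happen inside the XCP toolstack and are not part of any public API.

    # Illustrative sketch of the per-disk snapshot-and-mirror phase.
    # The helpers are hypothetical stand-ins for internal toolstack operations.

    def snapshot_disk(disk):
        print("snapshot taken of", disk)
        return disk + "-snapshot"

    def start_synchronous_mirror(disk, destination_sr):
        print("mirroring new writes on", disk, "to", destination_sr)

    def copy_snapshot(snapshot, destination_sr):
        # In the real system this copy runs in the background; the loop below
        # only moves on to the next disk once the copy has completed.
        print("copying", snapshot, "to", destination_sr)

    def start_storage_migration(vm_disks, destination_sr):
        """Bring every disk of the VM into the synchronously-mirrored state."""
        for disk in vm_disks:                               # one disk at a time
            snapshot = snapshot_disk(disk)                  # point-in-time snapshot
            start_synchronous_mirror(disk, destination_sr)  # later writes go to both SRs
            copy_snapshot(snapshot, destination_sr)         # copy the snapshotted data
        # On exit, every disk is being mirrored and the VM itself can be moved
        # with an ordinary XenMotion migration.

    start_storage_migration(["vdi-root", "vdi-data"], "sr-on-destination")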

If the VM is being migrated to a different resource pool, a new VM object is created in the destination pool’s database, and the migrating VM’s metadata is copied into this new object. This new VM’s metadata is then remapped so that it references the new disks that have been created on the destination storage repository, and so that the VM’s virtual NICs (VIFs) point to the correct networks on the destination. This network mapping is specified by the user. In the case of an in-pool Storage XenMotion, instead of creating a new VM object, the migrating VM’s metadata is remapped in-place.
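
A rough way to picture the remapping step is sketched below, assuming a simplified, hypothetical VM record rather than xapi’s real data model: the new VM object keeps the same metadata, but its disk references are rewritten to point at the mirror copies created on the destination, and its virtual NICs are rewired according to the user-supplied network mapping (the vif:/vdi: parameters shown in the migration section below).

    # Sketch of metadata remapping over a simplified, hypothetical VM record.

    def remap_vm_metadata(vm_record, vdi_map, network_map):
        """Return a copy of the record that references destination objects."""
        remapped = dict(vm_record)
        # Point each disk reference at the mirror copy created on the destination SR.
        remapped["vdis"] = [vdi_map[vdi] for vdi in vm_record["vdis"]]
        # Point each virtual NIC at the destination network chosen by the user.
        remapped["vif_networks"] = [network_map[vif] for vif in vm_record["vifs"]]
        return remapped

    source_vm = {"name": "vmNameHere",
                 "vdis": ["src-vdi-1", "src-vdi-2"],
                 "vifs": ["src-vif-1"]}
    vdi_map = {"src-vdi-1": "dest-vdi-1", "src-vdi-2": "dest-vdi-2"}
    network_map = {"src-vif-1": "dest-network-1"}

    print(remap_vm_metadata(source_vm, vdi_map, network_map))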

Once the VM metadata remapping is complete, the VM is ready to be migrated. At this point, the migration follows the same process as for the normal XenMotion operation. After the VM has migrated successfully, the VM metadata object on the source pool is deleted, and the leftover virtual disks, having been safely copied to their new location, are deleted from the source storage repository.

How does it compare to other virtualization platforms?

One of the major differences in the way that XCP implements live storage migration is in the storage migration algorithm itself. Other virtualization products implement storage migration by performing copy-dirty, a process in which the entire disk is copied once, and then any disk blocks that have been written to during that initial copy, called “dirty blocks,” are copied as well. This process repeats until there are very few dirty blocks left, at which point the VM is migrated as usual, with the remaining storage copied over with the migrating VM. This is effectively like snapshotting and copying disks repeatedly, until the difference between the new snapshots is very small.
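
The following toy Python model shows the shape of that copy-dirty loop; the block counts and the simulated write pattern are made up purely for illustration.

    import random

    # Toy model of iterative copy-dirty (pre-copy) storage migration: copy the
    # whole disk once, then keep re-copying whatever was dirtied in the meantime,
    # until the dirty set is small enough to finish during the final switch-over.

    def copy_dirty_migration(total_blocks=1000, threshold=16):
        dirty = set(range(total_blocks))      # first pass: the whole disk is "dirty"
        rounds = 0
        while len(dirty) > threshold:
            rounds += 1
            just_copied = len(dirty)          # copy this round's dirty blocks ...
            # ... while the guest keeps writing; simulate the newly dirtied blocks.
            dirty = {random.randrange(total_blocks) for _ in range(just_copied // 4)}
        print(rounds, "pre-copy rounds;", len(dirty), "blocks left for the final copy")

    copy_dirty_migration()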

XCP’s snapshot/mirror operation is unique. In Storage XenMotion, disk writes are written synchronously to both the source and the destination storage repositories. For some workloads, e.g. write-intensive ones, this synchronous mirroring approach minimizes VM downtime by avoiding a potentially lengthy stop-and-copy phase.

How to migrate?

The command you are looking for is "xe vm-migrate". Below are the details of what each option means for the migration, followed by a worked example after the option list.

live=[true|false] : migrate the VM without shutting it down
host= or host-uuid= : the name/UUID of the remote host, if it is in the same pool
remote-master= : the network address of the remote master host
remote-username= : the remote username
remote-password= : the remote password
remote-network= : Ideas?
destination-sr-uuid= : the destination storage repository (SR) that the VM’s disks should be migrated to
vif : parameters take the form 'vif:<source vif uuid>=<dest network uuid>'
vdi : parameters take the form 'vdi:<source vdi uuid>=<dest sr uuid>'
force=[true|false]
<vm-selectors> : examples would be uuid=00000000-0000-0000-0000-000000000000 or vm=vmNameHere
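
As a concrete illustration, the sketch below drives a cross-pool Storage XenMotion through the xe CLI from Python. Every address, credential, and UUID is a placeholder; substitute values taken from commands such as xe vm-list, xe host-list, xe sr-list, xe vif-list and xe network-list on your own pools.

    import subprocess

    # Cross-pool Storage XenMotion via `xe vm-migrate`; all values are placeholders.
    subprocess.check_call([
        "xe", "vm-migrate",
        "vm=vmNameHere",
        "live=true",
        "remote-master=203.0.113.10",            # address of the destination pool master
        "remote-username=root",
        "remote-password=destination-password",
        "destination-sr-uuid=11111111-1111-1111-1111-111111111111",
        # Map each source VIF to a network on the destination pool ...
        "vif:22222222-2222-2222-2222-222222222222=33333333-3333-3333-3333-333333333333",
        # ... and each source VDI to a storage repository on the destination pool.
        "vdi:44444444-4444-4444-4444-444444444444=55555555-5555-5555-5555-555555555555",
    ])

For a migration that stays inside the current pool and only moves the VM’s disks, the remote-* options should not be needed; giving live=true, host= (or host-uuid=) and the vdi: mappings should be sufficient.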