Xen Storage Management
Definitions
Block Device
Storage 'published' in the form of a block-type device file, usually somewhere in /dev (see mknod(1)). Block devices act like one huge file, possibly with extra features defined by the hardware or software behind them.
Hardware Block Device
Domain0 access to real physical storage, be it a whole IDE disk, a partition, or a hardware RAID.
Software Block Device
Software volume management (evms, lvm) and software raid (md) wrap hardware and other software block devices and give them new names as part of the interface of adding functionality to the storage they manage. For example, a software raid of /dev/hda and /dev/hdb might be accessed through an 'md' device named /dev/md0. Alternatively, it might also be part of an lvm volume group which is managed by evms and named 'example'. In that case its name would be /dev/evms/example.
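As an illustrative sketch (using the hypothetical device names from the example above, and leaving evms itself aside), the RAID half of that stack could be created in Domain0 with mdadm:

# Mirror two whole disks; the array is then accessed as /dev/md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda /dev/hdb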
Network Block Device
There are several kernel drivers which export storage as a simple block device which clients may use identically to how they would use a regular hardware or software block device.
One can even combine all three classes. A Xen Project domain (or any unix host for that matter) could create a filesystem on an evms-managed volume created from a region of an lvm2 container whose storage regions are each software raid devices created from network block devices exported from other hosts. This sort of configuration could be used to create a cluster-wide block device which would tolerate some storage node failures.
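A minimal sketch of such a stack, assuming /dev/nbd0 and /dev/nbd1 have already been attached from two other hosts and using plain lvm2 rather than evms (all names here are hypothetical):

# Mirror the two network block devices with software RAID
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nbd0 /dev/nbd1

# Layer an lvm2 volume group and logical volume on top of the array
pvcreate /dev/md0
vgcreate clustervg /dev/md0
lvcreate -L 10G -n shared clustervg

# Put a filesystem on the logical volume
mkfs.ext3 /dev/clustervg/shared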
Types of network block device
- nbd: an extremely simple protocol available in the mainline Linux kernel
- gnbd: an enhanced nbd from Red Hat, extended for cluster use
- iSCSI: a heavily-engineered protocol and industry standard for accessing devices across an IP network
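As a rough sketch (the target name and portal address are hypothetical), an iSCSI device can be attached in Domain0 with the open-iscsi tools, after which it appears as an ordinary block device under /dev:

# Discover targets offered by a portal
iscsiadm -m discovery -t sendtargets -p 192.0.2.10

# Log in to one of the discovered targets
iscsiadm -m node -T iqn.2014-08.com.example:storage.disk1 -p 192.0.2.10 --login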
Network File System
Network filesystems differ from block devices in that, like regular filesystems, they implement unix filesystem semantics such as directories, symlinks, hardlinks, character and block device files, access control lists and locking mechanisms. Not all network filesystems implement these features equally, though.
There are many to choose from. Here are some candidates for use with Xen Project software:
- nfs3: mature, widely supported
- gfs: developed by Red Hat for clusters
- gfs2: successor to gfs, still in development
- afs: designed for high client:server ratios and high-latency networks
- Coda: descended from afs, still in development
- Lustre: descended (indirectly) from Coda, designed for large clusters
Use Volume Management
There are many reasons to use volume management for all block devices, and few disadvantages.
A volume manager can gracefully handle situations in which hardware block device names change. It scans devices as it comes to know of them and stitches them together or gives them access-method-independent names. As a result, guest domain configurations need not be updated when, for example, /dev/hda suddenly becomes /dev/hde because the hard drive was moved from the ata66 controller on the old motherboard to the spiffy ultra133 controller that was just added to the host.
A volume manager can also facilitate hardware upgrades. It can allow the administrator to safely migrate data in use by a live guest domain from one device to another without impacting the operation of the guest.
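For instance, with lvm2 (the volume group and device names below are hypothetical), data can be moved off an old disk while the guests using it keep running:

# Add the new disk to the volume group
pvcreate /dev/sdc
vgextend vg0 /dev/sdc

# Move all allocated extents off the old disk, online
pvmove /dev/sdb /dev/sdc

# Retire the old disk from the volume group
vgreduce vg0 /dev/sdb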
Configuring Guest Access
Domain0 grants a guest domain access to host domain block devices with the 'disk' parameter in the domain's configuration:
disk = [ 'phy:dom0,domU,mode' ]
- dom0: the Domain0 device path, in this case /dev/dom0. Paths are relative to /dev.
- domU: how the host domain presents the device to the guest domain.
- mode: 'r' for read-only, 'w' for read-write.
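For example (the volume group, logical volume and guest device names are hypothetical), a guest could be given read-write access to an LVM logical volume as its first disk:

# /dev/vg0/guest-root in Domain0 appears as xvda inside the guest, read-write
disk = [ 'phy:vg0/guest-root,xvda,w' ]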
The path and filename of a block device in Domain0 will depend on the modules and userspace processes exposing the hardware and the (optional) presence of a /dev manager such as udev.
Swap
Guest domains can be given swap space by exporting any block device as a VBD, the same way one exports a device containing a filesystem. As with filesystem devices, it is helpful to manage swap devices with a volume manager to simplify device management. For example, if a guest domain's root, home and etc filesystems and swap space were all in the same container, that container and all its contents could be transparently migrated to a new device, local or remote, without impacting the operation of the guest.
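A minimal sketch, assuming an lvm2 volume group named vg0 and the guest device names used above (all hypothetical): create and format a swap volume in Domain0, export it alongside the root disk, and enable it inside the guest:

# In Domain0: carve out and format a swap volume
lvcreate -L 1G -n guest-swap vg0
mkswap /dev/vg0/guest-swap

# In the guest's configuration: export root and swap as separate VBDs
disk = [ 'phy:vg0/guest-root,xvda,w',
         'phy:vg0/guest-swap,xvdb,w' ]

# Inside the guest: enable the swap device (or list it in /etc/fstab)
swapon /dev/xvdb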