Dm-thin for local storage
The Storage Manager (SM) currently supports 2 kinds of local storage:
- .vhd files on an ext3 filesystem on an LVM LV on a local disk
- vhd-format data written directly to LVM LVs on a local disk
We can also directly import and export .vhd-format data using HTTP PUT and GET operations, see Disk import/export.
In all cases the data path uses "blktap" (the kernel module) and "tapdisk" (the user-space process). This means that:
- constant maintenance is required because blktap is an out-of-tree kernel module
- every I/O request incurs extra latency due to kernelspace/userspace transitions, which is a significant problem on fast flash devices (e.g. PCIe SSDs)
- we only support vhd, not vmdk or qcow2 (nor, potentially, direct access to object stores in future)
Analysis
We currently use the vhd format and blktap/tapdisk implementation for 2 distinct purposes:
- as a convenient, reasonably efficient, standard format for sharing images such as templates
- as a means of implementing thin provisioning on the data path, where blocks are allocated on demand and storage can be over-provisioned
If instead of using vhd format and blktap/tapdisk everywhere we
- use a tool (e.g. qemu-img) which reads and writes vhd, qcow2 and vmdk, and which can expose an image as a block device on an unmodified kernel (e.g. via NBD; see the qemu-nbd sketch after this list)
- use device-mapper modules to provide thin provisioning and low-latency access to the data (see the dm-thin sketch after this list)
then we
- avoid the blktap kernel module maintenance
- reduce the common-case I/O request latency by keeping it all in-kernel
- extend the set of formats we support, and make it easier to add direct object store access in future.
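To illustrate the first point: the sketch below is a minimal, illustrative wrapper (not existing SM code) showing how an image in vhd, qcow2 or vmdk format could be attached as an ordinary block device using qemu-nbd and the in-kernel nbd driver. The function names, image path and device number are hypothetical; note that qemu refers to the vhd format as "vpc".

 # Minimal sketch: expose a disk image as /dev/nbdX using qemu-nbd.
 # Assumes the "nbd" kernel module and the qemu-nbd binary are installed.
 # Function names, the image path and the device number are hypothetical.
 import subprocess

 def attach_image(image_path, fmt="qcow2", device="/dev/nbd0"):
     # Load the nbd module so that /dev/nbd* devices exist
     subprocess.check_call(["modprobe", "nbd", "max_part=8"])
     # qemu-nbd reads vhd ("vpc" in qemu terminology), qcow2, vmdk, ...
     subprocess.check_call(
         ["qemu-nbd", "--format", fmt, "--connect", device, image_path])
     return device

 def detach_image(device="/dev/nbd0"):
     subprocess.check_call(["qemu-nbd", "--disconnect", device])

 if __name__ == "__main__":
     print("attached as", attach_image("/srv/images/template.qcow2"))

In this proposal such an attach would only be needed for control-path operations such as importing, exporting or copying images; a running VM's data path would go through device-mapper instead.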
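For the second point, the dm-thin device-mapper target provides thin provisioning entirely in-kernel. The sketch below follows the thin-pool and thin target table formats from the kernel's device-mapper documentation; the device paths, names, sizes (in 512-byte sectors) and helper functions are hypothetical examples, not part of SM today.

 # Minimal sketch: build a dm-thin pool and one thin volume with dmsetup.
 # Device paths, names and sizes are hypothetical; sizes are 512-byte sectors.
 import subprocess

 def dmsetup(*args):
     subprocess.check_call(["dmsetup"] + list(args))

 def create_thin_pool(pool, metadata_dev, data_dev, data_sectors,
                      block_size=128, low_water_mark=32768):
     # thin-pool table: start length thin-pool <metadata dev> <data dev>
     #                  <data block size> <low water mark>
     table = "0 %d thin-pool %s %s %d %d" % (
         data_sectors, metadata_dev, data_dev, block_size, low_water_mark)
     dmsetup("create", pool, "--table", table)

 def create_thin_volume(pool, name, dev_id, virtual_sectors):
     # Allocate a new thin device inside the pool, then map it as a device
     dmsetup("message", "/dev/mapper/" + pool, "0", "create_thin %d" % dev_id)
     table = "0 %d thin /dev/mapper/%s %d" % (virtual_sectors, pool, dev_id)
     dmsetup("create", name, "--table", table)

 if __name__ == "__main__":
     # 10 GiB of real data space backing a 100 GiB (virtual) thin volume
     create_thin_pool("pool0", "/dev/vg0/thin_meta", "/dev/vg0/thin_data",
                      data_sectors=20971520)
     create_thin_volume("pool0", "thin0", dev_id=0, virtual_sectors=209715200)

In practice these dmsetup calls would more likely be driven through LVM's own thin-provisioning support (lvcreate --type thin-pool and lvcreate --thin), but the resulting data path is the same: block allocation happens in the kernel, with no userspace transition per request.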

