Xen FAQ Installation

= File Systems =


== Is there a way to have a shared root file system amongst a set of guest VMs? ==


Yes. The best way to achieve this is to install your guest on an LVM-backed block device. You can then create a snapshot of this filesystem with the command:

<pre><nowiki>
lvcreate -L<size of snapshot> -s -n <snapshot name> <backend disk name>
</nowiki></pre>

You should create one snapshot per guest, and then put the snapshot into the guest's .cfg file.

Using snapshots allows you to avoid having to make a read-only root filesystem. However, should you wish to use a read-only root fs, you can install the OS in an LVM partition and share it across all the Xen domUs by defining the disk as 'r' instead of 'w'.
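
For instance (a sketch only; the volume group and guest names below are hypothetical), each guest's .cfg can map the same LV read-only while keeping a private writable volume for its own data:

<pre><nowiki>
# Every guest maps the same shared root LV with 'r';
# only the per-guest data volume is writable ('w')
disk=['phy:/dev/vg0/shared-root,xvda1,r',
      'phy:/dev/vg0/guest1-data,xvda2,w']
</nowiki></pre>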


Another option is to mount a ramfs over /tmp and /var so that the root itself can stay read-only; see http://en.opensuse.org/How-To_Make_the_root_filesystem_read-only

One user reports: I use two disks in Xen, one mounted read-only as / and the other as a data partition. When I need a scratch partition with pre-populated data, I create an LV, put the data into it (software, etc.), then create a snapshot of that volume and pass the snapshot to the guest read-write. This way the original software partitions stay intact, and any (possibly damaging) changes made inside the guest are lost once the snapshot grows to 100%.

== Will I get good I/O performance if I use a file-backed (.img) block device? ==

No. Creating a volume with LVM and using the
<pre><nowiki>
disk=['phy:/...']
</nowiki></pre>

method in your .cfg file will yield better performance. This is particularly important for I/O-intensive VMs, such as databases.
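
A minimal sketch of that setup (the volume group, LV name, and size here are hypothetical):

<pre><nowiki>
# In dom0: create a dedicated LV for the guest
lvcreate -L 10G -n guest1-disk vg0

# In the guest's .cfg: export it as a physical device
disk=['phy:/dev/vg0/guest1-disk,xvda,w']
</nowiki></pre>
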
== Why is PV performance on my installation poor? ==

I installed two Debian web servers which run a phpBB3 forum. One runs in a Xen paravirtualized domU (512 MB of RAM, 1 vcpu, disk on a raw file: file:/home/vale/debian.img,hda,w) on [[OpenSuse]] 11.0, and the other runs in a Hyper-V virtual machine (512 MB, 1 CPU) on Windows Server 2008 R2. Performance on the PV domU is much poorer than on Hyper-V: ab -n 3000 -k -c50 http://site.lan/phpBB3/ returns 13.22 req/sec on the PV domU and 38.37 req/sec on Hyper-V. Why?

I installed the guest O.S. as an HVM domain, then installed the linux-xen-image files and used them for vmlinuz and initrd. I also installed libc6-xen.


Xen PV config file:

<pre><nowiki>
name='pv'
ramdisk='/home/vale/initrd.img-2.6.18-6-xen-686'
kernel='/home/vale/vmlinuz-2.6.18-6-xen-686'
bootloader=''
vif=['mac=00:16:3e:33:37:4f, bridge=xenbr0']
vcpus=1
memory=512
disk=['file:/home/vale/pv.img,hda,w']
on_reboot='restart'
on_crash='restart'
extra=''
root='/dev/hda1'
platform='xen'
</nowiki></pre>

I'm assuming that phpBB3 is relatively I/O-intensive (since it uses a database, which I assume you also installed on the same host). In that case, your bad numbers are probably caused by this:

<pre><nowiki>
disk=['file:/home/vale/pv.img,hda,w']
</nowiki></pre>


On Xen, file:/ is not recommended; you should use tap:aio:/ instead for file-backed storage. Then again, another user reported that even tap:aio isn't good enough:

http://lists.xensource.com/archives/html/xen-users/2009-01/msg00820.html

So in short, if you use Xen PV, you might want to consider using LVM/partition-backed storage.
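
If you do stay with file-backed storage, the switch from the loopback driver to blktap is a one-line change in the .cfg file (shown here against the image path from the question above):

<pre><nowiki>
# loopback-backed, not recommended:
#disk=['file:/home/vale/pv.img,hda,w']
# blktap-backed:
disk=['tap:aio:/home/vale/pv.img,hda,w']
</nowiki></pre>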

== Is it possible to start a VM that contains just gpxe? ==
(which, when started, will fetch an image from a provisioning server and load that image)

There is an article that walks through setting up a PXE boot environment for the Xen host (hypervisor + dom0) and for Xen guests, both PV (para-virtualized) and HVM (hardware-assisted virtual machine): http://os-drive.com/files/docbook/xen-pxeboot.html
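
For the HVM case, the idea looks roughly like this in a classic xm-style .cfg (a sketch under the assumption that your hvmloader build ships a gPXE/Etherboot option ROM; the MAC and bridge are hypothetical):

<pre><nowiki>
# HVM guest with no local disk: the emulated NIC's
# gPXE ROM fetches the image from the provisioning server
builder='hvm'
kernel='/usr/lib/xen/boot/hvmloader'
memory=512
vif=['mac=00:16:3e:aa:bb:cc, bridge=xenbr0']
boot='n'    # n = network (PXE), c = disk, d = cdrom
</nowiki></pre>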

== How can I make disk resizing work? ==

I tried to resize a disk of my data guest from 100 to 400 GB. I did an lvresize /dev/xendata/data-disk -L 400G and it worked. I started the guest and ran df -h to check the size, but it still shows 100 GB.

The container is bigger but the filesystem isn't. Resizing an LV doesn't make the FS any bigger.

Log into the domU and do a resize2fs <device>. You can do this while it's mounted, as long as the filesystem is getting bigger.
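
Putting the two steps together (the LV path is from the question; the in-guest device name is hypothetical, so check df -h first; this assumes the LV is exported as an unpartitioned device, see the caveat in the next paragraph):

<pre><nowiki>
# In dom0: grow the logical volume
lvresize -L 400G /dev/xendata/data-disk

# Inside the domU: grow the ext2/ext3 filesystem to match
resize2fs /dev/xvdb
</nowiki></pre>
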
Oh, and if you've partitioned the LV inside the guest, you'll also need to resize the partition (BEFORE you do a resize2fs, etc.). There are two ways to do this. The safest is to use parted, which works if you're using ext2/ext3 (and a couple of other popular filesystems; reiser, I think). The other method is to delete the partition and recreate it with the extended end point. This isn't quite as safe, and requires that 1) the start point of the partition is exactly the same as it was before, and 2) the partition is the last (or only) one on the LV.


== How do I overcommit storage? ==

I want to use Xen with dynamic slices. For example, I have 20 FreeBSD-based domUs on Xen hypervisor 3.3.1 with a Debian Lenny dom0. All the domUs have 80 GB LVM partitions, but they really use only 20 of those 80 GB, and I want to create more domUs. How can I do that? I know that some virtualisation products (for example [[VirtualBox]]) can do dynamic slices.

Do you mean storage overcommit? That is, assigning more storage to the domUs than you actually have? If so, it's not a matter of Xen vs VirtualBox; it's a matter of what storage backend you use. If you use one of the following, you will be able to overcommit storage (see the sketch below):

*sparse raw file (with file: or tap:aio:)
*qcow
*vmdk/vdisk (I think full support is only in newer Xen or Opensolaris)
*zvol (on Opensolaris)

If you use disk/partition/LVM for domU storage, you won't be able to.
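
For example, a sparse raw file can present 80 GB to the guest while occupying almost nothing on disk until the guest actually writes data (the paths below are hypothetical):

<pre><nowiki>
# In dom0: create an 80 GB sparse image (no blocks allocated yet)
dd if=/dev/zero of=/var/lib/xen/images/guest21.img bs=1 count=0 seek=80G

# Apparent size vs. real usage
ls -lh /var/lib/xen/images/guest21.img   # 80G
du -h /var/lib/xen/images/guest21.img    # ~0

# In the guest's .cfg
disk=['tap:aio:/var/lib/xen/images/guest21.img,hda,w']
</nowiki></pre>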


= 32bit vs 64 bit =

== Is there any way to install 64Bit Linux DomU on 32Bit Linux Dom0? ==

The types of domU that can be run depend mostly on the hypervisor, not on dom0. So if you have a 64bit hypervisor, you should be able to run both 32 and 64bit PV and HVM domUs, regardless of whether dom0 is 32 or 64bit.

If you have a 32bit dom0 and a 32bit hypervisor, you should be able to run a 64bit HVM domU, but not a 64bit PV domU.
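
You can check what your hypervisor will actually accept by looking at the xen_caps line in dom0 (the output below is illustrative, not from a specific system):

<pre><nowiki>
# 'xm info' on older toolstacks, 'xl info' on newer ones
xm info | grep xen_caps
# xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p
#            hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
</nowiki></pre>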