Xen 4.5 RC1 test instructions
What needs to be tested
General things:
- Making sure that Xen 4.5 compiles and installs properly on different software configurations, particularly on different distros
- Making sure that Xen 4.5, along with appropriately up-to-date kernels, works on different hardware.
For more ideas about what to test, please see Testing Xen.
Installing
Getting RC1
- xen: with a recent enough git (>= 1.7.8.2), just pull the proper tag (4.5.0-rc1) directly from the main repo:
git clone -b 4.5.0-rc1 git://xenbits.xen.org/xen.git
With an older git version (and/or if that does not work, e.g., complaining with a message like this: Remote branch 4.5.0-rc1 not found in upstream origin, using HEAD instead), do the following:
git clone git://xenbits.xen.org/xen.git ; cd xen ; git checkout 4.5.0-rc1
- tarball: here is the 4.5.0 RC1 tarball: http://bits.xensource.com/oss-xen/release/4.5.0-rc1/xen-4.5.0-rc1.tar.gz (and its signature: http://bits.xensource.com/oss-xen/release/4.5.0-rc1/xen-4.5.0-rc1.tar.gz.sig); a download-and-verify sketch follows this list.
- RPMS: Michael Young graciously provided temporary Xen 4.5.0-rc1 RPMs. They are at the Koji temporary build (http://koji.fedoraproject.org/koji/taskinfo?taskID=7942114). You can use a scratch downloader (http://people.redhat.com/mikeb/scripts/download-scratch.py) to get all the RPMs; the command line is:
./download-scratch.py -t 7942114
Since 'virt-manager', 'virsh' and 'qemu-system-x86' are ALL built against Xen 4.4, you will have dependency problems. You can use xen-compat-libs (http://koji.fedoraproject.org/koji/taskinfo?taskID=7954061) if you want to reinstall the old Xen libraries.
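For reference, here is a minimal shell sketch of fetching and verifying the tarball. The URLs are the ones linked above; the gpg and yum invocations are generic examples rather than commands taken from this page (you need the Xen release signing key in your keyring for the verification to succeed):

# Fetch the RC1 tarball and its detached signature
wget http://bits.xensource.com/oss-xen/release/4.5.0-rc1/xen-4.5.0-rc1.tar.gz
wget http://bits.xensource.com/oss-xen/release/4.5.0-rc1/xen-4.5.0-rc1.tar.gz.sig

# Verify and unpack
gpg --verify xen-4.5.0-rc1.tar.gz.sig xen-4.5.0-rc1.tar.gz
tar xzf xen-4.5.0-rc1.tar.gz
cd xen-4.5.0-rc1

# If you went the scratch-RPM route instead, something like this installs them
# (the exact package list depends on what the scratch build produced):
# yum install ./xen*.rpm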
Building
Instructions are available for building Xen on Linux, NetBSD, and FreeBSD.
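As a rough guide, a from-source build on Linux usually follows the pattern below (a minimal sketch assuming the build dependencies listed in the README are already installed; see the per-OS instructions for the authoritative steps):

# Run from the top-level xen source directory
./configure             # add --prefix=/usr to keep the pre-4.4 install location
make world              # build the hypervisor and the tools
sudo make install
sudo ldconfig           # refresh the shared-library cache for the new libxen* libraries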
Known issues
Systemd integration
Affects CentOS 7, SLES 12, Fedora 21 and Debian Jessie. The Xen source now contains systemd files that can be used to configure the various run-time services. In the past the distributions carried their own versions of these files, but now we host them. This is not yet complete; patches for this are being worked on for RC2.
Stubdomains build issues
The stubdomains will not build. A fix is in staging (and will make RC2); alternatively, stubdom/Makefile should use QEMU_TRADITIONAL_LOC.
Building against libxl (outside code)
If you are building external code against libxl using any APIs from before Xen 4.5, you will encounter build errors. A patch for this issue will be in RC2.
Migrating large Windows guests can cause the WMI service to hang
A patch is in 'staging' and will be in RC2.
pygrub parsing grub1 problems
pygrub (Python code) has problems parsing the grub.cfg file and hence cannot boot such PV guests.
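If you hit this, one generic workaround (not taken from this page) is to bypass pygrub and boot the guest kernel directly from dom0, assuming you can copy the guest's kernel and initrd out of its filesystem. A hypothetical guest-config excerpt:

# Replace bootloader = "pygrub" with a direct kernel boot (paths are illustrative)
kernel  = "/var/lib/xen/boot/guest-vmlinuz"
ramdisk = "/var/lib/xen/boot/guest-initrd.img"
extra   = "root=/dev/xvda1 ro console=hvc0"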
Test instructions
General
- Remove any old versions of Xen toolstack and userspace binaries (including qemu).
- Download and install the most recent Xen 4.5 RC, as described above. Make sure to check the README for changes in required development libraries and procedures. Some particular things to note:
  - Since Xen 4.4 the default installation path has changed from /usr to /usr/local. Take extra care when removing any old versions to allow for this.
Once you have a Xen 4.5 RC installed, check that you can install a guest and use it in the ways you normally would, i.e. that your existing guest configurations, scripts, etc. still work.
In particular, if you were using the (deprecated) xm/XEND toolstack, note that it is now REMOVED, so please do try your normal use cases with the XL toolstack. The XL page has some information on the differences between XEND and XL.
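As a starting point, a few generic xl commands for basic sanity checks (the config file path and guest name below are just examples):

xl info                        # hypervisor version, memory, capabilities
xl list                        # running domains; should at least show Domain-0
xl create /etc/xen/guest.cfg   # start a guest from an existing xl config
xl console guest               # attach to the guest console (Ctrl-] to detach)
xl shutdown guest              # clean shutdown of the guest
xl dmesg                       # hypervisor log, useful when reporting issues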
Specific RC1 things
None at this time.
Specific ARM Test Instructions
Follow the Xen_ARMv7_with_Virtualization_Extensions guide: http://wiki.xen.org/wiki/Xen_ARMv7_with_Virtualization_Extensions
MMIO passthrough
To allow auto-translated domains to directly access specific hardware I/O memory pages pertaining to a device that is not IOMMU-protected, use the iomem configuration option, whose usage is described below.
iomem=[ "IOMEM_START,NUM_PAGES[@GFN]", "IOMEM_START,NUM_PAGES[@GFN]", ... ]
IOMEM_START is a physical page number. NUM_PAGES is the number of pages, beginning with IOMEM_START, to allow access to. GFN specifies the guest frame number where the mapping will start in the domU's address space. If GFN is not specified, the mapping will be performed using IOMEM_START as the start in the domU's address space, therefore performing a 1:1 mapping by default. All of these values must be given in hexadecimal.
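For example, a hypothetical guest-config line (the frame numbers are made up for illustration) granting access to 2 pages of I/O memory starting at machine frame 0x47000:

# 1:1 mapping of 2 pages starting at machine frame 0x47000
iomem = [ "47000,2" ]
# the same range remapped to guest frame 0x80000 instead
# iomem = [ "47000,2@80000" ]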
Specific x86 Test Instructions
PVH
Xen 4.4 added support to run certain PV guests in PVH mode. This requires the operating system to support a subset of the PV ABI; as such, only two such guests exist:
- Linux 3.18-rc1 and later (The previous versions of Linux had an ABI violation so they do not work),
- FreeBSD (see the FreeBSD guest wiki page).
- FreeBSD initial domain support is available out of Roger's branch (based on a stable/10 snapshot).
In Xen 4.5 we also added support to run these guests as the initial domain (dom0). Unfortunately the work to make this work on AMD did not make it, so it only works on Intel. To use this, an extra parameter on the Xen command line is required: dom0pvh=1.
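As an illustration (a sketch, not text from this page): on a Debian-style GRUB2 setup the hypervisor option can be added via /etc/default/grub, and the 4.4/4.5 xl guest-config option for requesting PVH mode is pvh:

# /etc/default/grub: append dom0pvh=1 to the hypervisor options, then regenerate
# the GRUB config (update-grub or grub2-mkconfig -o /boot/grub2/grub.cfg)
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M dom0pvh=1"

# Hypothetical PV guest config requesting PVH mode
pvh = 1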
Fedora
If you are using Fedora, you can install the RPMs mentioned above and use this test case. Note that the RPMs mentioned above conflict with the 'virt' type tools in Fedora 21 (as they are built against Xen 4.4). One workaround, after installing the RPMs, is:
cd /usr/lib64
ln -s libxenlight.so.4.5 libxenlight.so.4.4
ln -s libxenctrl.so.4.5 libxenctrl.so.4.4
After that restart libvirtd:
systemctl restart libvirtd
Launching guests might still not work with libvirt; in that case you can export the configuration to the native format and use xl to launch the guests:
[root@localhost ~]# virsh -c xen:/// dumpxml F21-PV-32 > F21-PV-32.xml
[root@localhost ~]# virsh -c xen:/// domxml-to-native xen-xm F21-PV-32.xml
name = "F21-PV-32"
uuid = "3c2c560f-61d3-42d3-9152-77f13de80686"
maxmem = 1024
memory = 1024
vcpus = 1
bootloader = "/usr/bin/pygrub"
localtime = 0
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ "type=vnc,vncunused=1,keymap=en-us" ]
vif = [ "mac=00:16:3e:ad:cc:c6,bridge=xenbr0,script=vif-bridge" ]
disk = [ "phy:/dev/g/F21-PV-32,xvda,w" ]
[root@localhost ~]# virsh -c xen:/// domxml-to-native xen-xm F21-PV-32.xml > F21-PV-32.xm
[root@localhost ~]# xl create F21-PV-32.xm
Parsing config from F21-PV-32.xm
libxl: warning: libxl_bootloader.c:415:bootloader_disk_attached_cb: bootloader='/usr/bin/pygrub' is deprecated; use bootloader='pygrub' instead
EFI
Xen 4.3 and later can be built as an EFI binary. Xen 4.5 can also be built as an EFI binary on ARM.
Instructions on how to build Xen as an EFI binary and boot it can be found here.
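For orientation, booting xen.efi involves a configuration file placed next to it on the EFI System Partition. The sketch below is a hypothetical example (file names and option values are made up; the exact format is described in the EFI documentation referenced above):

[global]
default=xen

[xen]
options=console=vga dom0_mem=2048M
kernel=vmlinuz-3.18 root=/dev/sda2 ro
ramdisk=initrd-3.18.img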
libvirt
libvirt is usually shipped by the distro. You need the libvirt-daemon-driver-xen package to manage your Xen instances. If you are building from scratch, follow the Libvirt compiling HOWTO.
For instructions on how to install guests, please visit: Guest install using libvirt
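As a generic illustration (a sketch, not taken from the guide linked above; the guest name, disk path, size and install URL are placeholders), connecting to the Xen driver and installing a PV guest with the usual libvirt tools might look like:

# Check that libvirt can talk to Xen
virsh -c xen:/// version
virsh -c xen:/// list --all

# Install a PV guest with virt-install (placeholder values)
virt-install --connect xen:/// --paravirt \
    --name f21-pv --ram 1024 --vcpus 1 \
    --disk path=/var/lib/libvirt/images/f21-pv.img,size=8 \
    --location http://dl.fedoraproject.org/pub/fedora/linux/releases/21/Server/x86_64/os/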
Reporting Bugs (& Issues)
- Use Freenode IRC channel #xentest to discuss questions interactively
- Report any bugs / missing functionality / unexpected results.
- Please put [TestDay] into the subject line
- Also make sure you specify the RC number you are using
- Make sure to follow the guidelines on Reporting Bugs against Xen.
Reporting success
We would love it if you could report successes by e-mailing xen-devel@lists.xen.org, preferably including:
- Hardware: Please at least include the processor manufacturer (Intel/AMD). Other helpful information might include specific processor models, amount of memory, number of cores, and so on
- Software: If you're using a distro, the distro name and version would be the most helpful. Other helpful information might include the kernel that you're running, or other virtualization-related software you're using (e.g., libvirt, xen-tools, drbd, &c).
- Guest operating systems: If running a Linux version, please specify whether you ran it in PV or HVM mode.
- Functionality tested: high-level items would include toolstacks and major functionality (e.g., suspend/resume, migration, pass-through, stubdomains, &c)
The following template might be helpful. Should you use Xen 4.5.0-RC1 for testing, please make sure you state that information!
Subject: [TESTDAY] Test report

* Hardware:
* Software:
* Guest operating systems:
* Functionality tested:
* Comments:
For example:
Subject: [TESTDAY] Test report

* Hardware: Dell 390's (Intel, dual-core) x15
            HP (AMD, quad-core) x5
* Software: Ubuntu 10.10, 11.10
            Fedora 17
* Guest operating systems:
            Windows 8
            Ubuntu 12.10, 11.10 (HVM)
            Fedora 17 (PV)
* Functionality tested:
            xl suspend/resume
            pygrub
* Comments: Windows 8 booting seemed a little slower than normal. Other than that, great work!