Xen 4.7 RC test instructions
If you come to this page before or after the Test Day is completed, your testing is still valuable, and you can use the information on this page to test; please post any bugs and test reports to xen-devel@. If this page is more than two weeks old when you arrive here, please check the current schedule and see if a similar but more recent Test Day is planned or has already happened.
What needs to be tested
General things:
- Making sure that Xen 4.7 compiles and installs properly on different software configurations, particularly on distros
- Making sure that Xen 4.7, along with appropriately up-to-date kernels, works on different hardware.
For more ideas about what to test, please see Testing Xen.
ARM Smoke Testing
If you use ARM hardware which is not widely available or not rackable (and thus not part of our automated test suite), please check out Xen ARM Manual Smoke Test. Helping to manually test ARM boards (which only takes a few minutes) will ensure that Xen 4.7 works on the board that you use. If you want to see which boards need testing, check Xen ARM Manual Smoke Test/Results.
Installing
Getting an RC
For the expressions/examples below, set the following bash/sh/... variable to the release candidate number (e.g. one of rc1, rc2, ...):
RC="<release candidate number>" # rc1, rc2 ...
From xen.git
With a recent enough git (>= 1.7.8.2), just pull the proper tag (4.7.0-$RC) from the main repo directly:
git clone -b 4.7.0-$RC git://xenbits.xen.org/xen.git
With an older git version (and/or if that does not work, e.g., complaining with a message like: Remote branch 4.7.0-$RC not found in upstream origin, using HEAD instead), do the following:
git clone git://xenbits.xen.org/xen.git ; cd xen ; git checkout 4.7.0-$RC
From tarball
Download:
http://bits.xensource.com/oss-xen/release/4.7.0-$RC/xen-4.7.0-$RC.tar.gz
http://bits.xensource.com/oss-xen/release/4.7.0-$RC/xen-4.7.0-$RC.tar.gz.sig
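For example, a minimal download-and-verify sketch (assuming wget and gpg are available, and that the Xen release signing key has already been imported into your keyring; the tarball should unpack into a directory named after the release):

wget http://bits.xensource.com/oss-xen/release/4.7.0-$RC/xen-4.7.0-$RC.tar.gz
wget http://bits.xensource.com/oss-xen/release/4.7.0-$RC/xen-4.7.0-$RC.tar.gz.sig
gpg --verify xen-4.7.0-$RC.tar.gz.sig xen-4.7.0-$RC.tar.gz   # check the signature
tar xzf xen-4.7.0-$RC.tar.gz
cd xen-4.7.0-$RC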
Building
Instructions are available for building Xen on Linux, NetBSD, and FreeBSD.
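As a rough sketch (not a substitute for the per-OS instructions above), a from-source build on a Linux dom0 usually looks like this, assuming the build dependencies listed in the README are already installed:

cd xen                    # git checkout; use the unpacked xen-4.7.0-$RC directory for the tarball
./configure
make dist -j4             # builds the hypervisor and tools into dist/
sudo make install
sudo ldconfig             # refresh the shared library cache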
Known issues
RC1
- XSM denials with 4.7.0 RC1 - fixed in RC2
- Regression in Xen 4.7-rc1 - can't boot HVM guests with more than 64 vCPUS (this is caused by a bug in the Linux kernel, not a bug in Xen)
Test instructions
General
- Remove any old versions of Xen toolstack and userspace binaries (including qemu).
- Remove any Xen-related udev files under /etc because Xen 4.7 doesn't use those anymore.
- Download and install the most recent Xen 4.7 RC, as described above. Make sure to check the README and INSTALL for changes in required development libraries and procedures.
Once you have the Xen 4.7 RC installed, check that you can install a guest and use it in the ways you normally would, i.e. that your existing guest configurations, scripts, etc. still work.
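As a minimal sketch of a first sanity check after rebooting into the new hypervisor (the guest config path and guest name below are hypothetical placeholders for one of your existing guests):

xl info | grep xen_version        # should report a 4.7 release candidate
xl list                           # Domain-0 should be listed
xl create /etc/xen/myguest.cfg    # hypothetical: one of your existing guest configs
xl console myguest                # hypothetical guest name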
USB Support for xl
In Xen 4.7, xl gains PVUSB support, as well as the following new commands:
usbctrl-attach
usbctrl-detach
usbdev-attach
usbdev-detach
usb-list
which correspond to the configuration options
usbctrl=[ "USBCTRL_SPEC_STRING", "USBCTRL_SPEC_STRING", ... ]
usbdev=[ "USB_SPEC_STRING", "USB_SPEC_STRING", ... ]
as outlined in the xl.cfg(5) man page. For an overview, see Xen USB Passthrough.
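As a hedged usage sketch (the guest name is a placeholder and the controller/device values are made-up examples; the argument names follow the xl(1)/xl.cfg(5) documentation for 4.7 as best as recalled, so double-check them with xl help before relying on them):

xl usbctrl-attach myguest version=2 ports=4     # create a PVUSB controller in the guest
xl usbdev-attach myguest hostbus=1 hostaddr=3   # pass through host USB device 1:3
xl usb-list myguest                             # verify the controller and device show up
xl usbdev-detach myguest 0 1                    # detach the device (controller 0, port 1)
xl usbctrl-detach myguest 0                     # remove the controller again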
RTDS scheduler improvements
The RTDS scheduler was improved in the following ways:
- The RTDS scheduler has been changed from a quantum-driven model to an event-driven model, which avoids invoking the scheduler unnecessarily. If you use this scheduler, you may want to test your workload with the RC and check whether there are any unexpected side effects.
- Support to get/set RTDS scheduling parameters on a per-VCPU basis has been added to libxl and xl
So, for instance:
- to see the scheduling parameters of all VCPUs of a VM, use -v all:

xl sched-rtds -d vm1 -v all
Name                ID VCPU    Period    Budget
vm1                  1    0       300       150
vm1                  1    1       400       200
vm1                  1    2     10000      4000
vm1                  1    3      1000       500
- to change (or check) the scheduling parameters of VCPUs 0 and 3 only, use -v:

# xl sched-rtds -d vm1 -v 0 -p 100 -b 50 -v 3 -p 300 -b 150
# xl sched-rtds -d vm1 -v 0 -v 3
Name                ID VCPU    Period    Budget
vm1                  1    0       300       150
vm1                  1    3      1000       500
For more information and examples, see the xl manual page (search for sched-rtds).
Credit2 runqueue arrangement and hard-affinity support
Runqueue arrangement
Xen 4.7 allows one to specify how host CPUs are arranged in runqueues within the Credit2 scheduler. Valid alternatives are core, socket, node and all.
More fine-grained runqueue arrangement (as with core) means more accurate load balancing (e.g., it will deal better with hyperthreading), but also more overhead.
To make this effective (e.g., to use Credit2 per-socket runqueues) add the following to the hypervisor boot command line:
sched=credit2 credit2_runqueue=socket
More information is available in the xen-command-line documentation (search for credit2_runqueue).
Hard-affinity support
In Xen 4.7, Credit2 supports hard-affinity. It can be set by means of the xl vcpu-pin subcommand. If set for a VCPU, hard-affinity restricts the set of PCPUs where that VCPU can run.
To check that it works, give the VCPUs of a VM a hard-affinity, as follows:
# xl vcpu-pin 1 all 16-18
Then check where they actually execute by looking at:

# xl vcpu-list 1
Name                   ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
debian.guest.osstest    1     0    17   r--       5.3 16-18 / all
debian.guest.osstest    1     1    18   r--       3.3 16-18 / all
What we want is for the values in the CPU column (for VCPUs that are running) to always be within the set of PCPUs we specified.
Hotplug disk backends (drbd, iscsi, etc.) for HVM guests
If you use drbd, iscsi, nbd, or other hotplug-script-based disk backends, try them with HVM guests.
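As an illustration only (the script name, target value and device names below are assumptions; consult the XL DISK CONFIGURATION document referenced below for the exact keys supported by your hotplug script), an HVM guest disk entry using a hotplug script might look like:

disk = [ 'vdev=xvda,format=raw,script=block-drbd,target=myresource' ]   # hypothetical drbd resource

After starting the guest, check with xl block-list that the disk was attached and that the hotplug script ran without errors.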
Also see:
- Storage options
- XL DISK CONFIGURATION (xl-disk-configuration.txt in the Xen docs)
KCONFIG support
Xen 4.7 introduces the ability to remove core Xen Hypervisor features at compile time via KCONFIG. We expect that this functionality is initially only going to be used for security and embedded applications, primarily targeting integration via the Yocto project. Yocto integrates with Xen via its meta-virtualization layer and the xen-image-minimal build support. The Yocto project currently integrates with Xen 4.6.1 (Yocto krogoth release). We expect that Xen with KCONFIG will be integrated with upstream Yocto once Xen 4.7.0 has been released.
If you do want to test specific aspects of this new feature before Yocto integration has completed, please send a mail to xen-devel@ and CC the maintainer (cardoe AT cardoe DOT com) for further instructions.
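A minimal sketch of exercising KCONFIG from the hypervisor source tree (this assumes a 4.7 tree with Kconfig support; the EXPERT switch is, to the best of our knowledge, needed to make non-default options selectable, so treat it as an assumption and check the in-tree Kconfig help):

cd xen                                  # the hypervisor subdirectory of the Xen source tree
make menuconfig XEN_CONFIG_EXPERT=y     # deselect unneeded features, then save the .config
cd ..
make dist-xen                           # rebuild just the hypervisor with the new configuration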
Specific ARM Test Instructions
Follow Xen_ARM_with_Virtualization_Extensions and Xen_ARM_with_Virtualization_Extensions#Testing
Boards and hardware we do not test in our CI Loop
Although we do have automated Test Infrastructure for the project, we only include rackable hardware in our CI Loop. We do have a mixture of Allwinner and Exynos processors in a custom chassis. If you have one of the following boards and want to ensure it runs on Xen 4.7, please make sure you run the Xen ARM Manual Smoke Test on an RC.
Boards not tested by our CI Loop: Allwinner sun6i/A31, DRA7[J6] EVM, Exynos5410, HiKey board from 96boards.org, Mustang (XC-1), OMAP5432, Renesas R-Car H2, Versatile Express and Xilinx Zynq Ultrascale MPSoC
We are also not able to include non-production servers that require a legal agreement (such as an NDA) in our Test Infrastructure.
ACPI support on ARM
ACPI support requires a platform with support for ACPI 6.0 (or later). Currently there is no publicly available hardware where this can be tested, with the exception of the AEMv8A Foundation Model. For more information, see:
- Xen_ARM_with_Virtualization_Extensions/FastModels (Fast Models)
- Xen boot on FVP ACPI UEFI (on the Linaro wiki)
Wallclock support
Xen now exposes the wallclock time to guests. Checking the date and time in an ARM guest is all that is needed to verify this, as long as the guest does not run ntpdate or otherwise sync its clock over the network.
Specific x86 Test Instructions
Huge PV Domains
The Xen Project Hypervisor supports starting a Dom0 with very large memory. The 512 GB PV guest limit has been removed, allowing the creation of huge PV domains in the TB range via the xl command-line interface.
To test, create a PV domain with >512 GB of RAM.
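For illustration, a hypothetical guest configuration for such a test might look like the following (all names, paths and sizes are placeholders; memory is specified in MiB in xl.cfg):

name    = "hugepv"
kernel  = "/boot/vmlinuz-pv"                   # hypothetical PV-capable guest kernel
ramdisk = "/boot/initrd-pv.img"                # hypothetical matching initrd
memory  = 786432                               # 768 GiB, i.e. above the old 512 GiB limit
vcpus   = 8
disk    = [ '/dev/vg0/hugepv,raw,xvda,rw' ]    # hypothetical backing device

Create the domain with xl create and check with xl list (and from inside the guest) that the full amount of memory is available.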
Intel Code and Data Prioritization (CDP)
To Do: This section was not written by the author of this feature, so there may be inaccuracies; instructions on what exactly to test and what one expects to see would be welcome.
Code and Data Prioritization (CDP) Technology is an extension of CAT, which is available on Intel Broadwell and later server platforms. CDP enables isolation and separate prioritization of code and data fetches to the L3 cache in a software configurable manner, which can enable workload prioritization and tuning of cache capacity to the characteristics of the workload. CDP extends Cache Allocation Technology (CAT) by providing separate code and data masks per Class of Service (COS).
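A rough sketch of how one might exercise CDP on a capable host (the boot parameter and the xl psr subcommand options are given from memory and should be treated as assumptions; verify them against the xl-psr document referenced below). Boot the hypervisor with CDP enabled by adding psr=cdp to the Xen command line, then:

xl psr-cat-show                        # show current cache allocation / CDP settings
xl psr-cat-cbm-set -c <domid> 0xff     # set the code cache bit mask for a domain
xl psr-cat-cbm-set -d <domid> 0x0f     # set the data cache bit mask for the same domain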
For more information see:
- the xl man page (CACHE-ALLOCATION-TECHNOLOGY section)
- the xl-psr document in the Xen docs
- Intel Platform QoS Technologies
COLO - Coarse Grain Lock Stepping
COLO, or Coarse Grain Lock Stepping, is a High Availability solution that builds on top of Remus.
COLO is different from traditional High Availability solutions, which are based either on instruction-level lock stepping (excessive overheads) or on periodic checkpointing such as Remus (high network latency, large VM checkpointing overhead). On Xen, COLO builds on top of Remus and uses a "relaxed" approach to checkpointing: in other words, COLO only checkpoints if absolutely necessary, which for many use-cases provides near-native performance.
The COLO Manager component is now part of Xen 4.7, while other components will eventually be part of QEMU (they can be downloaded from a specific git repository).
Test Instructions: See COLO Docs and Test Environment.
xSplice - binary patching of the hypervisor
xSplice is a Xen technology that makes it possible to binary-patch the running hypervisor with a payload file, primarily intended to contain security updates (but not necessarily only those). v1 of xSplice is in technology preview mode and is compile-disabled by default. It also has some restrictions on what payloads can be encoded in the payload file, most notably the lack of support for generating payloads against .data sections and for payloads that NOP (remove) existing functions. Xen 4.7 comes with built-in Hypervisor support and the xen-xsplice upload|apply|replace|revert tool to manage payloads (the code is in tools/misc). Additional tools such as xsplice-build to create a payload are at this stage not shipped with Xen, but are available out-of-tree.
To test xSplice, check out the steps below (a command sketch follows the list):
- Build Xen with xSplice enabled, see Enabling xSplice in hypervisor
- Patch the hypervisor as it is running. There are three simple built-in examples; see How to build built-in examples on how to build, install and test them.
- Alternatively, see xsplice-build-tools on how to build, install, and test it.
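A minimal workflow sketch (the payload file name and the list subcommand are assumptions based on the built-in hello-world example; upload, apply and revert are the subcommands named above):

xen-xsplice list                                     # show payloads known to the hypervisor
xen-xsplice upload hello ./xen_hello_world.xsplice   # hypothetical built-in example payload
xen-xsplice apply hello                              # patch the running hypervisor
xl dmesg | tail                                      # the hello-world example prints to the Xen console
xen-xsplice revert hello                             # revert the patch again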
Major restrictions:
- Works for Linux and FreeBSD dom0's only
- Does not yet work on the ARM architecture
- Cannot generate payloads for patches with .data sections in the ELF file (in other words, patches that introduce global or static variables cannot be encoded)
- Cannot generate payloads that remove (NOP) functions from the hypervisor
RC specific things to test
RC2
- XSM and driver domain: start xl devd in the driver domain, and see whether any XSM denial messages are shown in xl dmesg (see the sketch below).
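A hedged sketch of that check (the grep pattern assumes FLASK/XSM denials are logged as "avc: denied" lines in the hypervisor log):

xl devd                   # run inside the driver domain
xl dmesg | grep -i avc    # run in dom0; any denial lines indicate an XSM policy problem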
Reporting Bugs (& Issues)
- Use Freenode IRC channel #xentest to discuss questions interactively
- Report any bugs / missing functionality / unexpected results.
- Please put [TestDay] into the subject line
- Also make sure you specify the RC number you are using
- Make sure to follow the guidelines on Reporting Bugs against Xen (please CC the relevant maintainers and the Release Manager - wei dot liu2 at citrix dot com).
Reporting success
We would love it if you could report successes by e-mailing xen-devel@lists.xen.org, preferably including:
- Hardware: Please at least include the processor manufacturer (Intel/AMD). Other helpful information might include specific processor models, amount of memory, number of cores, and so on
- Software: If you're using a distro, the distro name and version would be the most helpful. Other helpful information might include the kernel that you're running, or other virtualization-related software you're using (e.g., libvirt, xen-tools, drbd, &c).
- Guest operating systems: If running a Linux version, please specify whether you ran it in PV or HVM mode.
- Functionality tested: High-level would include toolstacks, and major functionality (e.g., suspend/resume, migration, pass-through, stubdomains, &c)
The following template might be helpful. Should you use Xen 4.7.0-<Some RC> for testing, please make sure you state that information!
Subject: [TESTDAY] Test report

* Hardware:
* Software:
* Guest operating systems:
* Functionality tested:
* Comments:
For example:
Subject: [TESTDAY] Test report

* Hardware:
Dell 390's (Intel, dual-core) x15
HP (AMD, quad-core) x5

* Software:
Ubuntu 10.10, 11.10
Fedora 17

* Guest operating systems:
Windows 8
Ubuntu 12.10, 11.10 (HVM)
Fedora 17 (PV)

* Functionality tested:
xl
suspend/resume
pygrub

* Comments:
Windows 8 booting seemed a little slower than normal.
Other than that, great work!