Xen 4.7 RC test instructions


If you come to this page before or after the Test Day is completed, your testing is still valuable; you can use the information on this page to test, and post any bugs and test reports to xen-devel@. If this page is more than two weeks old when you arrive here, please check the current schedule and see whether a similar but more recent Test Day is planned or has already happened.


What needs to be tested

General things:

  • Making sure that Xen 4.7 compiles and installs properly on different software configurations; particularly on distros
  • Making sure that Xen 4.7, along with appropriately up-to-date kernels, works on different hardware.

For more ideas about what to test, please see Testing Xen.

ARM Smoke Testing

If you use ARM hardware that is not widely available or not rackable (and thus not part of our automated test suite), please check out Xen ARM Manual Smoke Test. Helping to manually test ARM boards (which only takes a few minutes) will help ensure that Xen 4.7 works on the board you use. If you want to see which boards need testing, check Xen ARM Manual Smoke Test/Results.

Installing

Getting a RC

For the examples below, set the following shell (bash/sh/...) variable to the release candidate number (e.g. one of rc1, rc2, ...):

RC="<release candidate number>"

From xen.git

With a recent enough git (>= 1.7.8.2), just clone the proper tag (4.7.0-$RC) directly from the main repo:

git clone -b 4.7.0-$RC git://xenbits.xen.org/xen.git

With an older git version (or if that does not work, e.g. git complains with a message like "Remote branch 4.7.0-$RC not found in upstream origin, using HEAD instead"), do the following:

git clone git://xenbits.xen.org/xen.git ; cd xen ; git checkout 4.7.0-$RC

From tarball

Download:

 http://bits.xensource.com/oss-xen/release/4.7.0-$RC/xen-4.7.0-$RC.tar.gz 
 http://bits.xensource.com/oss-xen/release/4.7.0-$RC/xen-4.7.0-$RC.tar.gz.sig
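As a minimal sketch of fetching and checking the tarball (assuming wget and gpg are installed and that you have imported the key the release was signed with; adjust to your preferred tools):

 wget http://bits.xensource.com/oss-xen/release/4.7.0-$RC/xen-4.7.0-$RC.tar.gz
 wget http://bits.xensource.com/oss-xen/release/4.7.0-$RC/xen-4.7.0-$RC.tar.gz.sig
 # verify the detached signature against the tarball
 gpg --verify xen-4.7.0-$RC.tar.gz.sig xen-4.7.0-$RC.tar.gz
 # unpack
 tar xzf xen-4.7.0-$RC.tar.gz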

Building

Instructions are available for building Xen on Linux, NetBSD, and FreeBSD
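As a rough sketch of a build from a source tree (build dependencies and configure options vary by distro, so treat this as an illustration rather than the canonical procedure; see the per-OS build pages above):

 cd xen
 ./configure            # add --prefix and other options as needed
 make dist -j$(nproc)   # builds the hypervisor, tools and docs
 sudo make install      # installs under the configured prefix
 sudo ldconfig          # refresh the shared library cache on Linux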

Known issues

RC1

Test instructions

General

  • Remove any old versions of Xen toolstack and userspace binaries (including qemu).
  • Remove any udev files under /etc because Xen 4.7 doesn't use those anymore.
  • Download and install the most recent Xen 4.7 RC, as described above. Make sure to check the README and INSTALL for changes in required development libraries and procedures. Some particular things to note:

Once you have the Xen 4.7 RC installed, check that you can install a guest and use it in the ways you normally would, i.e. that your existing guest configurations, scripts, etc. still work.
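If you do not have an existing configuration to hand, a minimal smoke test might look like the sketch below (the guest name, kernel/ramdisk/disk paths and disk target are hypothetical placeholders, not values from this page):

 # /etc/xen/testguest.cfg -- minimal PV guest configuration
 name    = "testguest"
 memory  = 1024
 vcpus   = 2
 kernel  = "/path/to/guest/vmlinuz"
 ramdisk = "/path/to/guest/initrd"
 disk    = [ 'format=raw, vdev=xvda, access=rw, target=/path/to/guest/disk.img' ]

 # start it with a console attached, then check it shows up and shuts down cleanly
 xl create -c /etc/xen/testguest.cfg
 xl list
 xl shutdown testguest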

USB Support for xl

PVUSB support has been added to xl, along with the following new commands:

 usbctrl-attach
 usbctrl-detach 
 usbdev-attach
 usbdev-detach 
 usb-list

which correspond to

 usbctrl=[ "USBCTRL_SPEC_STRING", "USBCTRL_SPEC_STRING", ... ]
 usbdev=[ "USB_SPEC_STRING", "USB_SPEC_STRING", ... ]

as outlined in the xl.cfg(5) man page.
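As a hedged example (the guest name is a placeholder, and the parameter names used here — version, ports, hostbus, hostaddr — should be double-checked against the xl.cfg(5) and xl(1) pages shipped with the RC you are testing):

 # add a PVUSB controller to a running guest
 xl usbctrl-attach testguest version=2 ports=4
 # pass through the host USB device at bus 1, device address 3 (see lsusb)
 xl usbdev-attach testguest hostbus=1 hostaddr=3
 # list the controllers and devices assigned to the guest
 xl usb-list testguest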

RTDS scheduler improvements

The RTDS scheduler was improved in the following ways:

  • The RTDS scheduler has been changed from a quantum-driven model to an event-driven model, which avoids invoking the scheduler unnecessarily: if you use this scheduler, you may want to run your workload on the RC and check whether there are any unexpected side effects
  • Support for getting and setting per-VCPU parameters has been added to the toolstack: see the xl sched-rtds documentation
    • The -v VCPUID/all, --vcpuid=VCPUID/all options have been added (a usage sketch follows this list)
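A usage sketch, assuming a guest already running under the RTDS scheduler (the guest name and the period/budget values are placeholders; to my understanding periods and budgets are given in microseconds — check the xl man page on your RC):

 # show the period/budget of every VCPU of the guest
 xl sched-rtds -d testguest -v all
 # set VCPU 1 to a 10000us period with a 4000us budget
 xl sched-rtds -d testguest -v 1 -p 10000 -b 4000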

Hotplug disk backends (drbd, iscsi, etc.) for HVM guests

If you use drbd, iscsi or other non-default disk backends, try them with HVM guests.
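A hedged sketch of what such a configuration might look like, using the script= disk parameter to select a hotplug script (the script name and target string below are illustrative only; the exact target syntax depends on the hotplug script you use, so check its documentation):

 # in the HVM guest's config file: a disk handled by a custom hotplug script
 disk = [ 'vdev=xvda, script=block-iscsi, target=my-iscsi-target-spec' ]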

Also see

Specific ARM Test Instructions

Follow Xen_ARM_with_Virtualization_Extensions and Xen_ARM_with_Virtualization_Extensions#Testing

Boards and hardware we do not test in our CI Loop

Although we do have automated Test Infrastructure for the project, we only include rackable hardware in our CI Loop. We do have a mixture of Allwinner and Exynos processors in a custom chassis. If you have one of the following boards and want to ensure that Xen 4.7 runs on it, please make sure you run the Xen ARM Manual Smoke Test on an RC.

Boards not tested by our CI Loop: Allwinner sun6i/A31, DRA7[J6] EVM, Exynos5410, HiKey board from 96boards.org, Mustang (XC-1), OMAP5432, Renesas R-Car H2, Versatile Express and Xilinx Zynq Ultrascale MPSoC

We are also not able to include non-production servers that require a legal agreement such as an NDA into our Test Infrastructure.


ACPI support on ARM

ACPI support requires a platform with support for ACPI 6.0 (or later). Currently there is no publicly available hardware where this can be tested, with the exception of the AEMv8A Foundation Model. For more information, see

  • Fast Models (Xen_ARM_with_Virtualization_Extensions/FastModels)
  • Xen boot on FVP ACPI UEFI (https://wiki.linaro.org/LEG/Engineering/Xen_boot_on_FVP_ACPI_UEFI)

Wallclock support

Support of PSCI 1.0 for the host

Specific x86 Test Instructions

Intel Code and Data Prioritization (CDP)

To Do:
  • This section was not written by the author of the relevant features, so there may be inaccuracies.
  • Maybe add some instructions on what to test and what one expects to see.


Code and Data Prioritization (CDP) Technology is an extension of CAT, which is available on Intel Broadwell and later server platforms. CDP enables isolation and separate prioritization of code and data fetches to the L3 cache in a software configurable manner, which can enable workload prioritization and tuning of cache capacity to the characteristics of the workload. CDP extends Cache Allocation Technology (CAT) by providing separate code and data masks per Class of Service (COS).
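A hedged sketch of poking at this via xl (the command and flag names below — psr-hwinfo, psr-cat-cbm-set with --code/--data, psr-cat-show — and the psr=cdp Xen boot option reflect my understanding of the 4.7 toolstack and are assumptions to verify against the man pages of your RC; the domain name and masks are placeholders):

 # with Xen booted with "psr=cdp" on its command line, on CDP-capable hardware:
 xl psr-hwinfo                              # check what cache allocation features are reported
 xl psr-cat-cbm-set --code testguest 0xf0   # set the code capacity bitmask for the domain
 xl psr-cat-cbm-set --data testguest 0x0f   # set the data capacity bitmask for the domain
 xl psr-cat-show                            # show the resulting per-domain masks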

For more information see:

COLO - Coarse Grain Lock Stepping

COLO, or Coarse Grain Lock Stepping, is a High Availability solution that builds on top of Remus.

COLO is different from traditional High Availability solutions, which are based either on instruction-level lock stepping (excessive overheads) or on periodic checkpointing such as Remus (high network latency, large VM checkpointing overhead). On Xen, COLO builds on top of Remus and uses a "relaxed" approach to checkpointing: in other words, COLO only checkpoints if absolutely necessary, which for many use-cases provides near-native performance.

The COLO Manager component is now part of Xen 4.7, while other components will eventually be part of QEMU (they can be downloaded from a specific git repository).

Test Instructions: See COLO Docs and Test Environment.

xSplice - binary patching of the hypervisor
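A very rough, hedged sketch of exercising this feature (the xen-xsplice tool name and its sub-commands below reflect my understanding of the 4.7 tech preview and should be verified against the tree you built; the patch file name is a placeholder):

 # list any patches currently known to the hypervisor
 xen-xsplice list
 # upload and apply a test patch built against your hypervisor
 xen-xsplice load my-test-patch.xsplice
 # later, revert and remove it again (the patch name may differ from the file name)
 xen-xsplice revert my-test-patch
 xen-xsplice unload my-test-patch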


RC specific things to test

RC2

  • XSM and driver domain: start xl devd in the driver domain, and check whether any XSM denial messages show up in xl dmesg (see the sketch below).
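A hedged sketch of what that check might look like (Flask/XSM denials are, as far as I know, logged as "avc: denied" lines; adjust the grep pattern if your denials look different):

 # inside the driver domain: start the backend hotplug daemon
 xl devd
 # on the host where you read the hypervisor console log:
 xl dmesg | grep -i "avc.*denied"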

Reporting Bugs (& Issues)

  • Use Freenode IRC channel #xentest to discuss questions interactively
  • Report any bugs / missing functionality / unexpected results.
  • Please put [TestDay] into the subject line
  • Also make sure you specify the RC number you are using
  • Make sure to follow the guidelines on Reporting Bugs against Xen (please CC the relevant maintainers and the Release Manager - wei dot liu2 at citrix dot com).

Reporting success

We would love it if you could report successes by e-mailing xen-devel@lists.xen.org, preferably including:

  • Hardware: Please at least include the processor manufacturer (Intel/AMD). Other helpful information might include specific processor models, amount of memory, number of cores, and so on
  • Software: If you're using a distro, the distro name and version would be the most helpful. Other helpful information might include the kernel that you're running, or other virtualization-related software you're using (e.g., libvirt, xen-tools, drbd, &c).
  • Guest operating systems: If running a Linux version, please specify whether you ran it in PV or HVM mode.
  • Functionality tested: High-level would include toolstacks, and major functionality (e.g., suspend/resume, migration, pass-through, stubdomains, &c)

The following template might be helpful. If you use Xen 4.7.0-<Some RC> for testing, please make sure you state which RC you used!

Subject: [TESTDAY] Test report
 
* Hardware:
 
* Software:

* Guest operating systems:

* Functionality tested:

* Comments:

For example:

Subject: [TESTDAY] Test report
 
* Hardware: 
Dell 390's (Intel, dual-core) x15
HP (AMD, quad-core) x5
 
* Software: 
Ubuntu 10.10,11.10
Fedora 17

* Guest operating systems:
Windows 8
Ubuntu 12.10,11.10 (HVM)
Fedora 17 (PV)

* Functionality tested:
xl
suspend/resume
pygrub

* Comments:
Windows 8 booting seemed a little slower than normal.

Other than that, great work!