Archived/Xen 4.3 RC1 test instructions


What needs to be tested

General things:

  • Making sure that Xen 4.3 compiles and installs properly on different software configurations, particularly on different distros.
  • Making sure that Xen 4.3, along with appropriately up-to-date kernels, works on different hardware.

Specific features:

  • Automatic NUMA placement of guests.
  • Upstream Qemu for HVM domains
  • Open vSwitch integration
  • Xen on ARM
  • others?

Older Xen features for which we are not sure how much test coverage they have had (and which are thus marked experimental):

  • Credit 2 Scheduler
  • Nested Virtualization

For more ideas about what to test, please see Testing Xen.

Installing

Getting RC1

  • xen: Clone the main git repository at the 4.3.0-rc1 tag:
git clone -b 4.3.0-rc1 git://xenbits.xen.org/xen.git
  • tarball: Xen 4.3.0 RC1 tarball at http://bits.xensource.com/oss-xen/release/4.3.0-rc1/xen-4.3.0-rc1.tar.gz (signature: http://bits.xensource.com/oss-xen/release/4.3.0-rc1/xen-4.3.0-rc1.tar.gz.sig)

Building

Instructions are available for building Xen on Linux and NetBSD.
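
In outline, a from-source build on Linux typically looks something like the sketch below. This is only a rough outline; see the pages linked above for the full dependency lists and configure options for your distro.

# Rough build-and-install sketch; run it inside the xen tree obtained above.
# Paths and privileges are illustrative; adjust for your system.
./configure           # Xen 4.3 uses autoconf, so run this before make
make world            # builds the hypervisor, tools and stub domains
                      # (built files are staged under dist/install/)
sudo make install     # installs under /usr/local by default in Xen 4.3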

Test instructions

General

  • Remove any old versions of Xen toolstack and userspace binaries (including qemu).
  • Download and install the most recent Xen 4.3 RC, as described above. Make sure to check the README for changes in required development libraries and procedures. Some particular things to note:
    • In Xen 4.3 the default installation path has changed from /usr to /usr/local. Take extra care when removing any old versions to allow for this (a quick way to check for leftover binaries is sketched below this list).
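
Regarding the last point above, a quick and entirely illustrative way to spot leftover binaries from an old /usr install (the binary names are just examples):

# Check whether Xen binaries exist in more than one location; anything left
# over in /usr from an old install should be removed before testing.
which -a xl xenstored
ls -l /usr/sbin/xl /usr/local/sbin/xl 2>/dev/null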

Once you have the Xen 4.3 RC installed, check that you can install a guest and use it in the ways you normally would, i.e. that your existing guest configurations, scripts, etc. still work.

In particular, if you are still using the (deprecated) xm/XEND toolstack, please do try your normal use cases with the XL toolstack. The XL page has some information on the differences between XEND and XL, as do the instructions from the Xen 4.2 test day.
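
As a starting point, a basic smoke test with xl might look something like the following; the guest name and configuration path are only examples.

# Minimal xl smoke test; the guest name and config path are illustrative.
xl info                         # hypervisor and dom0 details
xl list                         # Domain-0 should appear in the list
xl create /etc/xen/guest.cfg    # start one of your existing guests
xl console guest                # attach to its console (Ctrl-] detaches)
xl shutdown guest               # ask the guest to shut down cleanly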

Specific Test Instructions

Automatic NUMA placement of guests

TBD

Upstream Qemu for HVM domains

In Xen 4.3 we have switched to using upstream qemu (which xl calls "qemu-xen") to provide the device model when running HVM guests, instead of the older Xen fork of qemu (which xl calls "qemu-xen-traditional"). Interesting things to test in this context:

  • Does the new device model support the guest OSes which you use? Can you install them as you would have with the old device model?
  • Do features such as migration and the VNC console behave as expected?
  • Do previously installed HVM guests, installed with qemu-xen-traditional, work when switched to qemu-xen?
    • It is expected that some guest types will not like the change in hardware which this entails. In this case, is setting device_model_version="qemu-xen-traditional" in the guest configuration sufficient to make the guest OS happy again? (A configuration sketch follows this list.)
    • If the guest doesn't seem to mind the change, that is also useful information; please report it to us.
    • Note: The new device model does not yet support stubdomains and so the default is unchanged if you request stubdomains.
  • Does the old device model still work if you set device_model_version="qemu-xen-traditional" in the guest configuration?
  • Do new features enabled by the new device model, such as SPICE graphics, work?
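
For reference, the following is a minimal, illustrative HVM guest configuration fragment showing where device_model_version goes; everything else in it (name, memory, disk, vif) is a made-up example.

# Illustrative HVM guest configuration; name, memory, disk and vif values
# are examples only.
builder = "hvm"
name    = "hvm-test"
memory  = 1024
vcpus   = 2
disk    = [ "file:/var/lib/xen/images/hvm-test.img,hda,w" ]
vif     = [ "bridge=xenbr0" ]
vnc     = 1
# Xen 4.3 defaults to the upstream device model ("qemu-xen"); uncomment the
# next line to fall back to the traditional one:
# device_model_version = "qemu-xen-traditional"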

Open vSwitch integration

Xen 4.3 adds support for Open vSwitch based networking in addition to the existing bridge and routed networking schemes.

In order to test this, you will need to set up a host with Open vSwitch support. Information on this is available at http://openvswitch.org/support/. In summary, you need to:

  • Install a domain 0 kernel with CONFIG_OPENVSWITCH enabled (any recent PVOPS kernel should have this option).
  • Install the Open vSwitch userspace, see http://openvswitch.org/download/.
  • Configure the host networking to use Open vSwitch instead of the Linux bridge.

For example, to create a switch (which we will call xenbr0 to simplify the transition) and add eth0 as a physical port:

# ovs-vsctl add-br xenbr0
# ovs-vsctl add-port xenbr0 eth0

These settings appear to persist across reboots, so they only need to be made once. Now you should arrange to add an IP address to xenbr0, e.g. under Debian create an entry in /etc/network/interfaces:

auto xenbr0
iface xenbr0 inet dhcp

(remember to remove or comment out any bridge-related entries, such as bridge_ports eth0)

Once the host is configured, you need to configure the toolstack to use the Open vSwitch script for guests. You can do this by editing /etc/xen/xl.conf and setting:

vifscript=vif-openvswitch
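
If you would rather not change the host-wide default, the hotplug script can also be selected per guest in its vif specification. A sketch, with an example bridge name:

# Per-guest alternative to the xl.conf default: select the Open vSwitch
# hotplug script in the guest's vif specification.
vif = [ "bridge=xenbr0,script=vif-openvswitch" ]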

Now you can try starting your guests, performing the usual operations on them (e.g. reboot, migrate, etc.), and verifying that the network is accessible to the guest.
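
One illustrative way to confirm from dom0 that a guest's interface really was attached to the switch:

# Once a guest is running, its vif should show up as a port on the switch.
ovs-vsctl list-ports xenbr0    # expect eth0 plus one vifN.M entry per guest
ovs-vsctl show                 # overall view of switches and ports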

Xen on ARM

TBD

Reporting Bugs (& Issues)

  • Report any bugs / missing functionality / unexpected results.
  • Please put [TESTDAY] into the subject line
  • Also make sure you specify the RC number you are using
  • Make sure to follow the guidelines on Reporting Bugs against Xen.

Reporting success

We would love it if you could report successes by e-mailing xen-devel@lists.xen.org, preferably including:

  • Hardware: Please at least include the processor manufacturer (Intel/AMD). Other helpful information might include specific processor models, amount of memory, number of cores, and so on
  • Software: If you're using a distro, the distro name and version would be the most helpful. Other helpful information might include the kernel that you're running, or other virtualization-related software you're using (e.g., libvirt, xen-tools, drbd, &c).
  • Guest operating systems: If running a Linux version, please specify whether you ran it in PV or HVM mode.
  • Functionality tested: High-level would include toolstacks, and major functionality (e.g., suspend/resume, migration, pass-through, stubdomains, &c)

The following template might be helpful: should you use Xen 4.3.0-RC1 for testing, please make sure you state that information!

Subject: [TESTDAY] Test report
 
* Hardware:
 
* Software:

* Guest operating systems:

* Functionality tested:

* Comments:

For example:

Subject: [TESTDAY] Test report
 
* Hardware: 
Dell 390's (Intel, dual-core) x15
HP (AMD, quad-core) x5
 
* Software: 
Ubuntu 10.10,11.10
Fedora 17

* Guest operating systems:
Windows 8
Ubuntu 12.10,11.10 (HVM)
Fedora 17 (PV)

* Functionality tested:
xl
suspend/resume
pygrub

* Comments:
Windows 8 booting seemed a little slower than normal.

Other than that, great work!