Archived/Xen Development Projects

Revision as of 19:25, 22 January 2013

This page lists various Xen-related development projects that can be picked up by anyone! If you're interested in hacking on Xen, this is the place to start! Ready for the challenge?

To work on a project:

  • Find a project that looks interesting (or a bug if you want to start with something simple)
  • Send an email to the xen-devel mailing list and let us know you have started working on a specific project.
  • Post your ideas, questions and RFCs to xen-devel sooner rather than later so you can get comments and feedback.
  • Send patches to xen-devel early for review so you can get feedback and be sure you're heading in the right direction.
  • If your work is related to Xen and/or the tools, base it on the xen-unstable development tree. After your patch has been merged to xen-unstable it can be backported to the stable branches (Xen 4.2, Xen 4.1, etc).
  • Kernel-related patches should be based on the upstream kernel.org Linux git tree (latest version).

xen-devel mailing list subscription and archives: http://lists.xensource.com/mailman/listinfo/xen-devel

Before submitting patches, please look at the Submitting Xen Patches wiki page.

If you have new ideas, suggestions or development plans let us know and we'll update this list!

List of projects

Domain support

Upstreaming Xen PVSCSI drivers to mainline Linux kernel

Date of insert: 01/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: The PVSCSI drivers have not been upstreamed yet. Necessary operations may include:
Outcomes: Not specified, project outcomes


Upstreaming Xen PVUSB drivers to mainline Linux kernel

Date of insert: 01/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: The PVUSB drivers have not been upstreamed yet. Necessary operations may include:
Outcomes: Not specified, project outcomes


Implement Xen PVSCSI support in xl/libxl toolstack

Date of insert: 01/12/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Pasi Karkkainen <pasik@iki.fi>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: xl/libxl does not currently support Xen PVSCSI functionality. Port the feature from xm/xend to xl/libxl. Necessary operations include:
  • Task 1: Implement PVSCSI in xl/libxl, make it functionally equivalent to xm/xend.
  • Send the patches to the xen-devel mailing list for review and comments.
  • Fix any issues that come up.
  • Repeat until merged to xen-unstable.
  • See above for PVSCSI drivers for dom0/domU.
  • Xen PVSCSI supports both PV domUs and HVM guests with PV drivers.
  • More info: http://wiki.xen.org/xenwiki/XenPVSCSI
Outcomes: Not specified, project outcomes


Implement Xen PVUSB support in xl/libxl toolstack

Date of insert: 01/12/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Pasi Karkkainen <pasik@iki.fi>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: xl/libxl does not currently support Xen PVUSB functionality. Port the feature from xm/xend to xl/libxl. Necessary operations include:
  • Task 1: Implement PVUSB in xl/libxl, make it functionally equivalent to xm/xend.
  • Send the patches to the xen-devel mailing list for review and comments.
  • Fix any issues that come up.
  • Repeat until merged to xen-unstable.
  • See above for PVUSB drivers for dom0/domU.
  • Xen PVUSB supports both PV domUs and HVM guests with PV drivers.
  • More info: http://wiki.xen.org/xenwiki/XenUSBPassthrough
Outcomes: Not specified, project outcomes


Blkback improvements

Date of insert: 02/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Blkback requires a number of improvements, some of them being:
  • Multiple disks in a guest cause contention in the global pool of pages.
  • There is only one ring page; with today's SSDs this should be made larger by implementing multi-page ring support.
  • With multi-page rings it becomes apparent that the segment size ends up wasting a bit of space on the ring. The BSD folks fixed that by negotiating a new parameter to utilize the full size of the ring.
  • Add DIF/DIX support [1] for T10 PI (Protection Information), i.e. data integrity fields and checksums.
  • Further performance evaluation needs to be done to see how blkback behaves under high load.
Outcomes: Not specified, project outcomes
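
To get a feel for why a single ring page limits throughput, here is a rough back-of-the-envelope calculation. The figures used (32 ring slots and 11 segments of 4 KiB per request for a single-page blkif ring) are the commonly quoted ones and should be treated as assumptions for this sketch:

    # Rough capacity of the classic single-page blkif ring.  The figures
    # below (32 slots, BLKIF_MAX_SEGMENTS_PER_REQUEST = 11, 4 KiB segments)
    # are assumptions used for illustration only.
    ring_slots = 32
    segments_per_request = 11
    segment_kib = 4

    outstanding_kib = ring_slots * segments_per_request * segment_kib
    print("max outstanding I/O per ring: %d KiB (~%.1f MiB)"
          % (outstanding_kib, outstanding_kib / 1024.0))
    # Roughly 1.4 MiB in flight is easy for a modern SSD to drain, which is
    # the motivation for multi-page ring support.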


Netback overhaul

Date of insert: 02/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Wei Liu posted RFC patches that make the driver multi-page and multi-event-channel, with a page pool. However, not all issues have been addressed yet, so the patches still need to be finished and cleaned up. Additionally, a zero-copy implementation could be considered. Patch series and discussions:
Outcomes: Not specified, project outcomes


Multiqueue support for Xen netback/netfront in Linux kernel

Date of insert: 22/01/2013; Verified: Not updated in 2020; GSoC: Unknown
Technical contact:
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Multiqueue support allows a single virtual network interface (vif) to scale to multiple vcpus. Each queue has its own interrupt, and thus can be bound to a different vcpu. KVM VirtIO, VMware VMXNET3, tun/tap and various other drivers already support multiqueue in upstream Linux.
Outcomes: Not specified, project outcomes
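
To make the "one interrupt per queue, one queue per vcpu" idea concrete, here is a userspace sketch of spreading per-queue vif interrupts over vCPUs. The per-queue interrupt naming assumed here (vifX.Y-qN) is exactly the kind of detail this project would define, so treat it as a placeholder:

    import re

    # Sketch: round-robin per-queue vif interrupts over vCPUs by writing to
    # /proc/irq/<n>/smp_affinity_list.  The "vifX.Y-qN" name format is an
    # assumption; the real multiqueue naming scheme is part of this project.
    VIF_QUEUE_IRQ = re.compile(r"vif\d+\.\d+-q(\d+)")

    def spread_vif_queue_irqs(num_vcpus):
        with open("/proc/interrupts") as interrupts:
            for line in interrupts:
                match = VIF_QUEUE_IRQ.search(line)
                if not match:
                    continue
                irq = line.split(":", 1)[0].strip()
                cpu = int(match.group(1)) % num_vcpus
                with open("/proc/irq/%s/smp_affinity_list" % irq, "w") as aff:
                    aff.write(str(cpu))

    if __name__ == "__main__":
        spread_vif_queue_irqs(num_vcpus=4)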


PAT writecombine fixup

Date of insert: 02/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: The writecombine feature (especially relevant for graphics adapters) is turned off for stability reasons. More specifically, the code involved in transitioning a page from WC to WB gets confused about the state of the PSE bit in the page, resulting in a set of repeated warnings.

For more information please check:

Outcomes: Not specified, project outcomes


ACPI S3-state investigation and fixup

Date of insert: 02/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: As of the Linux 3.3 release, the S3 state was supposed to work, including these patches: But now it is not working anymore. The scope of the project is to understand the reasons for the issues and fix them.
Outcomes: Not specified, project outcomes



Parallel xenwatch

Date of insert: 01/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Xenwatch is protected by a single coarse-grained lock. For a huge number of guests this represents a scalability issue. The xenwatch locking needs to be rewritten so that it scales fully.
Outcomes: Not specified, project outcomes

Hypervisor

Microcode uploader implementation

Date of insert: 02/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Intel is working on an early-loading implementation where the microcode blob is appended to the initrd image. The kernel scans for the appropriate magic constant and loads the microcode very early. The Xen hypervisor could do this similarly.
Outcomes: Not specified, project outcomes
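
For reference, the Linux early-microcode convention is to prepend an uncompressed "newc" cpio archive containing kernel/x86/microcode/GenuineIntel.bin to the initrd. The sketch below shows the kind of scan involved; treat the layout details as assumptions, and note that the Xen-side work is doing an equivalent scan very early in the hypervisor or dom0 kernel, not in Python:

    import sys

    CPIO_MAGIC = b"070701"    # "newc" cpio header magic (assumed format)
    UCODE_NAME = b"kernel/x86/microcode/GenuineIntel.bin"

    def find_early_microcode(initrd_path):
        """Return (offset, size) of the early-microcode cpio member, if any."""
        data = open(initrd_path, "rb").read()
        pos = 0
        while True:
            pos = data.find(CPIO_MAGIC, pos)
            if pos < 0:
                return None
            header = data[pos:pos + 110]          # 6-byte magic + 13 hex fields
            if len(header) < 110:
                return None
            filesize = int(header[6 + 6 * 8:6 + 7 * 8], 16)
            namesize = int(header[6 + 11 * 8:6 + 12 * 8], 16)
            name = data[pos + 110:pos + 110 + namesize - 1]
            # file data starts after the name, padded to a 4-byte boundary
            data_off = (pos + 110 + namesize + 3) & ~3
            if name == UCODE_NAME:
                return data_off, filesize
            pos = (data_off + filesize + 3) & ~3

    if __name__ == "__main__":
        found = find_early_microcode(sys.argv[1])
        if found:
            print("early microcode member at offset %d, %d bytes" % found)
        else:
            print("no early microcode member found")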


Introducing PowerClamp-like driver for Xen

Date of insert: 01/22/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: George Dunlap <george.dunlap@eu.citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: PowerClamp was introduced to Linux in late 2012 in order to allow users to set a system-wide maximum power usage limit. This is particularly useful for data centers, where there may be a need to reduce power consumption based on availability of electricity or cooling. A more complete writeup is available at LWN: http://lwn.net/Articles/528124/

These same arguments apply to Xen. The purpose of this project would be to implement similar functionality in Xen, and to make it interface as well as possible with the Linux PowerClamp tools, so that the same tools could be used for both.
Outcomes: Not specified, project outcomes
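
As a point of reference for the "same tools" goal: on Linux, intel_powerclamp shows up as a thermal cooling device whose cur_state is the requested idle-injection percentage. Below is a minimal sketch of driving that sysfs interface; the "intel_powerclamp" type string and paths are the Linux driver's, used here as the assumed model a Xen counterpart would ideally mimic:

    import glob

    def set_idle_injection(percent):
        """Request `percent` idle injection from a powerclamp cooling device.

        Sketch only: assumes the Linux intel_powerclamp sysfs interface,
        which a Xen PowerClamp-like driver would ideally mimic so that the
        existing userspace tools keep working unchanged.
        """
        for dev in glob.glob("/sys/class/thermal/cooling_device*"):
            with open(dev + "/type") as f:
                if f.read().strip() != "intel_powerclamp":
                    continue
            with open(dev + "/max_state") as f:
                max_state = int(f.read())
            with open(dev + "/cur_state", "w") as f:
                f.write(str(min(percent, max_state)))
            return True
        return False

    if __name__ == "__main__":
        if not set_idle_injection(25):
            print("no powerclamp cooling device found")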


Xen in the Real-Time/Embedded World: Are We Ready?

Date of insert: 08/08/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Dario Faggioli <dario.faggioli@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: No matter if it is to build a multi-personality mobile phone or to help achieve consolidation in industrial and factory automation, embedded virtualization ([1], [2], [3]) is upon us. In fact, quite a number of embedded hypervisors already exist, e.g. Wind River Hypervisor, CodeZero or PikeOS. Xen definitely is a small, fast type-1 hypervisor with support for multiple VMs [1], so it could be a good candidate embedded hypervisor.

Moreover, Xen offers an implementation of one of the most famous and efficient real-time scheduling algorithms, Earliest Deadline First (called SEDF in Xen), and real-time support is a key feature for a successful embedded hypervisor. Such an advanced scheduling policy, if implemented correctly, is a great advancement and provides much more flexibility than vCPU pinning alone (which is what most embedded hypervisors do to guarantee real-time performance and isolation).

In the context of embedded virtualization, providing the hosted VMs with low latency (when reacting to external events) and a good level of temporal isolation from each other is fundamental. This means not only that failures in any VM should not affect any other VM, but also that, independently of what a VM is doing, it should not be possible for it to cause any other VM to suffer higher (and potentially dangerous!) latency and/or to impair the ability of any other VM to respect its own real-time constraints.

Whether or not Xen is able to provide its (potential) real-time VMs with all this is something that has not been investigated thoroughly enough yet. This project therefore aims at defining, implementing and executing a proper set of tests, benchmarks and measurements to understand where Xen stands, identify the sources of inefficiency and alleviate or remove them. The very first step would be to check how much latency Xen introduces, by running some typical real-time workload within a set of VMs, under different host and guest load conditions (e.g., by using cyclictest within the VMs themselves). Results can then be compared to what is achievable with other hypervisors. After this, the code paths contributing most to latency and/or disrupting temporal isolation will need to be identified, and proper solutions to mitigate their effect envisioned and implemented.

Note for the GSoC Working Group: this can be coalesced with any other project called "Xen in the Real-Time/Embedded World: XXX" (or even with both of them).
Outcomes: Not specified, project outcomes


Xen in the Real-Time/Embedded World: Improve the Temporal Isolation among vCPUs in SEDF

Date of insert: 08/08/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Dario Faggioli <dario.faggioli@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: No matter if it is to build a multi-personality mobile phone or to help achieve consolidation in industrial and factory automation, embedded virtualization ([1], [2], [3]) is upon us. In fact, quite a number of embedded hypervisors already exist, e.g. Wind River Hypervisor, CodeZero or PikeOS. Xen definitely is a small, fast type-1 hypervisor with support for multiple VMs [1], so it could be a good candidate embedded hypervisor.

Moreover, Xen offers an implementation of one of the most famous and efficient real-time scheduling algorithms, Earliest Deadline First (called SEDF in Xen), and real-time support is a key feature for a successful embedded hypervisor. Such an advanced scheduling policy, if implemented correctly, is a great advancement and provides much more flexibility than vCPU pinning alone (which is what most embedded hypervisors do to guarantee real-time performance and isolation).

Unfortunately, SEDF, the EDF implementation in Xen, still has some rough edges that need to be properly addressed if we want it to be a valid solution for providing the temporal isolation that real-time applications require. In fact, as of now, SEDF deals with events such as a vCPU blocking (in general, stopping running) and unblocking (in general, restarting running) by trying (and failing!) to special-case all the possible situations, resulting in code that is rather complicated, ugly, inefficient and hard to maintain. Unified approaches have been proposed for handling blocking and unblocking in EDF while still guaranteeing temporal isolation among different vCPUs.

Therefore, this project aims at implementing one of these solutions in SEDF, specifically the one called the Constant Bandwidth Server (CBS, [1], [2], [3]).

Note for the GSoC Working Group: this can be coalesced with any other project called "Xen in the Real-Time/Embedded World: XXX" (or even with both of them).
Outcomes: Not specified, project outcomes
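
For readers unfamiliar with it, the whole of CBS boils down to two budget/deadline rules. The sketch below is a plain Python pseudo-model (not Xen code) of those rules, following Abeni and Buttazzo's formulation; parameter names are illustrative:

    class ConstantBandwidthServer:
        """Sketch of the two core CBS rules (illustrative, not Xen code).

        A server has a maximum budget Q and a period T (bandwidth U = Q/T).
        EDF schedules servers by their current deadline d; the CBS rules keep
        each server's demanded bandwidth at or below U, which is what gives
        temporal isolation between vCPUs.
        """

        def __init__(self, max_budget, period):
            self.Q = max_budget
            self.T = period
            self.budget = max_budget
            self.deadline = 0.0

        def on_wakeup(self, now):
            # Rule 1: on unblocking, the current (budget, deadline) pair may
            # be reused only if it does not exceed the reserved bandwidth Q/T;
            # otherwise generate a fresh deadline and refill the budget.
            if self.budget >= (self.deadline - now) * (self.Q / self.T):
                self.budget = self.Q
                self.deadline = now + self.T

        def on_budget_exhausted(self):
            # Rule 2: when the budget runs out, postpone the deadline by one
            # period and recharge the budget (the vCPU keeps competing, but
            # with a later deadline, so it cannot starve the others).
            self.deadline += self.T
            self.budget = self.Q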


Xen in the Real-Time/Embedded World: Improve Multiprocessor Support in SEDF

Date of insert: 08/08/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Dario Faggioli <dario.faggioli@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: No matter if it is to build a multi-personality mobile phone or to help achieve consolidation in industrial and factory automation, embedded virtualization ([1], [2], [3]) is upon us. In fact, quite a number of embedded hypervisors already exist, e.g. Wind River Hypervisor, CodeZero or PikeOS. Xen definitely is a small, fast type-1 hypervisor with support for multiple VMs [1], so it could be a good candidate embedded hypervisor.

Moreover, Xen offers an implementation of one of the most famous and efficient real-time scheduling algorithms, Earliest Deadline First (called SEDF in Xen), and real-time support is a key feature for a successful embedded hypervisor. Such an advanced scheduling policy, if implemented correctly, is a great advancement and provides much more flexibility than vCPU pinning alone (which is what most embedded hypervisors do to guarantee real-time performance and isolation).

Unfortunately, SEDF, the EDF implementation in Xen, does not properly handle SMP systems yet, unless specific vCPU pinning is specified by the user. That is a big limitation of the current implementation, especially since EDF can work well without imposing this constraint, providing much more flexibility and efficiency in exploiting the system's resources to the fullest.

Therefore, this project aims at extending the SEDF scheduler, enabling proper support for SMP hardware. The first step would be to use, instead of one vCPU run-queue per physical processor (pCPU), only one per "set of pCPUs" (e.g., only one run-queue for all the pCPUs that share a common L3 cache). This alone would already considerably increase the effectiveness of the scheduler on current hardware. After that, a mechanism for balancing and migrating vCPUs among different run-queues should be designed and implemented.

Note for the GSoC Working Group: this can be coalesced with any other project called "Xen in the Real-Time/Embedded World: XXX" (or even with both of them).
Outcomes: Not specified, project outcomes


Virtual NUMA topology exposure to VMs

Date of insert: 12/12/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Dario Faggioli <dario.faggioli@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: NUMA (Non-Uniform Memory Access) systems are advanced server platforms comprising multiple nodes. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are therefore not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

Ideally, each VM should have its memory allocated out of just one node and, if its vCPUs always run there, both throughput and latency are kept at the best possible level. However, there are a number of reasons why this may not be possible. In such a case, i.e., if a VM ends up having its memory on, and consistently executing on, multiple nodes, we should make sure it knows it is running on a NUMA platform -- a smaller one than the actual host, but still NUMA. This is very important for some specific workloads, for instance all the HPC ones. In fact, if the guest OS (and application) has any NUMA support, exporting a virtual topology to the guest is the only way to make that support effective, and perhaps to fill, at least to some extent, the gap introduced by the need to distribute the guest over more than one node. Just for reference, this feature, under the name of vNUMA, is one of the key and most advertised features of VMware vSphere 5 ("vNUMA: what it is and why it matters").

This project aims at introducing virtual NUMA in Xen. This has some non-trivial interactions with other aspects of Xen's NUMA support, namely automatic placement at VM creation time, dynamic memory migration among nodes, and others, meaning that some design decisions need to be made. After that, virtual topology exposure will be implemented for all the kinds of guests supported by Xen.

This project fits in the efforts the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: Xen NUMA Roadmap
Outcomes: Not specified, project outcomes


NUMA effects on inter-VM communication and on multi-VM workloads

Date of insert: 12/12/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Dario Faggioli <dario.faggioli@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: NUMA (Non-Uniform Memory Access) systems are advanced server platforms comprising multiple nodes. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are therefore not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

If a workload is made up of more than one VM running on the same NUMA host, it might be best to have two (or more) VMs share a node, or exactly the opposite, depending on the specific characteristics of the workload itself; this might be taken into account during placement, memory migration and perhaps scheduling.

The idea is that sometimes you have a bunch of VMs that would like to cooperate, and sometimes you have a bunch of VMs that would like to be kept as far apart as possible from other VMs (competitive). In the cooperative-VMs scenario, one wants to optimize for data flowing across VMs in the same host, e.g., because a lot of data copying is involved (a web-app VM and a DB VM working together). This means trying to have VMs that share data on the same node and, if possible, even in the same pCPU's caches, in order to maximize the memory throughput between the VMs. On the other hand, in the competitive-VMs scenario, one wants to optimize for data flowing between the VMs and the outside of the host (e.g., when PCI passthrough is used for NICs). In this case it would be a lot better for these VMs to use memory from different nodes and avoid wasting each other's cache lines.

This project aims at making it possible for the Xen virtualization platform (meaning hypervisor + toolstack) to take advantage of this knowledge about the characteristics of the workload and use it to maximize performance. A first step would be to enhance the automatic NUMA placement algorithm to consider the cooperativeness and/or competitiveness of a VM during placement itself, if the user provides such information. A much more ambitious development would be to have the relationship between the various running VMs guessed automatically online (e.g., by watching the memory mappings and looking for specific patterns), and to update the placement accordingly.

This project fits in the efforts the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: Xen NUMA Roadmap
Outcomes: Not specified, project outcomes
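
To make the "placement with hints" first step concrete, a toy version of the heuristic could look like the sketch below. The hint format, the node data and the scoring are all invented for illustration; the real work is wiring something like this into libxl's automatic placement:

    def place_vm(vm_mem, hint, peers, nodes, placement):
        """Toy NUMA placement honouring a cooperative/competitive hint.

        vm_mem    -- memory the new VM needs (MiB)
        hint      -- "cooperative", "competitive" or None
        peers     -- names of the VMs this hint refers to
        nodes     -- dict: node id -> free memory (MiB)
        placement -- dict: vm name -> node id (already-placed VMs)

        Entirely illustrative: real placement uses real topology and
        memory information inside libxl, not a standalone helper.
        """
        peer_nodes = {placement[p] for p in peers if p in placement}
        candidates = [n for n, free in nodes.items() if free >= vm_mem]

        if hint == "cooperative":
            # Prefer sharing a node (and its caches) with the peers.
            preferred = [n for n in candidates if n in peer_nodes]
        elif hint == "competitive":
            # Prefer staying away from the peers' nodes.
            preferred = [n for n in candidates if n not in peer_nodes]
        else:
            preferred = candidates

        pool = preferred or candidates
        if not pool:
            raise RuntimeError("no node has enough free memory")
        # Break ties by picking the node with the most free memory.
        return max(pool, key=lambda n: nodes[n])

    # Example: a web-app VM that cooperates with an already-placed DB VM.
    nodes = {0: 8192, 1: 16384}
    placement = {"db": 0}
    print(place_vm(2048, "cooperative", ["db"], nodes, placement))  # -> 0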


Integrating NUMA and Tmem

Date of insert: 08/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Dan Magenheimer <dan.magenheimer_AT_oracle_DOT_com>, Dario Faggioli <dario.faggioli@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: NUMA (Non-Uniform Memory Access) systems are advanced server platforms comprising multiple nodes. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are therefore not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

Transcendent memory (Tmem) can be seen as a mechanism for discriminating between frequently and infrequently used data, and thus for helping to allocate them properly. It would be interesting to investigate and implement the mechanisms necessary to take advantage of this and improve the performance of Tmem-enabled guests running on NUMA machines.

For instance, implementing something like alloc_page_on_any_node_but_the_current_one() (or any_node_except_this_guests_node_set() for multinode guests), and having Xen's Tmem implementation use it (especially in combination with selfballooning), could solve a significant part of the NUMA problem when running Tmem-enabled guests.
Outcomes: Not specified, project outcomes
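
Below is a trivial sketch of the decision such a helper would make. The helper name comes from the paragraph above and does not exist yet; the userspace-style Python is purely illustrative of the node-selection logic, which would really live inside the hypervisor's allocator:

    def pick_tmem_node(guest_nodes, free_mib_by_node):
        """Pick a node for a tmem page: any node except the guest's own.

        Illustrates the choice an alloc_page_on_any_node_but_the_current_one()
        style helper (named in the description above, not yet implemented)
        would have to make.
        """
        others = {node: free for node, free in free_mib_by_node.items()
                  if node not in guest_nodes}
        if not others:
            return None                 # fall back to the guest's own node(s)
        return max(others, key=others.get)

    # A guest pinned to node 0 gets its tmem pages from the roomiest other node.
    print(pick_tmem_node({0}, {0: 4096, 1: 10240, 2: 2048}))   # -> 1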

Userspace Tools

Convert PyGrub to C

Date of insert: 15/11/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Roger Pau Monné <roger.pau@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: With the replacement of xend by xl/libxl, PyGrub is the only remaining Python userspace component of the Xen tools. Since it already depends on a C library (libfsimage), converting it to C code should not be a huge effort. PyGrub's Python code is little more than a parser for grub and syslinux configuration files. Some embedded distros (mainly Alpine Linux) have already expressed interest in dropping the Python package as a requirement for a Dom0; this would make a Xen Dom0 much smaller.
Outcomes: Not specified, project outcomes
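
To give a sense of how small the job is, the core of what pygrub does for a grub2 config is sketched below in Python (the C version would do the same work). This is a deliberately stripped-down illustration: the real pygrub also parses grub-legacy and syslinux configs and reads them through libfsimage rather than from the host filesystem:

    import re
    import shlex

    MENUENTRY = re.compile(r'^\s*menuentry\s+(["\'])(.*?)\1')

    def parse_grub_cfg(text):
        """Very small grub2 parser: returns a list of boot entries.

        Illustration only -- pygrub also understands grub-legacy and
        syslinux configs, and reads the guest's filesystem via libfsimage.
        """
        entries, current = [], None
        for line in text.splitlines():
            m = MENUENTRY.match(line)
            if m:
                current = {"title": m.group(2), "kernel": None,
                           "args": "", "initrd": None}
                entries.append(current)
                continue
            if current is None:
                continue
            try:
                words = shlex.split(line, comments=True)
            except ValueError:
                continue                      # skip lines we cannot tokenize
            if not words:
                continue
            if words[0] in ("linux", "linux16"):
                current["kernel"] = words[1]
                current["args"] = " ".join(words[2:])
            elif words[0] in ("initrd", "initrd16"):
                current["initrd"] = words[1]
            elif words[0] == "}":
                current = None
        return entries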


Refactor Linux hotplug scripts

Date of insert: 15/11/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Roger Pau Monné <roger.pau@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: The current Linux hotplug scripts are all entangled, which makes them really difficult to understand or modify. The purpose of the hotplug scripts is to give end users a chance to "easily" support different configurations for Xen devices. The Linux hotplug scripts should be analyzed, producing a good description of what each hotplug script does. After this, the scripts should be cleaned up, putting common pieces of code into files shared across all scripts. A consistent coding style should be applied to all of them once the refactoring is finished.
Outcomes: Not specified, project outcomes


XL to XCP VM motion

Date of insert: 15/11/12; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Ian Campbell
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Currently xl and xapi (the XCP toolstack) have very different concepts about domain configuration, disk image storage etc. This project is to produce one or more command-line tools which support migrating VMs from one toolstack to another.

One tool should be provided which takes an xl configuration file and a reference to an XCP pool, creates a VM in the pool with a close approximation of the same configuration and streams the configured disk image into a selected SR.

A second tool should be provided which performs the opposite operation, i.e. given a reference to a VM residing in an XCP pool it should produce an xl-compatible configuration file and stream the VDIs into a suitable format.

These tools could reasonably be bundled as part of either toolstack and by implication could be written in C, OCaml or some other suitable language.

The tool need not operate on a live VM but that could be considered a stretch goal.

An acceptable alternative to the proposed implementation would be to implement a tool which converts between a commonly used VM container format which is supported by XCP (perhaps OVF or similar) and the xl toolstack configuration file and disk image format.
Outcomes: Not specified, project outcomes
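
As a starting point for the xl-to-XCP direction, the tool has to read the xl configuration format (simple "key = value" assignments with Python-like strings and lists). A minimal, assumption-laden reader is sketched below; the mapping onto XCP objects hinted at in the comments is the actual project:

    import ast

    def read_xl_config(path):
        """Read a simple xl config file into a dict (sketch).

        Handles only 'key = value' lines with Python-like literals (strings,
        numbers, lists), which covers common fields such as name, memory,
        vcpus, disk and vif.  Line continuations, includes and more exotic
        syntax are ignored -- this is a sketch, not a full parser.
        """
        cfg = {}
        for raw in open(path):
            line = raw.split("#", 1)[0].strip()
            if not line or "=" not in line:
                continue
            key, value = (part.strip() for part in line.split("=", 1))
            try:
                cfg[key] = ast.literal_eval(value)
            except (ValueError, SyntaxError):
                cfg[key] = value                # keep unparsed values as-is
        return cfg

    # From here, the tool would map fields onto XCP objects, e.g. cfg["name"]
    # and cfg["memory"] onto a new VM record, and each entry of cfg["disk"]
    # onto a VDI created in the chosen SR.  The field names are xl's; the
    # mapping itself is what this project would define.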


VM Snapshots

Date of insert: 16/01/2013; Verified: Not updated in 2020; GSoC: Yes
Technical contact: <Stefano Stabellini>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Although xl is capable of saving and restoring a running VM, it is not currently possible to create a snapshot of the disk together with the rest of the VM.

QEMU is capable of creating, listing and deleting disk snapshots on QCOW2 and QED files, so even today, by issuing the right commands via the QEMU monitor, it is possible to create disk snapshots of a running Xen VM. However, xl and libxl have no knowledge of these snapshots and don't know how to create, list or delete them.

This project is about implementing disk snapshots support in libxl, using the QMP protocol to issue commands to QEMU. Users should be able to manage the entire life-cycle of their disk snapshots via xl. The candidate should also explore ways to integrate disk snapshots into the regular Xen save/restore mechanisms and provide a solid implementation for xl/libxl.
Outcomes: Not specified, project outcomes
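
The QMP side of this is already quite small. Below is a hedged sketch of issuing a live external disk snapshot to the device-model QEMU over its QMP socket using blockdev-snapshot-sync; the socket path and device name are just examples, since in the real project libxl already knows the QMP socket it created for the domain and the block devices it configured:

    import json
    import socket

    def qmp_command(sock, reader, execute, arguments=None):
        """Send one QMP command and return the parsed reply (sketch)."""
        msg = {"execute": execute}
        if arguments:
            msg["arguments"] = arguments
        sock.sendall(json.dumps(msg).encode() + b"\n")
        return json.loads(reader.readline())

    def snapshot_disk(qmp_path, device, snapshot_file):
        """Create an external disk snapshot through QEMU's QMP socket.

        qmp_path, device and snapshot_file are placeholders; libxl would
        drive this itself as part of the xl snapshot commands this project
        would add.
        """
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(qmp_path)
        reader = sock.makefile()
        reader.readline()                                   # QMP greeting banner
        qmp_command(sock, reader, "qmp_capabilities")       # enter command mode
        return qmp_command(sock, reader, "blockdev-snapshot-sync",
                           {"device": device,
                            "snapshot-file": snapshot_file,
                            "format": "qcow2"})

    if __name__ == "__main__":
        print(snapshot_disk("/var/run/xen/qmp-libxl-1", "xvda",
                            "/var/lib/xen/images/guest-snap1.qcow2"))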


Allowing guests to boot with a passed-through GPU as the primary display

Date of insert: 01/22/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: George Dunlap <george.dunlap@eu.citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: One of the primary drivers of Xen in the "consumer market" of the open-source world is the ability to pass through GPUs to guests -- allowing people to run Linux as their main desktop but easily play games requiring proprietary operating systems without rebooting.

GPUs can be easily passed through to guests as secondary displays, but as of yet cannot be passed through as primary displays. The main reason is the lack of ability to load the VGA BIOS from the card into the guest.

The purpose of this project would be to allow HVM guests to load the physical card's VGA BIOS, so that the guest can boot with it as the primary display.
Outcomes: Not specified, project outcomes


Advanced Scheduling Parameters

Date of insert: 01/22/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: George Dunlap <george.dunlap@eu.citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: The credit scheduler provides a range of "knobs" to control guest behavior, including CPU weight and caps. However, a number of users have requested the ability to encode more advanced scheduling logic. For instance, "Let this VM max out for 5 minutes out of any given hour; but after that, impose a cap of 20%, so that even if the system is idle it can't use an unlimited amount of CPU power without paying for a higher level of service."

This is too coarse-grained to do inside the hypervisor; a user-space tool would be sufficient. The goal of this project would be to come up with a good way for admins to support these kinds of complex policies in a simple and robust way.
Outcomes: Not specified, project outcomes
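
A first userspace cut of the "5 minutes flat-out per hour, then a 20% cap" policy from the description could be as simple as a loop around xl sched-credit, which already exposes per-domain weight and cap. The domain name and timings below are placeholders, and a real tool would be considerably smarter:

    import subprocess
    import time

    def set_cap(domain, cap_percent):
        # `xl sched-credit -d <domain> -c <cap>` sets the credit scheduler
        # cap for the domain; a cap of 0 means "no cap".
        subprocess.check_call(["xl", "sched-credit",
                               "-d", domain, "-c", str(cap_percent)])

    def enforce_burst_policy(domain, burst_secs=5 * 60, period_secs=60 * 60,
                             capped_percent=20):
        """Let `domain` run uncapped for burst_secs out of every period_secs,
        then cap it at capped_percent for the rest of the period.

        Sketch of the kind of policy daemon the project asks for; a real
        tool would track actual CPU consumption (e.g. from xentop output)
        rather than wall-clock time, and handle many domains and policies.
        """
        while True:
            set_cap(domain, 0)                    # uncapped burst window
            time.sleep(burst_secs)
            set_cap(domain, capped_percent)       # capped for the remainder
            time.sleep(period_secs - burst_secs)

    if __name__ == "__main__":
        enforce_burst_policy("guest1")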


Performance

Performance tools overhaul

Date of insert: 02/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Generally, work on the performance tools themselves should be listed separately on the Xen_Profiling:_oprofile_and_perf wiki page.
Outcomes: Not specified, project outcomes

Upstream bugs!

VCPU hotplug bug

Date of insert: Sep 1 2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: VCPU hotplug. [2]

To reproduce this, it is as easy as having this in your guest config:

vcpus=2
maxvcpus=3

Then, when you launch the guest, play with 'xm vcpu-set 0 2' and 'xm vcpu-set 0 3' and watch the guest forget about one of the CPUs.

This is what you will see in the guest:

udevd-work[2421]: error opening ATTR{/sys/devices/system/cpu/cpu2/online} for writing: No such file or directory

If you instrument udev and look at the code you will find somebody came up with a "fix": http://serverfault.com/questions/329329/pv-ops-kernel-ignoring-cpu-hotplug-under-xen-4-domu

But the real fix is what Greg outlines in the URL above.
Outcomes: Not specified, project outcomes

RCU timer sent to offline VCPU

Date of insert: Sep 1 2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description:
[    0.073006] WARNING: at /home/konrad/ssd/linux/kernel/rcutree.c:1547 __rcu_process_callbacks+0x42e/0x440()
[    0.073008] Modules linked in:
[    0.073010] Pid: 12, comm: migration/2 Not tainted 3.5.0-rc2 #2
[    0.073011] Call Trace:
[    0.073017]  <IRQ>  [<ffffffff810718ea>] warn_slowpath_common+0x7a/0xb0

which I get with this guest config:

vcpus=2
maxvcpus=3


Here is what Paul says: https://lkml.org/lkml/2012/6/19/360
Outcomes: Not specified, project outcomes


CONFIG_NUMA on 32-bit.

Date of insert: Sep 1 2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: http://www.spinics.net/lists/kernel/msg1350338.html

I came up with a patch for the problem that William found: http://lists.xen.org/archives/html/xen-devel/2012-05/msg01963.html

and narrowed it down to Linux calling xen_set_pte with a PMD flag (so trying to set up a 2MB page). Currently the implementation of xen_set_pte can't do 2MB, but it will gladly accept the argument and the multicall will fail.

Peter did not like the x86 implementation, so I was thinking of implementing some code in xen_set_pte that would detect that it is a PMD flag and do "something". That something could be either probing the PTEs to see if there is enough space and, if so, just issuing the multicall 512 times, or performing a hypercall to set up a superpage... But then I wasn't sure how we would tear down such a superpage.
Outcomes: Not specified, project outcomes


Time accounting for stolen ticks.

Date of insert: Sep 1 2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: This is about http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=shortlog;h=refs/heads/devel/pvtime.v1.1 and whether those patches are the right way or the wrong way to do it.

The discussion of this is at http://lists.xen.org/archives/html/xen-devel/2011-10/msg01477.html
Outcomes: Not specified, project outcomes

Xen Cloud Platform (XCP) and XAPI projects

There are separate wiki pages about XCP and XAPI related projects. Make sure you check those out as well!


Fuzz testing Xen with Mirage

Date of insert: 28/11/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Anil Madhavapeddy <anil@recoil.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: MirageOS (http://openmirage.org) is a type-safe exokernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening guest kernel. We would like to use the Mirage/Xen libraries to fuzz test all levels of a typical cloud toolstack. Mirage has low-level bindings for Xen hypercalls, mid-level bindings for domain management, and high-level bindings to XCP for cluster management. This project would build a QuickCheck-style fuzzing mechanism that would perform millions of random operations against a real cluster, and identify bugs with useful backtraces.
Outcomes: Not specified, project outcomes


Mirage OS XCP/Xen support

Date of insert: 28/11/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Anil Madhavapeddy <anil@recoil.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: MirageOS (http://openmirage.org) is a type-safe exokernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening kernel. We would like to explore several strategies for distributed computation using these microkernels, and this project covers writing API bindings for XCP in OCaml. Optionally, we would also like bindings to popular cloud APIs such as Openstack and Amazon EC2. These API bindings can be used to provide operating-system-level abstractions to the exokernels. For example, hotplugging a vCPU would perform a "VM create" at the XCP level, and make the extra process known to the Mirage runtime so that it can be scheduled for computation. We should be able to spin up 1000s of "CPUs" by using such APIs in a cluster environment.
Outcomes: Not specified, project outcomes


From simulation to emulation to production: self-scaling apps

Date of insert: 28/11/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Anil Madhavapeddy <anil@recoil.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: MirageOS (http://openmirage.org) is a type-safe exokernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening guest kernel. An interesting consequence of programming Mirage applications in a functional language is that the device drivers can be substituted with emulated equivalents. Therefore, it should be possible to test an application under extreme load conditions as a simulation, and then recompile the *same* code into production. The simulation can inject faults and test data structures under distributed conditions, but using a fraction of the resources required for a real deployment. This project will require a solid grasp of distributed protocols, and functional programming. Okasaki's book will be a useful resource...
Outcomes: Not specified, project outcomes


Towards a multi-language unikernel substrate for Xen

Date of insert: 28/11/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Anil Madhavapeddy <anil@recoil.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: There are several languages available that compile directly to Xen microkernels, instead of running under an intervening guest OS. We're dubbing such specialised binaries as "unikernels". Examples include: Each of these is in a different state of reliability and usability. We would like to survey all of them, build some common representative benchmarks to evaluate them, and build a common toolchain based on XCP that will make it easier to share code across such efforts. This project will require a reasonable grasp of several programming languages and runtimes, and should be an excellent project to learn more about the innards of popular languages.
Outcomes: Not specified, project outcomes


DRBD Integration

Date of insert: 07/01/2013; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: John Morris <john@zultron.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: DRBD is potentially a great addition to the other high-availability features in XenAPI. An architecture of as few as two Dom0s with DRBD-mirrored local storage is an inexpensive minimal HA configuration: it enables live migration of VMs between physical hosts, provides failover in case of disk failure, and eliminates the need for external storage. This setup can be used in small-shop or hobbyist environments, or as a basic unit in a much larger scalable architecture.

Existing attempts at integrating DRBD sit below the SM layer and thus do not enable one VBD per DRBD device. They also suffer from a split-brain situation that could be avoided by controlling active/standby status from XenAPI.

DRBD should be implemented as a new SR type on top of LVM. The tools for managing DRBD devices need to be built into storage management, along with the logic for switching the active and standby nodes.
Outcomes: Not specified, project outcomes

Quick links to changelogs of the various Xen related repositories/trees

Please see XenRepositories wiki page!