Archived/GSoC 2014
{{InfoLeft|The application deadline for GSoC has closed for 2014.}}

__TOC__


The Xen Project is a Linux Foundation collaborative project that develops the
* Xen Hypervisor (for x86 and ARM)
* The XAPI toolstack
* Mirage OS
The project also has excellent relationships with its upstreams (the Linux kernel, the BSDs, QEMU and other projects) and with downstreams such as Linux distributions. This is reflected in the project list, which contains many interesting cross-project development projects for students.

== GSoC and Xen ==
This page is used to list project ideas for [http://www.google-melange.com/gsoc/homepage/google/gsoc2014 Google Summer of Code (GSoC) 2014].


=== Key GSoC resources ===
Google Summer of Code 2014 is on (see [http://google-opensource.blogspot.com/2013/10/google-code-in-2013-and-google-summer.html the announcement]). The Xen Project has applied as a Mentoring Organization. Stay posted.


* [http://google-opensource.blogspot.com/2013/10/google-code-in-2013-and-google-summer.html GSoC announcement: Google Code-in 2013 and Google Summer of Code 2014 are on]
* [http://www.google-melange.com/gsoc/homepage/google/gsoc2014 GSoC Homepage]

=== Finding a project that fits you ===
This page lists Xen Project development projects for GSoC that can be picked up by anyone! If you're interested in hacking on Xen Project code and want to become part of our friendly developer community, this is the place to start. Ready for the challenge?

'''To work on a project:'''
* Find a project that looks interesting (or a bug if you want to start with something simple)
* Send an email to the relevant [http://www.xenproject.org/help/mailing-list.html mailing list] (see '''Developer Mailing Lists''') and let us know if you are interested in working on, or applying for, a specific project.
* Post your ideas, questions and RFCs to the relevant [http://www.xenproject.org/help/mailing-list.html mailing list] sooner rather than later, so you can get comments and feedback.
* Send patches to the list early for review, so you can get feedback and be sure you're heading in the right direction.
* Your work should be based on the xen-unstable development tree if it is Xen and/or tools related; after your patch has been merged into xen-unstable it can be backported to the stable branches (Xen 4.2, Xen 4.1, etc.). Kernel-related patches should be based on the latest upstream kernel.org Linux git tree.


'''You have your own project idea: no problem!'''
* If you have your own project idea, outline what you are trying to do on the relevant [http://www.xenproject.org/help/mailing-list.html mailing list]. If you don't know the right list, post on xen-devel (subscription and archives: http://lists.xensource.com/mailman/listinfo/xen-devel) and we can redirect you to the right list. Make sure you add '''GSoC 2014''' to the subject line.


'''It is a good idea to ...'''<br>
The Xen Project has also participated in the Gnome Outreach Program for Women (OPW) in the past. One of the things we learned from participating in OPW is that you will be more successful, happier, and get more out of participating in student programs such as GSoC if you do a bit of prep-work before writing an application. Here are some things you can do:
* Contact your mentor early and get to know him or her
* If the Xen Project is accepted into GSoC, start hanging out on our IRC channel. You can use the #xen-opw IRC channel on freenode.net for now (if accepted, we will create a GSoC channel)
* You may want to ask the mentor for a couple of small bite-size work items (such as reviewing someone's patch, a bite-size bug, ...) and start communicating on the relevant [http://www.xenproject.org/help/mailing-list.html mailing list]. That helps you become familiar with our development process, the mentor and other community members, and will help you choose the right project and decide whether the Xen Project is for you.
* Note that quite a few Xen maintainers used to be GSoC students once. Feel free to ask community dot manager at xenproject dot org to put you in touch with them if you have questions about their experience.
* Any work you submit before applying for a project should be based on the xen-unstable development tree if the project is Xen Hypervisor and/or tools related. Linux kernel related patches should be based on the upstream kernel.org Linux git tree (latest version). XAPI and Mirage OS patches should be based on the right codeline too. Check out the '''navigation by audience''' section on the left to find resources.

==== More resources ====
Quick links to changelogs of the various Xen related repositories/trees: please see the [[XenRepositories]] wiki page!

Before submitting patches, please look at the [[Submitting Xen Patches]] wiki page and the relevant [http://www.xenproject.org/developers/teams.html Xen Project team page], which contain more information.


If you have new ideas, suggestions or development plans let us know and we'll update this list!


=== Aspiring Students ===
* Please contact the mentor and CC the most appropriate mailing list
* Get a bite-size task from the mentor before the application starts
* If you feel comfortable with an idea, please put your name to an idea using the following format:
<pre>
{{project
...
|Review=(delete as addressed)
* {{Comment|~~~~:}} I am interested in this idea ...
(note that you may also want to link to the e-mail thread with the mentor)
</pre>
* You will need to request write access to the wiki by filling out [http://xenproject.org/component/content/article/100-misc/145-request-to-be-made-a-wiki-editor.html this form]


=== Applying for GSoC ===
{{InfoLeft|Note that we will update this section when more student information on [http://www.google-melange.com/gsoc/homepage/google/gsoc2014 melange] is available, to make it easier for you to find information. And of course assuming that the Xen Project will be accepted into GSoC.}}

To apply for a project, follow the steps outlined on:
* [http://www.google-melange.com/gsoc/homepage/google/gsoc2014 melange]
* We do have our own [[GSoC Student Application Template]] form

=== GSoC Projects that were accepted in 2014 ===


{{project
|Project=Implement Xen PVUSB support in xl/libxl toolstack
|Date=01/12/2012
|Contact=Mentor: George Dunlap, Student: Bo Cao
|GSoC=Yes
|Desc=
xl/libxl does not currently support Xen PVUSB functionality. Port the feature from xm/xend to xl/libxl.
* More info: http://wiki.xen.org/xenwiki/XenUSBPassthrough
}}


{{project
|Project=Lazy restore using memory paging
|Date=01/20/2014
|Contact=Mentor: Andres Lagar-Cavilla, Student: Dushyant Behl
|Difficulty=Medium
|GSoC=Yes
|Desc=VM save/restore results in a boatload of IO and non-trivial downtime as the entire memory footprint of a VM is read from IO.

Xen memory paging support in x86 is now mature enough to allow for lazy restore, whereby the footprint of a VM is backfilled while the VM executes. If the VM hits a page not yet present, it is eagerly paged in.

There has been some concern recently about the lack of docs and/or mature tools that use xen-paging. This is a good way to address the problem.

|Skills=A good understanding of save/restore, and virtualized memory management (e.g. EPT, shadow page tables, etc). In principle the entire project can be implemented in user-space C code, but it may be the case that new hypercalls are needed for performance reasons.

|Outcomes=Expected outcome:
* Mainline patches for libxc and libxl
}}
* {{Comment|[[User:dushyant|dushyant]]}} Hi, I am working on this project.
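A minimal sketch of the restore-side control flow this implies is below. The helper names (<code>backfill_next_page</code>, <code>poll_paging_event</code>, <code>page_in_now</code>) are hypothetical placeholders; a real implementation would sit on top of Xen's mem-paging interfaces in libxc (the xc_mem_paging_* family used by the xenpaging tool).
<pre>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers -- placeholders for the libxc mem-paging calls
 * that a real implementation would build on. */
bool backfill_next_page(void);          /* populate one more page of the footprint */
bool poll_paging_event(uint64_t *gfn);  /* non-blocking: did the VM fault on a page? */
void page_in_now(uint64_t gfn);         /* eagerly page in the faulting gfn */

/* Lazy restore: resume the VM immediately, then race two sources of
 * page-ins -- demand faults from the running guest and a sequential
 * background backfill of everything it has not touched yet. */
void lazy_restore_loop(void)
{
    uint64_t gfn;
    bool more = true;

    while (more) {
        /* Demand faults take priority over the background backfill. */
        if (poll_paging_event(&gfn))
            page_in_now(gfn);
        else
            more = backfill_next_page();
    }
}
</pre>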


{{project
|Project=HVM per-event-channel interrupts
|Date=01/30/2013
|Contact=Mentor: Paul Durrant, Student: Yandong Han
|Skills=C, some prior knowledge of Xen useful
|Desc=Windows PV drivers currently have to multiplex all event channel processing onto a single interrupt which is registered with Xen using the HVM_PARAM_CALLBACK_IRQ parameter. This results in a lack of scalability when multiple event channels are heavily used, such as when multiple VIFs in the VM are simultaneously under load.

Goal: Modify Xen to allow each event channel to be bound to a separate interrupt (the association being controlled by the PV drivers in the guest) to allow separate event channel interrupts to be handled by separate vCPUs. There should be no modifications required to the guest OS interrupt logic to support this (as there is with the current Linux PV-on-HVM code) as this will not be possible with a Windows guest.
|Outcomes=Code is submitted to xen-devel@xen.org for inclusion in xen-unstable
|GSoC=yes}}
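The guest-side shape of such an interface might look like the sketch below. The structure and the hypercall wrapper are hypothetical illustrations of the design goal (one interrupt vector per event channel, steered to a chosen vCPU) -- no such ABI exists yet; designing it is the project.
<pre>
#include <stdint.h>

/* Hypothetical parameter block: binds one event channel to one interrupt
 * vector on one vCPU. Nothing like this exists in the Xen ABI today. */
struct evtchn_bind_vector {
    uint32_t port;    /* event channel to bind */
    uint32_t vcpu;    /* vCPU whose LAPIC should receive the interrupt */
    uint8_t  vector;  /* guest interrupt vector to deliver */
};

int hypercall_bind_vector(struct evtchn_bind_vector *b); /* hypothetical wrapper */

/* In the PV drivers' init path, instead of funnelling everything through
 * the single HVM_PARAM_CALLBACK_IRQ interrupt, each channel would get
 * its own vector, so load can spread across vCPUs: */
void bind_all_channels(struct evtchn_bind_vector *binds, unsigned int n)
{
    for (unsigned int i = 0; i < n; i++)
        hypercall_bind_vector(&binds[i]);
}
</pre>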


{{project
|Project=Mirage OS cloud API support
|Date=28/11/2013
|Contact=Mentor: Dave Scott; Student: Jyotsna Prakash
|Skills=OCaml
|Difficulty=medium
|Desc=
MirageOS (see http://xenproject.org/developers/teams/mirage-os.html, http://www.openmirage.org/) is a type-safe unikernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening kernel. A MirageOS application typically runs via several communicating kernel instances on the cloud. Today these instances are difficult to manage; we would like to explore strategies for managing these distributed computations using common public cloud APIs such as those exposed by Amazon EC2 and Rackspace.

First we need to create pure OCaml API bindings for (e.g.) EC2 and Rackspace (purity is needed to ensure portability). These API bindings can then be used to provide operating-system-level abstractions to the unikernels. For example, a traditional VM might hotplug a vCPU; while a MirageOS application would request a "VM create" using the cloud API and "connect" the new instance to the existing network. We should be able to spin up 1000s of "CPUs" by using such APIs in a cluster environment.

As well as helping Xen/Mirage, the public cloud API bindings will be very useful to other people in other contexts -- a nice side-effect.

See https://fedoraproject.org/wiki/User:Gholms/EC2_Primer for a primer on how to use EC2.
|Outcomes=1. one or more public cloud API bindings plus examples, in a standalone repo on github; 2. an example mirage app which uses these APIs to spin up a new VM
|GSoC=yes
}}



{{project
|Date=01/08/2012
|Difficulty=Low-Medium
|Contact=Mentor: Boris Ostrovsky, Student: Tülin İZER
|Desc=
Xenwatch is locked with a coarse lock. For a huge number of guests this represents a scalability issue. The need is to rewrite the xenwatch locking in order to support full scalability.
See https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/xen/xenbus/xenbus_xs.c#n768 for the code.
|Outcomes=Expected outcome:
* Have upstream patches or a draft of them.
* Benchmark report of with and without.
|GSoC=Yes
|Skills=You need to have an understanding of:
* locks - spinlocks and mutexes
* how to build the Linux kernel
}}
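To make the scalability problem concrete, here is a rough sketch (not the in-tree code) of the difference between funnelling every watch callback through one serialised thread and dispatching each event as an independent work item. The Linux workqueue calls are real; the surrounding types are simplified for illustration.
<pre>
#include <linux/kernel.h>
#include <linux/workqueue.h>

/* Simplified stand-in for a xenstore watch event. */
struct xs_watch_event {
    struct work_struct work;
    void (*callback)(struct xs_watch_event *);
};

/* Today (simplified): one xenwatch thread drains a global list under a
 * single mutex, so one slow callback delays every other guest's events.
 * One possible direction: hand each event to the workqueue machinery so
 * independent callbacks can run concurrently. */
static void xs_event_fn(struct work_struct *work)
{
    struct xs_watch_event *ev =
        container_of(work, struct xs_watch_event, work);
    ev->callback(ev);
}

static void dispatch_event(struct xs_watch_event *ev)
{
    INIT_WORK(&ev->work, xs_event_fn);
    schedule_work(&ev->work);  /* or a dedicated per-domain workqueue */
}
</pre>
A real design would additionally have to preserve the ordering of events on the same xenstore path, which is part of what makes this project non-trivial.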


== List of peer reviewed Projects ==

=== Domain support (PVOPS and Linux) ===

{{project
|Project=Enabling the 9P File System transport as a paravirt device
|Date=01/20/2014
|Contact=Andres Lagar-Cavilla <andres@lagarcavilla.org>
|GSoC=Yes
|Desc=VirtIO provides a 9P FS transport, which is essentially a paravirt file system device. VMs can mount arbitrary file system hierarchies exposed by the backend. The 9P FS specification has been around for a while, while the VirtIO transport is relatively new. The project would consist of implementing a classic Xen front/back pv driver pair to provide a transport for the 9P FS protocol.

* More info: http://www.linux-kvm.org/page/9p_virtio
* Also the Bell Labs original OS that introduced the 9P protocol: http://plan9.bell-labs.com/sources/plan9/sys/src/

|Skills=Required skills include knowledge of kernel hacking and file system internals. Desired skills include: understanding of Xen PV driver structure, and VirtIO.

|Outcomes=Expected outcome:
* LKML patches for front and back end drivers.
* In particular, the domain should be able to boot from the 9P FS.
|Review=(delete as addressed)
* {{Comment|[[User:Lars.kurth|Lars.kurth]] 15:24, 17 February 2014 (UTC):}} This project would benefit from links to the virtio specs and documents explaining how the PV protocol works.
}}
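For orientation, every 9P message starts with the same small header, and a Xen transport would carry these messages over a shared ring in the same style as the existing PV protocols. A sketch of the on-the-wire framing, per the 9P2000 specification:
<pre>
#include <stdint.h>

/* 9P2000 on-the-wire framing: every message, request or response,
 * starts with this little-endian header, followed by a type-specific
 * body. */
struct p9_msg_header {
    uint32_t size;  /* total message size in bytes, including these 7 */
    uint8_t  type;  /* message type, e.g. Tversion = 100, Rversion = 101 */
    uint16_t tag;   /* chosen by the client; the reply echoes it back */
} __attribute__((packed));
</pre>
The front/back pair is then mainly plumbing: a ring plus grant references to move these messages between domU and the backend, which translates them into real file system operations.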


{{project
|Contact=Wei Liu <wei.liu2@citrix.com>
|GSoC=Yes
|Difficulty=Medium
|Desc=
OVMF is a project to enable UEFI support for virtual machines: http://sourceforge.net/apps/mediawiki/tianocore/index.php?title=OVMF

SeaBIOS is a legacy BIOS implementation used by Xen to boot HVM guests: http://www.coreboot.org/SeaBIOS

Currently Xen supports booting HVM guests with SeaBIOS and with OVMF UEFI firmware, but those are separate binaries. OVMF supports adding a legacy BIOS blob to its binary via its Compatibility Support Module (CSM) support. We can try to produce a single OVMF binary with SeaBIOS in it, thus having only one firmware binary.

Tasks may include:
* understand the boot process of HVM guests
* figure out how CSM works
* design / implement the interface between hvmloader and the unified binary
|Outcomes=Produce a single firmware binary that can be used for both legacy boot and UEFI HVM guests
|Skills=You need to have an understanding of:
* firmware internals
* some C programming
|Review=
}}
* {{Comment|[[User:sdytlm|sdytlm]]}} Hi, I am interested in working on this project.

{{project
|Project=Xen block backend/frontend multiqueue support
|Date=03/09/2014
|Difficulty=High
|Contact=Roger Pau Monné <[mailto:roger.pau@citrix.com roger.pau@citrix.com]>
|Desc=
The Linux kernel (and FreeBSD, Windows, etc) have ParaVirtualized (PV) drivers to perform the I/O instead of using the emulated devices that appear in QEMU (IDE, SCSI, etc). This is done because the emulation of the IDE drivers is quite slow - and if you dig into how it is actually done - it is full of bit-banging registers. The PV drivers are an answer to this and eliminate the need for emulation. The mechanism by which they work is nicely drawn out in http://wiki.xen.org/wiki/PV_Protocol and http://www.informit.com/articles/article.aspx?p=1160234&seqNum=3 ("Definitive Guide to the Xen Hypervisor, The").

There have been improvements done to it - see http://wiki.xen.org/wiki/Xen_4.3_Block_Protocol_Scalability and http://blog.xen.org/index.php/2013/08/07/indirect-descriptors-for-xen-pv-disks/

However, there is still room for improvement. We can utilize the new block multiqueue API support in Linux (see https://lwn.net/Articles/552904/ and http://kernel.dk/systor13-final18.pdf) to allocate per CPU a block thread (which handles the I/O transmission).

That should provide greater throughput and lower latency for I/O workloads.

Also see https://docs.google.com/document/d/1Vh5T8Z3Tx3sUEhVB0DnNDKBNiqB_ZA8Z5YVqAsCIjuI which has some of the explanation.

|Skills=You need to have an understanding of:
* the Linux kernel
* how I/O works
* the C language
|Outcomes=Expected outcome:
* Patches for the Linux Kernel Mailing List (LKML).
* Benchmark reports.
|GSoC=Yes
}}
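A compressed sketch of the registration the frontend would grow into is below. struct blk_mq_ops and the queue_rq hook are the real kernel interface described in the LWN article above (field names shifted across early blk-mq releases), but the xen-blkfront integration shown in the comments is purely illustrative.
<pre>
#include <linux/blk-mq.h>

/* Illustrative only: wire one hardware context per vCPU so each CPU's
 * I/O is submitted on its own ring to the backend. Simplified -- a real
 * driver must also set up tag sets, queue mapping, and completion. */
static int blkfront_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *rq)
{
    /* Place 'rq' on the Xen ring associated with this hardware context,
     * then notify the backend via that ring's event channel. */
    return BLK_MQ_RQ_QUEUE_OK;
}

static struct blk_mq_ops blkfront_mq_ops = {
    .queue_rq = blkfront_queue_rq,
};
</pre>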


=== Hypervisor ===

{{project
|Project=Utilize Intel QuickData on network and block path.
|Date=01/22/2013
|Difficulty=High
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|Desc=The Intel QuickData technology, also known as Direct Cache Access (or I/OAT), is the chipset that sits in the PCIe subsystem in Intel systems. It allows the PCIe subsystem to tag which PCIe writes to memory should reside in the Last Level Cache (LLC, also known as L3, which in some cases can be 15MB, or 2.5MB per CPU). This offers incredible speed boosts, as we bypass the DIMMs and the CPU can instead process the data straight from the cache.

Adding this component to the network or block backends can mean that we can keep the data longer in the cache and the guest can process the data right off the cache.

See these for references:
* http://www.intel.com/content/www/us/en/wireless-network/accel-technology.html
* http://www.intel.com/content/www/us/en/chipsets/quickdata-technology-software-guide-for-linux-paper.html

Also, dmaengine@vger.kernel.org is an excellent mailing list to subscribe to.
|Skills=The basic requirement for this project is Linux kernel programming skill.
The candidate for this project should be familiar with open source development workflow as it may require collaboration with several parties.
|Outcomes=Expected outcome:
* Investigate whether DCA (aka QuickData aka I/OAT) works with Xen.
* If the above is true: have upstream patches (or draft patches)
* and a benchmark report of with and without.
|GSoC=Yes
}}
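As a starting point, the I/OAT engine is driven through the kernel's dmaengine framework (the mailing list mentioned above); the calls below are the real dmaengine API, while whether and how this helps the Xen backends is exactly what the project must investigate.
<pre>
#include <linux/dmaengine.h>

/* Sketch: grab a memcpy-capable channel (the I/OAT engine exposes one
 * through the dmaengine framework) for use on the backend data path.
 * Where DCA is enabled, the chipset can steer these transfers so the
 * payload lands warm in the L3 cache of the target CPU. */
static struct dma_chan *get_ioat_channel(void)
{
    dma_cap_mask_t mask;

    dma_cap_zero(mask);
    dma_cap_set(DMA_MEMCPY, mask);

    /* NULL filter: take any channel advertising memcpy capability. */
    return dma_request_channel(mask, NULL, NULL);
}
</pre>
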
{{project
|Project=Integrating NUMA and Tmem
|Date=08/08/2012
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Dario Faggioli <[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>
|Desc=NUMA (Non-Uniform Memory Access) systems are advanced server platforms, comprising multiple ''nodes''. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

Transcendent memory (Tmem) can be seen as a mechanism for discriminating between frequently and infrequently used data, and thus helping to allocate them properly. It would be interesting to investigate and implement all the necessary mechanisms to take advantage of this and improve performance of Tmem-enabled guests running on NUMA machines.

For instance, implementing something like <code>alloc_page_on_any_node_but_the_current_one()</code> (or <code>any_node_except_this_guests_node_set()</code> for multinode guests), and having Xen's Tmem implementation use it (especially in combination with selfballooning), could solve a significant part of the NUMA problem when running Tmem-enabled guests.
}}
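A sketch of the kind of helper the description mentions is below. alloc_domheap_pages() and MEMF_node() are real Xen internals (and MEMF_node expresses a preference, not a guarantee); the policy loop itself is illustrative.
<pre>
#include <xen/mm.h>
#include <xen/numa.h>

/* Illustrative sketch of alloc_page_on_any_node_but_the_current_one():
 * try every node except the one the guest runs on, so Tmem's
 * infrequently-used data does not compete for the guest's local node. */
static struct page_info *alloc_page_on_any_node_but_the_current_one(
    struct domain *d, unsigned int this_node)
{
    unsigned int node;
    struct page_info *pg;

    for ( node = 0; node < MAX_NUMNODES; node++ )
    {
        if ( node == this_node )
            continue;
        pg = alloc_domheap_pages(d, 0, MEMF_node(node));
        if ( pg != NULL )
            return pg;
    }

    return NULL;  /* fall back to the default allocator policy */
}
</pre>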

=== Userspace Tools ===




{{project
|Project=CPU/RAM/PCI diagram tool
|Date=01/30/2014
|Contact=Andrew Cooper <andrew.cooper3@citrix.com>
|Difficulty=Low to medium
|Skills=Linux scripting; basic understanding of PC server hardware
|Desc=It is often useful in debugging kernel, hypervisor or performance problems to understand the bus topology of a server. This project will create a layout diagram for a server automatically using data from ACPI Tables, SMBios Tables, lspci output etc. This tool would be useful in general Linux environments including Xen and KVM based virtualisation systems.
}}
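To get a feel for the data sources involved, the PCI half of the problem can be prototyped by walking sysfs. This runnable sketch just lists each device's bus address; a real tool would join this with ACPI/SMBIOS data into a diagram.
<pre>
#include <stdio.h>
#include <dirent.h>

/* List PCI devices from sysfs: each entry under /sys/bus/pci/devices is
 * a domain:bus:device.function address, symlinked into the device
 * hierarchy -- enough to start reconstructing bus topology. */
int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    struct dirent *de;

    if (!d) {
        perror("opendir");
        return 1;
    }
    while ((de = readdir(d)) != NULL) {
        if (de->d_name[0] == '.')
            continue;
        printf("%s\n", de->d_name);  /* e.g. 0000:00:1f.2 */
    }
    closedir(d);
    return 0;
}
</pre>
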
{{project
|Project=KDD (Windows Debugger Stub) enhancements
|Date=01/30/2014
|Contact=Paul Durrant <paul.durrant@citrix.com>
|Difficulty=Medium
|Skills=C, Kernel Debuggers, Xen, Windows
|Desc=kdd is a Windows Debugger Stub for the Xen hypervisor. It is OSS, found under http://xenbits.xen.org/gitweb/?p=xen.git;a=tree;f=tools/debugger/kdd;h=fd82789a678fb8060cc74ebbe0a04dc58309d6d7;hb=refs/heads/master
kdd allows you to debug a running Windows virtual machine on Xen using standard Windows kernel debugging tools like WinDbg. kdd is an external debugger stub for the Windows kernel.
Windows can be debugged without enabling the debugger stub inside the Windows kernel by using kdd. This is important for debugging hard to reproduce problems on Windows virtual machines that may not have debugging enabled.
|Outcomes=Code is submitted to xen-devel@xen.org for inclusion in the xen-unstable project.
|GSoC=yes}}



{{project
|Date=02/04/2014
|Contact=Andres Lagar-Cavilla <andres@lagarcavilla.org>
|Difficulty=Easy
|GSoC=Yes
|Desc=When creating a VM, a policy is applied to mask certain CPUID features. Right now it's black magic.

The KVM stack has done an excellent job of making this human-usable, and understandable.

For example, in a qemu-kvm command-line you may encounter:

CPUID management is crucial in a heterogeneous cluster where migrations and save/restore require careful processor feature selection to avoid blow-ups.

See: http://wiki.qemu.org/images/c/c8/Cpu-models-and-libvirt-devconf-2014.pdf
and https://www.berrange.com/posts/2010/02/15/guest-cpu-model-configuration-in-libvirt-with-qemukvm/
and http://blog.xen.org/index.php/2014/01/17/libvirt-support-for-xens-new-libxenlight-toolstack/
|Skills=A good understanding of C user-land programming, and the ability to dive into qemu/libvirt (for reference code and integration), as well as libxc and libxl (for implementation).
}}
* {{Comment|[[User:Vasilev|Vasilev]]}} I am interested in this idea ( [http://lists.xen.org/archives/html/xen-api/2014-03/msg00011.html] )
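To make the "black magic" concrete: a feature mask is ultimately just clearing bits in the CPUID leaves a guest is shown. The runnable snippet below reads leaf 1 on the host and clears one feature bit the way a masking policy would; __get_cpuid comes with GCC/clang's <cpuid.h>.
<pre>
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    printf("host leaf 1: ecx=%08x edx=%08x\n", ecx, edx);

    /* A masking policy is just bit-clearing: e.g. hide AVX (leaf 1,
     * ECX bit 28) so guests can migrate to hosts without it. */
    ecx &= ~(1u << 28);

    printf("masked       ecx=%08x (AVX hidden)\n", ecx);
    return 0;
}
</pre>
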
=== Mirage OS ===


{{project
|Project=Create a tiny VM for easy load testing
|Date=01/30/2014
|Contact=Dave Scott <dave.scott@eu.citrix.com>
|Difficulty=Medium
|Skills=OCaml
|Desc=The Mirage OS framework (see http://xenproject.org/developers/teams/mirage-os.html, http://www.openmirage.org/) can be used to create tiny 'unikernels': entire software stacks which run directly on the Xen hypervisor. These VMs have such a small memory footprint (16 MiB or less) that many of them can be run even on relatively small hosts. The goal of this project is to create a specific unikernel that can be configured to generate a specific I/O pattern, and to create configurations that mimic the boot sequence of Linux and Windows guests. The resulting unikernel will then enable cheap system load testing.

The first task is to generate an I/O trace from a VM. For this we could use 'xen-disk', a userspace Mirage application which acts as a block backend for Xen guests (see http://openmirage.org/wiki/xen-synthesize-virtual-disk). Following the wiki instructions we could modify a 'file' backend to log the request timestamps, offsets and buffer lengths.

The second task is to create a simple kernel based on one of the MirageOS examples (see http://github.com/mirage/mirage-skeleton). The 'block' example shows how reads and writes are done. The previously-generated log could be statically compiled into the kernel and executed to generate load.
|Outcomes=1. a repository containing a 'unikernel' (see http://github.com/mirage/mirage-skeleton)
2. at least 2 I/O traces, one for Windows boot and one for Linux boot (any version)
|GSoC=yes}}



{{project
|Project=Fuzz testing Xen with Mirage
|Date=28/11/2013
|Contact=Anil Madhavapeddy <anil@recoil.org>
|Skills=OCaml, Xen
|Difficulty=medium
|Desc=
Mirage OS (see http://xenproject.org/developers/teams/mirage-os.html, http://www.openmirage.org/) is a type-safe unikernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening guest kernel. We would like to use the Mirage/Xen libraries to fuzz test all levels of a typical cloud toolstack. Mirage has low-level bindings for Xen hypercalls, mid-level bindings for domain management, and high-level bindings to XCP for cluster management. This project would build a QuickCheck-style fuzzing mechanism that would perform millions of random operations against a real cluster, and identify bugs with useful backtraces.

The first task would be to become familiar with a specification-based testing tool like Kaputt (see http://kaputt.x9c.fr/). The second task would be to choose an interface for testing; perhaps one of the hypercall ones.
}}


{{project
|Project=Mirage OS web stack testing
|Date=25/02/2014
|Contact=Anil Madhavapeddy <anil@recoil.org>
|Skills=OCaml, shell scripting
|Difficulty=medium
|Desc=
MirageOS has an emerging web toolstack that's broken up as a series of libraries -- for example, Cohttp, Uri, Cow, Ipaddr, RSS and Cowabloga. This project will get you familiar with them by building a protocol testing framework that can generate traffic using off-the-shelf tools such as httperf, and evaluate the results vs applications such as Apache or Nginx.
|Outcomes=1. a test harness for HTTP; 2. some results of the evaluation using the test harness
|GSoC=yes
}}


== List of projects that need more work ==
{{Anchor|Unreviewed Project Ideas}}

=== Domain support (PVOPS and Linux) ===

{{project
|Project=Implement Xen PVSCSI support in xl/libxl toolstack
|Date=01/12/2012
|Contact=Pasi Karkkainen <pasik@iki.fi>
|GSoC=Yes
|Desc=
xl/libxl does not currently support Xen PVSCSI functionality. Port the feature from xm/xend to xl/libxl. Necessary operations include:
* Task 1: Implement PVSCSI in xl/libxl, making it functionally equivalent to xm/xend.
* Send to the xen-devel mailing list for review and comments.
* Fix any upcoming issues.
* Repeat until merged into xen-unstable.
* See the "Upstreaming Xen PVSCSI drivers" project below for the dom0/domU drivers.
* Xen PVSCSI supports both PV domUs and HVM guests with PV drivers.
* More info: http://wiki.xen.org/xenwiki/XenPVSCSI
{{Comment|[[User:Lars.kurth|Lars.kurth]] 14:14, 23 January 2013 (UTC):}} Should be suitable, but the description needs work. Rate in terms of challenges, size and skill. Also the kernel functionality is not yet upstreamed. Maybe use the SUSE kernel.
}}
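On the libxl side the work would follow the pattern of the existing device types (disk, nic): a new IDL type plus add/remove plumbing. Everything below is hypothetical naming, sketched to illustrate that pattern -- no such API exists yet, which is the point of the project.
<pre>
/* Hypothetical sketch, following the pattern of existing libxl devices
 * (libxl_device_disk / libxl_device_nic). None of these names exist yet. */
typedef struct {
    char *pdev;             /* physical SCSI device in the backend domain */
    int   vdev;             /* virtual host:channel:target:lun in the guest */
    uint32_t backend_domid; /* usually 0 (dom0) */
} libxl_device_vscsi;

/* Add/remove entry points mirroring libxl_device_disk_add() etc.:
 * write the backend/frontend entries to xenstore, then wait for the
 * frontend and backend drivers to connect. */
int libxl_device_vscsi_add(libxl_ctx *ctx, uint32_t domid,
                           libxl_device_vscsi *vscsi);
int libxl_device_vscsi_remove(libxl_ctx *ctx, uint32_t domid,
                              libxl_device_vscsi *vscsi);
</pre>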


{{project
|Project=Upstreaming Xen PVSCSI drivers to mainline Linux kernel
|Date=01/08/2012
|Difficulty=Hard
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|GSoC=No
|Desc=
The PVSCSI drivers have not been upstreamed yet. Necessary operations may include:
* Task 1: Upstream the PVSCSI scsifront frontend driver (for domU).
* Task 2: Upstream the PVSCSI scsiback backend driver (for dom0).
* Send to the various related upstream mailing lists for review and comments.
* Fix any upcoming issues.
* Repeat until merged into the upstream Linux kernel git tree.
* http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=shortlog;h=refs/heads/devel/xen-scsi.v1.0
* More info: http://wiki.xen.org/xenwiki/XenPVSCSI
}}

{{project
|Project=Upstreaming Xen PVUSB drivers to mainline Linux kernel
|Date=01/08/2012
|Difficulty=Hard
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|GSoC=No, unless Konrad believes these can be done.
|Desc=
The PVUSB drivers have not been upstreamed yet. Necessary operations may include:
* Upstream the PVUSB usbfront frontend driver (for domU).
* Upstream the PVUSB usbback backend driver (for dom0).
* Send to the various related upstream mailing lists for review and comments.
* Fix any upcoming issues.
* Repeat until merged into the upstream Linux kernel git tree.
* http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=shortlog;h=refs/heads/devel/xen-usb.v1.1
* More info: http://wiki.xen.org/xenwiki/XenUSBPassthrough
{{Comment|[[User:Lars.kurth|Lars.kurth]] 14:14, 23 January 2013 (UTC):}} Would also need more detail
}}

{{project
|Project=Block backend/frontend improvements
|Date=01/01/2013
|Difficulty=Medium
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|Desc=
Blkback requires a number of improvements, some of them being:
* Multiple disks in a guest cause contention in the global pool of pages.
* There is only one ring page, and with SSDs nowadays we should make this larger, implementing some multi-page support.
* With multi-page it becomes apparent that the segment size ends up wasting a bit of space on the ring. BSD folks fixed that by negotiating a new parameter to utilize the full size of the ring. Intel had an idea for a descriptor page.
* Add DIF/DIX support [http://oss.oracle.com/~mkp/docs/lpc08-data-integrity.pdf] for T10 PI (Protection Information), to support data integrity fields and checksums.
* Further performance evaluation needs to be done to see how it behaves under high load.
* Further discussion and issues are outlined in http://lists.xen.org/archives/html/xen-devel/2012-12/msg01346.html and https://docs.google.com/document/d/1Vh5T8Z3Tx3sUEhVB0DnNDKBNiqB_ZA8Z5YVqAsCIjuI
|GSoC=Yes, but we would have to chop them into nice chunks
}}

=== Xen Hypervisor ===

{{project
|Project=Introducing PowerClamp-like driver for Xen
|Date=01/22/2013
|Contact=George Dunlap <george.dunlap@eu.citrix.com>
|Desc=
PowerClamp was introduced to Linux in late 2012 in order to allow users to set a system-wide maximum power usage limit. This is particularly useful for data centers, where there may be a need to reduce power consumption based on availability of electricity or cooling. A [http://lwn.net/Articles/528124/ more complete writeup] is available at LWN.

These same arguments apply to Xen. The purpose of this project would be to implement similar functionality in Xen, and to make it interface as well as possible with the Linux PowerClamp tools, so that the same tools could be used for both. [[GSoC_2013#powerclamp-for-xen]]
|GSoC=Yes
}}
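The core mechanism PowerClamp uses -- and what a Xen equivalent would need to reproduce -- is synchronized forced-idle injection. A sketch of the duty-cycle arithmetic (illustrative types and numbers only):
<pre>
/* Illustrative duty-cycle arithmetic for forced-idle injection: to clamp
 * the machine to target_pct of full power-burn, each control period
 * mixes busy time with idle time injected on every CPU simultaneously. */
struct clamp_params {
    unsigned int period_ms;   /* e.g. a 48 ms control period */
    unsigned int target_pct;  /* allowed busy percentage, 0..100 */
};

static unsigned int idle_ms_per_period(const struct clamp_params *p)
{
    return p->period_ms * (100 - p->target_pct) / 100;
}

/* In Xen the injection would likely live in the scheduler: during the
 * idle window, every pCPU runs the idle domain regardless of runnable
 * vCPUs, mirroring what the Linux PowerClamp kthreads do natively. */
</pre>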


=== Xen Hypervisor Userspace Tools ===

{{project
|Project=Refactor Linux hotplug scripts
|Date=15/11/2012
|Contact=Roger Pau Monné <[mailto:roger.pau@citrix.com roger.pau@citrix.com]>
|Desc=
Current Linux hotplug scripts are all entangled, which makes them really difficult to understand or modify. The point of hotplug scripts is to give end-users the chance to "easily" support different configurations for Xen devices.

Linux hotplug scripts should be analysed, providing a good description of what each hotplug script is doing. After this, the scripts should be cleaned up, putting common pieces of code in shared files across all scripts. A coding style should be applied to all of them when the refactoring is finished.

[[GSoC_2013#linux-hotplug-scripts]]
|GSoC=Yes
}}


{{project
|Project=XL to XCP VM motion
|Date=15/11/12
|Contact=[mailto:ian.campbell@citrix.com Ian Campbell]
|Desc=Currently [[XL|xl]] (the toolstack supplied alongside Xen) and [[XAPI|xapi]] (the XCP toolstack) have very different concepts about domain configuration, disk image storage, etc. In the XCP model domain configuration is persistent and stored in a database, while under xl domain configuration is written in configuration files. Likewise disk images are stored as VDIs in Storage Repositories, while under xl disk images are simply files or devices in the dom0 filesystem. For more information on xl see [[XL]]. For more information on XCP see [[XCP Overview]].

This project is to produce one or more command-line tools which support migrating VMs between these toolstacks.

One tool should be provided which takes an xl configuration file and details of an XCP pool. Using the XenAPI XML/RPC interface it should create a VM in the pool with a close approximation of the same configuration and stream the configured disk image into a selected Storage Repository.

A second tool should be provided which performs the opposite operation, i.e. given a reference to a VM residing in an XCP pool it should produce an xl compatible configuration file and stream the disk image(s) out of Xapi into a suitable format.

These tools could be reasonably bundled as part of either toolstack and by implication could be written in either C, Ocaml or some other suitable language.

The tool need not operate on a live VM but that could be considered a stretch goal.

An acceptable alternative to the proposed implementation would be to implement a tool which converts between a commonly used VM container format which is supported by XCP (perhaps [http://en.wikipedia.org/wiki/Open_Virtualization_Format OVF] or similar) and the xl toolstack configuration file and disk image formats.

[[GSoC_2013#xl-to-xcp-vm-motion]]
|GSoC=Yes
}}

{{project
|Project=VM Snapshots
|Date=16/01/2013
|Contact=<[mailto:stefano.stabellini@eu.citrix.com Stefano Stabellini]>
|Desc=Although xl is capable of saving and restoring a running VM, it is not currently possible to create a snapshot of the disk together with the rest of the VM.

QEMU is capable of creating, listing and deleting disk snapshots on QCOW2 and QED files, so even today, issuing the right commands via the QEMU monitor, it is possible to create disk snapshots of a running Xen VM. xl and libxl don't have any knowledge of these snapshots and don't know how to create, list or delete them.

This project is about implementing disk snapshot support in libxl, using the QMP protocol to issue commands to QEMU. Users should be able to manage the entire life-cycle of their disk snapshots via xl. The candidate should also explore ways to integrate disk snapshots into the regular Xen save/restore mechanisms and provide a solid implementation for xl/libxl.

[[GSoC_2013#vm-snapshots]]
|GSoC=Yes
}}

{{project
|Project=Advanced Scheduling Parameters
|Date=01/22/2013
|Contact=George Dunlap <george.dunlap@eu.citrix.com>
|Desc=
The credit scheduler provides a range of "knobs" to control guest behavior, including CPU weight and caps. However, a number of users have requested the ability to encode more advanced scheduling logic. For instance: "Let this VM max out for 5 minutes out of any given hour; but after that, impose a cap of 20%, so that even if the system is idle it can't use an unlimited amount of CPU power without paying for a higher level of service."

This is too coarse-grained to do inside the hypervisor; a user-space tool would be sufficient. The goal of this project would be to come up with a good way for admins to support these kinds of complex policies in a simple and robust way.
|GSoC=Yes
}}
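Since the policy engine can live entirely in userspace, it can be prototyped as a daemon that adjusts the cap through the existing xl interface. xl sched-credit with -c is the real command; the policy numbers and the domain name below are just the example from the description.
<pre>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Toy policy daemon: let the domain run uncapped for 5 minutes per
 * hour, then clamp it to a 20% cap by shelling out to the real xl tool. */
static void set_cap(const char *dom, int cap)
{
    char cmd[128];
    snprintf(cmd, sizeof(cmd), "xl sched-credit -d %s -c %d", dom, cap);
    if (system(cmd) != 0)
        fprintf(stderr, "failed: %s\n", cmd);
}

int main(void)
{
    const char *dom = "guest1";   /* illustrative domain name */

    for (;;) {
        set_cap(dom, 0);          /* 0 = uncapped */
        sleep(5 * 60);            /* burst window */
        set_cap(dom, 20);         /* clamp to 20% of one CPU */
        sleep(55 * 60);           /* rest of the hour */
    }
}
</pre>
A production version would obviously watch actual consumption (e.g. via xentop/libxenstat) rather than a fixed timetable, but the point is that no hypervisor change is needed to iterate on policy.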



=== Mirage OS ===

{{project
|Project=From simulation to emulation to production: self-scaling apps
|Date=28/11/2012
|Contact=Anil Madhavapeddy <anil@recoil.org>
|Difficulty=hard
|Skills=OCaml
|Desc=
MirageOS (http://openmirage.org) is a type-safe exokernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening guest kernel. An interesting consequence of programming Mirage applications in a functional language is that the device drivers can be substituted with emulated equivalents. Therefore, it should be possible to test an application under extreme load conditions as a simulation, and then recompile the *same* code into production. The simulation can inject faults and test data structures under distributed conditions, but using a fraction of the resources required for a real deployment.

The first task is to familiarise yourself with a typical Mirage application; I suggest a webserver (see https://github.com/mirage/mirage-www). The second task is to replace the ethernet driver with a synthetic equivalent, so we can feed it simulated traffic. Third, we should inject simulated web traffic (recorded from a real session) and attempt to determine how the application response time varies with load (number of connections; incoming packet rate).

This project will require a solid grasp of distributed protocols, and functional programming. Okasaki's book will be a useful resource...
|Outcomes=1. a repo/branch with a fake ethernet device and a traffic simulator; 2. an interesting performance graph
|GSoC=no, too much work
}}

{{project
|Project=Towards a multi-language unikernel substrate for Xen
|Date=28/11/2012
|Contact=Anil Madhavapeddy <anil@recoil.org>
|Difficulty=hard
|Skills=OCaml, Haskell, Java
|Desc=
There are several languages available that compile directly to Xen microkernels, instead of running under an intervening guest OS. We're dubbing such specialised binaries "unikernels". Examples include:
* OCaml: Mirage http://openmirage.org
* Haskell: HalVM https://github.com/GaloisInc/HaLVM#readme
* Erlang: ErlangOnXen http://erlangonxen.org
* Java: GuestVM http://labs.oracle.com/projects/guestvm/, OSv https://github.com/cloudius-systems/osv

Each of these is in a different state of reliability and usability. We would like to survey all of them, build some common representative benchmarks to evaluate them, and build a common toolchain based on XCP that will make it easier to share code across such efforts. This project will require a reasonable grasp of several programming languages and runtimes, and should be an excellent project to learn more about the innards of popular languages.

[[GSoC_2013#unikernel-substrate]]
|Outcomes=1. a repo containing a common library of low-level functions; 2. a proof of concept port of at least 2 systems to this new library
|GSoC=no, too difficult
}}

=== PCI Pass-through improvements ===

{{project
|Project=Allowing guests to boot with a passed-through GPU as the primary display
|Date=01/22/2013
|Contact=George Dunlap <george.dunlap@eu.citrix.com>
|Desc=
One of the primary drivers of Xen in the "consumer market" of the open-source world is the ability to pass through GPUs to guests -- allowing people to run Linux as their main desktop but easily play games requiring proprietary operating systems without rebooting.

GPUs can be easily passed through to guests as secondary displays, but as of yet cannot be passed through as primary displays. The main reason is the lack of ability to load the VGA BIOS from the card into the guest.

The purpose of this project would be to allow HVM guests to load the physical card's VGA BIOS, so that the guest can boot with it as the primary display.

[[GSoC_2013#gpu-passthrough]]
|GSoC=Yes
}}
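On current kernels the card's expansion ROM (which contains the VGA BIOS) is already exposed through sysfs, which suggests where the toolstack could obtain the blob to hand to hvmloader. Reading it looks like this runnable sketch; the PCI address is an example.
<pre>
#include <stdio.h>

/* Read a GPU's expansion ROM via sysfs. The kernel requires writing "1"
 * to the rom file to enable reading; the BDF below is an example. */
int main(void)
{
    const char *rom = "/sys/bus/pci/devices/0000:01:00.0/rom";
    FILE *f = fopen(rom, "r+");
    char buf[4096];
    size_t n, total = 0;

    if (!f) { perror(rom); return 1; }
    fputs("1", f);        /* enable the ROM */
    fflush(f);
    rewind(f);
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        total += n;       /* a real tool would save this blob for the guest */
    fputs("0", f);        /* disable it again */
    fclose(f);
    printf("read %zu bytes of option ROM\n", total);
    return 0;
}
</pre>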

{{project
|Project=Improve PCIe Advanced Error Reporting (AER) handling for passed-through devices
|Date=03/04/2014
|Difficulty=Medium-High
|Skills=Understanding of PC server hardware, PCIe, C
|Contact=Matt Wilson <msw@amazon.com>
|Outcomes=Patches for libxl, qemu, and perhaps xen-pciback posted
|Desc=Today the xen-pciback driver handles an AER event for passed-through PCI devices. If the device is assigned to a PV guest, it uses xenstore to request a reset from xen-pcifront. If the device is assigned to a HVM guest, the toolstack is notified and is expected to take corrective action.

The toolstack support for taking corrective action is only implemented in [http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=tools/python/xen/xend/server/pciif.py;h=27c1b75cfcf2695740e57bde557d2ee04c4d7322;hb=HEAD#l459 xend], not libxl. For HVM guests, ideally the AER event would be propagated into the guest through the device model (qemu) so that the driver inside the guest can take reset actions.

|GSoC=Yes
}}

=== XAPI ===
There are separate wiki pages about XCP and XAPI related projects. Make sure you check these out as well!
* XCP and XAPI development projects: [[XAPI project suggestions]]
* XCP short-term roadmap: [[XCP short term roadmap]]
* XCP monthly developer meetings: [[XCP Monthly Meetings]]
* XAPI developer guide: [[XAPI Developer Guide]]


== New Project Ideas ==
{{Anchor|New_Project_Ideas}}
'''Please add new project ideas here, following the conventions described in the next section.'''

== Conventions for Projects and Project Mentors ==
=== Rules and Advice for Adding Ideas ===
* Be creative
* Add projects into [[#New_Project_Ideas|New Project Ideas]] or improve projects in [[#Unreviewed Project Ideas|Project Ideas that Need Review or more work]] through review comments.
* Use the {{tl|GSoC Project}} template to encode ideas on this page. Please read the [[Template:GSoC Project|Template Documentation]] before you do so.
* Be specific: what do you want to be implemented; if at all possible provide an indication of size and complexity as described above to make it easier for a student to choose ideas
* Check that the project meets the [[#Goals|GSoC Program Goals]]
* If you are willing to mentor those ideas, add your name and email to the idea.
* Aspiring mentors should introduce themselves on the most appropriate Xen Project mailing list

=== Peer Review Goals ===
We strongly recommend and invite project proposers and project mentors to review each other's proposals. When you review, please look out for the following:
* Can a student get started with the information in the project description?
* Are there unstated assumptions, undefined terminology, etc. in the proposal?
* Can the project be completed in 3 months (assume that one month is needed for preparation)?
* {{Anchor|Goals}}Does the project meet Google Summer of Code goals, which are
** Create and release open source code for the benefit of all
** Inspire young developers to begin participating in open source development
** Help open source projects identify and bring in new developers and committers
** Provide students the opportunity to do work related to their academic pursuits (think "flip bits, not burgers")
** Give students more exposure to real-world software development scenarios (e.g., distributed development, software licensing questions, mailing-list etiquette)

=== Peer Review Conventions ===
The {{tl|GSoC Project}} template, used to encode GSoC projects, contains some review functionality. Please read the [[Template:GSoC Project|Template Documentation]] before you add a template; also please use the conventions below to make comments.


<pre>
|Review=(delete as addressed)
* {{Comment|~~~~:}} Comment 1
* {{Comment|~~~~:}} Comment 2
</pre>


=== Choosing Projects ===
We have a bi-weekly mentor meeting overseen by our program management team, which is a core team of 2-3 mentors and a program administrator. This group will work with mentors to ensure that project proposals are of good quality and that mentors are engaging with the program management team and students in the weeks before the application period ends.


[[Category:GSoC]]
[[Category:GSoC 2014]]
[[Category:Developers]]
[[Category:Index]]
[[Category:Project]]
[[Category:Archived]]
[[Category:Internships]]
[[Category:Transient]] <!-- as if not maintained it becomes stale -->

Latest revision as of 18:07, 2 February 2017

The application deadline for GSoC has closed for 2014.



It is a good idea to ...
The Xen Project has also participated in the Gnome Outreach Program for Women (OPW) in the past. One of the things we learned by participating in OPW is that you will be more successful, happier, and get more out of participating in student programs such as GSoC if you do a bit of prep-work before writing an application. Here are some things you can do:

  • Contact your mentor early and get to know him or her
  • If the Xen Project is accepted into GSoC, start hanging out on our IRC channel. You can use the #xen-opw IRC channel on freenode.net for now (if accepted, we will create a GSoC channel)
  • You may want to ask the mentor for a couple of small bite-size work items (such as reviewing someone's patch, a bite-size bug, ...) and start communicating on the relevant mailing list. That helps you become familiar with our development process, the mentor and other community members, and will help you choose the right project and decide whether the Xen Project is for you.
  • Note that quite a few Xen maintainers used to be GSoC students once. Feel free to ask community dot manager at xenproject dot org to put you in touch with them if you have questions about their experience.
  • Any work you submit before applying for a project should be based on the xen-unstable development tree if the project is Xen hypervisor and/or tools related. Linux kernel related patches should be based on the upstream kernel.org Linux git tree (latest version). XAPI and Mirage OS patches should be based on the right codeline too. Check out the navigation by audience section on the left to find resources.

More resources

Quick links to changelogs of the various Xen related repositories/trees: please see the XenRepositories wiki page!

Before submitting patches, please look at the Submitting Xen Patches wiki page and the relevant Xen Project team page; these contain more information.

If you have new ideas, suggestions or development plans let us know and we'll update this list!

Aspiring Students

  • Please contact the mentor and CC the most appropriate mailing list
  • Get a bite-size task from the mentor before the application starts
  • If you feel comfortable with an idea, please put your name to an idea using the following format
{{project
...
|Review=(delete as addressed)
* {{Comment|~~~~:}} I am interested in this idea ... 
                    (note that you may also want to link to the e-mail thread with the mentor)
  • You will need to request write access to the wiki by filling out this form

Applying for GSoC

Note that we will update this section when more student information on melange is available, to make it easier for you to find information (and of course assuming that the Xen Project is accepted into GSoC).


To apply for a project, follow the steps outlined on

GSoC Projects that were accepted in 2014

Implement Xen PVUSB support in xl/libxl toolstack

Date of insert: 01/12/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Mentor: George Dunlap, Student: Bo Cao
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: xl/libxl does not currently support Xen PVUSB functionality. Port the feature from xm/xend to xl/libxl. Necessary operations include:
  • Task 1: Implement PVUSB in xl/libxl, make it functionally equivalent to xm/xend.
  • Send to the xen-devel mailing list for review and comments.
  • Fix any upcoming issues.
  • Repeat until merged to xen-unstable.
  • See above for PVUSB drivers for dom0/domU.
  • Xen PVUSB supports both PV domUs and HVM guests with PV drivers.
  • More info: http://wiki.xen.org/xenwiki/XenUSBPassthrough
Comment from Lars.kurth 14:14, 23 January 2013 (UTC): Should be suitable, but the description needs work; rate it in terms of challenges, size and skill. Also, the kernel functionality is not yet upstreamed; maybe target the SUSE kernel.
Outcomes: Not specified, project outcomes
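
To give a flavour of the plumbing involved, the port would add a PVUSB device type and add/remove entry points to libxl, mirroring the existing libxl device functions. Everything in the sketch below is hypothetical; none of these names existed in libxl at the time of writing:

/* Hypothetical libxl entry point for PVUSB hot-attach, modelled on
 * existing functions such as libxl_device_disk_add(). All names invented. */
typedef struct {
    int hostbus;   /* host USB bus holding the physical device */
    int hostaddr;  /* device address on that bus               */
} libxl_device_usbdev;

int libxl_device_usbdev_add(libxl_ctx *ctx, uint32_t domid,
                            libxl_device_usbdev *usbdev)
{
    /* 1. write frontend/backend entries to xenstore;
     * 2. wait for the backend to reach the Connected state;
     * 3. record the device in the domain's stored configuration. */
    return 0;  /* sketch only */
}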


Lazy restore using memory paging

Date of insert: 01/20/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Mentor: Andres Lagar-Cavilla, Student: Dushyant Behl
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Medium
Skills Needed: A good understanding of save/restore, and virtualized memory management (e.g. EPT, shadow page tables, etc). In principle the entire project can be implemented in user-space C code, but it may be the case that new hypercalls are needed for performance reasons.
Description: VM save/restore results in a large amount of I/O and non-trivial downtime, as the entire memory footprint of a VM is read from storage.

Xen memory paging support in x86 is now mature enough to allow for lazy restore, whereby the footprint of a VM is backfilled while the VM executes. If the VM hits a page not yet present, it is eagerly paged in.

There has been some concern recently about the lack of docs and/or mature tools that use xen-paging. This is a good way to address the problem.
Outcomes: Expected outcome:
  • Mainline patches for libxc and libxl
  • Comment from dushyant: Hi, I am working on this project.
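
Conceptually, the restore side would mark every guest frame as paged out and then service faults from the save file while the guest runs. The loop below is a rough user-space sketch; every helper name is a hypothetical stand-in for the real xenpaging/mem_event plumbing:

/* Sketch of a lazy-restore pager loop (all helpers hypothetical). */
for (;;) {
    unsigned long gfn = wait_for_pagein_request();   /* block on event ring  */
    read_page_from_savefile(savefile, gfn, buf);     /* seek + read one page */
    populate_guest_page(domid, gfn, buf);            /* map, copy, resume    */
    if (all_pages_resident())
        break;   /* a background thread could also prefetch the remainder */
}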


HVM per-event-channel interrupts

Date of insert: 01/30/2013; Verified: Not updated in 2020; GSoC: yes
Technical contact: Mentor: Paul Durrant, Student: Yandong Han
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: C, some prior knowledge of Xen useful
Description: Windows PV drivers currently have to multiplex all event channel processing onto a single interrupt, which is registered with Xen using the HVM_PARAM_CALLBACK_IRQ parameter. This results in a lack of scalability when multiple event channels are heavily used, such as when multiple VIFs in the VM are simultaneously under load. Goal: Modify Xen to allow each event channel to be bound to a separate interrupt (the association being controlled by the PV drivers in the guest) to allow separate event channel interrupts to be handled by separate vCPUs. There should be no modifications required to the guest OS interrupt logic to support this (as there is with the current Linux PV-on-HVM code), as this will not be possible with a Windows guest.
Outcomes: Code is submitted to xen-devel@xen.org for inclusion in xen-unstable
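
One possible guest-visible interface, sketched purely to illustrate the goal (the HVMOP name and structure below are invented, loosely following the existing per-vCPU HVMOP_set_evtchn_upcall_vector):

/* Hypothetical hypercall: bind one event channel to one vector so each
 * channel's interrupt can be steered to its own vCPU. */
struct xen_hvm_evtchn_bind_vector {
    uint16_t domid;    /* DOMID_SELF                       */
    uint32_t port;     /* event channel to bind            */
    uint32_t vcpu;     /* vCPU to deliver the interrupt on */
    uint8_t  vector;   /* guest interrupt vector to raise  */
};
/* rc = HYPERVISOR_hvm_op(HVMOP_evtchn_bind_vector, &bind);   (invented) */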


Mirage OS cloud API support

Date of insert: 28/11/2013; Verified: Not updated in 2020; GSoC: yes
Technical contact: Mentor: Dave Scott; Student: Jyotsna Prakash
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: medium
Skills Needed: OCaml
Description: MirageOS (see http://xenproject.org/developers/teams/mirage-os.html, http://www.openmirage.org/) is a type-safe unikernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening kernel. A MirageOS application typically runs via several communicating kernel instances on the cloud. Today these instances are difficult to manage; we would like to explore strategies for managing these distributed computations using common public cloud APIs such as those exposed by Amazon EC2 and Rackspace.

First we need to create pure OCaml API bindings for (e.g.) EC2 and Rackspace (purity is needed to ensure portability). These API bindings can then be used to provide operating-system-level abstractions to the unikernels. For example, a traditional VM might hotplug a vCPU; while a MirageOS application would request a "VM create" using the cloud API and "connect" the new instance to the existing network. We should be able to spin up 1000s of "CPUs" by using such APIs in a cluster environment.

As well as helping Xen/Mirage, the public cloud API bindings will be very useful to other people in other contexts, which is a nice side-effect.

See https://fedoraproject.org/wiki/User:Gholms/EC2_Primer for a primer on how to use EC2
Outcomes: 1. one or more public cloud API bindings plus examples, in a standalone repo on github; 2. an example mirage app which uses these APIs to spin up a new VM


Parallel xenwatch kthread

Date of insert: 01/08/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Mentor: Boris Ostrovsky, Student: Tülin İZER
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Low-Medium
Skills Needed: You need to have an understanding of:
  • locks - spinlocks and mutexes
  • how to build the Linux kernel
Description: Xenwatch is protected by a single coarse-grained lock. For a huge number of guests this represents a scalability issue. The xenwatch locking needs to be rewritten to support full scalability.

See https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/xen/xenbus/xenbus_xs.c#n768 for the code.
Outcomes: Expected outcome:
  • Upstream patches, or a draft of them.
  • A benchmark report comparing performance with and without the change.
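
One plausible direction, sketched here with the standard kernel workqueue API (the surrounding integration is hypothetical): queue each fired watch as its own work item on an unbound workqueue instead of funnelling everything through the single xenwatch thread.

/* Sketch: per-event work items instead of one serialized kthread. */
struct xenwatch_work {
    struct work_struct work;
    struct xs_stored_msg *msg;   /* existing message type in xenbus_xs.c */
};

static struct workqueue_struct *xenwatch_wq;   /* created at init with
                                                  alloc_workqueue("xenwatch",
                                                  WQ_UNBOUND, 0) */
static void xenwatch_work_fn(struct work_struct *w)
{
    struct xenwatch_work *xw = container_of(w, struct xenwatch_work, work);
    /* look up the watch for xw->msg and invoke its callback; preserving
     * per-watch ordering is the hard part of the project */
}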

List of peer reviewed Projects

Domain support (PVOPS and Linux)

OVMF Compatibility Support Module support in Xen

Date of insert: 2/5/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Wei Liu <wei.liu2@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Medium
Skills Needed: You need to have understanding of:
  • Firmware internal
  • Some C programming skills
Description: OVMF is a project to enable UEFI support for virtual machines. http://sourceforge.net/apps/mediawiki/tianocore/index.php?title=OVMF

SeaBIOS is a legacy BIOS implementation used by Xen to boot HVM guests. http://www.coreboot.org/SeaBIOS

Currently Xen supports booting HVM guests with either SeaBIOS or OVMF UEFI firmware, but these are separate binaries. OVMF supports embedding a legacy BIOS blob in its binary via its Compatibility Support Module (CSM) support. We can try to produce a single OVMF binary with SeaBIOS embedded, so that only one firmware binary is needed.

Tasks may include:

  • understand the boot process of HVM guests
  • figure out how CSM works
  • design / implement interface between Hvmloader and the unified binary
Outcomes: Produce a single firmware binary that can be used for legacy boot HVM guest and UEFI HVM guest
  • Comment from sdytlm: Hi, I am interested in working on this project.


Utilize Intel QuickData on network and block path.

Date of insert: 01/22/2013; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: High
Skills Needed: The basic requirement for this project is Linux kernel programming skill. The candidate for this project should be familiar with open source development workflow as it may require collaboration with several parties.
Description: Intel QuickData, also known as Direct Cache Access (DCA) or I/OAT, is a chipset feature of the PCIe subsystem on Intel systems. It allows the PCIe subsystem to tag which PCIe writes to memory should reside in the Last Level Cache (LLC, also known as L3, which in some cases can be 15MB, or 2.5MB per CPU). This offers a considerable speed boost: the DIMMs are bypassed and the CPU can process the data directly from the cache.

Adding this capability to the network or block backends could mean that we keep the data in the cache longer, and the guest can process the data right out of the cache.

See these for references: http://www.intel.com/content/www/us/en/wireless-network/accel-technology.html http://www.intel.com/content/www/us/en/chipsets/quickdata-technology-software-guide-for-linux-paper.html

Also the dmaengine@vger.kernel.org is an excellent mailing list to subscribe to.
Outcomes: Expected outcome:
  • Investigate whether DCA (aka QuickData, aka I/OAT) works with Xen.
  • If the above is true: upstream patches (or draft patches)
  • and a benchmark report comparing performance with and without.
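
For orientation, I/OAT engines are driven through the kernel's dmaengine API. A backend could offload a copy roughly as below; these are standard dmaengine calls, but where to hook them into netback/blkback is exactly the open question of the project:

/* Sketch: offload one memcpy to a DMA engine channel. */
dma_cap_mask_t mask;
struct dma_chan *chan;
struct dma_async_tx_descriptor *tx;
dma_cookie_t cookie;

dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
chan = dma_request_channel(mask, NULL, NULL);    /* e.g. an ioatdma channel */
tx = chan->device->device_prep_dma_memcpy(chan, dst_dma, src_dma, len, 0);
cookie = dmaengine_submit(tx);
dma_async_issue_pending(chan);
/* later: poll dma_async_is_tx_complete(chan, cookie, NULL, NULL) */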


Xen block backend/frontend multiqueue support

Date of insert: 03/09/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: High
Skills Needed: You need to have understanding of:
  • Knowledge of Linux kernel
  • How I/O works
  • C language
Description: The Linux kernel (and FreeBSD, Windows, etc.) has ParaVirtualized (PV) drivers to perform I/O instead of using the emulated devices that appear in QEMU (IDE, SCSI, etc.). This is done because emulation of the IDE devices is quite slow - and if you dig into how it is actually done, it is full of bit-banging registers. The PV drivers are the answer to this and eliminate the need for emulation. The mechanism by which they work is nicely drawn out in http://wiki.xen.org/wiki/PV_Protocol and http://www.informit.com/articles/article.aspx?p=1160234&seqNum=3 ("The Definitive Guide to the Xen Hypervisor")

There have been improvements - see http://wiki.xen.org/wiki/Xen_4.3_Block_Protocol_Scalability and http://blog.xen.org/index.php/2013/08/07/indirect-descriptors-for-xen-pv-disks/

However, there is still room for improvement. We can utilize the new block multiqueue API in Linux (see https://lwn.net/Articles/552904/ and http://kernel.dk/systor13-final18.pdf) to allocate one block thread per CPU (each handling I/O transmission).

That should provide greater throughput and lower latency for I/O workloads.

Also see https://docs.google.com/document/d/1Vh5T8Z3Tx3sUEhVB0DnNDKBNiqB_ZA8Z5YVqAsCIjuI which has some of the explanation.
Outcomes: Expected outcome:
  • Patches for the Linux Kernel Mailing list (LKML).
  • Benchmark reports.
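
The conversion centres on registering a blk_mq_ops with a queue_rq callback and sizing nr_hw_queues to the number of rings. A condensed sketch against the blk-mq API of that era follows; exact signatures have shifted between kernel versions, so treat this as illustrative:

/* Sketch: register xen-blkfront with the multiqueue block layer. */
static int blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
                          const struct blk_mq_queue_data *qd)
{
    /* translate qd->rq into ring requests on this hctx's ring */
    return BLK_MQ_RQ_QUEUE_OK;
}

static struct blk_mq_ops blkfront_mq_ops = {
    .queue_rq = blkif_queue_rq,
};

info->tag_set.ops = &blkfront_mq_ops;
info->tag_set.nr_hw_queues = nr_rings;   /* e.g. one ring per vCPU */
info->tag_set.queue_depth = BLK_RING_SIZE;
blk_mq_alloc_tag_set(&info->tag_set);
gd->queue = blk_mq_init_queue(&info->tag_set);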


Enabling the 9P File System transport as a paravirt device

Date of insert: 01/20/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: High
Skills Needed: Required skills include knowledge of kernel hacking, file system internals. Desired skills include: understanding of Xen PV driver structure, and VirtIO.
Description: VirtIO provides a 9P FS transport, which is essentially a paravirt file system device. VMs can mount arbitrary file system hierarchies exposed by the backend. The 9P FS specification has been around for a while, while the VirtIO transport is relatively new. The project would consist of implementing a classic Xen front/back pv driver pair to provide a transport for the 9P FS Protocol.
Outcomes: Expected outcome:
  • LKML patches for front and back end drivers.
  • In particular, a domain should be able to boot from the 9P FS.
Peer Review Comments
(delete as addressed)
  • Comment from Lars.kurth 15:24, 17 February 2014 (UTC): This project would benefit from links to the VirtIO specs and documents explaining how the PV protocol works.
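
A classic PV pair would start from the standard shared-ring macros in xen/interface/io/ring.h. The request/response layout below is invented for illustration; the real protocol would need to carry variable-length 9P messages, probably via grant-mapped buffers:

/* Sketch: a 9P transport ring using Xen's generic ring macros. */
struct xen_9pfs_request {
    uint64_t id;        /* request identifier, echoed in the response */
    grant_ref_t gref;   /* grant of the page holding the 9P message   */
    uint32_t len;       /* length of the serialized 9P T-message      */
};
struct xen_9pfs_response {
    uint64_t id;
    int32_t  status;    /* length of the R-message, or -errno         */
};
DEFINE_RING_TYPES(xen_9pfs, struct xen_9pfs_request,
                  struct xen_9pfs_response);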

Xen Hypervisor Userspace Tools

CPU/RAM/PCI diagram tool

Date of insert: 01/30/2014; Verified: Not updated in 2020; GSoC: yes
Technical contact: Andrew Cooper <andrew.cooper3@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Moderate, to Extremely Difficult (depending on which area of the problem you choose to tackle)
Skills Needed: Understanding of PC server hardware, Understanding of ACPI/SMBios tables, Linux scripting or kernel hacking (depending on which area of the problem you choose to tackle)
Description: It is often useful in debugging kernel, hypervisor or performance problems to understand the bus topology of a server. This project will create a layout diagram for a server automatically using data from ACPI Tables, SMBios Tables, lspci output etc. This tool would be useful in general Linux environments including Xen and KVM based virtualisation systems. There are many avenues for extension such as labelling relevant hardware errata, performing bus throughput calculations etc.
Outcomes: A tool is created that can either run on a live Linux system or offline using captured data to produce a graphical representation of the hardware topology of the system including bus topology, hardware device locations, memory bank locations, etc. The tool would be submitted to a suitable open-source project such as the Xen hypervisor project or XCP.
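
As a starting point, much of the needed topology is already exposed under sysfs. The sketch below (plain C, no special privileges needed for the listing itself) emits a Graphviz fragment grouping PCI functions by bus; a real tool would enrich this with ACPI/SMBios data:

/* Sketch: dump a bus -> device graph from /sys/bus/pci/devices. */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    struct dirent *e;

    printf("digraph pci {\n");
    while (d && (e = readdir(d)) != NULL) {
        unsigned dom, bus, dev, fn;
        /* entries look like 0000:03:00.1 (domain:bus:device.function) */
        if (sscanf(e->d_name, "%x:%x:%x.%x", &dom, &bus, &dev, &fn) == 4)
            printf("  \"bus_%02x\" -> \"%s\";\n", bus, e->d_name);
    }
    printf("}\n");
    return 0;
}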


KDD (Windows Debugger Stub) enhancements

Date of insert: 01/30/2014; Verified: Not updated in 2020; GSoC: yes
Technical contact: Paul Durrant <paul.durrant@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Medium
Skills Needed: C, Kernel Debuggers, Xen, Windows
Description: kdd is a Windows Debugger Stub for the Xen hypervisor. It is open-source software, found at http://xenbits.xen.org/gitweb/?p=xen.git;a=tree;f=tools/debugger/kdd;h=fd82789a678fb8060cc74ebbe0a04dc58309d6d7;hb=refs/heads/master

kdd allows you to debug a running Windows virtual machine on Xen using standard Windows kernel debugging tools like WinDbg. kdd is an external debugger stub for the Windows kernel, which means Windows can be debugged without enabling the debugger stub inside the Windows kernel. This is important for debugging hard-to-reproduce problems on Windows virtual machines that may not have debugging enabled.

Expected Results:

  1. Add support for Windows 8 (x86, x64) to kdd
  2. Add support for Windows Server 2012 to kdd
  3. Enhance kdd to allow WinDbg to write out usable Windows memory dumps (via .dump debugger extension) for all supported versions
  4. Produce a user guide for kdd on Xen wiki page
Nice to have: Allow kdd to operate on a Windows domain checkpoint file (e.g. the output of xl save)
Outcomes: Code is submitted to xen-devel@xen.org for inclusion in the xen-unstable project.


CPUID Programming for Humans

Date of insert: 02/04/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Easy
Skills Needed: A good understanding of C user-land programming, and the ability to dive into qemu/libvirt (for reference code and integration), as well as libxc and libxl (for implementation).
Description: When creating a VM, a policy is applied to mask certain CPUID features. Right now it's black magic.

The KVM stack has done an excellent job of making this human-useable, and understandable.

For example, in a qemu-kvm command-line you may encounter:

-cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme

And in <qemu>/target-i386.c you find a fairly comprehensive description of x86 processor models, what CPUID features are inherent, and what CPUID feature each of these symbolic flags enables.

In the Xen world, there is a libxc interface to do the same, although it's all hex and register driven. It's effective, yet horrible to use.

An ideal outcome would have libxl config files and the command line absorb a similarly human-friendly description of the CPUID features a user wishes for the VM, and interface appropriately with libxl. Further, autodetection of the best CPUID should yield human-readable output, to make it easy to understand what the VM thinks about its processor.

Finally, interfacing with libvirt should be carefully considered.

CPUID management is crucial in a heterogeneous cluster where migrations and save restore require careful processor feature selection to avoid blow-ups.

See: http://wiki.qemu.org/images/c/c8/Cpu-models-and-libvirt-devconf-2014.pdf and https://www.berrange.com/posts/2010/02/15/guest-cpu-model-configuration-in-libvirt-with-qemukvm/

and http://blog.xen.org/index.php/2014/01/17/libvirt-support-for-xens-new-libxenlight-toolstack/
Outcomes: Expected outcome:
  • Mainline patches for libxl
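
One way to start is a small translator from qemu-style flag names to the leaf/register/bit form that libxc wants. The table below is illustrative only (three real CPUID.1:ECX bits, but nowhere near a complete feature list), and the surrounding tool is left out:

/* Sketch: map human-readable flag names to CPUID leaf/register/bit.
 * Table entries are illustrative, not a complete feature list. */
struct cpuid_flag {
    const char *name;
    uint32_t leaf;   /* CPUID input value (EAX) */
    int reg;         /* 0=eax 1=ebx 2=ecx 3=edx */
    int bit;
};

static const struct cpuid_flag flags[] = {
    { "vmx",  0x00000001, 2, 5  },
    { "smx",  0x00000001, 2, 6  },
    { "pcid", 0x00000001, 2, 17 },
    { NULL, 0, 0, 0 }
};
/* "+vmx" would then set bit 5 of leaf 1's ECX in the policy handed to
 * libxc, instead of the user writing raw hex register masks. */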

Mirage OS

Create a tiny VM for easy load testing

Date of insert: 01/30/2014; Verified: Not updated in 2020; GSoC: yes
Technical contact: Dave Scott <dave.scott@eu.citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Medium
Skills Needed: OCaml
Description: The Mirage OS framework (see http://xenproject.org/developers/teams/mirage-os.html, http://www.openmirage.org/) can be used to create tiny 'unikernels': entire software stacks which run directly on the Xen hypervisor. These VMs have such a small memory footprint (16 MiB or less) that many of them can be run even on relatively small hosts. The goal of this project is to create a specific unikernel that can be configured to generate a specific I/O pattern, and to create configurations that mimic the boot sequence of Linux and Windows guests. The resulting unikernel will then enable cheap system load testing.

The first task is to generate an I/O trace from a VM. For this we could use 'xen-disk', a userspace Mirage application which acts as a block backend for xen guests (see http://openmirage.org/wiki/xen-synthesize-virtual-disk). Following the wiki instructions we could modify a 'file' backend to log the request timestamps, offsets, buffer lengths.

The second task is to create a simple kernel based on one of the MirageOS examples (see http://github.com/mirage/mirage-skeleton). The 'block' example shows how reads and writes are done. The previously-generated log could be statically compiled into the kernel and executed to generate load.
Outcomes: 1. a repository containing a 'unikernel' (see http://github.com/mirage/mirage-skeleton) 2. at least 2 I/O traces, one for a Windows boot and one for a Linux boot (any version)


Fuzz testing Xen with Mirage

Date of insert: 28/11/2013; Verified: Not updated in 2020; GSoC: yes
Technical contact: Anil Madhavapeddy <anil@recoil.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: medium
Skills Needed: OCaml, Xen
Description: Mirage OS (see http://xenproject.org/developers/teams/mirage-os.html, http://www.openmirage.org/) is a type-safe unikernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening guest kernel. We would like to use the Mirage/Xen libraries to fuzz test all levels of a typical cloud toolstack. Mirage has low-level bindings for Xen hypercalls, mid-level bindings for domain management, and high-level bindings to XCP for cluster management. This project would build a QuickCheck-style fuzzing mechanism that would perform millions of random operations against a real cluster, and identify bugs with useful backtraces.

The first task would be to become familiar with a specification-based testing tool like Kaputt (see http://kaputt.x9c.fr/). The second task would be to choose an interface for testing; perhaps one of the hypercall ones.

GSoC_2013#fuzz-testing-mirage
Outcomes: 1. a repo containing a fuzz testing tool; 2. some unexpected behaviour with a backtrace (NB it's not required that we find a critical bug, we just need to show the approach works)


Mirage OS web stack testing

Date of insert: 25/02/2014; Verified: Not updated in 2020; GSoC: yes
Technical contact: Anil Madhavapeddy <anil@recoil.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: medium
Skills Needed: OCaml, shell scripting
Description: MirageOS has an emerging web toolstack that's broken up as a series of libraries -- for example, Cohttp, Uri, Cow, Ipaddr, RSS and Cowabloga. This project will get you familiar with them by building a protocol testing framework that can generate traffic using off-the-shelf tools such as httperf, and evaluate the results vs applications such as Apache or Nginx.
Outcomes: 1. a test harness for HTTP; 2. some results of the evaluation using the test harness

List of projects that need more work

Domain support (PVOPS and Linux)

Implement Xen PVSCSI support in xl/libxl toolstack

Date of insert: 01/12/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Pasi Karkkainen <pasik@iki.fi>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: xl/libxl does not currently support Xen PVSCSI functionality. Port the feature from xm/xend to xl/libxl. Necessary operations include:
  • Task 1: Implement PVSCSI in xl/libxl, make it functionally equivalent to xm/xend.
  • Send to the xen-devel mailing list for review and comments.
  • Fix any upcoming issues.
  • Repeat until merged to xen-unstable.
  • See above for PVSCSI drivers for dom0/domU.
  • Xen PVSCSI supports both PV domUs and HVM guests with PV drivers.
  • More info: http://wiki.xen.org/xenwiki/XenPVSCSI
Comment from Lars.kurth 14:14, 23 January 2013 (UTC): Should be suitable, but the description needs work; rate it in terms of challenges, size and skill. Also, the kernel functionality is not yet upstreamed; maybe target the SUSE kernel.
Outcomes: Not specified, project outcomes

Xen Hypervisor

Introducing PowerClamp-like driver for Xen

Date of insert: 01/22/2013; Verified: Not updated in 2020; GSoC: Yes
Technical contact: George Dunlap <george.dunlap@eu.citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: PowerClamp was introduced to Linux in late 2012 in order to allow users to set a system-wide maximum power usage limit. This is particularly useful for data centers, where there may be a need to reduce power consumption based on availability of electricity or cooling. A more complete writeup is available at LWN.

These same arguments apply to Xen. The purpose of this project would be to implement similar functionality in Xen, and to make it interface as well as possible with the Linux PowerClamp tools, so that the same tools could be used for both. GSoC_2013#powerclamp-for-xen
Outcomes: Not specified, project outcomes
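
The core mechanism would likely be forced idle injection: telling the scheduler to idle all pCPUs for some fraction of each period. Sketched below as a new sysctl purely to illustrate the shape; the name, structure and hypercall are all invented:

/* Hypothetical sysctl: ask Xen to force-idle all pCPUs for
 * idle_pct percent of every period_ms milliseconds. */
struct xen_sysctl_powerclamp {
    uint32_t period_ms;   /* e.g. 100 */
    uint32_t idle_pct;    /* 0 - 50   */
};
/* rc = do_sysctl(XEN_SYSCTL_powerclamp, &pc);   (invented name) */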

Xen Hypervisor Userspace Tools

Refactor Linux hotplug scripts

Date of insert: 15/11/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Roger Pau Monné <roger.pau@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: The current Linux hotplug scripts are all entangled, which makes them really difficult to understand or modify. The purpose of hotplug scripts is to give end users an "easy" way to support different configurations for Xen devices.

The Linux hotplug scripts should be analyzed, producing a good description of what each hotplug script does. After this, the scripts should be cleaned up, putting common pieces of code in files shared across all scripts. A consistent coding style should be applied to all of them when the refactoring is finished.

GSoC_2013#linux-hotplug-scripts
Outcomes: Not specified, project outcomes


XL to XCP VM motion

Date of insert: 15/11/12; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Ian Campbell
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Currently xl (the toolstack supplied alongside Xen) and xapi (the XCP toolstack) have very different concepts of domain configuration, disk image storage, etc. In the XCP model domain configuration is persistent and stored in a database, while under xl domain configuration is written in configuration files. Likewise, disk images are stored as VDIs in Storage Repositories, while under xl disk images are simply files or devices in the dom0 filesystem. For more information on xl see XL. For more information on XCP see XCP Overview.

This project is to produce one or more command-line tools which support migrating VMs between these toolstacks.

One tool should be provided which takes an xl configuration file and details of an XCP pool. Using the XenAPI XML/RPC interface, it should create a VM in the pool with a close approximation of the same configuration and stream the configured disk image into a selected Storage Repository.

A second tool should be provided which performs the opposite operation, i.e. given a reference to a VM residing in an XCP pool, it should produce an xl-compatible configuration file and stream the disk image(s) out of Xapi into a suitable format.

These tools could reasonably be bundled as part of either toolstack and by implication could be written in C, OCaml or some other suitable language.

The tool need not operate on a live VM but that could be considered a stretch goal.

An acceptable alternative to the proposed implementation would be to implement a tool which converts between a commonly used VM container format which is supported by XCP (perhaps OVF or similar) and the xl toolstack configuration file and disk image formats.

GSoC_2013#xl-to-xcp-vm-motion
Outcomes: Not specified, project outcomes


Advanced Scheduling Parameters

Date of insert: 01/22/2013; Verified: Not updated in 2020; GSoC: Yes
Technical contact: George Dunlap <george.dunlap@eu.citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: The credit scheduler provides a range of "knobs" to control guest behavior, including CPU weight and caps. However, a number of users have requested the ability to encode more advanced scheduling logic. For instance: "Let this VM max out for 5 minutes out of any given hour; but after that, impose a cap of 20%, so that even if the system is idle it cannot consume an unlimited amount of CPU power without paying for a higher level of service."

Logic like this operates on timescales far too coarse to implement inside the hypervisor; a user-space tool would be sufficient. The goal of this project is to come up with a good way for admins to express these kinds of complex policies in a simple and robust way.
Outcomes: Not specified, project outcomes


PCI Pass-through improvements

Allowing guests to boot with a passed-through GPU as the primary display

Date of insert: 01/22/2013; Verified: Not updated in 2020; GSoC: Yes
Technical contact: George Dunlap <george.dunlap@eu.citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: One of the primary drivers of Xen in the "consumer market" of the open-source world is the ability to pass through GPUs to guests -- allowing people to run Linux as their main desktop but easily play games requiring proprietary operating systems without rebooting.

GPUs can be easily passed through to guests as secondary displays, but as of yet cannot be passed through as primary displays. The main reason is the lack of ability to load the VGA BIOS from the card into the guest.

The purpose of this project would be to allow HVM guests to load the physical card's VGA BIOS, so that the guest can boot with it as the primary display.

GSoC_2013#gpu-passthrough
Outcomes: Not specified, project outcomes


Improve PCIe Advanced Error Reporting (AER) handling for passed-through devices

Date of insert: 03/04/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Matt Wilson <msw@amazon.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Medium-High
Skills Needed: Understanding of PC server hardware, PCIe, C
Description: Today the xen-pciback driver handles an AER event for passed-through PCI devices. If the device is assigned to a PV guest, it uses xenstore to request a reset from xen-pcifront. If the device is assigned to an HVM guest, the toolstack is notified and is expected to take corrective action. The toolstack support for taking corrective action is only implemented in xend, not libxl. Ideally, for HVM guests, the AER event would be propagated into the guest through the device model (qemu) so that the driver inside the guest can take reset actions.
Outcomes: Patches for libxl, qemu, and perhaps xen-pciback posted

XAPI

DRBD Integration

Date of insert: 07/01/2013; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: John Morris <john@zultron.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: DRBD is potentially a great addition to the other high-availability features in XenAPI. An architecture of as few as two Dom0s with DRBD-mirrored local storage is an inexpensive minimal HA configuration, enabling live migration of VMs between physical hosts and providing failover in case of disk failure, while eliminating the need for external storage. This setup can be used in small-shop or hobbyist environments, or could be used as a basic unit in a much larger scalable architecture.

Existing attempts at integrating DRBD sit below the SM layer and thus do not enable one VBD per DRBD device. They also suffer from a split-brain situation that could be avoided by controlling active/standby status from XenAPI.

DRBD should be implemented as a new SR type on top of LVM. The tools for managing DRBD devices need to be built into storage management, along with the logic for switching the active and standby nodes.
Outcomes: Not specified, project outcomes

New Project Ideas

Please add new project ideas here, following

Conventions for Projects and Project Mentors

Rules and Advice for Adding Ideas

  • Be creative
  • Add projects into New Project Ideas, or improve projects in "Project Ideas that Need Review or more work" through review comments.
  • Use the {{GSoC Project}} template to encode ideas on this page. Please read the Template Documentation before you do so.
  • Be specific: what do you want to be implemented; if at all possible provide an indication of size and complexity as described above to make it easier for a student to choose ideas
  • Check that the project meets the GSoC Program Goals
  • If you are willing to mentor those ideas, add your name and email to the idea.
  • Aspiring mentors should introduce themselves on the most appropriate Xen Project mailing list

Peer Review Goals

We strongly recommend and invite project proposers and project mentors to review each other's proposals. When you review, please look out for the following:

  • Can a student get started with the information in the project description?
  • Are there unstated assumptions, undefined terminology, etc. in the proposal?
  • Can the project be completed in 3 months (assume that one month is needed for preparation)?
  • Does the project meet Google Summer of Code goals, which are
    • Create and release open source code for the benefit of all
    • Inspire young developers to begin participating in open source development
    • Help open source projects identify and bring in new developers and committers
    • Provide students the opportunity to do work related to their academic pursuits (think "flip bits, not burgers")
    • Give students more exposure to real-world software development scenarios (e.g., distributed development, software licensing questions, mailing-list etiquette)

Peer Review Conventions

The {{GSoC Project}} template, used to encode GSoC projects, contains some review functionality. Please read the Template Documentation before you add a template; also please use the conventions below to make comments.

|Review=(delete as addressed)
* {{Comment|~~~~:}} Comment 1
* {{Comment|~~~~:}} Comment 2

Choosing Projects

We have a bi-weekly mentor meeting overseen by our program management team, which is a core team of 2-3 mentors and a program administrator. This group will work with mentors to ensure that project proposals are of good quality and that mentors are engaging with the program management team and students in the weeks before the application period ends.