Archived/Xen Development Projects

Latest revision as of 19:04, 18 February 2016

This page lists various Xen-related development projects that can be picked up by anyone! If you're interested in hacking on Xen, this is the place to start! Ready for the challenge?

To work on a project:

  • Find a project that looks interesting (or a bug if you want to start with something simple).
  • Send an email to the xen-devel mailing list and let us know you have started working on a specific project.
  • Post your ideas, questions and RFCs to xen-devel sooner rather than later so you can get comments and feedback.
  • Send patches to xen-devel early for review so you can get feedback and be sure you're going in the right direction.
  • Your work should be based on the xen-unstable development tree if it's Xen and/or tools related. After your patch has been merged to xen-unstable it can be backported to stable branches (Xen 4.2, Xen 4.1, etc.).
  • Your kernel-related patches should be based on the upstream kernel.org Linux git tree (latest version).

xen-devel mailing list subscription and archives: http://lists.xensource.com/mailman/listinfo/xen-devel

Before submitting patches, please look at the Submitting Xen Patches wiki page.

If you have new ideas, suggestions or development plans let us know and we'll update this list!

List of projects

Domain support

Utilize Intel QuickPath on network and block path.

Date of insert: 01/22/2013; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: High
Skills Needed: The basic requirement for this project is Linux kernel programming skill. The candidate for this project should be familiar with open source development workflow as it may require collaboration with several parties.
Description: Intel QuickPath, also known as Direct Cache Access, is the chipset feature that sits in the PCIe subsystem of Intel systems. It allows the PCIe subsystem to tag which PCIe writes to memory should reside in the Last Level Cache (LLC, also known as L3, which in some cases can be 15MB or 2.5MB per CPU). This offers an impressive speed boost, as we bypass the DIMMs and the CPU can process the data straight from the cache. Adding this capability to the network or block backends would mean that data stays in the cache longer and the guest can process it right out of the cache.
Outcomes: Expected outcome:
  • Upstream patches.
  • A benchmark report comparing performance with and without the feature.


Enabling the 9P File System transport as a paravirt device

Date of insert: 01/20/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Required skills include knowledge of kernel hacking and file system internals. Desired skills include an understanding of Xen PV driver structure and of VirtIO.
Description: VirtIO provides a 9P FS transport, which is essentially a paravirt file system device. VMs can mount arbitrary file system hierarchies exposed by the backend. The 9P FS specification has been around for a while, while the VirtIO transport is relatively new. The project would consist of implementing a classic Xen front/back pv driver pair to provide a transport for the 9P FS Protocol.
Outcomes: Expected outcome:
  • LKML patches for front and back end drivers.
  • In particular, a domain should be able to boot from the 9P FS.
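A classic Xen PV driver pair of this kind would start from the standard xenbus plumbing on the Linux side. Below is a minimal, hedged sketch of how a frontend for a hypothetical "9pfs" device class might register itself, assuming a reasonably recent kernel's xenbus API; the xen_9pfront_* names are illustrative, not existing code.

  #include <linux/module.h>
  #include <xen/xen.h>
  #include <xen/xenbus.h>

  /* Match a hypothetical "9pfs" device class advertised by the backend. */
  static const struct xenbus_device_id xen_9pfront_ids[] = {
      { "9pfs" },
      { "" }
  };

  static int xen_9pfront_probe(struct xenbus_device *dev,
                               const struct xenbus_device_id *id)
  {
      /* Allocate the shared ring and event channel here, then publish the
       * grant reference and event-channel port in the frontend xenstore dir. */
      return 0;
  }

  static void xen_9pfront_changed(struct xenbus_device *dev,
                                  enum xenbus_state backend_state)
  {
      /* Drive the usual XenbusStateInitialising -> XenbusStateConnected
       * handshake with the backend here. */
  }

  static struct xenbus_driver xen_9pfront_driver = {
      .ids = xen_9pfront_ids,
      .probe = xen_9pfront_probe,
      .otherend_changed = xen_9pfront_changed,
  };

  static int __init xen_9pfront_init(void)
  {
      if (!xen_domain())
          return -ENODEV;
      return xenbus_register_frontend(&xen_9pfront_driver);
  }
  module_init(xen_9pfront_init);
  MODULE_LICENSE("GPL");

The backend half would be the mirror image (registered with xenbus_register_backend()), mapping the ring granted by the frontend and translating ring requests into 9P messages.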


OVMF Compatibility Support Module support in Xen

Date of insert: 2/5/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Wei Liu <wei.liu2@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Easy
Skills Needed: Unknown
Description: Currently Xen supports booting HVM guests with SeaBIOS or OVMF UEFI firmware, but those are separate binaries. OVMF supports embedding a legacy BIOS blob in its binary through its Compatibility Support Module (CSM) support. We could try to produce a single OVMF binary with SeaBIOS in it, thus having only one firmware binary.

Tasks may include:

  • Figure out how CSM works.
  • Design and implement the interface between hvmloader and the unified binary.
Outcomes: Not specified.


Improvements to firmware handling for HVM guests

Date of insert: 07/16/2015; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Andrew Cooper <andrew.cooper3@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Easy
Skills Needed: Unknown
Description: Currently, all firmware is compiled into HVMLoader.

This works, but is awkward when using a single distro SeaBIOS/OVMF build designed for general use. In such a case, any time SeaBIOS or OVMF is updated, hvmloader must be rebuilt.

The purpose of this project is to alter hvmloader to take firmware blobs as a multiboot module rather than requiring them to be built in. This reduces the burden of looking after Xen in a distro environment, and will also be useful for developers wanting to work with multiple versions of firmware.

As an extension, support loading an OVMF NVRAM blob. This would enable EFI NVRAM support for guests.
Outcomes: Not specified.
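As a rough illustration of the module approach (not actual hvmloader code), the loader could scan the module list handed over by the boot protocol and pick blobs out by a command-line tag. A plain Multiboot v1 layout and a hypothetical find_firmware_module() helper are assumed here purely for illustration; the real interface between Xen and hvmloader would need to be designed as part of the project.

  #include <stdint.h>
  #include <string.h>

  /* Minimal subset of the Multiboot v1 info and module layout. */
  struct mb_info {
      uint32_t flags;
      uint32_t unused[4];   /* mem_lower, mem_upper, boot_device, cmdline */
      uint32_t mods_count;
      uint32_t mods_addr;
  };

  struct mb_module {
      uint32_t mod_start;   /* physical start of the blob */
      uint32_t mod_end;     /* physical end of the blob   */
      uint32_t cmdline;     /* tag, e.g. "seabios" or "ovmf" */
      uint32_t pad;
  };

  /* Hypothetical helper: return the start address of the firmware blob whose
   * module command line matches 'name', or 0 if it was not passed in. */
  static uint32_t find_firmware_module(const struct mb_info *mbi,
                                       const char *name, uint32_t *size)
  {
      const struct mb_module *mods =
          (const struct mb_module *)(uintptr_t)mbi->mods_addr;
      uint32_t i;

      if (!(mbi->flags & (1u << 3)))    /* no module info provided */
          return 0;

      for (i = 0; i < mbi->mods_count; i++) {
          const char *tag = (const char *)(uintptr_t)mods[i].cmdline;

          if (tag && !strcmp(tag, name)) {
              *size = mods[i].mod_end - mods[i].mod_start;
              return mods[i].mod_start;
          }
      }
      return 0;
  }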

Hypervisor

Introducing PowerClamp-like driver for Xen

Date of insert: 01/22/2013; Verified: Not updated in 2020; GSoC: Yes
Technical contact: George Dunlap <george.dunlap@eu.citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: PowerClamp was introduced to Linux in late 2012 in order to allow users to set a system-wide maximum power usage limit. This is particularly useful for data centers, where there may be a need to reduce power consumption based on availability of electricity or cooling. A more complete writeup is available at LWN.

These same arguments apply to Xen. The purpose of this project would be to implement similar functionality in Xen, and to make it interface as well as possible with the Linux PowerClamp tools, so that the same tools could be used for both. GSoC_2013#powerclamp-for-xen
Outcomes: Not specified.
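At its core PowerClamp is duty-cycle idle injection: out of every control period, force the CPUs idle for a configurable fraction. A minimal user-space sketch of that control loop is below; xen_force_idle_ms() is a hypothetical stand-in for whatever hypervisor interface (hypercall, sysctl or per-cpupool knob) the project would actually have to design.

  #include <unistd.h>

  #define PERIOD_MS 100                 /* length of one control period */

  /* Hypothetical hook: ask the hypervisor to keep the physical CPUs idle
   * for 'ms' milliseconds.  This interface does not exist today and is
   * precisely what the project would need to add. */
  static void xen_force_idle_ms(unsigned int ms)
  {
      (void)ms;                         /* placeholder */
  }

  /* Clamp power by forcing idle_pct percent of every period to be idle. */
  static void clamp_loop(unsigned int idle_pct)
  {
      for (;;) {
          unsigned int idle_ms = PERIOD_MS * idle_pct / 100;

          xen_force_idle_ms(idle_ms);
          usleep((PERIOD_MS - idle_ms) * 1000);
      }
  }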


Integrating NUMA and Tmem

Date of insert: 08/08/2012; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Dario Faggioli <dario.faggioli@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: NUMA (Non-Uniform Memory Access) systems are advanced server platforms, comprising multiple nodes. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are therefore not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

Transcendent memory (Tmem) can be seen as a mechanism for discriminating between frequently and infrequently used data, and thus helping to allocate it properly. It would be interesting to investigate and implement all the necessary mechanisms to take advantage of this and improve the performance of Tmem-enabled guests running on NUMA machines.

For instance, implementing something like alloc_page_on_any_node_but_the_current_one() (or any_node_except_this_guests_node_set() for multinode guests), and having Xen's Tmem implementation use it (especially in combination with selfballooning), could solve a significant part of the NUMA problem when running Tmem-enabled guests.
Outcomes: Not specified.
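A sketch of what such an allocator helper could look like inside Xen is below. It assumes the existing domheap allocator and NUMA iterators (alloc_domheap_pages(), MEMF_node(), for_each_online_node()); treat the exact flags and signatures as assumptions rather than a statement of the current API.

  #include <xen/mm.h>
  #include <xen/numa.h>
  #include <xen/smp.h>

  /* Allocate a page for Tmem on any online node except the one the current
   * vCPU is running on, leaving local memory free for hotter allocations. */
  static struct page_info *alloc_page_on_any_node_but_the_current_one(
      struct domain *d)
  {
      unsigned int here = cpu_to_node(smp_processor_id());
      unsigned int node;
      struct page_info *pg;

      for_each_online_node ( node )
      {
          if ( node == here )
              continue;
          /* MEMF_exact_node: fail rather than silently falling back. */
          pg = alloc_domheap_pages(d, 0, MEMF_node(node) | MEMF_exact_node);
          if ( pg != NULL )
              return pg;
      }
      return NULL;      /* only local memory left; let the caller decide */
  }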

Userspace Tools

Refactor Linux hotplug scripts

Date of insert: 15/11/2012; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Roger Pau Monné <roger.pau@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Current Linux hotplug scripts are all entangled, which makes them really difficult to understand or modify. The purpose of hotplug scripts is to give end users the chance to "easily" support different configurations for Xen devices.

Linux hotplug scripts should be analyzed, providing a good description of what each hotplug script is doing. After this, the scripts should be cleaned up, putting common pieces of code in shared files used across all scripts. A consistent coding style should be applied to all of them when the refactoring is finished.

GSoC_2013#linux-hotplug-scripts
Outcomes: Not specified.


XL to XCP VM motion

Date of insert: 15/11/12; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Ian Campbell
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: Currently xl (the toolstack supplied alongside Xen) and xapi (the XCP toolstack) have very different concepts about domain configuration, disk image storage, etc. In the XCP model domain configuration is persistent and stored in a database, while under xl domain configuration is written in configuration files. Likewise, disk images are stored as VDIs in Storage Repositories under XCP, while under xl disk images are simply files or devices in the dom0 filesystem. For more information on xl see XL. For more information on XCP see XCP Overview.

This project is to produce one or more command-line tools which support migrating VMs between these toolstacks.

One tool should be provided which takes an xl configuration file and details of an XCP pool. Using the XenAPI XML-RPC interface, it should create a VM in the pool with a close approximation of the same configuration and stream the configured disk image into a selected Storage Repository.

A second tool should be provided which performs the opposite operation, i.e. given a reference to a VM residing in an XCP pool, it should produce an xl-compatible configuration file and stream the disk image(s) out of Xapi into a suitable format.

These tools could reasonably be bundled as part of either toolstack and by implication could be written in C, OCaml or some other suitable language.

The tool need not operate on a live VM but that could be considered a stretch goal.

An acceptable alternative to the proposed implementation would be to implement a tool which converts between a commonly used VM container format which is supported by XCP (perhaps OVF or similar) and the xl toolstack configuration file and disk image formats.

GSoC_2013#xl-to-xcp-vm-motion
Outcomes: Not specified.


Allowing guests to boot with a passed-through GPU as the primary display

Date of insert: 01/22/2013; Verified: Not updated in 2020; GSoC: Yes
Technical contact: George Dunlap <george.dunlap@eu.citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: One of the primary drivers of Xen in the "consumer market" of the open-source world is the ability to pass through GPUs to guests, allowing people to run Linux as their main desktop but easily play games requiring proprietary operating systems without rebooting.

GPUs can be easily passed through to guests as secondary displays, but as of yet cannot be passed through as primary displays. The main reason is the lack of ability to load the VGA BIOS from the card into the guest.

The purpose of this project would be to allow HVM guests to load the physical card's VGA bios, so that the guest can boot with it as the primary display.

GSoC_2013#gpu-passthrough
Outcomes: Not specified.
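For background, dom0 can already extract the physical card's VGA BIOS through the PCI device's sysfs rom attribute; the snippet below shows the usual enable/read/disable sequence a toolstack helper would need before handing the blob to the guest firmware. The BDF in the path is only an example and error handling is kept minimal.

  #include <stdio.h>
  #include <stdlib.h>

  /* Example device; a real tool would take the BDF as an argument. */
  #define ROM_PATH "/sys/bus/pci/devices/0000:01:00.0/rom"

  /* Returns the ROM size and stores a malloc()ed copy in *out, or -1. */
  static long read_vga_bios(unsigned char **out)
  {
      FILE *f;
      long size = -1;

      /* Writing "1" to the rom attribute asks the kernel to expose the ROM. */
      f = fopen(ROM_PATH, "w");
      if (!f)
          return -1;
      fputs("1", f);
      fclose(f);

      f = fopen(ROM_PATH, "r");
      if (f) {
          fseek(f, 0, SEEK_END);
          size = ftell(f);
          rewind(f);

          *out = malloc(size);
          if (!*out || fread(*out, 1, size, f) != (size_t)size)
              size = -1;
          fclose(f);
      }

      /* Writing "0" hides the ROM again. */
      f = fopen(ROM_PATH, "w");
      if (f) {
          fputs("0", f);
          fclose(f);
      }
      return size;
  }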


Advanced Scheduling Parameters

Date of insert: 01/22/2013; Verified: Not updated in 2020; GSoC: Yes
Technical contact: George Dunlap <george.dunlap@eu.citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: The credit scheduler provides a range of "knobs" to control guest behavior, including CPU weight and caps. However, a number of users have requested the ability to encode more advanced scheduling logic. For instance: "Let this VM max out for 5 minutes out of any given hour; but after that, impose a cap of 20%, so that even if the system is idle it can't consume an unlimited amount of CPU power without paying for a higher level of service."

This is too coarse-grained to do inside the hypervisor; a user-space tool would be sufficient. The goal of this project would be to come up with a good way for admins to support these kinds of complex policies in a simple and robust way.
Outcomes: Not specified.
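Such a user-space policy daemon could sit in a loop on top of libxl's scheduling-parameter interface. The sketch below assumes libxl's libxl_domain_sched_params_get()/_set() calls and simply re-imposes a cap once a domain has exhausted its burst budget; the budget accounting itself, and the exact field semantics, are left as assumptions.

  #include <libxl.h>

  /* Hypothetical policy step: once a domain has used up its burst allowance,
   * cap it (e.g. at 20% of one CPU) until the next accounting period. */
  static int impose_cap(libxl_ctx *ctx, uint32_t domid, int cap_pct)
  {
      libxl_domain_sched_params params;
      int rc;

      libxl_domain_sched_params_init(&params);
      rc = libxl_domain_sched_params_get(ctx, domid, &params);
      if (rc)
          goto out;

      params.cap = cap_pct;             /* 20 == 20% of one CPU */
      rc = libxl_domain_sched_params_set(ctx, domid, &params);

  out:
      libxl_domain_sched_params_dispose(&params);
      return rc;
  }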


CPU/RAM/PCI diagram tool

Date of insert: 01/30/2013; Verified: Not updated in 2020; GSoC: yes
Technical contact: Andy Cooper <andrew.cooper3@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Low to medium
Skills Needed: Linux scripting; basic understanding of PC server hardware
Description: It is often useful in debugging kernel, hypervisor or performance problems to understand the bus topology of a server. This project will create a layout diagram for a server automatically using data from ACPI Tables, SMBios Tables, lspci output etc. This tool would be useful in general Linux environments including Xen and KVM based virtualisation systems. There are many avenues for extension such as labelling relevant hardware errata, performing bus throughput calculations etc.
Outcomes: A tool is created that can either run on a live Linux system or offline using captured data to produce a graphical representation of the hardware topology of the system including bus topology, hardware device locations, memory bank locations, etc. The tool would be submitted to a suitable open-source project such as the Xen hypervisor project or XCP.
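The data-gathering half of such a tool can lean entirely on standard Linux sysfs, with no Xen involvement; the short program below, for example, prints which NUMA node each PCI device reports, which is exactly the kind of raw input the diagram would be drawn from.

  #include <dirent.h>
  #include <stdio.h>

  /* Print every PCI device together with the NUMA node sysfs reports for it. */
  int main(void)
  {
      DIR *d = opendir("/sys/bus/pci/devices");
      struct dirent *e;

      if (!d)
          return 1;

      while ((e = readdir(d)) != NULL) {
          char path[512];
          FILE *f;
          int node = -1;

          if (e->d_name[0] == '.')
              continue;

          snprintf(path, sizeof(path),
                   "/sys/bus/pci/devices/%s/numa_node", e->d_name);
          f = fopen(path, "r");
          if (f) {
              if (fscanf(f, "%d", &node) != 1)
                  node = -1;
              fclose(f);
          }
          printf("%s -> NUMA node %d\n", e->d_name, node);
      }
      closedir(d);
      return 0;
  }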


KDD (Windows Debugger Stub) enhancements

Date of insert: 01/30/2013; Verified: Not updated in 2020; GSoC: yes
Technical contact: Paul Durrant <paul.durrant@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Medium
Skills Needed: C, Kernel Debuggers, Xen, Windows
Description: kdd is a Windows debugger stub for the Xen hypervisor. It is open source and can be found under http://xenbits.xen.org/hg/xen-unstable.hg/tools/debugger/kdd

kdd allows you to debug a running Windows virtual machine on Xen using standard Windows kernel debugging tools like WinDbg. kdd is an external debugger stub for the Windows kernel. Windows can be debugged without enabling the debugger stub inside the Windows kernel by using kdd. This is important for debugging hard-to-reproduce problems on Windows virtual machines that may not have debugging enabled.

Expected Results:

  1. Add support for Windows 8 (x86, x64) to kdd
  2. Add support for Windows Server 2012 to kdd
  3. Enhance kdd to allow WinDbg to write out usable Windows memory dumps (via .dump debugger extension) for all supported versions
  4. Produce a user guide for kdd on Xen wiki page
Nice to have: Allow kdd to operate on a Windows domain checkpoint file (the output of xl save, for example)
Outcomes: Code is submitted to xen-devel@xen.org for inclusion in the xen-unstable project.


Lazy restore using memory paging

Date of insert: 01/20/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: A good understanding of save/restore, and virtualized memory management (e.g. EPT, shadow page tables, etc). In principle the entire project can be implemented in user-space C code, but it may be the case that new hypercalls are needed for performance reasons.
Description: VM save/restore results in a large amount of IO and non-trivial downtime, as the entire memory footprint of a VM is read from storage.

Xen memory paging support in x86 is now mature enough to allow for lazy restore, whereby the footprint of a VM is backfilled while the VM executes. If the VM hits a page not yet present, it is eagerly paged in.

There has been some concern recently about the lack of docs and/or mature tools that use xen-paging. This is a good way to address the problem.
Outcomes: Expected outcome:
  • Mainline patches for libxc and libxl
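The restore-side control flow is roughly the one sketched below. The pager_* and image_* helpers are purely hypothetical placeholders for the xenpaging-style machinery (paging ring, xc_mem_paging_* calls, save-image reader) that the real implementation would build on; only the overall shape of the loop is the point here.

  #include <stdbool.h>
  #include <stdint.h>

  /* Hypothetical placeholders for the paging machinery and image reader. */
  bool pager_wait_request(uint64_t *gfn);           /* guest faulted on gfn */
  int  pager_load_page(uint64_t gfn, const void *data);
  int  image_read_page(uint64_t gfn, void *buf);    /* read from save file  */
  uint64_t image_next_unrestored_gfn(void);         /* UINT64_MAX when done */

  /* Lazy restore: the guest starts running immediately; memory is backfilled
   * in the background and faulted-on pages are served eagerly. */
  static void lazy_restore_loop(void)
  {
      uint8_t buf[4096];
      uint64_t gfn;

      for (;;) {
          if (pager_wait_request(&gfn)) {
              /* The guest touched a page we have not restored yet. */
              image_read_page(gfn, buf);
              pager_load_page(gfn, buf);
          } else {
              /* No fault pending: keep backfilling sequentially. */
              gfn = image_next_unrestored_gfn();
              if (gfn == UINT64_MAX)
                  break;                            /* restore complete */
              image_read_page(gfn, buf);
              pager_load_page(gfn, buf);
          }
      }
  }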


CPUID Programming for Humans

Date of insert: 02/04/2014; Verified: Not updated in 2020; GSoC: Yes
Technical contact: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: A good understanding of C user-land programming, and the ability to dive into qemu/libvirt (for reference code and integration), as well as libxc and libxl (for implementation).
Description: When creating a VM, a policy is applied to mask certain CPUID features. Right now it's black magic.

The KVM stack has done an excellent job of making this human-usable and understandable.

For example, in a qemu-kvm command-line you may encounter:

-cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme

And in <qemu>/target-i386.c you find a fairly comprehensive description of x86 processor models, what CPUID features are inherent, and what CPUID feature each of these symbolic flags enables.

In the Xen world, there is a libxc interface to do the same, although it's all hex and register driven. It's effective, yet horrible to use.

An ideal outcome would have libxl config files and the command line absorb a similarly human-friendly description of the CPUID features a user wishes for the VM, and interface appropriately with libxl. Further, autodetection of the best CPUID should yield human-readable output so it is easy to understand what the VM thinks about its processor.

Finally, interfacing with libvirt should be carefully considered.

CPUID management is crucial in a heterogeneous cluster where migrations and save restore require careful processor feature selection to avoid blow-ups.
Outcomes: Expected outcome:
  • Mainline patches for libxl
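To make the point concrete, the user-facing half is little more than a table mapping symbolic feature names to CPUID leaf/register/bit positions plus a small parser for qemu-style "+flag,-flag" strings. The sketch below is illustrative only: the bit positions shown are the standard architectural ones, but the structures and function names are made up and not part of libxl.

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  struct cpuid_flag {
      const char *name;
      uint32_t    leaf;        /* CPUID input EAX             */
      int         reg;         /* 0=EAX 1=EBX 2=ECX 3=EDX     */
      int         bit;
  };

  static const struct cpuid_flag flags[] = {
      { "vmx",     0x00000001, 2,  5 },
      { "smx",     0x00000001, 2,  6 },
      { "pcid",    0x00000001, 2, 17 },
      { "osxsave", 0x00000001, 2, 27 },
      { "pdpe1gb", 0x80000001, 3, 26 },
  };

  /* Parse a qemu-style string such as "+vmx,+pcid" and print what a
   * libxl-level policy would have to tweak.  Purely a sketch. */
  static void parse_flags(const char *spec)
  {
      char buf[256], *tok;

      snprintf(buf, sizeof(buf), "%s", spec);
      for (tok = strtok(buf, ","); tok; tok = strtok(NULL, ",")) {
          int enable = (tok[0] != '-');
          const char *name = (tok[0] == '+' || tok[0] == '-') ? tok + 1 : tok;
          size_t i;

          for (i = 0; i < sizeof(flags) / sizeof(flags[0]); i++)
              if (!strcmp(flags[i].name, name))
                  printf("%s: leaf 0x%08x reg %d bit %d -> %d\n", name,
                         (unsigned int)flags[i].leaf, flags[i].reg,
                         flags[i].bit, enable);
      }
  }

  int main(void)
  {
      parse_flags("+vmx,+pcid,+pdpe1gb");
      return 0;
  }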

Mirage and XAPI projects

There are separate wiki pages about XCP- and XAPI-related projects. Make sure you check these out as well!


Create a tiny VM for easy load testing

Date of insert: 01/30/2013; Verified: Not updated in 2020; GSoC: yes
Technical contact: Dave Scott <first.last@citrix.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Medium
Skills Needed: OCaml
Description: The http://www.openmirage.org/ framework can be used to create tiny 'exokernels': entire software stacks which run directly on the xen hypervisor. These VMs have such a small memory footprint (16 MiB or less) that many of them can be run even on relatively small hosts. The goal of this project is to create a specific 'exokernel' that can be configured to generate a specific I/O pattern, and to create configurations that mimic the boot sequence of Linux and Windows guests. The resulting exokernel will then enable cheap system load testing.

The first task is to generate an I/O trace from a VM. For this we could use 'xen-disk', a userspace Mirage application which acts as a block backend for xen guests (see http://openmirage.org/wiki/xen-synthesize-virtual-disk). Following the wiki instructions we could modify a 'file' backend to log the request timestamps, offsets, buffer lengths.

The second task is to create a simple kernel based on one of the MirageOS examples (see http://github.com/mirage/mirage-skeleton). The 'basic_block' example shows how reads and writes are done. The previously-generated log could be statically compiled into the kernel and executed to generate load.
Outcomes: 1. a repository containing an 'exokernel' (see http://github.com/mirage/mirage-skeleton) 2. at least 2 I/O traces, one for Windows boot and one for Linux boot (any version)


Fuzz testing Xen with Mirage

Date of insert: 28/11/2012; Verified: Not updated in 2020; GSoC: yes
Technical contact: Anil Madhavapeddy <anil@recoil.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: medium
Skills Needed: OCaml
Description: MirageOS (http://openmirage.org) is a type-safe exokernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening guest kernel. We would like to use the Mirage/Xen libraries to fuzz test all levels of a typical cloud toolstack. Mirage has low-level bindings for Xen hypercalls, mid-level bindings for domain management, and high-level bindings to XCP for cluster management. This project would build a QuickCheck-style fuzzing mechanism that would perform millions of random operations against a real cluster, and identify bugs with useful backtraces.

The first task would be to become familiar with a specification-based testing tool like Kaputt (see http://kaputt.x9c.fr/). The second task would be to choose an interface for testing; perhaps one of the hypercall ones.

GSoC_2013#fuzz-testing-mirage
Outcomes: 1. a repo containing a fuzz testing tool; 2. some unexpected behaviour with a backtrace (NB it's not required that we find a critical bug, we just need to show the approach works)


From simulation to emulation to production: self-scaling apps

Date of insert: 28/11/2012; Verified: Not updated in 2020; GSoC: no, too much work
Technical contact: Anil Madhavapeddy <anil@recoil.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: hard
Skills Needed: OCaml
Description: MirageOS (http://openmirage.org) is a type-safe exokernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening guest kernel. An interesting consequence of programming Mirage applications in a functional language is that the device drivers can be substituted with emulated equivalents. Therefore, it should be possible to test an application under extreme load conditions as a simulation, and then recompile the *same* code into production. The simulation can inject faults and test data structures under distributed conditions, but using a fraction of the resources required for a real deployment.

The first task is to familiarise yourself with a typical Mirage application, I suggest a webserver (see https://github.com/mirage/mirage-www). The second task is to replace the ethernet driver with a synthetic equivalent, so we can feed it simulated traffic. Third, we should inject simulated web traffic (recorded from a real session) and attempt to determine how the application response time varies with load (number of connections; incoming packet rate).

This project will require a solid grasp of distributed protocols, and functional programming. Okasaki's book will be a useful resource...
Outcomes: 1. a repo/branch with a fake ethernet device and a traffic simulator; 2. an interesting performance graph


Towards a multi-language unikernel substrate for Xen

Date of insert: 28/11/2012; Verified: Not updated in 2020; GSoC: no, too difficult
Technical contact: Anil Madhavapeddy <anil@recoil.org>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: hard
Skills Needed: OCaml, Haskell, Java
Description: There are several languages available that compile directly to Xen microkernels, instead of running under an intervening guest OS. We're dubbing such specialised binaries "unikernels". Examples include:

  • OCaml: Mirage http://openmirage.org
  • Haskell: HalVM https://github.com/GaloisInc/HaLVM#readme
  • Erlang: ErlangOnXen http://erlangonxen.org
  • Java: GuestVM http://labs.oracle.com/projects/guestvm/, OSv https://github.com/cloudius-systems/osv

Each of these is in a different state of reliability and usability. We would like to survey all of them, build some common representative benchmarks to evaluate them, and build a common toolchain based on XCP that will make it easier to share code across such efforts. This project will require a reasonable grasp of several programming languages and runtimes, and should be an excellent project to learn more about the innards of popular languages.

GSoC_2013#unikernel-substrate
Outcomes: 1. a repo containing a common library of low-level functions; 2. a proof of concept port of at least 2 systems to this new library


DRBD Integration

Date of insert: 07/01/2013; Verified: Not updated in 2020; GSoC: Unknown
Technical contact: John Morris <john@zultron.com>
Mailing list/forum for project: xen-devel@
IRC channel for project: #xen-devel
Difficulty: Unknown
Skills Needed: Unknown
Description: DRBD is potentially a great addition to the other high-availability features in XenAPI. An architecture of as few as two dom0s with DRBD-mirrored local storage is an inexpensive, minimal HA configuration: it enables live migration of VMs between physical hosts, provides failover in case of disk failure, and eliminates the need for external storage. This setup can be used in small-shop or hobbyist environments, or could be used as a basic unit in a much larger scalable architecture.

Existing attempts at integrating DRBD sit below the SM layer and thus do not enable one VBD per DRBD device. They also suffer from a split-brain situation that could be avoided by controlling active/standby status from XenAPI.

DRBD should be implemented as a new SR type on top of LVM. The tools for managing DRBD devices need to be built into storage management, along with the logic for switching the active and standby nodes.
Outcomes: Not specified.

Quick links to changelogs of the various Xen-related repositories/trees

Please see the XenRepositories wiki page!