|
This page is under construction.
|
GSoC and Xen
This page lists project ideas for Google Summer of Code (GSoC) 2013.
Key Google Pages
|
Note that Google has not yet announced GSoC 2013.
|
Project List
This section contains GSoC projects that have been reviewed by Xen Maintainers and Committers. Community members are free to add their own project ideas, but they need to add them to the Unreviewed Project Ideas section of this document.
Introducing a PowerClamp-like driver for Xen
Date of insert: 2013-01-22; Verified: Not specified, date when created; GSoC: Yes |
Mentor: George Dunlap <george.dunlap@eu.citrix.com> |
Difficulty: Unknown |
Skills Needed: Unknown |
Description: PowerClamp was introduced to Linux in late 2012 in order to allow users to set a system-wide maximum power usage limit. This is particularly useful for data centers, where there may be a need to reduce power consumption based on availability of electricity or cooling. A more complete writeup is available at LWN.
These same arguments apply to Xen. The purpose of this project would be to implement similar functionality in Xen, and to make it interoperate as well as possible with the Linux PowerClamp tools, so that the same tools could be used for both (a sketch of the underlying idle-injection idea is given below the project fields). |
Outcomes: Not specified, project outcomes |
Steps: Not specified, necessary steps to accomplish project goal |
References: Not specified, useful references (mail threads / manuals / web pages) for students to learn background and motivation of the project. If the references are inlined in description, simply write "References inline in description". |
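Linux PowerClamp enforces its limit by injecting forced idle time: high-priority idle threads keep each CPU idle for a configurable fraction of every time window. The following is a minimal sketch of just that duty-cycle arithmetic, as one possible starting point for the Xen variant; the window length is an illustrative assumption, and in Xen the enforcement would presumably live in the scheduler, forcing pCPUs idle, rather than in guest-visible threads.
 /* Sketch of PowerClamp-style idle-injection arithmetic.
  * The window length is an illustrative assumption. */
 #include <stdio.h>
 
 #define WINDOW_MS 100 /* hypothetical injection window */
 
 /* Split each window into a forced-idle slice and a runnable slice,
  * according to the requested system-wide idle percentage. */
 static void split_window(unsigned idle_pct, unsigned *idle_ms, unsigned *run_ms)
 {
     *idle_ms = WINDOW_MS * idle_pct / 100;
     *run_ms  = WINDOW_MS - *idle_ms;
 }
 
 int main(void)
 {
     unsigned idle, run;
     for (unsigned pct = 0; pct <= 50; pct += 25) {
         split_window(pct, &idle, &run);
         printf("target %u%% idle -> %u ms idle / %u ms run per window\n",
                pct, idle, run);
     }
     return 0;
 }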
|
Implement Temporal Isolation and Multiprocessor Support in the SEDF Scheduler
Date of insert: 2012-08-08; Verified: Not specified, date when created; GSoC: Yes |
Mentor: Dario Faggioli <dario.faggioli@citrix.com> |
Difficulty: Basic to Medium |
Skills Needed: C programming, genuine interest in scheduling algorithm design and implementation |
Description: No matter whether it is to build a multi-personality mobile phone or to help achieve consolidation in industrial and factory automation, embedded virtualization ([1], [2], [3]) is upon us. In fact, quite a number of embedded hypervisors already exist, e.g. Wind River Hypervisor, CodeZero or PikeOS. Xen definitely is a small, fast type-1 hypervisor with support for multiple VMs [1], so it could be a good candidate embedded hypervisor.
Moreover, Xen comes with an implementation of one of the most famous and efficient real-time scheduling algorithms, Earliest Deadline First (called SEDF in Xen), and real-time support is a key feature for a successful embedded hypervisor. Such an advanced scheduling policy, if implemented correctly, is a great advancement and provides much more flexibility than vCPU pinning alone (which is what most embedded hypervisors do to guarantee real-time performance and isolation).
However, SEDF, the EDF implementation in Xen, suffers from some rough edges. As of now, SEDF deals with events such as a vCPU blocking --in general, stopping running-- and unblocking --in general, restarting running-- by trying (and failing!) to special-case all the possible situations, resulting in code that is rather complicated, ugly, inefficient and hard to maintain. Unified approaches have been proposed for handling blocking and unblocking in EDF while still guaranteeing temporal isolation among different vCPUs. SEDF also lacks proper multiprocessor support, meaning that it does not properly handle SMP systems unless vCPUs are specifically and statically pinned by the user. This is a big limitation of the current implementation, especially since EDF can work well without imposing this constraint, providing much more flexibility and efficiency in exploiting the system resources to the fullest.
Therefore, this project aims at extending the SEDF scheduler, turning it into a proper multiprocessor-capable and temporal-isolation-enabled scheduling solution (a sketch of the key CBS rule from Phase 2 is given below the project fields). |
Outcomes: The candidate is expected to produce a set of patch series, more specifically one series for each phase of the project, send them to the Xen development mailing list and follow the typical open source process for having them upstreamed in Xen. |
Steps: The work on the project can be subdivided in the following phases:
- Phase 1: investigate and understand the Constant BandWidth Server (CBS, [1], [2], [3]);
- Phase 2: get rid of all the special cases for dealing with vCPU blocking and unblocking and implement CBS on top of the existing SEDF code;
- Phase 3: instead of using one scheduling run-queue per physical processor (pCPU), use only one per "set of pCPUs". For instance, one run-queue for all the pCPUs that share a common L3 cache, as credit2, another scheduler present in Xen, already does;
- Phase 4 [Optional]: Envision and implement a mechanism for balancing and migrating vCPUs among different run-queues.
|
References: Useful references inlined in the project description |
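To make Phase 2 more concrete, here is a minimal sketch of the CBS wake-up rule with simplified types, ignoring all locking and scheduler plumbing; the structure and function names are hypothetical, not existing Xen code. On unblock, if consuming the leftover budget by the old deadline would exceed the reserved bandwidth Q/T, the server gets a fresh deadline and a full budget; otherwise it keeps its current pair. This single rule is what replaces the ad-hoc special-casing while preserving temporal isolation.
 /* Hypothetical sketch of the CBS unblocking rule (not existing Xen code). */
 #include <stdint.h>
 #include <stdio.h>
 
 struct cbs_server {
     uint64_t budget;   /* remaining budget, in ns */
     uint64_t deadline; /* absolute scheduling deadline, in ns */
     uint64_t Q;        /* reserved budget per period, in ns */
     uint64_t T;        /* server period, in ns */
 };
 
 static void cbs_on_unblock(struct cbs_server *s, uint64_t now)
 {
     /* budget/(deadline-now) >= Q/T, rewritten to avoid division */
     if (s->deadline <= now || s->budget * s->T >= (s->deadline - now) * s->Q) {
         s->deadline = now + s->T; /* fresh deadline ... */
         s->budget = s->Q;         /* ... and full budget */
     }
     /* else: keep the current (deadline, budget) pair */
 }
 
 int main(void)
 {
     /* 10 ms reserved every 100 ms; woken at t=4 ms with 2 ms budget left
      * and the old deadline at t=5 ms: the rule grants a fresh deadline. */
     struct cbs_server s = { .budget = 2000000, .deadline = 5000000,
                             .Q = 10000000, .T = 100000000 };
     cbs_on_unblock(&s, 4000000);
     printf("deadline=%llu ns, budget=%llu ns\n",
            (unsigned long long)s.deadline, (unsigned long long)s.budget);
     return 0;
 }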
|
Refactor Linux hotplug scripts
Date of insert: 2012-11-15; Verified: Not specified, date when created; GSoC: Yes |
Mentor: Roger Pau Monné <roger.pau@citrix.com> |
Difficulty: Unknown |
Skills Needed: Unknown |
Description: The current Linux hotplug scripts are all entangled, which makes them really difficult to understand or modify. The purpose of hotplug scripts is to give end users the chance to "easily" support different configurations for Xen devices.
The Linux hotplug scripts should be analysed, providing a good description of what each hotplug script does. After this, the scripts should be cleaned up, putting common pieces of code in shared files across all scripts. A consistent coding style should be applied to all of them once the refactoring is finished. |
Outcomes: Not specified, project outcomes |
Steps: Not specified, necessary steps to accomplish project goal |
References: Not specified, useful references (mail threads / manuals / web pages) for students to learn background and motivation of the project. If the references are inlined in description, simply write "References inline in description". |
|
XL to XCP VM motion
Date of insert: 2012-11-15; Verified: Not specified, date when created; GSoC: Yes |
Mentor: Ian Campbell <ian.campbell@citrix.com> |
Difficulty: Unknown |
Skills Needed: Unknown |
Description: Currently xl (the toolstack supplied alongside Xen) and xapi (the XCP toolstack) have very different concepts of domain configuration, disk image storage, etc. In the XCP model, domain configuration is persistent and stored in a database, while under xl domain configuration is written in configuration files. Likewise, disk images are stored as VDIs in Storage Repositories, while under xl disk images are simply files or devices in the dom0 filesystem. For more information on xl see XL. For more information on XCP see XCP Overview.
This project is to produce one or more command-line tools which support migrating VMs between these toolstacks.
One tool should be provided which takes an xl configuration file and details of an XCP pool. Using the XenAPI XML-RPC interface, it should create a VM in the pool with a close approximation of the same configuration and stream the configured disk image into a selected Storage Repository.
A second tool should be provided which performs the opposite operation, i.e. given a reference to a VM residing in an XCP pool, it should produce an xl-compatible configuration file and stream the disk image(s) out of Xapi into a suitable format (a toy sketch of the configuration mapping involved is given below the project fields).
These tools could reasonably be bundled as part of either toolstack and by implication could be written in C, OCaml or some other suitable language.
The tool need not operate on a live VM but that could be considered a stretch goal.
An acceptable alternative to the proposed implementation would be to implement a tool which converts between a commonly used VM container format which is supported by XCP (perhaps OVF or similar) and the xl toolstack configuration file and disk image formats. |
Outcomes: Not specified, project outcomes |
Steps: Not specified, necessary steps to accomplish project goal |
References: Not specified, useful references (mail threads / manuals / web pages) for students to learn background and motivation of the project. If the references are inlined in description, simply write "References inline in description". |
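As a toy illustration of the configuration-mapping half of this project, the sketch below pairs a few xl config keys with the XenAPI VM record fields they roughly correspond to. The pairing is an assumption about a sensible starting point, not an authoritative table; a real tool would also have to convert units and handle the many keys with no one-to-one equivalent.
 /* Toy sketch: a starting-point translation table from xl config keys
  * to XenAPI VM record fields (the pairing is an assumption). */
 #include <stdio.h>
 
 struct field_map {
     const char *xl_key;     /* key as found in an xl config file */
     const char *xapi_field; /* corresponding XenAPI VM record field */
 };
 
 static const struct field_map map[] = {
     { "name",   "name_label" },
     { "memory", "memory_static_max" }, /* xl uses MiB; XenAPI uses bytes */
     { "vcpus",  "VCPUs_max" },
     { NULL, NULL }
 };
 
 int main(void)
 {
     for (const struct field_map *m = map; m->xl_key; m++)
         printf("%-8s -> %s\n", m->xl_key, m->xapi_field);
     return 0;
 }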
|
VM Snapshots
Date of insert: 2013-01-16; Verified: Not specified, date when created; GSoC: Yes |
Mentor: Stefano Stabellini <stefano.stabellini@eu.citrix.com> |
Difficulty: Unknown |
Skills Needed: Unknown |
Description: Although xl is capable of saving and restoring a running VM, it is not currently possible to create a snapshot of the disk together with the rest of the VM.
QEMU is capable of creating, listing and deleting disk snapshots on QCOW2 and QED files, so even today it is possible, by issuing the right commands via the QEMU monitor, to create disk snapshots of a running Xen VM. However, xl and libxl have no knowledge of these snapshots and do not know how to create, list or delete them.
This project is about implementing disk snapshot support in libxl, using the QMP protocol to issue commands to QEMU (a sketch of such an exchange is given below the project fields). Users should be able to manage the entire life-cycle of their disk snapshots via xl. The candidate should also explore ways to integrate disk snapshots into the regular Xen save/restore mechanisms and provide a solid implementation for xl/libxl. |
Outcomes: Not specified, project outcomes |
Steps: Not specified, necessary steps to accomplish project goal |
References: Not specified, useful references (mail threads / manuals / web pages) for students to learn background and motivation of the project. If the references are inlined in description, simply write "References inline in description". |
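To illustrate the QMP side, the sketch below drives QEMU's blockdev-snapshot-sync command over a QMP UNIX socket from C. The socket path and device name are hypothetical, and libxl would use its own QMP client rather than raw sockets; this only shows the shape of the exchange (greeting, qmp_capabilities, then the command).
 /* Sketch: create an external disk snapshot of a running VM via QMP.
  * Socket path and device name below are hypothetical. */
 #include <stdio.h>
 #include <string.h>
 #include <unistd.h>
 #include <sys/socket.h>
 #include <sys/un.h>
 
 int main(void)
 {
     struct sockaddr_un addr = { .sun_family = AF_UNIX };
     char buf[4096];
     ssize_t n;
     int fd = socket(AF_UNIX, SOCK_STREAM, 0);
 
     strncpy(addr.sun_path, "/var/run/xen/qmp-libxl-1", sizeof(addr.sun_path) - 1);
     if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
         perror("connect");
         return 1;
     }
     n = read(fd, buf, sizeof(buf)); /* QMP greeting */
     dprintf(fd, "{\"execute\":\"qmp_capabilities\"}\n");
     n = read(fd, buf, sizeof(buf)); /* capabilities ack */
     dprintf(fd, "{\"execute\":\"blockdev-snapshot-sync\",\"arguments\":"
                 "{\"device\":\"xvda\",\"snapshot-file\":\"/tmp/snap.qcow2\","
                 "\"format\":\"qcow2\"}}\n");
     n = read(fd, buf, sizeof(buf)); /* command reply */
     if (n > 0)
         printf("%.*s\n", (int)n, buf);
     close(fd);
     return 0;
 }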
|
Allowing guests to boot with a passed-through GPU as the primary display
|
Fuzz testing Xen with Mirage
Date of insert: 2012-11-28; Verified: Not specified, date when created; GSoC: Yes |
Mentor: Anil Madhavapeddy <anil@recoil.org> |
Difficulty: Unknown |
Skills Needed: Unknown |
Description: MirageOS (http://openmirage.org) is a type-safe exokernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening guest kernel. We would like to use the Mirage/Xen libraries to fuzz test all levels of a typical cloud toolstack. Mirage has low-level bindings for Xen hypercalls, mid-level bindings for domain management, and high-level bindings to XCP for cluster management. This project would build a QuickCheck-style fuzzing mechanism that would perform millions of random operations against a real cluster, and identify bugs with useful backtraces (the seed-and-replay idea at the heart of this is sketched below the project fields). |
Outcomes: Not specified, project outcomes |
Steps: Not specified, necessary steps to accomplish project goal |
References: Not specified, useful references (mail threads / manuals / web pages) for students to learn background and motivation of the project. If the references are inlined in description, simply write "References inline in description". |
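The sketch below shows the seed-and-replay core of a QuickCheck-style fuzzer in a deliberately language-agnostic way (the real project would be OCaml built on the Mirage/Xen bindings). The operations are hypothetical stand-ins; the point is that recording the PRNG seed makes every failing run of random operations reproducible.
 /* Sketch of seed-driven random-operation fuzzing; the operations are
  * hypothetical stand-ins for real toolstack calls. */
 #include <stdio.h>
 #include <stdlib.h>
 
 static void op_create(void)  { puts("create domain");  } /* stand-in */
 static void op_destroy(void) { puts("destroy domain"); } /* stand-in */
 static void op_migrate(void) { puts("migrate domain"); } /* stand-in */
 
 static void (*const ops[])(void) = { op_create, op_destroy, op_migrate };
 
 int main(int argc, char **argv)
 {
     /* The seed is the whole reproducer: re-run with the same seed to
      * replay the exact operation sequence that triggered a bug. */
     unsigned seed = argc > 1 ? (unsigned)atoi(argv[1]) : 42;
     srand(seed);
     printf("seed=%u\n", seed);
     for (int i = 0; i < 1000; i++)
         ops[rand() % (sizeof(ops) / sizeof(ops[0]))]();
     return 0;
 }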
|
Testing PV and HVM installs of Debian using debian-installer
Date of insert: 2013-01-23; Verified: Not specified, date when created; GSoC: Yes |
Mentor: Ian Jackson <ian.jackson@eu.citrix.com> |
Difficulty: Unknown |
Skills Needed: Unknown |
Description: The testing system "osstest", which is used for the push gate for the xen and related trees, should have Debian PV and HVM guest installations, based on the standard Debian installer, in its repertoire. Also, it currently always tests kernels as host and guest in the same installation.
- Task 1: Generalise the functions in osstest which generate debian-installer preseed files and manage the installation, to teach them how to set up PV and HVM guests, and provide an appropriate ts-* invocation script.
- Task 2: Extend the guest installer from task 1 to be able to install a kernel other than the one which comes from the Debian repository, so that it is possible to test one kernel as host with a different specified kernel as guest.
- Task 3: Determine which combinations of kernel branches should be added to the test schedules, push gates, etc. and write this up in a report for deployment by the infrastructure maintainers.
- More information: See xen-devel test reports. Code is at http://xenbits.xen.org/gitweb/?p=osstest.git;a=summary
|
Outcomes: Not specified, project outcomes |
Steps: Not specified, necessary steps to accomplish project goal |
References: Not specified, useful references (mail threads / manuals / web pages) for students to learn background and motivation of the project. If the references are inlined in description, simply write "References inline in description". |
|
Testing NetBSD
Date of insert: 2013-01-23; Verified: Not specified, date when created; GSoC: Yes |
Mentor: Ian Jackson <ian.jackson@eu.citrix.com> |
Difficulty: Unknown |
Skills Needed: Unknown |
Description: The testing system "osstest" which is used for the push gate for the xen and related trees should be able to test NetBSD both as host and guest.
- Task 1: Understand how best to automate installation of NetBSD. Write code in osstest which is able to automatically and noninteractively install NetBSD on a bare host.
- Task 2: Test and debug osstest's automatic building arrangements so that they can correctly build Xen on NetBSD.
- Task 3: Write code in osstest which can automatically install the Xen from task 2 on the system installed by task 1.
- Task 4: Debug at least one of the guest installation capabilities in osstest so that it works on the Xen system from task 3.
- Task 5: Rework the code from task 1 so that it can also install a NetBSD guest, ideally either as a guest of a Linux dom0 or of a NetBSD dom0.
- Task 6: Determine which versions of NetBSD and of Linux should be tested in which combinations and write this up in a report for deployment by the infrastructure maintainers.
- More information: See xen-devel test reports. Code is at http://xenbits.xen.org/gitweb/?p=osstest.git;a=summary
|
Outcomes: Not specified, project outcomes |
Steps: Not specified, necessary steps to accomplish project goal |
References: Not specified, useful references (mail threads / manuals / web pages) for students to learn background and motivation of the project. If the references are inlined in description, simply write "References inline in description". |
|
Unreviewed Project Ideas
Virtual NUMA for Xen guests
Date of insert: 2012-12-12; Verified: Not specified, date when created; GSoC: Yes |
Mentor: Dario Faggioli <dario.faggioli@citrix.com> |
Difficulty: Medium |
Skills Needed: C programming, computer architecture, virtualization concepts |
Description: NUMA (Non-Uniform Memory Access) systems are advanced server platforms, comprising multiple nodes. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.
Ideally, each VM should have its memory allocated out of just one node and, as long as its vCPUs also run there, both throughput and latency are optimal. However, in cases where a VM ends up having its memory allocated from multiple nodes, we should inform it that it is running on a NUMA platform: a virtual NUMA. This could be very important, especially for some specific workloads (for instance, HPC applications). In fact, if the guest OS and applications have any NUMA support, exporting the virtual topology is the only way to make that support effective, and perhaps to fill, at least to some extent, the performance gap introduced by the need to distribute a guest over more than one node (a hypothetical sketch of the data such a virtual topology would carry is given below the project fields). Just for reference, this feature, under the name of vNUMA, is one of the key and most advertised features of VMware vSphere 5 (vNUMA: what it is and why it matters).
This project fits in the efforts the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: Xen NUMA Roadmap |
Outcomes: The candidate is expected to produce a set of patch series (one patch series for each phase of the project), send them to the Xen development mailing list and follow the typical open source process for having them upstreamed in Xen. |
Steps: The work on the project can be subdivided in the following phases:
- Phase 1: identify the constraints that introducing virtual NUMA would impose on the other components of the Xen architecture (or, vice versa, the constraints that the existing components of the Xen architecture would impose on virtual NUMA). Put together a design coherent with these constraints and share it with the Xen development community to get feedback on it;
- Phase 2: implement virtual NUMA for Xen PV guests;
- Phase 3: implement virtual NUMA for Xen HVM guests.
|
References: Useful references inlined in the project description |
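Purely as a hypothetical sketch, the structure below shows the kind of information a virtual NUMA topology would have to carry from the toolstack to the guest; none of these names correspond to an existing Xen interface, and a real design would come out of Phase 1.
 /* Hypothetical virtual NUMA topology descriptor (not an existing
  * Xen interface); distances follow the ACPI SLIT convention. */
 #include <stdint.h>
 #include <stdio.h>
 
 #define VNUMA_MAX_NODES 8
 #define VNUMA_MAX_VCPUS 128
 
 struct vnuma_topology {
     uint32_t nr_nodes;                      /* virtual nodes exposed */
     uint64_t mem_per_node[VNUMA_MAX_NODES]; /* bytes of RAM per vnode */
     uint32_t vcpu_to_node[VNUMA_MAX_VCPUS]; /* vCPU -> vnode mapping */
     uint8_t  distance[VNUMA_MAX_NODES][VNUMA_MAX_NODES];
 };
 
 int main(void)
 {
     struct vnuma_topology t = { .nr_nodes = 2 };
     t.mem_per_node[0] = t.mem_per_node[1] = 1ULL << 30; /* 1 GiB each */
     t.distance[0][0] = t.distance[1][1] = 10;           /* local */
     t.distance[0][1] = t.distance[1][0] = 20;           /* remote */
     printf("%u virtual nodes, %zu-byte descriptor\n", t.nr_nodes, sizeof t);
     return 0;
 }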
|
NUMA aware ballooning for Xen
Date of insert: 2012-12-12; Verified: Not specified, date when created; GSoC: Yes |
Mentor: Dario Faggioli <dario.faggioli@citrix.com> |
Difficulty: Medium |
Skills Needed: C programming, virtualization concepts |
Description: NUMA (Non-Uniform Memory Access) systems are advanced server platforms, comprising multiple nodes. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.
When it comes to memory, Xen offers a set of different mechanisms for over-committing the host memory, the most common, widely known and utilised being ballooning. This interferes non-trivially with NUMA friendliness. For instance, when freeing some memory, current ballooning implementations try to balloon down existing guests, but that happens without any knowledge or consideration of which node(s) the freed memory will end up on. As a result, we may be able to create the new domain, but not to place all its memory on a single node, as ballooning could well have freed half of the space on one node and half on another.
What this project therefore aims at is "teaching" ballooning how to make space "node-wise", i.e. ballooning down the VMs that would allow the new guest to fit into just one node (a sketch of such a node-wise decision is given below the project fields).
This project fits in the efforts the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: Xen NUMA Roadmap |
Outcomes: The candidate is expected to produce a set of patch series (one patch series for each phase of the project), send them to the Xen development mailing list and follow the typical open source process for having them upstreamed in Xen. |
Steps: The work on the project can be subdivided in the following phases:
- Phase 1: understand the existing ballooning algorithms and code;
- Phase 2: identify where to act to achieve what the project requires in the most effective way, namely: the ballooning code in the hypervisor? The ballooning driver in the guest? Both?
- Phase 3: modify ballooning algorithms so that memory is reclaimed node-wise.
|
References: Useful references inlined in the project description |
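One possible shape for Phase 3, assuming the chosen layer can see each domain's per-node memory footprint (the structure and helper below are hypothetical): to free a given amount of memory on a target node, greedily balloon down the domains holding the most memory on that node.
 /* Hypothetical sketch of a node-wise ballooning decision. */
 #include <stdint.h>
 #include <stdio.h>
 
 struct dominfo {
     int      domid;
     uint64_t mem_on_node; /* bytes this domain holds on the target node */
 };
 
 /* doms[] is assumed sorted by mem_on_node, largest first.
  * Returns the number of bytes still missing afterwards. */
 static uint64_t make_space_on_node(struct dominfo *doms, int n, uint64_t need)
 {
     for (int i = 0; i < n && need > 0; i++) {
         uint64_t take = doms[i].mem_on_node < need ? doms[i].mem_on_node : need;
         printf("balloon down domain %d by %llu bytes\n",
                doms[i].domid, (unsigned long long)take);
         need -= take;
     }
     return need; /* 0 means the new guest now fits on the node */
 }
 
 int main(void)
 {
     struct dominfo doms[] = { { 1, 1ULL << 30 }, { 2, 512ULL << 20 } };
     return make_space_on_node(doms, 2, 768ULL << 20) ? 1 : 0;
 }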
|
Rules and Advice for Adding Ideas
- Be creative
- Use the Template:GSoC Project template to encode ideas on this page. Please read the Template Documentation before you do so.
- Be specific: state what you want to be implemented; if at all possible, provide an indication of size and complexity as described above to make it easier for a student to choose ideas
- If you are willing to mentor one of these ideas, add your name and email to the idea.
- If you're an interested student, add your name and email next to the idea. It is OK to have several students interested in one idea.
- Aspiring students need to get in touch with the xen.org community manager via community.manager@xen.org to register their interest
New Project Ideas
|
Add new projects here.
|