Archived/GSoC 2013

{{InfoLeft|'''Unfortunately, Xen.org did not get accepted as a mentoring organization in 2013.'''}}
__TOC__


== GSoC and Xen ==
This page is used to list project ideas for [http://www.google-melange.com/gsoc/homepage/google/gsoc2013 Google Summer of Code (GSoC) 2013].

== Conventions for Projects ==
=== Rules and Advice for Adding Ideas ===
* Be creative
* Add projects into [[#Unreviewed Project Ideas|Project Ideas that Need Review]].
* Use the {{tl|GSoC Project}} template to encode ideas on this page. Please read the [[Template:GSoC Project|Template Documentation]] before you do so.
* Be specific: state what you want implemented; if at all possible, provide an indication of size and complexity to make it easier for a student to choose ideas
* Check that the project meets the [[#Goals|GSoC Program Goals]]
* If you are willing to mentor one of these ideas, add your name and email to the idea.
* If you're an interested student, add your name and email next to the idea. It is OK for several students to be interested in one idea.
* Aspiring students need to get in touch with the xen.org community manager via community.manager@xen.org to register their interest.

=== Peer Review Goals ===
We strongly recommend and invite project proposers and project mentors to review each other's proposals. When you review, please look out for the following:
* Can a student get started with the information in the project description?
* Are there unstated assumptions or undefined terminology in the proposal?
* Can the project be completed in 3 months (assume that one month is needed for preparation)?
* {{Anchor|Goals}}Does the project meet Google Summer of Code goals, which are
** Create and release open source code for the benefit of all
** Inspire young developers to begin participating in open source development
** Help open source projects identify and bring in new developers and committers
** Provide students the opportunity to do work related to their academic pursuits (think "flip bits, not burgers")
** Give students more exposure to real-world software development scenarios (e.g., distributed development, software licensing questions, mailing-list etiquette)

=== Peer Review Conventions ===
The {{tl|GSoC Project}} template used to encode GSoC projects contains some review functionality. Please read the [[Template:GSoC Project|Template Documentation]] before you add a template, and please use the conventions below when making comments.

<pre>
|Review=(delete as addressed)
* {{Comment|~~~~:}} Comment 1
* {{Comment|~~~~:}} Comment 2
</pre>


== Key Google Pages ==
Google Summer of Code 2013 is on (see the [http://google-opensource.blogspot.co.uk/2013/02/flip-bits-not-burgers-google-summer-of.html announcement]). Xen.org is intending to apply as a Mentoring Organization. Stay posted.


* [http://google-opensource.blogspot.co.uk/2013/02/flip-bits-not-burgers-google-summer-of.html GSoC announcement]
* [http://www.google-melange.com/gsoc/homepage/google/gsoc2013 GSoC Homepage]


=== Timeline ===
* '''March 18, 19:00 UTC:''' Mentoring organizations can begin submitting applications to Google.
* '''March 29, 19:00 UTC:''' Mentoring organization application deadline.
* '''April 1 - 5:''' Google program administrators review organization applications.
* '''April 8 19:00 UTC:''' List of accepted mentoring organizations published on the Google Summer of Code 2013 site.
* '''April 9 - 21:''' Would-be student participants discuss application ideas with mentoring organizations.
* '''April 22, 19:00 UTC:''' Student application period opens.
* '''May 3, 19:00 UTC:''' Student application deadline.

== Community Reviewed Project List ==


This section contains GSoC projects that have been reviewed by Xen Maintainers and Committers. Community members are free to add their own project ideas, but these need to be added to the [[#Unreviewed Project Ideas|Unreviewed Project Ideas]] section of this document.


{{InfoLeft|This section contains peer reviewed projects that have been selected based on the following criteria:
* A diverse list of projects, covering different levels of difficulty and required skills
* Well written (in particular, have a well written description)
* Contain steps, outcomes, skills required, etc. all written down
* Are peer reviewed and debated
* Have a diverse set of mentors

'''If your project did not make it into this list, it does not mean it will be excluded. It merely is not one of the projects that were ready when we applied for GSoC. Please add such projects to [[#Unreviewed Project Ideas|Unreviewed Project Ideas]].'''
}}


{{GSoC Project
|Project=Microcode uploader implementation in Xen hypervisor
|Anchor=microcode-uploader
|Difficulty=Medium to Hard
|Skills=Knowledge of C for Phase 1. For Phase 2, potentially x86 assembler and a deep knowledge of early boot-up. Familiarity with the Intel SDM is a plus.
|Date=02/08/2012
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|Desc=
Intel is working on an early-loading implementation where the microcode binary is appended to the initrd image. The kernel scans for the appropriate magic constant (http://thread.gmane.org/gmane.linux.kernel/1413384; it looks for "kernel/x86/microcode/GenuineIntel.bin") and loads the microcode very early. This is all done in the Linux kernel code, but we currently do not do that in the Xen hypervisor. The Xen hypervisor can do this similarly (see the illustrative sketch just after this project box).
|Steps=The scope of the work can be split up into:
# Just do the extraction of the microcode from the initial ramdisk binary (aka initrd) and apply it. This can be done during the parsing of the dom0 initial ramdisk. The hypervisor already has the functionality to apply microcode supplied as a multiboot module; this phase would add code to parse the initrd image.
# Do it during very early boot-up (which is why the early microcode work started) to deal with CPUs which don't expose certain CPUID flags because they need a microcode update. This part of the work is much more difficult, as it would involve working only with the early boot-up page tables, before the Xen hypervisor sets up its own page tables, because some of the fixes that the microcode carries can be needed for the CPU to do PSE properly.
|Outcomes=Patch to the Xen hypervisor to take advantage of this.
|References=The Intel SDM 3a (http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.pdf) gives an excellent overview of what microcode is and how to update it. The mechanism for bundling the microcode binary with the initrd, along with the initial implementation in the Linux kernel that takes advantage of it, is explained here: http://thread.gmane.org/gmane.linux.kernel/1413384
|Review=
* {{Comment|[[User:Ijc|Ijc]] 09:08, 8 February 2013 (UTC):}} It is not clear if this proposal is suggesting to add microcode loader support to the dom0 kernel or the hypervisor. It seems to be during the dom0 loader, however Linux already has this support since Intel have effectively completed the project described by the first link above, extending this to Xen would involve the same objections from upstream as they had to the original microcode patches. In any case Xen also already has support for CPU microcode loading very early on, which is much better than doing it from dom0 (which is arguably too late). The only useful extension I can see to the existing functionality is to add support to Xen for parsing dom0's initrd to pull out the microcode blob instead of obtaining it from the multiboot modules as is currently supported. Phase 2 here just isn't necessary, both Linux and Xen already contain the code described.
}}
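
To make the first phase above more concrete, here is a rough, self-contained C sketch (userspace, not hypervisor code) of the kind of walk over an uncompressed cpio "newc" archive that finding the microcode entry in an initrd involves. The fixed path comes from the Linux early-loading scheme referenced above; the function names and the standalone main are illustrative only.

<pre>
/*
 * Illustrative userspace sketch (not hypervisor code): scan a raw,
 * uncompressed initrd image for the cpio "newc" entry that carries the
 * early microcode blob, the way the Linux early-loading scheme expects
 * it ("kernel/x86/microcode/GenuineIntel.bin" in a plain cpio archive
 * prepended to the initrd). In the actual project the same walk would
 * be done by the hypervisor on the dom0 initrd it has been handed.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define UCODE_PATH "kernel/x86/microcode/GenuineIntel.bin"

static unsigned long hexfield(const char *p)
{
    char buf[9];
    memcpy(buf, p, 8);
    buf[8] = '\0';
    return strtoul(buf, NULL, 16);
}

/* Returns a pointer to the microcode data inside img, or NULL. */
static const char *find_ucode(const char *img, size_t len, size_t *blob_len)
{
    size_t off = 0;

    while (off + 110 <= len) {                      /* newc header is 110 bytes */
        const char *hdr = img + off;
        unsigned long filesize, namesize;
        size_t name_off, data_off;

        if (memcmp(hdr, "070701", 6) && memcmp(hdr, "070702", 6))
            break;                                  /* not a newc archive (any more) */

        filesize = hexfield(hdr + 54);              /* c_filesize field */
        namesize = hexfield(hdr + 94);              /* c_namesize field */
        name_off = off + 110;
        data_off = (name_off + namesize + 3) & ~3UL;    /* file data is 4-byte aligned */

        if (name_off + namesize > len || data_off + filesize > len)
            break;
        if (namesize == 11 && !memcmp(img + name_off, "TRAILER!!!", 11))
            break;                                  /* end-of-archive marker */

        if (namesize == sizeof(UCODE_PATH) &&
            !memcmp(img + name_off, UCODE_PATH, sizeof(UCODE_PATH))) {
            *blob_len = filesize;
            return img + data_off;
        }

        off = (data_off + filesize + 3) & ~3UL;     /* next entry header */
    }
    return NULL;
}

int main(int argc, char **argv)
{
    static char img[64 << 20];                      /* big enough for a test initrd */
    size_t len, blob_len;
    const char *blob;
    FILE *f = argc > 1 ? fopen(argv[1], "rb") : NULL;

    if (!f)
        return 1;
    len = fread(img, 1, sizeof(img), f);
    fclose(f);

    blob = find_ucode(img, len, &blob_len);
    if (blob)
        printf("microcode blob found: %zu bytes at offset %zu\n",
               blob_len, (size_t)(blob - img));
    else
        printf("no early microcode in this image\n");
    return 0;
}
</pre>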


|Date=01/22/2013
|Contact=George Dunlap <george.dunlap@eu.citrix.com>
|Difficulty=Medium to difficult
|Skills=C programming. A solid knowledge of how to use spinlocks to avoid race conditions and deadlock. Ability to read diffs, ability to use git and mercurial (hg).
|Desc=
PowerClamp was introduced to Linux in late 2012 in order to allow users to set a system-wide maximum power usage limit. This is particularly useful for data centers, where there may be a need to reduce power consumption based on availability of electricity or cooling. A [http://lwn.net/Articles/528124/ more complete writeup] is available at LWN.

These same arguments apply to Xen. The purpose of this project would be to implement similar functionality in Xen, and to make it interface as well as possible with the Linux PowerClamp tools, so that the same tools could be used for both.

The basic mechanism for PowerClamp in Linux is to monitor the percentage of time spent idling. When this time goes below a user-specified threshold, it activates a high-priority real-time process to force the CPU to idle for the specified amount of time (see the illustrative sketch just after this project box). The student would have to figure out how to apply this to Xen's main scheduler, the Credit scheduler.

The idea is fairly straightforward, but working with the scheduler involves dealing with very tricky race conditions and deadlock. This should be a fairly straightforward project for a very clever student looking for a fun problem to solve. It should also provide a good taste of what operating-system level programming is like.
|Outcomes=&nbsp;
* Mechanism to enforce a certain percentage of idle time in Xen
* An appropriate way to access this: either Xen's xl command-line interface, or the PowerClamp tools for Linux (if any), or both.
|Steps=
The first step would be to apply the idea to the main Xen scheduler, the Credit1 scheduler. What is the best way to implement this? Adding an extra priority level? Re-using the existing credit mechanism?

The second step would be to design an appropriate interface. Are there any PowerClamp userspace tools for Linux? Does it make sense to try to integrate those tools with Xen, or should we just have this be a separate Xen feature, accessible via Xen's xl command-line interface?
|References=[http://lwn.net/Articles/528124/ LWN Article on PowerClamp]
}}
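
As a purely illustrative aid, the following standalone C sketch shows the control calculation at the heart of the idle-injection idea described above: compare the naturally occurring idle time in an accounting window against the administrator's target and inject the difference. It is neither Linux PowerClamp code nor Xen scheduler code; the window length and all names are assumptions made for the example.

<pre>
/*
 * Simplified, standalone illustration of the control idea behind
 * PowerClamp-style idle injection: given how much CPU time was busy in
 * the last accounting window, work out how much forced idle time would
 * be needed in the next window to hit a target idle percentage.
 * Not Linux PowerClamp code and not Xen scheduler code.
 */
#include <stdio.h>
#include <stdint.h>

#define WINDOW_US 100000u                 /* 100 ms accounting window (illustrative) */

static unsigned int target_idle_pct = 25; /* administrator-set target */

/* busy_us: time the pCPU spent running vCPUs during the last window.
 * Returns the microseconds of idle to force during the next window. */
static uint64_t forced_idle_needed(uint64_t busy_us)
{
    uint64_t idle_us = busy_us >= WINDOW_US ? 0 : WINDOW_US - busy_us;
    uint64_t target_idle_us = (uint64_t)WINDOW_US * target_idle_pct / 100;

    if (idle_us >= target_idle_us)
        return 0;                         /* natural idle already meets the target */

    /* In Xen this difference would become a highest-priority "idle
     * injection" slot in the credit scheduler for this pCPU. */
    return target_idle_us - idle_us;
}

int main(void)
{
    uint64_t busy;

    for (busy = 0; busy <= WINDOW_US; busy += 25000)
        printf("busy %6lu us -> inject %5lu us of idle\n",
               (unsigned long)busy, (unsigned long)forced_idle_needed(busy));
    return 0;
}
</pre>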


{{GSoC Project
|Project=Virtual NUMA for Xen guests
|Anchor=vnuma
|Difficulty=Medium
|Date=12/12/2012
|Contact=Dario Faggioli <dario.faggioli@citrix.com>
|Skills=C programming, computer architecture, virtualization concepts
|Desc=
NUMA ([http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access Non-Uniform Memory Access]) systems are advanced server platforms, comprising multiple ''nodes''. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are therefore not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

Ideally, each VM should have its memory allocated out of just one node and, as long as its vCPUs also run there, both throughput and latency are optimal. However, in cases where a VM ends up having its memory allocated from multiple nodes, we should inform it that it is running on a NUMA platform: a virtual NUMA. This could be very important, especially for some specific workloads (for instance, HPC applications). In fact, if the guest OS and application have any NUMA support, exporting the virtual topology is the only way to make that support effective, and perhaps to fill, at least to some extent, the performance gap introduced by the need to distribute the guest over more than one node. Just for reference, this feature, under the name of vNUMA, is one of the key and most advertised ones of VMware vSphere 5 ([http://cto.vmware.com/vnuma-what-it-is-and-why-it-matters/ vNUMA: what it is and why it matters]). A hypothetical sketch of the information a virtual topology needs to convey is shown just after this project box.

This project fits in the efforts the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: [[Xen NUMA Roadmap]]
|Steps=The work on the project can be subdivided into the following phases:

* Phase 1: identify the constraints that introducing virtual NUMA would impose on the other components of the Xen architecture (or, vice versa, the constraints that the existing components of the Xen architecture would impose on virtual NUMA). Put together a design coherent with these constraints and share it with the Xen development community to get feedback on it;

* Phase 2: implement virtual NUMA for Xen PV guests;

* Phase 3: implement virtual NUMA for Xen HVM guests.
|Outcomes=The candidate is expected to produce a set of patch series (one patch series for each phase of the project), send them to the Xen development mailing list and follow the typical Open Source process for having them upstreamed in Xen.
|References=Useful references are inlined in the project description. Notice that having a NUMA testing machine handy would be really useful for this project. However, if that is not the case, solutions will be found to allow the participant to properly test the code.
}}
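
As a purely hypothetical illustration of what "exporting a virtual topology" involves, the sketch below shows the kind of information a virtual NUMA description would have to carry (node sizes, vCPU placement, distances). None of this is an existing Xen interface; designing the real one is exactly what Phase 1 is about, so every name and field here is an assumption.

<pre>
/*
 * Hypothetical sketch of the information a virtual NUMA topology has to
 * convey to a guest. The actual interface (hypercall, ACPI tables for
 * HVM guests, etc.) is what Phase 1 would design and agree on
 * xen-devel; nothing here is an existing Xen structure.
 */
#include <stdio.h>
#include <stdint.h>

#define VNUMA_MAX_NODES 8
#define VNUMA_MAX_VCPUS 16

struct vnuma_topology {
    unsigned int nr_vnodes;                             /* virtual nodes exposed */
    uint64_t mem_mb[VNUMA_MAX_NODES];                   /* memory per virtual node */
    unsigned int vcpu_to_vnode[VNUMA_MAX_VCPUS];        /* vnode each vCPU sits on */
    uint8_t distance[VNUMA_MAX_NODES][VNUMA_MAX_NODES]; /* SLIT-style access costs */
};

int main(void)
{
    /* A 4-vCPU guest spread over two virtual nodes of 2 GB each. */
    struct vnuma_topology t = {
        .nr_vnodes = 2,
        .mem_mb = { 2048, 2048 },
        .vcpu_to_vnode = { 0, 0, 1, 1 },
        .distance = { { 10, 20 }, { 20, 10 } },  /* local=10, remote=20, as in ACPI SLIT */
    };
    unsigned int v;

    for (v = 0; v < 4; v++)
        printf("vCPU %u -> vnode %u (%lu MB local)\n", v,
               t.vcpu_to_vnode[v],
               (unsigned long)t.mem_mb[t.vcpu_to_vnode[v]]);
    return 0;
}
</pre>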


{{GSoC Project
|Project=NUMA aware ballooning for Xen
|Anchor=numa-ballooning
|Difficulty=Medium
|Date=12/12/2012
|Contact=Dario Faggioli <dario.faggioli@citrix.com>
|Skills=C programming, virtualization concepts
|Desc=NUMA ([http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access Non-Uniform Memory Access]) systems are advanced server platforms, comprising multiple ''nodes''. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are therefore not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

When it comes to memory, Xen offers a set of different mechanisms for over-committing the host memory; the most common, widely known and utilised one is ballooning. This has non-trivial interference with NUMA friendliness. For instance, when freeing some memory, current ballooning implementations try to ''balloon down'' existing guests, but that happens without any knowledge or consideration of which node(s) the freed memory will end up on. As a result, we may be able to create the new domain, but not be able to place all its memory on a single node, as ballooning could well have freed half of the space on one node and half on another.

What this project is therefore aimed at is "teaching" ballooning how to make space "node-wise", i.e., ballooning down the VMs in a way that allows the new guest to fit into just one node (a toy sketch of such a node-wise selection follows this project box).

This project fits in the efforts the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: [[Xen NUMA Roadmap]]
|Steps=The work on the project can be subdivided into the following phases:

* Phase 1: understand the existing ballooning algorithms and code. While at it, check whether the currently available documentation (both on the Xen Wiki and in the source tree) is up to date and aligned with the actual code behavior and, if not, fix it;

* Phase 2: identify where to act to achieve what the project requires in the most effective way, namely: the ballooning code in the hypervisor? The ballooning driver in the guest? Both?

* Phase 3: modify the ballooning algorithms so that memory is reclaimed node-wise.
|Outcomes=The candidate is expected to produce a set of patch series (one patch series for each phase of the project), send them to the Xen development mailing list and follow the typical Open Source process for having them upstreamed in Xen.
|References=Useful references are inlined in the project description. Notice that having a NUMA testing machine handy would be really useful. However, if that is not the case, solutions will be found to allow the participant to properly test the code.
}}
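
The following toy C sketch illustrates the "make space node-wise" idea from the description: given how much memory each guest could release on the target node, greedily pick whom to balloon down. The numbers and structure are invented for the example; in the real project this information would come from Xen and the requests would go through the toolstack and the guests' balloon drivers.

<pre>
/*
 * Toy sketch of "node-wise" ballooning: given how much memory each
 * running guest could give back from one specific NUMA node, pick the
 * guests to balloon down so that enough space for the new domain
 * becomes free on that node. Purely illustrative.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct guest_node_mem {
    int      domid;
    uint64_t reclaimable_mb;   /* memory this guest could free on the node */
    uint64_t to_balloon_mb;    /* output: how much to ask it to free */
};

/* Greedy selection; returns how much of needed_mb could not be satisfied. */
static uint64_t plan_node_ballooning(struct guest_node_mem *g, size_t n,
                                     uint64_t needed_mb)
{
    size_t i;

    for (i = 0; i < n && needed_mb > 0; i++) {
        uint64_t take = g[i].reclaimable_mb < needed_mb ? g[i].reclaimable_mb
                                                        : needed_mb;
        g[i].to_balloon_mb = take;
        needed_mb -= take;
    }
    return needed_mb;
}

int main(void)
{
    struct guest_node_mem guests[] = {
        { .domid = 1, .reclaimable_mb = 512 },
        { .domid = 2, .reclaimable_mb = 1024 },
        { .domid = 3, .reclaimable_mb = 256 },
    };
    uint64_t missing = plan_node_ballooning(guests, 3, 1200);
    size_t i;

    for (i = 0; i < 3; i++)
        printf("dom%d: balloon down by %lu MB\n", guests[i].domid,
               (unsigned long)guests[i].to_balloon_mb);
    printf("still missing on this node: %lu MB\n", (unsigned long)missing);
    return 0;
}
</pre>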

{{GSoC Project
|Project=Temporal Isolation and Multiprocessor Support in the SEDF Scheduler
|Anchor=sedf-improvements
|Difficulty=Basic to Medium
|Date=08/08/2012
|Contact=Dario Faggioli <dario.faggioli@citrix.com>
|Skills=C programming, genuine interest in scheduling algorithm design and implementation
|Desc=
No matter whether it is to build a [http://gigaom.com/2011/06/25/mobile-virtualization-finds-its-home-in-the-enterprise/ multi-personality mobile phone] or to [http://www.youtube.com/watch?v=j4uMdROzEGI help achieve consolidation in industrial and factory automation], embedded virtualization ([http://en.wikipedia.org/wiki/Embedded_hypervisor [1]], [http://www.ibm.com/developerworks/linux/library/l-embedded-virtualization/index.html [2]], [http://www.wirevolution.com/2012/02/18/mobile-virtualization/ [3]]) is upon us. In fact, quite a number of ''embedded hypervisors'' already exist, e.g.: [http://www.windriver.com/products/hypervisor/ Wind River Hypervisor], [http://dev.b-labs.com/ CodeZero] or [http://www.sysgo.com/products/pikeos-rtos-and-virtualization-concept/ PikeOS]. Xen definitely '''is''' a ''small, fast type-1 hypervisor with support for multiple VMs'' [http://en.wikipedia.org/wiki/Embedded_hypervisor [1]], so it could be a good candidate embedded hypervisor.

Moreover, Xen offers an implementation of one of the most famous and efficient real-time scheduling algorithms, [http://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling Earliest Deadline First] (which is called SEDF in Xen), and real-time support is a key feature for a successful embedded hypervisor. Using such an advanced scheduling policy is, if it is implemented correctly, a great advancement and provides much more flexibility than only using vCPU pinning (which is what most embedded hypervisors do to guarantee real-time performance and isolation).

However, SEDF, the EDF implementation in Xen, suffers from some rough edges. In fact, as of now, SEDF deals with events such as a vCPU blocking (in general, stopping running) and unblocking (in general, restarting running) by trying (and failing!) to special case all the possible situations, resulting in code that is rather complicated, ugly, inefficient and hard to maintain. Unified approaches have been proposed for handling blocking and unblocking in EDF, while still guaranteeing temporal isolation among different vCPUs.

SEDF also lacks proper multiprocessor support, meaning that it does not properly handle SMP systems unless vCPUs are specifically and statically pinned by the user. This is a big limitation of the current implementation, especially since EDF can work well without imposing this constraint, providing much more flexibility and efficiency in exploiting the system resources to their fullest.

Therefore, this project aims at extending the SEDF scheduler, turning it into a proper multiprocessor and temporal-isolation enabled scheduling solution. For temporal isolation, among the various solutions proposed in the real-time academic literature, one that is very effective and yet very simple to implement is the Constant BandWidth Server algorithm (CBS, [http://xoomer.virgilio.it/lucabe72/pshare/pshare.html [1]], [http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=739726&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D739726 [2]], [http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1137390&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1137390 [3]]); a minimal sketch of its core rules follows this project box. For multiprocessing, just adopting a different approach to managing the scheduling ready queues (e.g., having one queue serve multiple pCPUs) would be enough. Of course, envisioning and implementing mechanisms for migrating vCPUs among different queues would be even better.
|Steps=The work on the project can be subdivided into the following phases:

* Phase 1: study and understand the CBS algorithm, and figure out the differences between it and the current SEDF implementation;

* Phase 2: get rid of all the special cases for dealing with vCPU blocking and unblocking and implement CBS on top of the existing SEDF code. Completing this phase would mean having successfully enabled proper temporal isolation within SEDF;

* Phase 3: instead of using one scheduling run-queue per physical processor (pCPU), use only one per "set of pCPUs". For instance, one run-queue for all the pCPUs that share a common L3 cache, as credit2, another scheduler present in Xen, is doing already. Completing this phase would mean having turned SEDF into a decent enough multiprocessor enabled scheduler;

* Phase 4 [Optional]: envision and implement a mechanism for balancing and migrating vCPUs among different run-queues. Completing this phase would mean having turned SEDF into a full-fledged multiprocessor enabled scheduler.
|Outcomes=The candidate is expected to produce a set of patch series, more specifically one series for each phase of the project, send them to the Xen development mailing list and follow the typical Open Source process for having them upstreamed in Xen.

Having reached a good level of temporal isolation must be verified by running some typical real-time workload (e.g., [https://rt.wiki.kernel.org/index.php/Cyclictest Cyclictest] and [https://github.com/gbagnoli/rt-app rt-app]) inside a VM, and checking that its timing requirements are being respected, despite the interference of other VMs. Correct exploitation of multiprocessor platforms must be verified by making sure the vCPUs automatically spread out, instead of all being stuck on just one pCPU.
|References=Useful references are inlined in the project description
|Review=(delete as addressed)
* {{Comment|[[User:Ijc|Ijc]] 09:36, 11 February 2013 (UTC):}} What is the outcome/deliverable for stage 1 (investigate CBS)? Is CBS the only option here or does the candidate need to evaluate other techniques? Is CBS the "Unified approaches ... enabling blocking" which the description refers to? Are there any particular success criteria for the other phases, e.g. specific performance characteristics of benchmark results which must be achieved? Where does the "implemented multiprocessor support" appear in the phases, is it a side effect of CBS or is it phase 3/4?
* {{Comment|[[User:Dariof|Dariof]], 13 March 2013 (Replying to [[User:Ijc|Ijc]]):}} Abstracted the part about the CBS algorithm outside from the "Steps" section, so that it is more clear that this is the solution allowing to kill the special cases and enable proper temporal isolation (as well as that there is no need to investigate any different algorithm). Clarified a bit more, both in the general description than in the description of the various phases, what each steps contributes to (to make it clear than turning SEDF into an SMP scheduler is not a consequence of CBS, it rather is what is done in phases 3 and 4). Gave some directions about validation and benchmarking.
}}
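
For reference, the following minimal C sketch spells out the two CBS rules mentioned above (deadline check on wake-up, deadline postponement on budget exhaustion), outside of any Xen code. The s_time_t type mirrors Xen's nanosecond time unit; the structure and function names are illustrative and are not the existing SEDF code.

<pre>
/*
 * Minimal sketch of the two Constant Bandwidth Server (CBS) rules that
 * Phase 2 would bring to SEDF. Each vCPU is treated as a server with a
 * budget Q replenished every period T.
 */
#include <stdio.h>
#include <stdint.h>

typedef int64_t s_time_t;            /* nanoseconds */

struct cbs_vcpu {
    s_time_t Q;                      /* budget per period                   */
    s_time_t T;                      /* server period                       */
    s_time_t budget;                 /* budget currently left               */
    s_time_t deadline;               /* absolute deadline, i.e. the EDF key */
};

/*
 * Rule 1 -- on unblock (wake-up): keep the current deadline only if the
 * leftover budget does not exceed the bandwidth still available before
 * it; otherwise start a fresh period. This single rule replaces SEDF's
 * ad-hoc special-casing of blocking/unblocking events.
 */
static void cbs_wake(struct cbs_vcpu *v, s_time_t now)
{
    /* budget/(deadline-now) > Q/T  <=>  budget*T > (deadline-now)*Q */
    if (v->deadline <= now || v->budget * v->T > (v->deadline - now) * v->Q) {
        v->deadline = now + v->T;
        v->budget   = v->Q;
    }
}

/*
 * Rule 2 -- on budget exhaustion: postpone the deadline by one period
 * and replenish. The vCPU keeps competing, but it can no longer steal
 * bandwidth reserved for other vCPUs: that is the temporal isolation.
 */
static void cbs_budget_exhausted(struct cbs_vcpu *v)
{
    v->deadline += v->T;
    v->budget    = v->Q;
}

int main(void)
{
    struct cbs_vcpu v = { .Q = 2000000, .T = 10000000 };   /* 2 ms every 10 ms */

    cbs_wake(&v, 0);
    printf("after wake:       budget=%lld deadline=%lld\n",
           (long long)v.budget, (long long)v.deadline);
    cbs_budget_exhausted(&v);
    printf("after exhaustion: budget=%lld deadline=%lld\n",
           (long long)v.budget, (long long)v.deadline);
    return 0;
}
</pre>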


|Date=15/11/2012
|Contact=Roger Pau Monné <roger.pau@citrix.com>
|Difficulty=Medium
|Skills=Knowledge of C and a good level of shell scripting
|Desc=
Current Linux hotplug scripts are all entangled, which makes them really difficult to understand or modify. The point of hotplug scripts is to give end users the chance to "easily" support different configurations for Xen devices.

Linux hotplug scripts should be analyzed, providing a good description of what each hotplug script is doing. After this, the scripts should be cleaned up, putting common pieces of code in shared files across all scripts. A coding style should be applied to all of them when the refactoring is finished.

Also, a new hotplug implementation is currently under review [http://lists.xen.org/archives/html/xen-devel/2013-01/msg01962.html [1]], which will allow the user to create more complex hotplug scripts that offer extended functionality. Optionally the student can implement support for other backends using the new hotplug interface (GlusterFS, Ceph...).
|Steps=The work on the project can be subdivided into the following phases:

* Phase 1: analyze the hotplug scripts and determine what each script does internally in order to attach the device
* Phase 2: move common bits of code to shared files, providing a sane API
* Phase 3: refactor the hotplug scripts to use this new API, and clean the code, applying a uniform coding style
* Phase 4 [Optional]: create hotplug scripts for new backends (GlusterFS, Ceph)
|Outcomes=The candidate is expected to produce at least one series of patches, containing the new internal hotplug API and the refactoring of the old scripts, send it to the Xen development mailing list and follow the typical Open Source process for having it upstreamed in Xen.
|References=[http://xenbits.xen.org/gitweb/?p=xen.git;a=tree;f=tools/hotplug/Linux;hb=HEAD Source of current scripts]
|Review=(delete as addressed)
* {{Comment|[[User:Ijc|Ijc]] 09:49, 11 February 2013 (UTC):}} Can we include a specific requirement to not just analyze but also document the behavior of the scripts, both the high-level semantics of each class of script (vif, block etc) but also the specifics of each (e.g. vif-{bridge,route,etc})? Ideally this would integrate with existing pages like [[Xen Networking]]. Should there also be a focus on customizability? I think it is expected that people will customize the scripts to suit there environments but due to the complexity a lot of folks don't. A refactoring project is not inherently that exiting, so I'm not sure how much it would appeal to students, perhaps Phase 4 could be non-optional and require the creation of at least one new set of hotplug scripts to be created, as a kind of concrete end goal to all the refactoring? Not sure if that explodes the scope/time required out too far though. Ideally a new network script would be included to (to cover both main sets of bases) but we already cover most of the interesting cases there I think, openvswitch perhaps? I'm a little bit concerned that this project might also be chasing a moving target as the hotplug mechanism is refactored, but perhaps much of that will be finished by the time GSoC starts and having the person doing that refactoring also mentor the project should help minimise problems.
}}


|Date=15/11/12
|Contact=Ian Campbell <ian.campbell@citrix.com>
|Difficulty=Medium
|Skills=Knowledge of either C or OCaml (or both) or another suitable language.
|Desc=
Currently [[XL|xl]] (the toolstack supplied alongside Xen) and [[XAPI|xapi]] (the XCP toolstack) have very different concepts about domain configuration, disk image storage etc. In the XCP model domain configuration is persistent and stored in a database, while under xl domain configuration is written in configuration files. Likewise, disk images are stored as VDIs in Storage Repositories, while under xl disk images are simply files or devices in the dom0 filesystem. For more information on xl see [[XL]]. For more information on XCP see [[XCP Overview]].

These tools could be reasonably bundled as part of either toolstack and by implication could be written in either C, OCaml or some other suitable language.

The tools should work on both PV and HVM domains. The subset of properties which are common to both toolstacks and which are to be considered required for successful completion of the project will be determined early on in the project.

The tool need not operate on a live VM, but that could be considered a stretch goal.

An acceptable alternative to the proposed implementation would be to implement a tool which converts between a commonly used VM container format which is supported by XCP (perhaps [http://en.wikipedia.org/wiki/Open_Virtualization_Format OVF] or similar) and the xl toolstack configuration file and disk image formats.
|Steps=A suggested set of steps for completion of the project is:
# Set up both toolstacks and create suitable virtual machines on both
# Investigate both toolstacks to determine what existing import/export functionality is present
# Document the mapping between Virtual Machine properties of both toolstacks
# Evaluate mechanisms for conversion, including disk image format conversion, and decide which VM properties are required for success
# Document how to perform a manual (i.e. by hand) conversion in either direction
# Implement and document tool(s) to automate the conversion process
# Post patches for review
# Iterate patches until acceptance
|Outcomes=Code submitted to xen-devel@ and/or xen-api@ for tools to migrate Virtual Machines between toolstacks. Must include documentation.
|References=[[XAPI]], [[XL]], [[XCP Overview]]
}}


|Anchor=vm-snapshots
|Date=16/01/2013
|Contact=Anthony Perard <anthony.perard@citrix.com>
|Difficulty=Medium
|Skills=C programming
|Desc=
Although xl is capable of saving and restoring a running VM, it is not currently possible to create a snapshot of the disk together with the rest of the VM.

QEMU is capable of creating, listing and deleting disk snapshots on QCOW2 and QED files, so even today, by issuing the right commands via the QEMU monitor, it is possible to create disk snapshots of a running Xen VM. xl and libxl have no knowledge of these snapshots and do not know how to create, list or delete them.

This project is about implementing disk snapshot support in libxl, using the QMP protocol to issue commands to QEMU (a rough sketch of such an exchange follows this project box). Users should be able to manage the entire life-cycle of their disk snapshots via xl.

The candidate should also explore ways to integrate QEMU disk snapshots and disk mirroring into the regular Xen save/restore mechanisms and provide a solid implementation for xl/libxl.
|Steps=&nbsp;
Basic steps:
* Study libxl APIs for storage
* Study QEMU QMP commands for VM snapshots
* Implement support for QMP snapshot commands in libxl
* Implement VM snapshot functionality in libxl using the QMP functions previously written
* Add VM snapshot commands to xl
Stretch goals:
* Add VM snapshot functionality to the libxl save/restore and migration functions
* Evaluate QEMU QMP disk mirroring capabilities (QMP command "drive-mirror")
* Implement support for the QMP drive-mirror command in libxl
* Hook disk mirroring into the libxl VM save/restore and migration functions (so that migrating a VM from one host to another also migrates the VM disk between the two hosts).
|Outcomes=&nbsp;
Basic goal: disk snapshots can be handled entirely by xl.

Stretch goals: xl can automatically save a disk snapshot at the time of saving a VM. xl can also mirror the disk of a VM between two hosts and can do that automatically at the time of VM migration.
|References=[[XL]], [http://www.qemu.org QEMU]
|Review=(delete as addressed)
* {{Comment|[[User:Ijc|Ijc]] 10:05, 11 February 2013 (UTC):}} Although this project is specifically targeting the QEMU snapshot mechanism we should require that the libxl API which is exposed is general enough to be applied to other disk backends (blktap3, lvm snapshot, btrfs, etc)
}}
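
To give a feel for the QMP exchanges the project would drive from libxl, here is a rough standalone C sketch that connects to a QEMU QMP socket, performs the mandatory qmp_capabilities handshake and issues a blockdev-snapshot-sync command. libxl already ships its own QMP client, so this is only an illustration; the socket path and the "ide-hd0" device name are assumptions that depend on the toolstack and guest configuration.

<pre>
/*
 * Rough standalone sketch of talking QMP to the QEMU process backing a
 * Xen HVM guest. Socket path and device name are illustrative; error
 * handling is minimal.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static void qmp_read(int fd)                /* print whatever QEMU sends back */
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);

    if (n > 0) {
        buf[n] = '\0';
        printf("QMP <- %s\n", buf);
    }
}

static void qmp_send(int fd, const char *json)
{
    printf("QMP -> %s\n", json);
    if (write(fd, json, strlen(json)) > 0)
        qmp_read(fd);
}

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    /* Illustrative path; libxl creates one QMP socket per domain. */
    strncpy(addr.sun_path, "/var/run/xen/qmp-libxl-1", sizeof(addr.sun_path) - 1);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    qmp_read(fd);                                           /* greeting banner */
    qmp_send(fd, "{ \"execute\": \"qmp_capabilities\" }");  /* mandatory handshake */

    /* Take an external disk snapshot; "ide-hd0" is an illustrative device name. */
    qmp_send(fd, "{ \"execute\": \"blockdev-snapshot-sync\", \"arguments\": {"
                 " \"device\": \"ide-hd0\","
                 " \"snapshot-file\": \"/var/lib/xen/images/guest-snap.qcow2\","
                 " \"format\": \"qcow2\" } }");
    close(fd);
    return 0;
}
</pre>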


|Date=28/11/2012
|Contact=Anil Madhavapeddy <anil@recoil.org>
|Difficulty=Medium
|Skills=OCaml programming. C programming.
|Desc=
MirageOS (http://openmirage.org) is a type-safe exokernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening guest kernel. We would like to use the Mirage/Xen libraries to fuzz test all levels of a typical cloud toolstack. Mirage has low-level bindings for Xen hypercalls, mid-level bindings for domain management, and high-level bindings to XCP for cluster management. This project would build a QuickCheck-style fuzzing mechanism that would perform millions of random operations against a real cluster, and identify bugs with useful backtraces.
|Steps=&nbsp;
* Set up a Mirage build environment; test it by building and running the examples (http://github.com/mirage/mirage-skeleton)
* Make a simple fuzzer with the existing hypercall bindings
* Extend the set of bindings, and extend the fuzzer to match
|Outcomes=&nbsp;
* a git repository containing a Mirage-based fuzz tester
* a pull request (to http://github.com/mirage/mirage-platform) containing additional hypercall bindings
|References=&nbsp;
* http://www.openmirage.org -- the Mirage website
* http://github.com/mirage/mirage-platform -- the Mirage Xen and Unix runtimes
* http://github.com/mirage/mirage-skeleton -- example Mirage programs
|Review=(delete as addressed)
* {{Comment|[[User:Ijc|Ijc]] 10:05, 11 February 2013 (UTC):}} There are some interesting challenges which aren't mentioned here, specifically:
** reproducability of a given run leading to a crash
** how to handle guests which crash themselves while fuzzing, e.g. management of random seeds and respawning, measuring progress and perhaps snapshotting and restarting along multiple paths (so a single crash doesn't wipe out all the interesting state built up by the fuzzer up to that point)
** logging of what is going on in the face of hosts which may crash when the fuzzer "succeeds".
* {{Comment|[[User:Ijc|Ijc]] 10:05, 11 February 2013 (UTC):}} It would also be useful to take inspiration from the [http://codemonkey.org.uk/projects/trinity/ trinity] Linux system call fuzzer which encodes a certain level of knowledge of what the inputs to each system/hypercall should look like such that it can probe "interesting" (i.e. limits) values with more than random probability and also provide plausible input for some arguments so as to not continually mask errors in the other options (e.g. with some probability pass a valid socket to the int fd argument of a call which expects a socket, so that the other arguments have some chance of even being evaluated). Likewise for calls which take a pointer you would want to make sure the fuzzer would occasionally (or even mostly) pass in valid pointers such that the contents of the pointed to struct can also be fuzzed.
}}

{{GSoC Project
|Project=Towards a multi-language unikernel substrate for Xen
|Anchor=unikernel-substrate
|Date=28/11/2012
|Contact=Anil Madhavapeddy <anil@recoil.org>
|Desc=
There are several languages available that compile directly to Xen microkernels, instead of running under an intervening guest OS. We're dubbing such specialised binaries as "unikernels". Examples include:

* OCaml: Mirage http://openmirage.org
* Haskell: HalVM https://github.com/GaloisInc/HaLVM#readme
* Erlang: ErlangOnXen http://erlangonxen.org
* Java: GuestVM http://labs.oracle.com/projects/guestvm/

Each of these is in a different state of reliability and usability. We would like to survey all of them, build some common representative benchmarks to evaluate them, and build a common toolchain that will make it easier to share code across such efforts. This project will require a reasonable grasp of several programming languages and runtimes, and should be an excellent project to learn more about the innards of popular languages.
|Difficulty=Difficult
|Skills=Familiarity with C and at least one of OCaml, Haskell, Erlang or Java.
|Steps=&nbsp;
* Set up a test and dev environment capable of building at least one of the above "unikernels" in the language you are most familiar with.
* Create a simple (less than 1000 words) description of how the code is structured. Include a description of how the VM starts, how the runtime starts and how the application runtime starts.
* Get the code for the other runtimes, and create one document for each of them, using the one you're familiar with as a reference.
* Create a C library and build environment containing code which could be used by all the runtimes.
|Outcomes=&nbsp;
* One document per runtime describing how it works
* A proposed common library which would allow all the runtimes to share the low-level routines
|References=see above
}}


|Date=2013-01-23
|Contact=Ian Jackson <ian.jackson@eu.citrix.com>
|Difficulty=Basic
|Skills=Knowledge of Perl, some familiarity with Debian preseeding
|Desc=
The testing system "osstest", which is used for the [[Submitting_Xen_Patches#After_your_patch_is_committed |push gate]] for the xen and related trees, should have Debian PV and HVM guest installations, based on the standard Debian installer, in its repertoire. Also, it currently always tests kernels as host and guest in the same installation.
|Outcomes=Code for guest installation in live osstest.git; public report on which Linux branches to deploy for; new test cases enabled in production.
|Steps=&nbsp;
* Task 1: Generalise the functions in osstest which generate debian-installer preseed files and manage the installation, to teach them how to set up PV and HVM guests, and provide an appropriate ts-* invocation script.
* Task 1b: Upstream this code into the production osstest.git.
* Task 2: Extend the guest installer from task 1 to be able to install a kernel other than the one which comes from the Debian repository, so that it is possible to test one kernel as host with a different specified kernel as guest.
* Task 2b: Upstream this code into the production osstest.git.
* Task 3: Determine which combinations of kernel branches should be added to the test schedules, push gates, etc. and write this up in a report for deployment by the infrastructure maintainers.
* Task 4: Assist with deployment and debugging after the new functionality is deployed in production in accordance with the report from Task 3.
|References=
See xen-devel test reports (via the xen-devel list archives). Code is at http://xenbits.xen.org/gitweb/?p=osstest.git;a=summary

An introduction to the Xen automatic test system is at http://blog.xen.org/index.php/2013/02/02/xen-automatic-test-system-osstest/
|Review=(delete as addressed)
* {{Comment|[[User:Ijc|Ijc]] 10:18, 11 February 2013 (UTC):}} I'd be happy to co-advise on the D-I aspects of this. In Task 2 "kernel other than the one which comes from the Debian repository", do you really mean "from the dom0 filesystem"? The D-I kernels do come from the Debian repo. Also is the intention to support testing guests which use pygrub, since that fits naturally with the D-I approach? Is the intention to only do netinst installs or is there scope to do D-I installs from ISO images too?
}}


|Date=2013-01-23
|Contact=Ian Jackson <ian.jackson@eu.citrix.com>
|Difficulty=Basic to Medium
|Skills=Knowledge of Perl and NetBSD's installer
|Desc=
The testing system "osstest", which is used for the [[Submitting_Xen_Patches#After_your_patch_is_committed |push gate]] for the xen and related trees, should be able to test NetBSD both as host and guest.
|Outcomes=Code for host and guest installation in live osstest.git; public report on which combinations of tests to deploy for; testing of NetBSD enabled in production.
|Steps=&nbsp;
* Task 1: Understand how best to automate installation of NetBSD. Write code in osstest which is able to automatically and noninteractively install NetBSD on a bare host.
* Task 2: Test and debug osstest's automatic building arrangements so that they can correctly build Xen on NetBSD.
* Task 2b: Upstream this code into the production osstest.git.
* Task 3: Write code in osstest which can automatically install the Xen from task 2 on the system installed by task 1.
* Task 3b: Upstream this code into the production osstest.git.
* Task 4: Debug at least one of the guest installation capabilities in osstest so that it works on the Xen system from task 3.
* Task 5: Rework the code from task 1 so that it can also install a NetBSD guest, ideally either as a guest of a Linux dom0 or of a NetBSD dom0.
* Task 5b: Upstream this code into the production osstest.git.
* Task 6: Determine which versions of NetBSD and of Linux should be tested in which combinations and write this up in a report for deployment by the infrastructure maintainers.
* Task 7: Assist with deployment and debugging after the new functionality is deployed in production in accordance with the report from Task 6.
|References=
See xen-devel test reports (via the xen-devel list archives). Code is at http://xenbits.xen.org/gitweb/?p=osstest.git;a=summary

An introduction to the Xen automatic test system is at http://blog.xen.org/index.php/2013/02/02/xen-automatic-test-system-osstest/
}}


{{Anchor|Unreviewed Project Ideas}}


== Project Ideas that Need Review ==


{{GSoC Project
|Project=Allowing guests to boot with a passed-through GPU as the primary display
|Anchor=gpu-passthrough
|Date=01/22/2013
|Contact=George Dunlap <george.dunlap@eu.citrix.com>
|Difficulty=Difficult
|Skills=C programming. Assembly language debugging.
|Desc=
One of the primary drivers of Xen in the "consumer market" of the open-source world is the ability to pass through GPUs to guests -- allowing people to run Linux as their main desktop but easily play games requiring proprietary operating systems without rebooting.

GPUs can be easily passed through to guests as secondary displays, but as of yet cannot be passed through as primary displays. The main reason is the lack of ability to load the VGA BIOS from the card into the guest.

The purpose of this project would be to allow HVM guests to load the physical card's VGA BIOS, so that the guest can boot with it as the primary display.

The concept of this project is straightforward; however, BIOSes are notoriously quirky (to put it mildly). The source code of the VGA BIOS itself would not be available, and the BIOS is likely to run in 16-bit mode. It is likely that at some point you will end up decoding machine code from a hex dump to find out what has gone wrong. It should be a very interesting, challenging, and fun project for the right student.
|Outcomes=Mechanism to boot a VM with a passed-through graphics card as the primary display of the VM.
|Steps=&nbsp;
* Set up a machine with a graphics card passed through to a guest as a non-primary display
* Write a mechanism to extract the VGA BIOS from the card (a rough sketch of one approach follows this project box)
* Add that blob into the BIOS for the VM
* Track down any problems that arise
|References=[http://wiki.xen.org/wiki/Xen_PCI_Passthrough PCI Passthrough]
}}


{{GSoC Project
|Project=Mini-os for ARM (autotranslated) guests
|Anchor=mini-os
|Date=2013-02-13
|Contact=Ian Campbell <ian.campbell@citrix.com>
|Difficulty=Difficult
|Skills=C programming. ARM (and optionally x86) assembly language debugging. Low-level kernel understanding (e.g. page tables)
|Desc=
[[Mini-OS]] is a simple reference PV guest operating system which serves both as an example of how to write a PV guest and as the base Operating System for [[StubDom|Stub Domains]] such as [[Device Model Stub Domains]] and xenstored stub domains. Parts of Mini-OS are also used in projects such as [http://openmirage.org/ Mirage] and other exo-kernel projects.

Mini-OS supports a single address space application running directly in the bare Virtual Machine environment and contains PV drivers for disk, net and console as well as a simple co-operative threading model.

Currently Mini-OS supports only x86 PV guests; however, we would also like to eventually support stubdomains (in particular xenstored stub domains) and projects such as Mirage on the [[Xen_ARMv7_with_Virtualization_Extensions|ARM port]] of Xen. This project would involve taking the existing Mini-OS code (see [http://xenbits.xen.org/hg/xen-unstable.hg/file/tip/extras/mini-os ''extras/mini-os''] in the Xen source code) and extending it to work in the ARM PV environment.

As well as authoring the initial bring-up code targeting ARM, this will also involve modifying the rest of Mini-OS to cope with the fact that Xen ARM guests do not use PV paging but instead rely on hardware virtual paging. This will require modifications to some of the core helper routines and PV drivers to understand this ''autotranslated physmap'' concept (which refers to the idea that guest addresses are automatically translated into host addresses, compared with x86 PV domains which must perform this translation themselves, using the physmap (or ''p2m'') which is part of the x86 PV paging interfaces).

As an extension once Mini-OS has been extended to work in the ARM environment using ''autotranslated physmap'' this should allow a relatively easy port to an X86 HVM environment, which also differs from X86 PV in its use of autotranslated physmap. This would be useful for running fuzz testers, such as that proposed [[#fuzz-testing-mirage|above]] as well as other test applications.

|Outcomes=Mini-OS based domains running
|Steps=&nbsp;
* Simple Hello World on ARM
* stub C or ocaml xenstored running on ARM.
* Simple Hello World on x86 HVM.
* ...TBD...
|References=Inline
}}
== Useful Resources ==

Here are some links to guides, tools, development flows etc.

* Xen overview: http://wiki.xen.org/wiki/Xen_Overview
* Submitting Xen patches (with Mercurial): http://wiki.xen.org/wiki/Submitting_Xen_Patches
* Submitting Xen patches with Git: http://wiki.xen.org/wiki/Submitting_Xen_Patches_with_Git
* Xen beginner guide: http://wiki.xen.org/wiki/Xen_Beginners_Guide
* Introduction to Git: http://git-scm.com/documentation
* Introduction to Mercurial: http://mercurial.selenic.com/


[[Category:Community]]
[[Category:GSoC]]
[[Category:GSoC_2013]]
[[Category:Archived]]
[[Category:Internships]]

Latest revision as of 18:07, 2 February 2017

Unfortunately, Xen.org did not get accepted as mentoring organization in 2013.

GSoC and Xen

This page is used to list project ideas for Google Summer of Code (GSOC) 2013.

Conventions for Projects

Rules and Advice for Adding Ideas

  • Be creative
  • Add projects into Project Ideas that Need Review.
  • Use the {{GSoC Project}} template to encode ideas on this page. Please read the Template Documentation before you do so.
  • Be specific: state what you want to be implemented; if at all possible provide an indication of size and complexity as described above to make it easier for a student to choose an idea
  • Check that the project meets the GSoC Program Goals
  • If you are willing to mentor these ideas, add your name and email to the idea.
  • If you're an interested student, add your name and email next to the idea. It is ok to have several students interested in one idea.
  • Aspiring students need to get in touch with the xen.org community manager via community.manager@xen.org to register their interest

Peer Review Goals

We strongly recommend and invite project proposers and project mentors to review each other's proposals. When you review, please look out for:

  • Can a student get started with the information in the project description?
  • Are there any unstated assumptions or undefined terminology in the proposal?
  • Can the project be completed in 3 months (assume that one month is needed for preparation)?
  • Does the project meet Google Summer of Code goals, which are
    • Create and release open source code for the benefit of all
    • Inspire young developers to begin participating in open source development
    • Help open source projects identify and bring in new developers and committers
    • Provide students the opportunity to do work related to their academic pursuits (think "flip bits, not burgers")
    • Give students more exposure to real-world software development scenarios (e.g., distributed development, software licensing questions, mailing-list etiquette)

Peer Review Conventions

The {{GSoC Project}} template used to encode GSoC projects contains some review functionality. Please read the Template Documentation before you add a template, and please use the conventions below to make comments.

|Review=(delete as addressed)
* {{Comment|~~~~:}} Comment 1
* {{Comment|~~~~:}} Comment 2

Key Google Pages

Google Summer of Code 2013 is On (see announcement). Xen.org is intending to apply as a Mentoring Organization. Stay posted.

Timeline

  • March 18, 19:00 UTC: Mentoring organizations can begin submitting applications to Google.
  • March 29, 19:00 UTC: Mentoring organization application deadline.
  • April 1 - 5: Google program administrators review organization applications.
  • April 8 19:00 UTC: List of accepted mentoring organizations published on the Google Summer of Code 2013 site.
  • April 9 - 21: Would-be student participants discuss application ideas with mentoring organizations.
  • April 22, 19:00 UTC: Student application period opens.
  • May 3, 19:00 UTC: Student application deadline.

Community Reviewed Project List

This section contains GSoC Projects that have been reviewed by Xen Maintainers and Committers. Community members are free to add their own project ideas, but they need to add them to the Project Ideas that Need Review section of this document.

This section contains peer reviewed projects that have been selected based on the following criteria:
  • A diverse list of projects, covering different levels of difficulty and required skills
  • Well written (in particular have a well written description)
  • Contain steps, outcomes, skills required, ... all written down
  • Are peer reviewed and debated
  • Contain a diverse set of mentors

If your project did not make it into this list, it does not mean it will be excluded. It merely is not one of the projects that were ready when we applied for GSoC. Please add projects into Unreviewed Project Ideas.


Microcode uploader implementation in Xen hypervisor

Date of insert: 02/08/2012; Verified: Not specified, date when created; GSoC: Yes
Mentor: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Difficulty: Medium to Hard.
Skills Needed: Knowledge of C for Phase #1. For Phase #2 potentially x86 assembler and deep knowledge of early bootup. Familiarity with Intel SDM is a plus.
Description: Intel is working on an early-load implementation where the microcode binary would be appended to the initrd image. The kernel would scan for the appropriate magic constant (http://thread.gmane.org/gmane.linux.kernel/1413384; it looks for "kernel/x86/microcode/GenuineIntel.bin") and load the microcode very early. This is all done in the Linux kernel code, but we currently do not do that in the Xen hypervisor.

The scope of the work can be split up in

  1. just do the extraction of microcode from the initial ramdisk binary (aka initrd) and apply it. This can be done during the parsing of the dom0 initial ramdisk. The hypervisor already has the functionality to apply a microcode from the multiboot targets. This would add code to parse the initrd image
  2. do it during very early bootup - which is why the early microcode work started - to deal with CPUs which don't expose certain CPUID flags because they need a microcode update. This part of work is much more difficult - as it would involve working only with early bootup pagetables. This being done _before_ the Xen hypervisor sets its own pagetables - as some of the fixes that the microcode has, can be for the CPU to be able to do PSE properly.
Outcomes: A patch to the Xen hypervisor to take advantage of this mechanism, so that Xen can do this similarly to Linux.
Steps:
References: The Intel SDM 3a (http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.pdf) gives an excellent overview what microcode is and how to update it. The mechanism for bundling the microcode binary with the initrd along with the initial implementation in the Linux kernel to take advantage of this is explained here http://thread.gmane.org/gmane.linux.kernel/1413384
Peer Review Comments
* Ijc 09:08, 8 February 2013 (UTC): It is not clear if this proposal is suggesting to add microcode loader support to the dom0 kernel or the hypervisor. It seems to be during the dom0 loader, however Linux already has this support since Intel have effectively completed the project described by the first link above, extending this to Xen would involve the same objections from upstream as they had to the original microcode patches. In any case Xen also already has support for CPU microcode loading very early on, which is much better than doing it from dom0 (which is arguably too late). The only useful extension I can see to the existing functionality is to add support to Xen for parsing dom0's initrd to pull out the microcode blob instead of obtaining it from the multiboot modules as is currently supported. Phase 2 here just isn't necessary, both Linux and Xen already contain the code described.
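
The first phase is mostly a parsing problem, and the cpio walk can be prototyped in plain user space before any hypervisor code is touched. The sketch below is illustrative only (it is not Xen code, and the record-layout details should be double-checked against the Linux cpio documentation): it scans an uncompressed "newc" initrd for the kernel/x86/microcode/GenuineIntel.bin member that the early-microcode convention uses and reports where its payload sits.

<pre>
/*
 * Illustrative user-space sketch (not hypervisor code): walk a cpio "newc"
 * archive (the format used for initrds) looking for the early-microcode
 * member "kernel/x86/microcode/GenuineIntel.bin" and report where its
 * payload lives.  In the actual project a similar scan would run inside
 * Xen against the dom0 initrd module before applying the update.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define UCODE_PATH "kernel/x86/microcode/GenuineIntel.bin"

static unsigned long hex_field(const char *p)      /* 8 hex chars, no NUL */
{
    char buf[9];
    memcpy(buf, p, 8);
    buf[8] = '\0';
    return strtoul(buf, NULL, 16);
}

static const void *find_ucode(const char *img, size_t len, size_t *size)
{
    size_t off = 0;

    while (off + 110 <= len && !memcmp(img + off, "070701", 6)) {
        const char *hdr = img + off;
        size_t fsize = hex_field(hdr + 6 + 6 * 8);    /* c_filesize */
        size_t nsize = hex_field(hdr + 6 + 11 * 8);   /* c_namesize */
        const char *name = hdr + 110;
        size_t doff = (110 + nsize + 3) & ~3UL;       /* data is 4-byte aligned */

        if (!strncmp(name, "TRAILER!!!", nsize < 10 ? nsize : 10))
            break;
        if (nsize == sizeof(UCODE_PATH) && !strcmp(name, UCODE_PATH)) {
            *size = fsize;
            return hdr + doff;
        }
        off += (doff + fsize + 3) & ~3UL;             /* next record, aligned */
    }
    return NULL;
}

int main(int argc, char **argv)
{
    FILE *f;
    char *img;
    long len;
    size_t usize = 0;
    const void *ucode;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <uncompressed-initrd>\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    fseek(f, 0, SEEK_END);
    len = ftell(f);
    rewind(f);
    img = malloc(len);
    if (!img || fread(img, 1, len, f) != (size_t)len)
        return 1;

    ucode = find_ucode(img, len, &usize);
    if (ucode)
        printf("found %s: %zu bytes at offset %td\n",
               UCODE_PATH, usize, (const char *)ucode - img);
    else
        printf("no early microcode member found\n");
    return 0;
}
</pre>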


Introducing PowerClamp-like driver for Xen

Date of insert: 01/22/2013; Verified: Not specified, date when created; GSoC: Yes
Mentor: George Dunlap <george.dunlap@eu.citrix.com>
Difficulty: Medium to difficult
Skills Needed: C programming. A solid knowledge of how to use spinlocks to avoid race conditions and deadlock. Ability to read diffs, ability to use git and mercurial (hg).
Description: PowerClamp was introduced to Linux in late 2012 in order to allow users to set a system-wide maximum power usage limit. This is particularly useful for data centers, where there may be a need to reduce power consumption based on availability of electricity or cooling. A more complete writeup is available at LWN.

These same arguments apply to Xen. The purpose of this project would be to implement a similar functionality in Xen, and to make it interface as well as possible with the Linux PowerClamp tools, so that the same tools could be used for both.

The basic mechanism for PowerClamp in Linux is to monitor the percentage of time spent idling. When this time goes below a user-specified threshold, it activates a high-priority real-time process to force the CPU to idle for the specified amount of time. The student would have to figure out how to apply this to Xen's main scheduler, the Credit scheduler.

The idea is fairly straightforward, but working with the scheduler involves dealing with very tricky race conditions and deadlocks. It should be a manageable project for a very clever student looking for a fun problem to solve. It should also provide a good taste of what operating-system level programming is like.
Outcomes:  
  • Mechanism to enforce a certain percentage of idle time in Xen
  • An appropriate way to access this; either using Xen's xl command-line, or the PowerClamp tools for Linux (if any), or both.
Steps: The first step would be to apply the idea to the main Xen scheduler, the Credit1 scheduler. What is the best way to implement this? Adding an extra priority level? Re-using the existing credit mechanism? The second step would be to design an appropriate interface. Are there any PowerClamp userspace tools for Linux? Does it make sense to try to integrate those tools with Xen, or should we just have this be a separate Xen feature, accessible via Xen's xl command-line interface?
References: LWN Article on PowerClamp
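
To make the control idea concrete, here is a toy user-space model of the idle-injection loop described above; the constants, structure and function names are invented for illustration and say nothing about how the credit scheduler actually represents this.

<pre>
/*
 * Toy user-space model of the PowerClamp-style control loop (names and
 * numbers are illustrative, not Xen code).  Each window we look at how
 * much time the simulated pCPU spent busy; if the remaining idle time is
 * below the user-set target we decide how much forced idle to inject,
 * which is what the in-hypervisor version would do by running a
 * maximum-priority idle vCPU in the credit scheduler.
 */
#include <stdio.h>
#include <stdlib.h>

#define WINDOW_MS       100   /* control window                          */
#define TARGET_IDLE_PCT 30    /* user-requested minimum idle percentage  */

struct pcpu_stats {
    unsigned busy_ms;         /* time spent running guest vCPUs          */
    unsigned forced_idle_ms;  /* idle injected on top of natural idle    */
};

/* Decide how much idle time to inject in the next window. */
static unsigned idle_to_inject(const struct pcpu_stats *s)
{
    unsigned natural_idle = WINDOW_MS - s->busy_ms;
    unsigned wanted = WINDOW_MS * TARGET_IDLE_PCT / 100;

    return natural_idle >= wanted ? 0 : wanted - natural_idle;
}

int main(void)
{
    struct pcpu_stats s = { 0, 0 };
    int tick;

    srand(42);
    for (tick = 0; tick < 10; tick++) {
        /* Pretend the guests consumed a random share of the window. */
        s.busy_ms = rand() % (WINDOW_MS + 1);
        s.forced_idle_ms = idle_to_inject(&s);
        printf("window %2d: busy %3ums, injecting %3ums of forced idle\n",
               tick, s.busy_ms, s.forced_idle_ms);
    }
    return 0;
}
</pre>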


Virtual NUMA for Xen guests

Date of insert: 12/12/2012; Verified: Not specified, date when created; GSoC: Yes
Mentor: Dario Faggioli <dario.faggioli@citrix.com>
Difficulty: Medium
Skills Needed: C programming, computer architecture, virtualization concepts
Description: NUMA (Non-Uniform Memory Access) systems are advanced server platforms, comprising multiple nodes. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

Ideally, each VM should have its memory allocated out of just one node and, as long as its vCPUs also run there, both throughput and latency are optimal. However, in cases where a VM ends up having its memory allocated from multiple nodes, we should inform it that it's running on a NUMA platform: a virtual NUMA. This could be very important, especially for some specific workloads (for instance, HPC applications). In fact, if the guest OS and application have any NUMA support, exporting the virtual topology is the only way to make that effective, and perhaps to fill, at least to some extent, the performance gap introduced by the need to distribute the guest over more than one node. Just for reference, this feature, under the name of vNUMA, is one of the key and most advertised features of VMware vSphere 5 (vNUMA: what it is and why it matters).

This project fits into the effort the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: Xen NUMA Roadmap
Outcomes: The candidate is expected to produce a set of patch series (one patch series for each phase of the project), send them to the Xen development mailing list and follow all the typical Open Source process for having them upstreamed in Xen.
Steps: The work on the project can be subdivided in the following phases:
  • Phase 1: identify the constraints that introducing virtual NUMA would impose to the other components of the Xen architectures (or, vice-versa, the constraints that the existing components of the Xen architecture would impose to virtual NUMA). Put together a design coherent with these constraints and share it with the Xen development community to get feedback on it;
  • Phase 2: implement virtual NUMA for Xen PV guests;
  • Phase 3: implement virtual NUMA for Xen HVM guests.
References: Useful references inlined in the project description. Notice that having a NUMA testing machine handy would be really useful for this project. However, if that is not the case, solutions will be found to allow the participant to properly test the code.
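
As a starting point for the Phase 1 design discussion, the following sketch shows the kind of information a virtual NUMA description has to carry (node sizes, vCPU placement, node distances). The structure and field names are hypothetical; they do not exist in libxl or the hypervisor.

<pre>
/*
 * Hypothetical sketch of the virtual-NUMA description the toolstack would
 * have to construct and expose to the guest.  None of these structure or
 * field names exist today; they only illustrate the information a design
 * from Phase 1 needs to carry.
 */
#include <stdio.h>

#define MAX_VNODES 4
#define MAX_VCPUS  8

struct vnuma_topology {
    unsigned nr_vnodes;
    unsigned nr_vcpus;
    unsigned long mem_mb[MAX_VNODES];          /* memory per virtual node */
    unsigned vcpu_to_vnode[MAX_VCPUS];         /* vCPU placement          */
    unsigned distance[MAX_VNODES][MAX_VNODES]; /* SLIT-style access costs */
};

static void print_topology(const struct vnuma_topology *t)
{
    unsigned n, v;

    for (n = 0; n < t->nr_vnodes; n++)
        printf("vnode %u: %lu MB\n", n, t->mem_mb[n]);
    for (v = 0; v < t->nr_vcpus; v++)
        printf("vcpu %u -> vnode %u\n", v, t->vcpu_to_vnode[v]);
}

int main(void)
{
    /* A 4-vCPU, 4 GB guest split evenly across two virtual nodes. */
    struct vnuma_topology t = {
        .nr_vnodes = 2,
        .nr_vcpus = 4,
        .mem_mb = { 2048, 2048 },
        .vcpu_to_vnode = { 0, 0, 1, 1 },
        .distance = { { 10, 20 }, { 20, 10 } },
    };

    print_topology(&t);
    return 0;
}
</pre>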


NUMA aware ballooning for Xen

Date of insert: 12/12/2012; Verified: Not specified, date when created; GSoC: Yes
Mentor: Dario Faggioli <dario.faggioli@citrix.com>
Difficulty: Medium
Skills Needed: C programming, virtualization concepts
Description: NUMA (Non-Uniform Memory Access) systems are advanced server platforms, comprising multiple nodes. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

When it comes to memory, Xen offers a set of different mechanisms for over-committing the host memory; the most common, widely known and utilised is ballooning. This has non-trivial interference with NUMA friendliness. For instance, when freeing some memory, current ballooning implementations try to balloon down existing guests, but that happens without any knowledge or consideration of which node(s) the freed memory will end up on. As a result, we may be able to create the new domain, but not quite as able to place all its memory on a single node, as ballooning could well have freed half of the space on one node, and half on another.

This project is therefore meant to "teach" ballooning how to try to make space "node-wise", i.e., to balloon down the VMs that would allow the new guest to fit into just one node.

This project fits into the effort the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: Xen NUMA Roadmap
Outcomes: The candidate is expected to produce a set of patch series (one patch series for each phase of the project), send them to the Xen development mailing list and follow all the typical Open Source process for having them upstreamed in Xen.
Steps: The work on the project can be subdivided in the following phases:
  • Phase 1: understand the existing ballooning algorithms and code. While at it, check whether the currently available documentation (both on the Xen Wiki and in the source tree), is updated and aligned with the actual code behavior and, if not, fix it;
  • Phase 2: identify where to act to achieve what the project requires in the most effective way, namely: the ballooning code in the hypervisor? The ballooning driver in the guest? Both?
  • Phase 3: modify ballooning algorithms so that memory is reclaimed node-wise.
References: Useful references inlined in the project description. Notice that having a NUMA testing machine handy would be really useful. However, if that is not the case, solutions will be found to allow the participant to properly test the code.
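
The core decision the project has to automate can be illustrated with a toy model: given an estimate of how much memory each guest could balloon out on each node, pick one node that can be freed up enough to host the new domain. The data structures below are invented for the example; the real implementation would work against the hypervisor's per-node memory accounting.

<pre>
/*
 * Illustrative sketch (made-up data, not Xen code) of the "node-wise"
 * decision this project is about: given how much reclaimable balloon
 * memory each existing guest has on each node, pick the node where
 * ballooning down guests can free enough room for the new domain.
 */
#include <stdio.h>

#define NR_NODES  2
#define NR_GUESTS 3

/* MB each guest could give back, per node, plus currently free MB. */
static unsigned long reclaimable[NR_GUESTS][NR_NODES] = {
    { 512, 128 },
    { 256, 768 },
    {  64, 512 },
};
static unsigned long free_mb[NR_NODES] = { 1024, 512 };

static int pick_node(unsigned long need_mb)
{
    int node, g, best = -1;

    for (node = 0; node < NR_NODES; node++) {
        unsigned long avail = free_mb[node];

        for (g = 0; g < NR_GUESTS; g++)
            avail += reclaimable[g][node];
        /* Prefer the node that already has the most free memory,
         * i.e. the one needing the least ballooning. */
        if (avail >= need_mb && (best < 0 || free_mb[node] > free_mb[best]))
            best = node;
    }
    return best;
}

int main(void)
{
    unsigned long need = 1536;   /* new guest wants 1.5 GB on one node */
    int node = pick_node(need);

    if (node >= 0)
        printf("balloon guests down on node %d to fit %lu MB\n", node, need);
    else
        printf("no single node can host %lu MB\n", need);
    return 0;
}
</pre>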


Temporal Isolation and Multiprocessor Support in the SEDF Scheduler

Date of insert: 08/08/2012; Verified: Not specified, date when created; GSoC: Yes
Mentor: Dario Faggioli <dario.faggioli@citrix.com>
Difficulty: Basic to Medium
Skills Needed: C programming, genuine interest in scheduling algorithm design and implementation
Description: Whether it is to build a multi-personality mobile phone, or to help achieve consolidation in industrial and factory automation, embedded virtualization ([1], [2], [3]) is upon us. In fact, quite a number of embedded hypervisors already exist, e.g. Wind River Hypervisor, CodeZero or PikeOS. Xen definitely is a small, fast type-1 hypervisor with support for multiple VMs [1], so it could be a good candidate embedded hypervisor.

Moreover, Xen offers an implementation of one of the most famous and efficient real-time scheduling algorithms, Earliest Deadline First (which is called SEDF in Xen), and real-time support is a key feature for a successful embedded hypervisor. Using such an advanced scheduling policy, if it is implemented correctly, is a great advancement and provides much more flexibility than only using vCPU pinning (which is what most embedded hypervisors do to guarantee real-time performance and isolation).

However, SEDF, the EDF implementation in Xen, suffers from some rough edges. In fact, as of now, SEDF deals with events such as a vCPU blocking --in general, stopping running-- and unblocking --in general, restarting running-- by trying (and failing!) to special case all the possible situations, resulting in code that is rather complicated, ugly, inefficient and hard to maintain. Unified approaches have been proposed for enabling blocking and unblocking in EDF, while still guaranteeing temporal isolation among different vCPUs. SEDF also lacks proper multiprocessor support, meaning that it does not properly handle SMP systems, unless vCPUs are specifically and statically pinned by the user. This is a big limitation of the current implementation, especially since EDF can work well without imposing this constraint, providing much more flexibility and efficiency in exploiting the system resources to the fullest.

Therefore, this project aims at extending the SEDF scheduler, by turning it into a proper multiprocessor and temporal isolation enabled scheduling solution. For temporal isolation, among the various solutions proposed in real-time academic literature, one that is very effective and yet very simple to implement is the Constant BandWidth Server algorithm (CBS, [1], [2], [3]). For multiprocessing, just adopting a different approach in managing the scheduling ready queues (e.g., having one queue serving multiple pCPUs) would be enough. Of course, envisioning and implementing mechanisms for migrating the vCPUs among different queues would be even better.
Outcomes: The candidate is expected to produce a set of patch series, more specifically one series for each phase of the project, send them to the Xen development mailing list and follow the typical Open Source process for having them upstreamed in Xen. That a good level of temporal isolation has been reached must be verified by running some typical real-time workload (e.g., Cyclictest and rt-app) inside a VM, and checking that its timing requirements are being respected, despite the interference of other VMs. Correct exploitation of multiprocessor platforms must be verified by making sure the vCPUs automatically spread across the available pCPUs, instead of all being stuck on just one pCPU.
Steps: The work on the project can be subdivided in the following phases:
  • Phase 1: study and understand the CBS algorithm, and figure out what are the differences between it and the current SEDF implementation;
  • Phase 2: get rid of all the special cases for dealing with vCPU blocking and unblocking and implement CBS on top of the existing SEDF code. Completing this phase would mean having successfully enabled proper temporal isolation within SEDF;
  • Phase 3: instead of using one scheduling run-queue per each physical processor (pCPU), only use one per each "set of pCPUs". For instance, one run-queue for all the pCPUs that have a common L3 cache, as credit2, another scheduler present in Xen, is doing already. Completing this phase would mean having turned SEDF into a decent enough multiprocessor enabled scheduler;
  • Phase 4 [Optional]: Envision and implement a mechanism for balancing and migrating vCPUs among different run-queues. Completing this phase would mean having turned SEDF into a full-fledged multiprocessor enabled scheduler.
References: Useful references inlined in the project description
Peer Review Comments
(delete as addressed)
  • Ijc 09:36, 11 February 2013 (UTC): What is the outcome/deliverable for stage 1 (investigate CBS)? Is CBS the only option here or does the candidate need to evaluate other techniques? Is CBS the "Unified approaches ... enabling blocking" which the description refers to? Are there any particular success criteria for the other phases, e.g. specific performance characteristics of benchmark results which must be achieved? Where does the "implemented multiprocessor support" appear in the phases, is it a side effect of CBS or is it phase 3/4?
  • Dariof, 13 March 2013 (Replying to Ijc): Abstracted the part about the CBS algorithm out of the "Steps" section, so that it is more clear that this is the solution allowing to kill the special cases and enable proper temporal isolation (as well as that there is no need to investigate any different algorithm). Clarified a bit more, both in the general description and in the description of the various phases, what each step contributes to (to make it clear that turning SEDF into an SMP scheduler is not a consequence of CBS, it rather is what is done in phases 3 and 4). Gave some directions about validation and benchmarking.
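
For reference, the CBS rules mentioned above are compact enough to show in a few lines. This is a stand-alone toy simulation (integer milliseconds, one server, no interaction with the real SEDF code) of the two operations Phase 2 revolves around: the wake-up test that decides whether budget and deadline can be reused, and the deadline-postponement/recharge step when the budget runs out.

<pre>
/*
 * Minimal simulation of the Constant Bandwidth Server (CBS) rules the
 * project would bring into SEDF (toy code, not scheduler code): a vCPU is
 * reserved budget Q every period T, the deadline is postponed and the
 * budget recharged when it runs out, and on wake-up the (budget, deadline)
 * pair is reset if reusing it would exceed the reserved bandwidth Q/T.
 */
#include <stdio.h>

struct cbs_server {
    long Q, T;        /* reserved budget per period    */
    long budget;      /* remaining budget              */
    long deadline;    /* absolute scheduling deadline  */
};

static void cbs_wake(struct cbs_server *s, long now)
{
    /* Reuse (budget, deadline) only if it doesn't exceed bandwidth Q/T. */
    if (s->budget * s->T >= (s->deadline - now) * s->Q) {
        s->deadline = now + s->T;
        s->budget = s->Q;
    }
}

static void cbs_run(struct cbs_server *s, long now, long ran)
{
    s->budget -= ran;
    if (s->budget <= 0) {               /* budget exhausted:        */
        s->deadline += s->T;            /*   postpone the deadline  */
        s->budget += s->Q;              /*   and recharge           */
    }
    printf("t=%3ld ran %2ldms -> budget %2ldms, deadline %3ld\n",
           now + ran, ran, s->budget, s->deadline);
}

int main(void)
{
    struct cbs_server s = { .Q = 10, .T = 40 };  /* 25% of one pCPU */
    long now = 0;

    cbs_wake(&s, now);
    cbs_run(&s, now, 6);   now += 6;    /* uses part of the budget     */
    cbs_run(&s, now, 6);   now += 6;    /* overruns: deadline pushed   */
    now = 100;                          /* long sleep, then wake again */
    cbs_wake(&s, now);
    cbs_run(&s, now, 4);
    return 0;
}
</pre>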

{{GSoC Project
|Project=Refactor Linux hotplug scripts
|Anchor=linux-hotplug-scripts
|Date=15/11/2012
|Contact=Roger Pau Monné <roger.pau@citrix.com>
|Difficulty=Medium
|Skills=Knowledge of C and good level of shell scripting
|Desc=
Current Linux hotplug scripts are all entangled, which makes them really difficult to understand or modify. The purpose of hotplug scripts is to give end users the chance to "easily" support different configurations for Xen devices.

Linux hotplug scripts should be analyzed, providing a good description of what each hotplug script does. After this, the scripts should be cleaned up, putting common pieces of code in shared files across all scripts. A coding style should be applied to all of them when the refactoring is finished.

Also, a new hotplug implementation is currently under review [1], which will allow the user to create more complex hotplug scripts that offer extended functionality. Optionally the student can implement support for other backends using the new hotplug interface (GlusterFS, Ceph...).
|Steps=The work on the project can be subdivided in the following phases:
* Phase 1: analyze hotplug scripts and determine what each script does internally in order to attach the device
* Phase 2: move common bits of code to shared files, providing a sane API
* Phase 3: refactor hotplug scripts to use this new API, and clean the code applying a uniform coding style
* Phase 4 [Optional]: create hotplug scripts for new backends (GlusterFS, Ceph)
|Outcomes=The candidate is expected to produce at least one series of patches, containing the new internal hotplug API and the refactoring of the old scripts, send them to the Xen development mailing list and follow the typical Open Source process for having them upstreamed in Xen.
|References=Source of current scripts
|Review=(delete as addressed)
* {{Comment|Ijc 09:49, 11 February 2013 (UTC):}} Can we include a specific requirement to not just analyze but also document the behavior of the scripts, both the high-level semantics of each class of script (vif, block etc) but also the specifics of each (e.g. vif-{bridge,route,etc})? Ideally this would integrate with existing pages like Xen Networking. Should there also be a focus on customizability? I think it is expected that people will customize the scripts to suit their environments but due to the complexity a lot of folks don't. A refactoring project is not inherently that exciting, so I'm not sure how much it would appeal to students; perhaps Phase 4 could be non-optional and require the creation of at least one new set of hotplug scripts, as a kind of concrete end goal to all the refactoring? Not sure if that explodes the scope/time required out too far though. Ideally a new network script would be included too (to cover both main sets of bases) but we already cover most of the interesting cases there I think, openvswitch perhaps? I'm a little bit concerned that this project might also be chasing a moving target as the hotplug mechanism is refactored, but perhaps much of that will be finished by the time GSoC starts and having the person doing that refactoring also mentor the project should help minimise problems.
}}


XL to XCP VM motion

Date of insert: 15/11/12; Verified: Not specified, date when created; GSoC: Yes
Mentor: Ian Campbell <ian.campbell@citrix.com>
Difficulty: Medium
Skills Needed: Knowledge of either C or oCaml (or both) or another suitable language.
Description: Currently xl (the toolstack supplied alongside Xen) and xapi (the XCP toolstack) have very different concepts about domain configuration, disk image storage etc. In the XCP model domain configuration is persistent and stored in a data base while under xl domain configuration is written in configuration files. Likewise disk images are stored as VDIs in Storage Repositories while under xl disk images are simply files or devices in the dom0 filesystem. For more information on xl see XL. For more information on XCP see XCP Overview.

This project is to produce one or more command-line tools which support migrating VMs between these toolstacks.

One tool should be provided which takes an xl configuration file and details of an XCP pool. Using the XenAPI XML/RPC interface, it should create a VM in the pool with a close approximation of the same configuration and stream the configured disk image into a selected Storage Repository.

A second tool should be provided which performs the opposite operation, i.e. given a reference to a VM residing in an XCP pool, it should produce an XL compatible configuration file and stream the disk image(s) out of Xapi into a suitable format.

These tools could be reasonably bundled as part of either toolstack and by implication could be written in either C, Ocaml or some other suitable language.

The tools should work on both PV and HVM domains. The subset of properties which are common to both toolstacks which are to be considered required for successful completion of the project will be determined early on in the project.

The tool need not operate on a live VM but that could be considered a stretch goal.

An acceptable alternative to the proposed implementation would be to implement a tool which converts between a commonly used VM container format which is supported by XCP (perhaps OVF or similar) and the xl toolstack configuration file and disk image formats.
Outcomes: Code submitted to xen-devel@ and/or xen-api@ for tools to migrate Virtual Machines between toolstacks. Must include documentation
Steps: A suggested set of steps for completion of the project is:
  1. Setup both toolstacks and create suitable virtual machines on both
  2. Investigation of both toolstacks to determine what existing import/export functionality is present
  3. Documentation of mapping between Virtual Machine properties of both toolstacks
  4. Evaluate mechanisms for conversion, including disk image format conversion and which VM properties are required for success
  5. Documentation of how to perform a manual (i.e. by hand) conversion in either direction
  6. Implementation and documentation of tool(s) to automate the conversion process
  7. Post patches for review
  8. Iterate patches until acceptance
References: XAPI, XL, XCP Overview
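
Step 3 of the plan is essentially a property-mapping exercise. The fragment below is a hypothetical illustration of that mapping for three common keys (name, vcpus, memory) onto the corresponding XenAPI VM fields; a real tool would reuse libxl's configuration parser and drive xapi over XML-RPC instead of printing.

<pre>
/*
 * Hypothetical illustration of the xl-config to XenAPI property mapping
 * (this is not part of either toolstack).  It parses a few xl-style
 * "key = value" lines and prints the XenAPI VM fields they would map to.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static const char *example_cfg[] = {
    "name = \"debian-pv\"",
    "memory = 1024",
    "vcpus = 2",
};

static void map_line(const char *line)
{
    char key[32], val[64];

    /* Crude key = value / key = "value" splitter, good enough for a demo. */
    if (sscanf(line, "%31[^ =] = \"%63[^\"]\"", key, val) == 2 ||
        sscanf(line, "%31[^ =] = %63s", key, val) == 2) {
        if (!strcmp(key, "name"))
            printf("VM.name_label        = %s\n", val);
        else if (!strcmp(key, "vcpus"))
            printf("VM.VCPUs_max         = %s\n", val);
        else if (!strcmp(key, "memory"))
            printf("VM.memory_static_max = %lld (bytes)\n",
                   atoll(val) * 1024LL * 1024LL);
        else
            printf("(no mapping documented yet for '%s')\n", key);
    }
}

int main(void)
{
    size_t i;

    for (i = 0; i < sizeof(example_cfg) / sizeof(example_cfg[0]); i++)
        map_line(example_cfg[i]);
    return 0;
}
</pre>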


VM Snapshots

Date of insert: 16/01/2013; Verified: Not specified, date when created; GSoC: Yes
Mentor: Anthony Perard <anthony.perard@citrix.com>
Difficulty: Medium
Skills Needed: C programming
Description: Although xl is capable of saving and restoring a running VM, it is not currently possible to create a snapshot of the disk together with the rest of the VM.

QEMU is capable of creating, listing and deleting disk snapshots on QCOW2 and QED files, so even today, issuing the right commands via the QEMU monitor, it is possible to create disk snapshots of a running Xen VM. However, xl and libxl don't have any knowledge of these snapshots, and don't know how to create, list or delete them.

This project is about implementing disk snapshots support in libxl, using the QMP protocol to issue commands to QEMU. Users should be able to manage the entire life-cycle of their disk snapshots via xl.

The candidate should also explore ways to integrate QEMU disk snapshots and disk mirroring into the regular Xen save/restore mechanisms and provide a solid implementation for xl/libxl.
Outcomes:  

Basic goal: disk snapshots can be handled entirely by xl.

Stretch goals: xl can automatically save a disk snapshot at the time of saving a VM. xl can also mirror the disk of a VM between two hosts and can do that automatically at the time of VM migration.
Steps:  

Basic steps:

  • Study libxl APIs for storage
  • Study QEMU QMP commands for VM snapshots
  • Implement support for QMP snapshots commands in libxl
  • Implement VM snapshots functionalities in libxl using the QMP functions previously written
  • Add VM snapshot commands to XL

Stretch goals:

  • Add VM snapshot functionalities to libxl save/restore and migration functions
  • Evaluate QEMU QMP disk mirroring capabilities (QMP command "drive-mirror")
  • Implement support for QMP drive-mirror command in libxl
  • Hook disk mirroring into libxl VM save/restore and migration functions (so that migrating a VM from one host to another is also capable of migrating the VM disk between the two hosts).
References: XL, QEMU
Peer Review Comments
(delete as addressed)
  • Ijc 10:05, 11 February 2013 (UTC): Although this project is specifically targeting the QEMU snapshot mechanism we should require that the libxl API which is exposed is general enough to be applied to other disk backends (blktap3, lvm snapshot, btrfs, etc)
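
For orientation, the QMP exchange involved is small. The sketch below is a stand-alone illustration of the wire protocol only (libxl already carries its own internal QMP client, which is what the project would actually extend); the socket path, device name and snapshot file are made up, and the exact command arguments should be checked against the QMP schema of the QEMU version in use.

<pre>
/*
 * Wire-level illustration of a QMP disk-snapshot request (illustrative
 * only, not libxl code): connect to the monitor socket, negotiate
 * capabilities, then ask QEMU for a live disk snapshot.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static void send_cmd(int fd, const char *json)
{
    char reply[4096];
    ssize_t n;

    write(fd, json, strlen(json));
    n = read(fd, reply, sizeof(reply) - 1);
    if (n > 0) {
        reply[n] = '\0';
        printf("QMP reply: %s\n", reply);
    }
}

int main(int argc, char **argv)
{
    /* Example path only; pass the real monitor socket as argv[1]. */
    const char *path = argc > 1 ? argv[1] : "/var/run/xen/qmp-libxl-1";
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char greeting[4096];
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    read(fd, greeting, sizeof(greeting));        /* QEMU sends a greeting */
    send_cmd(fd, "{ \"execute\": \"qmp_capabilities\" }");
    send_cmd(fd,
        "{ \"execute\": \"blockdev-snapshot-sync\","
        "  \"arguments\": { \"device\": \"ide0-hd0\","
        "                   \"snapshot-file\": \"/tmp/guest-snap.qcow2\","
        "                   \"format\": \"qcow2\" } }");
    close(fd);
    return 0;
}
</pre>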


Fuzz testing Xen with Mirage

Date of insert: 28/11/2012; Verified: Not specified, date when created; GSoC: Yes
Mentor: Anil Madhavapeddy <anil@recoil.org>
Difficulty: Medium
Skills Needed: OCaml programming. C programming.
Description: MirageOS (http://openmirage.org) is a type-safe exokernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening guest kernel. We would like to use the Mirage/Xen libraries to fuzz test all levels of a typical cloud toolstack. Mirage has low-level bindings for Xen hypercalls, mid-level bindings for domain management, and high-level bindings to XCP for cluster management. This project would build a QuickCheck-style fuzzing mechanism that would perform millions of random operations against a real cluster, and identify bugs with useful backtraces.
Outcomes:  
Steps:  
  • Set up a mirage build environment, test by building and running the examples (http://github.com/mirage/mirage-skeleton)
  • Make a simple fuzzer with the existing hypercall bindings
  • Extend the set of bindings, and extend the fuzzer to match
References:  
Peer Review Comments
(delete as addressed)
  • Ijc 10:05, 11 February 2013 (UTC): There are some interesting challenges which aren't mentioned here, specifically:
    • reproducibility of a given run leading to a crash
    • how to handle guests which crash themselves while fuzzing, e.g. management of random seeds and respawning, measuring progress and perhaps snapshotting and restarting along multiple paths (so a single crash doesn't wipe out all the interesting state built up by the fuzzer up to that point)
    • logging of what is going on in the face of hosts which may crash when the fuzzer "succeeds".
  • Ijc 10:05, 11 February 2013 (UTC): It would also be useful to take inspiration from the trinity Linux system call fuzzer which encodes a certain level of knowledge of what the inputs to each system/hypercall should look like such that it can probe "interesting" (i.e. limits) values with more than random probability and also provide plausible input for some arguments so as to not continually mask errors in the other options (e.g. with some probability pass a valid socket to the int fd argument of a call which expects a socket, so that the other arguments have some chance of even being evaluated). Likewise for calls which take a pointer you would want to make sure the fuzzer would occasionally (or even mostly) pass in valid pointers such that the contents of the pointed to struct can also be fuzzed.
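
The reproducibility point in the first comment is largely about run structure rather than language. The C fragment below (the real fuzzer would of course be OCaml on Mirage; this is only an illustration) shows the usual shape: every run is derived from a single logged seed and every operation is logged before it is issued, so a crash can be replayed by re-running with the same seed.

<pre>
/*
 * Structural illustration of a reproducible fuzz driver (not Mirage
 * code): one logged seed fully determines the sequence of operations,
 * and each operation is logged before it is attempted so that a crash
 * identifies exactly which call to replay.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NR_OPS 5   /* pretend there are 5 hypercall wrappers to poke */

static void do_fuzzed_op(unsigned op, unsigned arg)
{
    /* Placeholder: the real driver would call a hypercall binding here. */
    printf("  op %u(arg=0x%08x)\n", op, arg);
}

int main(int argc, char **argv)
{
    /* Replay a crashing run by passing its seed on the command line. */
    unsigned seed = argc > 1 ? (unsigned)strtoul(argv[1], NULL, 0)
                             : (unsigned)time(NULL);
    unsigned long i;

    printf("fuzz run seed=%u (log this before doing anything else)\n", seed);
    srand(seed);

    for (i = 0; i < 10; i++) {
        printf("operation %lu:\n", i);          /* logged before the call, */
        do_fuzzed_op(rand() % NR_OPS, rand());  /* so a crash points here  */
    }
    return 0;
}
</pre>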


Towards a multi-language unikernel substrate for Xen

Date of insert: 28/11/2012; Verified: Not specified, date when created; GSoC: Yes
Mentor: Anil Madhavapeddy <anil@recoil.org>
Difficulty: Difficult
Skills Needed: Familiarity with C and at least one of OCaml, Haskell, Erlang and Java.
Description: There are several languages available that compile directly to Xen microkernels, instead of running under an intervening guest OS. We're dubbing such specialised binaries "unikernels"; examples exist for OCaml, Haskell, Erlang and Java. Each of these is in a different state of reliability and usability. We would like to survey all of them, build some common representative benchmarks to evaluate them, and build a common toolchain that will make it easier to share code across such efforts. This project will require a reasonable grasp of several programming languages and runtimes, and should be an excellent project to learn more about the innards of popular languages.
Outcomes:  
  • One document per runtime describing how it works
  • A proposed common library which would allow all the runtimes to share the low-level routines
Steps:  
  • Set up a test and dev environment capable of building at least one of the above "unikernels" in the language you are most familiar with.
  • Create a simple (less than 1000 words) description of how the code is structured. Include a description of how the VM starts, how the runtime starts and how the application runtime starts.
  • Get the code for the other runtimes, and create one document for each of them, using the one you're familiar with as a reference.
  • Create a C library and build environment containing code which could be used by all the runtimes.
References: see above


Testing PV and HVM installs of Debian using debian-installer

Date of insert: 2013-01-23; Verified: Not specified, date when created; GSoC: Yes
Mentor: Ian Jackson <ian.jackson@eu.citrix.com>
Difficulty: Basic
Skills Needed: Knowledge of Perl, some familiarity with Debian preseeding
Description: The testing system "osstest" which is used for the push gate for the xen and related trees should have Debian PV and HVM guest installations, based on the standard Debian installer, in its repertoire. Also it currently always tests kernels as host and guest in the same installation.
Outcomes: Code for guest installation in live osstest.git; public report on which Linux branches to deploy for; new test cases enabled in production.
Steps:  
  • Task 1: Generalise the functions in osstest which generate debian-installer preseed files and manage the installation, to teach them how to set up PV and HVM guests, and provide an appropriate ts-* invocation script.
  • Task 1b: Upstream this code into the production osstest.git.
  • Task 2: Extend the guest installer from task 1 to be able to install a kernel other than the one which comes from the Debian repository, so that it is possible to test one kernel as host with a different specified kernel as guest.
  • Task 2b: Upstream this code into the production osstest.git.
  • Task 3: Determine which combinations of kernel branches should be added to the test schedules, push gates, etc. and write this up in a report for deployment by the infrastructure maintainers.
  • Task 4: Assist with deployment and debugging after the new functionality is deployed in production in accordance with the report from Task 3.
References: See xen-devel test reports (via the xen-devel list archives). Code is at http://xenbits.xen.org/gitweb/?p=osstest.git;a=summary The introduction to Xen automatic test system is at http://blog.xen.org/index.php/2013/02/02/xen-automatic-test-system-osstest/
Peer Review Comments
(delete as addressed)
  • Ijc 10:18, 11 February 2013 (UTC): I'd be happy to co-advise on the D-I aspects of this. In Task 2 "kernel other than the one which comes from the Debian repository", do you really mean "from the dom0 filesystem"? The D-I kernels do come from the Debian repo. Also is the intention to support testing guests which use pygrub, since that fits naturally with the D-I approach? Is the intention to only do netinst installs or is there scope to do D-I installs from ISO images too?


Testing NetBSD

Date of insert: 2013-01-23; Verified: Not specified, date when created; GSoC: Yes
Mentor: Ian Jackson <ian.jackson@eu.citrix.com>
Difficulty: Basic to Medium
Skills Needed: Knowledge of Perl and NetBSD's installer
Description: The testing system "osstest" which is used for the push gate for the xen and related trees should be able to test NetBSD both as host and guest.
Outcomes: Code for host and guest installation in live osstest.git; public report on which combinations of tests to deploy for; testing of NetBSD enabled in production.
Steps:  
  • Task 1: Understand how best to automate installation of NetBSD. Write code in osstest which is able to automatically and noninteractively install NetBSD on a bare host.
  • Task 2: Test and debug osstest's automatic building arrangements so that they can correctly build Xen on NetBSD.
  • Task 2b: Upstream this code into the production osstest.git.
  • Task 3: Write code in osstest which can automatically install the Xen from task 2 on the system installed by task 1.
  • Task 3b: Upstream this code into the production osstest.git.
  • Task 4: Debug at least one of the guest installation capabilities in osstest so that it works on the Xen system from task 3.
  • Task 5: Rework the code from task 1 so that it can also install a NetBSD guest, ideally either as a guest of a Linux dom0 or of a NetBSD dom0.
  • Task 5b: Upstream this code into the production osstest.git.
  • Task 6: Determine which versions of NetBSD and of Linux should be tested in which combinations and write this up in a report for deployment by the infrastructure maintainers.
  • Task 7: Assist with deployment and debugging after the new functionality is deployed in production in accordance with the report from Task 6.
References: See xen-devel test reports (via the xen-devel list archives). Code is at http://xenbits.xen.org/gitweb/?p=osstest.git;a=summary The introduction to Xen automatic test system is at http://blog.xen.org/index.php/2013/02/02/xen-automatic-test-system-osstest/

Project Ideas that Need Review

Allowing guests to boot with a passed-through GPU as the primary display

Date of insert: 01/22/2013; Verified: Not specified, date when created; GSoC: Yes
Mentor: George Dunlap <george.dunlap@eu.citrix.com>
Difficulty: Difficult
Skills Needed: C programming. Assembly language debugging.
Description: One of the primary drivers of Xen in the "consumer market" of the open-source world is the ability to pass through GPUs to guests -- allowing people to run Linux as their main desktop but easily play games requiring proprietary operating systems without rebooting.

GPUs can be easily passed through to guests as secondary displays, but as of yet cannot be passed through as primary displays. The main reason is the lack of ability to load the VGA BIOS from the card into the guest.

The purpose of this project would be to allow HVM guests to load the physical card's VGA bios, so that the guest can boot with it as the primary display.

The concept of this project is straightforward; however, BIOSes are notoriously quirky (to put it mildly). The source code of the VGA BIOS itself would not be available, and would be likely to run in 16-bit mode. It is likely that at some point you will end up decoding machine code from a hex dump to find out what has gone wrong. It should be a very interesting, challenging, and fun project for the right student.
Outcomes: Mechanism to boot a VM with a passed-through graphics card as the primary display of the VM.
Steps:  
  • Set up a machine with a graphics card passed through to a guest as a non-primary display
  • Write a mechanism to extract the VGA bios from the card
  • Add that blob into the BIOS for the VM
  • Track down any problems that arise
References: PCI Passthrough
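
For the extraction step, dom0's Linux kernel already exposes a device's expansion ROM through sysfs, which is one plausible way to prototype grabbing the blob before worrying about how the guest BIOS consumes it. The program below is a sketch: the PCI address is an example, and a real VGA BIOS image may still need further fix-ups before a guest can use it.

<pre>
/*
 * Sketch of one way to pull the VGA BIOS out of a passed-through card
 * from dom0 (the device address is an example; run as root): Linux
 * exposes the expansion ROM via sysfs -- write "1" to the rom file to
 * enable it, read the image, then write "0" to disable it again.
 */
#include <stdio.h>
#include <stdlib.h>

#define ROM_PATH "/sys/bus/pci/devices/0000:01:00.0/rom"   /* example BDF */

int main(void)
{
    FILE *rom = fopen(ROM_PATH, "r+b");
    FILE *out = fopen("vgabios.bin", "wb");
    unsigned char buf[4096];
    size_t n, total = 0;

    if (!rom || !out) {
        perror("open");
        return 1;
    }

    fputs("1", rom);            /* enable the ROM for reading */
    fflush(rom);
    rewind(rom);

    while ((n = fread(buf, 1, sizeof(buf), rom)) > 0) {
        if (total == 0 && !(buf[0] == 0x55 && buf[1] == 0xAA))
            fprintf(stderr, "warning: missing 0x55AA option-ROM signature\n");
        fwrite(buf, 1, n, out);
        total += n;
    }

    fputs("0", rom);            /* disable it again */
    fclose(out);
    fclose(rom);
    printf("wrote %zu bytes to vgabios.bin\n", total);
    return 0;
}
</pre>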


Mini-os for ARM (autotranslated) guests

Date of insert: 2013-02-13; Verified: Not specified, date when created; GSoC: Yes
Mentor: Ian Campbell <ian.campbell@citrix.com>
Difficulty: Difficult
Skills Needed: C programming. ARM (and optionally x86) assembly language debugging. Low-level kernel understanding (e.g. page tables)
Description: Mini-OS is a simple reference PV guest operating system which serves both as an example of how to write a PV guest and as the base Operating System for Stub Domains such as Device Model Stub Domains and xenstored stub domains. Parts of Mini-OS are also used in projects such as Mirage and other exo-kernel projects.

Mini-OS supports a single address space application running directly in the bare Virtual Machine environment and contains PV drivers for disk, net and console as well as a simple co-operative threading model.

Currently Mini-OS supports only x86 PV guests; however, we would also like to eventually support stubdomains (in particular xenstored stub domains) and projects such as Mirage on the ARM port of Xen. This project would involve taking the existing Mini-OS code (see extras/mini-os in the Xen source code) and extending it to work in the ARM PV environment.

As well as authoring the initial bring-up code targeting ARM, this will also involve modifying the rest of Mini-OS to cope with the fact that Xen ARM guests do not use PV paging but instead rely on hardware virtual paging. This will require modifications to some of the core helper routines and PV drivers to understand this autotranslated physmap concept (which refers to the idea that guest addresses are automatically translated into host addresses, compared with x86 PV domains which must perform this translation themselves, using the physmap (or p2m) which is part of the x86 PV paging interfaces).

As an extension once Mini-OS has been extended to work in the ARM environment using autotranslated physmap this should allow a relatively easy port to an X86 HVM environment, which also differs from X86 PV in its use of autotranslated physmap. This would be useful for running fuzz testers, such as that proposed above as well as other test applications.
Outcomes: Mini-OS based domains running
Steps:  
  • Simple Hello World on ARM
  • stub C or ocaml xenstored running on ARM.
  • Simple Hello World on x86 HVM.
  • ...TBD...
References: Inline
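
The single idea most of the porting work revolves around can be shown with a toy model: what a PV driver hands to the backend differs between an x86 PV guest (which must go through its p2m) and an autotranslated guest such as ARM or x86 HVM (which uses its pseudo-physical frame numbers directly). The code below is illustrative only and is not Mini-OS code.

<pre>
/*
 * Toy model (not Mini-OS code) of the autotranslated-physmap difference:
 * an x86 PV guest must translate its pseudo-physical frame numbers to
 * machine frames through its p2m, while an autotranslated guest (ARM, or
 * x86 HVM) hands its pfns to the hypervisor as-is and the second-stage
 * page tables do the rest.
 */
#include <stdio.h>

#define NR_PAGES 4

struct domain_model {
    int autotranslated;                 /* ARM / x86 HVM: 1, x86 PV: 0 */
    unsigned long p2m[NR_PAGES];        /* only meaningful for x86 PV  */
};

/* What a PV driver would put into a ring request for this guest page. */
static unsigned long frame_for_ring(const struct domain_model *d,
                                    unsigned long pfn)
{
    return d->autotranslated ? pfn : d->p2m[pfn];
}

int main(void)
{
    struct domain_model x86_pv = { 0, { 0x1000, 0x2340, 0x0042, 0x9999 } };
    struct domain_model arm    = { 1, { 0 } };
    unsigned long pfn;

    for (pfn = 0; pfn < NR_PAGES; pfn++)
        printf("pfn %lu -> x86 PV frame %#lx, autotranslated frame %#lx\n",
               pfn, frame_for_ring(&x86_pv, pfn), frame_for_ring(&arm, pfn));
    return 0;
}
</pre>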

Useful Resources

Here are some links to guides, tools, development flows etc.