Archived/GSoC 2013
{{InfoLeft|'''Unfortunately, Xen.org did not get accepted as a mentoring organization in 2013.'''}}
__TOC__
== GSoC and Xen ==

This page is used to list project ideas for [http://www.google-melange.com/gsoc/homepage/google/gsoc2013 Google Summer of Code (GSOC) 2013].
== Conventions for Projects ==

=== Rules and Advice for Adding Ideas ===
* Be creative
* Add projects into [[#Project Ideas that Need Review|Project Ideas that Need Review]].
* Use the GSoC Project template to encode ideas on this page. Please read the Template Documentation before you do so.
* Be specific: state what you want to be implemented; if at all possible, provide an indication of size and complexity to make it easier for a student to choose ideas
* Check that the project meets the GSoC Program Goals
* If you are willing to mentor one of these ideas, add your name and email to the idea.
* If you are an interested student, add your name and email next to the idea. It is OK to have several students interested in one idea.
* Aspiring students need to get in touch with the xen.org community manager via community.manager@xen.org to register their interest

=== Peer Review Goals ===
We strongly recommend and invite project proposers and project mentors to review each other's proposals. When you review, please look out for:
* Can a student get going and started with the information in the project description?
* Are there any unstated assumptions or undefined terminology in the proposal?
* Can the project be completed in 3 months (assume that one month is needed for preparation)?
* Does the project meet the Google Summer of Code goals, which are:
** Create and release open source code for the benefit of all
** Inspire young developers to begin participating in open source development
** Help open source projects identify and bring in new developers and committers
** Provide students the opportunity to do work related to their academic pursuits (think "flip bits, not burgers")
** Give students more exposure to real-world software development scenarios (e.g., distributed development, software licensing questions, mailing-list etiquette)

=== Peer Review Conventions ===
The GSoC Project template used to encode GSoC projects contains some review functionality. Please read the Template Documentation before you add a template, and please use the conventions below to make comments:
<pre>
|Review=(delete as addressed)
* {{Comment|~~~~:}} Comment 1
* {{Comment|~~~~:}} Comment 2
</pre>
== Key Google Pages ==

Google Summer of Code 2013 is on (see the [http://google-opensource.blogspot.co.uk/2013/02/flip-bits-not-burgers-google-summer-of.html announcement]). Xen.org is intending to apply as a Mentoring Organization. Stay posted.

* [http://google-opensource.blogspot.co.uk/2013/02/flip-bits-not-burgers-google-summer-of.html GSoC announcement]
* [http://www.google-melange.com/gsoc/homepage/google/gsoc2013 GSoC Homepage]
=== Timeline ===

* '''March 18, 19:00 UTC:''' Mentoring organizations can begin submitting applications to Google.
* '''March 29, 19:00 UTC:''' Mentoring organization application deadline.
* '''April 1 - 5:''' Google program administrators review organization applications.
* '''April 8, 19:00 UTC:''' List of accepted mentoring organizations published on the Google Summer of Code 2013 site.
* '''April 9 - 21:''' Would-be student participants discuss application ideas with mentoring organizations.
* '''April 22, 19:00 UTC:''' Student application period opens.
* '''May 3, 19:00 UTC:''' Student application deadline.
== Community Reviewed Project List ==

This section contains GSoC projects that have been reviewed by Xen Maintainers and Committers. Community members are free to add their own project ideas, but they need to add them in the [[#Project Ideas that Need Review|Project Ideas that Need Review]] section of this document.

{{InfoLeft|This section contains peer reviewed projects that have been selected based on the following criteria:
* A diverse list of projects, covering different levels of difficulty and required skills
* Well written (in particular, have a well written description)
* Are peer reviewed and debated
* Contain a diverse set of mentors
* Are well presented (i.e. the page looks good)

'''If your project did not make it into this list, it does not mean it will be excluded. It merely is not one of the projects that were ready when we applied for GSoC. Please add projects into [[#Project Ideas that Need Review|Project Ideas that Need Review]].'''
}}
{{GSoC Project
|Project=Microcode uploader implementation in Xen hypervisor
|Anchor=microcode-uploader
|Steps=
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|Desc=
Intel is working on an early implementation in which the microcode binary is appended to the initrd image. The kernel scans for the appropriate magic constant (http://thread.gmane.org/gmane.linux.kernel/1413384; it looks for "kernel/x86/microcode/GenuineIntel.bin") and loads the microcode very early. This is all done in the Linux kernel, but the Xen hypervisor currently does not do the same.

The scope of the work can be split up into two parts:
# Just do the extraction of the microcode from the initial ramdisk binary (aka initrd) and apply it. This can be done during the parsing of the dom0 initial ramdisk. The hypervisor already has the functionality to apply microcode from multiboot modules; this part would add code to parse the initrd image (a minimal sketch of that scanning step follows this list).
# Do it during very early bootup - which is why the early microcode work started - to deal with CPUs which don't expose certain CPUID flags because they need a microcode update. This part of the work is much more difficult, as it would involve working only with early bootup pagetables. It has to be done ''before'' the Xen hypervisor sets up its own pagetables, as some of the fixes the microcode carries may be needed for the CPU to do PSE properly.
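The first part boils down to locating a well-known path inside the dom0 initrd (a cpio archive) and handing the blob that follows to the existing microcode-apply code. The sketch below is only an illustration of that scanning step, not existing Xen hypervisor code; the function name and the assumption that the initrd is already mapped are made up for the example:

<source lang="c">
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: scan an already-mapped initrd image for the
 * early-microcode cpio member.  A real patch would walk the cpio
 * "newc" headers properly; this naive byte search only illustrates
 * locating the well-known path inside dom0's initrd. */
static const char *find_microcode_member(const char *initrd, size_t len)
{
    static const char path[] = "kernel/x86/microcode/GenuineIntel.bin";
    size_t i;

    for ( i = 0; i + sizeof(path) <= len; i++ )
        if ( memcmp(initrd + i, path, sizeof(path) - 1) == 0 )
            return initrd + i;  /* the cpio header preceding this gives the size */

    return NULL;
}
</source>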
|Outcomes=Patch to Xen hypervisor to take advantage of this.
|Review=
* {{Comment|[[User:Ijc|Ijc]] 09:08, 8 February 2013 (UTC):}} It is not clear if this proposal is suggesting to add microcode loader support to the dom0 kernel or the hypervisor. It seems to be during the dom0 loader; however, Linux already has this support since Intel have effectively completed the project described by the first link above, and extending this to Xen would involve the same objections from upstream as they had to the original microcode patches. In any case, Xen also already has support for CPU microcode loading very early on, which is much better than doing it from dom0 (which is arguably too late). The only useful extension I can see to the existing functionality is to add support to Xen for parsing dom0's initrd to pull out the microcode blob instead of obtaining it from the multiboot modules as is currently supported. Phase 2 here just isn't necessary; both Linux and Xen already contain the code described.
}}
{{GSoC Project
|Project=Virtual NUMA for Xen guests
|Anchor=vnuma
|Difficulty=Medium
|Date=12/12/2012
|Contact=Dario Faggioli <dario.faggioli@citrix.com>
|Skills=C programming, computer architecture, virtualization concepts
|Desc=
NUMA ([http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access Non-Uniform Memory Access]) systems are advanced server platforms comprising multiple ''nodes''. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are therefore not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

Ideally, each VM should have its memory allocated out of just one node and, as long as its vCPUs also run there, both throughput and latency are optimal. However, in cases where a VM ends up having its memory allocated from multiple nodes, we should inform it that it is running on a NUMA platform: a virtual NUMA. This can be very important, especially for some specific workloads (for instance, HPC applications). In fact, if the guest OS and applications have any NUMA support, exporting the virtual topology is the only way to make that support effective, and it can fill, at least to some extent, the performance gap introduced by having to distribute a guest over more than one node. Just for reference, this feature, under the name of vNUMA, is one of the key and most advertised features of VMware vSphere 5 ([http://cto.vmware.com/vnuma-what-it-is-and-why-it-matters/ vNUMA: what it is and why it matters]).

This project fits into the effort the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: [[Xen NUMA Roadmap]]
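To give an idea of the kind of information that would have to be designed and exposed, the sketch below shows one possible, purely hypothetical description of a guest's virtual NUMA topology on the toolstack side. The structure name and fields are illustrative assumptions and do not correspond to any existing libxl or Xen interface:

<source lang="c">
#include <stdint.h>

/* Hypothetical description of a guest's virtual NUMA topology.  It only
 * illustrates the kind of data Phase 1 of this project would have to
 * pin down in the design discussion with the community. */
struct vnuma_topology {
    unsigned int nr_vnodes;       /* number of virtual NUMA nodes          */
    uint64_t *vnode_size_mb;      /* memory size of each virtual node      */
    unsigned int *vcpu_to_vnode;  /* vCPU -> virtual node map              */
    unsigned int *vnode_to_pnode; /* virtual node -> physical node map     */
    unsigned int *vdistance;      /* nr_vnodes x nr_vnodes distance matrix */
};
</source>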
|Steps=The work on the project can be subdivided into the following phases:
* Phase 1: identify the constraints that introducing virtual NUMA would impose on the other components of the Xen architecture (or, vice versa, the constraints that the existing components of the Xen architecture would impose on virtual NUMA). Put together a design coherent with these constraints and share it with the Xen development community to get feedback on it;
* Phase 2: implement virtual NUMA for Xen PV guests;
* Phase 3: implement virtual NUMA for Xen HVM guests.
|Outcomes=The candidate is expected to produce a set of patch series (one patch series for each phase of the project), send them to the Xen development mailing list and follow the typical Open Source process for having them upstreamed in Xen.
|References=Useful references are inlined in the project description. Notice that having a NUMA test machine handy would be really useful for this project. However, if that is not the case, solutions will be found to allow the participant to properly test the code.
}}
{{GSoC Project
|Project=NUMA aware ballooning for Xen
|Anchor=numa-ballooning
|Difficulty=Medium
|Date=12/12/2012
|Contact=Dario Faggioli <dario.faggioli@citrix.com>
|Skills=C programming, virtualization concepts
|Desc=NUMA ([http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access Non-Uniform Memory Access]) systems are advanced server platforms comprising multiple ''nodes''. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are therefore not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

When it comes to memory, Xen offers a set of different mechanisms for over-committing the host memory; the most common, widely known and utilised one is ballooning. This has non-trivial interference with NUMA friendliness. For instance, when freeing some memory, current ballooning implementations try to ''balloon down'' existing guests, but that happens without any knowledge or consideration of which node(s) the freed memory will end up on. As a result, we may be able to create the new domain, but not to place all its memory on a single node, as ballooning could well have freed half of the space on one node and half on another.

This project is therefore meant to "teach" ballooning how to make space "node-wise", i.e., to balloon down the VMs that would allow the new guest to fit into just one node (see the sketch below).

This project fits into the effort the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: [[Xen NUMA Roadmap]]
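As a rough illustration of what making space "node-wise" means, the sketch below picks the node on which ballooning down existing guests could free enough memory for a new guest. Everything here is hypothetical (the structure, the function, and where such logic would live); finding the real decision points is exactly what Phase 2 below is meant to do:

<source lang="c">
#include <stdint.h>

/* Hypothetical per-node view: free memory plus memory that resident
 * guests could plausibly give back via ballooning. */
struct node_info {
    uint64_t free_mb;
    uint64_t balloonable_mb;
};

/* Pick the node on which ballooning should make room for a new guest
 * of new_guest_mb, preferring the node that needs the least ballooning,
 * or return -1 if no single node can host the guest. */
static int pick_target_node(const struct node_info *nodes, int nr_nodes,
                            uint64_t new_guest_mb)
{
    int best = -1;
    uint64_t best_to_reclaim = UINT64_MAX;

    for ( int i = 0; i < nr_nodes; i++ )
    {
        uint64_t to_reclaim = nodes[i].free_mb >= new_guest_mb
                              ? 0 : new_guest_mb - nodes[i].free_mb;

        if ( to_reclaim > nodes[i].balloonable_mb )
            continue;  /* this node cannot make enough room */
        if ( to_reclaim < best_to_reclaim )
        {
            best = i;
            best_to_reclaim = to_reclaim;
        }
    }
    return best;
}
</source>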
|Steps=The work on the project can be subdivided into the following phases:
* Phase 1: understand the existing ballooning algorithms and code. While at it, check whether the currently available documentation (both on the Xen Wiki and in the source tree) is up to date and aligned with the actual code behaviour and, if not, fix it;
* Phase 2: identify where to act to achieve what the project requires in the most effective way, namely: the ballooning code in the hypervisor? The ballooning driver in the guest? Both?
* Phase 3: modify the ballooning algorithms so that memory is reclaimed node-wise.
|Outcomes=The candidate is expected to produce a set of patch series (one patch series for each phase of the project), send them to the Xen development mailing list and follow the typical Open Source process for having them upstreamed in Xen.
|References=Useful references are inlined in the project description. Notice that having a NUMA test machine handy would be really useful. However, if that is not the case, solutions will be found to allow the participant to properly test the code.
}}
{{GSoC Project
|Project=Temporal Isolation and Multiprocessor Support in the SEDF Scheduler
|Anchor=sedf-improvements
|Difficulty=Basic to Medium
|Desc=
Moreover, Xen ships with an implementation of one of the most famous and efficient real-time scheduling algorithms, [http://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling Earliest Deadline First] (called SEDF in Xen), and real-time support is a key feature for a successful embedded hypervisor. Such an advanced scheduling policy, if implemented correctly, is a great advancement and provides much more flexibility than vCPU pinning alone (which is what most embedded hypervisors do to guarantee real-time performance and isolation).

However, SEDF, the EDF implementation in Xen, suffers from some rough edges. As of now, SEDF deals with events such as a vCPU blocking (in general, stopping running) and unblocking (in general, resuming running) by trying (and failing!) to special-case all the possible situations, resulting in code that is rather complicated, ugly, inefficient and hard to maintain. Unified approaches have been proposed for handling blocking and unblocking in EDF, while still guaranteeing temporal isolation among different vCPUs.

SEDF also lacks proper multiprocessor support, meaning that it does not properly handle SMP systems unless vCPUs are specifically and statically pinned by the user. This is a big limitation of the current implementation, especially since EDF can work well without imposing this constraint, providing much more flexibility and efficiency in exploiting the system's resources to the fullest.
Therefore, this project aims at extending the SEDF scheduler by turning it into a proper multiprocessor and temporal-isolation enabled scheduling solution. For temporal isolation, among the various solutions proposed in the real-time academic literature, one that is very effective and yet very simple to implement is the Constant BandWidth Server algorithm (CBS, [http://xoomer.virgilio.it/lucabe72/pshare/pshare.html [1]], [http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=739726&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D739726 [2]], [http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1137390&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1137390 [3]]). For multiprocessing, just adopting a different approach in managing the scheduling ready queues (e.g., having one queue serve multiple pCPUs) would be enough. Of course, envisioning and implementing mechanisms for migrating vCPUs among different queues would be even better. A small sketch of the core CBS rule is given below.
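At its heart, CBS is a simple rule applied when a vCPU wakes up, deciding whether its current budget and deadline can be kept or must be recharged; that rule is what provides temporal isolation. The snippet below is a minimal, illustrative rendering of that rule; the types and the structure are assumptions for the example and do not correspond to the actual SEDF code:

<source lang="c">
#include <stdint.h>

typedef int64_t s_time_t;   /* time in nanoseconds, for illustration */

/* Hypothetical per-vCPU CBS state. */
struct cbs_vcpu {
    s_time_t period;        /* server period P               */
    s_time_t budget;        /* maximum budget Q per period   */
    s_time_t cur_budget;    /* budget left in current period */
    s_time_t cur_deadline;  /* current absolute deadline     */
};

/* CBS wake-up rule: if using the leftover budget before the current
 * deadline would exceed the reserved bandwidth Q/P, start a fresh
 * period; otherwise keep the current budget and deadline. */
static void cbs_wake(struct cbs_vcpu *v, s_time_t now)
{
    if ( v->cur_deadline <= now ||
         v->cur_budget * v->period > v->budget * (v->cur_deadline - now) )
    {
        v->cur_deadline = now + v->period;
        v->cur_budget = v->budget;
    }
}
</source>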
|Steps=The work on the project can be subdivided into the following phases:
* Phase 1: study and understand the CBS algorithm, and figure out the differences between it and the current SEDF implementation;
* Phase 2: get rid of all the special cases for dealing with vCPU blocking and unblocking and implement CBS on top of the existing SEDF code. Completing this phase means having successfully enabled proper temporal isolation within SEDF;
* Phase 3: instead of using one scheduling run-queue per physical processor (pCPU), use only one per "set of pCPUs". For instance, one run-queue for all the pCPUs that share a common L3 cache, as credit2, another scheduler present in Xen, already does. Completing this phase means having turned SEDF into a decent enough multiprocessor-enabled scheduler;
* Phase 4 [Optional]: envision and implement a mechanism for balancing and migrating vCPUs among different run-queues. Completing this phase means having turned SEDF into a full-fledged multiprocessor-enabled scheduler.
|Outcomes=The candidate is expected to produce a set of patch series, more specifically one series for each phase of the project, send them to the Xen development mailing list and follow the typical Open Source process for having them upstreamed in Xen.

Having reached a good level of temporal isolation must be verified by running some typical real-time workload (e.g., [https://rt.wiki.kernel.org/index.php/Cyclictest Cyclictest] and [https://github.com/gbagnoli/rt-app rt-app]) inside a VM, and checking that its timing requirements are respected despite the interference of other VMs. Correct exploitation of multiprocessor platforms must be verified by making sure the vCPUs automatically spread around, instead of all being stuck on just one pCPU.
|References=Useful references are inlined in the project description
|Review=(delete as addressed)
* {{Comment|[[User:Ijc|Ijc]] 09:36, 11 February 2013 (UTC):}} What is the outcome/deliverable for stage 1 (investigate CBS)? Is CBS the only option here or does the candidate need to evaluate other techniques? Is CBS the "unified approach ... enabling blocking" which the description refers to? Are there any particular success criteria for the other phases, e.g. specific performance characteristics or benchmark results which must be achieved? Where does the "implemented multiprocessor support" appear in the phases? Is it a side effect of CBS or is it phase 3/4?
* {{Comment|[[User:Dariof|Dariof]], 13 March 2013 (Replying to [[User:Ijc|Ijc]]):}} Abstracted the part about the CBS algorithm out of the "Steps" section, so that it is clearer that this is the solution allowing us to kill the special cases and enable proper temporal isolation (and that there is no need to investigate a different algorithm). Clarified a bit more, both in the general description and in the description of the various phases, what each step contributes to (to make it clear that turning SEDF into an SMP scheduler is not a consequence of CBS; it rather is what is done in phases 3 and 4). Gave some directions about validation and benchmarking.
}}
{{GSoC Project
|Project=Refactor Linux hotplug scripts
|Anchor=linux-hotplug-scripts
|Date=15/11/2012
|Contact=Roger Pau Monné <roger.pau@citrix.com>
|Difficulty=Medium
|Skills=Knowledge of C and a good level of shell scripting
|Desc=
Current Linux hotplug scripts are all entangled, which makes them really difficult to understand or modify. The purpose of hotplug scripts is to give end users the chance to "easily" support different configurations for Xen devices.

Linux hotplug scripts should be analysed, providing a good description of what each hotplug script is doing. After this, the scripts should be cleaned up, putting common pieces of code in shared files across all scripts. A consistent coding style should be applied to all of them when the refactoring is finished.

Also, a new hotplug implementation is currently under review [http://lists.xen.org/archives/html/xen-devel/2013-01/msg01962.html [1]], which will allow the user to create more complex hotplug scripts that offer extended functionality. Optionally the student can implement support for other backends using the new hotplug interface (GlusterFS, Ceph...).
|Steps=The work on the project can be subdivided into the following phases:
* Phase 1: analyse the hotplug scripts and determine what each script does internally in order to attach the device
* Phase 2: move common bits of code to shared files, providing a sane API
* Phase 3: refactor the hotplug scripts to use this new API, and clean the code, applying a uniform coding style
* Phase 4 [Optional]: create hotplug scripts for new backends (GlusterFS, Ceph)
|Outcomes=The candidate is expected to produce at least one series of patches that contains the new internal hotplug API and the refactoring of the old scripts, send it to the Xen development mailing list and follow the typical Open Source process for having it upstreamed in Xen.
|References=[http://xenbits.xen.org/gitweb/?p=xen.git;a=tree;f=tools/hotplug/Linux;hb=HEAD Source of current scripts]
|Review=(delete as addressed)
* {{Comment|[[User:Ijc|Ijc]] 09:49, 11 February 2013 (UTC):}} Can we include a specific requirement to not just analyse but also document the behaviour of the scripts, both the high-level semantics of each class of script (vif, block etc) and the specifics of each (e.g. vif-{bridge,route,etc})? Ideally this would integrate with existing pages like [[Xen Networking]]. Should there also be a focus on customisability? I think it is expected that people will customise the scripts to suit their environments, but due to the complexity a lot of folks don't. A refactoring project is not inherently that exciting, so I'm not sure how much it would appeal to students; perhaps Phase 4 could be non-optional and require the creation of at least one new set of hotplug scripts, as a kind of concrete end goal to all the refactoring. Not sure if that explodes the scope/time required out too far, though. Ideally a new network script would be included too (to cover both main sets of bases), but we already cover most of the interesting cases there I think; openvswitch perhaps? I'm a little bit concerned that this project might also be chasing a moving target as the hotplug mechanism is refactored, but perhaps much of that will be finished by the time GSoC starts, and having the person doing that refactoring also mentor the project should help minimise problems.
}}
{{GSoC Project
|Project=VM Snapshots
|Anchor=vm-snapshots
|Date=16/01/2013
|Contact=Anthony Perard <anthony.perard@citrix.com>
|Difficulty=Medium
|Skills=C programming
|Steps=
Stretch goals:
* Add VM snapshot functionality to the libxl save/restore and migration functions
* Evaluate QEMU QMP disk mirroring capabilities (QMP command "drive-mirror")
* Implement support for the QMP drive-mirror command in libxl (see the sketch below)
* Hook disk mirroring into the libxl VM save/restore and migration functions (so that migrating a VM from one host to another can also migrate the VM's disk between the two hosts).
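For orientation, QEMU's QMP "drive-mirror" command takes, among other arguments, a device, a target and a sync mode. The sketch below merely formats such a command as a string to show what libxl would have to send; it is not the real libxl QMP machinery, and the example device and target names are made up:

<source lang="c">
#include <stddef.h>
#include <stdio.h>

/* Format a QMP "drive-mirror" command for a given disk into buf.
 * Illustrative only: real code would go through libxl's QMP client
 * rather than hand-building JSON strings. */
static int format_drive_mirror(char *buf, size_t len,
                               const char *device, const char *target)
{
    return snprintf(buf, len,
                    "{ \"execute\": \"drive-mirror\","
                    "  \"arguments\": { \"device\": \"%s\","
                    "                   \"target\": \"%s\","
                    "                   \"sync\": \"full\" } }",
                    device, target);
}

/* Example (hypothetical names):
 *   format_drive_mirror(buf, sizeof(buf), "ide0-hd0", "/mnt/dst/disk0.qcow2");
 */
</source>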
|Outcomes=
Stretch goals: xl can automatically save a disk snapshot at the time of saving a VM. xl can also mirror the disk of a VM between two hosts, and can do so automatically at the time of VM migration.
|References=[[XL]], [http://www.qemu.org QEMU]
|Review=(delete as addressed)
* {{Comment|[[User:Ijc|Ijc]] 10:05, 11 February 2013 (UTC):}} Although this project is specifically targeting the QEMU snapshot mechanism, we should require that the libxl API which is exposed is general enough to be applied to other disk backends (blktap3, lvm snapshot, btrfs, etc.).
}}
{{GSoC Project
|Project=Fuzz testing Xen with Mirage
|Anchor=fuzz-testing-mirage
|References=
* http://github.com/mirage/mirage-platform -- the mirage xen and unix runtimes
* http://github.com/mirage/mirage-skeleton -- example mirage programs
|Review=(delete as addressed)
* {{Comment|[[User:Ijc|Ijc]] 10:05, 11 February 2013 (UTC):}} There are some interesting challenges which aren't mentioned here, specifically:
** reproducibility of a given run leading to a crash
** how to handle guests which crash themselves while fuzzing, e.g. management of random seeds and respawning, measuring progress and perhaps snapshotting and restarting along multiple paths (so a single crash doesn't wipe out all the interesting state built up by the fuzzer up to that point)
** logging of what is going on in the face of hosts which may crash when the fuzzer "succeeds".
* {{Comment|[[User:Ijc|Ijc]] 10:05, 11 February 2013 (UTC):}} It would also be useful to take inspiration from the [http://codemonkey.org.uk/projects/trinity/ trinity] Linux system call fuzzer, which encodes a certain level of knowledge of what the inputs to each system/hypercall should look like, such that it can probe "interesting" (i.e. limit) values with more than random probability and also provide plausible input for some arguments so as to not continually mask errors in the other options (e.g. with some probability pass a valid socket to the int fd argument of a call which expects a socket, so that the other arguments have some chance of even being evaluated). Likewise, for calls which take a pointer you would want to make sure the fuzzer would occasionally (or even mostly) pass in valid pointers, such that the contents of the pointed-to struct can also be fuzzed.
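To make the trinity-style idea in the comment above concrete, the sketch below biases one hypercall argument towards boundary values. It is a language-agnostic illustration written in C (the fuzzer in this project would actually be written in OCaml on top of Mirage), and every name in it is hypothetical:

<source lang="c">
#include <stdint.h>
#include <stdlib.h>

/* Pick a value for one hypercall argument, preferring "interesting"
 * boundary values over purely random ones, in the spirit of trinity. */
static uint64_t pick_argument(void)
{
    static const uint64_t interesting[] = {
        0, 1, (uint64_t)-1, 0x7fffffffULL, 0x80000000ULL,
        0xffffffffULL, 0x8000000000000000ULL,
    };

    if ( rand() % 4 == 0 )  /* 25% of the time: a boundary value */
        return interesting[rand() % (sizeof(interesting) / sizeof(interesting[0]))];

    /* Otherwise: a fully random 64-bit value. */
    return ((uint64_t)rand() << 32) ^ (uint64_t)rand();
}
</source>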
}}
{{GSoC Project
|Project=Testing PV and HVM installs of Debian using debian-installer
|References=
The introduction to the Xen automatic test system is at http://blog.xen.org/index.php/2013/02/02/xen-automatic-test-system-osstest/
|Review=(delete as addressed)
* {{Comment|[[User:Ijc|Ijc]] 10:18, 11 February 2013 (UTC):}} I'd be happy to co-advise on the D-I aspects of this. In Task 2, "kernel other than the one which comes from the Debian repository", do you really mean "from the dom0 filesystem"? The D-I kernels do come from the Debian repo. Also, is the intention to support testing guests which use pygrub, since that fits naturally with the D-I approach? Is the intention to only do netinst installs, or is there scope to do D-I installs from ISO images too?
}}
== Project Ideas that Need Review ==
{{GSoC Project
|Project=Allowing guests to boot with a passed-through GPU as the primary display
}}
{{GSoC Project
|Project=Mini-os for ARM (autotranslated) guests
|Anchor=mini-os
|Date=2013-02-13
|Contact=Ian Campbell <ian.campbell@citrix.com>
|Difficulty=Difficult
|Skills=C programming. ARM (and optionally x86) assembly language debugging. Low-level kernel understanding (e.g. page tables)
|Desc=
[[Mini-OS]] is a simple reference PV guest operating system which serves both as an example of how to write a PV guest and as the base operating system for [[StubDom|Stub Domains]] such as [[Device Model Stub Domains]] and xenstored stub domains. Parts of Mini-OS are also used in projects such as [http://openmirage.org/ Mirage] and other exo-kernel projects.

Mini-OS supports a single address space application running directly in the bare Virtual Machine environment and contains PV drivers for disk, net and console, as well as a simple co-operative threading model.

Currently Mini-OS supports only x86 PV guests; however, we would also like to eventually support stub domains (in particular xenstored stub domains) and projects such as Mirage on the [[Xen_ARMv7_with_Virtualization_Extensions|ARM port]] of Xen. This project would involve taking the existing Mini-OS code (see [http://xenbits.xen.org/hg/xen-unstable.hg/file/tip/extras/mini-os ''extras/mini-os''] in the Xen source code) and extending it to work in the ARM PV environment.

As well as authoring the initial bring-up code targeting ARM, this will also involve modifying the rest of Mini-OS to cope with the fact that Xen ARM guests do not use PV paging but instead rely on hardware virtual paging. This will require modifications to some of the core helper routines and PV drivers so that they understand the ''autotranslated physmap'' concept (which refers to the idea that guest addresses are automatically translated into host addresses, compared with x86 PV domains which must perform this translation themselves, using the physmap, or ''p2m'', which is part of the x86 PV paging interfaces).
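The practical impact of ''autotranslated physmap'' on Mini-OS code can be illustrated with a small sketch: the pfn-to-mfn translation that x86 PV code does explicitly becomes an identity operation for autotranslated guests. The helper below is illustrative only; the names are assumptions, not the real Mini-OS API, and it presumes a p2m table like the one x86 PV Mini-OS keeps:

<source lang="c">
/* Illustrative only: how a Mini-OS style helper might hide the
 * difference between classic x86 PV translation and autotranslated
 * guests (ARM, x86 HVM).  The names below are assumptions. */

extern unsigned long *phys_to_machine_mapping;  /* x86 PV p2m table           */
extern int xen_feature_auto_translated_physmap; /* derived from XENFEAT_* bits */

static inline unsigned long pfn_to_mfn_example(unsigned long pfn)
{
    if ( xen_feature_auto_translated_physmap )
        return pfn;  /* guest frame numbers are already "machine" ones */

    return phys_to_machine_mapping[pfn];  /* classic x86 PV lookup */
}
</source>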
As an extension, once Mini-OS has been extended to work in the ARM environment using ''autotranslated physmap'', this should allow a relatively easy port to an x86 HVM environment, which also differs from x86 PV in its use of autotranslated physmap. This would be useful for running fuzz testers, such as the one proposed [[#fuzz-testing-mirage|above]], as well as other test applications.
|Outcomes=Mini-OS based domains running
|Steps=
* Simple Hello World on ARM
* stub C or ocaml xenstored running on ARM.
* Simple Hello World on x86 HVM.
* ...TBD...
|References=Inline
}}
== Useful Resources ==

Here are some links to guides, tools, development flows etc.
* Xen overview: http://wiki.xen.org/wiki/Xen_Overview
* Submitting Xen patches (with Mercurial): http://wiki.xen.org/wiki/Submitting_Xen_Patches
* Submitting Xen patches with Git: http://wiki.xen.org/wiki/Submitting_Xen_Patches_with_Git
* Xen beginners guide: http://wiki.xen.org/wiki/Xen_Beginners_Guide
* Introduction to Git: http://git-scm.com/documentation
* Introduction to Mercurial: http://mercurial.selenic.com/
[[Category:GSoC]]
[[Category:GSoC_2013]]
[[Category:Archived]]
[[Category:Internships]]