OutreachProgramForWomen/Round7
= Xen Project and OPW =

[[File:opw-poster-2013-December-March.png|right|border|400px]]The Xen Project Advisory Board will be sponsoring one intern for '''Round 7''' of the [https://wiki.gnome.org/OutreachProgramForWomen/ Gnome Outreach Program For Women], which runs from December 2013 to March 2014. This internship program is specifically targeted at women: our goal is to increase women's participation in the Xen Project. It is a continuation of the very successful GNOME Outreach Program for Women, and we are running the program in conjunction with GNOME and other prominent open source projects.

= Information For Applicants =

== How To Apply ==

The official application period for OPW Round 7 is October 1st to November 11th. Please fill out your [https://wiki.gnome.org/OutreachProgramForWomen#Application_Process initial application] and complete your Xen patch by November 11th. Applicants who do not complete the first patch will not be considered for an internship. Please take a look at our [[OutreachProgramForWomen/OPWApply|application FAQ]] for more info on how to fill out your initial application. Applicants will be notified by November 25th if they have been accepted.

If you are interested in being a Xen intern, please:

* Join the [http://www.xenproject.org/help/mailing-list.html xen-devel] mailing list. Depending on the project you choose, you may also need to join ''xen-api'' or ''cl-mirage'' (do check the information in the project). Check the [http://www.xenproject.org/help/mailing-list.html list directory].
* Join the #opw IRC channel on irc.gnome.org
* Join the #xen-opw IRC channel on freenode.net
* Read our [[OutreachProgramForWomen/OPWApply|instructions for applying]], and apply by Nov 11th.
* Ask one of our friendly developers for a bite-sized bug or work item. This can be reviewing somebody's code, submitting a patch, or similar, and will need to be done by Nov 11th.

== Schedule ==

* October 1: program announced and application form made available
* October 1 - November 11: applicants need to get in touch with at least one project and make a contribution to it
* November 11: application deadline
* November 25: accepted participants announced
* December 10 - March 10: internship period

= Community Reviewed Project List =

This section contains projects that have been reviewed by Xen Maintainers and Committers. Community members are free to add their own project ideas (see [[#Unreviewed Project Ideas|Unreviewed Project Ideas]]). Peer reviewed projects have been selected based on the following criteria:

* Contain a diverse set of mentors

'''If your project did not make it into this list, it does not mean it will be excluded. It merely is not one of the projects that were ready for the program. Please add projects into [[#Unreviewed Project Ideas|Unreviewed Project Ideas]].'''

}}

{{GSoC Project

|Project=Microcode uploader implementation in Xen hypervisor

|Anchor=microcode-uploader

|Steps=

|Difficulty=Medium to Hard

|Skills=Knowledge of C for Phase #1. For Phase #2, potentially x86 assembler and deep knowledge of early bootup. Familiarity with the Intel SDM is a plus.

|Date=02/08/2012

|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

|Desc=
Intel is working on an early implementation where the microcode binary would be appended to the initrd image. The kernel would scan for the appropriate magic constant (http://thread.gmane.org/gmane.linux.kernel/1413384; it looks for "kernel/x86/microcode/GenuineIntel.bin") and load the microcode very early. This is all done in the Linux kernel code, but we currently do not do that in the Xen hypervisor.

The scope of the work can be split up into two phases:

# just do the extraction of the microcode from the initial ramdisk binary (aka initrd) and apply it. This can be done during the parsing of the dom0 initial ramdisk. The hypervisor already has the functionality to apply microcode from the multiboot targets; this phase would add code to parse the initrd image.
# do it during very early bootup - which is why the early microcode work started - to deal with CPUs which do not expose certain CPUID flags because they need a microcode update. This part of the work is much more difficult, as it would involve working only with the early boot pagetables. It has to be done ''before'' the Xen hypervisor sets up its own pagetables, as some of the fixes carried by the microcode can be required for the CPU to do PSE properly.

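For a feel of what Phase #1 involves, here is a rough sketch of scanning an in-memory initrd (a cpio "newc" archive) for the "kernel/x86/microcode/GenuineIntel.bin" entry. The function and helper names are invented for illustration and this is not existing Xen hypervisor code; a real patch would hook into the hypervisor's dom0 initrd handling and reuse its existing microcode-application code.

<pre>
#include <stddef.h>
#include <string.h>

/* Parse an 8-character ASCII hex field from a cpio "newc" header. */
static unsigned long hex8(const char *p)
{
    unsigned long v = 0;
    int i;

    for (i = 0; i < 8; i++) {
        char c = p[i];
        v <<= 4;
        if (c >= '0' && c <= '9')
            v += c - '0';
        else if (c >= 'a' && c <= 'f')
            v += c - 'a' + 10;
        else if (c >= 'A' && c <= 'F')
            v += c - 'A' + 10;
    }
    return v;
}

/*
 * Scan an in-memory initrd (cpio "newc" archive) for the Intel microcode
 * entry and return a pointer to its data, or NULL if it is not found.
 * Purely illustrative - error handling and bounds checking are minimal.
 */
static const void *find_ucode(const void *initrd, size_t len, size_t *sz)
{
    const char *p = initrd;
    const char *end = p + len;

    while (p + 110 <= end && memcmp(p, "070701", 6) == 0) {
        size_t filesize = hex8(p + 54);   /* c_filesize field */
        size_t namesize = hex8(p + 94);   /* c_namesize field */
        const char *name = p + 110;
        /* Header plus name is padded to a multiple of 4 bytes. */
        const char *data = p + ((110 + namesize + 3) & ~(size_t)3);

        if (strcmp(name, "TRAILER!!!") == 0)
            break;
        if (strcmp(name, "kernel/x86/microcode/GenuineIntel.bin") == 0) {
            *sz = filesize;
            return data;
        }
        /* File data is padded to a multiple of 4 bytes as well. */
        p = data + ((filesize + 3) & ~(size_t)3);
    }
    return NULL;
}
</pre>
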
|Outcomes=A patch to the Xen hypervisor to take advantage of this mechanism, so that the Xen hypervisor can do this similarly to Linux.

|References=The Intel SDM 3a (http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.pdf) gives an excellent overview of what microcode is and how to update it. The mechanism for bundling the microcode binary with the initrd, along with the initial implementation in the Linux kernel to take advantage of this, is explained here: http://thread.gmane.org/gmane.linux.kernel/1413384

|Review=

* {{Comment|[[User:Ijc|Ijc]] 09:08, 8 February 2013 (UTC):}} It is not clear if this proposal is suggesting to add microcode loader support to the dom0 kernel or the hypervisor. It seems to target the dom0 loader; however, Linux already has this support since Intel have effectively completed the project described by the first link above, and extending this to Xen would involve the same objections from upstream as they had to the original microcode patches. In any case Xen also already has support for CPU microcode loading very early on, which is much better than doing it from dom0 (which is arguably too late). The only useful extension I can see to the existing functionality is to add support to Xen for parsing dom0's initrd to pull out the microcode blob instead of obtaining it from the multiboot modules as is currently supported. Phase 2 here just isn't necessary; both Linux and Xen already contain the code described.

}}

{{GSoC Project

|Project=Introducing PowerClamp-like driver for Xen

|Desc=
These same arguments apply to Xen. The purpose of this project would be to implement a similar functionality in Xen, and to make it interface as well as possible with the Linux PowerClamp tools, so that the same tools could be used for both.

The basic mechanism for PowerClamp in Linux is to monitor the percentage of time spent idling. When this time goes below a user-specified threshold, it activates a high-priority real-time process to force the CPU to idle for the specified amount of time. The intern would have to figure out how to apply this to Xen's main scheduler, the Credit scheduler.

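As an illustration of the control logic involved (a minimal sketch in plain C with invented names and numbers; it is not actual PowerClamp or Xen scheduler code), the decision boils down to measuring the achieved idle percentage over a window and injecting the shortfall:

<pre>
#include <stdio.h>

/* Idle statistics sampled over a fixed control window, in milliseconds. */
struct idle_stats {
    unsigned int window_ms;   /* length of the observation window */
    unsigned int idle_ms;     /* time spent idle during that window */
};

/*
 * Decide how much forced idle time to inject in the next window so that
 * the CPU idles for at least target_pct percent of the time.  Returns the
 * number of milliseconds to keep the pCPU idle.
 */
static unsigned int idle_to_inject(const struct idle_stats *s,
                                   unsigned int target_pct)
{
    unsigned int observed_pct = s->idle_ms * 100 / s->window_ms;

    if (observed_pct >= target_pct)
        return 0;  /* already idle enough, nothing to do */

    /* Shortfall, converted back into milliseconds of forced idling. */
    return (target_pct - observed_pct) * s->window_ms / 100;
}

int main(void)
{
    struct idle_stats s = { .window_ms = 100, .idle_ms = 20 };

    /* Ask for 40% idle time: with 20 ms observed we must inject 20 ms more. */
    printf("inject %u ms of idle time\n", idle_to_inject(&s, 40));
    return 0;
}
</pre>

In Xen, a decision like this would have to live in or next to the Credit scheduler, and the injected idle time would have to be enforced per pCPU, which is where the tricky synchronisation mentioned below comes in.
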
The idea is fairly straightforward, but working with the scheduler involves dealing with very tricky race conditions and deadlocks. This should be a fairly straightforward project for a very clever intern looking for a fun problem to solve. It should also provide a good taste of what operating-system level programming is like.

|Outcomes=

* Mechanism to enforce a certain percentage of idle time in Xen

The second step would be to design an appropriate interface. Are there any PowerClamp userspace tools for Linux? Does it make sense to try to integrate those tools with Xen, or should we just have this be a separate Xen feature, accessible via Xen's xl command-line interface?

|References=[http://lwn.net/Articles/528124/ LWN Article on PowerClamp]

}}

{{GSoC Project

|Project=Virtual NUMA for Xen guests

|Anchor=vnuma

|Difficulty=Medium

|Date=12/12/2012

|Contact=Dario Faggioli <dario.faggioli@citrix.com>

|Skills=C programming, computer architecture, virtualization concepts

|Desc=
NUMA ([http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access Non-Uniform Memory Access]) systems are advanced server platforms, comprising multiple ''nodes''. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

Ideally, each VM should have its memory allocated out of just one node and, as long as its vCPUs also run there, both throughput and latency are optimal. However, in cases where a VM ends up having its memory allocated from multiple nodes, we should inform it that it is running on a NUMA platform: a virtual NUMA. This could be very important, especially for some specific workloads (for instance, HPC applications). In fact, if the guest OS and application have any NUMA support, exporting the virtual topology is the only way to make that support effective, and perhaps to fill, at least to some extent, the performance gap introduced by the need to distribute the guest over more than one node. Just for reference, this feature, under the name of vNUMA, is one of the key and most advertised features of VMware vSphere 5 ([http://cto.vmware.com/vnuma-what-it-is-and-why-it-matters/ vNUMA: what it is and why it matters]).

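To make the idea more concrete, a virtual NUMA topology essentially boils down to a small description handed to the guest. The structure below is a purely hypothetical sketch of what such a description could contain; it is not an existing Xen or libxl interface, and agreeing on the real layout is part of the design work described in the phases below.

<pre>
#include <stdint.h>

#define VNUMA_MAX_NODES 8
#define VNUMA_MAX_VCPUS 128

/*
 * Hypothetical description of a guest's virtual NUMA topology.  Something
 * along these lines would have to be agreed between the toolstack, the
 * hypervisor and the guest when designing the real interface.
 */
struct vnuma_topology {
    uint32_t nr_vnodes;                        /* number of virtual nodes */
    uint64_t vmemsize[VNUMA_MAX_NODES];        /* memory per vnode, in bytes */
    uint32_t vcpu_to_vnode[VNUMA_MAX_VCPUS];   /* vCPU placement */
    uint32_t vdistance[VNUMA_MAX_NODES][VNUMA_MAX_NODES]; /* access costs */
};

/* Example: a 2-vnode guest with 4 vCPUs and 2 GiB of memory per node. */
static struct vnuma_topology example = {
    .nr_vnodes = 2,
    .vmemsize = { 2ULL << 30, 2ULL << 30 },
    .vcpu_to_vnode = { 0, 0, 1, 1 },
    .vdistance = { { 10, 20 }, { 20, 10 } },
};
</pre>

The guest-visible side would then have to be exposed through whatever mechanism each guest type already uses to discover its topology, which is exactly the kind of constraint Phase 1 below is meant to pin down.
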
This project fits into the effort the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: [[Xen NUMA Roadmap]]

|Steps=The work on the project can be subdivided into the following phases:

* Phase 1: identify the constraints that introducing virtual NUMA would impose on the other components of the Xen architecture (or, vice versa, the constraints that the existing components of the Xen architecture would impose on virtual NUMA). Put together a design coherent with these constraints and share it with the Xen development community to get feedback on it;
* Phase 2: implement virtual NUMA for Xen PV guests;
* Phase 3: implement virtual NUMA for Xen HVM guests.

|Outcomes=The candidate is expected to produce a set of patch series (one patch series for each phase of the project), send them to the Xen development mailing list and follow the typical open source process for having them upstreamed in Xen.

|References=Useful references are inlined in the project description. Notice that having a NUMA testing machine handy would be really useful for this project. However, if that is not the case, solutions will be found to allow the participant to properly test the code.

}}

{{GSoC Project

|Project=NUMA aware ballooning for Xen

|Anchor=numa-ballooning

|Difficulty=Medium

|Date=12/12/2012

|Contact=Dario Faggioli <dario.faggioli@citrix.com>

|Skills=C programming, virtualization concepts

|Desc=NUMA ([http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access Non-Uniform Memory Access]) systems are advanced server platforms, comprising multiple ''nodes''. Each node contains processors and memory. An advanced memory controller allows a node to use memory from all other nodes, but when that happens, data transfer is slower than accessing local memory. Memory access times are not uniform and depend on the location of the memory and the node from which it is accessed, hence the name.

When it comes to memory, Xen offers a set of different mechanisms for over-committing the host memory, the most common, widely known and utilised one being ballooning. This has non-trivial interference with NUMA friendliness. For instance, when freeing some memory, current ballooning implementations try to ''balloon down'' existing guests, but that happens without any knowledge or consideration of which node(s) the freed memory will come from. As a result, we may be able to create the new domain, but not to place all its memory on a single node, as ballooning could well have freed half of the space on one node and half on another.

This project is therefore meant to "teach" ballooning how to try to make space "node-wise", i.e., to balloon down the VMs in a way that allows the new guest to fit into just one node.

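As a toy illustration of the decision the ballooning logic would have to make, the sketch below greedily picks balloon-down victims among the domains that actually hold memory on the target node. All names and data structures are invented for illustration; this is not actual Xen or balloon-driver code.

<pre>
#include <stdint.h>
#include <stddef.h>

/* How much memory (in MiB) each running domain holds on the target node. */
struct dom_node_usage {
    int      domid;
    uint64_t mib_on_node;   /* ballooning this domain frees memory here */
};

/*
 * Greedy, node-aware victim selection: balloon down domains that actually
 * hold memory on the node we want to free, until 'needed_mib' is reached.
 * Returns the amount that could not be satisfied (0 means success).
 */
static uint64_t plan_node_wise_balloon(const struct dom_node_usage *doms,
                                       size_t nr_doms,
                                       uint64_t needed_mib,
                                       uint64_t *to_balloon /* per domain */)
{
    size_t i;

    for (i = 0; i < nr_doms; i++)
        to_balloon[i] = 0;

    for (i = 0; i < nr_doms && needed_mib > 0; i++) {
        uint64_t take = doms[i].mib_on_node;

        if (take > needed_mib)
            take = needed_mib;
        to_balloon[i] = take;   /* ask this domain to release 'take' MiB */
        needed_mib -= take;
    }
    return needed_mib;
}
</pre>

The real interest of the project is deciding where such a policy belongs (hypervisor, toolstack or guest balloon driver, as per Phase 2 below) and how to obtain reliable per-node usage information in the first place.
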
This project fits into the effort the Xen community is making to improve the performance of Xen on NUMA systems. The full roadmap is available on this Wiki page: [[Xen NUMA Roadmap]]

|Steps=The work on the project can be subdivided into the following phases:

* Phase 1: understand the existing ballooning algorithms and code. While at it, check whether the currently available documentation (both on the Xen Wiki and in the source tree) is up to date and aligned with the actual code behaviour and, if not, fix it;
* Phase 2: identify where to act to achieve what the project requires in the most effective way, namely: the ballooning code in the hypervisor? The ballooning driver in the guest? Both?
* Phase 3: modify ballooning algorithms so that memory is reclaimed node-wise.

|Outcomes=The candidate is expected to produce a set of patch series (one patch series for each phase of the project), send them to the Xen development mailing list and follow the typical open source process for having them upstreamed in Xen.

|References=Useful references are inlined in the project description. Notice that having a NUMA testing machine handy would be really useful. However, if that is not the case, solutions will be found to allow the participant to properly test the code.

}}

{{GSoC Project

|Project=Temporal Isolation and Multiprocessor Support in the SEDF Scheduler

|Anchor=sedf-improvements

|Difficulty=Basic to Medium

|Date=08/08/2012

|Contact=Dario Faggioli <dario.faggioli@citrix.com>

|Skills=C programming, genuine interest in scheduling algorithm design and implementation

|Desc=
No matter if it is to build a [http://gigaom.com/2011/06/25/mobile-virtualization-finds-its-home-in-the-enterprise/ multi-personality mobile phone], or to [http://www.youtube.com/watch?v=j4uMdROzEGI help achieve consolidation in industrial and factory automation], embedded virtualization ([http://en.wikipedia.org/wiki/Embedded_hypervisor [1]], [http://www.ibm.com/developerworks/linux/library/l-embedded-virtualization/index.html [2]], [http://www.wirevolution.com/2012/02/18/mobile-virtualization/ [3]]) is upon us. In fact, quite a number of ''embedded hypervisors'' already exist, e.g. [http://www.windriver.com/products/hypervisor/ Wind River Hypervisor], [http://dev.b-labs.com/ CodeZero] or [http://www.sysgo.com/products/pikeos-rtos-and-virtualization-concept/ PikeOS]. Xen definitely '''is''' a ''small, fast type-1 hypervisor with support for multiple VMs'' [http://en.wikipedia.org/wiki/Embedded_hypervisor [1]], so it could be a good candidate embedded hypervisor.

Moreover, Xen offers an implementation of one of the most famous and efficient real-time scheduling algorithms, [http://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling Earliest Deadline First] (which is called SEDF in Xen), and real-time support is a key feature for a successful embedded hypervisor. Using such an advanced scheduling policy, if it is implemented correctly, is a great advancement and provides much more flexibility than only using vCPU pinning (which is what most embedded hypervisors do to guarantee real-time performance and isolation).

However, SEDF, the EDF implementation in Xen, suffers from some rough edges. As of now, SEDF deals with events such as a vCPU blocking --in general, stopping running-- and unblocking --in general, restarting running-- by trying (and failing!) to special-case all the possible situations, resulting in code that is rather complicated, ugly, inefficient and hard to maintain. Unified approaches have been proposed for enabling blocking and unblocking in EDF, while still guaranteeing temporal isolation among different vCPUs.

SEDF also lacks proper multiprocessor support, meaning that it does not properly handle SMP systems, unless vCPUs are specifically and statically pinned by the user. This is a big limitation of the current implementation, especially since EDF can work well without imposing this constraint, providing much more flexibility and efficiency in exploiting the system resources to their fullest.

Therefore, this project aims at extending the SEDF scheduler, turning it into a proper multiprocessor-capable and temporal-isolation-enabled scheduling solution. For temporal isolation, among the various solutions proposed in the real-time academic literature, one that is very effective and yet very simple to implement is the Constant Bandwidth Server algorithm (CBS, [http://xoomer.virgilio.it/lucabe72/pshare/pshare.html [1]], [http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=739726&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D739726 [2]], [http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1137390&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1137390 [3]]). For multiprocessing, just adopting a different approach to managing the scheduling ready queues (e.g., having one queue serve multiple pCPUs) would be enough. Of course, envisioning and implementing mechanisms for migrating vCPUs among different queues would be even better.

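The core CBS bookkeeping is small enough to sketch here. The fragment below uses invented names and plain integer microseconds; it only illustrates the budget/deadline rules that would replace SEDF's current special-casing, and is not actual Xen scheduler code.

<pre>
#include <stdint.h>

/* Per-vCPU CBS state: a budget Q to be consumed every period T. */
struct cbs_vcpu {
    uint64_t Q;          /* budget per period, in microseconds */
    uint64_t T;          /* period, in microseconds */
    uint64_t budget;     /* budget currently left */
    uint64_t deadline;   /* current absolute scheduling deadline */
};

/*
 * CBS wakeup rule: if the leftover budget, relative to the old deadline,
 * would exceed the reserved bandwidth Q/T, start a fresh period; otherwise
 * keep the current budget and deadline.  The comparison is cross-multiplied
 * to avoid floating point.
 */
static void cbs_wakeup(struct cbs_vcpu *v, uint64_t now)
{
    uint64_t left = (now >= v->deadline) ? 0 : v->deadline - now;

    if (v->budget * v->T >= left * v->Q) {
        v->deadline = now + v->T;
        v->budget = v->Q;
    }
}

/* Account for 'ran' microseconds of execution; postpone when exhausted. */
static void cbs_account(struct cbs_vcpu *v, uint64_t ran)
{
    v->budget = (ran >= v->budget) ? 0 : v->budget - ran;
    if (v->budget == 0) {
        v->deadline += v->T;   /* postpone the deadline by one period */
        v->budget = v->Q;      /* and replenish the budget */
    }
}
</pre>

Phases 3 and 4 below then deal with how the run-queues are organised and how vCPUs migrate among them.
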
|Steps=The work on the project can be subdivided into the following phases:

* Phase 1: study and understand the CBS algorithm, and figure out the differences between it and the current SEDF implementation;
* Phase 2: get rid of all the special cases for dealing with vCPU blocking and unblocking and implement CBS on top of the existing SEDF code. Completing this phase would mean having successfully enabled proper temporal isolation within SEDF;
* Phase 3: instead of using one scheduling run-queue per physical processor (pCPU), use only one per "set of pCPUs". For instance, one run-queue for all the pCPUs that have a common L3 cache, as credit2, another scheduler present in Xen, already does. Completing this phase would mean having turned SEDF into a decent enough multiprocessor enabled scheduler;
* Phase 4 [Optional]: envision and implement a mechanism for balancing and migrating vCPUs among different run-queues. Completing this phase would mean having turned SEDF into a full-fledged multiprocessor enabled scheduler.

|Outcomes=The candidate is expected to produce a set of patch series, more specifically one series for each phase of the project, send them to the Xen development mailing list and follow the typical open source process for having them upstreamed in Xen.

Having reached a good level of temporal isolation must be verified by running some typical real-time workload (e.g., [https://rt.wiki.kernel.org/index.php/Cyclictest Cyclictest] and [https://github.com/gbagnoli/rt-app rt-app]) inside a VM, and checking that its timing requirements are being respected, despite the interference of other VMs. Correct exploitation of multiprocessor platforms must be verified by making sure that vCPUs spread around automatically, instead of all being stuck on just one pCPU.

|References=Useful references inlined in the project description

|Review=(delete as addressed)

* {{Comment|[[User:Ijc|Ijc]] 09:36, 11 February 2013 (UTC):}} What is the outcome/deliverable for stage 1 (investigate CBS)? Is CBS the only option here or does the candidate need to evaluate other techniques? Is CBS the "Unified approaches ... enabling blocking" which the description refers to? Are there any particular success criteria for the other phases, e.g. specific performance characteristics of benchmark results which must be achieved? Where does the "implemented multiprocessor support" appear in the phases, is it a side effect of CBS or is it phase 3/4?

* {{Comment|[[User:Dariof|Dariof]], 13 March 2013 (Replying to [[User:Ijc|Ijc]]):}} Abstracted the part about the CBS algorithm out of the "Steps" section, so that it is clearer that this is the solution allowing to kill the special cases and enable proper temporal isolation (as well as that there is no need to investigate any different algorithm). Clarified a bit more, both in the general description and in the description of the various phases, what each step contributes (to make it clear that turning SEDF into an SMP scheduler is not a consequence of CBS; rather, it is what is done in phases 3 and 4). Gave some directions about validation and benchmarking.

}}

{{GSoC Project

|Project=Refactor Linux hotplug scripts

|Anchor=linux-hotplug-scripts

|Date=15/11/2012

|Contact=Roger Pau Monné <roger.pau@citrix.com>

|Difficulty=Medium

|Skills=Knowledge of C and a good level of shell scripting

|Desc=
Current Linux hotplug scripts are all entangled, which makes them really difficult to understand or modify. The reason for hotplug scripts is to give end-users the chance to "easily" support different configurations for Xen devices.

Linux hotplug scripts should be analyzed, providing a good description of what each hotplug script is doing. After this, scripts should be cleaned, putting common pieces of code in shared files across all scripts. A coding style should be applied to all of them when the refactoring is finished.

Also, a new hotplug implementation is currently under review [http://lists.xen.org/archives/html/xen-devel/2013-01/msg01962.html [1]], which will allow the user to create more complex hotplug scripts that offer extended functionality. Optionally the intern can implement support for other backends using the new hotplug interface (GlusterFS, Ceph...).

|Steps=The work on the project can be subdivided into the following phases:

* Phase 1: analyze hotplug scripts and determine what each script does internally in order to attach the device
* Phase 2: move common bits of code to shared files, providing a sane API
* Phase 3: refactor hotplug scripts to use this new API, and clean the code applying a uniform coding style
* Phase 4 [Optional]: create hotplug scripts for new backends (GlusterFS, Ceph)

|Outcomes=The candidate is expected to produce at least a series of patches that contain the new internal hotplug API and the refactoring of the old scripts, send them to the Xen development mailing list and follow the typical open source process for having them upstreamed in Xen.

|References=[http://xenbits.xen.org/gitweb/?p=xen.git;a=tree;f=tools/hotplug/Linux;hb=HEAD Source of current scripts]

|Review=(delete as addressed)

* {{Comment|[[User:Ijc|Ijc]] 09:49, 11 February 2013 (UTC):}} Can we include a specific requirement to not just analyze but also document the behavior of the scripts, both the high-level semantics of each class of script (vif, block etc) but also the specifics of each (e.g. vif-{bridge,route,etc})? Ideally this would integrate with existing pages like [[Xen Networking]]. Should there also be a focus on customizability? I think it is expected that people will customize the scripts to suit their environments but due to the complexity a lot of folks don't. A refactoring project is not inherently that exciting, so I'm not sure how much it would appeal to interns, perhaps Phase 4 could be non-optional and require the creation of at least one new set of hotplug scripts, as a kind of concrete end goal to all the refactoring? Not sure if that explodes the scope/time required out too far though. Ideally a new network script would be included too (to cover both main sets of bases) but we already cover most of the interesting cases there I think, openvswitch perhaps? I'm a little bit concerned that this project might also be chasing a moving target as the hotplug mechanism is refactored, but perhaps much of that will be finished by the time GSoC starts and having the person doing that refactoring also mentor the project should help minimise problems.

}}

{{GSoC Project

|Project=Fuzz testing Xen with Mirage

|Anchor=fuzz-testing-mirage

|Date=28/11/2012

|Contact=Anil Madhavapeddy <anil@recoil.org> - '''IMPORTANT''' also join the ''cl-mirage'' list

|Difficulty=Medium

|Skills=OCaml programming. C programming.

}}

{{GSoC Project

|Project=Towards a multi-language unikernel substrate for Xen

|Anchor=unikernel-substrate

|Date=28/11/2012

|Contact=Anil Madhavapeddy <anil@recoil.org> - '''IMPORTANT''' also join the ''cl-mirage'' list

|Desc=

There are several languages available that compile directly to Xen microkernels, instead of running under an intervening guest OS. We're dubbing such specialised binaries as "unikernels". Examples include:

}}

Other reviewed project ideas on this page include XL to XCP VM motion, VM Snapshots, Testing PV and HVM installs of Debian using debian-installer, and Testing NetBSD.

= Useful Resources =

Here are some links to guides, tools, development flows etc.

* All Developer Information
* Xen Overview
* Submitting Xen Patches with Git
* Xen Beginners Guide
* [http://git-scm.com/documentation Introduction to Git]

= Information For Mentors =

== Creative Commons Photo Credits ==

See [https://wiki.gnome.org/OutreachProgramForWomen#For_Mentors here]

* [http://tux.crystalxp.net/ Pink Tux]

[[Category:Community]]