Template:GSoC Project/doc2
Parameters
Notes on formatting
You can use any formatting within the template (see Help:Formatting). However, do note that you will need to be careful with line breaks: each line break will result in a new parameter.
Project Fields
- Project: Project title
- Date: Date of creation
- Contact: Mentor of the project
- Difficulty: Level of difficulty for the project
- Skills: Skills needed and other pre-conditions
- Desc: Description of the project
- Outcomes: Project outcomes
- Steps: Necessary steps to accomplish the project goal
- References: Useful references (mail threads / manuals / web pages) for students to learn the background and motivation of the project. If the references are inlined in the description, simply write "References inline in description".
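As a quick sketch of how these fields fit together, a template call might look like the following (all values are placeholders rather than a real project; see the full example at the bottom of this page):
{{GSoC Project
|Project=Short project title
|Date=MM/DD/YYYY
|Contact=Name of the mentor
|Difficulty=Low / Medium / High
|Skills=Skills needed and other pre-conditions
|Desc=Description of the project
|Outcomes=Expected project outcomes
|Steps=Necessary steps to accomplish the project goal
|References=Useful references, or "References inline in description"
}}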
Meta Information
- Anchor: Anchor for the project, e.g.
|Anchor=my-unique-anchor
which can then be referenced via [[#my-unique-anchor]]
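The same anchor can also be the target of a link from another wiki page, for example (the page title below is purely illustrative):
[[GSoC Project Ideas#my-unique-anchor|link text]]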
Fields for Students
We will add additional fields for students in due course.
Example
The following is an example of a GSoC project using this template:
{{GSoC Project
|Project=Multiqueue support for Xen netback/netfront in Linux kernel
|Date=01/22/2013
|Contact=Wei Liu
|Difficulty=High
|Skills=Linux kernel programming skills, knowledge of the Xen PV device model. The candidate for this project should be familiar with the open source development workflow, as the project may require collaboration with several parties.
|Desc=Multiqueue support allows a single virtual network interface (vif) to scale to multiple vcpus. Each queue has its own interrupt, and thus can be bound to a different vcpu. KVM VirtIO, VMware VMXNet3, tun/tap and various other drivers already support multiqueue in upstream Linux. Some general info about multiqueue: http://lists.linuxfoundation.org/pipermail/virtualization/2011-August/018247.html
In the current implementation of Xen PV network, every vif is equipped with only one TX/RX ring pair and one event channel, which does not scale when a guest has multiple vcpus. If we want to utilize all vcpus for network work, we need to configure multiple vifs and bind their interrupts to vcpus manually. This is not ideal and involves too much configuration.
Multiqueue support in the Xen vif should be straightforward. It requires changing the current vif protocol and the code used to initialize / connect / reconnect vifs. However, there are risks in terms of collaboration: it is possible that multiple parties will work on the same piece of code. Here are possible obstacles and thoughts:
* netback worker model change - a possible change from M:N to 1:1 is not really an obstacle, because 1:1 is just a special case of M:N
* netback page allocation mechanism change - not likely to require a protocol change
* netback zero-copy - not likely to require a protocol change
* receiver-side copy - touches both protocol and implementation
* multi-page ring - touches protocol and implementation, should be easy to merge
* split event channel - touches protocol and implementation, should be easy to merge
|Outcomes=The project is expected to have the following outcomes:
* a multi-queue patch ready to upstream, or upstreamed
* a benchmark report (basic: compare single-queue / multi-queue vif; advanced: compare Xen multi-queue vif against KVM multi-queue VirtIO, etc.)
}}