Template:GSoC Project/doc2


Parameters

  • Project: Project title
  • Date: Date of creation
  • Contact: Mentor of the project
  • Difficulty: Level of difficulty of the project
  • Skills: Skills needed and other pre-conditions
  • Desc: Description of the project
  • Outcomes: Project outcomes
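
A minimal skeleton of the template call, with every parameter left empty as a placeholder to be filled in, would look something like this:

{{GSoC Project
|Project=
|Date=
|Contact=
|Difficulty=
|Skills=
|Desc=
|Outcomes=
}}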

Example

The following is an example of a GSoC project description using this template:

{{GSoC Project
|Project=Multiqueue support for Xen netback/netfront in Linux kernel
|Date=01/22/2013
|Contact=Wei Liu
|Difficulty=High
|Skills=Linux kernel programming skills, knowledge of the Xen PV device model. The candidate for 
this project should be familiar with the open source development workflow, as it may require 
collaboration with several parties.
|Desc=Multiqueue support allows a single virtual network interface (vif) to scale to multiple vcpus. 
Each queue has its own interrupt and can therefore be bound to a different vcpu. KVM VirtIO, VMware 
VMXNet3, tun/tap and various other drivers already support multiqueue in upstream Linux.

Some general info about multiqueue: http://lists.linuxfoundation.org/pipermail/virtualization/2011-August/018247.html
In the current implementation of Xen PV network, every vif is equipped with only one TX/RX ring 
pair and one event channel, which does not scale when a guest has multiple vcpus. If we want to 
utilize all vcpus for network processing, we need to configure multiple vifs and bind their interrupts to 
vcpus manually. This is not ideal and involves too much configuration.

Multiqueue support in the Xen vif should be straightforward. It requires changing the current vif 
protocol and the code used to initialize / connect / reconnect vifs. However, there are collaboration 
risks: multiple parties may end up working on the same pieces of code. Here are possible 
obstacles and thoughts:
* netback worker model change - a possible change from M:N to 1:1 is not really an obstacle, 
because 1:1 is just a special case of M:N
* netback page allocation mechanism change - not likely to have protocol change
* netback zero-copy - not likely to have protocol change
* receiver-side copy - touches both protocol and implementation
* multi-page ring - touches protocol and implementation, should be easy to merge
* split event channel - touches protocol and implementation, should be easy to merge
|Outcomes=The project is expected to have the following outcomes:
* a multi-queue patch ready for upstreaming, or already upstreamed
* a benchmark report (basic: compare single-queue vs. multi-queue vif; advanced: compare the Xen multi-queue vif against KVM multi-queue VirtIO, etc.)
}}

It renders as

Multiqueue support for Xen netback/netfront in Linux kernel

Date of insert: 01/22/2013; Verified: Not specified, date when created; GSoC: Yes
Mentor: Wei Liu
Difficulty: High
Skills Needed: Linux kernel programming skills, knowledge of the Xen PV device model. The candidate for this project should be familiar with the open source development workflow, as it may require collaboration with several parties.
Description: Multiqueue support allows a single virtual network interface (vif) to scale to multiple vcpus. Each queue has its own interrupt and can therefore be bound to a different vcpu. KVM VirtIO, VMware VMXNet3, tun/tap and various other drivers already support multiqueue in upstream Linux.

Some general info about multiqueue: http://lists.linuxfoundation.org/pipermail/virtualization/2011-August/018247.html In the current implementation of Xen PV network, every vif is equipped with only one TX/RX ring pair and one event channel, which does not scale when a guest has multiple vcpus. If we want to utilize all vcpus for network processing, we need to configure multiple vifs and bind their interrupts to vcpus manually. This is not ideal and involves too much configuration.

Multiqueue support in the Xen vif should be straightforward. It requires changing the current vif protocol and the code used to initialize / connect / reconnect vifs. However, there are collaboration risks: multiple parties may end up working on the same pieces of code. Here are possible obstacles and thoughts:

  • netback worker model change - a possible change from M:N to 1:1 is not really an obstacle, because 1:1 is just a special case of M:N
  • netback page allocation mechanism change - not likely to have protocol change
  • netback zero-copy - not likely to have protocol change
  • receiver-side copy - touches both protocol and implementation
  • multi-page ring - touches protocol and implementation, should be easy to merge
  • split event channel - touches protocol and implementation, should be easy to merge
Outcomes: The project is expected to have the following outcomes:
  • a multi-queue patch ready for upstreaming, or already upstreamed
  • a benchmark report (basic: compare single-queue vs. multi-queue vif; advanced: compare the Xen multi-queue vif against KVM multi-queue VirtIO, etc.)
Steps: Not specified, necessary steps to accomplish the project goal
References: Not specified, useful references (mail threads / manuals / web pages) for students to learn the background and motivation of the project. If the references are inlined in the description, simply write "References inline in description".
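
The rendered output above also shows Steps and References fields that the example call does not set; the References note suggests writing "References inline in description" when the references are already part of the description. Assuming the template accepts parameters named after these field labels (the parameter names Steps and References are an assumption based on the rendered labels, not confirmed on this page), the example call could be extended like this:

{{GSoC Project
|Project=Multiqueue support for Xen netback/netfront in Linux kernel
|Date=01/22/2013
|Contact=Wei Liu
<!-- "Steps" and "References" parameter names are assumed from the rendered field labels -->
|Steps=Change the current vif protocol, then the code used to initialize / connect / reconnect vifs
|References=References inline in description
}}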