Archived/Xen Development Projects
== List of projects ==

=== Domain support ===

{{project
|Project=Upstreaming Xen PVSCSI drivers to mainline Linux kernel
|Date=01/08/2012
|Difficulty=Hard
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|GSoC=No
|Desc=
The PVSCSI drivers have not yet been upstreamed. The necessary steps include:
* Task 1: Upstream the PVSCSI scsifront frontend driver (for domU).
* Task 2: Upstream the PVSCSI scsiback backend driver (for dom0).
* Send the patches to the relevant upstream mailing lists for review and comments.
* Fix any issues raised in review.
* Repeat until the drivers are merged into the upstream Linux kernel git tree.
* Development tree: http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=shortlog;h=refs/heads/devel/xen-scsi.v1.0
* More info: http://wiki.xen.org/xenwiki/XenPVSCSI
A minimal frontend-registration sketch follows below.
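For orientation only, here is a hedged sketch of how a xenbus frontend driver registers itself in the mainline kernel. The probe/remove bodies are illustrative stubs and struct fields vary between kernel versions; this is not the actual scsifront code.

<source lang="c">
/* Minimal xenbus frontend registration sketch; probe/remove are stubs
 * and the field names follow mainline conventions of the time. */
#include <linux/module.h>
#include <xen/xen.h>
#include <xen/xenbus.h>

static int scsifront_probe(struct xenbus_device *dev,
                           const struct xenbus_device_id *id)
{
        /* Allocate the shared ring, grant it to the backend, write the
         * ring-ref and event channel to xenstore, then switch to
         * XenbusStateInitialised. */
        return 0;
}

static int scsifront_remove(struct xenbus_device *dev)
{
        /* Tear down the ring and free per-device state. */
        return 0;
}

static const struct xenbus_device_id scsifront_ids[] = {
        { "vscsi" },
        { "" }
};

static struct xenbus_driver scsifront_driver = {
        .ids    = scsifront_ids,
        .probe  = scsifront_probe,
        .remove = scsifront_remove,
};

static int __init scsifront_init(void)
{
        if (!xen_domain())
                return -ENODEV;
        return xenbus_register_frontend(&scsifront_driver);
}
module_init(scsifront_init);

MODULE_LICENSE("GPL");
</source>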
}}

{{project
|Project=Upstreaming Xen PVUSB drivers to mainline Linux kernel
|Date=01/08/2012
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|Difficulty=Hard
|GSoC=No, unless Konrad believes these can be done.
|Desc=
The PVUSB drivers have not yet been upstreamed. The necessary steps include:
* Upstream the PVUSB usbfront frontend driver (for domU).
* Upstream the PVUSB usbback backend driver (for dom0).
* Send the patches to the relevant upstream mailing lists for review and comments.
* Fix any issues raised in review.
* Repeat until the drivers are merged into the upstream Linux kernel git tree.
* Development tree: http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=shortlog;h=refs/heads/devel/xen-usb.v1.1
* More info: http://wiki.xen.org/xenwiki/XenUSBPassthrough
A sketch of the backend's xenbus handshake follows below.
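To illustrate the backend side, the sketch below shows the xenbus state handshake a usbback-style driver has to implement. The xenbus calls and state names are the standard mainline ones; connect_rings() and disconnect() are illustrative stubs.

<source lang="c">
/* Backend half of the xenbus handshake, sketched; the two helpers are
 * stubs standing in for real ring-mapping and teardown code. */
#include <xen/xenbus.h>

static int connect_rings(struct xenbus_device *dev)
{
        /* Map the ring pages granted by the frontend, bind the event
         * channel, start the request-processing thread. */
        return 0;
}

static void disconnect(struct xenbus_device *dev)
{
        /* Stop processing, unbind the event channel, unmap the rings. */
}

static void usbback_frontend_changed(struct xenbus_device *dev,
                                     enum xenbus_state frontend_state)
{
        switch (frontend_state) {
        case XenbusStateInitialised:
        case XenbusStateConnected:
                if (connect_rings(dev) == 0)
                        xenbus_switch_state(dev, XenbusStateConnected);
                break;
        case XenbusStateClosing:
                xenbus_switch_state(dev, XenbusStateClosing);
                break;
        case XenbusStateClosed:
                disconnect(dev);
                xenbus_switch_state(dev, XenbusStateClosed);
                break;
        default:
                break;
        }
}
</source>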
{{Comment|[[User:Lars.kurth|Lars.kurth]] 14:14, 23 January 2013 (UTC):}} Would also need more detail.
}}

{{project
|Project=Implement Xen PVSCSI support in xl/libxl toolstack
|Date=01/12/2012
|Contact=Pasi Karkkainen <pasik@iki.fi>
|GSoC=Yes
|Desc=
xl/libxl does not currently support Xen's PVSCSI functionality. Port the feature from xm/xend to xl/libxl. The necessary steps include:
* Task 1: Implement PVSCSI in xl/libxl, making it functionally equivalent to xm/xend.
* Send the patches to the xen-devel mailing list for review and comments.
* Fix any issues raised in review.
* Repeat until the patches are merged into xen-unstable.
* See the PVSCSI driver project above for the dom0/domU kernel side.
* Xen PVSCSI supports both PV domUs and HVM guests with PV drivers.
* More info: http://wiki.xen.org/xenwiki/XenPVSCSI
A hypothetical sketch of the libxl API shape follows below.
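One plausible shape for the libxl side, modelled loosely on existing device types such as libxl_device_disk. Every name below is hypothetical; no such API exists yet, and the project would define the real one.

<source lang="c">
/* Hypothetical libxl PVSCSI API; all names and fields are illustrative. */
#include <stdint.h>
#include <libxl.h>

typedef struct {
    uint32_t devid;
    char *pdev;             /* physical SCSI device in dom0, e.g. "1:0:0:0" */
    char *vdev;             /* virtual SCSI address exposed to the guest */
    uint32_t backend_domid; /* usually 0 (dom0) */
} libxl_device_vscsi;

/* Attach a PVSCSI device to a domain (hypothetical). */
int libxl_device_vscsi_add(libxl_ctx *ctx, uint32_t domid,
                           libxl_device_vscsi *vscsi);

/* Detach it again (hypothetical). */
int libxl_device_vscsi_remove(libxl_ctx *ctx, uint32_t domid,
                              libxl_device_vscsi *vscsi);
</source>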
{{Comment|[[User:Lars.kurth|Lars.kurth]] 14:14, 23 January 2013 (UTC):}} Should be suitable, but the description needs work: rate it in terms of challenges, size and skills needed. Note also that the kernel functionality is not yet upstreamed; the SUSE kernel could be used in the meantime.
}}

{{project
|Project=Implement Xen PVUSB support in xl/libxl toolstack
|Date=01/12/2012
|Contact=Pasi Karkkainen <pasik@iki.fi>
|GSoC=Yes
|Desc=
xl/libxl does not currently support Xen's PVUSB functionality. Port the feature from xm/xend to xl/libxl. The necessary steps include:
* Task 1: Implement PVUSB in xl/libxl, making it functionally equivalent to xm/xend.
* Send the patches to the xen-devel mailing list for review and comments.
* Fix any issues raised in review.
* Repeat until the patches are merged into xen-unstable.
* See the PVUSB driver project above for the dom0/domU kernel side.
* Xen PVUSB supports both PV domUs and HVM guests with PV drivers.
* More info: http://wiki.xen.org/xenwiki/XenUSBPassthrough
An illustrative guest-config syntax is sketched below.
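For a feel of the user-visible result, a guest configuration might describe PV USB controllers and devices along these lines. The syntax is an assumption of this sketch (it resembles what xl eventually adopted, but the project would define the real one):

<pre>
# Hypothetical xl guest config for PVUSB: one PV USB controller with
# four ports, and one host device passed through by bus/address.
usbctrl = [ 'type=pv, version=2, ports=4' ]
usbdev  = [ 'hostbus=1, hostaddr=3' ]
</pre>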
{{Comment|[[User:Lars.kurth|Lars.kurth]] 14:14, 23 January 2013 (UTC):}} Should be suitable, but the description needs work: rate it in terms of challenges, size and skills needed. Note also that the kernel functionality is not yet upstreamed; the SUSE kernel could be used in the meantime.
}}

{{project
|Project=Block backend/frontend improvements
|Date=01/01/2013
|Difficulty=Medium
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|Desc=
Blkback requires a number of improvements, among them:
* Multiple disks in a guest cause contention in the global pool of pages.
* There is only one ring page; with today's SSDs we should make this larger by implementing multi-page ring support (see the sketch after this list).
* With multi-page rings it becomes apparent that the segment size wastes space on the ring. The BSD folks fixed this by negotiating a new parameter to utilise the full size of the ring, and Intel proposed a descriptor-page scheme.
* Add DIF/DIX support [http://oss.oracle.com/~mkp/docs/lpc08-data-integrity.pdf] for T10 PI (Protection Information), to support data integrity fields and checksums.
* Further performance evaluation is needed to see how blkback behaves under high load.
* Further discussion and issues are outlined in http://lists.xen.org/archives/html/xen-devel/2012-12/msg01346.html and https://docs.google.com/document/d/1Vh5T8Z3Tx3sUEhVB0DnNDKBNiqB_ZA8Z5YVqAsCIjuI
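As a sketch of what the multi-page ring item involves, frontend and backend would negotiate the ring size over xenstore roughly as below. The key names follow the max-ring-page-order convention that was later adopted, but treat the details (including the frontend limit constant) as illustrative.

<source lang="c">
/* Sketch: negotiate a multi-page ring over xenstore; error handling
 * is abbreviated and the constant is an assumed frontend limit. */
#include <xen/xenbus.h>

#define BLKFRONT_MAX_RING_ORDER 4    /* up to 16 ring pages */

static int negotiate_ring_order(struct xenbus_device *dev,
                                unsigned int *ring_order)
{
        unsigned int max_order = 0;

        /* The backend advertises the largest ring it supports; if the
         * key is absent, fall back to the legacy single-page ring. */
        if (xenbus_scanf(XBT_NIL, dev->otherend,
                         "max-ring-page-order", "%u", &max_order) != 1)
                max_order = 0;

        *ring_order = min(max_order, (unsigned int)BLKFRONT_MAX_RING_ORDER);

        /* Record the chosen order so the backend maps that many pages. */
        return xenbus_printf(XBT_NIL, dev->nodename,
                             "ring-page-order", "%u", *ring_order);
}
</source>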
|GSoC=Yes, but the work would have to be chopped into nicely sized chunks
}}

{{project
|Project=Parallel xenwatch kthread
|Date=01/08/2012
|Difficulty=Low-Medium
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|Desc=
Xenwatch is serialised behind a single coarse lock. With a huge number of guests this becomes a scalability issue. The task is to rewrite the xenwatch locking so that watch callbacks can be processed in parallel and the code scales. A hedged sketch of one possible approach follows below.
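One possible direction, sketched under the assumption that callbacks no longer depend on the old global serialisation: fan each fired watch out to a workqueue item instead of running everything on the single xenwatch thread. The callback signature follows current mainline; older kernels differ.

<source lang="c">
/* Sketch: run watch callbacks concurrently via the system workqueue. */
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/workqueue.h>
#include <xen/xenbus.h>

struct xenwatch_work {
        struct work_struct work;
        struct xenbus_watch *watch;
        char *path;
};

static void xenwatch_work_fn(struct work_struct *work)
{
        struct xenwatch_work *xw =
                container_of(work, struct xenwatch_work, work);

        /* Runs in parallel with other watches; each callback must only
         * rely on per-watch state rather than a global lock. */
        xw->watch->callback(xw->watch, xw->path, xw->path);
        kfree(xw->path);
        kfree(xw);
}

static int queue_watch_event(struct xenbus_watch *watch, const char *path)
{
        struct xenwatch_work *xw = kmalloc(sizeof(*xw), GFP_KERNEL);

        if (!xw)
                return -ENOMEM;
        xw->watch = watch;
        xw->path = kstrdup(path, GFP_KERNEL);
        if (!xw->path) {
                kfree(xw);
                return -ENOMEM;
        }
        INIT_WORK(&xw->work, xenwatch_work_fn);
        schedule_work(&xw->work);
        return 0;
}
</source>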
|Outcomes=Expected outcomes:
* Patches accepted upstream.
* A benchmark report comparing performance with and without the change.
|GSoC=Yes
}}

{{project
|Project=OVMF Compatibility Support Module support in Xen
|Desc=
* design / implement interface between Hvmloader and the unified binary
}}

{{project
|Project=Improvements to firmware handling for HVM guests
|Date=07/16/2015
|Contact=Andrew Cooper <andrew.cooper3@citrix.com>
|GSoC=Yes
|Difficulty=Easy
|Skills Needed=GNU toolchain, familiarity with Multiboot, C
|Desc=
Currently, all firmware is compiled into HVMLoader.

This works, but is awkward when a single distro seabios/ovmf binary, designed for general use, is in play: every time seabios/ovmf is updated, hvmloader must be rebuilt as well.

The purpose of this project is to alter hvmloader to take firmware blobs as multiboot modules rather than requiring them to be built in. This reduces the burden of looking after Xen in a distro environment, and will also be useful for developers wanting to work with multiple versions of firmware. A sketch of the module lookup is shown below.

As an extension, support loading an OVMF NVRAM blob. This enables EFI NVRAM support for guests.
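The sketch below shows roughly how hvmloader could locate a firmware blob handed over as a Multiboot (v1) module. The struct layouts follow the Multiboot specification; the helper and the lookup-by-name convention are assumptions of this sketch.

<source lang="c">
/* Sketch: find a firmware blob passed as a Multiboot module. Layouts
 * per the Multiboot 1 spec; naming modules "seabios"/"ovmf" is an
 * assumed convention, not an existing interface. */
#include <stdint.h>
#include <string.h>

struct mb_module {
    uint32_t mod_start;   /* physical start of the blob */
    uint32_t mod_end;     /* physical end of the blob */
    uint32_t string;      /* module command line, e.g. "seabios" */
    uint32_t reserved;
};

struct mb_info {
    uint32_t flags;
    uint32_t mem_lower, mem_upper;
    uint32_t boot_device;
    uint32_t cmdline;
    uint32_t mods_count;  /* number of modules passed in */
    uint32_t mods_addr;   /* physical address of the module array */
    /* ... further fields per the spec ... */
};

static const void *find_firmware_blob(const struct mb_info *mbi,
                                      const char *name, uint32_t *size)
{
    const struct mb_module *mod =
        (const struct mb_module *)(uintptr_t)mbi->mods_addr;
    uint32_t i;

    for (i = 0; i < mbi->mods_count; i++) {
        if (!strcmp((const char *)(uintptr_t)mod[i].string, name)) {
            *size = mod[i].mod_end - mod[i].mod_start;
            return (const void *)(uintptr_t)mod[i].mod_start;
        }
    }
    return NULL;   /* fall back to any built-in firmware */
}
</source>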
}}

=== Hypervisor ===

{{project
|Project=Integrating NUMA and Tmem
|Desc=
For instance, implementing something like <code>alloc_page_on_any_node_but_the_current_one()</code> (or <code>any_node_except_this_guests_node_set()</code> for multinode guests), and having Xen's Tmem implementation use it (especially in combination with selfballooning), could solve a significant part of the NUMA problem when running Tmem-enabled guests. A hedged sketch of such a helper follows below.
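A possible shape for that helper inside the hypervisor, assuming Xen's nodemask helpers and alloc_domheap_pages(); the domain_to_node() lookup and the exact memflags encoding are approximations, not the real interface.

<source lang="c">
/* Sketch: allocate a domheap page on any online node except the
 * domain's own. Flag encoding and domain_to_node() are approximate. */
static struct page_info *alloc_page_on_any_node_but_the_current_one(
    struct domain *d)
{
    nodemask_t nodes = node_online_map;
    nodeid_t node;
    struct page_info *pg;

    node_clear(domain_to_node(d), nodes);    /* skip the local node */

    for_each_node_mask ( node, nodes )
    {
        pg = alloc_domheap_pages(d, 0, MEMF_node(node) | MEMF_exact_node);
        if ( pg != NULL )
            return pg;                       /* got remote memory */
    }

    return NULL;                             /* no remote memory available */
}
</source>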
}}

{{project
|Project=HVM per-event-channel interrupts
|Date=01/30/2013
|Contact=Paul Durrant <paul.durrant@citrix.com>
|Difficulty=
|Skills=C, some prior knowledge of Xen useful
|Desc=Windows PV drivers currently have to multiplex all event channel processing onto a single interrupt, which is registered with Xen using the HVM_PARAM_CALLBACK_IRQ parameter. This results in a lack of scalability when multiple event channels are heavily used, such as when multiple VIFs in the VM are simultaneously under load.

Goal: Modify Xen to allow each event channel to be bound to a separate interrupt (the association being controlled by the PV drivers in the guest) so that separate event channel interrupts can be handled by separate vCPUs. There should be no modifications required to the guest OS interrupt logic to support this (as there are with the current Linux PV-on-HVM code), since that would not be possible with a Windows guest. A purely illustrative sketch of such an interface follows below.
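Purely for illustration, a guest might bind a channel to its own vector through an interface of roughly this shape. No such hypercall exists today: the op name, number and struct are placeholders for whatever the project actually designs; only HYPERVISOR_hvm_op itself is an existing call (declared in the guest's hypercall headers).

<source lang="c">
/* Hypothetical interface: everything here except HYPERVISOR_hvm_op is
 * a placeholder invented for this sketch. */
#include <stdint.h>

#define HVMOP_bind_evtchn_to_vector  64      /* placeholder op number */

struct xen_hvm_bind_evtchn_to_vector {
    uint32_t port;      /* event channel port to bind */
    uint8_t  vector;    /* guest IDT vector to deliver it on */
};

static int bind_evtchn_to_vector(uint32_t port, uint8_t vector)
{
    struct xen_hvm_bind_evtchn_to_vector bind = {
        .port   = port,
        .vector = vector,
    };

    /* Each vCPU's driver instance could bind its channels to distinct
     * vectors, so interrupts can be steered per vCPU. */
    return HYPERVISOR_hvm_op(HVMOP_bind_evtchn_to_vector, &bind);
}
</source>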
|Outcomes=Code is submitted to xen-devel@xen.org for inclusion in xen-unstable.
|GSoC=yes}}

=== Userspace Tools ===

{{project
|Project=VM Snapshots
|Date=16/01/2013
|Contact=<[mailto:stefano.stabellini@eu.citrix.com Stefano Stabellini]>
|Desc=Although xl is capable of saving and restoring a running VM, it is not currently possible to create a snapshot of the disk together with the rest of the VM.

QEMU is capable of creating, listing and deleting disk snapshots on QCOW2 and QED files, so even today, by issuing the right commands via the QEMU monitor, it is possible to create disk snapshots of a running Xen VM. However, xl and libxl have no knowledge of these snapshots and do not know how to create, list or delete them.

This project is about implementing disk snapshot support in libxl, using the QMP protocol to issue commands to QEMU. Users should be able to manage the entire life-cycle of their disk snapshots via xl. The candidate should also explore ways to integrate disk snapshots into the regular Xen save/restore mechanisms and provide a solid implementation for xl/libxl. An example of the underlying QMP exchange is sketched below.

[[GSoC_2013#vm-snapshots]]
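For instance, libxl could drive QEMU's existing blockdev-snapshot-sync QMP command. The command and its arguments are real QMP; the device name and file path below are made up for illustration.

<pre>
C: { "execute": "blockdev-snapshot-sync",
     "arguments": { "device": "xen-disk-0",
                    "snapshot-file": "/var/lib/xen/images/guest-snap1.qcow2",
                    "format": "qcow2" } }
S: { "return": {} }
</pre>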
|GSoC=Yes
}}

=== Mirage and XAPI projects ===

There are separate wiki pages about XCP and XAPI related projects. Make sure you check those out as well!

{{project
|Project=Create a tiny VM for easy load testing
|GSoC=yes}}

{{project
|Project=Fuzz testing Xen with Mirage
|Desc=
[[GSoC_2013#fuzz-testing-mirage]]
|Outcomes=1. a repo containing a fuzz testing tool; 2. some unexpected behaviour with a backtrace (NB it's not required that we find a critical bug, we just need to show the approach works)
}}

{{project
|Project=Mirage OS cloud API support
|Date=28/11/2012
|Contact=Anil Madhavapeddy <anil@recoil.org>
|Skills=OCaml
|Difficulty=medium
|Desc=
MirageOS (http://openmirage.org) is a type-safe exokernel written in OCaml which generates highly specialised "appliance" VMs that run directly on Xen without requiring an intervening kernel. A MirageOS application typically runs via several communicating kernel instances on the cloud. Today these instances are difficult to manage; we would like to explore strategies for managing these distributed computations using common public cloud APIs such as those exposed by Amazon EC2 and Rackspace.

First we need to create pure OCaml API bindings for (e.g.) EC2 and Rackspace (purity is needed to ensure portability). These API bindings can then be used to provide operating-system-level abstractions to the exokernels. For example, a traditional VM might hotplug a vCPU, whereas a MirageOS application would request a "VM create" using the cloud API and "connect" the new instance to the existing network. We should be able to spin up thousands of "CPUs" by using such APIs in a cluster environment.

As well as helping Xen/Mirage, the public cloud API bindings will be very useful to other people in other contexts -- a nice side-effect.

|Outcomes=1. one or more public cloud API bindings plus examples, in a standalone repo on github; 2. an example Mirage app which uses these APIs to spin up a new VM
|GSoC=yes
}}

{{project
|Project=From simulation to emulation to production: self-scaling apps
|Desc=
This project will require a solid grasp of distributed protocols and functional programming. Okasaki's book will be a useful resource...
|Outcomes=1. a repo/branch with a fake ethernet device and a traffic simulator; 2. an interesting performance graph
|GSoC=no, too much work
}}

{{project
|Project=Towards a multi-language unikernel substrate for Xen
|Desc=
[[GSoC_2013#unikernel-substrate]]
|Outcomes=1. a repo containing a common library of low-level functions; 2. a proof of concept port of at least 2 systems to this new library
|GSoC=no, too difficult
}}

<!--
{{project
|Project=Expose counters for additional aspects of system performance in XCP
|Date=01/30/2013
|Contact=Jonathan Davies <''first.last''@citrix.com>
|Difficulty=Low
|Skills=Basic familiarity with administration of a Xen and/or Linux host
|Desc=XCP stores performance data persistently in round robin databases (RRDs). Presently, XCP only exposes a few aspects of system performance through the RRD mechanism, e.g. vCPU and pCPU utilisation, VM memory size, and host network throughput.

The XCP RRD daemon (xcp-rrdd) provides a plugin interface to allow other processes to provide data sources. In principle, these plugins can be written in any language by using the XML-RPC/JSON interface (although presently bindings only exist for OCaml).

The project: create plugins that expose additional information, including things like:
* total amount of CPU cycles used by each VM
* VM- or VBD/VIF-level disk and network throughput
* number of event channels consumed per domain
* how much work each VM demands of qemu, netback, blkback and xenstored
* perhaps other statistics currently only easily obtainable via xentrace
|Outcomes=A set of plugins is authored in a suitable language and demonstrated to work in XCP. The code is submitted to the XCP project on github.
|GSoC=yes}}

{{project
|Project=Add support for XCP performance counters to be sampled at varying rates
|Date=01/30/2013
|Contact=Jonathan Davies <''first.last''@citrix.com>
|Difficulty=Medium
|Skills=xcp-rrdd is coded in OCaml, so familiarity with this language would be helpful
|Desc=XCP's RRD daemon (xcp-rrdd) stores performance data persistently in 'round robin databases' (RRDs). Each of these is a fixed-size structure containing data at multiple resolutions. 'Data sources' are sampled at five-second intervals and points are added to the highest-resolution RRD. Periodically, each high-frequency RRD is 'consolidated' (e.g. averaged) to produce a data point for a lower-frequency RRD. In this way, data for a long period of time can be stored in a space-efficient manner, with the older data being lower in resolution than more recent data.

However, some data sources change very slowly (e.g. CPU temperature, available disk capacity), so it is overkill to sample them every five seconds. This becomes a problem when it is costly to sample them, perhaps because doing so involves a CPU-intensive computation or disk activity.

The RRD daemon provides a plugin interface to allow other processes to provide data sources.

The project goal is to generalise the RRD daemon's data-source sampling mechanism so that it can sample data sources at different frequencies, and to extend the plugin interface to allow plugins to suggest the frequency at which they are sampled.
|Outcomes=A mechanism is defined and code produced to meet the project goals. The code is submitted to the XCP project on github.
|GSoC=yes}}

{{project
|Project=XCP backend to Juju/Chef/Puppet/Vagrant
|Date=01/30/2013
|Contact=Jonathan Ludlam <''first.last''@citrix.com>
|Difficulty=Medium to small
|Skills=
|Desc=Juju, Chef and Puppet are all tools that are used to provision and configure virtual and, in some cases, physical machines. They all have pluggable backends and can target many cloud providers' APIs, but none of them currently target the Xen API.
|Outcomes=A new backend for one or more of these, able to install and configure virtual machines on a machine running XCP.
|GSoC=yes}}

{{project
|Project=Add connection tracking capability to the Linux OVS
|Date=01/30/2013
|Contact=Mike Bursell <''first.last''@citrix.com>
|Difficulty=Medium
|Skills=C, networking concepts, OpenFlow useful (but not essential upfront)
|Desc=The open-vswitch (OVS) currently has no concept of connections - only flows. One piece of functionality which it would be interesting to create is allowing incoming flows that match existing outgoing flows, in order, for instance, to allow telnet or HTTP connections. This would require an add-on acting as a proxy between an OpenFlow controller and the OVS instance. It would intercept requests for flow rules for incoming flows, match them against existing outgoing flows, and, based on simple rules, decide whether to set up a relevant OpenFlow rule in the OVS. This is comparable to iptables' "RELATED/ESTABLISHED" state matching. If time were short, a simpler, non-proxying version for use without a controller would still be useful.
|Outcomes=A daemon which would bind to the local OVS OpenFlow interface and perform the following work:
* parse OpenFlow requests
* read OVS flow table(s)
* optionally maintain a whitelist of acceptable port/IP address tuples
* write OVS flow rules
* optionally act as a proxy between an OVS instance and an OpenFlow controller, relaying requests which it has decided not to parse out to the controller, and the results back to the OVS instance.
Code would be submitted to the openvswitch project.
|GSoC=yes}}

* XCP and XAPI development projects: [[XAPI project suggestions]]
* XCP short-term roadmap: [[XCP short term roadmap]]
* XCP monthly developer meetings: [[XCP Monthly Meetings]]
-->
* XAPI developer guide: [[XAPI Developer Guide]]

Please see the [[XenRepositories]] wiki page!

[[Category:Archived]]
[[Category:Xen]]
[[Category:Xen 4.4]]
This page lists various Xen related development projects that can be picked up by anyone! If you're interested in hacking Xen, this is the place to start! Ready for the challenge?

To work on a project:

* Find a project that looks interesting (or a bug if you want to start with something simple).
* Send an email to the xen-devel mailing list and let us know you have started working on a specific project.
* Post your ideas, questions and RFCs to xen-devel sooner rather than later so you can get comments and feedback.
* Send patches to xen-devel early for review so you can get feedback and be sure you're heading in the right direction.
* Your work should be based on the xen-unstable development tree if it is Xen and/or tools related. After your patch has been merged to xen-unstable it can be backported to the stable branches (Xen 4.2, Xen 4.1, etc).
* Kernel related patches should be based on the latest upstream kernel.org Linux git tree.

xen-devel mailing list subscription and archives: http://lists.xensource.com/mailman/listinfo/xen-devel

Before submitting patches, please look at the [[Submitting Xen Patches]] wiki page.

If you have new ideas, suggestions or development plans, let us know and we'll update this list!
== List of projects ==

=== Domain support ===

* Utilize Intel QuickPath on network and block path.
* Enabling the 9P File System transport as a paravirt device
* OVMF Compatibility Support Module support in Xen
* Improvements to firmware handling for HVM guests

=== Hypervisor ===

* Introducing PowerClamp-like driver for Xen
* Integrating NUMA and Tmem

=== Userspace Tools ===

* Refactor Linux hotplug scripts
* XL to XCP VM motion
* Allowing guests to boot with a passed-through GPU as the primary display
* Advanced Scheduling Parameters
* CPU/RAM/PCI diagram tool
* KDD (Windows Debugger Stub) enhancements
* Lazy restore using memory paging
* CPUID Programming for Humans
=== Mirage and XAPI projects ===

There are separate wiki pages about XCP and XAPI related projects. Make sure you check those out as well!

* Create a tiny VM for easy load testing
* Fuzz testing Xen with Mirage
* From simulation to emulation to production: self-scaling apps
* Towards a multi-language unikernel substrate for Xen
* DRBD Integration

* XAPI developer guide: [[XAPI Developer Guide]]

Please see the [[XenRepositories]] wiki page!