Archived/Xen Development Projects
For instance, implementing something like <code>alloc_page_on_any_node_but_the_current_one()</code> (or <code>any_node_except_this_guests_node_set()</code> for multinode guests), and having Xen's Tmem implementation use it (especially in combination with selfballooning), could solve a significant part of the NUMA problem when running Tmem-enabled guests.
}}

{{project
|Project=IOMMU control for SWIOTLB, to avoid dom0 copy of all >4K DMA allocations
|Date=01/30/2013
|Contact=Andy Cooper <andrew.cooper3@citrix.com>
|Difficulty=High
|Skills=C, Xen and Linux kernel knowledge
|Desc=On VT-d/AMD-Vi capable systems, letting SWIOTLB drive the IOMMU rather than bounce-buffering all >4K DMA allocations in dom0 would cause far less overhead (removing the CPU copy currently needed in each direction) and allow the use of a 9K MTU at a sensible rate. This would allow for better I/O with modern hardware.
|Outcomes=TBD
|GSoC=yes}}

{{project
|Project=HVM per-event-channel interrupts
|Date=01/30/2013
|Contact=Paul Durrant <''first.last''@citrix.com>
|Difficulty=
|Skills=C, some prior knowledge of Xen useful
|Desc=Windows PV drivers currently have to multiplex all event channel processing onto a single interrupt which is registered with Xen using the <code>HVM_PARAM_CALLBACK_IRQ</code> parameter. This results in a lack of scalability when multiple event channels are heavily used, such as when multiple VIFs in the VM are simultaneously under load.

Goal: Modify Xen to allow each event channel to be bound to a separate interrupt (the association being controlled by the PV drivers in the guest), so that separate event channel interrupts can be handled by separate vCPUs. There should be no modifications required to the guest OS interrupt logic to support this (as there are with the current Linux PV-on-HVM code), as this will not be possible with a Windows guest.
|Outcomes=Code is submitted to xen-devel@xen.org for inclusion in xen-unstable
|GSoC=yes}}

=== Userspace Tools ===
}}

{{project
|Project=CPU/RAM/PCI diagram tool
|Date=01/30/2013
|Contact=Andy Cooper <andrew.cooper3@citrix.com>
|Difficulty=Low to medium
|Skills=Linux scripting; basic understanding of PC server hardware
|Desc=It is often useful when debugging kernel, hypervisor or performance problems to understand the bus topology of a server. This project will create a layout diagram for a server automatically, using data from ACPI tables, SMBIOS tables, lspci output, etc. This tool would be useful in general Linux environments including Xen- and KVM-based virtualisation systems.

There are many avenues for extension, such as labelling relevant hardware errata, performing bus throughput calculations, etc. (A minimal parsing sketch is shown below this project entry.)
|Outcomes=A tool is created that can run either on a live Linux system or offline using captured data, and produces a graphical representation of the hardware topology of the system, including bus topology, hardware device locations, memory bank locations, etc. The tool would be submitted to a suitable open-source project such as the Xen hypervisor project or XCP.
|GSoC=yes}}
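
A minimal sketch of the parsing idea, assuming Python and Graphviz are acceptable choices (neither is mandated by the project): it turns plain <code>lspci</code> output, taken live or from a captured file, into a DOT graph that groups devices by PCI bus. ACPI/SMBIOS data, memory banks and errata annotations are deliberately left out.

<pre>
#!/usr/bin/env python
# Sketch only: convert "lspci" output (live or captured) into a Graphviz DOT
# graph grouping PCI devices by bus. ACPI/SMBIOS parsing and errata labelling
# would be layered on top of this.
import re
import subprocess
import sys

def read_lspci(path=None):
    """Return lspci text from a captured file, or from the live system."""
    if path:
        with open(path) as f:
            return f.read()
    return subprocess.check_output(["lspci"]).decode()

def parse_devices(text):
    """Yield (bus, slot_func, description) tuples from plain 'lspci' lines."""
    pattern = re.compile(r"^([0-9a-f]{2}):([0-9a-f]{2}\.[0-9a-f]) (.+)$")
    for line in text.splitlines():
        m = pattern.match(line.strip())
        if m:
            yield m.group(1), m.group(2), m.group(3)

def to_dot(devices):
    lines = ["graph topology {", "  rankdir=LR;"]
    for bus, slot, desc in devices:
        dev_id = "dev_%s_%s" % (bus, slot.replace(".", "_"))
        lines.append('  "bus_%s" [shape=box,label="PCI bus %s"];' % (bus, bus))
        lines.append('  %s [label="%s"];' % (dev_id, desc.replace('"', "'")))
        lines.append('  "bus_%s" -- %s;' % (bus, dev_id))
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    capture = sys.argv[1] if len(sys.argv) > 1 else None
    print(to_dot(parse_devices(read_lspci(capture))))
</pre>

The DOT output could then be rendered with, for example, <code>dot -Tpng -o topology.png</code>.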

{{project
|Project=KDD (Windows Debugger Stub) enhancements
|Date=01/30/2013
|Contact=Santosh Jodh <''first.last''@citrix.com>
|Difficulty=Medium
|Skills=C, Kernel Debuggers, Xen, Windows
|Desc=kdd is a Windows debugger stub for the Xen hypervisor. It is open source and can be found under http://xenbits.xen.org/hg/xen-unstable.hg/tools/debugger/kdd

kdd allows you to debug a running Windows virtual machine on Xen using standard Windows kernel debugging tools like WinDbg. kdd is an external debugger stub for the Windows kernel.

Using kdd, Windows can be debugged without enabling the debugger stub inside the Windows kernel. This is important for debugging hard-to-reproduce problems on Windows virtual machines that may not have debugging enabled.

Expected results:
# Add support for Windows 8 (x86, x64) to kdd
# Add support for Windows Server 2012 to kdd
# Enhance kdd to allow WinDbg to write out usable Windows memory dumps (via the .dump debugger extension) for all supported versions
# Produce a user guide for kdd on a Xen wiki page

Nice to have: allow kdd to operate on a Windows domain checkpoint file (e.g. the output of <code>xl save</code>).
|Outcomes=Code is submitted to xen-devel@xen.org for inclusion in the xen-unstable project.
|GSoC=yes}}

=== Performance ===
Generally, work on the performance tools themselves should be listed separately on the [[Xen_Profiling:_oprofile_and_perf]] wiki page.
}}

{{project
|Project=Create a tiny VM for easy load testing
|Date=01/30/2013
|Contact=Dave Scott <''first.last''@citrix.com>
|Difficulty=Medium
|Skills=OCaml, C
|Desc=The http://www.openmirage.org/ framework can be used to create tiny 'exokernels': entire software stacks which run directly on the Xen hypervisor. These VMs have such a small memory footprint (16 MiB or less) that many of them can be run even on relatively small hosts. The goal of this project is to create a specific 'exokernel' that can be configured to generate a specific I/O pattern, and to create configurations that mimic the boot sequence of Linux and Windows guests. The resulting exokernel will then enable cheap system load testing. (A host-side sketch of a possible I/O-pattern format follows this project entry.)
|Outcomes=1. a repository containing an 'exokernel' (see http://github.com/mirage/mirage-skeleton)

2. at least 2 I/O traces, one for Windows boot and one for Linux boot (any version)
|GSoC=yes}}
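
Purely as an illustration of what "configured to generate a specific I/O pattern" might mean, here is a host-side Python sketch of a possible pattern description (a JSON list of read/write records) and a trivial replayer. The schema and field names are invented for this example; the actual deliverable would be an OCaml exokernel built on Mirage, not a host-side script.

<pre>
#!/usr/bin/env python
# Sketch of a trivial I/O-pattern description and replayer. The JSON schema
# (op/offset/length/delay_ms) is made up for illustration; a Mirage exokernel
# would implement something equivalent against its own block interface.
import json
import os
import sys
import time

EXAMPLE_PATTERN = [
    {"op": "read",  "offset": 0,       "length": 4096, "delay_ms": 0},
    {"op": "read",  "offset": 1048576, "length": 8192, "delay_ms": 5},
    {"op": "write", "offset": 4096,    "length": 4096, "delay_ms": 10},
]

def replay(pattern, path):
    """Replay a list of I/O records against a file or block device."""
    fd = os.open(path, os.O_RDWR)
    try:
        for rec in pattern:
            time.sleep(rec["delay_ms"] / 1000.0)
            os.lseek(fd, rec["offset"], os.SEEK_SET)
            if rec["op"] == "read":
                os.read(fd, rec["length"])
            else:
                os.write(fd, b"\0" * rec["length"])
    finally:
        os.close(fd)

if __name__ == "__main__":
    if len(sys.argv) == 3:
        # Usage: replay.py pattern.json /path/to/file-or-device
        replay(json.load(open(sys.argv[1])), sys.argv[2])
    else:
        print(json.dumps(EXAMPLE_PATTERN, indent=2))
</pre>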

=== Upstream bugs! ===
DRBD should be implemented as a new SR type on top of LVM. The tools for managing DRBD devices need to be built into storage management, along with the logic for switching the active and standby nodes.
}}

{{project
|Project=Expose counters for additional aspects of system performance in XCP
|Date=01/30/2013
|Contact=Jonathan Davies <''first.last''@citrix.com>
|Difficulty=Low
|Skills=Basic familiarity with administration of a Xen and/or Linux host
|Desc=XCP stores performance data persistently in round robin databases (RRDs). Presently, XCP only exposes a few aspects of system performance through the RRD mechanism, e.g. vCPU and pCPU utilisation, VM memory size and host network throughput.

The XCP RRD daemon (xcp-rrdd) provides a plugin interface to allow other processes to provide data sources. In principle, these plugins can be written in any language by using the XML-RPC/JSON interface (although at present bindings exist only for OCaml).

The project: create plugins that expose additional information, including things like:
* total amount of CPU cycles used by each VM
* VM- or VBD/VIF-level disk and network throughput
* number of event channels consumed per domain
* how much work each VM is demanding of qemu, netback, blkback and xenstored
* perhaps other statistics currently only easily obtainable via xentrace
(An illustrative collector for one such metric is sketched below this project entry.)
|Outcomes=A set of plugins is authored in a suitable language and demonstrated to work in XCP. The code is submitted to the XCP project on github.
|GSoC=yes}}
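
For illustration only, the sketch below collects one candidate metric (CPU seconds consumed by each domain) by parsing the batch output of <code>xentop</code>, and prints it as JSON. The JSON layout and the use of standard output are stand-ins: a real plugin would speak the xcp-rrdd plugin protocol, whose details are not reproduced here.

<pre>
#!/usr/bin/env python
# Illustrative collector for one candidate data source: CPU seconds used by
# each domain, scraped from "xentop -b -i 1". Printing JSON to stdout stands
# in for the real xcp-rrdd plugin protocol, which is not shown here.
import json
import subprocess
import time

def cpu_seconds_per_domain():
    """Return {domain_name: cpu_seconds} from one xentop batch sample."""
    out = subprocess.check_output(["xentop", "-b", "-i", "1"]).decode()
    stats = {}
    for line in out.splitlines():
        fields = line.split()
        # Expected batch-mode columns: NAME STATE CPU(sec) CPU(%) MEM(k) ...
        if len(fields) < 3 or fields[0] == "NAME":
            continue
        try:
            stats[fields[0]] = float(fields[2])
        except ValueError:
            continue
    return stats

if __name__ == "__main__":
    sample = {
        "timestamp": time.time(),
        "datasources": dict(("cpu_seconds_%s" % name, value)
                            for name, value in cpu_seconds_per_domain().items()),
    }
    print(json.dumps(sample, indent=2))
</pre>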

{{project
|Project=Add support for XCP performance counters to be sampled at varying rates
|Date=01/30/2013
|Contact=Jonathan Davies <''first.last''@citrix.com>
|Difficulty=Medium
|Skills=xcp-rrdd is coded in OCaml, so familiarity with this language would be helpful
|Desc=XCP's RRD daemon (xcp-rrdd) stores performance data persistently in 'round robin databases' (RRDs). Each of these is a fixed-size structure containing data at multiple resolutions. 'Data sources' are sampled at five-second intervals and points are added to the highest-resolution RRD. Periodically each high-frequency RRD is 'consolidated' (e.g. averaged) to produce a data point for a lower-frequency RRD. In this way, data for a long period of time can be stored in a space-efficient manner, with the older data being lower in resolution than more recent data.

However, some data sources change very slowly (e.g. CPU temperature, available disk capacity), so it is overkill to sample them every five seconds. This becomes a problem when it is costly to sample them, perhaps because it involves a CPU-intensive computation or disk activity.

The RRD daemon provides a plugin interface to allow other processes to provide data sources.

The project goal is to generalise the RRD daemon's data-source sampling mechanism to allow it to sample data sources at different frequencies, and to extend the plugin interface to allow plugins to suggest the frequency at which they are sampled. (The scheduling idea is sketched below this project entry.)
|Outcomes=A mechanism is defined and code produced to meet the project goals. The code is submitted to the XCP project on github.
|GSoC=yes}}
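
The sketch below illustrates only the scheduling idea: each data source advertises a preferred sampling interval, and a single loop polls whichever sources are due. It is not derived from the xcp-rrdd code (which is OCaml); the class and data-source names are invented for the example.

<pre>
#!/usr/bin/env python
# Per-source sampling intervals: each data source says how often it wants to
# be read, and one loop samples only the sources that are currently due.
# Illustration of the idea only; the real daemon (xcp-rrdd) is OCaml.
import random
import time

class DataSource(object):
    def __init__(self, name, interval_s, read_fn):
        self.name = name
        self.interval_s = interval_s   # source-suggested sampling period
        self.read_fn = read_fn
        self.next_due = time.time()

def sampling_loop(sources, tick_s=1.0, iterations=10):
    for _ in range(iterations):
        now = time.time()
        for src in sources:
            if now >= src.next_due:
                print("%s = %.2f" % (src.name, src.read_fn()))
                src.next_due = now + src.interval_s
        time.sleep(tick_s)

if __name__ == "__main__":
    sources = [
        # A cheap source sampled often, and an "expensive" one sampled rarely.
        DataSource("vcpu_load", 5, random.random),
        DataSource("cpu_temperature", 60, lambda: 40 + 5 * random.random()),
    ]
    sampling_loop(sources)
</pre>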

{{project
|Project=XCP backend to Juju/Chef/Puppet/Vagrant
|Date=01/30/2013
|Contact=Jonathan Ludlam <''first.last''@citrix.com>
|Difficulty=Medium to small
|Skills=
|Desc=Juju, Chef and Puppet are all tools that are used to provision and configure virtual and, in some cases, physical machines. They all have pluggable backends and can target many cloud providers' APIs, but none of them currently targets the Xen API. (A sketch of the basic provisioning call such a backend would make is shown below this project entry.)
|Outcomes=A new backend for one or more of these, able to install and configure virtual machines on a machine running XCP.
|GSoC=yes}}
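
As a starting point, the sketch below shows the core operation any such backend would need: cloning a template and starting the resulting VM through the XenAPI Python bindings. The host URL, credentials and template name are placeholders, and the integration with a particular tool's plugin interface is not shown.

<pre>
#!/usr/bin/env python
# Core operation a Juju/Chef/Puppet/Vagrant backend would perform: provision
# a VM on an XCP host via the XenAPI Python bindings. The host URL,
# credentials and template name below are placeholders.
import XenAPI

def provision_vm(url, user, password, template_name, vm_name):
    session = XenAPI.Session(url)
    session.xenapi.login_with_password(user, password)
    try:
        templates = session.xenapi.VM.get_by_name_label(template_name)
        if not templates:
            raise RuntimeError("template %r not found" % template_name)
        # Clone the template, turn the clone into a real VM, then boot it.
        vm = session.xenapi.VM.clone(templates[0], vm_name)
        session.xenapi.VM.provision(vm)
        session.xenapi.VM.start(vm, False, False)   # not paused, not forced
        return session.xenapi.VM.get_uuid(vm)
    finally:
        session.xenapi.session.logout()

if __name__ == "__main__":
    print(provision_vm("https://xcp-host.example.com", "root", "secret",
                       "Debian Template", "provisioned-by-backend"))
</pre>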

{{project
|Project=RBD (Ceph) client support in XCP
|Date=01/30/2013
|Contact=James Bulpin <''first.last''@citrix.com>
|Difficulty=Medium
|Skills=C and Python
|Desc=The Ceph distributed storage system allows objects to be stored in a distributed fashion over a number of storage nodes (as opposed to using a centralised storage server). Ceph provides a block device abstraction (RBD); clients currently exist for Linux (exposing the device as a kernel block device) and qemu (providing a backend for emulated/virtualised virtual disks). It is desirable to have support for RBD in XCP to allow VMs to have virtual disks in a Ceph distributed storage system and therefore to allow VMs to be migrated between hosts without the need for centralised storage. Although it is possible to use the Linux kernel RBD client to do this, the scalability is limited and there is no integrated way to manage the creation/destruction and attachment/detachment of RBDs. Ceph provides a user-space client library which could be used by XCP’s tapdisk program (which handles virtual disk read and write requests); a driver (Python script) for XCP’s storage manager could be written to manage the creation/destruction and attachment/detachment of RBDs. (A rough sketch of that lifecycle is shown below this project entry.)
|Outcomes=A tapdisk driver (userspace C) would be written to act as a datapath interface between XCP’s block backend and a Ceph distributed store. It is likely that a reasonable amount of this work could be performed by porting similar support from qemu. A storage manager driver (Python) would be written to activate/deactivate RBDs on demand and create and destroy them in response to XCP API calls. A Ceph distributed store would be represented by an XCP “storage repository”, with an XCP “VDI” being implemented by an RBD. The end result would be that XCP VMs could run using Ceph RBDs as the backing for their virtual disks. All code would be submitted to the xen.org XCP tapdisk (blktap) and SM projects.
|GSoC=yes}}
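
The sketch below is a rough illustration of the create/attach/detach/destroy lifecycle using the <code>rbd</code> command-line tool. A real implementation would instead be a librbd-based tapdisk driver (C) plus a storage manager driver implementing XCP's SM API; the pool and image names here are placeholders.

<pre>
#!/usr/bin/env python
# Rough sketch of an RBD lifecycle using the "rbd" CLI. A real XCP storage
# manager driver would implement the SM SR/VDI API rather than be a
# standalone script; pool and image names are placeholders.
import subprocess

POOL = "xcp-sr"

def rbd(*args):
    return subprocess.check_output(("rbd",) + args).decode().strip()

def vdi_create(image, size_mib):
    rbd("create", "%s/%s" % (POOL, image), "--size", str(size_mib))

def vdi_attach(image):
    # Kernel RBD client used for simplicity; the project would instead hand
    # the image to a librbd-based tapdisk driver.
    return rbd("map", "%s/%s" % (POOL, image))   # prints e.g. /dev/rbd0

def vdi_detach(device):
    rbd("unmap", device)

def vdi_destroy(image):
    rbd("rm", "%s/%s" % (POOL, image))

if __name__ == "__main__":
    vdi_create("demo-vdi", 1024)       # 1 GiB image
    dev = vdi_attach("demo-vdi")
    print("attached at " + dev)
    vdi_detach(dev)
    vdi_destroy("demo-vdi")
</pre>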

{{project
|Project=Add connection tracking capability to the Linux OVS
|Date=01/30/2013
|Contact=Mike Bursell <''first.last''@citrix.com>
|Difficulty=Medium
|Skills=C, networking concepts, OpenFlow useful (but not essential upfront)
|Desc=The Open vSwitch (OVS) currently has no concept of connections - only flows. One piece of functionality which it would be interesting to create is the ability to allow incoming flows that correspond to existing outgoing flows, in order, for instance, to allow return traffic for telnet or HTTP connections. This would require an add-on which would act as a proxy between an OpenFlow controller and the OVS instance. It would intercept requests for flow rules for incoming flows, match them against existing outgoing flows, and, based on simple rules, decide whether to set up a relevant OpenFlow rule in the OVS. This is comparable to iptables’ “RELATED/ESTABLISHED” state matching. If time were short, a simpler, non-proxying version for use without a Controller would still be useful (sketched below this project entry).
|Outcomes=A daemon which would bind to the local OVS OpenFlow interface and perform the following work:
* parse OpenFlow requests
* read OVS flow table(s)
* optionally maintain a whitelist of acceptable port/IP address tuples
* write OVS flow rules
* optionally act as a proxy between an OVS instance and an OpenFlow Controller, relaying requests which it has decided not to handle itself out to the Controller, and the results back to the OVS instance.
Code would be submitted to the openvswitch project.
|GSoC=yes}}
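
The sketch below illustrates only the simpler, non-proxying variant mentioned above: it checks the bridge's current flow table with <code>ovs-ofctl dump-flows</code> and, for a whitelisted destination port, installs the return-direction rule with <code>ovs-ofctl add-flow</code>. The bridge name, whitelist and flow-matching heuristic are assumptions for illustration; the full OpenFlow proxy is not attempted here.

<pre>
#!/usr/bin/env python
# Simpler, non-proxying variant: given an outgoing TCP flow already present on
# the bridge, install the matching return-direction rule with ovs-ofctl.
# Bridge name, whitelist and matching heuristic are placeholders.
import subprocess

BRIDGE = "xenbr0"
ALLOWED_REMOTE_PORTS = {80, 23}   # HTTP and telnet, per the example above

def outgoing_flow_exists(src_ip, dst_ip, dst_port):
    """Look for a matching outgoing TCP flow in the current flow table."""
    flows = subprocess.check_output(["ovs-ofctl", "dump-flows", BRIDGE]).decode()
    return any("nw_src=%s" % src_ip in line
               and "nw_dst=%s" % dst_ip in line
               and "tp_dst=%d" % dst_port in line
               for line in flows.splitlines())

def allow_return_traffic(src_ip, dst_ip, dst_port):
    """Install a rule letting the reply half of an allowed connection back in."""
    if dst_port not in ALLOWED_REMOTE_PORTS:
        return False
    if not outgoing_flow_exists(src_ip, dst_ip, dst_port):
        return False
    match = "tcp,nw_src=%s,nw_dst=%s,tp_src=%d,actions=normal" % (
        dst_ip, src_ip, dst_port)
    subprocess.check_call(["ovs-ofctl", "add-flow", BRIDGE, match])
    return True

if __name__ == "__main__":
    print(allow_return_traffic("10.0.0.5", "192.0.2.10", 80))
</pre>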

* XCP and XAPI development projects: [[XAPI project suggestions]]
This page lists various Xen-related development projects that can be picked up by anyone! If you're interested in hacking Xen this is the place to start! Ready for the challenge?
To work on a project:

* Find a project that looks interesting (or a bug if you want to start with something simple).
* Send an email to the xen-devel mailing list and let us know you have started working on a specific project.
* Post your ideas, questions and RFCs to xen-devel sooner rather than later so you can get comments and feedback.
* Send patches to xen-devel early for review so you can get feedback and be sure you're going in the right direction.
* Your work should be based on the xen-unstable development tree if it is Xen and/or tools related. After your patch has been merged to xen-unstable it can be backported to the stable branches (Xen 4.2, Xen 4.1, etc.).
* Your kernel-related patches should be based on the upstream kernel.org Linux git tree (latest version).
xen-devel mailing list subscription and archives: http://lists.xensource.com/mailman/listinfo/xen-devel

Before submitting patches, please look at the [[Submitting Xen Patches]] wiki page.

If you have new ideas, suggestions or development plans, let us know and we'll update this list!
== List of projects ==

=== Domain support ===

* Upstreaming Xen PVSCSI drivers to mainline Linux kernel
* Upstreaming Xen PVUSB drivers to mainline Linux kernel
* Implement Xen PVSCSI support in xl/libxl toolstack
* Implement Xen PVUSB support in xl/libxl toolstack
* Block backend/frontend improvements
* Netback overhaul
* Multiqueue support for Xen netback/netfront in Linux kernel
* Utilize Intel QuickPath on network and block path.
* perf working with Xen
* PAT writecombine fixup
* Parallel xenwatch
* dom0 kgdb support
=== Hypervisor ===

* Microcode uploader implementation
* Introducing PowerClamp-like driver for Xen
* Xen in the Real-Time/Embedded World: Are We Ready?
* Xen in the Real-Time/Embedded World: Improve the Temporal Isolation among vCPUs in SEDF
* Xen in the Real-Time/Embedded World: Improve Multiprocessor Support in SEDF
* Virtual NUMA topology exposure to VMs
* NUMA and ballooning on Xen
* NUMA effects on inter-VM communication and on multi-VM workloads
* Integrating NUMA and Tmem
* IOMMU control for SWIOTLB, to avoid dom0 copy of all >4K DMA allocations
* HVM per-event-channel interrupts
=== Userspace Tools ===

* Convert PyGrub to C
* Refactor Linux hotplug scripts
* XL to XCP VM motion
* VM Snapshots
* Allowing guests to boot with a passed-through GPU as the primary display
* Advanced Scheduling Parameters
* CPU/RAM/PCI diagram tool
* KDD (Windows Debugger Stub) enhancements
=== Performance ===

* Performance tools overhaul
* Create a tiny VM for easy load testing
=== Upstream bugs! ===

* VCPU hotplug bug
* RCU timer sent to offline VCPU
* CONFIG_NUMA on 32-bit.
* Time accounting for stolen ticks.
=== Xen Cloud Platform (XCP) and XAPI projects ===

There are separate wiki pages about XCP and XAPI related projects. Make sure you check these out as well!
* Fuzz testing Xen with Mirage
* Mirage OS XCP/Xen support
* From simulation to emulation to production: self-scaling apps
* Towards a multi-language unikernel substrate for Xen
* DRBD Integration
* Expose counters for additional aspects of system performance in XCP
* Add support for XCP performance counters to be sampled at varying rates
* XCP backend to Juju/Chef/Puppet/Vagrant
* RBD (Ceph) client support in XCP
* Add connection tracking capability to the Linux OVS
* XCP and XAPI development projects: [[XAPI project suggestions]]
* XCP short-term roadmap: [[XCP short term roadmap]]
* XCP monthly developer meetings: [[XCP Monthly Meetings]]
* XAPI developer guide: [[XAPI Developer Guide]]
=== Xen.org testing system ===

* Testing PV and HVM installs of Debian using debian-installer
* Testing NetBSD
Please see the [[XenRepositories]] wiki page!