Archived/Xen Development Projects
{{project
|Project=Is Xen ready for the Real-Time/Embedded World?
|Date=08/08/2012
|Contact=Dario Faggioli <[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>
|Desc=
Whether or not Xen is able to provide its (potential) real-time VMs with all this is something that has not yet been investigated thoroughly enough. This project therefore aims at defining, implementing and executing a proper set of tests, benchmarks and measurements to understand where Xen stands, identify the sources of inefficiency, and alleviate or remove them. The very first step would be to check how much latency Xen introduces, by running some typical real-time workload within a set of VMs, under different host and guest load conditions (e.g., by using [https://rt.wiki.kernel.org/index.php/Cyclictest cyclictest] within the VMs themselves). The results can then be compared to what is achievable with other hypervisors. After this, the code paths contributing most to latency and/or disrupting temporal isolation will need to be identified, and proper solutions to mitigate their effects envisioned and implemented.
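To make the idea concrete, here is a minimal cyclictest-style latency probe in C. It is only a sketch of the measurement principle (the period, iteration count and output format are arbitrary choices), not a replacement for the real cyclictest tool:

<pre>
/* Minimal cyclictest-style probe: sleep periodically on an absolute
 * timer and measure how late each wakeup is.
 * Build (inside the guest) with: gcc -O2 -o latprobe latprobe.c -lrt
 * This is just a sketch; use the real cyclictest for serious work. */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define PERIOD_NS 1000000LL /* 1 ms period, arbitrary */
#define LOOPS     10000

static int64_t ts_to_ns(const struct timespec *t)
{
    return (int64_t)t->tv_sec * 1000000000LL + t->tv_nsec;
}

int main(void)
{
    struct timespec next, now;
    int64_t lat, max_lat = 0;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (i = 0; i < LOOPS; i++) {
        /* Program the next absolute wakeup, one period ahead. */
        next.tv_nsec += PERIOD_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);

        /* Wakeup latency: how far past the programmed time we woke. */
        lat = ts_to_ns(&now) - ts_to_ns(&next);
        if (lat > max_lat)
            max_lat = lat;
    }
    printf("max wakeup latency: %lld ns\n", (long long)max_lat);
    return 0;
}
</pre>

Running such a probe inside a guest, while varying the load in the other VMs and in dom0, and comparing the numbers against bare metal and against other hypervisors, gives a first picture of the latency Xen itself introduces.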
''Note for the GSoC Working Group: this can be coalesced with any other project called "Xen in the Real-Time/Embedded World: XXX" (or even with both of them).''
|GSoC=Yes
}}
{{project
|Project=Implement Temporal Isolation and Multiprocessor Support in the SEDF Scheduler
|Date=08/08/2012
|Contact=Dario Faggioli <[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>
|Desc=
Moreover, Xen offers an implementation of one of the most famous and efficient real-time scheduling algorithms, [http://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling Earliest Deadline First] (called SEDF in Xen), and real-time support is a key feature for a successful embedded hypervisor. Such an advanced scheduling policy, if implemented correctly, is a great advancement and provides much more flexibility than vCPU pinning alone (which is what most embedded hypervisors do to guarantee real-time performance and isolation).
However, SEDF, the EDF implementation in Xen, suffers from some rough edges. As of now, SEDF deals with events such as a vCPU blocking (in general, stopping running) and unblocking (in general, resuming running) by trying (and failing!) to special-case all the possible situations, resulting in code that is rather complicated, ugly, inefficient and hard to maintain. Unified approaches have been proposed for handling blocking and unblocking in EDF, while still guaranteeing temporal isolation among different vCPUs. SEDF also lacks proper multiprocessor support, meaning that it does not properly handle SMP systems unless vCPUs are specifically and statically pinned by the user. This is a big limitation of the current implementation, especially since EDF can work well without imposing this constraint, providing much more flexibility and efficiency in exploiting the system's resources to the fullest.
Therefore, this project aims at implementing one of these solutions in SEDF, more specifically the one called Constant Bandwidth Server (CBS, [http://xoomer.virgilio.it/lucabe72/pshare/pshare.html [1]], [http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=739726&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D739726 [2]], [http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1137390&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1137390 [3]]).
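The heart of CBS is its wakeup rule, which decides whether a vCPU that resumes running may keep its current budget and deadline or must be given fresh ones. The sketch below illustrates the rule in plain C; all the names (struct cbs_vcpu and its fields, cbs_wake()) are hypothetical and do not come from the Xen code base:

<pre>
/* Sketch of the Constant Bandwidth Server (CBS) wakeup rule.
 * Hypothetical names and layout, not actual Xen internals. */
#include <stdint.h>

struct cbs_vcpu {
    int64_t budget;     /* runtime left in the current period (ns) */
    int64_t max_budget; /* runtime granted per period (ns)         */
    int64_t period;     /* reservation period (ns)                 */
    int64_t deadline;   /* current absolute scheduling deadline    */
};

/* Called when a vCPU unblocks (resumes running) at time 'now'. */
static void cbs_wake(struct cbs_vcpu *v, int64_t now)
{
    /*
     * If consuming the leftover budget by the current deadline would
     * exceed the reserved bandwidth U = max_budget / period, i.e. if
     *   budget > (deadline - now) * U,
     * then reusing the old (budget, deadline) pair would let this
     * vCPU steal time from the others; generate a fresh pair instead.
     * This single rule replaces all of SEDF's block/unblock special
     * cases, and is what guarantees temporal isolation.
     */
    if (v->budget * v->period > (v->deadline - now) * v->max_budget) {
        v->deadline = now + v->period;
        v->budget   = v->max_budget;
    }
    /* ...then enqueue v in the EDF run-queue, ordered by deadline. */
}
</pre>

Whatever the actual data structures end up looking like in SEDF, the key property is that a vCPU can never consume more than its reserved fraction of CPU time, no matter how it blocks and unblocks.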
''Note for the GSoC Working Group: this can be coalesced with any other project called "Xen in the Real-Time/Embedded World: XXX" (or even with both of them).''
|GSoC=Yes
}}
{{project
|Project=Xen in the Real-Time/Embedded World: Improve Multiprocessor Support in SEDF
|Date=08/08/2012
|Contact=Dario Faggioli <[mailto:dario.faggioli@citrix.com dario.faggioli@citrix.com]>
|Desc=
No matter whether it is to build a [http://gigaom.com/2011/06/25/mobile-virtualization-finds-its-home-in-the-enterprise/ multi-personality mobile phone], or to [http://www.youtube.com/watch?v=j4uMdROzEGI help achieve consolidation in industrial and factory automation], embedded virtualization ([http://en.wikipedia.org/wiki/Embedded_hypervisor [1]], [http://www.ibm.com/developerworks/linux/library/l-embedded-virtualization/index.html [2]], [http://www.wirevolution.com/2012/02/18/mobile-virtualization/ [3]]) is upon us. In fact, quite a number of ''embedded hypervisors'' already exist, e.g. [http://www.windriver.com/products/hypervisor/ Wind River Hypervisor], [http://dev.b-labs.com/ CodeZero] or [http://www.sysgo.com/products/pikeos-rtos-and-virtualization-concept/ PikeOS]. Xen definitely '''is''' a ''small, fast type-1 hypervisor with support for multiple VMs'' [http://en.wikipedia.org/wiki/Embedded_hypervisor [1]], so it could be a good candidate embedded hypervisor.
Moreover, Xen offers an implementation of one of the most famous and efficient real-time scheduling algorithms, [http://en.wikipedia.org/wiki/Earliest_deadline_first_scheduling Earliest Deadline First] (called SEDF in Xen), and real-time support is a key feature for a successful embedded hypervisor. Such an advanced scheduling policy, if implemented correctly, is a great advancement and provides much more flexibility than vCPU pinning alone (which is what most embedded hypervisors do to guarantee real-time performance and isolation).
Unfortunately, SEDF, the EDF implementation in Xen, does not properly handle SMP systems yet, unless specific vCPU pinning is enforced by the user. That is a big limitation of the current implementation, especially since EDF can work well without imposing this constraint, providing much more flexibility and efficiency in exploiting the system's resources to the fullest.
Therefore, this project aims at extending the SEDF scheduler, enabling proper support for SMP hardware. The first step would be to use, instead of one vCPU run-queue per physical processor (pCPU), only one per "set of pCPUs" (e.g., one run-queue for all the pCPUs that share a common L3 cache), as sketched below. This alone would already increase the effectiveness of the scheduler on current hardware considerably. After that, a mechanism for balancing and migrating vCPUs among different run-queues should be envisioned and implemented.
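A minimal sketch of that first step, in C, follows; the topology helper cpu_to_l3_id() and the data layout are hypothetical, not actual Xen code:

<pre>
/* Sketch: one run-queue per L3 cache domain instead of per pCPU.
 * cpu_to_l3_id() is an assumed helper that maps a pCPU to an
 * identifier of the L3 cache it sits under, derived from topology. */
#define NR_CPUS      256
#define NR_RUNQUEUES 64

struct sedf_runqueue {
    int id;
    /* lock, deadline-ordered list of runnable vCPUs, ... */
};

static struct sedf_runqueue runqueues[NR_RUNQUEUES];
static struct sedf_runqueue *cpu_runqueue[NR_CPUS];

extern int cpu_to_l3_id(int cpu); /* hypothetical topology lookup */

static void sedf_init_runqueues(int nr_cpus)
{
    int cpu;

    for (cpu = 0; cpu < nr_cpus; cpu++) {
        int rq = cpu_to_l3_id(cpu) % NR_RUNQUEUES;

        /* Every pCPU under the same L3 shares one queue, so a
         * runnable vCPU can be picked up by any of them without a
         * cache-cold migration penalty. */
        runqueues[rq].id = rq;
        cpu_runqueue[cpu] = &runqueues[rq];
    }
}
</pre>

With such a layout, the later balancing mechanism only has to migrate vCPUs between cache domains, rather than between individual pCPUs.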
''Note for the GSoC Working Group: this can be coalesced with any other project called "Xen in the Real-Time/Embedded World: XXX" (or even with both of them).''
[[GSoC_2013#multiprocessor-sedf]]
|GSoC=Yes
}}
Revision as of 00:10, 31 January 2013
This page lists various Xen-related development projects that can be picked up by anyone! If you're interested in hacking on Xen, this is the place to start! Ready for the challenge?
To work on a project:
- Find a project that looks interesting (or a bug if you want to start with something simple)
- Send an email to the xen-devel mailing list and let us know you have started working on a specific project.
- Post your ideas, questions and RFCs to xen-devel sooner rather than later, so you can get comments and feedback.
- Send patches to xen-devel early for review, so you can get feedback and be sure you're heading in the right direction.
- Your work should be based on the xen-unstable development tree, if it's Xen and/or tools related. After your patch has been merged into xen-unstable it can be backported to the stable branches (Xen 4.2, Xen 4.1, etc.).
- Your kernel-related patches should be based on the upstream kernel.org Linux git tree (latest version).
xen-devel mailing list subscription and archives: http://lists.xensource.com/mailman/listinfo/xen-devel
Before submitting patches, please look at the Submitting Xen Patches wiki page.
If you have new ideas, suggestions or development plans, let us know and we'll update this list!
List of projects
Domain support
Upstreaming Xen PVSCSI drivers to mainline Linux kernel
Upstreaming Xen PVUSB drivers to mainline Linux kernel
Implement Xen PVSCSI support in xl/libxl toolstack
Implement Xen PVUSB support in xl/libxl toolstack
Block backend/frontend improvements
Netback overhaul
Multiqueue support for Xen netback/netfront in Linux kernel
Utilize Intel QuickPath on network and block path.
perf working with Xen
PAT writecombine fixup
Parallel xenwatch
dom0 kgdb support
Hypervisor
Microcode uploader implementation
Introducing PowerClamp-like driver for Xen
Is Xen ready for the Real-Time/Embedded World?
Implement Temporal Isolation and Multiprocessor Support in the SEDF Scheduler
Virtual NUMA topology exposure to VMs
NUMA and ballooning on Xen
NUMA effects on inter-VM communication and on multi-VM workloads
Integrating NUMA and Tmem
IOMMU control for SWIOTLB, to avoid dom0 copy of all >4K DMA allocations
HVM per-event-channel interrupts
Userspace Tools
Convert PyGrub to C
Refactor Linux hotplug scripts
XL to XCP VM motion
VM Snapshots
Allowing guests to boot with a passed-through GPU as the primary display
Advanced Scheduling Parameters
CPU/RAM/PCI diagram tool
KDD (Windows Debugger Stub) enhancements
Performance
Performance tools overhaul
Create a tiny VM for easy load testing
Upstream bugs!
VCPU hotplug bug
RCU timer sent to offline VCPU
CONFIG_NUMA on 32-bit.
Time accounting for stolen ticks.
Xen Cloud Platform (XCP) and XAPI projects
There are separate wiki pages about XCP and XAPI related projects. Make sure you check these out as well!
Fuzz testing Xen with Mirage
Mirage OS XCP/Xen support
From simulation to emulation to production: self-scaling apps
Towards a multi-language unikernel substrate for Xen
DRBD Integration
Expose counters for additional aspects of system performance in XCP
Add support for XCP performance counters to be sampled at varying rates
XCP backend to Juju/Chef/Puppet/Vagrant
RBD (Ceph) client support in XCP
Add connection tracking capability to the Linux OVS
- XCP and XAPI development projects: XAPI project suggestions
- XCP short-term roadmap: XCP short term roadmap
- XCP monthly developer meetings: XCP Monthly Meetings
- XAPI developer guide: XAPI Developer Guide
Xen.org testing system
Testing PV and HVM installs of Debian using debian-installer
Testing NetBSD
Please see the XenRepositories wiki page!