Archived/Xen Development Projects
{{project
|Project=Block backend/frontend improvements
|Date=
|Contact=
|Desc=
Blkback requires a number of improvements, some of them being:
* Multiple disks in a guest cause contention in the global pool of pages.
* There is only one ring page; with today's SSDs we should make this larger by implementing multi-page ring support (see the sketch after this list).
* With multi-page rings it becomes apparent that the segment size ends up wasting a bit of space on the ring. The BSD folks fixed that by negotiating a new parameter to utilize the full size of the ring; Intel had an idea for a descriptor page.
* Add DIF/DIX support [http://oss.oracle.com/~mkp/docs/lpc08-data-integrity.pdf] for T10 PI (Protection Information), to support data integrity fields and checksums.
* Further performance evaluation needs to be done to see how blkback behaves under high load.
* Further discussion and open issues are outlined in http://lists.xen.org/archives/html/xen-devel/2012-12/msg01346.html
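To make the multi-page ring item concrete, below is a minimal sketch of how a block frontend could negotiate a larger ring with its backend over XenStore. The key names ("max-ring-page-order", "ring-page-order") and the helper function are illustrative assumptions, not an agreed protocol; they only show the general shape of the negotiation.

<source lang="c">
/*
 * Sketch only: one possible XenStore-based negotiation of a multi-page ring.
 * The key names and the helper are assumptions for illustration, not the
 * final protocol.
 */
#include <xen/xenbus.h>

static unsigned int negotiate_ring_page_order(struct xenbus_device *dev,
					      unsigned int max_order_we_support)
{
	unsigned int backend_max = 0;

	/* Ask the backend how many ring pages (as a power of two) it accepts. */
	if (xenbus_scanf(XBT_NIL, dev->otherend,
			 "max-ring-page-order", "%u", &backend_max) != 1)
		backend_max = 0;	/* Old backend: stick to a single page. */

	if (backend_max > max_order_we_support)
		backend_max = max_order_we_support;

	/* Record the order we picked so the backend can map that many pages. */
	xenbus_printf(XBT_NIL, dev->nodename,
		      "ring-page-order", "%u", backend_max);

	return backend_max;	/* 0 = one page, 1 = two pages, ... */
}
</source>

A larger ring only helps if the backend can actually keep it busy, which ties in with the performance-evaluation item above.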
|GSoC=Yes, but we would have to chop them into nice chunks
}}

{{project
|Project=Multiqueue support for Xen netback/netfront in Linux kernel
|Date=01/01/2013
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|Desc=
Please consider this project a sub-project of the Netback overhaul by Konrad Rzeszutek Wilk. Originally posted by Pasik, elaborated by Wei.

Multiqueue support allows a single virtual network interface (vif) to scale to multiple vcpus. Each queue has its own interrupt and can therefore be bound to a different vcpu. KVM VirtIO, VMware VMXNet3, tun/tap and various other drivers already support multiqueue in upstream Linux.

* Some general info about multiqueue: http://lists.linuxfoundation.org/pipermail/virtualization/2011-August/018247.html

In the current implementation of Xen PV network, every vif is equipped with only one TX/RX ring pair and one event channel, which does not scale when a guest has multiple vcpus. To utilize all vcpus for network processing we currently have to configure multiple vifs and bind their interrupts to vcpus manually, which is not ideal and involves too much configuration.
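As a rough illustration of what "one queue per vcpu" could mean on the frontend side, here is a sketch of the per-queue state such a design might carry. All names are assumptions for illustration (only the ring types come from the existing Xen netif headers); this is not an actual upstream design.

<source lang="c">
/*
 * Sketch only: per-queue state a multi-queue netfront could keep.
 * Every queue gets its own ring pair, its own (split) event channels
 * and therefore its own interrupts, which can be bound to one vcpu each.
 */
#include <xen/interface/io/netif.h>

#define EXAMPLE_MAX_QUEUES 8	/* illustrative bound, e.g. the number of vcpus */

struct example_netfront_queue {
	unsigned int id;			/* queue index                 */
	struct xen_netif_tx_front_ring tx;	/* private TX ring             */
	struct xen_netif_rx_front_ring rx;	/* private RX ring             */
	unsigned int tx_evtchn, rx_evtchn;	/* split event channels        */
	unsigned int tx_irq, rx_irq;		/* bindable to a chosen vcpu   */
};

struct example_netfront_info {
	unsigned int num_queues;		/* negotiated with the backend */
	struct example_netfront_queue queues[EXAMPLE_MAX_QUEUES];
};
</source>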
The multiqueue support in the Xen vif should be straightforward. It requires changing the current vif protocol and the code used to initialize / connect / reconnect vifs. However, there are risks in terms of collaboration: several parties may end up working on the same piece of code. Here are possible obstacles and thoughts (see also the split event channel sketch after this list):
* netback worker model change - the possible change from M:N to 1:1 is not really an obstacle, because 1:1 is just a special case of M:N
* netback page allocation mechanism change - not likely to require a protocol change
* netback zero-copy - not likely to require a protocol change
* receiver-side copy - touches both protocol and implementation
* multi-page ring - touches both protocol and implementation, should be easy to merge
* split event channel - touches both protocol and implementation, should be easy to merge
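For the split event channel item, here is a minimal sketch of what the frontend side could look like once the backend has granted two event channels for a queue. The handler and variable names are illustrative assumptions; the binding helpers are the ones existing Linux Xen drivers already use.

<source lang="c">
/*
 * Sketch only: binding separate TX and RX event channels for one queue,
 * so TX completions and RX notifications get their own interrupts.
 * Handler and variable names are illustrative.
 */
#include <linux/interrupt.h>
#include <xen/events.h>

static irqreturn_t example_tx_interrupt(int irq, void *dev_id)
{
	/* Clean up completed TX requests for this queue only. */
	return IRQ_HANDLED;
}

static irqreturn_t example_rx_interrupt(int irq, void *dev_id)
{
	/* Kick packet processing for this queue only. */
	return IRQ_HANDLED;
}

static int example_bind_split_evtchns(unsigned int tx_evtchn,
				      unsigned int rx_evtchn, void *queue)
{
	int tx_irq, rx_irq;

	tx_irq = bind_evtchn_to_irqhandler(tx_evtchn, example_tx_interrupt,
					   0, "vif-tx", queue);
	if (tx_irq < 0)
		return tx_irq;

	rx_irq = bind_evtchn_to_irqhandler(rx_evtchn, example_rx_interrupt,
					   0, "vif-rx", queue);
	if (rx_irq < 0) {
		unbind_from_irqhandler(tx_irq, queue);
		return rx_irq;
	}

	/* A real implementation would store tx_irq/rx_irq in the queue state;
	 * each IRQ can then be given its own affinity, i.e. its own vcpu. */
	return 0;
}
</source>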
The candidate for this project should be familiar with the open source development workflow, as the project may require collaboration with several parties.

Expected outcome:
* Upstream patches.
* A benchmark report (basic: compare single-queue vs. multi-queue vif; advanced: compare the Xen multi-queue vif against KVM multi-queue VirtIO, etc.).
|GSoC=Yes
}}

{{project
|Project=Utilize Intel QuickPath on network and block path.
|Date=01/22/2013
|Contact=Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|Desc=Intel QuickPath, also known as Direct Cache Access (DCA), is the chipset logic that sits in the PCIe subsystem of Intel systems. It allows the PCIe subsystem to tag which PCIe writes to memory should reside in the Last Level Cache (LLC, also known as L3, which in some cases can be 15MB, or 2.5MB per CPU). This offers a considerable speed boost: we bypass the DIMMs and the CPU can instead process the data straight from the cache.

Adding this capability to the network or block backends would let us keep the data in the cache for longer, so the guest can process the data right off the cache.
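As a starting point, here is a minimal sketch of how a Linux driver opts a device in to DCA today (the pattern used by NIC drivers such as ixgbe/igb). How a Xen network or block backend would pick the right target CPU for a given guest's traffic is the open part of this project and is not shown here.

<source lang="c">
/*
 * Sketch only: opting a device in to DCA, following the pattern used by
 * existing NIC drivers.  Choosing the CPU/LLC target per guest is exactly
 * the open question of this project.
 */
#include <linux/dca.h>
#include <linux/device.h>

static void example_enable_dca(struct device *dev)
{
	/* Ask the DCA provider (the chipset) to steer this device's
	 * PCIe writes towards the cache of the CPUs we later nominate. */
	if (dca_add_requester(dev) == 0)
		dev_info(dev, "DCA enabled\n");
	else
		dev_info(dev, "DCA not available, using plain DMA\n");
}
</source>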
}}
Revision as of 10:26, 23 January 2013
This page lists various Xen-related development projects that can be picked up by anyone! If you're interested in hacking Xen, this is the place to start! Ready for the challenge?
To work on a project:
- Find a project that looks interesting (or a bug if you want to start with something simple)
- Send an email to the xen-devel mailing list and let us know you have started working on a specific project.
- Post your ideas, questions and RFCs to xen-devel sooner rather than later so you can get comments and feedback.
- Send patches to xen-devel early for review so you can get feedback and be sure you're heading in the right direction.
- Your work should be based on the xen-unstable development tree if it is Xen and/or tools related. After your patch has been merged into xen-unstable it can be backported to the stable branches (Xen 4.2, Xen 4.1, etc.).
- Your kernel-related patches should be based on the upstream kernel.org Linux git tree (latest version).
xen-devel mailing list subscription and archives: http://lists.xensource.com/mailman/listinfo/xen-devel
Before submitting patches, please look at the Submitting Xen Patches wiki page.
If you have new ideas, suggestions or development plans, let us know and we'll update this list!
List of projects
Domain support
- Upstreaming Xen PVSCSI drivers to mainline Linux kernel
- Upstreaming Xen PVUSB drivers to mainline Linux kernel
- Implement Xen PVSCSI support in xl/libxl toolstack
- Implement Xen PVUSB support in xl/libxl toolstack
- Block backend/frontend improvements
- Netback overhaul
- Multiqueue support for Xen netback/netfront in Linux kernel
- Utilize Intel QuickPath on network and block path.
- PAT writecombine fixup
- Parallel xenwatch
Hypervisor
- Microcode uploader implementation
- Introducing PowerClamp-like driver for Xen
- Xen in the Real-Time/Embedded World: Are We Ready?
- Xen in the Real-Time/Embedded World: Improve the Temporal Isolation among vCPUs in SEDF
- Xen in the Real-Time/Embedded World: Improve Multiprocessor Support in SEDF
- Virtual NUMA topology exposure to VMs
- NUMA effects on inter-VM communication and on multi-VM workloads
- Integrating NUMA and Tmem
Userspace Tools
- Convert PyGrub to C
- Refactor Linux hotplug scripts
- XL to XCP VM motion
- VM Snapshots
- Allowing guests to boot with a passed-through GPU as the primary display
- Advanced Scheduling Parameters
Performance
- Performance tools overhaul
Upstream bugs!
- VCPU hotplug bug
- RCU timer sent to offline VCPU
- CONFIG_NUMA on 32-bit.
- Time accounting for stolen ticks.
Xen Cloud Platform (XCP) and XAPI projects
There are separate wiki pages about XCP and XAPI related projects. Make sure you check these out as well!
- Fuzz testing Xen with Mirage
- Mirage OS XCP/Xen support
- From simulation to emulation to production: self-scaling apps
- Towards a multi-language unikernel substrate for Xen
- DRBD Integration
- XCP and XAPI development projects: XAPI project suggestions
- XCP short-term roadmap: XCP short term roadmap
- XCP monthly developer meetings: XCP Monthly Meetings
- XAPI developer guide: XAPI Developer Guide
Please see the XenRepositories wiki page!