Xen network TODO

Latest revision as of 14:10, 13 February 2014

This is a list of open work items for the Xen network project.

These items are mostly Linux-specific. If you need help developing for other OSes, don't hesitate to email Xen-devel with your questions.

In progress

New baseline for Xen network

  • split event channels (done)
  • kthread + NAPI 1:1 model (done)

After switching to the new baseline, you get better scheduling fairness among vifs, some degree of interrupt mitigation, and better aggregate throughput (a simplified sketch of the new model follows at the end of this section).

Please also note that you need irqbalance running in Dom0 (or in your backend driver domain); otherwise you might get worse performance than before.

Unfortunately, the stock irqbalance shipped by most distributions does not work in Dom0, because it does not classify xen-dyn-event interrupts properly; you need to compile irqbalance from its latest master branch.
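
As a rough illustration of the 1:1 model (one NAPI instance plus one kernel thread per vif, each driven by its own event channel), here is a simplified C sketch of the general wiring. It is not the actual xen-netback code: struct xenvif is cut down, and the process_*_ring() and rx_work_todo() helpers are placeholders.

    /*
     * Simplified sketch of the kthread + NAPI 1:1 model with split event
     * channels.  Not the real xen-netback code: struct xenvif is cut down
     * and the process_*_ring()/rx_work_todo() helpers are placeholders.
     */
    #include <linux/interrupt.h>
    #include <linux/kthread.h>
    #include <linux/netdevice.h>
    #include <linux/wait.h>

    struct xenvif {
            struct napi_struct napi;     /* polls the guest-transmit ring */
            struct task_struct *task;    /* services the guest-receive ring */
            wait_queue_head_t wq;
    };

    static int  process_tx_ring(struct xenvif *vif, int budget);  /* placeholder */
    static bool rx_work_todo(struct xenvif *vif);                 /* placeholder */
    static void process_rx_ring(struct xenvif *vif);              /* placeholder */

    /* Split event channels: each ring has its own interrupt handler. */
    static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
    {
            struct xenvif *vif = dev_id;

            napi_schedule(&vif->napi);   /* defer TX-ring work to NAPI */
            return IRQ_HANDLED;
    }

    static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
    {
            struct xenvif *vif = dev_id;

            wake_up(&vif->wq);           /* kick the per-vif kthread */
            return IRQ_HANDLED;
    }

    /* One NAPI instance per vif. */
    static int xenvif_poll(struct napi_struct *napi, int budget)
    {
            struct xenvif *vif = container_of(napi, struct xenvif, napi);
            int work_done = process_tx_ring(vif, budget);

            if (work_done < budget)
                    napi_complete(napi);
            return work_done;
    }

    /* One kernel thread per vif. */
    static int xenvif_kthread(void *data)
    {
            struct xenvif *vif = data;

            while (!kthread_should_stop()) {
                    wait_event_interruptible(vif->wq, rx_work_todo(vif) ||
                                             kthread_should_stop());
                    process_rx_ring(vif);
            }
            return 0;
    }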

Open Work Items

Multipage ring
Contact: xen-devel@lists.xen.org
Description: The maximum amount of outstanding data the ring can hold is 896 KB (a 64 KB packet uses up to 18 slots out of the ring's 256; 256 / 18 = 14 packets, and 14 * 64 KB = 896 KB). This could be expanded by using multiple pages for the ring. That would benefit NFS and bulk data transfers (such as netperf data streams).
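
A quick back-of-the-envelope check of the figure above, using the numbers from the description:

    #include <stdio.h>

    int main(void)
    {
        const int ring_slots     = 256;  /* slots in a single-page ring */
        const int slots_per_64kb = 18;   /* worst-case slots for one 64 KB packet */

        int packets = ring_slots / slots_per_64kb;  /* 256 / 18 = 14 */
        int max_kb  = packets * 64;                 /* 14 * 64 = 896 KB */

        printf("max outstanding data: %d KB\n", max_kb);
        return 0;
    }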

Multiqueue support
Contact: xen-devel@lists.xen.org
Description: Create multiple TX and RX rings per vif; this scales better when the DomU has more vCPUs.
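
A rough sketch of how per-vif state might be organized with multiple queues. This is illustrative only; the field names and placeholder types below are not the actual xen-netback or xen-netfront layout.

    /*
     * Illustrative multiqueue layout: one TX/RX ring pair, event-channel
     * pair, NAPI context and kthread per queue, ideally one queue per
     * DomU vcpu.  Not the real driver structures.
     */
    #include <linux/netdevice.h>
    #include <linux/sched.h>

    struct vif_queue {
            unsigned int        id;          /* queue index */
            void               *tx_ring;     /* per-queue TX ring (placeholder type) */
            void               *rx_ring;     /* per-queue RX ring (placeholder type) */
            unsigned int        tx_evtchn;   /* per-queue split event channels */
            unsigned int        rx_evtchn;
            struct napi_struct  napi;        /* per-queue NAPI context */
            struct task_struct *task;        /* per-queue kthread */
    };

    struct vif {
            unsigned int       num_queues;   /* negotiated with the other end */
            struct vif_queue  *queues;       /* array of num_queues entries */
    };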

Don't gnt_copy all of the requests
Contact: xen-devel@lists.xen.org
Description: Instead, don't touch them and let the Xen IOMMU create the appropriate entries. This would require the DMA API in dom0 to be aware of whether the grant has been done; if not (i.e. the page is FOREIGN, with no m2p_override), it would make a hypercall to tell the hypervisor that this grant is going to be used by a specific PCI device, which would create the IOMMU entry in Xen.
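
The decision the dom0 DMA path would have to make might look roughly like the sketch below. All of the helpers marked hypothetical are placeholders for interfaces this work item would have to introduce; they are not existing kernel or hypervisor APIs.

    /*
     * Hypothetical sketch only.  is_foreign_grant(), lookup_grant_ref() and
     * xen_iommu_map_grant_for_device() do not exist; they stand in for the
     * interfaces this work item would need to add.
     */
    #include <linux/dma-mapping.h>
    #include <linux/mm.h>
    #include <xen/grant_table.h>

    static bool is_foreign_grant(struct page *page);                    /* hypothetical */
    static grant_ref_t lookup_grant_ref(struct page *page);             /* hypothetical */
    static dma_addr_t xen_iommu_map_grant_for_device(struct device *dev,
                                                     grant_ref_t ref);  /* hypothetical */

    static dma_addr_t map_for_device(struct device *dev, struct page *page)
    {
            if (!is_foreign_grant(page)) {
                    /* Local (or already copied) page: normal DMA mapping. */
                    return dma_map_page(dev, page, 0, PAGE_SIZE,
                                        DMA_BIDIRECTIONAL);
            }

            /*
             * FOREIGN page, no m2p_override: instead of grant-copying the
             * data, ask Xen (via a hypercall) to create the IOMMU entry that
             * lets this specific PCI device DMA to/from the granted page.
             */
            return xen_iommu_map_grant_for_device(dev, lookup_grant_ref(page));
    }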

Do persistent mapping on guest TX side
Contact: xen-devel@lists.xen.org
Description: This would only be done on the frontend -> backend path. However, we could exhaust the initial domain's memory.
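
A very rough sketch of the idea on the backend side: instead of grant-copying every request on the frontend -> backend (guest TX) path, keep a bounded per-vif cache of grants that stay mapped across requests. Everything below is an illustrative placeholder rather than existing code, and the bound reflects the memory-exhaustion concern above.

    /*
     * Illustrative persistent-grant cache for the guest TX path.  The
     * bound exists because every cached entry pins backend (Dom0) memory.
     * map_grant_persistently() is a hypothetical helper.
     */
    #include <linux/types.h>
    #include <xen/grant_table.h>

    #define PERSIST_CACHE_SIZE 256

    struct persistent_gnt {
            grant_ref_t  ref;       /* grant ref issued by the frontend */
            void        *vaddr;     /* backend mapping kept across requests */
            bool         in_use;
    };

    static struct persistent_gnt cache[PERSIST_CACHE_SIZE];

    static void *map_grant_persistently(grant_ref_t ref);  /* hypothetical: map + insert */

    static void *get_persistent_mapping(grant_ref_t ref)
    {
            unsigned int i;

            /* Reuse an existing mapping for this grant ref if we have one. */
            for (i = 0; i < PERSIST_CACHE_SIZE; i++)
                    if (cache[i].in_use && cache[i].ref == ref)
                            return cache[i].vaddr;

            /* Miss: map the grant once and keep it for later requests. */
            return map_grant_persistently(ref);
    }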

Affinity of the frontend and backend being on the same NUMA node
Contact: xen-devel@lists.xen.org
Description: This touches upon the discussion about NUMA and having PV guests be aware of memory layout. It also means that each backend kthread needs to be on a different NUMA node.
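
One plausible way to express the affinity in the backend, sketched with standard kernel primitives. guest_memory_node() and backend_thread_fn() are hypothetical placeholders; this is not existing xen-netback code.

    /*
     * Sketch: create the per-vif backend kthread on the NUMA node that holds
     * the frontend's memory, and restrict it to that node's CPUs.
     * guest_memory_node() and backend_thread_fn() are placeholders.
     */
    #include <linux/cpumask.h>
    #include <linux/err.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>
    #include <linux/topology.h>

    static int backend_thread_fn(void *data);   /* placeholder thread function */
    static int guest_memory_node(void *vif);    /* hypothetical node lookup */

    static struct task_struct *start_backend_thread(void *vif)
    {
            int node = guest_memory_node(vif);
            struct task_struct *task;

            /* Allocate the thread's kernel data structures on that node. */
            task = kthread_create_on_node(backend_thread_fn, vif, node,
                                          "vif-backend");
            if (IS_ERR(task))
                    return task;

            /* Only let it run on CPUs belonging to the frontend's node. */
            set_cpus_allowed_ptr(task, cpumask_of_node(node));
            wake_up_process(task);

            return task;
    }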