VirtIO on the Xen Hypervisor

There are three separate development approaches within the Xen community towards building support for enabling use of VirtIO device drivers in guest virtual machines with the Xen hypervisor, and a fourth older completed GSOC project. Work on the active approaches is proceeding on the xen-devel public mailing list.
Placeholder names have been assigned to each of these approaches for ease of reference in this page:

* ''VirtIO-MMIO''
* ''VirtIO-Grant''
* ''VirtIO-Argo''
* ''VirtIO-GSOC''

In addition, Linaro has a project 'Stratos' pursuing: "Establish virtio as the standard interface between hypervisors, freeing a mobile, industrial or automotive platform to migrate between hypervisors and reuse the backend implementation."

* https://linaro.atlassian.net/wiki/spaces/STR/overview
* https://projects.linaro.org/projects/STR/summary
* https://op-lists.linaro.org/mailman/listinfo/stratos-dev
The [https://calendar.google.com/event?action=TEMPLATE&tmeid=MWpidm5lbzM5NjlydnAxdWxvc2s4aGI0ZGpfMjAyMTA5MzBUMTUwMDAwWiBjX2o3bmdpMW84cmxvZmtwZWQ0cjVjaDk4bXZnQGc&tmsrc=c_j7ngi1o8rlofkped4r5ch98mvg%40group.calendar.google.com Stratos project teleconference calls] are open.
== ''VirtIO-MMIO:'' enabling existing VirtIO-MMIO transport on Xen, using foreign mappings and an IOREQ server ==

Development by EPAM and others, with focus on Xen on Arm platforms. Contact: Oleksandr Tyshchenko
Enables use of the existing standardized VirtIO-MMIO transport driver, which is present in the mainline Linux kernel, using Xen's IOREQ emulation infrastructure and privileged foreign mappings to establish shared memory for access to guest data by the device model backend.

'''Status: Support for Arm in the Xen 4.17 release.''' This includes support in the toolstack (xl / libxl), booting via dom0less DT, a Linux frontend, and a custom userspace backend, [https://github.com/xen-troops/virtio-disk virtio-disk].

VirtIO on Xen hypervisor (Arm), Oleksandr Tyshchenko, EPAM, Linaro Connect 2021:

* https://static.linaro.org/connect/lvc21/presentations/lvc21-314.pdf
* https://www.youtube.com/watch?v=XE5Rn8KFunk
== ''VirtIO-Grant:'' introducing a new VirtIO transport driver that uses Xen grants ==

Developed by SuSE and EPAM, presented at the Xen Design and Developer Summit 2021 and 2022. Contact: Juergen Gross, Oleksandr Tyshchenko
A new VirtIO transport device driver is added to the guest kernel, to translate guest physical addresses into grant references, enabling VirtIO data path communication over mutually-negotiated shared memory regions between the guest virtual machine and the device model backend. This improves isolation, as the backend does not need privilege over the guest to perform foreign mappings.

Grant references are a Xen-specific interface. Design supports driver domains.

'''Status''': Linux frontend work is upstreamed. Patches for qemu and vhost backends are available but not yet upstreamed; after that, toolstack (libxl / xl) support needs to be added.
VirtIO and Xen with Full Grant Support:

* https://static.sched.com/hosted_files/xen2021/bf/Thursday_2021-Xen-Summit-virtio.pdf
* https://static.sched.com/hosted_files/xen2022/68/VirtIO%20with%20Grants%20-%20Current%20State%20and%20Open%20Questions%20%28Oleksandr’s%20part%29%20%281%29%20%281%29.pdf
* https://static.sched.com/hosted_files/xen2022/73/2022-Xen-Summit-virtio.pdf
* https://www.youtube.com/watch?v=IrlEdaIUDPk&list=PLYyw7IQjL-zGcRPN6EjiTuFVGo4A6KCNf&index=25
* https://www.youtube.com/watch?v=N7VhFYzEH9o&list=PLYyw7IQjL-zEWILLZJ6JDjBCluEr8FDtk&index=11
== ''VirtIO-Argo:'' introducing a new VirtIO transport driver that uses Argo for interdomain communication, supporting isolation and Mandatory Access Control ==

Design and analysis performed within the OpenXT and Xen communities. Contact: Christopher Clark

A new VirtIO transport device driver is added to the guest kernel to transmit data between the guest domain and the domain hosting the device model via Argo rings: a Hypervisor-Mediated data eXchange protocol where the hypervisor transfers the data, being trusted to strictly adhere to the delivery protocol. Supports stronger isolation properties and enforcement of Mandatory Access Control security policy over interdomain communication. Does not use shared memory between domains. Development of a Hypervisor-agnostic interface for Argo has been proposed and discussed within the Xen community. Design supports driver domains.

'''Status: Design and analysis published; funding required for development to proceed.'''

VirtIO-Argo: Documentation at the OpenXT wiki:

VirtIO-Argo Development:

Minutes from the Argo HMX Transport for VirtIO topic call, 14th January 2021:
* https://lists.xenproject.org/archives/html/xen-devel/2021-02/msg01422.html

Xen-devel mailing list post, 30th September 2020 "VirtIO & Argo: a Linux VirtIO transport driver on Xen":

* https://lists.archive.carbon60.com/xen/devel/598361
== ''VirtIO-GSOC:'' 2011 Google Summer of Code Project ==

A Google Summer of Code project by Wei Liu investigated enabling VirtIO on Xen.

A working prototype was produced for both PV and HVM guests, using XenBus and the QEMU VirtIO backends. PV guests require a guest kernel patch to translate guest physical addresses to machine addresses in VirtIO rings.

'''Status: project completed.'''
=== Using the prototype ===

This section is intended to guide people who might be interested in giving it a try.

'''Note''':

* Make sure you have the latest Xen unstable source (at least changeset 23728).
* Check out Stefano Stabellini's QEMU repository (see the sketch below): git://xenbits.xen.org/people/sstabellini/qemu-dm.git xen-stable-0.15
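A minimal sketch of fetching that QEMU tree, using the repository URL and branch name given above; the local directory name is just an example:

<pre>
# Clone the modified QEMU device model tree and switch to the xen-stable-0.15 branch
git clone git://xenbits.xen.org/people/sstabellini/qemu-dm.git qemu-dm
cd qemu-dm
git checkout xen-stable-0.15
</pre>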
==== Virtio for HVM guest ====

There is not much to do:

* With this [http://downloads.xen.org/Wiki/VirtioOnXen/libxl-virtio-support.patch patch], libxl can be enabled to support configuration for Virtio disk and nic.
** For disk configuration syntax, use 'vd*' as the virtual device identifier; see docs/misc/vbd-interface.txt for details.
** For nic configuration syntax, use 'model=virtio' in the vif configuration. Recent QEMU may require 'model=virtio-net'; choose whichever works.
* Disable MSI in the guest kernel with the 'pci=nomsi' option on the guest kernel command line. MSI delivery is not currently supported for HVM guests (Xen has an HVMOP for this, but QEMU is not yet ready to use it).
* You may also want to add 'xen_emul_unplug=never' to the guest kernel command line.

Use 'xl' to start the guest, not 'xm'. A sketch of a possible guest configuration is shown below.
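A minimal sketch of an HVM guest configuration using the options above. The guest name, memory size, disk path and bridge name are illustrative assumptions only, not taken from the prototype:

<pre>
# Hypothetical HVM guest configuration for the Virtio prototype
builder = "hvm"
name    = "virtio-hvm-test"
memory  = 1024
vcpus   = 2
# 'vd*' virtual device identifiers select Virtio disks (see docs/misc/vbd-interface.txt)
disk    = [ 'phy:/dev/vg0/guest-disk,vda,w' ]
# 'model=virtio' selects a Virtio nic ('model=virtio-net' with more recent QEMU)
vif     = [ 'model=virtio,bridge=xenbr0' ]
</pre>

The guest's own kernel command line (set by its bootloader) should include 'pci=nomsi' and, if needed, 'xen_emul_unplug=never'. Start the guest with, for example: xl create virtio-hvm-test.cfg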
==== Virtio for PV guest ====

First of all, you need the patch from the previous section so that libxl supports the configuration syntax for Virtio disk and nic.

Then apply the following patches to upstream QEMU and the Linux kernel:

* QEMU
** [http://downloads.xen.org/Wiki/VirtioOnXen/qemu-01-xenpv-exec.patch qemu-01-xenpv-exec.patch]
** [http://downloads.xen.org/Wiki/VirtioOnXen/qemu-02-virtio-for-pv.patch qemu-02-virtio-for-pv.patch]
* Linux kernel
** [http://downloads.xen.org/Wiki/VirtioOnXen/linux-01-virtio-xenbus.patch linux-01-virtio-xenbus.patch]
** [http://downloads.xen.org/Wiki/VirtioOnXen/linux-02-virtio-ring.patch linux-02-virtio-ring.patch]. '''N.B.''' This patch breaks the existing virtio ring implementation, so compile it out of tree and load it manually.
* VM config file
** Make sure you add device_model_args_pv=['-net', 'tap,vlan=0,script=no'] so that QEMU creates a tap device for you; otherwise you won't be able to send packets via Virtio net.
* Linux kernel config
** Make sure you have 'iommu=soft' on your guest kernel command line.

A sketch of a possible PV guest configuration is shown below.
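A minimal sketch of a PV guest configuration combining the options above. The kernel path, root device and disk paths are illustrative assumptions only; the device_model_args_pv line is the one given in the list above:

<pre>
# Hypothetical PV guest configuration for the Virtio prototype
name   = "virtio-pv-test"
memory = 1024
vcpus  = 2
# Patched guest kernel (the modified virtio modules are built out of tree and loaded later)
kernel = "/boot/vmlinuz-virtio-pv"
# 'iommu=soft' is required on the guest kernel command line
extra  = "root=/dev/xvda1 ro iommu=soft"
# A conventional Xen disk for the root filesystem plus a Virtio disk ('vd*' identifier)
disk   = [ 'phy:/dev/vg0/guest-root,xvda,w', 'phy:/dev/vg0/guest-virtio-disk,vda,w' ]
vif    = [ 'model=virtio' ]
# Make QEMU create a tap device for the Virtio net backend
device_model_args_pv = [ '-net', 'tap,vlan=0,script=no' ]
</pre>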
Run 'xl' to start the VM; 'xm' doesn't understand the virtio configuration. After the VM is up, load the modified virtio_ring.ko and all the other modules manually, in the following order:

# virtio.ko
# virtio_ring.ko
# virtio_net.ko
# virtio_blk.ko
# virtio_xenbus.ko

Hopefully you will get Virtio devices without much trouble. A possible load sequence is sketched below.
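A sketch of loading the modules in that order with insmod (which does not resolve dependencies, hence the explicit ordering). The paths are examples only and assume the out-of-tree virtio_ring.ko and virtio_xenbus.ko have been copied into the guest:

<pre>
# Load in the order listed above; virtio_ring.ko and virtio_xenbus.ko are the out-of-tree builds
insmod /lib/modules/$(uname -r)/kernel/drivers/virtio/virtio.ko
insmod /root/virtio-prototype/virtio_ring.ko
insmod /lib/modules/$(uname -r)/kernel/drivers/net/virtio_net.ko
insmod /lib/modules/$(uname -r)/kernel/drivers/block/virtio_blk.ko
insmod /root/virtio-prototype/virtio_xenbus.ko
</pre>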
==== TODO ====

* Enable xen-mapcache for Virtio for PV guests to improve performance.
* Squash the two evtchns into one; this should allow eliminating locking in the transport layer and improve performance.
* Enable Virtio device DMA capability.
* Bug fixes: the transport layer sometimes crashes under heavy workload.
== Legacy GSoC projects ==

* http://wiki.xen.org/xenwiki/QEMUUpstream

[[Virtio On Xen - Legacy: GSoC]]

For those who encounter problems, please contact Wei Liu <liuw AT liuw SPAMFREE dot name>
[[Category:Xen]]
[[Category:Project]]
[[Category:GSoC]]
[[Category:Internships]]