TCT Meeting/March 2014 Minutes

= Attendees =

* Ian Campbell
* Don Dugger
* Konrad Rzeszutek Wilk
* Kelly Zytaruk
* Sherry Hurwitz
* James Bulpin
* Ian Pratt
* Jun Nakajima
* Olaf Hering
* Boris Ostrovsky
* Daniel De Graaf
* Daniel Kiper

= TCT Mailing List =

A list for administrivia, agendas, minutes, etc. No objections.

= Review Action items =

None

= Current technical challenges =

== Perf subsystem ==

Sherry: Interested in Boris' work on the perf subsystem. How solid/complete is it?

Boris: The Xen side is being reviewed on the list; the Linux side is not posted yet but can be sent out on demand.

= Coordination on current and future work =

== GPU passthrough ==

Kelly: Is anyone working on GPU passthrough as a primary device?

Ian C: Not sure.

Konrad: Intel has patches?

Ian P: Traditionally Intel can only be used as a primary, while AMD and NVIDIA can only be used as a secondary.

Konrad: Was thinking of the XenGT.

Jun: This is virtualising the GPU, not assignment.

Jun: Passthrough depends on the GFX card being used; it depends on the (V)BIOS, etc.

Kelly: A lot of the internal architecture doesn't support primary passthrough; sent Konrad a list of deficiencies. Has it working on a test system but it is not ready for primetime. If no one else is working on it then may pick it up, and is willing to work with other interested parties. Looking into the QEMU side.

Konrad: There are three versions of QEMU: upstream, the version in Xen, and the old one.

Ian C: The first two are the same thing.

Jun: QEMU/KVM guys are using VFIO for gfx passthrough.

Ian C: Can Xen use VFIO? Thought it was very KVM-centric.

Jun: Might be areas where we can share.

Konrad: Bus reset and slot reset code in pciback is buggy and needs work.

Ian C: Thought we did that via sysfs these days.

Konrad: For devices yes, but not for slot or bus resets, etc. The generic code will refuse to reset if there are multiple functions, etc. There's a bunch of code in Linux which VFIO uses to work out whether to do a slot or bus reset. Code for pciback to use this same infrastructure has been posted but needs some rework.
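
For reference, a minimal sketch (not from the meeting) of the per-device sysfs reset path referred to above, assuming the standard reset attribute under /sys/bus/pci/devices/. The PCI address used is a placeholder; slot and bus resets have no equivalent single-file interface, which is the gap being discussed.

<pre>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Write "1" to the device's sysfs reset attribute to request a
 * function-level reset; returns 0 on success, -1 on failure. */
static int sysfs_function_reset(const char *bdf)
{
    char path[128];
    int fd, ret = 0;

    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/reset", bdf);
    fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;          /* no reset method exposed for this device */
    if (write(fd, "1", 1) != 1)
        ret = -1;           /* the kernel performs the reset during this write */
    close(fd);
    return ret;
}

int main(void)
{
    /* 0000:01:00.0 is a placeholder address used for illustration only. */
    if (sysfs_function_reset("0000:01:00.0")) {
        perror("reset");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
</pre>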

Jan: Changes to memory type handling are a prerequisite for GPU passthrough to work, at least whenever shared RAM is used for communication with the GPU. Currently we have an arbitrary mix of UC and WB, which leaves us at risk of not doing what the programmer wanted. Some patches went in; the remainder depend on the VMX maintainers commenting on certain aspects of EPT and on why the IOMMU-related things are done the way they are.

{{Action|Don & Jun}} Don and Jun will chase this up inside Intel.

== Xen nested on Xen performance ==

Ian P: Have been doing some work on this use case. VMCS shadowing on Haswell helps, but if you detect that you are nested and modify your behaviour you can do even better than VMCS shadowing, with no need to change the root hypervisor. Effectively a PV hypervisor with no changes to the host/root hypervisor. Writing to the VMCS directly, avoiding VMREAD/VMWRITE, is non-architectural but goes an awful lot faster. Would like to propose a version identifier in the virtual VMCS for the nested hypervisor to use, accessing the VMCS directly on a match.

When VMCS shadowing is disabled we use a virtual VMCS which does not match the physical VMCS; its layout is arbitrary.

Code in Xen looks for changes in the virtual VMCS and syncs them to the physical VMCS. It is a kind of lazy update: update everything and then trigger a resync with VMLOAD. This relies on Xen resyncing the entire VMCS, so the updates are effectively batched up.

The nested VM needs to know something about how the root Xen behaves, so it needs some way to determine when the root does/doesn't know about this.

No changes to the root Xen, but lots to the nested Xen.
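
To make the proposal concrete, an illustrative sketch (not code from the meeting) of how a nested hypervisor might use such a version identifier. The structure layout, field names and magic value are all invented for illustration, since the real virtual-VMCS layout is root-hypervisor internal and, as noted above, arbitrary.

<pre>
#include <stdint.h>
#include <stdio.h>

struct virtual_vmcs {
    uint32_t revision_id;   /* proposed: a fixed, software-defined identifier */
    uint64_t guest_rip;     /* example field */
    /* ... remaining fields, layout chosen by the root hypervisor ... */
};

#define VVMCS_SW_MAGIC 0x58454e31u   /* invented value for this sketch */

/* Stub standing in for the architectural (trapping) VMWRITE path. */
static void vmwrite(unsigned long field, unsigned long value)
{
    printf("VMWRITE field %#lx <- %#lx (one trap per access)\n", field, value);
}

/*
 * On a match the nested hypervisor writes into the virtual VMCS memory
 * directly; the root Xen resynchronises the whole VMCS on the next VM
 * entry, so the writes are effectively batched. Otherwise it falls back
 * to VMWRITE, which is architectural but traps on every access.
 */
static void set_guest_rip(struct virtual_vmcs *v, uint64_t rip)
{
    if (v->revision_id == VVMCS_SW_MAGIC)
        v->guest_rip = rip;                  /* direct write, no trap */
    else
        vmwrite(0x681e /* GUEST_RIP encoding */, rip);
}

int main(void)
{
    struct virtual_vmcs v = { .revision_id = VVMCS_SW_MAGIC };

    set_guest_rip(&v, 0xffffffff81000000ull);
    printf("guest_rip = %#llx\n", (unsigned long long)v.guest_rip);
    return 0;
}
</pre>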

Ian C: Need to see both the nested-side and root-side changes, in order to test and guarantee them.

Don: Hard to justify without something to take advantage of it.

Konrad: Adding an ABI to the root hypervisor.

Don: Need to be careful to bump the version appropriately. Someone needs to think about it for every change. Need to test etc.

Ian P: Could use hypervisor version number.

Ian C: This can break on backport.

Ian P: Currently the version field is the hardware version, which is very wrong. A hardcoded number would be more correct.

Don: Intel are doing ongoing work on nested.

Ian P: Will propose patches for comments.

== Roadmap for 4.5 ==

Sherry: Have we started a roadmap, wiki page, etc.?

Ian C: People are still catching their breath. Once the dust has settled and the Release Manager has been decided we can revisit. Next time perhaps?

= Community news, activities =

None


[[Category:Technical Coordination Team]]
