Xen Maintainer, Committer and Developer Meeting/XenSummit NA 2012
Xen release and Xen maintenance release management, release cadence and process
Led by George Dunlap
The Xen 4.2 release cycle was 17 months. This is a very long time for an open source project, and longer than Xen release cycles have traditionally run. Historically we had a 9-month cycle, and we should go back to that.
If we plan ahead, we can plan for a freeze date and assume 6 weeks of hardening and 6 weeks of RCs (based on historical data). After the freeze, no more features will be accepted. So, assuming we release 4.2 on Sept 1st, 4.3 would be ready on June 1st, 2013.
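To make the arithmetic behind those dates concrete, here is a small illustrative sketch; the period lengths are the estimates quoted above, not commitments, and the schedule is explicitly allowed to slip (see below).

    from datetime import date, timedelta

    # Illustrative sketch of the proposed ~9-month cadence:
    # ~6 months of open development, then 6 weeks of hardening
    # and 6 weeks of release candidates.
    release_4_2 = date(2012, 9, 1)      # assumed Xen 4.2 release date
    development = timedelta(weeks=26)   # ~6 months of development
    hardening   = timedelta(weeks=6)    # feature freeze -> hardening
    rcs         = timedelta(weeks=6)    # release candidates

    freeze_4_3  = release_4_2 + development
    release_4_3 = freeze_4_3 + hardening + rcs
    print(freeze_4_3)   # 2013-03-02, i.e. a freeze around March 1st
    print(release_4_3)  # 2013-05-25, i.e. a release around June 1st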
The main open question is how much we want to tie ourselves to fixed dates. Certainly we won't hit the exact dates. We should have the flexibility to move the schedule by up to 2 months if there is a good reason to do so.
Jan Beulich raised the point that a feature freeze period of 3 months is fairly long. Maybe we should aim to squeeze freezing and hardening into 6-9 weeks.
George Dunlap said that 6 weeks was an estimate, but that we should have the flexibility to shorten or lengthen it.
This was followed by a discussion on experimental features. The conclusion was that experimental features could go into a release (marked as experimental) and that issues with experimental features should not block a release.
If an experimental feature turns out in retrospect to be central to Xen, we could consider reverting it before a release. However, in some cases this won't be possible (e.g. libxl is something we couldn't revert).
Xen 4.3
Led by George Dunlap
For Xen 4.3, let's aim for a feature freeze on March 1st (plus hardening). We should aim for more pessimistic dates and over-deliver, rather than take an optimistic view of release dates and under-deliver. This will help consumers of Xen.
Key points:
- Aim to freeze on March 1st
- 6 + 3 months is not that unrealistic
- But have flexibility to deal with exceptions
- If this works, we will try to replicate the model for 4.4 (if we slip, we start with 6+3 months again)
- If we are early we can fix more bugs
George Dunlap will be acting as Xen 4.3 release coordinator. George started with a list of features that Citrix is planning to implement for Xen 4.3, and will actively solicit input and feedback from others. He will circulate the list every two weeks and collate/integrate the input.
If you are planning on working on a Xen feature during the 4.3 release time frame, please e-mail George. Jan raised the question of how realistic the current list is; the answer is that it is realistic, and items have been assigned to individuals.
ACTIONS:
- George to send updated list to xen-devel every two weeks (consider a copy on a wiki page)
- Everybody to respond as appropriate
Point Releases: how do we do these?
Led by Jan Beulich
We need to have more predictable point releases. In the past, point releases were managed on an ad-hoc basis, which made planning difficult for Linux distros. We all agreed to aim for a release cycle with a three-month cadence.
One of the key issues is who does the backports, which in the past happened late in the release cycle (about 1 month before a point release). Jan's preference would be to have backports applied to release branches as the original patches are posted on xen-devel. The earlier back-porting is performed, the easier the patches are to apply. Right now Keir has been keeping the two most recent releases and release branches up-to-date. To be more effective, we would need to delegate the responsibility for back-porting to maintainers.
In the discussion that followed, we agreed on the following points:
- Keir is happy for Jan to have ownership for point releases and coordinate activities needed
- We want maintainers to be responsible for backports, but there are a number of practical issues that would need addressing:
- We need a mechanism to identify which changes should be backported
- Maybe we can treat identifying candidates for back-porting as part of patch submission (i.e. highlight in a patch whether it should be back-ported); see the sketch after this list
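As a purely hypothetical illustration of that last point: if submitters flagged candidates with a commit-message trailer (the trailer name "Backport-to:" and the branch name below are assumptions for the example, not an agreed convention), the point-release coordinator could list outstanding candidates mechanically.

    import subprocess

    # Sketch: list commits whose message carries an (assumed)
    # "Backport-to:" trailer, so backport candidates can be found
    # without trawling the mailing list.
    def backport_candidates(branch="staging"):
        log = subprocess.check_output(
            ["git", "log", "--oneline", "--grep=^Backport-to:", branch],
            universal_newlines=True,
        )
        return log.splitlines()

    for line in backport_candidates():
        print(line)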
We also discussed activities that would need to be taken up by the point release coordinator. Key responsibilities discussed were:
- Initiate point releases
- Ensure that maintainers backport features as they are proposed for xen-unstable
- Prompt and chase maintainers who do not backport
There was also a short discussion on whether we should have a more formal mechanism to track bugs. The consensus was that the mailing list works well enough. However, there was no objection to using debbugs (which depends on Ian Jackson getting round to setting it up for Xen).
Future direction of Xen and PV
Led by Konrad R Wilk, also see http://www.slideshare.net/xen_com_mgr/pvh-pv-guest-in-hvm-container
HW has become really good at page-table manipulation. This makes it feasible to run Dom0 in an HVM container. We don't have a name yet; however, "PVH", shorthand for PV guests in HVM containers, was proposed.
Konrad raised that at the Kernel Summit there would be a discussion about the Linux upstream community's future requirements with regards to PVOPS. Talking to Konrad after the Kernel Summit: no requirements to reduce the size of PVOPS, or of the Xen-specific code in PVOPS, were raised.
PVH has great potential: it would enable us to clean up the PVOPS code and remove PV Xen functionality in the long term. In other words, PVH should enable the Xen community to deprecate and remove PV and reduce the size of the PVOPS code (e.g. the PV page-table manipulation code, PVMMU). The reason is that PVH provides all the advantages of PV without the PVOPS code needed solely for PV, by making use of hardware virtualization extensions. Initial benchmarking of Mukesh's PVH implementation shows that with PVH we should be able to achieve PV performance where PV is faster and HVM performance where HVM is faster (and that without any optimization yet).
Although PVH technically has the potential to reduce the size of the PVOPS code, it is not yet clear for how long service providers and large vendors who run PV on hardware without virtualization extensions will still need PV support. As a community we do not want to impose an unnecessary migration burden on our existing user base. Part of the open question is for how long service providers expect to be able to use hardware without virtualization support.
A number of items related to PVH and PV were discussed in more detail:
- Even without hardware MMU virtualization (EPT) we can use shadow page tables. We would have to consider how long hardware without EPT support will be around
- Linux without PVMMU support is seen as a regression today (that position has not changed after the Kernel Summit and is not likely to change in the future). Were this requirement removed at some point in the future, versions of Xen needing PVMMU support would not run on Linux versions without it.
- There was some discussion of how long service providers will still run Xen on hardware without VT-x. Amazon and Citrix stated that non-VT-x hardware is not an issue for them; however, we do not have any information regarding others. Intel stated that Atom processors have VT-x but no EPT, and that in 2012 new servers without EPT will still come to market.
- Jan Beulich also raised questions that would need to be conclusively answered before we know for sure whether PVH can replace PV
- Is the assertion true that PV always performs worse than hardware-assisted MMU virtualization?
- Hardware vendors say that HVM + EPT is faster than PV, but measurements that Jan made showed that HVM never even got close to PV. IO performance certainly does not appear to be the problem.
- Ian Pratt clarified that for uniprocessor guests PV wins out, but for 2-vCPU guests HVM does
- There was also some discussion on various benchmark setups
- There was also some discussion about the complexity and maintainability of the implementations: shadow paging is very complex, whereas PV is way simpler
- NVIDIA claimed that shadow page tables work better than EPT
- Another unknown is how well PCI passthrough and other functionality will work with PVH; some of this simply is not known yet.
- There was also a longer discussion on guest compatibility. In particular, any potential future changes to PV support need to consider how long PV guests will be supported by existing Linux distros (e.g. RHEL 6 will still be supported 7 years from now)
In a nutshell, the conclusion of this discussion was that considering phasing out PV support within the next 5 years in favour of PVH is premature at this stage, and also entirely unnecessary. However, we should track this as a future possibility to simplify the Xen architecture and reduce the size and complexity of the codebase. A good time to revisit this would be after the Xen 4.3 release, when PVH has been optimized and hardened and we have concrete user feedback.
ACTIONS:
- Konrad to summarize the PVOPS discussion from the Kernel Summit on xen-devel
- Konrad to make concrete proposals regarding PVH and PVOPS and discuss them on the list when appropriate
How do we respond to EFI and SecureBoot
Led by Stefano Stabellini (also see http://mjg59.dreamwidth.org/5552.html)
Stefano introduced the topic:
- Dom0 kernels: we will basically need to follow what the various distros do
- Xen guests: the key question is whether a guest will boot without certification. Xen will likely be able to emulate this and make it work, based on work that has already happened in TianoCore.
- Booting Xen on the host: this is harder, as it is not yet clear what OEMs will require on new hardware. OEM and MS keys are likely to be present, which implies that an MS-signed bootloader is needed.
Stefano explained the two approaches taken by different Linux distros:
- Fedora: use a first-stage bootloader signed with the MS key to get past SecureBoot, then boot GRUB and everything up the stack. The implication is that the entire stack needs to be signed.
- Ubuntu: only require signing up to the bootloader (thus no need to sign the kernel and above); the shim bootloader will boot an unsigned loader.
- From the Xen point of view: we need to take action for both cases.
There was a little discussion on various topics:
- Jan raised the point that in fact we do not need a bootloader; e.g. SUSE doesn't use one
- AWS: does not care about bootloaders
- Jan: EFI was designed to make a bootloader superfluous. In that case we could have the same simple shim for both cases!
- Stefano: if Ubuntu says we have GRUB2, we will have to deal with it
There was some discussion on signing Xen. The outcome was as follows:
- In essence we will have to assume that Xen needs to be signed.
- We will need to sign Xen anyway for people caring about security.
- We can also safely assume that most users won't upload their own keys.
- Advanced users are not going to be a problem
- However, we have to worry about users who are not advanced, e.g. those who download Fedora and run Xen. To make this work, Xen in Fedora should be signed with the Fedora key.
More generally, Xen needs to be signed with whatever key the host distro signs with (if any).
There were also a couple of questions:
- Is there reference hardware for EFI (and can we acquire a software test vehicle)? Matt Wilson: yes
- Should we make some kind of official statement? People may be wondering how we will handle this! YES
A number of concrete actions came out of this discussion:
- ACTION: Need to expand distro build system to also sign Xen (in fact each Xen distro contact will need to work on this with their distros)
- Aside: Michael Young was talking about defining the signing process in Fedora. Once the signing process is defined, it should be easy for us to tie into it.
- ACTION: the build system for Xen needs to have a clean mechanism for signing (see the sketch after this list). It is not clear yet whether this needs to be in our build system or the distro's system
- ACTION: We may have signing server issues and may need a process to protect keys
- ACTION: Matt Wilson to send round links to reference HW for EFI
- ACTION: Stefano to add a best-practice howto on signing Xen to our distro wiki page
- ACTION: George add EFI and SecureBoot to 4.3 TODO list
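To make the signing step concrete, here is a minimal sketch of what such a build-system hook could look like, assuming the sbsign tool from sbsigntools; the key and certificate paths are placeholders, and none of this is an agreed design.

    import subprocess

    # Minimal signing-step sketch, assuming sbsign from sbsigntools.
    # The key/cert paths are placeholders; in practice the private key
    # would live on a protected signing server (see the ACTION above).
    def sign_xen_efi(binary="xen.efi",
                     key="/path/to/signing.key",
                     cert="/path/to/signing.crt"):
        subprocess.check_call([
            "sbsign",
            "--key", key,          # distro-held private key
            "--cert", cert,        # matching X.509 certificate
            "--output", binary + ".signed",
            binary,
        ])

    sign_xen_efi()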
Nested Virtualization
Led by Don D Dugger (also see http://www.slideshare.net/xen_com_mgr/nested-virtualization-update-from-intel)
Don laid out Intel's plans on nested virtualization:
- Intel nested virt for 4.2: patches went in last week. Mostly functional, but there are some outstanding bugs
- Requires some additional changes for out-of-the-box integration
- Supports: Xen on Xen, KVM on Xen, Win7 on Xen, ...
- Is it ready to be added as an official feature for Xen 4.2? No, it is not working out of the box.
- Intel wants to make passthrough work on nested virt
- AMD has also been spending some time on making nested virt work with Xen 4.2
Testing now:
- Xen on Xen
- KVM on Xen
- Windows 7 compatibility mode on Xen
ACTION: George to put down nested virtualization as a Xen 4.3 feature
ACTION: George to verify whether we should add AMD nested virt support to the Xen 4.2 feature list
Then there was a discussion on other hypervisors as guests, compared to Xen and KVM:
- For Intel, Hyper-V and VMware are not a priority. Should they be? If so, who would pick this up?
- ESX kind of works
- Intel is not committed to supporting Hyper-V and ESX for now
- ESL for ESX is not documented: this makes it hard for a third party to implement this
- Ian Pratt: we should aim to support all hypervisors
- Maybe somebody should lobby VMware to support this?
- Will this work across vendors? Intel on AMD (and vice versa) should work
The conclusion of the discussion is that there are still issues and that the functionality is still under active development.
How does Xen need to evolve to support NUMA
Led by Dario Faggioli (was presented at XenSummit, but still waiting for the slides)
The core issues around NUMA support are:
- Performance
- Placement / pinning (see the sketch after this list)
- Scheduling (working on it now)
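As a small, purely illustrative sketch of the placement/pinning point, the snippet below pins each vCPU of a guest to the pCPUs of a single NUMA node using the existing xl tooling; the domain name "guest1", the vCPU count, and the node-to-pCPU mapping are made-up examples.

    import subprocess

    # Sketch: constrain a domain's vCPUs to one NUMA node's pCPUs via
    # "xl vcpu-pin". Where the domain's memory actually lives still
    # depends on how it was allocated at creation time.
    NODE0_PCPUS = "0-3"    # assumed: NUMA node 0 owns pCPUs 0-3
    DOMAIN = "guest1"      # hypothetical domain name

    for vcpu in range(4):  # assumed: the guest has 4 vCPUs
        subprocess.check_call(
            ["xl", "vcpu-pin", DOMAIN, str(vcpu), NODE0_PCPUS])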
The reasons why the topic was submitted:
- Looking for advice on how to test and benchmark NUMA support
- Also true for other hardware related issues
On benchmarks, the following possibilities were raised:
- Andrea Arcangeli has an automated benchmark pool (will talk to him at LPC, but not sure this is suitable)
- Mostly running SPECjbb now (need some more representative and typical workloads)
- Other benchmarks to look at: Hackbench, LMbench, Blend of representative benchmarks, Phoronix.
- Plus some microbenchmarks: IPC from Cambridge, VM2EM (shows odd behaviour on NUMA)
ACTION: Lars to check with the Linux Foundation whether there is a NUMA working group; benchmarking and testing could be shared
ACTION: Dario to talk to AnilM
Guests and NUMA
Guest NUMA support for Xen does not currently exist, but Intel and AMD have some patches.
Potential issues: migration and race conditions. We should not overdesign guest NUMA (in particular for live migration; enable people to design homogeneous pools, etc.). It should be sufficient to give users the tools to optimize (and that is all we need to do).
Key questions:
- Will the guest make worse decisions than if it made no decisions at all?
ACTION: Dario to investigate what VMWare does
A number of questions were raised:
- Q: How do we relay topology information if we do not have a device model? We can just add a hypercall
- Oracle: if the topology is relayed to the DB, performance is 15% better compared to turning NUMA off
Another alternative may be to randomize everything and check the difference between randomized and optimized placement.
Upstream Kernel Missing Features
Led by Konrad R Wilk
Status update:
- The list of critical items on the waiting list has reduced dramatically
- ACPI resume is missing (suspend works)
- uCode updater: GRUB also works for UEFI; there is an out-of-tree patch
- Some support for new HW is missing
- Some functionality that classic Xen had is missing, e.g. perf
Call for action: we are looking for more people to work on the Linux kernel.
Linux 3.0 Dom0 opens up a whole set of new possibilities for Xen: lots of doors are open, with lots of interesting possibilities. However, there are too many open doors for one person. As maintainer, Konrad is looking for help!
Actions:
- Intel: has Linux and Xen people; sometimes there is a disconnect. Don will see whether he can make connections.
- Lars to follow up with Citrix management
For 4.3 release planning, we should know whether there are any gating criteria and dependencies in Linux.
ACTION: Konrad will track Linux work items (and act as Release Manager for PVOPS in the same way George does for Xen 4.3) and send a regular update around. The xen-devel and kernel lists are sufficient.
Blkback/netback performance: should get data in 1 week; preliminary data is available now
- P LOOP: did a whole set of work in different environments on netback => lower CPU usage, but no bandwidth improvement
- Grant operations seem to have an issue
- Improved: but still a significant number
- Removed grants => worked much faster
- Citrix has a person working on this: blktap provisionally works, but it is not done and fully understood yet
- Do we want to continue with blkback/netback, or move to virtio? Konrad is looking at virtio to see whether there is potential
- We don't know where the magic bullet is
- The whole architecture without virtio is flexible; virtio is orthogonal and less flexible.
- We would like to avoid going down the virtio route, if possible
- Maybe we can find a better approach
- PVHVM spinlock issue: overhead which generally makes it intolerable; the alternative is not progressing (Rumal @ IBM)
- PV ticketlocks in PVHVM are one of the Linux TODO items
- KVM had the same problem
- Advancement on 0-copy pathways for PV drivers:
- 0-copy netback/front => lower CPU usage, perf equal or worse (avoiding copies means grant mapping => perf impact)
Issues:
- Part of the issue is that this discussion is not currently happening in public
- We don't have good performance tools in Dom0 (not precise and not easy to use) <=> this ties back to perf on Linux
CentOS 5 => 6
Xen.org has been approached by the CentOS community with a call for help in getting Xen support into CentOS 6. As an aside, we worked out a draft plan at LinuxCon, supported by several vendors, to resolve this, but we do not want to publish it until we are 100% sure it can be delivered.
What would need to be done?
- Need to support a Xen-enabled pre-PVOPS kernel in CentOS
- We will give them Xen 4.1
- The default toolstack will be XEND => XEND won't die for the next few years
- Need a new kernel + drivers
- Will try to provide (and support) a PVOPS based kernel
- Will maintain this (version to be decided, probably 3.5)
- Any help will be useful
- Probably need to backport some drivers and do some testing
Help we are looking for:
- Testing and Hardware
- Help in case there are critical bugs in XEND: we have very little in-house knowledge of XEND
- Note that Pasi is part of the CentOS community
What happens if there is a critical bug? We will need people who monitor CentOS bugs (that would help too):
- Pasi can help
- And he can connect us to others
- CentOS already has quite an extensive testing framework testing CentOS 5 with Xen (this could move to 6)
- Dom0 kernel list on CentOS
- There are also 3rd-party repos with upstream kernels (but most CentOS users would like to use RH kernels with patches)
The hard part is:
- Security updates
- Bug fixes
- Maybe use the RH kernel with updates: 2.6.32 with lots of backported stuff; probably not feasible
- Need an enterprise product for an enterprise kernel (apart from SLES, which is going out of support soon)
- 3.4 support is 1 year, 3.5 is 2 years
Besides the kernel issue:
- Need to compile a new version of libvirt (if we want virt-manager to run as well) ... Pasi agreed to volunteer