AB WG/Test/October 2013 Minutes

Attendees

  • Aravind Gopalakrishnan (AMD)
  • Anthony Liguori (AWS), Matt Wilson (AWS)
  • Chris Shepherd (Citrix)
  • Demetrios Coulis standing in for Allan Roberto (CA)
  • Greg Lutostanski (Calxeda)
  • Konrad Wilk (Oracle)
  • Lars Kurth (Chair, Xen Project)
  • Will Auld (Intel)

Lars believes, but is not 100% sure, that

  • Harry Hart and Don Slutz (Verizon) also stayed at the meeting


Action Don: Please confirm whether you were at the meeting, whether Verizon would in fact also participate in the WG, and who the respective company rep would be

Agenda

Introductions

Purpose and Scope of Working Group

Lars: My view is that the WG provides oversight and guidance for creating a test infrastructure for the Xen Project on behalf of the Xen Community. This is merely providing a stake in the ground to start a discussion.

As such, the group needs to

  • Make proposals to the Advisory Board for funding, for example
    • What do we need to set up such a system in terms of hosting space, machines, …
    • Do we need a full-time resource employed by the LF to maintain and evolve the framework, …
  • Make proposals to the community and work with the community on a solution that works for developers in the community
    • Without community buy-in and creating something that the community actually wants, we won’t be able to improve upstream quality
  • Make decisions to help bootstrap demo systems, proof of concepts, etc.
  • The group will also need to approve Statements of Work or provide input on hiring contractors/resources
  • Influence or set ground rules to ensure that the money that the Advisory Board invests delivers value to the community and the Advisory Board
  • Highlight focus areas for investment: e.g. the group would decide on questions such as
    • Are there focus areas for test coverage the WG/AB cares about?
    • Can WG/AB members find resources to cover these areas?
    • If not, do we need to use project funds to seed such areas?

Comment by Matt Wilson (AWS): I'd like to suggest that the AB and working groups try to avoid making technology decisions. It makes sense for the WG to decide where to make investments on behalf of Xen Project members (e.g. where to invest through contractors/grants, capital investments, etc.). While this might influence technology decisions, it should avoid troublesome engineering-by-committee problems.

Lars: The proposal would be to go through the list above and add/remove items and examples. We do not need to make decisions on the items listed above at this meeting (Lars sees these as examples to clarify the scope/purpose/boundaries of the WG). Lars would then take the output and draft a charter, which we can subsequently vote on in the WG and then put forward to the Advisory Board and the wider community.

Status Update and scenarios going forward (Chris Shepherd)

We probably need to give a quick overview of what we have (there are quite a few new people on the list)

Status update:

  • The OSU Open Source Lab has in principle agreed to set up a Test-As-A-Service rig for Xen for a limited time
  • The proposal is based on Citrix’s internal XenRT system. This would be a proof of concept aimed at informing further investment decisions.
  • OSSTest is already running (although on Citrix servers, with limited access for the community) … see

Meeting Mechanics

  • Meeting cadence/time/etc.
  • Mailing lists

Meeting

Introductions

Please feel free to expand and ask questions, etc.

  • Aravind Gopalakrishnan works in the Server OS team at AMD and specialises in RAS features
  • Chris Shepherd leads the Test Department for XenServer at Citrix
  • Will Auld is Performance Architect and Principal Engineer at Intel, but also works closely with the Cloud Virtualization Test Group
  • Anthony Liguori is the QEMU project lead at Amazon
  • Demetrios Coulis is product manager for AppLogic at CA
  • Greg Lutostanski is working in the Validation team (sorry, may have gotten this wrong) at Calxeda and is new to Xen
  • Konrad Wilk is a Software Development Manager at Oracle and also a Xen maintainer in Linux
  • Lars Kurth is the community manager for the Xen Project and is volunteering to chair this working group until it becomes self-sustaining

Purpose and Scope of Working Group

Lars: We didn't really work through the items in the list in the agenda; we ended up having a discussion. I didn't capture all of it (the discussion got quite lively, and I was taking notes and chairing the meeting at the same time). Please feel free to augment the list and correct me.

Lars: We started with the list of items in the agenda, but realized that maybe we need to take a step back and check our assumptions.

Anthony:

  • Raised the point that the introduction of KVM autotest ([1]) has been problematic. Developers generally tend to write test code if it fits into their development workflow (i.e. they can easily run tests locally on their branch).
  • System test frameworks (such as Xen's OSSTest and XenRT), which are run *after* submission, are more problematic, and many devs tend to ignore them
  • In Anthony's experience, we also approached the OSSTest / XenRT discussions wrongly. We should *not* just ask developers to write tests, but better understand what test framework they would find attractive and provide something which helps them.

Lars:

  • We can fix the last point on engaging the community

We then covered what is wrong with OSSTest right now

  • Konrad stated that OSSTest is not well enough maintained and owned right now (IanJ does this in his spare time). OSSTest, or other systems the AB supports, will need to be properly owned and funded; otherwise the system will lose trust.
  • Matt commented that he doesn't like the pushgate mechanism in OSSTest (we didn't cover this in detail)

On the other hand … running Coverity on the code base has been a huge success.

Konrad:

  • In two months we had more than 215 bug fixes
  • The quality of the Xen core components, according to Coverity, is now higher than that of the Linux kernel

This implies that there is a desire in the community to use tools to improve quality.

We then slipped into a discussion about goals: Anthony raised a few questions

  • How do we determine whether we (or what we propose) provide value to members and the community?
  • We would need to define measurable success criteria

The conclusion we came to (note that we didn’t vote) was

  • Understand what member companies want to get out of the framework
  • Understand what the community wants to get out of it (as value for companies depends on community buy in).


Action Lars: Carefully draft a mail to the devel lists (after sending it to this list for approval) along the lines of:

  • As AB we have resources that we can use to help the community
  • We are only willing to spend money if we are confident enough that this helps the community and is likely to be adopted beyond Advisory Board members
  • Here are a few options and what we think would help you (intended to seed the discussion)
  • We are looking for volunteers in the community to work with the WG


Action All: Each member company should provide a statement of what they are trying to achieve and whether there are any specific test-related items, constraints, platforms, goals, etc. that they care about

Additional Notes:

Lars has also included some notes from discussions he had outside the WG meeting at the bottom of this section.

Lars had a conversation with a number of developers after the test talks

  • Anthony stated that a test tool based on QEMU that allows people to run some relevant tests locally might be best (problem: performance). He also raised the point that both OSSTest and XenRT are just like autotest: why do we need yet another new test framework rather than using something that is already out there?
  • Ian Jackson stated that the main issue right now is that system administration for OSSTest is not properly resourced and OSSTest does not yet have good coverage. He spends 50% of his time keeping OSSTest running and sometimes tracking down hardware issues. So whatever we do, resourcing needs to be resolved.
  • When Lars talked to Citrix Platform team members, it turned out that only a few have actually used OSSTest (lack of documentation is the key issue). It’s too hard to get started with OSSTest right now.
  • Another developer stated (source not disclosed): why don't the member companies fund the creation of tests (regardless of test framework) if they care about quality?
  • Ian Jackson: there are some interesting properties about XenRT (e.g. the capability to submit test code with a spec on the fly).
Aside by Lars: this may be close enough to providing the capability to try something locally, if there were also a capability to test a dev's personal git branch with some test code
  • Ian J also stated that he would look at XenRT if the code were made available in a Git repo rather than a tarball. Any issues and discussions could then happen using the normal ways of discussing larger code contributions on xen-devel

Workflow

Right now, OSSTest as well as XenRT require (in Lars' understanding):

a) code to be submitted for review
b) code to be reviewed and then submitted by the committer to the staging branch
c) only then are tests run
d) if tests fail, the code has to be taken out and the whole process starts again

Whereas what would be really desirable and attractive to devs is the following workflow:

a) the developer has a well-working prototype on their personal git branch somewhere
b) the developer can run a set of interesting (or new) tests on some machines on different architectures locally (which is somewhat impractical)
c) alternatively, the developer points to his/her branch (which is then built), selects some interesting tests, specifies interesting machines, and the tests are quickly run on a central test farm (the question is how quick and smooth this process would be)
d) if all works well, the code is submitted for review (and test results could be attached)

This is the core of Anthony's argument. The question is whether the second workflow is achievable with something like OSSTest or XenRT; a purely hypothetical sketch of what it could look like follows below. Of course the system testing approach is also needed, and we shouldn't get too distracted by it.
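
To make the second workflow more concrete, here is a minimal, purely illustrative Python sketch of what "submit my branch plus a set of tests to a central test farm and wait for results" could look like from a developer's machine. The farm URL, the /jobs endpoint, and all field, function, test and machine names below are invented for illustration only; neither OSSTest nor XenRT exposes such an API today.

  # Hypothetical sketch only: the farm URL, the /jobs endpoint and all field
  # names below are invented to illustrate the desired workflow; neither
  # OSSTest nor XenRT exposes such an API today.
  import json
  import time
  import urllib.request

  FARM_URL = "https://testfarm.example.org/api/jobs"  # invented endpoint


  def submit_job(branch_url, tests, machines):
      """Ask the (hypothetical) central test farm to build the given git
      branch and run the selected tests on the selected machine classes."""
      payload = json.dumps({
          "branch": branch_url,    # developer's personal git branch
          "tests": tests,          # e.g. ["save-restore", "live-migrate"]
          "machines": machines,    # e.g. ["x86_64-intel", "arm64"]
      }).encode()
      req = urllib.request.Request(
          FARM_URL, data=payload,
          headers={"Content-Type": "application/json"})
      with urllib.request.urlopen(req) as resp:
          return json.load(resp)["job_id"]


  def wait_for_results(job_id, poll_seconds=60):
      """Poll the farm until the job finishes, then return its summary."""
      while True:
          with urllib.request.urlopen(f"{FARM_URL}/{job_id}") as resp:
              job = json.load(resp)
          if job["state"] in ("passed", "failed"):
              return job
          time.sleep(poll_seconds)


  if __name__ == "__main__":
      job_id = submit_job("git://example.org/~dev/xen.git#my-feature-branch",
                          ["save-restore", "live-migrate"],
                          ["x86_64-intel", "arm64"])
      results = wait_for_results(job_id)
      print(results["state"], results.get("report_url"))

The point is step c) above: test results come back while the work is still on the developer's own branch, so they can be attached when the code is submitted for review in step d).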

Status Update and scenarios going forward

(mostly by Chris Shepherd)

Harry Hart mentioned that Verizon tried XenRT but got stuck (aside by Lars: the fact that right now XenRT sits somewhat outside the Xen Project community does not lend itself to discussing and raising issues about it)

The main benefit of XenRT is that we inherit a large number of existing tests (including for example performance tests and others)

Another key benefit is that, in theory, XenRT would allow a distributed test lab architecture: in other words ...

  • Hardware vendors in the working group could add test machines that are located on their premises to the test environment (assuming these can be placed outside company firewalls)
  • Community members could submit tests to these machines
  • In theory (vendor buy-in assumed) this would enable the community to use hardware that is not readily available on the market or hard to ship

At the time of the meeting, the intention was to show a demo running at osuosl.org, but OSL had not set this up

  • Update from OSL on 8/11/13: Sorry for dropping the ball on this. We have the systems basically ready to go outside of getting public internet plumbed. Are you still in need of them? Please let me know!

The question now is

a) whether Citrix is willing to spend some time on setting up a test environment (and adding support for xl)
b) whether others on this list think that a XenRT demo instance is a good idea

Lars' questions to Chris:


  • Action Chris: Determine whether Citrix is willing to set up XenRT on http://osuosl.org/ (and add support for "xl")

  • Action Chris: Let Lance Albertson from http://osuosl.org/ know. Now that we have a WG, I suggest CCing the WG list on the response to OSL


The other question Lars had for the group (in light of the previous discussion) is:


Action All: Let me know by "replying +1" to this item whether you feel there is value in setting up a XenRT demo instance on http://osuosl.org/; "replying 0" means you don't care; "replying -1" means you believe this is a bad idea (to satisfy Xen Project conventions you would have to justify why you think this)

Meeting Mechanics


Action All: Let Lars know which day and time of the week you would be available for a monthly call. All people on the list are based in the EU or the US (East Coast to West Coast), so a 4pm GMT or 5pm GMT slot would probably work best. Please state 2-3 preferences.