This is how it works now. You're required to run the code. Thing is, this is about legal contracts, so enforcement is hazy. Essentially, if HP signs a trademark usage agreement that says we have to run it, our own lawyers won't let us do differently. I think the first step is to say "OpenStack requires you to run the code" ... then, as violations are brought to our attention, we look into them. On a parallel tack, compatible is testable, and I think it's also useful. An OpenStack cloud could be incompatible based on config, and you could be compatible without being an OpenStack.

Mark McLoughlin <markmc@redhat.com> wrote:
On Thu, 2013-10-31 at 13:55 -0500, Mark Collier wrote:
The OpenStack Mission is:
To produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable.
IMHO the operative (action) word is "produce." This is why I agree with Monty that it is critical that, if we start somewhere, it should be with a program based on people running the code, who have a vested interest in the code's continual evolution and survival. If we take our eye off of that, it could invite fragmentation and dilution of any meaning of "what is OpenStack".
That said, it seems to me that, technically speaking, we might only need one set of tests that could feed the two distinct programs being described ("openstack cloud" and "compatible"). So there might be two (marketing/business/logo) *programs* with unique requirements other than testing, but with one shared test suite.
Therefore, IMHO, the best place to start is with the development of the test itself, while continuing to discuss the ways in which the results might be applied to two distinct (logo) *programs*. Now, I understand that you can't develop a test in the absence of requirements, such as which projects to include, but we could probably come up with a sensible starting point and add additional coverage over time (increasing test coverage not necessarily implying that every test must pass to qualify for a specific program).
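For what it's worth, here's a rough, purely hypothetical sketch of the "one test suite, two programs" idea: each capability test carries tags for the programs that require it, and each program simply selects its subset of the shared suite. The capability names, program labels, and results below are invented for illustration only; they are not a proposal for the actual test list or trademark program definitions.

    # Hypothetical sketch: one shared capability test suite, two "programs"
    # that each require a different subset of it. Names are made up.

    # Each capability test is tagged with the programs that require it.
    CAPABILITY_TESTS = {
        "compute-servers-create": {"openstack-cloud", "compatible"},
        "compute-servers-list":   {"openstack-cloud", "compatible"},
        "identity-token-issue":   {"openstack-cloud", "compatible"},
        "image-upload":           {"openstack-cloud"},  # only needed for the full "cloud" mark
    }

    def required_capabilities(program):
        """Return the subset of the shared suite that a given program requires."""
        return {name for name, programs in CAPABILITY_TESTS.items()
                if program in programs}

    def evaluate(program, passed):
        """Check one provider's passed tests against one program's requirements."""
        missing = required_capabilities(program) - passed
        return {"program": program, "qualifies": not missing, "missing": sorted(missing)}

    if __name__ == "__main__":
        # Hypothetical provider result: passed everything except image-upload.
        provider_passed = {"compute-servers-create", "compute-servers-list",
                           "identity-token-issue"}
        for program in ("openstack-cloud", "compatible"):
            print(evaluate(program, provider_passed))

The point is only that the test suite stays singular; what differs per program is which subset of results must pass.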
OK, I'd be totally cool with all that ... if I had any idea how we were going to go about creating "a program based on people running the code". I see a near-bottomless pit of issues we'd need to work through before putting that into place, and I don't want to block interoperability progress on those issues.
Am I missing something? What's (roughly) your take on how such a program would work?
To be clear, I mean: assuming we know what code should be required to run, how do we express those requirements precisely and certify that providers are meeting them?
Thanks, Mark.