On Wed, 2014-02-05 at 12:21 +0100, Thierry Carrez wrote:
Hi everyone,
As you all know, the DefCore effort is working on redefining "core" (and associated permitted usage of the OpenStack trademark), with the end goal of fostering interoperability.
Interoperability is good, and everyone agrees we need it. However, "interoperability" is a single word describing two very different outcomes, and using that single word has so far prevented us from having a necessary discussion about which type of interoperability we are actually after.
There are two types of interoperability. I call the first one "loose interoperability". The idea is to minimize the set of constraints and maximize the number of deployments able to call themselves "openstack" (or "interoperable").
I call the second one "total interoperability". The idea is to maximize the set of constraints, so that fewer deployments can call themselves "openstack" (but those lucky few share more common traits).
Loose interoperability enables differentiation and is business-friendly (larger ecosystem). Total interoperability enables federation and is end-user-friendly (easier to move workloads from one openstack to another, as they share more).
We can continue defining the process, but at some point we need to choose between those two types of interoperability, because the choice affects the implementation details (how we designate "must-use" parts of the code and determine "must-pass" tests).
I think it's more than time we looked beyond the process and decided what our end goal here actually is.
Very nicely put, Thierry. I agree we need to be crystal clear on our interoperability goals; otherwise, arguing over the implementation details of our interoperability program is fairly futile. I tried and failed to express that idea here: http://blogs.gnome.org/markmc/2013/10/30/openstack-core-and-interoperability...

I think the focus should be on immediate baby steps towards kick-starting this marketplace. One simple question: if and when we certify the first batch of interoperable clouds, would we rather have a small number of big clouds included or a large number of smaller clouds? In terms of the resource capacity provided by this marketplace, I guess it's more or less the same thing.

Let's assume we absolutely want (at least) a small number of the bigger providers included in the program at launch. Let's say "small number" equals "Rackspace and HP". And let's assume both of these providers are very keen to help get this program started. Isn't the obvious next baby step to get representatives of those two providers to figure out exactly what level of interoperability they already have, and what improvements to it they can make in the short term?

If we had that report, we could next do a quick "sniff test" comparing it to many of the other OpenStack clouds out there, to make sure we haven't just picked two clouds with an unusual set of compatible APIs. Then the board could make a call on whether this represents a reasonable starting point for the requirements on OpenStack clouds.

No, this isn't perfect. But it would be a genuine step forward towards being able to certify some clouds. We would have some data on commonalities, and we would have made a policy decision on which commonalities are required. It would be a starting point.

Basically, I'd love to be able to look at a chart which plots interoperability against the size of the interoperable marketplace/ecosystem (I sketch one way such a chart could be computed in the P.S. below). Then we could have a rational discussion about whether we aim for higher interop, even if that means a tiny interoperable marketplace, or for a larger interoperable marketplace with a tiny level of interoperability. It's a difficult trade-off, but it's *the* important decision for the board to make. I'd prefer to be making that decision with data rather than in the abstract, but in the absence of data, I think the board should still be discussing it.

To put the question another way: which helps users more? The ability to write code which uses a large number of OpenStack APIs, but which is only actually interoperable between a small number of OpenStack clouds? Or a smaller set of interoperable APIs, where code that uses only those APIs is interoperable with a much larger number of OpenStack clouds?

My instinct would be to start with the set of APIs which are already interoperable between the highest-profile OpenStack clouds, and build a campaign around enlarging that set every release without shrinking the size of the interoperable ecosystem.

Mark.
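P.S. For what it's worth, here is a minimal sketch (in Python) of how that trade-off chart could be computed, assuming we had survey data on which APIs each cloud actually supports. The cloud names and capability strings below are purely hypothetical placeholders, not real measurements:

    # Hypothetical survey data: which API capabilities each cloud supports.
    # These names are illustrative placeholders, not real results.
    clouds = {
        "cloud-a": {"compute.boot", "compute.resize", "image.list", "object.put"},
        "cloud-b": {"compute.boot", "image.list", "object.put"},
        "cloud-c": {"compute.boot", "image.list"},
    }

    # Every surveyed API, ordered from most to least widely supported.
    apis = sorted(
        {api for caps in clouds.values() for api in caps},
        key=lambda api: -sum(api in caps for caps in clouds.values()),
    )

    # Grow the required API set one API at a time and count how many
    # clouds still support the whole set: each step is one point on the
    # interop-level vs. marketplace-size curve.
    required = set()
    for api in apis:
        required.add(api)
        market = sorted(name for name, caps in clouds.items() if required <= caps)
        print(len(required), "required APIs ->", len(market), "clouds:", market)

With real data for Rackspace, HP and the other OpenStack clouds, plotting len(required) against len(market) would give the board exactly the trade-off curve to argue over.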