As you may know, the OSGi Enterprise Expert Group is currently in the process of defining the interaction with “external” systems and formalizing the idea of what it means to have a “distributed” registry in OSGi. Having been through what seems like the same process now for the third time (EJB, SCA and now this effort), the overriding issue seems to be that distributed systems “are not like local systems”. One would have thought that this nugget of wisdom would be deeply lodged in the DNA of anyone who’s done anything with distributed systems for more than, say, a bit of example code found on The Server Side, but sometimes we do have to bring what should be obvious facts to the fore and illuminate them with klieg lights so that the current, next and last generations who haven’t figured this out yet finally will.
Amongst the things often brought up in these discussions are asynchronous messaging, marker interfaces, and, of course, the holiest of holies: METADATA. Now, naturally, I do agree with all of this. Asynchronous message patterns are darn important not just in distributed computing, but in “local” computing as well; so important to me are these patterns that I actually wrote a framework called Anubis for doing this, and it provides a great deal of the foundation of the work that I do on a daily basis. Ditto for METADATA and all that jazz…
But what I believe should be obvious to people, and for some reason seems not to be, is that all these things have absolutely nothing to do with OSGi. None. Nada. Zip. Zero. And what’s really odd to me is that when I say things like that, people believe that I’m saying “things like asynchronous messaging, marker interfaces, METADATA, etc., are not important”. Naturally, because of this misunderstanding, I spend 99% of my time pointing out that, yes, I actually do believe these things are of crucial importance and that, yes, these issues need to be resolved and good solutions found for them.
The point that never seems to get across is that I believe these issues are orthogonal to OSGi.
Now, the OSGi EEG is meeting next week for a Face 2 Face in Boston (lovely Boston!) to move the ball forward on the two specs having to do with distributed systems (amongst others) and I’m awaiting the result. I had long and involved arguments with the people involved with these specs and the main result was that I was told that my worries are unfounded and that we are not, in fact, going to create a new distributed system.
But, as you might expect from reading this post, I am highly skeptical. It’s one thing to say you don’t want to do something. It’s quite another to spec out requirements that pretty much spell out a new distributed system.
For just one example, take policy (please!). One of the requirements in the external systems RFP is the whole issue regarding policy: SLA, HA, load balancing, fail over, transactions, etc., etc., etc. Now, one thing that has been made abundantly clear in the 20 years that I’ve been doing distributed computing is that if there’s one requirement for a distributed system, that requirement sits smack dab in the middle of what we commonly refer to as policy. So, if you’re not trying to create a new distributed system, then I have no idea why you would include the requirement that this new OSGi specification for external systems needs to deal with policy. Does distributed systems infrastructure need to deal with policy? Absolutely. Does an OSGi spec about how to integrate distributed systems infrastructure need to deal with policy?
In my humble opinion, no.
So, we’ll see what happens in Boston. In the meantime, I’ve actually put together some code which provides an example of how I believe one should be integrating distributed systems infrastructure into OSGi. The code can be found here in jar format. This example is a Maven 2 project consisting of four modules:
framework – this is the extent of the changes needed in OSGi to support my model. Note that this is actually different from what is envisioned, as the actual end framework will be realized as listeners on the OSGi service registry, to be called when BundleContext.getServiceReference(xxx) is called.
system-1 – this is a toy implementation of a super complex, pan dimensional, metadata driven distributed computing infrastructure which will be used in the example.
test-service – this is the oh-so-very complex distributed service used for testing.
testing – this is the JUnit test for the system, demonstrating the entirety of interaction I feel is necessary to integrate exquisitely complex, pan dimensional distributed computing into OSGi.
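To make the listener model concrete, here is a minimal sketch of the idea: a registry that notifies listeners when a lookup misses, so a distributed-systems provider can lazily import a remote service on demand. All of the names here (ToyRegistry, LookupListener, ToyDistributedProvider) are hypothetical stand-ins for illustration, not the real OSGi API or the code in the jar.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical listener: given a chance to satisfy a lookup before
// the registry answers, e.g. by importing a remote service proxy.
interface LookupListener {
    void serviceRequested(String serviceName, ToyRegistry registry);
}

// A toy service registry. Loosely analogous to lookups via
// BundleContext.getServiceReference(...), but not the OSGi API.
class ToyRegistry {
    private final Map<String, Object> services = new HashMap<>();
    private final List<LookupListener> listeners = new ArrayList<>();

    void addListener(LookupListener l) { listeners.add(l); }

    void register(String name, Object impl) { services.put(name, impl); }

    // On a miss, let listeners supply the service before answering.
    Object getService(String name) {
        if (!services.containsKey(name)) {
            for (LookupListener l : listeners) {
                l.serviceRequested(name, this);
            }
        }
        return services.get(name);
    }
}

// A toy "pan dimensional" provider that imports its service lazily,
// only when some consumer actually asks for it.
class ToyDistributedProvider implements LookupListener {
    public void serviceRequested(String serviceName, ToyRegistry registry) {
        if ("test-service".equals(serviceName)) {
            // In real life this would build a proxy that talks to the
            // remote endpoint; here a string stands in for the proxy.
            registry.register(serviceName, "remote proxy for " + serviceName);
        }
    }
}

class ToyRegistryDemo {
    public static void main(String[] args) {
        ToyRegistry registry = new ToyRegistry();
        registry.addListener(new ToyDistributedProvider());
        // First lookup misses, the provider imports, lookup succeeds.
        System.out.println(registry.getService("test-service"));
    }
}
```

The point of the sketch is that the distributed infrastructure plugs in at the registry boundary and nothing else: no policy, no marker interfaces, no wire-level concerns leak into the registry itself.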
I’ll be fleshing more of the system out, as only the export and import part of the lifecycle is implemented. Since this is the point of contention, I think it’s the place to focus, but I’ll fill in the full example as I get time.
What would be really helpful to me, personally, is if people can download this, run it and then tell me – in excruciating detail – precisely what is missing from this framework which requires a whole lotta work on policy, asynchronous messaging, marker interfaces and exceptions, etc., etc.
I look forward to any comments you might have.