All we need to do is take these lies and make them true (somehow)

Having worked in OSGi for quite a while, I find the most frequently asked question about the technology is “what’s the value proposition?”  Being a rapacious capitalist at heart, I think this is an eminently fair query from anyone looking at this OSGi stuff, scratching their head, and wondering why the heck they would even want to consider using this technology in their systems.  There’s a non-zero and usually non-trivial cost associated with changing technology – a cost which usually grows in direct proportion to how “low” the technology sits in whatever stack you are deploying.  OSGi is pretty low on that stack, so it has the potential to be very disruptive and hence very costly to an organization which adopts it.  Surely the benefit of making the switch should be at least proportional to the costs, and a prudent business would like to understand what they’re going to get for their trouble and, more importantly, their hard-earned money.

To answer this question, I first ask people to think about their current environment.  Assuming you’re not a startup – a domain which I’m not considering in this post – then you are undoubtedly dealing with a mature system which now resembles, or quickly will resemble, Frankenstein’s monster more than anything else.  If your system is successful to any degree – and if it isn’t, then we aren’t really having this conversation – what you find is that it’s a victim of its own success.  Grizzled veterans remember the “good old days” when builds would take less than an hour and everyone could sit in a room and share a common understanding of what the hell was going on in this money maker of yours.

Sadly, one day you look up and find that no one knows what the hell is going on anymore.  Build times – perhaps one of the more visceral measurements of complexity we have – have jumped dramatically.  These days, people fire off builds and then go on lunch breaks.  Worse, your projections are that in just a short time, the nightly “integration” builds you kick off will still be running well after your developers have shown up for work.  It’s at this point that you panic and decide that dramatic action is required.  Something MUST be done.  Well, as long as that something doesn’t require any change to what you’re currently doing – that is, you start searching for a silver bullet which will slay this beast of chaos that you’ve collectively created and return your life to the way things used to be.  Before “IT” happened.


Now, this scenario is something that I’m reasonably confident everyone can relate to.  It’s the classic story of the destination to which several turns of the software development cycle inevitably lead.  So the question is, how do we deal with this cycle in a rational fashion and break its grip upon our systems?

[Figure: the-problem.png – complexity over time, OSGi vs. “traditional” development]

Naturally, I think that OSGi has something to do with the solution to this problem.  It’s not the only thing that OSGi offers, but it’s one of the aspects of the system that provides the most understandable benefit, one that is easy to explain using the problems that we all can relate to.

In the figure, I have a simple graph with two lines.  The Y axis represents “complexity” and the X axis is time.  The astute observer will notice that one line is linear and the other is non-linear.  My contention is that the linear line represents the behavior of systems developed using OSGi, strictly from a modularity perspective.  The non-linear line represents systems developed using what I’ll label “traditional” technology.

One of the interesting things to note is that the initial “cost” in complexity is actually higher at the beginning of the system’s lifetime for the OSGi-based system.  The reason is that modularity does have a certain fixed cost which cannot simply be erased by waving hands.  Understanding the basics of any modular system requires some up-front investment in time, training and build infrastructure.  Some thought needs to be put into the way things are done.  Developers need to be familiar with the technology and understand the processes that are in place to maintain the system.  This is what I call the “cognitive burden” of OSGi.

In contrast, you’ll note that the so-called “cognitive burden” of traditional, non-OSGi technology is rather low, and continues to be lower than what is required of the developer who works with OSGi for a good portion of the system’s lifespan.  What this means, in effect, is that it’s pretty easy to get started with the traditional mechanisms, but it takes a bit of work when you want to use OSGi.

However, what ultimately happens to any successful system is that the complexity starts to go through a non-linear transition.  As I mentioned above, build times start to skyrocket.  Tests take forever, dogs and cats start living together.  Total anarchy appears on the horizon and threatens to drink your milkshake – in more ways than one.

Basically, what happens is easily explained by the geometric nature of connections.  For small numbers, things look pretty good.  But as more subsystems and people become involved, these connections grow geometrically, and that starts to suck pretty hard when you’re well into the knee of the curve.  It’s at this point that you start looking around for something – anything – to get a handle on the situation and return things to the friendly, cuddly system you used to know.
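To put rough numbers on that (a back-of-the-envelope sketch, not data from any real system): with n subsystems, the potential pairwise connections number n(n-1)/2, which is quadratic in n.

    // Back-of-the-envelope: potential pairwise connections among n subsystems.
    public class ConnectionGrowth {
        public static void main(String[] args) {
            for (int n : new int[] {5, 10, 25, 50, 100}) {
                long connections = (long) n * (n - 1) / 2;  // n choose 2
                System.out.printf("%4d subsystems -> %6d potential connections%n", n, connections);
            }
        }
    }

Ten subsystems give 45 potential connections; a hundred give 4,950.  That jump is the knee of the curve.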

The value proposition for OSGi is that the time spent beyond the hard knee of your curve is where you get your major benefit as a business.  What invariably happens when your system reaches this knee and starts punching you in the gut is that organizations begin implementing ever more oppressive bureaucracies to manage the geometric explosion.  Committees, meetings, high-level powwows – human systems start being brought to bear on the exploding complexity of a system that used to be so well behaved.

So, the value proposition of OSGi is that it provides the mechanisms which can control this complexity.  The simplest of these is the module metadata system, which defines the dependencies between cooperating components.  And whether you agree that OSGi provides the answer, or believe some other module system provides it instead, I would claim that whatever the answer is, it has to be at least as good as OSGi.
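For the curious, here’s a minimal sketch of what that metadata looks like in practice: a bundle’s MANIFEST.MF declaring what it exports and what it depends on.  The bundle and package names here are purely illustrative, not from any real system.

    Bundle-ManifestVersion: 2
    Bundle-SymbolicName: com.example.billing
    Bundle-Version: 1.0.0
    Export-Package: com.example.billing.api;version="1.0.0"
    Import-Package: com.example.accounts.api;version="[1.0,2.0)"

The framework enforces these declarations: the bundle either resolves against a compatible exporter of com.example.accounts.api or it doesn’t resolve at all, rather than failing mysteriously at some random point down the road.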

The ability of OSGi to handle complex systems as a set of interdependent modules is kind of like toilet paper: sooner or later, you’re going to want to use it.  Naturally, as software developers, we simply cannot reuse any existing system such as OSGi, and consequently there will be a lot of stupendously successful efforts which essentially recapitulate much of what OSGi already provides today.  Rather than seeing this as a fundamental problem with OSGi, I see it as merely one of the best forms of flattery – my mom always told me that Imitation Is The Best Form Of Flattery, after all…

But whether or not you think there is something – perhaps proprietary, perhaps a secret plan produced by your favorite vendor – that will supplant OSGi, the fact remains that what it represents, both at the simple module level and the far more useful service level (a topic of exploration for another post), will eventually become a common, accepted fact of software development.  Something that all the XP and Agile types will claim to have known for centuries.  Why?  Because it solves the hard problem of geometrically escalating complexity.  And solves it well.

Yea, it’s going to be something more that “gets in your way” at the beginning.  But only people who develop demos – programming in isolation and then throwing the system away after the keynote – or those who develop exclusively in “startup mode” will be primarily concerned with how fast they can crank out “teh quick and dirty”.  In the real world, systems accrete complexity over time.  In the real world, there are multiple existing systems that a “new” piece of software has to integrate with to work.  In the real world, new software has massive dependencies on pre-existing systems – systems that, when you inquire about them, you find that anyone who knows about them has apparently “died”.

Unfortunately, our industry pretty much seems to ignore stuff like this.  No one likes having to do extra work, and sadly the people who direct most of the infrastructure development simply don’t build a lot of applications that need to be maintained by a moderately skilled workforce using the infrastructure technology they’re developing.

Demo systems are invariably thrown away rather than carefully nurtured like the cash cows that are the reason you’re developing for whatever god-forsaken system you’re developing/managing as part of your daily job.  Sadly, much of the razzle-dazzle is focused on satisfying what amounts to a sugar rush: a quick fix that satisfies, whose consequences are simply forgotten once the targeted purpose of the demo is completed.  The result is that all the focus is on the development of what amounts to green-field systems, an experience that has little to do with the day-to-day life of mining a large, successful application that has scores, if not hundreds, of developers toiling away tirelessly at its impenetrable hide.  And I have yet to even mention the vast fields of integration testers and system management entities (they aren’t human, you know) required to actually turn those bits into dollars.

So what is the value proposition of OSGi?  In a nutshell, it’s a technology that pays off in the medium to long term, for systems that are successful and have more than a handful of developers.  That’s a hard sell for the attention deficit set, but something that is all too familiar to those who are tearing their hair out, wondering why their gentle purring kitten has become a mutant monster three stories tall, threatening to destroy their cash cow and expected retirement plan.

We, as vendors, have the job of reducing this “cost” of OSGi.  Part of this is through tooling, such as IDE integration.  Part of this is through education and evangelism – if you have been exposed to the basics of OSGi, it’s not that scary when you actually try to use it, after all.  Part of it is the development of frameworks which lower the “cost of entry” into OSGi, such as the Blueprint specification.  However, for the foreseeable future, there will always be a non-zero cost associated with OSGi – just as there is with more traditional technologies.  With time, this cost will become institutionalized, part of the expected curriculum and best practice within the industry.
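As a taste of what Blueprint buys you – a minimal sketch, with hypothetical class and interface names – a plain bean gets published as a service declaratively, with no framework API cluttering the Java code itself:

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
        <!-- Instantiate a POJO and publish it as an OSGi service. -->
        <bean id="greeter" class="com.example.internal.GreeterImpl"/>
        <service ref="greeter" interface="com.example.api.Greeter"/>
    </blueprint>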

But for now, I have to write long-winded blog posts on the issue, hoping that some shred of lucidity comes across in the ramblings…

9 thoughts on “All we need to do is take these lies and make them true (somehow)”

  1. Excellent post Hal, and you didn’t even once refer to Spice or Navigators’ Guilds.
    What you’re essentially saying, for the algorithmically inclined, is that modularity is O(N) with a high constant factor versus O(N^2) or even O(e^N) for non-modular approaches. I think the challenges we face in the OSGi community are as follows:
    1) Prove the above assertion. I agree with everything you say, but the people who control IT budgets have heard this kind of argument before and they’re not going to fall for it again without cold, hard facts.
    2) Reduce the constant factor as much as possible, and/or demonstrate that it is already as low as it can be without giving up the O(N) performance. In particular, demonstrate that other modularity solutions that claim to have lower constant factors are either incorrectly making this claim or are not O(N).
    3) Define in terms of project size the point where the two lines cross. Projects that will never get bigger than that do not benefit from OSGi, and indeed OSGi would be considered a failure if used, so should not be recommended.

  2. Disclaimer – I think OSGi rocks.
    BUT – in the absence of even rough graph scale, the above argument is a little scary.
    Suppose I am an “enterprise” looking to “buy” OSGi. Then I look at this argument, and it says “it will cost me, and by an unknown amount”. That’s hard for me to procure ass cover for buying, let alone budget.
    Suppose instead that I am an ‘independent software vendor’ looking to ‘get ahead of the curve’. Perhaps I think OSGi will take off like “X did back in the day” (solve for your favourite X). Then I look at this argument and it says “you will achieve ROI, but in an uncertain time frame”. That’s hard for me to justify if I am making (say) semi-annual investment decisions.
    That leaves OSGi as immediately attractive to (a) startups, aka “the economically insane”; and (b) vendors who are not ISVs – which means software vendors who are large enough to not be ‘independent’. Let’s call those big dogs the “dependent” software vendors – since that accurately captures their most common licensing model. Perhaps OSGi is also attractive to them – as a way of migrating to a ‘component based platform’ or ‘service provider’ model.
    For those vendors – the ‘houses’ – I would see OSGi as a waiting game. Time goes by, and presumably the market evolves. Eventually, the gap between your two curves can be quantified. Then, the enterprises and ISVs can really buy in – perhaps even ‘mass adopt’.
    Do you agree, and if so, when might the curves get closer together?
    Cheers
    alexis

  3. Well, you did read the title of this post, didn’t you? :)
    Seriously, though, I agree with you. But a couple of points. First, I think you’re pretty much holding OSGi to a standard of proof which isn’t being applied to anything else. I would simply hold out some popular methodologies such as XP, Agile, etc., which are blissfully free of any actual proof or empirical data. Or how about POJOs? So if you’re going to chain a couple of submarines to your ankles, don’t complain that you’re falling behind.
    Lack of cold, hard facts has never really stopped IT spending, in my experience. Rather, it’s the anecdotal evidence from successful projects of their competitors or peers which usually drives adoption (see Spring, etc).
    Second, there’s a chicken and the egg problem here. It’s terribly hard to get data when you can’t convince anyone to use things. So while I for one would love to have actual experimental evidence that I’m not simply talking out my ass, it’s quite clear that someone has to start doing something before we can say anything.
    Third, the number of relationships grows as (N^3 – N) / 6 – it’s a tetrahedral number, not really N^2. Just nitpicking.
    Fourth, I actually plan to start laying out my case for why OSGi grows linearly in a series of blog posts, but they will be largely based on Platonic reasoning rather than reduction of data – argument from principles with examples, rather than summary of studies and use cases.
    Again, it’s the best I can do until we have a lot more data and I get my interns who will do the hard work for me.

  4. I guess my first comment would be “never confuse selling with installing”. My second would be that the dance of seven veils doesn’t start by dropping all seven.
    This post isn’t really targeted at those who are at the beginning of the curve. That’s startup land and green field applications. This post is targeted at those who are already past the knee of their project and are staring straight into the jaws of hell. For these unfortunates, none of your points matter. They’re well beyond any “cost” of OSGi vs. doing things without OSGi.
    WRT your specifics, certainly I think that it’s important to quantify the gap and such, but what’s more important is to shift the arguments. As Wilde once said, the only thing worse than being talked about is not being talked about. And so arguing about the actual quantitative measurements is a move in the right direction. Having reasoned and perhaps impassioned but ill-informed arguments about what OSGi will and will not do is all the better.
    Finally, what you describe is essentially the Gartner hype cycle. Yea, people are waiting around to see what’s going to happen, but there’s more than a few who have already taken the plunge. Certainly all the large vendors have determined that it makes terrific sense to build their application server infrastructure on it. And eventually, through discussions, blog posts and user success/failure stories, the cycle reaches its peak and The Register will declare the technology dead and it will thus become part of the common programming model that everyone takes for granted…

  5. Love the post.
    I actually went through the pain of selling OSGi, and I felt the bumps at the beginning of this long road. But at the moment of writing this, we are starting to see the payoffs: reuse, distributed development, and all the other benefits of a modular design.

