A couple of people have asked about the differences between the implementation I have for a distributed OSGi framework and the framework described by Jan Rellermeyer (see my previous entry on the subject). In a nutshell, the primary difference is that the way services are advertised and discovered is completely hidden in my implementation. The implications that this small difference has on usability and integration with existing OSGi frameworks are, though, rather profound.
At this year’s EclipseCon, I went to a talk by Jan Rellermeyer regarding his R-OSGi system. His stuff is open source, which you can find here. The slides for his talk are available here. It’s extremely interesting, and I would suggest that anyone interested in the distributed OSGi stream both download and play with the R-OSGi system and read the paper and presentation. In particular, I think that his discussion of the alternatives vis-à-vis Jini, UPnP and SLP is quite illuminating.
After I went to Jan’s talk I went back to my cave and started to play around with my own version of the system. I had been playing with SLP (Service Location Protocol) quite a bit in other distributed systems and it was quite obvious after Jan’s talk that SLP and OSGi are a good fit. However, I was rather troubled by the degree that SLP was exposed to the end programmer and started on my own implementation.
What I ended up with is a system which has, I believe, a minimal surface exposed to the end programmers of an OSGi system. From the service interaction perspective of the programmer, the only mechanism which is added to standard OSGi is the way you indicate which services you would like to import into your local OSGi process registry. There is no exposure of SLP or any other system for discovering services. Rather, the system simply makes use of the already existing mechanisms in OSGi for discovering services – i.e. the service registry, filters, ServiceReferences and ServiceTrackers n’ such.
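To make that concrete, here’s a toy sketch of the idea. This is plain Java and emphatically not the real OSGi API (the class and method names are mine): the point is just that a client asks a registry for services matching a filter, and that same call can be satisfied by local services or by services a distribution layer has quietly imported.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Toy model only -- NOT the real OSGi API. A client looks services up
// through the same registry-and-filter mechanism whether the service
// was registered locally or imported from a remote process.
public class ToyRegistry {
    static class Registration {
        final Object service;
        final Map<String, String> props;
        Registration(Object service, Map<String, String> props) {
            this.service = service;
            this.props = props;
        }
    }

    private final List<Registration> registrations = new ArrayList<>();

    public void register(Object service, Map<String, String> props) {
        registrations.add(new Registration(service, props));
    }

    // Stand-in for a getServiceReferences(clazz, filter) style lookup:
    // the filter is a predicate over the service properties.
    public List<Object> lookup(Predicate<Map<String, String>> filter) {
        List<Object> result = new ArrayList<>();
        for (Registration r : registrations) {
            if (filter.test(r.props)) {
                result.add(r.service);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        ToyRegistry registry = new ToyRegistry();
        // One local service, one that a distribution layer imported.
        registry.register("local log service",
                Map.of("objectClass", "LogService", "origin", "local"));
        registry.register("imported log service",
                Map.of("objectClass", "LogService", "origin", "remote"));
        // The client neither knows nor cares which is which.
        List<Object> found = registry.lookup(
                p -> "LogService".equals(p.get("objectClass")));
        System.out.println(found.size() + " matching services"); // prints "2 matching services"
    }
}
```

In real OSGi the lookup would go through ServiceReferences or a ServiceTracker with an LDAP filter string, but the shape of the client code is the same — which is exactly why discovery can be hidden behind it.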
Keynote was again fantastic. Hugh Thompson talking about security and such. Fuzz testing. Negative requirements (e.g. users should only be able to log in with u/p: positive requirement. No one can access the database except by logging in: negative requirement). Chatted with the Paremus guys as well as the gentleman from Knopflerfish. Went to the talk on “OSGi: the good, the bad and the ugly” by the BEA guys. Man, they’ve done a lot of work and I’m pretty jealous. They don’t seem to think that OSGi needs “distribution services” either. Good to see. Also, they hate – hate, I say – the whole evil of 277. Good. May 277 rot on the vine.
It was interesting to hear them talk about their experiences developing with OSGi. It’s quite the common refrain: “I thought I wrote modular code until I started working in OSGi”. And it’s going to become even more common to hear this. Sadly, it’s going to be one of the barriers to entry into OSGi: i.e. it’s never a good business strategy (as I found out in the past) to force developers to understand their code. I’m looking into tools like Lattix, which seems quite promising in helping you understand and figure out the modularity and layering of your architecture. Something very, very important in OSGi – and if OSGi takes off like we all think it will, something that a lot more people will be paying attention to.
Which is a really good thing, when you think about it. I really despair at times for all the talk by people who constantly complain about having to do actual work to make use of something. I mean, we need to keep raising the bar – it’s the only way we’re going to make things better. Dumb people produce dumb code and dumb code is dangerous, insecure and something that should be eliminated. It won’t be eliminated by better IDEs, team development environments, SCRUM or the religion of Agile. It simply comes down to better people producing better code and learning from their mistakes in a virtuous cycle. </rant>
R0ml was here – excellent keynote. Economics and responsibility. Warrants vs. ? Can’t remember. “Of course Eclipse is the worst programming IDE on the market – IT’S FREE”. And anything which is worse than Eclipse isn’t going to be used, much less have someone charge money for it. This is what “free” does – it sets the lower level of expectations. Literate programming. Reminds me of the old days when I worshiped Knuth and his idea of Literate Programming. Only thing was, Knuth’s idea of an IDE was raw TeX – insanity. Anyways, very cool stuff from R0ml. Hard to keep track of and I was too busy listening to take notes. If you ever get a chance to hear him talk, I would suggest dropping everything and doing so. Well worth the dollar.
Ajax panel – what is different about Ajax that makes it impossible to build upon the past? I actually got a small quote in ARN on this (hey, we take what publicity we can)
An audience member emphasized that rather than devising new technologies, it may be a good idea to leverage what has already been done. “They should be standing on the shoulders of giants,” said Hal Hildebrand, an architect in Oracle’s Java platform group, after the session. AJAX is a new interface technology and lots of work already has been done in this area, he said.
Just let me expand a bit on this. X has been around for, what? Over twenty years? What’s so different about AJAX? It’s just a remote windowing system. It’s like a frickin’ VT100, for Bob’s sake. So you do it with XML or JSON. Oooooooohhhhh. I’m so impressed.
Okay. Deep breaths.
OSGi panel – was it good for you? So I was on this panel. We played to a packed – well, moderately occupied – theater and it went over pretty well. It’s pretty clear that OSGi’s star is on the rise and the major players in the market – including House Harkonnen – are clearly buying into it. All in all, it was pretty good for me and hell, OSGi even kissed me in the morning and made me breakfast.
R-OSGi. This talk by Jan Rellermeyer of ETH was worth the entire cost of the conference for me (especially as House Harkonnen is paying for it). Basically, he turned OSGi into a distributed system in a quite elegant way. His talk started out with some background as to what choices are available for modeling OSGi as a distributed system. The first and obvious choice is Jini. Now, Jini is one of those technologies which seems to generate a lot of religious fervor. Sometimes I feel like the people who are Jini fans are kind of like old-style communists who keep on telling people that communism was never really tried – Jini is the next distributed system and always will be.
Anyhow, companies like Paremus have taken the Jini route in creating a distributed OSGi system and have run with it. I have to say, though, I agree with Jan’s take on Jini and why it’s not a great fit for OSGi. But I’ll leave that to Jan. Let’s just say that I think Jini has way too much baggage and is – quite frankly – a poor fit.
Next, Jan talked about Universal Plug n’ Play (UPnP), which is another candidate model he considered for distributing OSGi. Again, the model doesn’t quite fit with the way OSGi services are used, filtered and discovered. Yes, OSGi has a UPnP service definition, and Jan pointed us to Domoware’s UPnP implementation, which he highly recommends.
In the end, Jan decided to base his work around SLP – the Service Location Protocol. And I must say that I smacked my forehead in a D’oh! moment. I had played around with SLP quite a bit when I was fooling around with SmartFrog – it’s quite a sweet protocol. As Jan points out, the mechanism SLP uses for attributes and filtering is the same LDAP query format that OSGi uses for its service filters. So, it’s a 100% match with the way OSGi does things.
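To show what that match looks like, here’s a minimal sketch — my own code, handling only a subset of the RFC 1960 syntax (plain equality terms plus the &, | and ! operators; no wildcards or substring matching) — of evaluating an OSGi/SLP-style LDAP filter against a map of service properties:

```java
import java.util.Map;

// Minimal evaluator for a subset of the RFC 1960 LDAP filter syntax --
// the same syntax OSGi uses for service filters and SLP uses for
// attribute queries. Supports (key=value) terms and the &, |, ! operators.
public class LdapFilter {
    private final String filter;
    private int pos;

    private LdapFilter(String filter) {
        this.filter = filter;
    }

    public static boolean matches(String filter, Map<String, String> attrs) {
        return new LdapFilter(filter).expr(attrs);
    }

    private boolean expr(Map<String, String> attrs) {
        expect('(');
        char op = filter.charAt(pos);
        boolean result;
        if (op == '&' || op == '|') {
            pos++;
            boolean and = (op == '&');
            result = and; // identity element: true for &, false for |
            while (filter.charAt(pos) == '(') {
                boolean sub = expr(attrs);
                result = and ? (result && sub) : (result || sub);
            }
        } else if (op == '!') {
            pos++;
            result = !expr(attrs);
        } else { // simple equality term: (key=value)
            int eq = filter.indexOf('=', pos);
            int close = filter.indexOf(')', eq);
            String key = filter.substring(pos, eq);
            String value = filter.substring(eq + 1, close);
            pos = close;
            result = value.equals(attrs.get(key));
        }
        expect(')');
        return result;
    }

    private void expect(char c) {
        if (filter.charAt(pos) != c)
            throw new IllegalArgumentException("expected '" + c + "' at " + pos);
        pos++;
    }

    public static void main(String[] args) {
        Map<String, String> props =
                Map.of("objectClass", "LogService", "vendor", "acme");
        System.out.println(matches("(&(objectClass=LogService)(vendor=acme))", props));
        System.out.println(matches("(|(vendor=foo)(!(objectClass=HttpService)))", props));
    }
}
```

In a real OSGi process you’d get this for free from the framework’s Filter objects rather than rolling your own — but the shared syntax is exactly why an SLP attribute query can carry an OSGi service filter across the wire untranslated.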
Jan’s framework has the usual cool stuff such as auto-generated proxies using CGLIB and does some pretty neat stuff like shipping virtual bundles across the network to clients. Really, you have to get the code and start playing with it to see how cool his stuff is. Having been playing around with it for a while, I do have some critiques and changes (of course) that I’m making. But that’s the subject for another post.
Enterprise OSGi – Siemens, SCA, distributed registries and such. It was interesting to see their presentation and the problems they’re trying to solve. After the R-OSGi talk, I was pretty well primed and have a zillion potential solutions for them. Going to be much more interesting in the OSGi Enterprise Expert Group.
Jazz – Envy’s electric boogaloo. I must say that the thing that we love to do is create tools. Tools for creating tools which create tools which allow us to create tools for managing the process to collaboratively develop tools for creating tools. Ye gods. They have their own source code control. Yeah, that’ll go over real well at companies.
Scammed a “committer” shirt from the conference. Really, it was all innocent. When I registered, the nice woman who was taking my registration asked if I was a “commuter”. Well, at least, that’s what I thought she said. “Yes,” I helpfully replied – hell, I had just spent an hour driving from my secluded fortress of silence to Santa Clara. “Go to that table after you’re done here and we have a nice shirt for you.”
So, I head over to the table and she takes my badge and consults a list. “Gee,” I think to myself, “amazing technology that they have which knows – in advance – whether I was commuting to EclipseCon would take information sharing between the organizers of the convention and surrounding hotels on a scale that boggles the imagination.” But perhaps the DHS has tendrils that are far more pervasive than even my fevered, paranoid mind could imagine.
“I’m sorry, but you’re not on the list.” What? I’m not on the list? “Of course, I’m on the list”. Confused, the nice lady asked, “So you have been okay’d by the person on the other side?” “Of course.” I replied with confidence. So she wrote down my name on the list – so lord knows someone is going to get a laugh from that one – and she handed me my quite nice button shirt with the “Eclipse Committer” embroidered badge.
Psyche! Here I thought I was going to get a “commuter” shirt and I end up with an open source deputy’s badge. I rule the town of Rock Ridge.
Keynote by Scott Adams: All I can say is that it is pretty surreal to sit in a room with thousands of geeks reading Dilbert cartoons. Granted, it’s funny and he’s got a lot of interesting stories behind the cartoons. I don’t, however, for a moment believe that he was passed over for promotions because of “diversity” requirements. But then, anyone who loves to dig himself deeper with this I.D. crap can’t be expected to do anything less.
Met one of my very old friends, Hendrik, and Dave L. was there as well. Kind of like the old OOPSLAs, as I was promised by Herr Milinkovitch.
Went to a talk by the guy who heads up Mylar Tasks: very cool stuff. I’m now starting to use it in my work and it really is an amazing piece of work. Again, another reason why IDEA is losing out to Eclipse. Really, there’s no reason why the Mylar guys shouldn’t be doing this for IntelliJ – so something has got to be going on. The stuff that the Mylar guys are planning seems quite aggressive and a bit overreaching – everyone wants to rule the world, I suppose – but no doubt will still be worth using.
Google summer of code – ho hum. “what did you learn?” bizarre. Sometimes I think that Google really is nothing but a glorified IT department filled with Dilberts.
RMI in OSGi – seems like a lot of the problem could be solved by the Spring context class loader rather than the wild, wild west of the service registry only/bundle nightmare. Context is everything.
OSGi and the JBoss Microcontainer – love the Russians. Microcontainer is a fine-grained state machine for managing dependencies. Scoped metadata. Aspectized deployment ??? Integration points: metadata. Seems like their metadata model is much slicker than the Spring namespace extension model, but that’s something easily fixed. Controller – dependency state machine – seems like a stunning degree of overkill and over-modeling. Lifecycle transitions modeled with a state machine. Overall, seems like a seriously severe case of the second system effect. Lots of wrapping. Man, this is going to suck. Seems to me that I remember a time when JBoss was telling us that the old JMX microcontainer was the greatest thing since sliced bread. I guess this is even better than that.
Finally, a common sense idea that may finally have an impact on the rampant abuse of patents we’ve seen in the last two decades:
Open Call From the Patent Office
The Patent and Trademark Office is starting a pilot project that will not only post patent applications on the Web and invite comments but also use a community rating system designed to push the most respected comments to the top of the file, for serious consideration by the agency’s examiners. A first for the federal government, the system resembles the one used by Wikipedia, the popular user-created online encyclopedia.
“For the first time in history, it allows the patent-office examiners to open up their cubicles and get access to a whole world of technical experts,” said David J. Kappos, vice president and assistant general counsel at IBM.
It’s quite a switch. For generations, the agency responsible for awarding patents, one of the cornerstones of innovation, has kept its distance from the very technological advances it has made possible. The project, scheduled to begin in the spring, evolved out of a meeting between IBM, the top recipient of U.S. patents for 14 years in a row, and New York Law School Professor Beth Noveck. Noveck called the initiative “revolutionary” and said it will bring about “the first major change to our patent examination system since the 19th century.”
We’ll keep our eye on this. Quite frankly, I think that a system that hasn’t had a major change since the 19th century is going to find things a mite bit interesting here in the 21st, and I predict their foray into the wild, wild west internet will result in some nasty bumps.
Here’s hoping they respond with grace.
So, as I mentioned in previous posts, one of the things I’m currently working with is rule systems, in particular, Jess. Regardless of how lame Equinox may or may not be, one of the advantages it has is that the developer of Jess has gone and done the work involved to integrate Jess into Eclipse. This integration manifests in a syntax-highlighting editor, a debugger and an inspector for the Rete network. Quite cool stuff in its own right.
However, when you’re developing rule systems, the first thing you have to do is start developing an ontology. Unless I’m missing something really basic (known to happen), developing an ontology is pretty close to what architects and designers (even lowly programmers) do in Object Oriented technology all the time. So it’s a whole system that anyone who’s programmed in OO for a number of years will find quite comfortable.
One of the primary differences between an ontology and an OO system in – say – Java is that classes in Java are primarily a means of sharing code. I can’t tell you the number of times I’ve rolled my eyes over programmers and <shudder> architects discussing – in all seriousness – what an object “truly was”. I mean, my god. It’s FRICKIN’ CODE. It’s not a journey into the platonic realm of pure form. Geebus.
With the ontology, things are even more murky. As will be constantly pointed out to you, there is no “correct” ontology for a given domain. In fact, translation between ontologies is one of the big reasons for OWL and other things that use RDF like a dark blood ritual.
In any event, for developing your ontology, there doesn’t seem to be anything better than Protégé. It’s a Java-based application for developing and manipulating ontologies. Quite nice and definitely recommended.
What’s even better is the integration of Jess into Protégé through a plugin called JessTab. With JessTab, you now really have almost the equivalent of a programming IDE for knowledge “engineering”.
This is a paper on the tactics used by denialists such as the American Enterprise Institute, the Cato Institute and other libertarian organizations whose only real purpose is to sow fear, uncertainty and doubt. However, the tactics discussed in this paper are not only applicable in the public policy realm; I’m sure that if you just substitute a few terms and squint a bit, you’ll find that this same pack of cards is in wide use by others in your professional life. After all, people who are seeking an outcome and not dialogue aren’t limited to the public policy realm.
A very good read.
The Denialists’ Deck of Cards: An Illustrated Taxonomy of Rhetoric Used to Frustrate Consumer Protection Efforts
The Denialists’ Deck of Cards is a humorous illustration of how libertarian policy groups use denialism. In this context, denialism is the use of rhetorical techniques and predictable tactics to erect barriers to debate and consideration of any type of reform, regardless of the facts. Giveupblog.com has identified five general tactics used by denialists: conspiracy, selectivity, the fake expert, impossible expectations, and metaphor.
The Denialists’ Deck of Cards builds upon this description by providing specific examples of advocacy techniques. The point of listing denialists’ arguments in this fashion is to show the rhetorical progression of groups that are not seeking a dialogue but rather an outcome. As such, this taxonomy is extremely cynical, but it is a reflection of and reaction to how poor the public policy debates in Washington have become.
The Deck is drawn from my experience as a lawyer working on consumer protection in Washington, DC. Where possible, I have provided specific examples of denialism, but in many cases, these arguments are used only in closed negotiations. Some who read them find the examples humorous, while others find them troubling. But all who read the Washington Post will recognize these tactics; they are ubiquitous and quite effective.
This taxonomy provides a roadmap for consumer advocates to understand the resistance they will face with almost any form of consumer reform. I hope to expand it to include retorts to each argument in the future.