Just found the Java Native Access library for dynamic access to native libraries without JNI. Sweet. Previously I had been using a JNI wrapper that provided similar functionality (but cost money and had licensing encrustations). Glad to see this available, as it makes it far easier for me to start integrating with C-based physics engines (ODE) and 3D gaming engines (OGRE) in the 3rd Space project.
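For the curious, the basic JNA pattern is just a Java interface that mirrors the native functions you want – no generated headers, no compiled glue. Here's a minimal sketch calling into the C standard library (purely illustrative: it assumes the JNA jar on the classpath, and the library name "c" is the Unix libc – on Windows you'd load "msvcrt" instead):

```java
import com.sun.jna.Library;
import com.sun.jna.Native;

public class JnaSketch {
    // An interface mirroring the native functions we want to call.
    public interface CLibrary extends Library {
        // Load the platform C library ("c" on Unix; "msvcrt" on Windows).
        CLibrary INSTANCE = Native.load("c", CLibrary.class);

        int atoi(String s); // maps to: int atoi(const char *nptr)
    }

    public static void main(String[] args) {
        System.out.println(CLibrary.INSTANCE.atoi("42"));
    }
}
```

That's the whole trick: JNA generates the stub at runtime from the interface, which is exactly why there's no separate compile step to license.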
Bryan Atsatt, who is House Harkonnen’s representative on the JSR 277 expert group, has been doing a lot of work trying to bring the two warring parties together – not simply to salvage the relationship, but to turn the momentum around and spark a newfound friendship between the two.
His first post on his new blog is dedicated to this premise: JSR 277 Could Be Great for OSGi.
The initial spec actually has two separate parts: an api/framework, and an implementation with a new distribution format. Unfortunately, these are presented in a way that seriously blurs the distinction. Worse, the new distribution format (“.jam” files) often takes center stage.
The emPHAsis is on the wrong syLLAble.
The api/framework layer must be the primary focus of the JSR 277 specification, providing a coherent compile/runtime model that enables multiple implementations. Specific implementations, while required to surface framework/api issues, should be documented in appendices or even separate specs.
If the EDR spec had been written from this perspective, we probably would have avoided most of the current animosity. We can and should fix this in the next draft release. Implementations can then be seen on equal footing. More importantly, they can be left to compete on their own merits.
Not everyone wants or needs OSGi, and the new .jam implementation may be right for them.
But we know that there will be LOTS of bundles around when SE 7 ships, vastly outnumbering .jam files, so we need an OSGi implementation. ASAP. Built by OSGi insiders. Without it, we cannot have confidence that the api/framework abstractions are right or complete. With it, we not only gain spec validation but have a ready-made solution for using bundles on SE 7.
Give it a read.
So Bob Pasker has a typically thoughtful response to my post regarding the changes I’ve made to the infrastructure of the Prime Mover simulation framework. The regaling of my adventures reminds him of his conclusion that Java is a lousy language for building infrastructure.
It’s something that I never really thought about, personally. It’s not that I haven’t cursed Java for one reason or another on a continual basis since I started programming in it. But that’s always the way it is when you’re pushing the limits of what a language is capable of and what a system is designed to do. You love to hate it, and the very fact that it fights against you as you bend, fold, and mutilate it in the process of violating the warranty is just our way of saying that we really love it – really, we do. Sure, it’s not a healthy relationship, but then again, specializing in these kinds of things has never really been healthy.
Now, I must say, in defense of the proposition that Java is a fantastic language for building infrastructure, that the “pain and suffering” I experienced (not really, but I know it looks that way from the outside) in the refactoring of the guts of the Prime Mover infrastructure really didn’t have anything to do with the shortcomings of Java as an infrastructure programming language per se. Indeed, the primary library that I’ve made use of in this project, the ASM byte code engineering framework, has proven to be a superlative tool for doing precisely the kind of stuff that Bob reminisces about WRT the days of the wild, wild west – back when we had to wrangle byte codes with our bare hands and force them to our will using nothing more than our teeth and a screwdriver.
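To give a flavor of how civilized ASM makes this kind of work, here's a minimal sketch – not Prime Mover code, just the visitor pattern ASM is built around – that reads a class off the classpath and lists its methods. The real rewriting game is played by returning your own MethodVisitor instead of null (this assumes the ASM jar on the classpath):

```java
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class MethodLister {
    // Return the name + descriptor of every method declared by the named class.
    public static List<String> methodsOf(String className) throws IOException {
        List<String> methods = new ArrayList<>();
        new ClassReader(className).accept(new ClassVisitor(Opcodes.ASM9) {
            @Override
            public MethodVisitor visitMethod(int access, String name,
                    String descriptor, String signature, String[] exceptions) {
                methods.add(name + descriptor);
                return null; // we only list; a rewriter would return a MethodVisitor here
            }
        }, 0);
        return methods;
    }

    public static void main(String[] args) throws IOException {
        MethodLister.methodsOf("java.lang.Runnable").forEach(System.out::println);
    }
}
```

Compare that with hand-patching constant pool indices and you see why I'm not complaining about the tooling.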
Rather, all the issues I documented wrt Prime Mover have been problems of my own making – in the end, I’m always the worst enemy I have, far outstripping any puny terror and punishment that Guy Steele or the hordes of Orc programmers over at JavaSoft can dish out.
Well, that was fun. I just spent a decent chunk of my weekend swapping out the fundamental mechanism in the Prime Mover event driven simulation framework. Between taking care of the little one and the massive allergy attack I was suffering (thanks, Mom Nature!), I think I should get an award or something. Wait! Here’s a gold star I can put on my laptop case.
In any event, the changes to the underlying framework are actually quite cool. What had happened was that Prime Mover was finally getting used in anger – well, gently played with, really – and this exposed some serious shortcomings in the data flow analysis I was using to perform the byte code rewriting that makes the magic happen under the hood. Over a few beers at Bucks on Friday afternoon, we mulled over a few different strategies for fixing the issue – none of which were particularly appetizing.
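For readers who haven't played with event-driven simulation, the heart of such a framework is just a clock and a priority queue of pending events – the byte code rewriting in Prime Mover exists to hide exactly this machinery behind ordinary-looking method calls. A toy sketch of that core (purely illustrative; none of this is Prime Mover's actual API):

```java
import java.util.PriorityQueue;

// Toy discrete-event scheduler: events fire in simulated-time order,
// regardless of the order in which they were scheduled.
public class ToySim {
    private record Event(double time, Runnable action) {}

    private final PriorityQueue<Event> queue =
            new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));
    private double now;

    public double now() { return now; }

    public void schedule(double at, Runnable action) {
        queue.add(new Event(at, action));
    }

    // Drain the queue, advancing the simulated clock as each event fires.
    public void run() {
        Event e;
        while ((e = queue.poll()) != null) {
            now = e.time();
            e.action().run();
        }
    }

    public static void main(String[] args) {
        ToySim sim = new ToySim();
        sim.schedule(2.0, () -> System.out.println("t=" + sim.now() + ": second"));
        sim.schedule(1.0, () -> System.out.println("t=" + sim.now() + ": first"));
        sim.run(); // fires the t=1.0 event before the t=2.0 event
    }
}
```

The interesting (and fragile) part of a real framework is making event scheduling look like plain method invocation – which is where the data flow analysis I mentioned comes in.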
So, let me step back a bit and lay out what happened to the framework and why.
(Part I – Preliminaries
Part III – The First Cut is the Deepest)
From my perspective, one of the major pitfalls in any project that sets out to produce a management infrastructure is that the project almost immediately starts focusing on the API layer rather than defining the large-scale system behavior. In many ways, this is completely understandable, given that the API has the most immediate impact on the first users of the system – i.e. those hapless fools who form the brigade of developers having to integrate their systems into your management infrastructure. In most large organizations, APIs become the mechanism that groups use to mediate their interaction – not just at the Java level, but in a visceral sense that governs the actual political interaction between the groups. Partly this is because APIs are something concrete and form a nucleus around which people can argue concretely. But mostly it’s because most people are rather ignorant of how systems actually interact; the one thing they do know is that there are APIs, and consequently these concrete, universally understood handles become the battleground upon which system integration takes place. Or, put another way, APIs are the lowest common denominator that even managers can understand, and consequently they become the sole focus of pretty much every large-scale project.
But the problem with this focus is that an API doesn’t define a system; rather, it’s the other way around. The way I think about it is that the APIs of a system are like the inner core of a sphere. Defining the surface of the system – i.e. what the system “looks” like – provides enormous leverage on the internals of that system. And this leverage will simply force things into place – meaning that the API exists because it is literally the inevitable result of the forces that hold the system together.
But I digress.
Update: Part II – the long kiss goodnight, Part III – The First Cut is the Deepest
This is the first in a series of posts documenting the research I’ve been doing into a different way of thinking about system management infrastructure. For quite some time, I’ve been obsessed with the idea of how to simply and effectively manage large scale systems. Throughout this obsession, I’ve travelled down various roads and found myself in several box canyons along the way. I’ve tried out a lot of different strategies and have finally settled into something that provides the kind of framework I’ve been looking for – one I haven’t found replicated anywhere else.
Note that I’m certainly not making the claim that it is “Teh Best” management infrastructure. Rather, the claim I’m making is that it’s the most interesting management architecture to me. As anyone who knows me can testify, I have rather peculiar tastes and am a strange bird at times. So fair warning, eh?
In any event, what I plan to do is provide a fairly deep dive into the architecture that I’ve come up with. In the standard tradition of all scientific and technical literature, it will be presented in precisely the opposite order from the one in which I actually came up with things – i.e. from the top down, in a semi-coherent form that makes sense. Lord knows that actual discoveries and explorations are more a matter of luck, in which you stumble upon something and then spend an inordinate amount of time tracking down why the heck you managed to stumble upon it and where it fits in the larger picture of things you’re trying to map out. I’ve always found this cognitive dissonance amusing, myself, and hope you won’t mind too much when I veer off into seemingly irrelevant paths rather than sticking to the point at hand.
If you’re one of those people who can’t wait until the end of the story to find out what’s going on, by all means download the PDF of my talk on the subject at last year’s Spring Experience entitled Digging the Trenches on the Ninth Level. If you’re not familiar with Dante’s Divine Comedy, then you won’t get the joke. But suffice it to say that I’m a big believer in the principle that every time you solve a problem, you discover ten more problems that you didn’t know you had.
A perfect example of this sometimes perverse law is something as simple as email. Email solved a lot of problems that a modern economy and society have, but in doing so it created a lot more. Without email, we would never have been subjected to the sublime beauty of penile extension spam, nor would your grandmother be subjected to the horror of ID phishing – the kind you discover has snagged her bank account and drained all her life’s savings, leaving you with a predicament that makes you wonder what all this progress was supposed to accomplish in the first place.
Likewise, I firmly believe that in solving the problems addressed by OSGi, Spring DM, and management architectures like mine, we’ve inadvertently unleashed new levels of horror – ensuring that future generations will curse our names as they suffer the fallout, live with the unspeakable abominations unleashed by these “solutions,” and witness them unfold in ways we couldn’t possibly imagine.
So, with that cheery panorama as the backdrop, I’ll end this introductory post and start working on the next one, which will provide a high-level overview and ten-dollar tour of the sewers I’ve been digging for your benefit on the ninth level of hell.
Remember. I dig because I care. After all, you do want that frozen crap to be routed somewhere and dealt with, don’t you?
My talk at last year’s Spring Experience on the next generation of application server architecture is available here.
The talk is about OSGi and how the next generation of application server platforms will simply do away with the cumbersome and rather dated component models that we all know and hate in favor of the vastly superior OSGi platform. Or that’s the theory at least. Only time will tell if I’m correct or just another mad hatter sniffing too much mercury outgassing from the various toys littering his office.
In addition, I also lay out the management architecture I’ve been experimenting with for the past year. Obviously, it uses OSGi as its base, but OSGi – by itself – isn’t sufficient to provide the kind of management infrastructure you need to manage large numbers of processes. I call this management architecture – for lack of a better name – Event Driven Autonomic Management. I’ll be kicking off a series of posts going into far more detail on this architecture as a means of documenting the research I’ve been doing.
Think of it as therapy, as talking about it on this blog – posting, so to speak, to the wind about concepts and issues that no one else seems to find terribly interesting or useful. You can tell I’m a great hit at parties, can’t you?
This, imho, is pretty much the portrait of a corporate “solution” to the “problem” of P2P:
Comcast and BitTorrent: Why You Can’t Negotiate with a Protocol
The problem is that you can’t negotiate with a protocol, for the same reason that you can’t negotiate with (say) the English language. You can use the language to negotiate with someone, but you can’t have a negotiation where the other party is the language. You can negotiate with the Queen of England, or English Department at Princeton, or the people who publish the most popular dictionary. But the language itself just isn’t the kind of entity that can make an agreement or have an intention.
This property of protocols — that you can’t get a meeting with them, convince them to change their behavior, or make a deal with them — seems especially challenging to some Washington policymakers. If, as they do, you live in a world driven by meetings and deal-making, a world where problem-solving means convincing someone to change something, then it’s natural to think that every protocol, and every piece of technology, must be owned and managed by some entity.
I think we’re at the threshold of one of those eras where the old guard simply doesn’t understand the rules any more and is reflexively dealing with the new reality using all the power of its impotent tools. Sure, a frustrated Lizard – a HUGE frustrated lizard, mind you – is pretty terrifying to be on the wrong end of. However, the poor system simply can’t comprehend what’s happening to it. All it can do is lash out, find someone to take to lunch and negotiate a contract with.
So the other foot drops in the Microsoft “Open Source” debacle. When I was at EclipseCon, we had Sam Ramji of Microsoft regale us with how MS wasn’t just going to play in Open Source like everyone else – they had, in fact, invented Open Source. It was a talk so boring that I left halfway through it.
In any event, today we find out how open the source is.
Royalties are the admission price, Microsoft tells freetards
Microsoft wants to license Windows patents to open source companies in the same way it’s licensed patents to companies like Motorola in the past. “Because cathedrals can do agreements with each other it’s possible to sit down with the companies we have and say: ‘Let’s see what we can work out that works for you and our business’.”
Smith was borrowing the phrase “cathedrals” from Eric Raymond’s book The Cathedral and the Bazaar, which talks about the open source, or bazaar, method of development versus the traditional vendor approach. “We’d be prepared to sit down with any entity that can deal with the issue in real terms,” Smith said.
It’s a vital emphasis, and one that could harm technology and business innovation in open source. Many open source products and businesses today have begun life with individuals or groups of individuals working on projects, unencumbered by worries about the ability of their project to support royalty payments to a patent owner down the line.
“We are much better connected with the open source community today, we love open source software running on Windows and we are working to interoperate with it,” Smith said. “But I can’t give you an answer saying: ‘Here’s the blank check,’” he told OSBC.
Having made Microsoft’s position clear, Smith called for a willingness in the open source community to compromise in negotiations and solve problems. In translation, that appeared to mean: stop requesting publication of all Microsoft patents under a royalty-free license. According to Smith, a solution can be reached to “normalize the IP relations” and “reach almost all spectrums.”
Despite living in the online digital age where PDF rules, and despite loving my Kindle, I find that pretty much nothing beats a book – still. And although my vast library remains largely unread, I can’t help but buy more. Myself, I find the number of books unread and thus undiscovered in my library a more pleasant metric than the number I’ve already read. Sure, I treasure the books I’ve already tasted, but the ones I have yet to discover are the ones that give me pleasure…
Anyways, I got two very good books – as yet unread, naturally – from Amazon today, and I can’t wait to dive into and start applying both of them.
The prize of the two is Foundations of Multidimensional and Metric Data Structures. It’s a 1000-page, oversized book on just what the title proclaims: a comprehensive work on multidimensional and metric data structures. Amazing stuff and quite dense – both literally and figuratively. The paper is that fantastic, acid-free glossy textbook paper that could be used as a bulletproof fabric substitute in a pinch.
Anyways, it’s a 2006 copyright and it’s absolutely stunning in its depth. As the foreword reads, this book is encyclopedic, organizing a bewildering array of spatial and multidimensional indexing methods into a coherent field. Very cool stuff, to say the least.
The second book promises to be a very good read as well: 3D Game Engine Architecture – Engineering Real-Time Applications with Wild Magic. Although the book is about a specific engine, it’s really – as one reviewer put it – the longest and probably best architecture documentation ever done. I was looking for a book on the subject that really went into the architecture of such a system, not a mere retelling of how great the code was with sparse commentary around it. From what I can tell so far, the book seems to live up to this promise.