Stanley Ho shows up for another meeting of the JSR 277 Java Modules Specification with a brand new versioning scheme
Dear god, why are we subjected to such hubris?
Stanley Ho, working on the Java Module System (formerly known as JSR 277), has decided that Sun is so smart that they need to invent yet another versioning system. Now, maybe Stanley Ho really is a super smart dude who makes me look like a fifth rate, multiple generation inbred dufus from the white trash mountains of Virginny (many, many people have actually pointed out that I actually am a fifth rate, multiple generation inbred dufus from the white trash mountains of Virginny, but I digress), but it seems self evident to me that versioning is what we in the business know as a HARD PROBLEM™. And the notion that someone even as super intelligent as those people at Sun are known to be will come up with a scheme that doesn’t have any unintended consequences unforeseen by even their Spice™ enhanced brains is simply something that I think you’ll find many in this industry can’t conceive.
There will be bugs in what Stanley has designed. There will be unintended consequences. Only after years of serious use and abuse by many hundreds of thousands of developers, in real world situations that even the Spice™ enhanced mind of Stanley Ho hasn’t dreamed of, will the full extent of his hubris become clear.
One might ask the question: “Why, Stanley, didn’t you use the OSGi versioning scheme?”.
This shit is hard. This path is fraught with peril. In the 21st century, we simply don’t go around inventing shit for the hell of it just because we can. We reuse existing things even though they might not be the correct shade of purple we were dying to have, simply because the corners are a bit worn.
The absolute last thing we need from Sun in the Java Module System is Yet Another Brilliant Sun Shiny Invention that we’ll have to suffer through for years until all the bullshit is extracted from the hubris that went into creating it.
Geebus. Please. Dear God. Please.
Stop the madness. Someone please hold an intervention with these people. The industry simply cannot stand to have this crap thrown at them, especially in something as fundamental and as far reaching as the frickin’ base Java module system.
See Peter Kriens’s rant on the hubris that is Stanley Ho. Sometimes a turd is just a turd.
So I finally have what I consider to be the minimum – bare minimum, mind you – bar of the 3rd Space gaming framework passed. What I have working is a Predator/Prey simulation using the Boid rules for modeling flocking behavior. Boids were invented by Craig Reynolds to model the flocking behavior we see in birds, fish and other herd animals. The rules in Boid are actually butt simple and all local – i.e. simulating the flocking behavior relies only on the emergent behavior of the individuals’ actions, not some “higher authority” guiding the individuals and keeping them in formation.
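The local-only nature of the rules is worth seeing in code. Here’s a minimal sketch of Reynolds’ three rules – cohesion, alignment, separation – where each boid steers using only the neighbor list it is handed, never a global flock; the class name, weights and radii are my own illustrative choices, not from any particular implementation:

```java
import java.util.List;

// Minimal sketch of Reynolds' boid rules. Each boid updates itself from
// its local neighbors only -- there is no global list of positions.
public class Boid {
    double x, y, vx, vy;

    Boid(double x, double y, double vx, double vy) {
        this.x = x; this.y = y; this.vx = vx; this.vy = vy;
    }

    // Steer using only the neighbors this boid can perceive.
    void step(List<Boid> neighbors, double dt) {
        if (!neighbors.isEmpty()) {
            double cx = 0, cy = 0;   // cohesion: toward center of neighbors
            double ax = 0, ay = 0;   // alignment: match average velocity
            double sx = 0, sy = 0;   // separation: away from crowding
            for (Boid b : neighbors) {
                cx += b.x; cy += b.y;
                ax += b.vx; ay += b.vy;
                double dx = x - b.x, dy = y - b.y;
                double d2 = dx * dx + dy * dy + 1e-9;
                if (d2 < 1.0) { sx += dx / d2; sy += dy / d2; }
            }
            int n = neighbors.size();
            // Weights are arbitrary illustration values.
            vx += 0.01 * (cx / n - x) + 0.05 * (ax / n - vx) + 0.1 * sx;
            vy += 0.01 * (cy / n - y) + 0.05 * (ay / n - vy) + 0.1 * sy;
        }
        x += vx * dt;
        y += vy * dt;
    }

    public static void main(String[] args) {
        Boid a = new Boid(0, 0, 1, 0);
        Boid b = new Boid(10, 0, -1, 0);
        // Each boid steers from its own neighbor list only.
        a.step(List.of(b), 1.0);
        b.step(List.of(a), 1.0);
        System.out.println(a.x + "," + a.y + " " + b.x + "," + b.y);
    }
}
```

The interesting question – which is the whole point of the next paragraph – is who supplies that `neighbors` list, because that is exactly the AOI problem.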
In any event, if you look for Java code which implements Boids, what you find in the seemingly universal implementation is an actual global list of the boid positions – i.e. an implicit “global” which really shouldn’t be present. The reason this is so is, of course, that Area Of Interest management (AOI, for short) is a tough thing to do, even if you are simply running the simulation in a single process.
Predator/Prey Boids, using Voronoi based AOI management in an event driven simulation
I really needed a non trivial simulation to use as the driver test case for the 3rd Space gaming framework I’ve been working on. Flocking behavior is, as one expects, quite common in games and is something that the human players do quite frequently as well – they don’t call humans sheep for nothing. As it turns out, managing the area of interest for simulated entities and humans’ avatars is rather difficult precisely because of this tendency to flock. Moving, flocking simulated entities are therefore the de facto test case that one should be using as a model when developing the basics of a large scale, distributed simulation and gaming infrastructure – i.e. if you can’t handle this, then you have no hope of handling anything more complicated.
Same as above, revealing the Voronoi overlay used to maintain AOI.
So, it’s kind of cool. I now have a non trivial simulation which runs under the event driven framework and makes use of the Voronoi based AOI management. Next step is to distribute the simulation using Coherence so I can see how well my theories of how to massively scale this framework will work.
(Part I – Preliminaries, Part II – The Long Kiss Goodnight)
<sigh> Sorry. I tend to talk more about the surrounding atmosphere than the thing itself. In this post I hope to remain focused and actually discuss the actual meat of the architecture and the skeleton upon which it is based. Apologies for not providing the color and fluff of life that actually surrounded the process.

Point 1: Humans are good at making declarative statements and woefully incompetent at micromanagement.
This is one of those points that one shouldn’t have to make, but the simple fact bears repeating: humans are really good at figuring out what should be done, but really shitty at actually – you know – doing what they think should be done. In keeping with the spirit of the theme of this post, I leave it to the reader to reflect upon the profound reality that it is far easier to see the goal than the path to it. The profound insight I have (not singular, as I’m sure you’re aware) is that the micro-managers who actually are competent at orchestrating a complex series of transformations under chaotic conditions – the people a micromanagement-oriented system caters to – are few and far between, so rare as to be non-existent to several orders of approximation (or so expensive, which amounts to the same thing).
The takeaway is simply that any system which depends on a human to do the actual work is simply not going to work – by definition. Seeing as how this is one of my premises, it’s not something I can really argue. It’s a premise derived from years of observation of not just other humans but also myself. Again, I’m not making the claim that there are no humans who are absolutely brilliant at micro-managing large scale distributed systems – let’s be crystal clear on that point. No. My point is simply that these people are incredibly rare and you – the actual person paying the price – will have to pay through the nose if you find such a person. And, quite frankly, the chances of you actually finding such a person are so miniscule as to be almost unmeasurable. Most likely, what you’ll do is find someone who claims to be such a person, or someone whom someone you trust claims to be such a person. And the odds are overwhelming that you are just a complete maroon who has been hoodwinked into paying a lot of money for a cheap imitation of such a being. Get used to it. It’s just a simple fact of reality.
Over the weekend, I worked on getting the Thoth Voronoi based area of interest management integrated into the Prime Mover event driven simulation framework. This integration required me to actually work out how the Thoth perceptrons (i.e. the entities responsible for the AOI management) interact with the simulation entity each perceptron is working for. Not really a big deal by any measure, but it was on my to do list. The actual integration into Prime Mover was simply the addition of the @Entity annotation, which indicates that instances of the class are simulation entities, and voilà, it’s a simulation. I had purposefully designed the Node interface protocol to conform to the event constraints, so no changes were required. However, the simulation did require that there be a disconnect between the nodes themselves and the representation of the neighboring nodes that each node has.
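To give a flavor of how small that integration step is: marking the class is essentially the whole job. This is only a sketch – the annotation is declared locally here so the snippet compiles on its own, whereas the real @Entity obviously lives in Prime Mover’s packages, and the Perceptron body is elided:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in for Prime Mover's @Entity marker, declared locally so this
// sketch is self-contained; the real annotation comes from the framework.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Entity {}

// Marking the class is the integration: the framework treats instances
// of an @Entity class as simulation entities whose method invocations
// become simulation events.
@Entity
public class Perceptron {
    // The perceptron manages area of interest on behalf of its entity.
    public void perceive(double x, double y) {
        // react to a neighbor sighting at (x, y) -- body elided
    }

    public static void main(String[] args) {
        // The framework would discover the marker reflectively.
        System.out.println(Perceptron.class.isAnnotationPresent(Entity.class));
    }
}
```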
Not sure why this is so choppy, as the video seems to move in one second increments. My first time with YouTube, so who knows… Rest assured, the actual animation is smooth as silk…
In the original VAST model, this was done by the P2P network emulation which essentially serialized the nodes and reconstituted them when the P2P messages were received. I really, really hated the whole P2P emulation framework that was there and completely eliminated it in the integration. The message processing was problematic and caused me unnecessary pain and suffering in the actual simulation of the protocol anyway. Seeing as how I’m going to be using Coherence under the covers to make this puppy work, that means I’m going to have a different mechanism for communication anyway – the network model in VAST was way off base for what I had in mind.
When you’re trying to build a massively multiplayer online gaming (MMOG) platform, probably the most important part of the system is scalability. After all, if it doesn’t scale, it’s simply a multiplayer online gaming platform – without the “massive”. While it almost seems embarrassing to point this out, it’s extremely interesting to note that there has recently been a lot of discussion about the scalability of online systems – in particular, the Web 2.0 applications. I won’t point to these discussions, but suffice it to say that I find it terribly amusing to hear the various forms of the argument that you can worry about scalability later – i.e. that it’s not something that has to be designed in from the start. (Arguments of the form “don’t worry about scalability because no one is going to use your application anyway” are perfectly fine, however.) As the history of MMOG has shown, the application’s architecture has a huge impact on the ability to scale. As many gaming platforms have discovered, scalability isn’t something you can simply “add on” after you “get things right”. Anyone who thinks that this doesn’t apply to other network application architectures amuses me to no end, given that if they actually produce something of value, it will fall over when it hits the natural scalability limit of their crappy architecture.
In any event, there are a couple of basic problems with MMOG that limit scalability. The first has to do with what is known as “Area Of Interest”. The idea here is familiar enough to anyone who has done any distributed communication: the gaming platform doesn’t want to find itself in an N² connection topology. In MMOG, the entities (gamer avatars, NPCs, etc.) have to communicate with other entities in the game. If you can’t find a way to limit the communication to the entities in the area of interest – i.e. the other entities that the entity in question is limited to communicating with – then you have a huge scalability issue due to sending messages to entities that simply don’t care about the communication because it can’t possibly affect them. This not only wastes bandwidth and precious OS network resources but causes a host of other issues having to do with the time ordering of distributed events and filtering out events that aren’t relevant. It’s a mess.
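The arithmetic here is brutal and worth making concrete. The following toy sketch (my own illustration – the grid layout and interest radius are arbitrary, and a real platform would use something like the Voronoi overlay rather than a brute force distance scan) compares naive broadcast, which costs N·(N−1) messages per tick, against delivery limited to an interest radius:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of why AOI filtering matters: broadcast grows as N^2,
// while interest-limited delivery grows with local neighborhood size.
public class AoiDemo {
    public record Pos(double x, double y) {}

    // Naive topology: every entity tells every other entity, every tick.
    public static int broadcastMessages(int n) {
        return n * (n - 1);
    }

    // Interest-limited topology: deliver only within the AOI radius.
    // (Brute force O(N^2) scan -- a real system uses a spatial overlay.)
    public static int aoiMessages(List<Pos> entities, double radius) {
        int messages = 0;
        double r2 = radius * radius;
        for (Pos a : entities)
            for (Pos b : entities)
                if (a != b) {
                    double dx = a.x - b.x, dy = a.y - b.y;
                    if (dx * dx + dy * dy <= r2) messages++;
                }
        return messages;
    }

    public static void main(String[] args) {
        // 100 entities on a 10x10 grid, spaced 10 units apart.
        List<Pos> grid = new ArrayList<>();
        for (int i = 0; i < 10; i++)
            for (int j = 0; j < 10; j++)
                grid.add(new Pos(i * 10.0, j * 10.0));
        System.out.println("broadcast: " + broadcastMessages(grid.size()));
        System.out.println("aoi:       " + aoiMessages(grid, 15.0));
    }
}
```

With a mere 100 entities, broadcast is already 9,900 messages per tick versus 684 for the interest-limited case – and the gap only widens as N grows, which is the whole point.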