The Future is Composite

Materials science has always fascinated me.  How things actually work, on a mechanical level, when it really, really matters.  I mean, it’s one thing to simply build in the abstract, where you’re working with heuristics.  Big Picture™ stuff where tolerance isn’t really an issue.  Miss by a factor of two and everyone thinks you’re a fucking Einstein.  A Wilbur Wright.  Edison and shit.

But even at a level we can directly experience without instrumentation (all we need is a good sense of touch or the benefit of good eyesight – with or without lenses), the actual honest to “Bob” mechanical structure of whatever reality is there really becomes the sole determinant of what is real NOW, what just HAPPENED and what is INEVITABLE.  The smaller and smaller we go, the less and less freedom we appear to have – the more DETERMINISTIC things become.

Or so we naively thought, of course.  The reality we enjoyed before Bohr kicked Einstein’s ass by bludgeoning him with sub-Newtonian probabilistic non-linearity that even his prodigious intellect couldn’t model.  But such is the mystery of reality – one we need not delve into further in the course of this rant….

As I was saying, how things work at the mechanical level we intuitively understand has understandably fascinated me.  And one of the more fascinating things that I’ve wondered about is the whole field of composites.  A composite sounds like an exotic, Science Fiction™ material, but it’s really not.  Take some wood and shred it.  Pour some epoxy on the fibers and let it cure.  Voilà!  You have a Composite material.

The great thing about composites, from one POV, is that they inevitably have properties that often far surpass the linear combination of the properties of the materials that form the composite.

Back in the day, my operating assumption was that something like a pure titanium crystal would be your ultimate material of choice for extreme material coolness.  For example, they grow – yes, grow – jet engine turbine blades out of single crystals of nickel superalloy.  Stunning.  And no doubt a feat of precision engineering seldom found in nature.

But think about it.  We can build composites from materials – such as wood and some oily shit.  And even if we do things in a slipshod manner, we find that we can invariably produce amazing stuff.  In fact, the very “imperfections” and randomness in the composite material are where the strength arises.

And then there are alloys of different materials, where the different-sized atoms form quasi-periodic imperfections in an otherwise homogeneous lattice – e.g. graphene doped with some random element.  Suddenly very cool and amazing properties spring from such unlikely combinations and seemingly haphazard arrangements of primitive chunks of base reality.  Amazing when you think about it.

Lately, I’ve been pondering the wonders of three dimensional printers – i.e. 3D replicators.  These nifty little machines will essentially “print” a solid object replicating pretty much anything you can model in 3D space.  Low end models like the one I built extrude plastic and consequently don’t produce parts you can use – say – in making a car.

But that limitation isn’t shared by the state of the art in 3D printing – and it hasn’t been for quite a while.  Almost immediately, it seems, 3D printers appeared which use metal powder.  Each layer of the object is produced by laying down a thin layer of powder and using a laser to sinter it to order.  What’s produced is a loose lattice of metallic grains that are kind of “welded” together.  When that’s finished, you dunk the object in some molten metal, which fills the gaps, and voilà – you have a pretty solid chunk of metal that you custom printed in the 3D shape you desired.  Tough.  Industrial.  And locally created from essentially raw materials, rather than in some huge big ass industrial plant with union laborers and drone managers.

Turns out you can also do much the same thing and create even more exotic composites, such as ceramics and glass.  The amazing thing, of course, is that you don’t even need a frickin’ laser to produce these ceramics.  You just fire the objects in a kiln after you’ve glued the matrix together.  Pretty cool.

In any event, what got me thinking about this was the fact that a lot of this science fictiony stuff is possible only because it’s actually a good idea to build materials with diverse components.  It turns out that it’s actually advantageous to have structures built using the fundamental principle of diversity.

Which brings me to the point that I was actually mulling over when I decided to write up this post.  People whine about diversity for diversity’s sake and wonder why on earth liberals and progressives think it’s a good idea – an intrinsically good idea.  Your average independent will scratch his head and wonder what the big fuss is.  Your right of center type will simply know, instinctively, that it’s a bad idea.  Your libertarian nut jobs will wonder what the hell is wrong with the moron who even contemplates such ideas.

But diversity in systems can result in amazing emergent properties that are quite valuable.  Properties that simply aren’t obtainable in monocultures.  We see this in complicated macro-scale systems like biological immune systems, and in inter-species ecosystems where a diversity of genes – as well as of large-scale actors – is seen as essential to the overall health and evolutionary fitness of those ecosystems.

It’s not a hard jump to the conclusion that diversity in the workforce, diversity in our society, diversity in our friends results in emergent properties that are quite valuable to the systems that make use of these human cultural constructs.  Enumerating these properties isn’t exactly easy, for much the same reason that it is hard to explain to someone why adding chromium and nickel to iron produces a material far stronger and more useful than anything you’d predict from the properties of the three elements alone.  It’s these synergistic effects – emergent properties of systems – that provide tremendous value.  But trying to understand how they come about, much less explaining to someone else why it works the way it does… well, that is actually pretty hard to do.

But valuable it is.  And even if the right wing loons and even the more moderate folks who operate purely out of “common” sense can’t understand the reasoning, diversity is still valuable.  Intrinsically.  And not being able to understand why that is, is most likely not a proof of its lack of intrinsic value.

The Colour Out of Space

A while back, our local godless communist public radio station (KQED) held its regularly scheduled talk show in the 9-10 am slot, Forum.  The topic was the appointment of the new “Security Czar” of the intertubes for the Obama administration, and they had a number of “computer security experts” to discuss it.  Besides talking about the politics, the show revolved around the issue of the threats faced by pretty much everyone because we are now fully dependent upon the intertubes and computer technology in general for pretty much everything.

Being who I am, I just had to capitalize on the opportunity to talk to these experts on this public forum and ask them a couple of questions I had on the subject at hand.  Sadly, due to my limitations as a questioner I didn’t quite make my point clear in the questions I was allowed to ask.

(Thanks to the wonders of the intertubes, you can find the MP3 of the episode here and I come in at the 33 minute mark)

However, I don’t think it was simply my lack of skills that was the source of the misunderstanding.  The fundamental issue that the experts were discussing was the security of our networks and the security of our computerized systems in general.  The questions were over how we protect ourselves in this digital age and why in heaven’s name we simply aren’t doing so in the 21st century.


Open Sourcing of 3rd Space

Rather than trying to deal with any licensing issues with those who wanted to use or play with the work I’ve done under the umbrella of the 3rd Space project, I’ve decided to open source the system.  The source is hosted on my SVN repository, which is:

    http://svn.tensegrity.hellblazer.com/3Space

I have bugzilla and Trac enabled, but I haven’t done much of the necessary work to make those usable yet.

Right now, the system isn’t all that impressive from a full featured simulation environment perspective.  I do have a lot of the basics covered and have a good idea of where I’m headed, but there’s still an enormous amount of work to do, obviously.

Still, the system is quite different from any of the other virtual worlds or simulation environments in that the underlying framework is an innovative system which provides an event driven simulation framework using Java as the language you write the simulations in.  Naturally, you can also use any of the scripting languages supported by the JVM (not to mention languages which target the JVM, such as Scala, etc.), so it’s actually pretty interesting from that perspective as well.
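To make the “event driven simulation framework” notion concrete, here’s a minimal sketch of what such a kernel looks like in plain Java – a virtual clock plus a priority queue of timestamped events, drained in time order.  All the names here are hypothetical illustrations of the textbook discrete-event pattern, not the actual 3rd Space API.

```java
import java.util.PriorityQueue;

// Minimal event-driven simulation kernel: a virtual clock and a priority
// queue of pending events, executed in timestamp order.
public class SimKernel {
    // An event is just a firing time plus an action to run.
    record Event(double time, Runnable action) {}

    private final PriorityQueue<Event> queue =
        new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));
    private double now = 0.0;

    public double now() { return now; }

    // Schedule `action` to fire `delay` time units from the current clock.
    public void schedule(double delay, Runnable action) {
        queue.add(new Event(now + delay, action));
    }

    // Drain the queue: advance the clock to each event's timestamp and run it.
    // Actions may schedule further events, so simulations unfold recursively.
    public void run() {
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            now = e.time();
            e.action().run();
        }
    }

    public static void main(String[] args) {
        SimKernel sim = new SimKernel();
        sim.schedule(2.0, () -> System.out.println("t=" + sim.now() + " second"));
        sim.schedule(1.0, () -> System.out.println("t=" + sim.now() + " first"));
        sim.run();  // events fire in virtual-time order, not insertion order
    }
}
```

The point of the pattern is that virtual time is decoupled from wall-clock time, which is what lets the same simulation code run faster or slower than real time.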

I’ve also fully implemented the two-dimensional Voronoi based Area of Interest management, which I think is actually quite innovative.  Currently, the state of the art in this arena is message based pub/sub systems, which simply don’t scale at all, or some Frankenstein’s monster based on what amounts to a QuadTree structure.  Not optimal.  Naturally, the AOI management is all event driven, being implemented as part of the aforementioned simulation framework.  I have the three-dimensional Voronoi Sphere of Interest management in the works.  Once that is in place, things will really start rocking.
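For the curious, the core idea of Voronoi-based AOI can be sketched brute-force: each avatar “owns” the region of space closer to it than to anyone else, and its area of interest is the set of avatars whose regions touch its own.  The sketch below approximates that adjacency on a sampling grid – purely illustrative, with made-up names, and nothing like an incremental, event-driven production implementation, which would maintain the Delaunay triangulation rather than rescanning a grid.

```java
import java.util.Set;
import java.util.TreeSet;

// Brute-force sketch of Voronoi-based Area-of-Interest: rasterize the plane,
// assign each sample point to its nearest site, and treat two sites as AOI
// neighbors when their cells share an edge of the sampling grid.
public class VoronoiAoi {
    public static Set<Integer> neighbors(double[][] sites, int self,
                                         double size, int res) {
        int[][] owner = new int[res][res];
        double step = size / res;
        // Assign each grid sample to the nearest site (its Voronoi cell).
        for (int gx = 0; gx < res; gx++) {
            for (int gy = 0; gy < res; gy++) {
                double x = (gx + 0.5) * step, y = (gy + 0.5) * step;
                int best = 0;
                double bestD = Double.MAX_VALUE;
                for (int s = 0; s < sites.length; s++) {
                    double dx = x - sites[s][0], dy = y - sites[s][1];
                    double d = dx * dx + dy * dy;
                    if (d < bestD) { bestD = d; best = s; }
                }
                owner[gx][gy] = best;
            }
        }
        // Two cells are adjacent when horizontally or vertically neighboring
        // samples belong to different sites.
        Set<Integer> result = new TreeSet<>();
        for (int gx = 0; gx < res; gx++)
            for (int gy = 0; gy < res; gy++) {
                int a = owner[gx][gy];
                if (gx + 1 < res) mark(result, self, a, owner[gx + 1][gy]);
                if (gy + 1 < res) mark(result, self, a, owner[gx][gy + 1]);
            }
        return result;
    }

    private static void mark(Set<Integer> out, int self, int a, int b) {
        if (a == self && b != self) out.add(b);
        if (b == self && a != self) out.add(a);
    }

    public static void main(String[] args) {
        // Three avatars on a line: the middle one's cell separates the outer
        // two, so its AOI contains both, while each outer avatar sees only
        // the middle one.
        double[][] sites = {{1, 5}, {5, 5}, {9, 5}};
        System.out.println(neighbors(sites, 0, 10, 100));
        System.out.println(neighbors(sites, 1, 10, 100));
    }
}
```

The payoff over pub/sub broadcast is that each avatar is only ever interested in its handful of Voronoi neighbors, regardless of how many avatars are in the world.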

You’ll find some interesting things in the source tree, such as a pretty darn sophisticated Data Flow Analysis framework that I found.  I don’t actually use it in the simulation framework’s transformation logic, but it’s something that will likely have to be pulled out to solve some of the nastier analysis issues that will inevitably have to be solved in that arena.

You’ll also find a Composite Object framework that I put together, which I call Janus.  The composite pattern is an important pattern in the simulation / virtual worlds area and I couldn’t find anything out there remotely useful.  Yes, I’ve looked at the available frameworks, but they were either too simple to do anything useful, or literally tried to boil the ocean in the attempt to solve all problems with their framework (I’m looking at you, Qi4J).  It’s small and extremely tight, leveraging ASM for the bytecode transformations to implement the composite, rather than creating a maze of twisty proxies, all alike.  Again, I haven’t used it in anger yet, so who knows if it’s really useful.  But it seems promising.
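For contrast with the ASM approach, here’s what the proxy style of composite looks like using the JDK’s own java.lang.reflect.Proxy: a facade interface whose methods get dispatched to whichever mixin implements them.  This is a hypothetical sketch of the composite object pattern itself – the interface and mixin names are invented for illustration and have nothing to do with the Janus API.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.LinkedHashMap;
import java.util.Map;

// JDK-proxy take on the composite object pattern: one facade object
// dispatches each interface method to the first mixin that implements it.
public class Composites {
    // A facade method is bound to a specific method on a specific mixin.
    record Binding(Object target, Method method) {}

    @SuppressWarnings("unchecked")
    public static <T> T compose(Class<T> facade, Object... mixins) {
        // Bind each facade method to the first mixin whose class declares a
        // public method with the same name and parameter types.
        Map<Method, Binding> dispatch = new LinkedHashMap<>();
        for (Method m : facade.getMethods()) {
            for (Object mixin : mixins) {
                try {
                    Method impl = mixin.getClass()
                                       .getMethod(m.getName(), m.getParameterTypes());
                    dispatch.put(m, new Binding(mixin, impl));
                    break;
                } catch (NoSuchMethodException ignored) { /* try next mixin */ }
            }
        }
        InvocationHandler handler = (proxy, method, args) -> {
            Binding b = dispatch.get(method);
            if (b == null) throw new UnsupportedOperationException(method.getName());
            return b.method().invoke(b.target(), args);
        };
        return (T) Proxy.newProxyInstance(facade.getClassLoader(),
                                          new Class<?>[] { facade }, handler);
    }

    // A composite "avatar" stitched together from two independent mixins;
    // the record accessors name() and x() happen to satisfy the facade.
    public interface Avatar { String name(); double x(); }
    public record Named(String name) {}
    public record Position(double x) {}

    public static void main(String[] args) {
        Avatar a = compose(Avatar.class, new Named("bob"), new Position(4.2));
        System.out.println(a.name() + " @ " + a.x());
    }
}
```

The reflective dispatch above costs a map lookup and a Method.invoke on every call, which is exactly the overhead a bytecode-weaving approach like ASM avoids by emitting the delegation as ordinary compiled methods.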

In any event, be aware that the code is licensed under the Affero GPL V3.  I am fully aware that this is the “yes, I’m a complete asshole” license.  Oh well.  Please note that I’m a rapacious capitalist at heart and believe in making money off of stuff.  But I also believe in the open source model.  I figure if you want to contribute back your mods, then that’s a decent enough payment.  The Affero GPL does a pretty good job of modeling the balance I want to strike with the release of the source.

As always, this is all “as is” and without warranty.  I don’t exist to fix any bugs you might find and I may not even respond to your emails.  But if you’re interested in working on some stuff that I think is pretty darn interesting, then this might be something you find useful.  Let me know.

I’ll definitely start putting in the work to fill out the Trac WIKI with some more useful information and start detailing what the sub projects are, documenting the various frameworks and start laying out the road map I have in my mind.

In any event, I hope you find the code useful and interesting.

Third Space postings on Tensegrity

When you shake your ass, they notice fast – and some mistakes were built to last

One of the more interesting books I’ve read in the past couple of years is No One Makes You Shop At Wal-Mart.  One of the take-aways from the book is the unmistakable conclusion that the idea of “revealed preferences” is simply wrong.  You see this kind of short cut thinking all the time: people basically believe that the choices someone has made reveal their preferences.  So, if you see that USA Today is the best selling newspaper, the theory of revealed preferences tells us that news consumers really, really do prefer USA Today.

What you learn from the work of Tom Slee is that this thinking is simply bullshit.  As anyone with a pulse can tell you, life is filled with situations that amount to versions of the Prisoner’s Dilemma, wherein your individually best strategy is often counter to your actual interests.  The long and short of the argument is that, inevitably, your choices suck, and interpretations based on the results of your decisions hardly reflect the trade offs that are involved in the process of making those decisions.

A good book and not all that long.  Definitely recommended, if you’re interested in that sort of thing.

One of the reasons that Slee’s book interests me is its application to my chosen domain of software.  One way to look at Slee’s thesis is through the lens of the Innovator’s Dilemma.  In the software industry, one sees this reflected in the idea that what a customer wants is more of what they’ve been buying in the past, only “better” – e.g. cheaper, faster, whatever…  A decent way to understand this is the transition that took place in transportation when the automobile came to the fore.  If you had asked the customers of horse drawn transportation (i.e. the “legacy” industry) what they wanted, they would have stated “faster horses”.

And so it goes in this industry we love so much.  What do customers want?  More Of The Same!  Faster Horses™!

Given that I work in the legacy industry, I certainly understand that there’s a lot of stuff out there that is way over hyped and fizzles faster than it became fashionable.  But this notion of basing your strategy on the revealed preferences of your customers, rather than on understanding what their actual problems are, is something that definitely keeps me up at night.  The idea that someone’s current investment reveals their preferences for the “way things are done” seems to be one that’s based on manifestly shaky ground.

And unlike – say – the auto industry, where the dinosaurs were caught flat footed after years of ignoring all the warning signs, it’s fairly clear that changes in more ephemeral industries such as ours – industries where change can happen far more rapidly than the buying cycle of durable goods – can arrive at a moment’s notice.  Fundamental changes in technology – such as the rise of the internet, for example – can completely catch dominant players flat footed, leaving them in the “me, too” category of innovation.  Playing catch up in the also-ran category.

Or selling Horse Drawn Carriages in the age of Henry Ford.

Deep thought

It’s the definition of passive-aggression and really quite unseemly, to set out to provoke people, and then when they react passionately and defensively, to criticise them for not holding to your standards of a calm and rational debate.

Just sayin’

All we need to do is take these lies and make them true (somehow)

Having worked in OSGi for quite a while, the most frequently asked question I get on the technology is “what’s the value proposition?”  Being a rapacious capitalist at heart, I think this is an eminently fair query from anyone looking at this OSGi stuff and scratching their head and wondering why the heck they would even want to consider using this technology in their systems.  There’s a non-zero and usually non-trivial cost associated with changing technology – a cost which usually grows in direct proportion to how “low” the technology is on whatever technology stack you are deploying.  OSGi is pretty low on that technology stack, so it has the potential to be very disruptive and hence very costly to an organization which adopts the technology.  Surely the benefit of making the switch should be at least proportional to the costs, and a prudent business would like to understand what they’re going to get for their trouble, and more importantly, their hard earned money.

To answer this question, what I first ask is that people think about their current environment.  Assuming you’re not a startup – a domain which I’m not considering in this post – then you are undoubtedly dealing with a mature system which is now or quickly will resemble Frankenstein’s monster more than anything else.  If your system is successful to any degree – and if it isn’t, then we aren’t really having this conversation – what you find is that your system is a victim of its own success.  Grizzled veterans remember the “good old days” when builds would take less than an hour and everyone could sit in a room and share a common understanding of what the hell was going on in this money maker of yours.

Sadly, one day you look up and find that no one knows what the hell is going on anymore.  Build times – perhaps one of the more visceral measurements of complexity we have – have jumped dramatically.  These days, people fire off builds and then go on lunch breaks.  Worse, your projections are that in just a short time in the future, the nightly “integration” builds you kick off will still be running well after your developers have shown up for work.  It’s at this point that one panics and decides that dramatic action is required.  Something MUST be done.  Well, as long as that something doesn’t require any change to what you’re currently doing – i.e. one starts searching for a silver bullet which will slay this beast of chaos that you’ve collectively created and return your life back to the way things used to be.  Before “IT” happened.


Snausages

So now I’m the guy wielding the process stick.  Stranger is the wonderment by many that I am “a process guy”.  I guess it just goes to show that you’re never quite able to see yourself as others see you.  Certainly, I’m a “process guy”.  Process is how you deal with the inevitable problems of groups of people trying to do things.  It’s called “protocol”.  Considering that all my professional life has been focused around the problem of consensus in distributed systems and the engineering of distributed protocols, I do find it slightly bewildering that people are surprised when I bring up process and my belief in good process.

One of the more painful lessons that I learned in business is that good contracts keep good friends as good friends.  I had to learn the hard way how stressful it is on even good, solid relationships between very smart and honest people when misunderstandings occur about things that mean something to them – i.e. MONEY, or more generally BUSINESS.  When things are just left up to good intentions and the belief that “we can all just work this out like normal people”, I invariably find the trail of debris in broken friendships and business partnerships.


Slouching Towards Bethlehem

I am of the opinion that no one actually sets out to do stupid things.  Rather, stupid things happen almost invariably for the best of intentions.  Thus the phrase “The path to hell is paved with the best of intentions”.

And so it goes with standards.  In the past week, I’ve been privileged to witness two crystal clear examples of first class paving of the path straight to the gates of hell.  The root of both of these examples is the simple fact that the specification organization doesn’t have a process in place for dealing with the changing of a specification after it has been accepted as final.  Basically, in this organization, the “final” version of the spec is presented to the members, and the members vote on whether to accept it.  Sounds simple, right?

Well, the devil is always in the details, and due to the way these things are scheduled, the reference implementations and conformance tests for these specifications aren’t finished at the time that the members vote on the “final” version of the specification.  Consequently, if anything shows up in either the creation of the RI, or in the creation of the conformance test which is to verify that the behavior is in accordance with the specification, there’s a serious problem that needs to be resolved.

In the two cases I’ve been privileged to see this week, the first was an issue with time.  As with all things, resources are limited, and certain resources are scarcer than the sympathy in a banker’s cold, dead heart.  Consequently, when the time pressure to produce something gets unbearable as the deadline approaches, reality sets in and, like survivors on a sinking lifeboat, everyone starts looking for stuff to throw overboard.  And because this process is done under pressure, there’s not an awful lot of thought and strategy put into the choice of what is being chucked overboard.


And Then Hemingway Punched Me In The Mouth

From my own point of view, I continually say lines from movies, expecting people to understand their applicability to the current context – but I often find myself looking at a lot of confused, blank stares when I say these things without explanation.  Consequently, I show the image on the right as context.  It’s a scene from the movie Groundhog Day.  In this scene, Bill Murray’s character is going to kill himself for the umpteen zillionth time by driving a truck off the cliff of a quarry.  As you can see, the groundhog is actually driving the truck as the police chase them at high speed.  The line in question – i.e. the line I would say which would then draw blank stares – is what Bill Murray’s character says to the groundhog at this precise moment in the scene:

“Don’t drive angry!  You should never drive when you’re angry.”

And that’s pretty much what I would say about tweeting when you’re angry: don’t do it.


OSGi RFP 122 – The OSGi Bundle Repository

This week at EclipseCon, I discovered that I had inadvertently opened a can of worms and found the entire Landsraad of the Open Source community arrayed against me.  My crime?  Apparently it’s simply that I’ve had the audacity to pick up an OSGi specification that has been in existence – and in the public domain – since 2005 (i.e. OSGi RFC 112, the OSGi Bundle Repository) and attempt to work out the issues with that specification so that we can finally formally release it as part of the OSGi specification.

Much of the suffering I was dealt was due to serious misunderstandings, by those involved, of the process that is currently being played out.  Some of this misunderstanding is due to ignorance of the OSGi process – and note that I use the term “ignorance” not as a pejorative, but simply as a statement of fact.  Those who aren’t involved in the OSGi process, nor familiar with the way that specifications in general are produced, can sometimes be left bewildered by the array of TLAs and the process by which consensus is reached.  Certainly in the Open Source world, things are sometimes done quite differently than they are done in standards bodies (note: I’m not saying that this is universal, only making the point that standards bodies are their own beasts and Open Source communities rarely conform to such formalized systems).

So let me make a couple of points, and talk about how I’m going to carry out the process of specification of the OSGi Bundle Repository to ensure that the world outside of OSGi can participate in this process.
