22 June 2009

The Intercloud is a global cloud of clouds

Few of us will dispute that:
The Internet is a global network of networks
So it logically follows that:
The Intercloud is a global cloud of clouds

It's amazing to think that the Internet kept us busy for two decades or so just by delivering the ability to pass messages between any two (or more) clients, and to consider all the things we've managed to achieve with this seemingly simple advance. It seems only yesterday I had one of the first private Internet connections in Australia (courtesy of DIALix, the country's first commercial ISP) and was able to communicate with others around the globe in real time (courtesy of [y]talk - responsiveness we still haven't managed to faithfully replicate with today's instant messaging networks!). But now it's time to take the Internet to the next level.

While the servers scaled up as the masses poured in it wasn't long before we reached a glass ceiling - clearly vertical scalability wasn't the way forward. Sure you can build big machines (after all, mainframes and minicomputers were fresh in our minds) but it's like driving a boat - after a certain point you'll use an order of magnitude more fuel to go only a fraction faster (think of the cost of big iron vs commodity white boxes).

By now I was a university sysadmin and the dot-com bust was still a few years away. Officially I was busy setting up Aurema's Share II (since acquired by Citrix) on a pair of SGI Origin servers so that UNSW's Maths Department and the Australian Graduate School of Management (AGSM) could "fair share" the hardware they'd purchased together. Unofficially I was experimenting with making ~150 overpowered but under-used Pentium-II workstations appear as one (using Debian GNU/Linux, bpbatch aka Rembo aka IBM Tivoli, and tools like PVM). I knew which approach I preferred, but unfortunately the machines lived out their lives idling as X terminals and I went to work on dot-coms and the Sydney 2000 Olympics.

Enter Google, Amazon and others (e.g. the entire grid community) who worked out how to make horizontal scalability work properly with toys like BigTable (A Distributed Storage System for Structured Data) and MapReduce (Simplified Data Processing on Large Clusters). It was finally possible to build services that could scale endlessly, allowing these pioneers to innovate without looking over their shoulders for fear of becoming victims of their own success. We know how that worked out for them (after all, the world only needs five computers, right?) - today we have computing powerhouses sprinkled around the Internet run by companies like Google and Amazon, while everyone else is playing musical chairs and hoping they won't wind up without a seat.
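For those who haven't read the MapReduce paper, the core idea can be sketched in a few lines as a single-process word count (the function names here are mine, not Google's API - in a real deployment each phase would be spread across thousands of commodity machines):

```python
from collections import defaultdict

def map_phase(documents):
    # Each mapper emits (word, 1) pairs; documents can be split
    # across any number of workers independently.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # The framework groups intermediate pairs by key between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Each reducer sums the counts for its keys; different keys
    # can land on different workers.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the cloud of clouds", "the intercloud is a cloud of clouds"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts["clouds"] == 2, counts["intercloud"] == 1
```

Because mappers and reducers share nothing, throughput grows by adding boxes rather than buying a bigger one - exactly the horizontal scalability the vertical-scaling era couldn't deliver.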

To use the electricity grid analogy, the Internet is like the grid itself - that is, the network of wires and power stations that connect everything together. One can poke electrons in one side and know electrons will pop out the other, even if various links are severed. Indeed that's all we've needed for email, instant messaging, media streaming and of course the web itself. The problem is that a grid without power stations isn't so interesting. Useful, yes, but certainly not exploiting the technology to the fullest extent possible. Enter cloud computing, with various cloud providers (and the underlying Internet) forming the Intercloud.

So who invented the term? Who knows. Who cares. I didn't (I'm not even the first to say it's a "cloud of clouds") but I have been using it pretty much since I first started talking about cloud computing and I've heard others like Rich Miller using it too... it was first mentioned in the press (outside of Trend Micro's "InterCloud" security service) back in 2007 in Head in the clouds? Welcome to the future:

Although vendors talk as though there is only one Internet cloud, each vendor will be running its own set of data centres that customers can use to access Internet-based information and resources, which may complicate matters

Cisco have been busy popularising the term lately, presenting a "blueprint" and whipping up A Hitchhiker's Guide to the Inter-Cloud that unsurprisingly focuses on private cloud and finds a place for their Unified Computing System. Gartner have been getting in on the action too and it seems likely that before long a bunch of other people will as well.

Although I think it's got a snowflake's chance in hell of displacing the Internet moniker, it may be useful for framing discussions about cloud computing interoperability and unlike many of the other terms that have popped up may actually serve some purpose (surely IBM of all people should know that whenever someone says "CloudBurst" $GOD kills a kitten).

If we're to realise the full value of cloud computing it will be by loosely coupled "aggregation" (as distinct from integration) of various offerings rather than putting all our eggs in one basket with a single provider. We don't expect Microsoft to provide the best software for every task (hence products like Adobe Photoshop and Autodesk's AutoCAD) so why expect less heterogeneity in the cloud?


  1. I've lost track of where I first heard 'InterCloud', but the concept was being talked about in the very early days of AWS. The aspect of the concept which appeals most to me is the notion of dynamic binding / late binding and the interaction of 'peer clouds' through interworking, interoperation and the standards that make it possible. I jumped back on the 'InterCloud' meme in a big way when James Urquhart and Chris Hoff started using it in the discussion of models and ontologies of Cloud Computing... probably late summer / early fall of 2008.

  2. I'm guessing the AWS community is where I picked it up too - I've been working on this cloud stuff since 2006 and like most of us beta tested EC2. The term makes a lot of sense, particularly when you consider that the Internet is just a network. The key is making sure that if one component fails the whole system doesn't fall over.

  3. When I started to work on infrastructure-on-demand-via-VMs in 2004, this idea was already obvious because huge amounts of federated, on-demand resources are at the heart of what the big-science community's been doing for 15 years now.

  4. Tim: Thanks - a lot of what we're doing today has a distinct sense of déjà vu about it... I'm not surprised that the concepts (if not the name) were alive and well long before floating to the surface now.

  5. What changes things this time is a huge non-grant-based market - I think that is exciting, and I'm not trying to be cynical.

    Anyhow, I wanted to comment again really to point out one interesting thing in this cloud "past" that I thought people knew but I guess maybe not?

    Xen itself was created initially as a tool for the XenoServer project, which is all about global-scale cloud computing in this vein. The VMM tool they built turned out to have game-changing performance characteristics (and a talented crew) and "the rest is history."

    It is interesting to read the 2003 paper "Controlling the XenoServer Open Platform" in the context of the drastic mainstream computing changes that have happened since 2006: http://www.cl.cam.ac.uk/research/srg/netos/papers/

  6. It's interesting to consider the huge yet understated role Xen plays in cloud computing today... I was working on Xen at Citrix long before the powers that be took an interest in it, and yet now that they have it nobody knows wtf to do with it.

    In any case the hypervisor is now well and truly commoditised (just need APIs like OCCI to take hold) so it's all about the management layer.
