Introducing the Cloud Computing Stack (2009 Edition)

Those of you watching the Open Cloud Computing Interface (OCCI) mailing list over the weekend may have spotted the Resource Types: Compute / Network / Storage thread in which the cloud computing stack was discussed. Although a little off topic, it was useful for framing the first use case for OCCI (Cloud Infrastructure Services, aka IaaS), and the result of the discussion was some refinement of my cloud computing stack, on which Wikipedia’s cloud computing article (among other things) is based.

There was some contention over the use of the term “fabric” for the bottom layer given it has also been used with platforms like Azure, so thanks to Alexis Richardson for suggesting the (obvious) “Servers” replacement. While not perfect I can’t think of anything better, and it fits nicely with “Clients” at the top layer, making this a fully functional taxonomy for cloud computing.

Other changes include pushing “storage” down into the infrastructure layer and “services” into the platform layer (ignoring mashups and the like for the sake of clarity) and sticking with the application layer after considering changing it to “software”.

It’s available under the new Creative Commons Zero license (essentially public domain).

Cloud Computing Types: Public Cloud, Hybrid Cloud, Private Cloud

It’s no secret that I don’t much like this whole private cloud or internal cloud concept (see here and here), on the basis that while advanced virtualisation technologies are valuable to businesses, they are a severe short sell of what cloud computing is ultimately capable of. The electricity grid took over from on-site generators very quickly and I expect cloud computing to do the same with respect to private servers, racks and datacenters. Provided, that is, that the concept is not co-opted by threatened vendors pushing solutions that they claim are “just like cloud computing, only better”. The potential for cheap, commoditised computing resources far outweighs the benefits of in-house installations, which carry few of the advantages that make cloud computing so interesting (e.g. no capex, minimal support, access anywhere anytime, no peak load engineering, shared costs, etc.).

If you look at the overwhelming amount of coverage of cloud computing in the traditional sense versus the recent sporadic appearances of articles about private/internal clouds, the latter is what we Wikipedians call a fringe theory, and I’ve just treated it as such in the article (see below).

The interesting thing is this editor who appeared on the scene at the cloud computing article recently… Initially they sought to water down the references to open source software (which currently powers the overwhelming majority of cloud computing installations, e.g. Google, Salesforce and Amazon), but then they moved on to declaring that the very definition of cloud computing should be changed to accommodate private clouds (which is not going to happen so long as the overwhelming majority of reliable sources equate “cloud” with “Internet”).

The conflict of interest alarm bells were already ringing, but it wasn’t until they pressed on with this change, despite the absence of consensus and protests from other editors, that they were pushed to disclose affiliations. It was the redefining of “network computing” (an Oracle-ism and trademark from over a decade ago) to be a synonym for “cloud computing” using questionable sources that gave the game away, and it wasn’t long before the editor revealed their identity as a Senior Software Architect at Oracle in the Bay Area.

That in itself isn’t a huge problem, after all conflict of interest is a behavioural guideline rather than a policy, but it is when there are associated policy violations like verifiability and neutral point of view as there were here. I’m still not sure what to make of Oracle’s new-found interest in cloud computing, especially after CEO Larry Ellison heavily criticised it in a speech last year, and it troubles me somewhat that these shenanigans are going on during business hours (I’d hate to think that they were assigned the task of “fixing” the article), but for now I’m assuming good faith and waiting to see what this editor comes up with next.

Anyway the result is that they’ve got their mention of private cloud/internal cloud, only it probably wasn’t exactly what they had in mind (that’s the law of unintended consequences for you). I’m sure this will be quite controversial with “I can’t believe it’s not cloud” vendors and their cronies but it’s supported by reliable sources and I believe an accurate representation of the consensus view. The term “private cloud”, so far as I am concerned, borders on deceptive advertising as it fails to deliver on the potential of cloud computing and those who attempt to use it to hang on the coat-tails of cloud computing should expect resistance.

All is not lost though, as most of what people are calling “private clouds” have some “public cloud” aspect (even if just the future possibility to migrate) and can be classed as a “hybrid cloud” architecture. Indeed, according to the likes of HP, Citrix and Nicholas Carr (and myself), most large enterprises will be looking to run a hybrid architecture for up to 5-10 years (though many early adopters have already taken the plunge). Yes, it’s semantics, but the important difference is that you’re not claiming to be a drop-in replacement for cloud computing, rather a component of it. You can expect a lot less resistance from cloud computing partisans as a result.

As usual the diagram is available under a Creative Commons Attribution ShareAlike 3.0 license in PNG and SVG formats from the Wikimedia Commons (Cloud computing types.svg) so feel free to use it in your own documents, presentations, etc.


Public cloud
Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.

Hybrid cloud
A hybrid cloud environment consisting of multiple internal and/or external providers “will be typical for most enterprises”.

Private cloud
Private cloud and internal cloud are neologisms that some vendors have recently used to describe offerings that emulate cloud computing on private networks. These (typically virtualisation automation) products claim to “deliver some benefits of cloud computing without the pitfalls”, capitalising on data security, corporate governance, and reliability concerns. They have been criticised on the basis that users “still have to buy, build, and manage them” and as such do not benefit from lower up-front capital costs and less hands-on management, essentially “[lacking] the economic model that makes cloud computing such an intriguing concept”.

While an analyst predicted in 2008 that private cloud networks would be the future of corporate IT, there is some contention as to whether they are a reality even within the same firm. Analysts also claim that within five years a “huge percentage” of small and medium enterprises will get most of their computing resources from external cloud computing providers as they “will not have economies of scale to make it worth staying in the IT business” or be able to afford private clouds.

The term has also been used in the logical rather than physical sense, for example in reference to platform as a service offerings.

Update: This article was featured on CircleID on 6 March 2009.

Cloud Computing Economics 101

I have finally got around to adding some of my cloud computing economics research to Wikipedia’s cloud computing article. I tried to maintain a neutral point of view, but there’s not really a bad thing to say about cloud computing when it comes to economics. I also realised we hadn’t yet talked about cloud computing in terms of the oh-so-common electricity utility analogy, and as Wikipedia likes analogies for technical content this addition is probably long overdue.

I’m sure the diagram in particular will prove useful and as such I’ve made it available under a Creative Commons Attribution ShareAlike 3.0 license at the Wikimedia Commons: Cloud computing economics.svg. As it’s in Scalable Vector Graphics (SVG) format you can scale it up to poster size with no loss in quality, and most browsers support it natively now. The MediaWiki software can also render a PNG version for you if you prefer. As usual it was created using the excellent OmniGraffle software, though I sometimes use the open source Inkscape, as well as Adobe Illustrator (all of which export to SVG).

Without further ado, here’s the new section (but it may look quite different by the time you get to it… here’s a link to the current revision).


Cloud computing users can avoid capital expenditure (CapEx) on hardware, software and services, instead paying a provider only for what they use. Consumption is billed on a utility (e.g. resources consumed, like electricity) or subscription (e.g. time based, like a newspaper) basis with little or no upfront cost. Other benefits of this time-sharing style approach are low barriers to entry, shared infrastructure and costs, low management overhead and immediate access to a broad range of applications. Users can generally terminate the contract at any time (thereby avoiding return on investment risk and uncertainty) and the services are often covered by service level agreements with financial penalties.
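The distinction between these billing models is easy to see in a toy cost model (a sketch only; every price and usage figure below is hypothetical):

```python
# Toy cost model contrasting the three approaches above. All prices
# and usage figures are hypothetical, purely for illustration.

def utility_cost(units_consumed, price_per_unit):
    """Utility billing: pay only for what you use, like electricity."""
    return units_consumed * price_per_unit

def subscription_cost(months, price_per_month):
    """Subscription billing: pay per time period, like a newspaper."""
    return months * price_per_month

def capex_cost(hardware_price, monthly_support, months):
    """Traditional CapEx: buy the hardware up front, then run it.
    Note the hardware must be sized for peak load, not average load."""
    return hardware_price + monthly_support * months

# A bursty workload: 10 units in most months, 500 in one peak month.
usage = [10] * 11 + [500]
print(utility_cost(sum(usage), 0.10))    # pay for 610 units consumed
print(subscription_cost(12, 20.00))      # flat fee, peaks included
print(capex_cost(5000, 50, 12))          # up-front spend dominates
```

The utility model tracks actual consumption, the subscription model smooths it, and the CapEx model front-loads it; this is why peaky workloads (and the avoidance of peak load engineering mentioned earlier) favour the cloud approach.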

According to Nicholas Carr the strategic importance of information technology is diminishing as it becomes standardised and cheaper. He argues that the cloud computing paradigm shift is similar to the displacement of electricity generators by electricity grids early in the 20th century.

Native Web Applications (NWA) vs Rich Internet Applications (RIA)

A rewrite of the Rich Internet Application (RIA) article (snapshot) is my latest contribution to Wikipedia, following last year’s full rewrite of the Cloud Computing article (which is now finally fairly stable and one of the main authoritative sources on the topic; according to the article statistics I’ve just made my 500th edit, or one every eight hours on average, so it’s about as up-to-date as you’ll find).

Needless to say I agree wholeheartedly with Mozilla’s Mark Finkle in saying RIA is Dead! Long Live Web Applications. There are still some niches (eg online gaming, video capture) but with HTML 5 bringing goodies like the VIDEO tag (even if commercial interests prevented standardisation on open codecs like Theora) and next-generation browsers (eg Google Chrome) treating plugins like second-class citizens, it’s only a matter of time before Rich Internet application frameworks (yes, there’s a new Wikipedia category too) are relegated to specific use cases and enterprises with controlled client configurations.

Proliferation of mobile and alternative devices (eg Netbooks using Linux and/or ARM processors) is making it increasingly difficult for vendors who were already struggling to maintain penetration rates, and having wildly successful devices like the iPhone totally off limits can’t be helping (especially if Apple branch out into the Netbook space, as they almost certainly will this year).

Just quickly on that topic, this will be true whether they enter the market with an embedded device (eg iPhone’s stripped back OS X on ARM) or go for a full-blown thick client (eg OS X on Atom), as either way it would surprise me if the AppStore (with all its restrictions) didn’t make an appearance. Unlike Microsoft, Apple make virtually nothing on software sales to traditional thick clients, so the AppStore is a license to print money.

The rest of us will be able to enjoy what I call “Native Web Applications” (NWAs for those who insist on TLAs) from the device of our choice, with nothing more than a recent, standards-compliant browser like Chrome, Firefox, IE 8 or WebKit. For now I define it as follows:

A Native Web Application (NWA) is a web application which is 100% supported out of the box by recent standards-compliant web browsers

You don’t have to use this term if you don’t want to (I can’t think of a better one), but please make an effort to avoid referring to such applications as ‘Rich Internet applications’ irrespective of how ‘rich’ the interface appears. And no, using Ajax (which is based on existing web standards) does not make for an RIA, nor does releasing components of an RIA Framework as open source and/or open standards make for an NWA. It’s becoming increasingly important to differentiate and while RIA need not be considered dirty words, the only way to reach everyone will be by going native.

Rich Internet application

Rich Internet applications (RIAs) are web applications that have some of the characteristics of desktop applications, typically delivered by way of proprietary web browser plug-ins or independently via sandboxes or virtual machines[1]. Examples of RIA frameworks include Adobe Flash, Java/JavaFX[2] and Microsoft Silverlight[3].

The term was introduced in the 1990s by vendors like Macromedia who were addressing limitations at the time in the “richness of the application interfaces, media and content, and the overall sophistication of the solutions” by introducing proprietary extensions[4].

As web standards (such as Ajax and HTML 5) have developed and web browsers’ compliance has improved, there is less need for such extensions. HTML 5 delivers a full-fledged application platform; “a level playing field where video, sound, images, animations, and full interactivity with your computer are all standardized”[5].


With very few exceptions (most notably YouTube which currently relies on Adobe Flash for video playback) the vast majority of the most popular web sites are native web applications[6]. Online gaming is one area where RIAs are prevalent and applications (such as DimDim) which require access to video capture also tend to use RIAs (with the notable exception of Gmail which uses its own task-specific browser plug-in[7]).

Key characteristics

  • Accessibility of data to search engines and web accessibility can be impaired. For example, it took over a decade from release for Adobe Flash to become universally searchable[7].
  • Advanced communications with supporting servers can improve the user experience, for example by using optimised network protocols, asynchronous I/O and pre-fetching data (eg Google Maps). Accordingly, reliable broadband connections are often required.
  • Complexity of advanced solutions can make them more difficult to design, develop, deploy and debug than traditional web applications (but typically less so than application software).
  • Consistency of user interface and experience can be controlled across operating systems. Performance monitoring and fault diagnosis can be particularly difficult.
  • Installation and Maintenance of plug-ins, sandboxes or virtual machines is required (but applications are smaller than their predecessors and updates are typically automated). Installation is typically faster than that of application software but slower than that of native web applications and automation may not be possible.
  • Offline use may be supported by retaining state locally on the client machine, but developments in web standards (prototyped in Google Gears) have also enabled this for native web applications.
  • Security can improve over that of application software (for example through use of sandboxes and automatic updates), but the extensions themselves are subject to vulnerabilities and the access they are granted is often much greater than that of native web applications[8].
  • Performance can improve depending on the application and network characteristics. In particular, applications which can avoid the latency of round-trips to the server by processing locally on the client are often a lot faster. Offloading work to the clients can also improve server performance. Conversely the resource requirements can be prohibitive for small, embedded and mobile devices.
  • Richness by way of features not supported natively by the web browser such as video capture (eg Adobe Flash).
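The Performance point above is easy to quantify with a back-of-envelope latency model (all the numbers are illustrative, not measurements):

```python
# Rough latency model for client-side vs server-side processing.
# All timings are illustrative only.

def server_side_ms(steps, round_trip_ms, server_work_ms):
    """Every interaction step incurs a full network round-trip."""
    return steps * (round_trip_ms + server_work_ms)

def client_side_ms(steps, round_trip_ms, server_work_ms, client_work_ms):
    """One round-trip to fetch data, then all steps run locally."""
    return round_trip_ms + server_work_ms + steps * client_work_ms

# 20 interaction steps over a 100 ms round-trip link:
print(server_side_ms(20, 100, 5))        # 2100 ms
print(client_side_ms(20, 100, 5, 5))     # 205 ms
```

Avoiding the round-trips dominates everything else, which is why applications that process locally on the client feel so much faster; the flip side, as noted above, is that the client must be powerful enough to do the work.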


An appropriate Rich Internet application framework is usually required to run an RIA, and needs to be installed using the computer’s operating system before launching the application. The software framework is typically responsible for downloading, updating, verifying and executing the RIA.[9]


Taxonomy: The 6 layer Cloud Computing Stack

Finally things have settled down enough that we can start making some sensible statements about the taxonomy of cloud computing. The Wikipedia cloud computing article(s) in particular are now fairly stable (after countless hours of work over the recent months) and the main one has climbed its way to the top of Google’s search results for “cloud computing”. New articles like the hours-old one on RightScale are slipping in to the cloud computing category tree nicely too.

Therefore, without further ado I am pleased to announce the 6 layer Cloud Computing Stack (developed and hosted on the brand new Cloud Computing Community Wiki) in all its Creative Commons Public Domain glory. You can use it however you see fit, commercially or non-commercially, under whatever license you want and you don’t even need to give me credit for it. You can get it in SVG and PNG formats from the WikiMedia Commons, and if you’ve got the excellent OmniGraffle editor then I’ll even send you the originals if you ask nicely.

The 6 layers of the Cloud Computing Stack (from top to bottom) are:

  • Clients
  • Services
  • Application
  • Platform
  • Storage
  • Infrastructure

This finds a happy medium in that it:

  • Doesn’t oversimplify like the aaSy 3 layer stack (SaaS, PaaS, IaaS)
  • Doesn’t overcomplicate like the all-inclusive 11 layer stack (remember cloud computing is about hiding the complexity of the Facilities, Network, Hardware, OS, Systems Management and Development Environment layers).
  • Favours ‘Application’ (as in ‘Web Application’) over ‘Software’ (which can exist on the servers with SaaS and/or the clients with software plus services).
  • Ignores things cloud computing users don’t care about (most notably anything physical)
  • Avoids altogether the overused ‘as a Service‘ moniker, the characteristics of which (scalability, utility billing, no capex, etc.) are common and shared with cloud computing in general.
  • Resists the urge to create new terms (neologisms) unnecessarily, opting for the simplest appropriate single word possible.

I hope that this work proves useful in understanding what cloud computing is all about and how cloud computing architecture works. I also hope that these terms take over from more complex, aaSy neologisms as they’re clean, simple and they just make sense. Here’s a translation table to get you started:

  • Cloud Computer, Device, etc. » Cloud Client
  • Web Services » Cloud Services
  • Software as a Service, Software plus Services » Cloud Application
  • Platform as a Service » Cloud Platform
  • Storage as a Service, Cloud Attached Storage » Cloud Storage
  • Infrastructure as a Service, Hardware as a Service » Cloud Infrastructure
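For the programmatically inclined, the translation table above amounts to a simple lookup; here it is as a Python sketch (the mappings are exactly those listed, with unknown terms passed through unchanged):

```python
# The aaSy-to-cloud translation table as a lookup. Several legacy
# terms can map to the same layer of the stack.

TRANSLATIONS = {
    "Cloud Computer": "Cloud Client",
    "Cloud Device": "Cloud Client",
    "Web Services": "Cloud Services",
    "Software as a Service": "Cloud Application",
    "Software plus Services": "Cloud Application",
    "Platform as a Service": "Cloud Platform",
    "Storage as a Service": "Cloud Storage",
    "Cloud Attached Storage": "Cloud Storage",
    "Infrastructure as a Service": "Cloud Infrastructure",
    "Hardware as a Service": "Cloud Infrastructure",
}

def translate(term):
    """Map a legacy term to its cloud stack equivalent."""
    return TRANSLATIONS.get(term, term)

print(translate("Platform as a Service"))     # Cloud Platform
print(translate("Hardware as a Service"))     # Cloud Infrastructure
```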

Update: We ended up settling on a simpler 3 layer stack after much discussion.

The Cloud and Cloud Computing consensus definition?

“Cloud Computing is the realisation of Internet (‘Cloud’) based development and use of computer technology (‘Computing’) delivered by an ecosystem of providers.”

It’s amazing that such a simple concept has caused so much confusion, but having spent the last few days reviewing the recent discussions it seems many are falling into the trap of trying to align Cloud Computing with (or contrast it against) existing terminology like SaaS and Utility Computing. It is in fact far more suitable as an umbrella term encompassing all of these related components.

‘The Cloud’

While there can be multiple definitions for Cloud Computing, for The Cloud itself ‘there can be only one‘ as it’s a metaphor for the Internet; people talking about clouds (plural) are probably confusing it with grids. Yes you can replicate some of this in a ‘private cloud’, but it will always be exactly that: a replica, and it will likely be somehow connected to (and therefore part of) the real cloud anyway. Remember, much of the value of Cloud Computing comes from leveraging other services in The Cloud for a result greater than the sum of its parts.

Why ‘The Cloud’?

Remember all those network diagrams with a fluffy cloud in the middle? Why a cloud and not a black box or some other device? Because we simply don’t know, and better yet we don’t need to know, what goes on in there – we just pass a packet down our pipe and (most of the time) it arrives at its destination. This is an abstraction (in reality the Internet is an incredibly complex beast) but an important one; it significantly reduces the complexity of our systems; a good example is relatively simple VPNs having quickly displaced many complex WANs.


Let’s break down my definition (which I came to by collating the assertions that were in line with my view and then boiling the result down to the basic common elements):

“Cloud Computing…

  • …is the realisation of
    While many of the requisite components have been available in various forms for some time (eg Software as a Service, Utility Computing, Web Services, Web 2.0, etc.) it is only now they are reaching critical mass that the Cloud Computing concept is working its way into the mainstream. As more of a collection of trends (a ‘metatrend‘) we still have some way to go yet, but Cloud Computing solutions are a reality today and will rapidly mature and expand into virtually every corner of our lives and enterprises.
  • …Internet (‘Cloud’) based…
    Although some have [ab]used the ‘Cloud Computing’ term in reference to infrastructure (particularly grid computing, like Amazon’s pioneering Elastic Compute Cloud), much of its value is derived from the universal connectivity of the Internet; between businesses (B2B e.g. Web Services like Amazon Web Services), businesses and consumers (B2C e.g. Web 2.0 like Google Apps) and between consumers themselves (C2C e.g. peer to peer like BitTorrent). Many of us are now connected to ‘The Cloud’ where we work (office), rest (home) and play (mobile) and there are solutions (eg Gears) for when we are not.
  • …development and use of computer technology’…
    an accepted, all-encompassing definition of computing – there are very few areas which will not be affected in some way by Cloud Computing so I’ve gone for the broadest possible definition.
  • …delivered by an ecosystem of providers.
    While it is possible to enjoy some of the advantages using a single provider (eg Google), it is hard to imagine a functionally complete solution which does not draw on multiple providers (in much the same way as we install task-specific applications onto our legacy computers). Your electricity is almost certainly generated by wholesale providers who pump it into the grid and similarly Cloud Computing will typically be delivered by layered (eg Smugmug on Amazon S3) and/or interconnected (eg Facebook<->Twitter) systems.

Cloud Computing Architecture

Cloud Computing is typically universally accessible, massively scalable (with vast pools of multi-tenant ‘on-demand’ resources), highly reliable (see my TrustSaaS site for proof that the main services are up over 99% of the time), cost effective and utility priced, with low barriers to entry (eg capital expenditure, professional services). None of these attributes are absolute requirements though, not even massive scalability: an esoteric web service may be needed by only a small handful of users yet still be an important part of the ecosystem.
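One caveat worth noting on reliability: a service built on an ecosystem of providers in series is only as available as the product of its dependencies. A quick sketch (assuming independent failures, with made-up availability figures):

```python
# Back-of-envelope composite availability: a layered service (eg an
# application on a platform on a storage service) that needs all of
# its dependencies up at once is, assuming independent failures,
# available with the product of their availabilities.

def composite_availability(availabilities):
    """Probability that all serial dependencies are up at once."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# Three dependencies, each up 99.9% of the time:
print(round(composite_availability([0.999, 0.999, 0.999]), 4))  # 0.997
```

In practice redundancy across providers can claw this back, which is one more argument for loosely coupled ecosystems over a single monolithic provider.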

Cloud Computing architecture looks something like this, with layers similar to the OSI networking stack:

  • Client: consumes these applications via a browser and/or programmatically
  • Services (Composite Applications or Mashups): linked together using APIs like REST (eg TrustSaaS), in much the same way as ‘pipes’ are used in Unix to create arbitrarily complex systems from simple tools
  • Application (Software): ideally follows the proven Unix philosophy of ‘do one thing and do it well‘, but may grow quite complex
  • Platform: on which applications are built, including the language itself (eg Java, Python) as well as supporting systems like storage
  • Infrastructure (Hardware): consisting of the physical computing resources (and virtualisation layer(s) at the hardware and/or operating system layers)
  • Networking: courtesy of the existing Internet (eg TCP/IP)
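The Unix ‘pipes’ analogy in the Services layer can be sketched in a few lines. The services here are hypothetical stand-ins (plain functions rather than real REST endpoints over HTTP), but the composition pattern is the point:

```python
from functools import reduce

# Sketch of the pipe analogy for composite applications/mashups:
# each service does one thing and does it well, and a mashup chains
# them the way a shell pipeline chains tools. These 'services' are
# hypothetical local stand-ins for what would really be REST calls.

def fetch_photos(user):
    return [f"{user}-photo-{i}" for i in range(3)]     # eg a photo service

def geotag(photos):
    return [(p, "51.5N,0.1W") for p in photos]         # eg a location service

def render_map(tagged):
    return f"map with {len(tagged)} markers"           # eg a mapping service

def pipe(value, *stages):
    """Compose stages left to right, like 'a | b | c' in a shell."""
    return reduce(lambda acc, stage: stage(acc), stages, value)

print(pipe("alice", fetch_photos, geotag, render_map))  # map with 3 markers
```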

Cloud Computing Components

Although many of these are solutions to the same problems, most of them are actually components of Cloud Computing, rather than Cloud Computing itself (working from the ground up):

  • Grid computing, any network of loosely-coupled computers acting in concert, is mostly concerned with tackling complexity and improving manageability of computing resources (for example, production servers not being taken down by server failures or routine maintenance). You’ll find grids outside of Cloud Computing architectures, though there is a [vendor driven] tendency to confuse the two (particularly where some intelligent/autonomic management aspects are involved). Don’t make this mistake yourself; although many Cloud Computing systems are based on grids because their scalability needs can only be satisfied by horizontal scaling (usually involving thousands of commodity grade PCs), these are very different animals.
  • Virtualisation (in the Cloud Computing context), which allows you to deploy a virtual server where you might otherwise have provisioned physical hardware, is an enabler for Infrastructure as a Service (IaaS). Increased automation of operating system and application deployment is pushing the interface further and further up towards the application layer itself (eg Desktone‘s Desktop as a Service).
  • Infrastructure as a Service (IaaS) (Amazon EC2, GoGrid, AppNexus): While Internet (‘cloud’) connected grids are particularly useful (and a natural progression for virtualisation and SOA solutions being rolled out en masse in enterprises today), implying that this is somehow equivalent to cloud computing is too narrow a view. Integrate a SaaS/utility style billing system into a traditional grid and you’ve got Infrastructure as a Service (IaaS). These are more cost effective, reliable, scalable and user friendly than their disconnected counterparts and are one big step closer to the panacea of autonomic computing. Expect to see existing ‘virtual infrastructure’ providers like VMware and Citrix seamlessly complementing on-premises solutions with cloud based services.
  • Platform as a Service (PaaS) (Google’s AppEngine, Salesforce’s, Heroku, Joyent, Rackspace’s Mosso): takes grid computing to the next level of abstraction by pushing the interface up to the platform or ‘stack’ on which applications themselves are built (eg Django, Ruby on Rails, Apex Code). This is primarily interesting for developers and power users and is an increasingly important component of the cloud computing ecosystem. It allows them to focus on development without the overhead of hardware and operating system maintenance, database tuning, load balancing, network connectivity etc. while exposing technology like BigTable (and massive scalability) which might not otherwise be available to them. More importantly, it eliminates capital expenditure requirements, allowing boutique Independent Software Vendors like us to ‘stay in the game’.
  • Utility Computing (Amazon S3) is more about a ‘utility’ (gas, water, electricity) pricing model, yet one can derive the benefits of cloud computing with a more traditional pricing model, or indeed without having to pay for it at all (consider Google’s AppEngine for example, where its utility-style pricing applies only to the more demanding users).
  • Web Services (Amazon Web Services): ‘the ‘glue’ that holds cloud computing components together’, are finally maturing and being adopted en masse, thanks in no small part to simplification by way of protocols like REST, commercialisation by providers like Amazon (Jeff Bezos’ Risky Bet) and the abundance of web toolkits (e.g. Ruby on Rails) which lower the barrier to entry by providing native support. You can do everything from payments to ‘human intelligence tasks‘ with Web Services now, and mashups rely on them heavily to make products that are greater than the sum of their parts. Companies like Ariba and Rearden Commerce are taking this to the extreme.
  • Web 2.0 (Wikipedia, Facebook, WebEx) which while a force in itself, deals more with making the web ‘read/write’, shifting power towards the consumer and leveraging their collective energy. While AJaX does a lot to make this environment more user friendly, the underlying theme is turning the ‘reader’ into a ‘contributor’. Most of the players in cloud computing exhibit Web 2.0 attributes.
  • Software as a Service (SaaS): (Google Apps, Salesforce CRM) falls under the cloud computing umbrella and is a primary component, but to align the two definitions is too narrow a view. SaaS is typically sold per user as pizza is per slice, but what is more important is that it is implemented and maintained by a provider who handles much of the complexity of running software on your behalf (eg scaling, backups, updates, etc.).
  • ‘Cloud’ System Integrators (Australian Online Solutions) and consultancies deploy the various components, make them work in concert together (using services like RightScale), integrate them to each other and with legacy systems using the exposed APIs as well as migrating data (eg email, calendars, contacts, documents, etc.) so that users can ‘hit the ground running’ and continue to collaborate efficiently with those who have not yet migrated ‘to the cloud’. Seamless migration is a reality today, and a critical component for cloud adoption.

Cloud Computing Today

The Cloud Computing revolution is upon us. Expect it to proliferate rapidly through your enterprise, with much of the drive coming from individual grassroots users (who are almost certainly already improving operational efficiency with Web 2.0 tools like Google, Salesforce and WebEx), so plan accordingly. It must be embraced for competitiveness rather than resisted (in much the same way as the PC was embraced decades ago) but it also requires careful governance and change management by experts. Low risk, high return offerings like messaging and web security are available for those who want to ‘test the water’ without opting for a complete Enterprise 2.0 deployment.

The draw of loosely coupled, massively scalable services will eventually result in most enterprises being swallowed by the cloud (or by more agile, possibly ‘digital native’ competitors who already were), or at least becoming nodes on it; indeed many already have. Barriers to adoption (eg offline support, security and compliance services) are being torn down every day and practical solutions exist for those that remain (eg encryption), so there are fewer and fewer reasons to sit on the sidelines.

Even the largest of enterprises are now starting to jump (typically having completed controlled pilots), and just as company officers would have difficulty explaining downtime losses caused by continuing to generate their own power after cheap, reliable utility electricity became available, shareholders will not accept companies wasting resources on commoditised infrastructure rather than focusing on their core competencies.

Thanks to Jeff Kaplan, Markus Klems, Reuven Cohen, Douglas Gourlay, Praising Gaw, Jimmy Pike, Damon Edwards, Brian de Haaf, Ben Kepes, Jack van Hoof, Kirill Sheynkman, Ken Ostreich, James Urquhart, Thorsten von Eicken, Omar Sultan, Nick Carr and others for their inadvertent contributions.

This article [was] also available as a Google Knol: Cloud Computing.