Getting started with OpenStack in your lab

Having recently finished building my new home lab I wanted to put the second server to good use by installing OpenStack (the first is running VMware ESXi 5.0 with Windows 7, Windows 8, Windows Server 8 and Ubuntu 12.04 LTS virtual machines). I figured many of you would benefit from a detailed walkthrough, so here it is (without warranty, liability, support, etc.).

The two black boxes on the left are HP ProLiant MicroServer N36Ls with modest AMD Athlon II Neo 1.3GHz dual-core processors and 8GB RAM, and the one on the right is an iomega ix4-200d NAS box providing 8TB of networked storage (including over iSCSI for ESXi, should I run low on direct attached storage). There’s a 5 port gigabit switch stringing it all together and a 500Mbps CPL device connecting it back up to the house. You should be able to set all this up for inside 2 grand. Before you try to work out where I live, the safe is empty as I don’t trust electronic locks.


Download Ubuntu Server (12.04 LTS or the latest long term support release) and write it to a USB key — if you’re a Mac OS X only shop then you’ll want to follow these instructions. Boot your server with the USB key inserted and it should drop you straight into the installer (if not, you may need to tell the BIOS to boot from USB by pressing the appropriate key, usually F2 or F10, at the appropriate time). Most of the defaults are OK, but you’ll probably want to select the “OpenSSH Server” option in tasksel unless you want to do everything from the console — just be sure to tighten up the default configuration if you care about security. Unless you like mundane admin tasks you might want to enable automatic updates too. Either way, let’s ensure any updates since release have been applied:

sudo apt-get update
sudo apt-get -u upgrade
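If you do want automatic updates but skipped that installer option, the stock Ubuntu mechanism is the unattended-upgrades package. A sketch (package name and config path are from stock Ubuntu 12.04 — verify against your release before relying on it):

```shell
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# The reconfigure step writes /etc/apt/apt.conf.d/20auto-upgrades containing:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
```

By default only security updates are applied; the allowed origins live in /etc/apt/apt.conf.d/50unattended-upgrades.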

Next you’ll want to install DevStack (“a documented shell script to build complete OpenStack development environments”, from Rackspace Cloud Builders), but first you’ll need to get git:

sudo apt-get install git

Now grab the latest version of DevStack from GitHub:

git clone git://github.com/openstack-dev/devstack.git

And run the script:

cd devstack/; ./stack.sh

The first thing it will do is ask you for passwords for MySQL, Rabbit, a SERVICE_TOKEN and SERVICE_PASSWORD and finally a password for Horizon & Keystone. I used the (excellent) 1Password to generate passwords like “sEdvEuHNNeA7mYJ8Cjou” (the script doesn’t like special characters) and stored them in a secure note.
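For what it’s worth you can generate suitable passwords at the command line too; a quick sketch using only openssl and standard coreutils:

```shell
# Emit a random 20-character alphanumeric password — the DevStack prompts
# reject some special characters, so letters and digits are the safe choice
openssl rand -base64 200 | tr -dc 'a-zA-Z0-9' | head -c 20; echo
```

Run it once per password and stash the results in a secure note as above.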

The script will then download dozens of dependencies (conveniently packaged by Ubuntu and/or the upstream Debian distribution), pip-install a few Python packages, clone the various OpenStack repositories, and so on. While you wait you may as well read the script to understand what’s going on. At this point the script failed for me because /opt/stack/nova didn’t exist. I filed bug 995078, but the script succeeded when I ran it a second time — looks like it may have been a glitch with GitHub.

You should end up with something like this:

Horizon is now available at
Keystone is serving at
Examples on using novaclient command line is in
The default users are: admin and demo
The password: qqG6YTChVLzEHfTDzm8k
This is your host ip:
 completed in 431 seconds.

If you browse to that address you’ll be able to log in to the console:

Openstack login

That will drop you into the Admin section of the OpenStack Desktop (Horizon) where you can get an overview and administer instances, services, flavours, images, projects, users and quotas. You can also download OpenStack and EC2 credentials from the “Settings” pages.

Openstack console

Switch over to the “Project” tab and “Create Keypair” under “Access & Security” (so you can access any instances you create):

Openstack keygen

The key pair will be created and downloaded as a .pem file (e.g. admin.pem).

Now select “Images & Snapshots” under “Manage Compute” and you’ll be able to launch the cirros-0.3.0-x86_64-uec image which is included for testing. Simply click “Launch” under “Actions”:

Openstack project

Give it a name like “Test”, select the key pair you created above and click “Launch Instance”:

Openstack launch

You’ll see a few tasks executed and your instance should be up and running (Status: Active) in a few seconds:

Openstack spawning

Now what? First, try to ping the running instance from within the SSH session on the server (you won’t be able to ping it from your workstation):

$ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=64 time=0.734 ms
64 bytes from icmp_req=2 ttl=64 time=0.585 ms
64 bytes from icmp_req=3 ttl=64 time=0.588 ms
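Incidentally, the reason you can’t reach it from elsewhere is that instances are firewalled by the “default” security group, which blocks inbound ICMP and SSH from other hosts. Once the euca-* tools described below are working, rules along these lines should open things up (standard euca2ools syntax, but treat it as an untested sketch — the same can be done from “Access & Security” in Horizon):

```shell
# Allow ping (all ICMP types/codes) and SSH into the default security group
euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default
```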

Next let’s copy some EC2 credentials over to our user account on the server so we can use the command line euca-* tools. Go to “Settings” in the top right and then the “EC2 Credentials” tab. Now “Download EC2 Credentials”, which come in the form of a ZIP archive containing an X.509 certificate (cert.pem) and key (pk.pem) pair as well as a CA certificate (cacert.pem) and an rc script (ec2rc.sh) which sets various environment variables to tell the command line tools where to find these files:

Openstack ec2

While you’re at it you may as well grab your OpenStack Credentials, which come in the form of an rc script (openrc.sh) only. It too sets environment variables, which can be seen by tools running under that shell.

Openstack rc

Let’s copy them (and the key pair from above) over from our workstation to the server:

scp admin.pem samj@

Stash the EC2 credentials in ~/.euca:

mkdir ~/.euca; chmod 0700 ~/.euca; cd ~/.euca
cp ~/*.zip ~/.euca; unzip *.zip

Finally let’s source the rc scripts:

source ~/.euca/ec2rc.sh
source ~/openrc.sh

You’ll see the openrc.sh script asks you for a password. Given this is a dev/test environment and we’ve used a complex password, let’s modify the script and hard-code the password by commenting out the last 3 lines and adding a new one to export OS_PASSWORD:

# With Keystone you pass the keystone password.
#echo "Please enter your OpenStack Password: "
#read -sr OS_PASSWORD_INPUT
#export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_PASSWORD=qqG6YTChVLzEHfTDzm8k

You probably don’t want anyone seeing your password or key pair so let’s lock down those files:

chmod 0600 ~/openrc.sh ~/admin.pem

Just make sure the environment variables are set correctly:

echo $EC2_USER_ID
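If you want something more thorough than a single echo, a small loop can report on everything the euca-* tools expect (the variable names here are assumptions based on a stock DevStack ec2rc.sh — adjust to match yours):

```shell
# Report which of the expected EC2 environment variables are set
for v in EC2_ACCESS_KEY EC2_SECRET_KEY EC2_URL EC2_CERT EC2_PRIVATE_KEY EC2_USER_ID; do
  eval val=\"\$$v\"
  if [ -n "$val" ]; then
    echo "OK      $v"
  else
    echo "MISSING $v"
  fi
done
```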

Finally we should be able to use the EC2 command line tools, e.g. euca-describe-instances:

RESERVATION r-8wvdh1c7 b34166e97765499b9a75f59eaff48b98 default
INSTANCE i-00000001 ami-00000001 test test running None (b34166e97765499b9a75f59eaff48b98, ubuntu) 0 m1.tiny 2012-05-05T13:59:47.000Z nova aki-00000002 ari-00000003 monitoring-disabled instance-store

As well as the openstack command:

openstack server list
| ID | Name | Status | Networks |
| 44a43355-7f95-4621-be61-d34fe53e50a8 | Test | ACTIVE | private= |

You should be able to ssh to the running instance using the IP address and key pair from above:

ssh -i admin.pem -l cirros
$ uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux

That’s all for today — I hope you find the process as straightforward as I did and if you do follow these instructions then please leave a comment below (especially if you have any tips or solutions to problems you run into along the way).

Simplifying cloud: Reliability

The original Google server rack

Reliability in cloud computing is a very simple concept which I’ve explained in many presentations but never actually documented:

Traditional legacy IT systems consist of relatively unreliable software (Microsoft Exchange, Lotus Notes, Oracle, etc.) running on relatively reliable hardware (Dell, HP, IBM servers, Cisco networking, etc.). Unreliable software is not designed for failure and thus any fluctuations in the underlying hardware platform (including power and cooling) typically result in partial or system-wide outages. In order to deliver reliable service using unreliable software you need to use reliable hardware, typically employing lots of redundancy (dual power supplies, dual NICs, RAID arrays, etc.). In summary:

unreliable software
reliable hardware

Cloud computing platforms typically prefer to build reliability into the software such that it can run on cheap commodity hardware. The software is designed for failure and assumes that components will misbehave or go away from time to time (which will always be the case, regardless of how much you spend on reliability – the more you spend the lower the chance but it will never be zero). Reliability is typically delivered by replication, often in the background (so as not to impair performance). Multiple copies of data are maintained such that if you lose any individual machine the system continues to function (in the same way that if you lose a disk in a RAID array the service is uninterrupted). Large scale services will ideally also replicate data in multiple locations, such that if a rack, row of racks or even an entire datacenter were to fail then the service would still be uninterrupted. In summary:

reliable software
unreliable hardware
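To put toy numbers on replication (my own illustration, not from any particular system): if a single replica is unavailable 5% of the time and failures are independent, all three replicas are down together only 0.05³ ≈ 0.0125% of the time:

```shell
# Probability that three independent replicas (each down 5% of the time) fail at once
awk 'BEGIN { p = 0.05; printf "all three down: %.6f (%.4f%%)\n", p^3, p^3 * 100 }'
```

The independence assumption is the catch, of course — which is exactly why large services replicate across racks and datacenters rather than just across machines.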

Asked for a quote for Joe Weinman’s upcoming Cloudonomics: The Business Value of Cloud Computing book, I said:

The marginal cost of reliable hardware is linear while the marginal cost of reliable software is zero.

That is to say, once you’ve written reliability into your software you can scale out with cheap hardware without spending more on reliability per unit, while if you’re using reliable hardware then each unit needs to include reliability (typically in the form of redundant components), which quickly gets very expensive.
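As toy arithmetic (all numbers invented for illustration): at 100 servers, a per-unit hardware reliability premium already costs more than a one-off investment in software reliability, and the gap only widens with scale:

```shell
# Toy cost model — every figure below is assumed, purely for illustration
units=100             # number of servers
commodity=2000        # $ per commodity server
premium=3000          # extra $ per server for hardware reliability (dual PSUs, RAID, etc.)
sw_once=150000        # one-off cost of engineering reliability into the software
echo "reliable hardware: \$$(( units * (commodity + premium) ))"   # premium paid per unit
echo "reliable software: \$$(( sw_once + units * commodity ))"     # premium paid once
```

Double the fleet and the hardware bill doubles again, while the software investment is already sunk.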
The other two permutations are ineffective:

Unreliable software on unreliable hardware gives an unreliable system. That’s why you should never try to install unreliable software like Microsoft Exchange, Lotus Notes, Oracle etc. onto unreliable hardware like Amazon EC2:

unreliable software
unreliable hardware

Finally, reliable software on reliable hardware gives a reliable but inefficient and expensive system. That’s why you’re unlikely to see reliable software like Cassandra running on reliable platforms like VMware with brand name hardware:

reliable software
reliable hardware

Google enjoyed a significant competitive advantage for many years by using commodity components with a revolutionary proprietary software stack including components like the distributed Google File System (GFS). You can still see Google’s original hand-made racks built with motherboards laid on cork board at their Mountain View campus and at the Computer History Museum (per image above), but today’s machines are custom made by ODMs and are a lot more advanced. Meanwhile Facebook have decided to focus on their core competency (social networking) and are actively commoditising “unreliable” web scale hardware (by way of the Open Compute Project) and software (by way of software releases, most notably the Cassandra distributed database which is now used by services like Netflix).

The challenge for enterprises today is to adopt cheap reliable software so as to enable the transition away from expensive reliable hardware. That’s easier said than done, but my advice to them is to treat this new technology as another tool in the toolbox and use the right tool for the job. Set up cloud computing platforms like Cassandra and OpenStack and look for “low-hanging fruit” to migrate first, then deal with the reticent applications once the “center of gravity” of your information technology systems has moved to cloud computing architectures.

P.S. Before the server huggers get all pissy about my using the term “relatively unreliable software”: this is a perfectly valid way of achieving a reliable system — just not a cost effective one now that “relatively reliable software” is here.

Cloud computing’s concealed complexity

Cloud gears cropped

James Urquhart claims Cloud is complex—deal with it, adding that “If you are looking to cloud computing to simplify your IT environment, I’m afraid I have bad news for you” and citing his earlier CNET post drawing analogies to a recent flash crash.

Cloud computing systems are complex, in the same way that nuclear power stations are complex — they also have catastrophic failure modes, but given cloud providers rely heavily on their reputations they go to great lengths to ensure continuity of service (I was previously the technical program manager for Google’s global tape backup program so I appreciate this first hand). The best analogies to flash crashes are autoscaling systems making too many (or too few) resources available and spot price spikes, but these are isolated and there are simple ways to mitigate the risk (DDoS protection, market limits, etc.)

Fortunately this complexity is concealed behind well defined interfaces — indeed the term “cloud” itself comes from network diagrams in which complex interconnecting networks became the responsibility of service providers and were concealed by a cloud outline. Cloud computing is, simply, the delivery of information technology as a service rather than a product, and like other utility services there is a clear demarcation point (the first socket for telephones, the meter for electricity and the user or machine interface for computing).

Everything on the far side of the demarcation point is the responsibility of the provider, and users often don’t even know (nor do they need to know) how the services actually work — it could be an army of monkeys at typewriters for all they care. Granted it’s often beneficial to have some visibility into how the services are provided (in the same way that we want to know our phone lines are secure and power is clean), but we’ve developed specifications like CloudAudit to improve transparency.

Making simple topics complex is easy — what’s hard is making complex topics simple. We should be working to make cloud computing as approachable as possible, and drawing attention to its complexity does not further that aim. Sure there are communities of practitioners who need to know how it all works (and James is addressing that community via GigaOm), but consumers of cloud services should finally be enabled to apply information technology to business problems, without unnecessary complexity.

If you find yourself using complex terminology or unnecessary acronyms (e.g. anything ending with *aaS) then ask yourself if you’re not part of the problem rather than part of the solution.

VDI: Virtually Dead Idea?

I’ve been meaning to give my blog some attention (it’s been almost a year since my last post, and a busy one at that) and Simon Crosby’s (@simoncrosby) VDwhy? post seems as good a place to start as any. Simon and I are both former Citrix employees (“Citrites”) and we’re both interested in similar topics — virtualisation, security and cloud computing to name a few. It’s no surprise then that I agree with his sentiments about Virtual Desktop Infrastructure (VDI) and must admit to being perplexed as to why it gets so much attention, generally without question.

Windows NT (“New Technology”), the basis for all modern Microsoft desktop operating systems, was released in 1993 and shortly afterwards Citrix (having access to the source code) added the capability to support multiple graphical user interfaces concurrently. Windows NT’s underlying architecture allowed for access control lists to be applied to every object, which made it far easier for this to be done securely than would have been possible on earlier versions. They also added their own proprietary ICA (“Independent Computing Architecture“) network protocol such that these additional sessions could be accessed remotely, over the network, from various clients (Windows, Linux, Mac and now devices like iPads, although the user experience is, as Simon pointed out, subpar). This product was known as Citrix WinFrame and was effectively a fork of Windows NT 3.51 (I admit to having been an NT/WinFrame admin in a past life, but mostly focused on Unix/Linux integration). It is arguably what put Citrix (now a $2bn revenue company) on the map, and it still exists today as XenApp.

Terminal Services
It turns out this was a pretty good idea. So good, in fact, that (according to Wikipedia) “Microsoft required Citrix to license their MultiWin technology to Microsoft in order to be allowed to continue offering their own terminal services product, then named Citrix MetaFrame, atop Windows NT 4.0“. Microsoft introduced their own “Remote Desktop Protocol” and, armed with only a Windows NT 4.0 Terminal Server Edition beta CD, Matthew Chapman (who went to the same college, university and workplace as me and is to this day one of the smartest guys I’ve ever met) cranked out rdesktop — if I remember correctly, over the course of a weekend. I was convinced that this was the end of Citrix so imagine my surprise when I ended up working for them, on the other side of the world (Dublin, Ireland), almost a decade later!

About the time I left Citrix for a startup opportunity in Paris, France (2006) we were tinkering with a standalone ICA listener that could be deployed on a desktop operating system (bearing in mind that by now even Windows XP included Terminal Services and an RDP listener). I believe there was also a project working on the supporting infrastructure for cranking up and tearing down single-user virtual machines (rather than multiple Terminal Services sessions based on a single Windows Server, as was the status quo at the time), but I didn’t get the point and never bothered to play with it.

Even then I was curious as to what the perceived advantage was — having spent years hardening desktop and server operating systems at the University of New South Wales to “student proof” them I considered it far easier to have one machine servicing many users than many machines servicing many users. Actually there’s still one machine, only the virtualisation layer has been moved from between the operating system and user interface — where it arguably belongs — to between the bare metal and the operating system. As such it was now going to be necessary to run multiple kernels and multiple operating systems (with all the requisite configurations, patches, applications, etc.)!

Meanwhile there was work being done on “application virtualisation” (Project Tarpon) whereby applications are sandboxed by intercepting Windows’ I/O Request Packets (IRPs) and rewriting them as required. While this was a bit of a hack (Windows doesn’t require developers to follow the rules, so they don’t, and write whatever they want pretty much anywhere), it was arguably a step in the right — rather than wrong — direction.

At the end of the day the issue is simply that it’s better to share infrastructure (e.g. costs) between multiple users. In this case, why would I want to have one kernel and operating system dedicated to a single user (and exacting a toll in computing and human resources) when I can have one dedicated to many? In fact, why would I want to have an operating system at all, given it’s now essentially just a life support system for the browser? The only time I ever interact with the operating system is when something goes wrong and I have to fix it (e.g. install/remove software, modify configurations, move files, etc.) so I’d much rather have just enough operating system than one for every user and then a bunch more on servers to support them!

This is essentially what Google Chrome OS (one of the first client-side cloud operating environments) does, and I can’t help but wonder whether the chromoting feature isn’t going to play a role in this market (actually I doubt it but it’s early days).

The RightWay™
Five years ago (as I had one foot out the door of Citrix with my eye on a startup opportunity in Paris) I met with product strategist Will Harwood at the UK office and explained my vision for the future of Citrix products. I’d been working on the Netscaler acquisition (among others) and had a pretty good feeling for the direction things were going — I’d even virtualised the various appliances on top of Xen to deliver a common appliance platform long before it was acquired (and was happy to be there to see Citrix CEO Mark Templeton announce this product as Netscaler SDX at Interop).

It went something like this: XenApp (née MultiWin, WinFrame, MetaFrame and Presentation Server) is a mature, best-of-breed product that had (and probably still has) some serious limitations. Initially the network-based ICA Browser service was noisy, flaky and didn’t scale, so Independent Management Architecture (IMA) was introduced — a combination of a relational data store (SQL Server or Oracle) and a mongrel “IMA” protocol over which the various servers in a farm could communicate about applications, sessions, permissions, etc. Needless to say, centralised relational databases have since gone out of style in favour of distributed “NoSQL” databases, but more to the point — why were the servers trying to coordinate between themselves when the Netscaler was designed from the ground up to load balance network services?

My proposal was simply to take the standalone ICA browser and apply it to multi-user server operating systems rather than single-user client operating systems, ditching IMA altogether and delegating the task of (global) load balancing, session management, SSL termination, etc. to the Netscaler. This would be better/faster/cheaper than the existing legacy architecture, it would be more reliable in that failures would be tolerated and best of all, it would scale out rather than up. While the Netscaler has been used for some tasks (e.g. SSL termination), I’m surprised we haven’t seen anything like this (yet)… or have we?

I can think of at least one application where VDI does make sense — public multi-tenant services (like Desktone) where each user needs a high level of isolation and customisation.

For everyone else I’d suggest taking a long, hard look at the pros and cons because any attempt to deviate from the status quo should be very well justified. I use a MacBook Air and have absolutely no need nor desire to connect to my desktop from any other device, but if I did I’d opt for shared infrastructure (Terminal Services/XenApp) and for individual “seamless” applications rather than another full desktop. If I were still administering and securing systems I’d just create a single image and deploy it over the network using PXE — I’d have to do this for the hypervisor anyway so there’s little advantage in adding yet another layer of complexity and taking the hit (and cost) of virtualisation overhead. Any operating system worth its salt includes whole disk encryption so the security argument is largely invalidated too.
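For the record, deploying a single image over the network needs surprisingly little infrastructure — something like this dnsmasq sketch (the subnet, filenames and paths are placeholders, not a tested configuration):

```
# /etc/dnsmasq.d/pxe.conf — minimal proxy-DHCP + TFTP boot
# proxy mode leaves your existing DHCP server in charge of addressing
dhcp-range=192.168.1.0,proxy
dhcp-boot=pxelinux.0
enable-tftp
# put pxelinux.0 and your image's kernel/initrd under this directory
tftp-root=/srv/tftp
```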

I can think of few things worse than having to work on remote applications all day, unless the datacenter is very close to me (due to the physical constraints of the speed of light and the interactive/real-time nature of remote desktop sessions) and the network performance is strictly controlled/guaranteed. We go to great lengths to design deployments that are globally distributed with an appropriate level of redundancy, while being close enough to the end users to deliver the strict SLAs demanded by interactive applications — if you’re not going to bother to do it properly then you might not want to do it at all.

Citrix OpenCloud™ is neither Open nor Cloud

I’ve been busying myself recently establishing the Open Cloud Initiative which has been working with the community to establish a set of principles outlining what it means to be open cloud. As such Citrix’s announcement this week that they were “expanding their leadership in open cloud computing“(?) with the “Citrix OpenCloud™ Infrastructure platform” was somewhat intriguing, particularly for someone who’s worked with Citrix technology for 15 years and actually worked for the company for a few years before leaving to get involved in cloud computing. I was already excited to see them getting involved with OpenStack a few weeks ago as I’m supportive of this project and amazed by the level of community interest and participation, though I was really hoping that they were going to adopt the stack and better integrate it with Xen.

As usual the release itself was fluffy and devoid of clear statements as to what any of this really meant, and it doesn’t help that Citrix rebrands products more often than many change underwear. Armed with their product catalogue and information about their previous attempt to crack into the cloud space with Citrix Cloud Center (C3) I set about trying to decipher the announcement. The first thing that sprung out was the acquisition of VMlogix – a web based hypervisor management tool targeting lab environments that happens to also support Amazon EC2. Given OpenStack supports the EC2 API, perhaps this is how they plan to manage it as well as Xen “from a single management console“? Also, as Citrix are about to “add [the] intuitive, self-service interface to its popular XenServer® virtualization platform” it will be interesting to see how the likes of Enomaly feel about having a formidable ($10B+) opponent on their turf… not to mention VMware (but apparently VMware does NOT compete with Citrix – now there’s wishful thinking if I’ve ever seen it!).

Citrix also claim that customers will be able to “seamlessly manage a mix of public and private cloud workloads from a single management console, even if they span across a variety of different cloud providers“. Assuming they’re referring to VMlogix, will it be open sourced? I doubt it… and here’s the thing – I don’t expect them to. Nobody says Citrix has to be open – VMware certainly aren’t and that hasn’t kept them from building a $30B+ business. However, if they want to advertise openness as a differentiator then they should expect to be called to justify their claims. From what I can tell only the Xen hypervisor itself is open source software and it’s not at all clear how they plan to “leverage” Open vSwitch, nor whether OpenStack is even relevant given they’re just planning to manage it from their “single management console”. Even then, in a world where IT is delivered as a service rather than a product, the formats and interfaces are far more important than having access to the source itself; Amazon don’t make Linux or Xen modifications available for example but that doesn’t make them any less useful/successful (which is not to say that an alternative open source implementation like OpenStack isn’t important – it absolutely is).

Then there’s the claim that any of this is “cloud”… Sure, I can use Intel chips to deliver a cloud service but does that make Intel chips “cloud”? No. How about Linux (which powers the overwhelming majority of cloud services today)? Absolutely not. So far as I can tell most of the “Citrix OpenCloud Framework” is little more than their existing suite of products, cloudwashed and rebranded:

  • CloudAccess ~= Citrix Password Manager
  • CloudBridge ~= Citrix Branch Repeater
  • On-Demand Apps & Demos ~= XenApp (aka WinFrame aka MetaFrame aka CPS)
  • On-Demand Desktops ~= XenDesktop
  • Compliance ~= XenApp & XenDesktop
  • Onboarding ~= Project Kensho
  • Disaster Recovery and Dev & Test ~= suites of above

At the end of the day Simon Crosby (one of the Xen guys who presumably helped convince Citrix an open source hypervisor was somehow worth $1/2bn) has repeatedly stated that Citrix OpenCloud™ is (and I quote) “100% open source software”, only to backtrack by saying “any layer of the open stack you can use a proprietary compoent (sic)” when quizzed about NetScaler, “another key component of the OpenCloud platform”, and @Citrix_Cloud helpfully clarified that “OPEN means it’s plug-compatible with other options, like some open-source gear you cobble together with mobo from Fry’s“.

Maybe they’re just getting started down the open road (I hope so), but this isn’t my idea of “open” or “cloud” – and certainly not enough to justify calling it “OpenCloud”.

How I tried to keep OCCI alive (and failed miserably)

I was going to let this one slide but following a calumniatory missive to his “followers” by the Open Cloud Computing Interface‘s self-proclaimed “Founder & Chair”, Sun refugee Thijs Metsch, I have little choice but to respond in my defence (particularly as “The Chairs” were actively soliciting followup from others on-list in support).

Basically a debate came to a head that has been brewing on- and off-list for months regarding the Open Grid Forum (OGF)‘s attempts to prevent me from licensing my own contributions (essentially the entire normative specification) under a permissive Creative Commons license (as an additional option to the restrictive OGF license) and/or submit them to the IETF as previously agreed and as required by the OGF’s own policies. This was on the grounds that “Most existing cloud computing specifications are available under CC licenses and I don’t want to give anyone any excuses to choose another standard over ours” and that the IETF has an excellent track record of producing high quality, interoperable, open specifications by way of a controlled yet open process. This should come as no surprise to those of you who know I am and will always be a huge supporter of open cloud, open source and open standards.

The OGF process had failed to deliver after over 12 months of deadline extensions – the current spec is frozen in an incomplete state (lacking critical features like collections, search, billing, security, etc.) as a result of being prematurely pushed into public comment, nobody is happy with it (including myself), the community has all but dissipated (except for a few hard core supporters, previously including myself) and software purporting to implement it actually implements something completely different altogether (see for yourself). There was no light at the end of the tunnel and with both OGF29 and IETF78 just around the corner I yesterday took a desperate gamble to keep OCCI alive (as a CC-licensed spec, an IETF Internet-Draft or both).

I confirmed that I was well within my rights to revoke any copyright, trademark and other rights previously granted (apparently it was amateur hour as OGF had failed to obtain an irrevocable license from me for my contributions) and volunteered to do so if restrictions on reuse by others weren’t lifted and/or the specification submitted to the IETF process as agreed and required by their own policies. Thijs’ colleague (and quite probably his boss at Platform Computing), Christopher Smith (who doubles as OGF’s outgoing VP of Standards) promptly responded, questioning my motives (which I can assure you are pure) and issuing a terse legal threat about how the “OGF will protect its rights” (against me over my own contributions no less). Thijs then followed up shortly after saying that they “see the secretary position as vacant from now on” and despite claims to the contrary I really couldn’t give a rats arse about a title bestowed upon me by a past-its-prime organisation struggling (and failing I might add) to maintain relevance. My only concern is that OCCI have a good home and if anything Platform have just captured the sort of control over it as VMware enjoy over DMTF/vCloud, with Thijs being the only remaining active editor.

I thought that would be the end of it and had planned to let sleeping dogs lie until today’s disgraceful, childish, coordinated and most of all completely unnecessary attack on an unpaid volunteer that rambled about “constructive technical debate” and “community driven consensus”, thanking me for my “meaningful contributions” but then calling on others to take up the pitchforks by “welcom[ing] any comments on this statement” on- or off-list. The attacks then continued on Twitter with another OGF official claiming that this “was a consensus decision within a group of, say, 20+ active and many many (300+) passive participants” (despite this being the first any of us had heard of it) and then calling my claims of copyright ownership “genuine bullshit” and report of an implementor instantly pulling out because they (and I quote) “can’t implement something if things are not stable” a “damn lie“, claiming I was “pissed” and should “get over it and stop crying” (needless to say they were promptly blocked).

Anyway, as you can see there’s more to it than Thijs’ diatribe would have you believe, and so far as I’m concerned OCCI, at least in its current form, is long since dead. I am undecided as to whether to revoke OGF’s licenses at this time, but it probably doesn’t matter as they agree I retain the copyrights and I think their chance of success is negligible – nobody in their right mind would implement the product of such a dysfunctional group, and those who already did have long since found alternatives. That’s not to say the specification won’t live on in another form, but now the OGF have decided to go nuclear it’s going to have to be in a more appropriate forum – one that furthers the standard rather than constantly holding it back.

Update: My actions have been universally supported outside of OGF and in the press (and here and here and here and here etc.) but unsurprisingly universally criticised from within – right up to the chairman of the board who claimed it was about trust rather than IPR (BS – I’ve been crystal clear about my intentions from the very beginning). They’ve done a bunch of amateur lawyering and announced that “OCCI is becoming an OGF proposed standard” but have not been able to show that they were granted a perpetual license to my contributions (they weren’t). They’ve also said that “OGF is not really against using Creative Commons” but clearly have no intention to do so, apparently preferring to test my resolve and, if need be, the efficacy of the DMCA. Meanwhile back at the ranch the focus is on bright shiny things (RDF/RDFa) rather than getting the existing specification finished.

Protip: None of this has anything to do with my current employer so let’s keep it that way.

Trend Micro abandons Intercloud™ trademark application

Just when I thought we were going to be looking at another trademark debacle not unlike Dell’s attempt at “cloud computing” back in 2008 (see Dell cloud computing™ denied) it seems luck is with us in that Trend Micro have abandoned their application #77018125 for a trademark on the term Intercloud (see NewsFlash: Trend Micro trademarks the Intercloud™). They had until 5 February 2010 to file for an extension and according to USPTO’s Trademark Document Retrieval system they have now well and truly missed the date (the last extension was submitted at the 11th hour, at 6pm on the eve of expiry).

Like Dell, Trend Micro were issued a “Notice of Allowance” on 5 August 2008 (actually Dell’s “Notice of Allowance” for #77139082 was issued less than a month before, on 8 July 2008, and cancelled just afterwards, on 7 August 2008). Unlike Dell though, Trend Micro just happened to be in the right place at the right time rather than attempting to lay claim to an existing, rapidly developing technology term (“cloud computing”).

Having been issued a Notice of Allowance both companies just had to submit a Statement of Use and the trademarks were theirs. With Dell it was just lucky that I happened to discover and reveal their application during this brief window (after which the USPTO cancelled their application following widespread uproar), but with Trend Micro it’s likely they don’t actually have a product today with which to use the trademark.

A similar thing happened to Psion in late 2008, who couldn’t believe their luck when the term “netbook” became popular long after they had discontinued their product line by the same name. Having realised they still held an active trademark, they threatened all and sundry over it, eventually claiming Intel had “unclean hands” and asking for $1.2bn, only to back down when push came to shove. One could argue that just as we have “submarine patents“, we also have “submarine trademarks”.

In this case, back on September 25, 2006 Trend Micro announced a product coincidentally called “InterCloud” (see Trend Micro Takes Unprecedented Approach to Eliminating Botnet Threats with the Unveiling of InterCloud Security Service), which they claimed was “the industry’s most advanced solution for identifying botnet activity and offering customers the ability to quarantine and optionally clean bot-infected PCs“. Today’s Intercloud is a global cloud of clouds, in the same way that the Internet is a global network of networks – clearly nothing like what Trend Micro had in mind. It’s also both descriptive (a portmanteau describing interconnected clouds) and generic (in that it cannot serve as a source identifier for a given product or service), which basically means it should be found ineligible for trademark protection should anyone apply again in future.

Explaining further, the Internet has kept us busy for a few decades simply by passing packets between clients and servers (most of the time). It’s analogous to the bare electricity grid, allowing connected nodes to transfer electrical energy between one another (typically from generators to consumers but with alternative energy sometimes consumers are generators too). Cloud computing is like adding massive, centralised power stations to the electricity grid, essentially giving it a life of its own.

I like the term Intercloud, mainly because it takes the focus away from the question of “What is cloud?”, instead drawing attention to interoperability and standards where it belongs. Kudos to Trend Micro for this [in]action – whether intentional or unintentional.

Introducing Planet Cloud: More signal, less noise.

As you are no doubt well aware there is a large and increasing amount of noise about cloud computing, so much so that it’s becoming increasingly difficult to extract a clean signal. This has always been the case but now that even vendors like Oracle (who have previously been sharply critical of cloud computing, in part for exactly this reason) are clambering aboard the bandwagon, it’s nearly impossible to tell who’s worth listening to and who’s just trying to sell you yesterday’s technology under today’s label.

It is with this in mind that I am happy to announce Planet Cloud, a news aggregator for cloud computing articles that is particularly fussy about its sources. In particular, unless you talk all cloud, all the time (which is rare – even I take a break every once in a while) then your posts won’t be included unless you can provide a cloud-specific feed. Fortunately most blogging software supports this capability and many of the feeds included at launch take advantage of it. You can access Planet Cloud at: or @planetcloud

Those of you aware of my disdain for SYS-CON’s antics might be surprised that we’ve opted to ask for forgiveness rather than permission, but you’ll also notice that we don’t run ads (nor do we have any plans to – except for a few that come to us via feeds and are thus paid to authors). As such this is a non-profit service to the cloud computing community intended to filter out much of the noise, in the same way that the Clouderati provides a fast track to the heart of the cloud computing discussion on Twitter. An unwanted side effect of this approach is that it is not possible for us to offer the feeds under a Creative Commons license, as would usually be the case for content we own.

Many thanks to Tim Freeman (@timfaas) for his contribution not only of the domain itself, but also of a comprehensive initial list of feeds (including many I never would have thought of myself). Thanks also to Rackspace Cloud, who provide our hosting and who have done a great job of keeping the site alive during the testing period over the last few weeks. Thanks to the Planet aggregator, simple but effective Python software for collating many feeds. And finally thanks to the various authors who have [been] volunteered for this project – hopefully we’ll be able to drive some extra traffic your way (of course if you’re not into it then that’s fine too – we’ll just remove you from the config file and you’ll vanish within 5 minutes).

Press Release: Cloud computing consultancy condemns controversial censorship conspiracy

SYDNEY, 24 December 2009: Sydney-based Australian Online Solutions today condemned the government’s plans to introduce draconian Internet censorship laws in Australia.

Senator Stephen Conroy (Minister for Broadband, Communications and the Digital Economy) recently announced the introduction of mandatory Internet Service Provider (ISP) level filtering of Refused Classification (RC)-rated content as well as grants to encourage ISPs to filter wider categories of content. This would require the implementation of complicated, expensive and unreliable, yet trivially circumvented filtering technology at the cost of the taxpayer and Internet user, despite a strong message having been sent that this is both unwanted and unwarranted. Reader polls conducted by the Sydney Morning Herald and The Age newspaper showed a staggering 95% of some 25,000 readers reject the federal government’s plans to censor the Internet in Australia, on the basis that it impinges on their freedom. “There are better and safer ways to tackle the problem, such as educating parents, teachers and children, offering customisable filtering as a value-added option and improving law enforcement (including cooperation with other countries)” said Sam Johnston, Australian Online Solutions’ Founder & CTO.

The full frontal assault on civil liberties aside, Australian Online Solutions has also raised some serious technical concerns about the program. “At a time when individuals and businesses are looking to shed expensive legacy systems in favour of cheap, scalable Internet based services, any action which can only impair performance and reliability while threatening to strangle Australia’s connectivity with the outside world calls for extensive justification”, said Johnston. “Cloud computing, which delivers computing services over the Internet on a utility basis – like electricity – gives its users a significant advantage over competitors. However web-based applications such as Facebook, Gmail, Hotmail and Twitter are extremely sensitive to bandwidth and latency constraints introduced by censorship technology”, added Johnston. “The proposed law threatens to exclude Australia from this large and growing industry altogether, both as provider and consumer, at a time when it could emerge as a market leader. Would you buy an Internet-based service from China or Iran, or even use one if you were based there?”. Analysts Merrill Lynch and Gartner estimate the cloud computing market to reach $175 billion in the coming years.

Trials commissioned by Senator Conroy and conducted by “highly reputable and independent testing company” Enex Testlab were also called into question, on both technical and conflict of interest bases. Enex Testlab, a supplier of “independent” evaluation, purchasing advice and product review services, boasts a corporate client list with over a dozen vendors of filtering technology including Content Keeper Technologies, Content Watch and Internet Sheriff Technology (accounting for around one quarter of all clients listed) and offers formal certification for content filters. As such it is believed they have strong motivation to avoid releasing a report directly or indirectly critical of their clients’ offerings.

Furthermore, the scope of the testing was artificially constrained, critical controls (such as connection consistency) were missing and success criteria were poorly defined or non-existent from the outset, in a trial that appears to be a manufactured success. Nonetheless unflattering results which highlighted serious deficiencies in the proposal were disingenuously touted by Senator Conroy as showing “100 percent accuracy” with “negligible impact on internet speed”.

Other problems with the fatally flawed and heavily criticised report include:

  • Proof that “a technically competent user could circumvent the filtering technology” while “circumvention prevention measures can result in greater degradation of internet performance”.
  • Admission that all filters were “not effective in the case of non-web based protocols such as instant messaging, peer-to-peer or chat rooms”.
  • False positive rates (over-blocking of legitimate/innocuous content) of up to 3.4% (over 5.1 billion pages per Internet Archive estimates) with failure rates as high as 2% (3 billion pages) considered “low”.
  • False negative rates (passing of inappropriate content) exceeding 20% (over 30 billion pages) with failure rates as high as 30% considered “reasonable by industry standards” (45 billion pages).
  • Admission that 100% accuracy is “unlikely to be achieved” and that the false positive rate increases with sensitivity, with no attempt to scientifically determine acceptable failure rates.
  • Faults being perceptible to end users, with some customers reporting “over-blocking and/or under-blocking of content during the pilot” while considering “mechanisms for self-management” and “improved visibility of the filter in action” to be “important”.
  • Unjustified assumptions including that “performance impact is minimal if between 10 and 20 percent”, while at least one system “displayed a noticeable performance impact”. Some customers “believe they experienced some speed degradation”.
  • Admission of “uncontrollable variables”, including ones that could result in “40 percent performance degradation over theoretical maximum line-rate, or more in some cases”, even at speeds less than 1/12 that of the proposed National Broadband Network (NBN).
  • Admission that reliable recognition of IP addresses to be filtered is unreliable (indeed often impossible), particularly for large-scale websites that use load balancing (e.g. most cloud computing solutions).
  • Results that were “irregular/incorrect” and “highly anomalous with reasonable expectations” (such as physically impossible improvements in performance when transferring encrypted, random payloads).
  • Complete absence of quantitative cost analysis (e.g. what financial load will be borne by both the taxpayer and Internet subscriber, both up front and on an ongoing basis), as well as any secondary costs such as decreased efficiency.
  • Overall results indicating that 1 in 5 customers’ needs were not met, with 1 in 3 opting out of continued use of the filtered service.
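The page counts quoted alongside the percentages above imply a total web size of roughly 150 billion pages (the Internet Archive estimate referenced in the list). As a back-of-envelope check, a short sketch can verify that each quoted failure rate does in fact correspond to the stated number of pages, assuming that 150 billion figure:

```python
# Back-of-envelope check of the page counts quoted in the list above.
# TOTAL_PAGES is an assumed figure (~150 billion pages, the Internet
# Archive estimate implied by the percentages in the report).
TOTAL_PAGES = 150e9

def pages_affected(rate):
    """Pages affected at a given failure rate (as a fraction)."""
    return rate * TOTAL_PAGES

for label, rate in [("3.4% false positives", 0.034),
                    ("2% false positives", 0.02),
                    ("20% false negatives", 0.20),
                    ("30% false negatives", 0.30)]:
    print(f"{label}: ~{pages_affected(rate) / 1e9:.1f} billion pages")
```

Running this reproduces the figures in the list: 3.4% works out to about 5.1 billion pages, 2% to 3 billion, 20% to 30 billion and 30% to 45 billion.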

In addition to contacting local representatives, Australian Online Solutions encourages concerned individuals and businesses to join and support organisations including Electronic Frontiers Australia (EFA), GetUp and The Pirate Party Australia. The immediate availability of a limited number of sponsorships for founding members of The Pirate Party Australia is also announced for those who want to get involved but, for whatever reason, cannot afford the membership fees in this difficult economic environment. To take advantage of this opportunity please contact with a brief explanation of your situation.

“Anyone who cares about their future and that of their children and grandchildren should take action now”, said Johnston, who applied to both The Pirate Party Australia and Electronic Frontiers Australia (EFA) in response to Senator Conroy’s announcement. “The government’s gift to us this Christmas was draconian censorship, so let’s return the favour in helping The Pirate Party Australia attain official status by acquiring 500 exclusive members”.


About Australian Online Solutions Pty Ltd
Australian Online Solutions is a boutique consultancy that specialises in cloud computing solutions for large enterprise, government and education clients throughout Australia, Europe and the USA. Founded in 1998, Australian Online Solutions has over a decade of experience delivering next generation Internet-based systems and is a pioneer in the cloud computing space, whereby technology previously delivered as hardware and software products is delivered as services over the Internet. Cloud computing is Internet (‘cloud’) based development and use of computer technology (‘computing’). For more information refer to

About The Pirate Party Australia
The Pirate Party Australia ( is a political party with a serious platform of intellectual property law reform and protection of privacy rights and freedom of speech. The Pirate Party Australia aims to protect civil liberties and promote culture and innovation, primarily through:

  • Decriminalisation of non-commercial copyright infringement
  • Protection of freedom of speech rights
  • Protection of privacy rights
  • Opposition to internet censorship
  • Support for an R18+ rating for games
  • Reforming the life + 70 years copyright length
  • Providing parents with the tools to run their own families.

About Electronic Frontiers Australia (EFA)
Electronic Frontiers Australia (EFA) is a non-profit national organisation representing Internet users concerned with on-line freedoms and rights. The EFA is the organisation responsible for the “No Clean Feed” ( grassroots movement to stop Internet censorship in Australia. They are also dealing with related issues such as the Anti-Counterfeiting Trade Agreement (ACTA) and censorship of computer games. Individual memberships start at $27.50 and organisational memberships are available. For more information refer to

About GetUp
GetUp is an independent, grass-roots community advocacy organisation that is actively tackling this and other pertinent issues including climate change. For more information about how to get involved refer to

About Sam Johnston
Sam Johnston, Australian Online Solutions’ Founder and CTO, is a prominent blogger on cloud computing, security and open source topics. He maintains a blog at

Press Contact:
Sam Johnston
+61 2 8898 9090
Australian Online Solutions Pty Ltd

For the latest version of this release please refer to