Getting started with OpenStack in your lab

Having recently finished building my new home lab I wanted to put the second server to good use by installing OpenStack (the first is running VMware ESXi 5.0 with Windows 7, Windows 8, Windows 8 Server and Ubuntu 12.04 LTS virtual machines). I figured many of you would benefit from a detailed walkthrough so here it is (without warranty, liability, support, etc).

The two black boxes on the left are HP ProLiant MicroServer N36Ls with modest AMD Athlon(tm) II Neo 1.3GHz dual-core processors and 8GB RAM, and the one on the right is an iomega ix4-200d NAS box providing 8TB of networked storage (including over iSCSI for ESXi should I run low on direct attached storage). There’s a 5 port gigabit switch stringing it all together and a 500Mbps CPL (powerline) device connecting it back up to the house. You should be able to set all this up for under two grand. Before you try to work out where I live, the safe is empty as I don’t trust electronic locks.

[Photo: the home lab hardware]

Download Ubuntu Server (12.04 LTS, or whatever the latest long term support release is) and write it to a USB key — if you’re a Mac OS X only shop then you’ll want to follow these instructions. Boot your server with the USB key inserted and it should drop you straight into the installer (if not, you may need to tell the BIOS to boot from USB by pressing the appropriate key, usually F2 or F10, at the appropriate time). Most of the defaults are fine, but you’ll probably want to select the “OpenSSH Server” option in tasksel unless you want to do everything from the console (just be sure to tighten up the default configuration if you care about security). Unless you like mundane admin tasks you might want to enable automatic updates too. Even so, let’s make sure any updates since release have been applied:

sudo apt-get update
sudo apt-get -u upgrade
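
If you do want automatic updates, Ubuntu handles them with the unattended-upgrades package; here's a minimal sketch, assuming the stock 12.04 packaging:

# install and enable unattended security updates (the second command answers the debconf prompt)
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades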

Next you’ll want to install DevStack (“a documented shell script to build complete OpenStack development environments from Rackspace Cloud Builders”), but first you’ll need to get git:

sudo apt-get install git

Now grab the latest version of DevStack from GitHub:

git clone git://github.com/openstack-dev/devstack.git

And run the script:

cd devstack/; ./stack.sh

The first thing it will do is ask you for passwords for MySQL, Rabbit, a SERVICE_TOKEN and SERVICE_PASSWORD and finally a password for Horizon & Keystone. I used the (excellent) 1Password to generate passwords like “sEdvEuHNNeA7mYJ8Cjou” (the script doesn’t like special characters) and stored them in a secure note.
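
If you'd rather not be prompted at all (handy if you end up re-running the script), DevStack will also read these values from a localrc file in the devstack directory. The variable names below are my recollection of what stackrc expects, so treat this as a sketch and check the script itself:

# devstack/localrc: pre-seed the values stack.sh would otherwise prompt for
ADMIN_PASSWORD=qqG6YTChVLzEHfTDzm8k
MYSQL_PASSWORD=<generated password>
RABBIT_PASSWORD=<generated password>
SERVICE_PASSWORD=<generated password>
SERVICE_TOKEN=<generated value>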

The script will then go and download dozens of dependencies, which are conveniently packaged by Ubuntu and/or the upstream Debian distribution, run setup.py for a few python packages, clone some repositories, etc. While you wait you may as well go read the script to understand what’s going on. At this point the script failed because /opt/stack/nova didn’t exist. I filed bug 995078 but the script succeeded when I ran it for a second time — looks like it may have been a glitch with GitHub.

You should end up with something like this:

Horizon is now available at http://10.0.1.10/
Keystone is serving at http://10.0.1.10:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: qqG6YTChVLzEHfTDzm8k
This is your host ip: 10.0.1.10
stack.sh completed in 431 seconds.
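
A couple of things worth knowing at this point (both standard DevStack behaviour as far as I recall, so verify against the README): the services all run inside a screen session, and there's a companion script to tear everything down again:

# attach to the screen session running the OpenStack services (Ctrl-A " lists the windows)
screen -x stack
# stop the services and clean up when you're done experimenting
./unstack.sh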

If you browse to that address you’ll be able to log in to the console:

[Screenshot: OpenStack login]

That will drop you into the Admin section of the OpenStack Dashboard (Horizon) where you can get an overview and administer instances, services, flavours, images, projects, users and quotas. You can also download OpenStack and EC2 credentials from the “Settings” pages.

[Screenshot: OpenStack console]

Switch over to the “Project” tab and click “Create Keypair” under “Access & Security” (so you can access any instances you create):

[Screenshot: OpenStack keypair creation]

The key pair will be created and downloaded as a .pem file (e.g. admin.pem).
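
Incidentally, once the command line clients are configured (see below) you can also create a key pair without touching the dashboard. A hedged sketch using novaclient, with “admin” as an arbitrary key name:

# generate a key pair called "admin" and capture the private key locally
nova keypair-add admin > admin.pem
chmod 600 admin.pem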

Now select “Images & Snapshots” under “Manage Compute” and you’ll be able to launch the cirros-0.3.0-x86_64-uec image which is included for testing. Simply click “Launch” under “Actions”:

[Screenshot: OpenStack project tab]

Give it a name like “Test”, select the key pair you created above and click “Launch Instance”:

[Screenshot: OpenStack launch dialog]

You’ll see a few tasks executed and your instance should be up and running (Status: Active) in a few seconds:

[Screenshot: OpenStack instance spawning]

Now what? First, try to ping the running instance from within the SSH session on the server (you won’t be able to ping it from your workstation, as the fixed 10.0.0.x addresses are only routable from the OpenStack host itself):

$ ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=0.734 ms
64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.585 ms
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.588 ms

Next let’s copy some EC2 credentials over to our user account on the server so we can use the command line euca-* tools. Go to “Settings” in the top right and then the “EC2 Credentials” tab. Now “Download EC2 Credentials”, which come in the form of a ZIP archive containing an X.509 certificate (cert.pem) and key (pk.pem) pair as well as a CA certificate (cacert.pem) and an rc script (ec2rc.sh) to set various environment variables which tell the command line tools where to find these files:

[Screenshot: EC2 credentials]
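
For the curious, ec2rc.sh is just a handful of exports, roughly along these lines. This is reconstructed from memory rather than copied verbatim, so expect yours to differ in the details:

# illustrative ec2rc.sh: the access key, secret and endpoint come from the downloaded ZIP
export EC2_ACCESS_KEY=<your access key>
export EC2_SECRET_KEY=<your secret key>
export EC2_URL=http://10.0.1.10:8773/services/Cloud
export EC2_USER_ID=42
# paths assume you stash the unzipped files in ~/.euca as described below
export EC2_PRIVATE_KEY=${HOME}/.euca/pk.pem
export EC2_CERT=${HOME}/.euca/cert.pem
export EUCALYPTUS_CERT=${HOME}/.euca/cacert.pem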

While you’re at it you may as well grab your OpenStack Credentials which come in the form of an rc script (openrc.sh) only. It too sets environment variables which can be seen by tools running under that shell.

[Screenshot: OpenStack credentials]

Let’s copy them (and the key pair from above) over from our workstation to the server:

scp b34166e97765499b9a75f59eaff48b98-x509.zip openrc.sh admin.pem samj@10.0.1.10:~

Stash the EC2 credentials in ~/.euca:

mkdir ~/.euca; chmod 0700 ~/.euca; cd ~/.euca
cp ~/b34166e97765499b9a75f59eaff48b98-x509.zip ~/.euca; unzip *.zip

Finally let’s source the rc scripts:

source ~/.euca/ec2rc.sh
source ~/openrc.sh

You’ll see the openrc.sh script asks you for a password. Given this is a dev/test environment and we’ve used a complex password, let’s modify the script and hard code the password by commenting out the last 3 lines and adding a new one to export OS_PASSWORD:

# With Keystone you pass the keystone password.
#echo "Please enter your OpenStack Password: "
#read -s OS_PASSWORD_INPUT
#export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_PASSWORD=qqG6YTChVLzEHfTDzm8k

You probably don’t want anyone seeing your password or key pair so let’s lock down those files:

chmod 0600 ~/openrc.sh ~/admin.pem

Just make sure the environment variables are set correctly:

echo $EC2_USER_ID
42
echo $OS_USERNAME
admin

Finally we should be able to use the EC2 command line tools:

euca-describe-instances 
RESERVATION r-8wvdh1c7 b34166e97765499b9a75f59eaff48b98 default
INSTANCE i-00000001 ami-00000001 test test running None (b34166e97765499b9a75f59eaff48b98, ubuntu) 0 m1.tiny 2012-05-05T13:59:47.000Z nova aki-00000002 ari-00000003 monitoring-disabled 10.0.0.2 10.0.0.2 instance-store
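
If you want the instance reachable from beyond the host itself (recall the ping caveat above) you’ll typically also need to open up the default security group. A sketch using the euca2ools we just configured (double-check the flags against your version):

# allow ICMP (ping) and SSH in to instances in the default security group
euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default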

As well as the native nova client:

nova list
+--------------------------------------+------+--------+------------------+
| ID                                   | Name | Status | Networks         |
+--------------------------------------+------+--------+------------------+
| 44a43355-7f95-4621-be61-d34fe53e50a8 | Test | ACTIVE | private=10.0.0.2 |
+--------------------------------------+------+--------+------------------+

You should be able to ssh to the running instance using the IP address and key pair from above:

ssh -i admin.pem -l cirros 10.0.0.2
$ uname -a
Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux
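
When you’re finished poking around you can tear the test instance down from either toolchain, using the instance ID and name reported above:

# terminate via the EC2 tools...
euca-terminate-instances i-00000001
# ...or via novaclient, by name
nova delete Test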

That’s all for today — I hope you find the process as straightforward as I did and if you do follow these instructions then please leave a comment below (especially if you have any tips or solutions to problems you run into along the way).

Simplifying cloud: Reliability

[Image: the original Google server rack]

Reliability in cloud computing is a very simple concept which I’ve explained in many presentations but never actually documented:

Traditional legacy IT systems consist of relatively unreliable software (Microsoft Exchange, Lotus Notes, Oracle, etc.) running on relatively reliable hardware (Dell, HP, IBM servers, Cisco networking, etc.). Unreliable software is not designed for failure and thus any fluctuations in the underlying hardware platform (including power and cooling) typically result in partial or system-wide outages. In order to deliver reliable service using unreliable software you need to use reliable hardware, typically employing lots of redundancy (dual power supplies, dual NICs, RAID arrays, etc.). In summary:

  unreliable software
+ reliable hardware
= reliable system

Cloud computing platforms typically prefer to build reliability into the software such that it can run on cheap commodity hardware. The software is designed for failure and assumes that components will misbehave or go away from time to time (which will always be the case, regardless of how much you spend on reliability – the more you spend the lower the chance but it will never be zero). Reliability is typically delivered by replication, often in the background (so as not to impair performance). Multiple copies of data are maintained such that if you lose any individual machine the system continues to function (in the same way that if you lose a disk in a RAID array the service is uninterrupted). Large scale services will ideally also replicate data in multiple locations, such that if a rack, row of racks or even an entire datacenter were to fail then the service would still be uninterrupted. In summary:

  reliable software
+ unreliable hardware
= reliable system

Asked for a quote for Joe Weinman’s upcoming Cloudonomics: The Business Value of Cloud Computing book, I said:

The marginal cost of reliable hardware is linear while the marginal cost of reliable software is zero.

That is to say, once you’ve written reliability into your software you can scale out with cheap hardware without spending more on reliability per unit, while if you’re using reliable hardware then each unit needs to include reliability (typically in the form of redundant components), which quickly gets very expensive.

The other two permutations are ineffective:

Unreliable software on unreliable hardware gives an unreliable system. That’s why you should never try to install unreliable software like Microsoft Exchange, Lotus Notes, Oracle etc. onto unreliable hardware like Amazon EC2:

  unreliable software
+ unreliable hardware
= unreliable system

Finally, reliable software on reliable hardware gives a reliable but inefficient and expensive system. That’s why you’re unlikely to see reliable software like Cassandra running on reliable platforms like VMware with brand name hardware:

  reliable software
+ reliable hardware
= reliable (but expensive) system

Google enjoyed a significant competitive advantage for many years by using commodity components with a revolutionary proprietary software stack including components like the distributed Google File System (GFS). You can still see Google’s original hand-made racks built with motherboards laid on cork board at their Mountain View campus and at the Computer History Museum (per image above), but today’s machines are custom made by ODMs and are a lot more advanced. Meanwhile Facebook have decided to focus on their core competency (social networking) and are actively commoditising “unreliable” web scale hardware (by way of the Open Compute Project) and software (by way of software releases, most notably the Cassandra distributed database which is now used by services like Netflix).

The challenge for enterprises today is to adopt cheap reliable software so as to enable the transition away from expensive reliable hardware. That’s easier said than done, but my advice to them is to treat this new technology as another tool in the toolbox and use the right tool for the job. Set up cloud computing platforms like Cassandra and OpenStack and look for “low-hanging fruit” to migrate first, then deal with the reticent applications once the “center of gravity” of your information technology systems has moved to cloud computing architectures.

P.S. Before the server huggers get all pissy about my using the term “relatively unreliable software”, this is a perfectly valid way of achieving a reliable system — just not a cost-effective one now that “relatively reliable software” is here.

Cloud computing’s concealed complexity

[Image: cloud gears]

James Urquhart claims Cloud is complex—deal with it, adding that “If you are looking to cloud computing to simplify your IT environment, I’m afraid I have bad news for you” and citing his earlier CNET post drawing analogies to a recent flash crash.

Cloud computing systems are complex, in the same way that nuclear power stations are complex — they also have catastrophic failure modes, but given cloud providers rely heavily on their reputations they go to great lengths to ensure continuity of service (I was previously the technical program manager for Google’s global tape backup program so I appreciate this first hand). The best analogies to flash crashes are autoscaling systems making too many (or too few) resources available and spot price spikes, but these are isolated and there are simple ways to mitigate the risk (DDoS protection, market limits, etc.)

Fortunately this complexity is concealed behind well defined interfaces — indeed the term “cloud” itself comes from network diagrams in which complex interconnecting networks became the responsibility of service providers and were concealed by a cloud outline. Cloud computing is, simply, the delivery of information technology as a service rather than a product, and like other utility services there is a clear demarcation point (the first socket for telephones, the meter for electricity and the user or machine interface for computing).

Everything on the far side of the demarcation point is the responsibility of the provider, and users often don’t even know (nor do they need to know) how the services actually work — it could be an army of monkeys at typewriters for all they care. Granted it’s often beneficial to have some visibility into how the services are provided (in the same way that we want to know our phone lines are secure and power is clean), but we’ve developed specifications like CloudAudit to improve transparency.

Making simple topics complex is easy — what’s hard is making complex topics simple. We should be working to make cloud computing as approachable as possible, and drawing attention to its complexity does not further that aim. Sure there are communities of practitioners who need to know how it all works (and James is addressing that community via GigaOm), but consumers of cloud services should finally be enabled to apply information technology to business problems, without unnecessary complexity.

If you find yourself using complex terminology or unnecessary acronyms (e.g. anything ending with *aaS) then ask yourself if you’re not part of the problem rather than part of the solution.

VDI: Virtually Dead Idea?

I’ve been meaning to give my blog some attention (it’s been almost a year since my last post, and a busy one at that) and Simon Crosby’s (@simoncrosby) “VDwhy?” post seems as good a place to start as any. Simon and I are both former Citrix employees (“Citrites”) and we’re both interested in similar topics — virtualisation, security and cloud computing to name a few. It’s no surprise then that I agree with his sentiments about Virtual Desktop Infrastructure (VDI) and must admit to being perplexed as to why it gets so much attention, generally without question.

History
Windows NT (“New Technology”), the basis for all modern Microsoft desktop operating systems, was released in 1993 and shortly afterwards Citrix (having access to the source code) added the capability to support multiple concurrent graphical user sessions. Windows NT’s underlying architecture allowed for access control lists to be applied to every object, which made it far easier for this to be done securely than would have been possible on earlier versions of Windows. They also added their own proprietary ICA (“Independent Computing Architecture”) network protocol such that these additional sessions could be accessed remotely, over the network, from various clients (Windows, Linux, Mac and now devices like iPads, although the user experience is, as Simon pointed out, subpar). This product was known as Citrix WinFrame and was effectively a fork of Windows NT 3.51 (I admit to having been an NT/WinFrame admin in a past life, but mostly focused on Unix/Linux integration). It is arguably what put Citrix (now a $2bn revenue company) on the map, and it still exists today as XenApp.

Terminal Services
It turns out this was a pretty good idea. So good, in fact, that (according to Wikipedia) “Microsoft required Citrix to license their MultiWin technology to Microsoft in order to be allowed to continue offering their own terminal services product, then named Citrix MetaFrame, atop Windows NT 4.0”. Microsoft introduced their own “Remote Desktop Protocol” (RDP) and, armed with only a Windows NT 4.0 Terminal Server Edition beta CD, Matthew Chapman (who went to the same college, university and workplace as me and is to this day one of the smartest guys I’ve ever met) cranked out rdesktop, if I remember correctly over the course of a weekend. I was convinced that this was the end of Citrix so imagine my surprise when I ended up working for them, on the other side of the world (Dublin, Ireland), almost a decade later!

VDI
About the time I left Citrix for a startup opportunity in Paris, France (2006) we were tinkering with a standalone ICA listener that could be deployed on a desktop operating system (bearing in mind that by now even Windows XP included Terminal Services and an RDP listener). I believe there was also a project working on the supporting infrastructure for cranking up and tearing down single-user virtual machines (rather than multiple Terminal Services sessions based on a single Windows Server, as was the status quo at the time), but I didn’t get the point and never bothered to play with it.

Even then I was curious as to what the perceived advantage was — having spent years hardening desktop and server operating systems at the University of New South Wales to “student proof” them I considered it far easier to have one machine servicing many users than many machines servicing many users. Actually there’s still one machine, only the virtualisation layer has been moved from between the operating system and user interface — where it arguably belongs — to between the bare metal and the operating system. As such it was now going to be necessary to run multiple kernels and multiple operating systems (with all the requisite configurations, patches, applications, etc.)!

Meanwhile there was work being done on “application virtualisation” (Project Tarpon) whereby applications are sandboxed by interrupting Windows’ I/O Request Packets (IRPs) and rewriting them as required. While this was a bit of a hack (Windows doesn’t require developers to follow the rules, so they don’t and write whatever they want pretty much anywhere), it was arguably a step in the right — rather than wrong — direction.

Multitenancy
At the end of the day the issue is simply that it’s better to share infrastructure (e.g. costs) between multiple users. In this case, why would I want to have one kernel and operating system dedicated to a single user (and exacting a toll in computing and human resources) when I can have one dedicated to many? In fact, why would I want to have an operating system at all, given it’s now essentially just a life support system for the browser? The only time I ever interact with the operating system is when something goes wrong and I have to fix it (e.g. install/remove software, modify configurations, move files, etc.) so I’d much rather have just enough operating system than one for everyone and then a bunch more on servers to support them!

This is essentially what Google Chrome OS (one of the first client-side cloud operating environments) does, and I can’t help but wonder whether the chromoting feature isn’t going to play a role in this market (actually I doubt it but it’s early days).

The RightWay™
Five years ago (as I had one foot out the door of Citrix with my eye on a startup opportunity in Paris) I met with product strategist Will Harwood at the UK office and explained my vision for the future of Citrix products. I’d been working on the Netscaler acquisition (among others) and had a pretty good feeling for the direction things were going — I’d even virtualised the various appliances on top of Xen to deliver a common appliance platform long before it was acquired (and was happy to be there to see Citrix CEO Mark Templeton announce this product as Netscaler SDX at Interop).

It went something like this: the MultiWin product line (WinFrame, then MetaFrame, then Presentation Server, now XenApp) is a mature, best-of-breed product that had (and probably still has) some serious limitations. Initially the network-based ICA Browser service was noisy, flaky and didn’t scale, so Independent Management Architecture (IMA) was introduced — a combination of a relational data store (SQL Server or Oracle) and a mongrel “IMA” protocol over which the various servers in a farm could communicate about applications, sessions, permissions, etc. Needless to say, centralised relational databases have since gone out of style in favour of distributed “NoSQL” databases, but more to the point — why were the servers trying to coordinate between themselves when the Netscaler was designed from the ground up to load balance network services?

My proposal was simply to take the standalone ICA browser and apply it to multi-user server operating systems rather than single-user client operating systems, ditching IMA altogether and delegating the task of (global) load balancing, session management, SSL termination, etc. to the Netscaler. This would be better/faster/cheaper than the existing legacy architecture, it would be more reliable in that failures would be tolerated and best of all, it would scale out rather than up. While the Netscaler has been used for some tasks (e.g. SSL termination), I’m surprised we haven’t seen anything like this (yet)… or have we?

Caveat
I can think of at least one application where VDI does make sense — public multi-tenant services (like Desktone) where each user needs a high level of isolation and customisation.

For everyone else I’d suggest taking a long, hard look at the pros and cons because any attempt to deviate from the status quo should be very well justified. I use a MacBook Air and have absolutely no need nor desire to connect to my desktop from any other device, but if I did I’d opt for shared infrastructure (Terminal Services/XenApp) and for individual “seamless” applications rather than another full desktop. If I were still administering and securing systems I’d just create a single image and deploy it over the network using PXE — I’d have to do this for the hypervisor anyway so there’s little advantage in adding yet another layer of complexity and taking the hit (and cost) of virtualisation overhead. Any operating system worth its salt includes whole disk encryption so the security argument is largely invalidated too.

I can think of few things worse than having to work on remote applications all day, unless the datacenter is very close to me (due to the physical constraints of the speed of light and the interactive/real-time nature of remote desktop sessions) and the network performance is strictly controlled/guaranteed. We go to great lengths to design deployments that are globally distributed with an appropriate level of redundancy, while being close enough to the end users to deliver the strict SLAs demanded by interactive applications — if you’re not going to bother to do it properly then you might not want to do it at all.

Citrix OpenCloud™ is neither Open nor Cloud

I’ve been busying myself recently establishing the Open Cloud Initiative which has been working with the community to establish a set of principles outlining what it means to be open cloud. As such Citrix’s announcement this week that they were “expanding their leadership in open cloud computing“(?) with the “Citrix OpenCloud™ Infrastructure platform” was somewhat intriguing, particularly for someone who’s worked with Citrix technology for 15 years and actually worked for the company for a few years before leaving to get involved in cloud computing. I was already excited to see them getting involved with OpenStack a few weeks ago as I’m supportive of this project and amazed by the level of community interest and participation, though I was really hoping that they were going to adopt the stack and better integrate it with Xen.

As usual the release itself was fluffy and devoid of clear statements as to what any of this really meant, and it doesn’t help that Citrix rebrands products more often than many change underwear. Armed with their product catalogue and information about their previous attempt to crack into the cloud space with Citrix Cloud Center (C3) I set about trying to decipher the announcement. The first thing that sprung out was the acquisition of VMlogix – a web based hypervisor management tool targeting lab environments that happens to also support Amazon EC2. Given OpenStack supports the EC2 API, perhaps this is how they plan to manage it as well as Xen “from a single management console“? Also, as Citrix are about to “add [the] intuitive, self-service interface to its popular XenServer® virtualization platform” it will be interesting to see how the likes of Enomaly feel about having a formidable ($10B+) opponent on their turf… not to mention VMware (but apparently VMware does NOT compete with Citrix – now there’s wishful thinking if I’ve ever seen it!).

Citrix also claim that customers will be able to “seamlessly manage a mix of public and private cloud workloads from a single management console, even if they span across a variety of different cloud providers“. Assuming they’re referring to VMlogix, will it be open sourced? I doubt it… and here’s the thing – I don’t expect them to. Nobody says Citrix has to be open – VMware certainly aren’t and that hasn’t kept them from building a $30B+ business. However, if they want to advertise openness as a differentiator then they should expect to be called to justify their claims. From what I can tell only the Xen hypervisor itself is open source software and it’s not at all clear how they plan to “leverage” Open vSwitch, nor whether OpenStack is even relevant given they’re just planning to manage it from their “single management console”. Even then, in a world where IT is delivered as a service rather than a product, the formats and interfaces are far more important than having access to the source itself; Amazon don’t make Linux or Xen modifications available for example but that doesn’t make them any less useful/successful (which is not to say that an alternative open source implementation like OpenStack isn’t important – it absolutely is).

Then there’s the claim that any of this is “cloud”… Sure I can use Intel chips to deliver a cloud service but does that make Intel chips “cloud”? No. How about Linux (which powers the overwhelming majority of cloud services today)? Absolutely not. So far as I can tell most of the “Citrix OpenCloud Framework” is little more than their existing suite of products, cloudwashed and rebranded:

  • CloudAccess ~= Citrix Password Manager
  • CloudBridge ~= Citrix Branch Repeater
  • On-Demand Apps & Demos ~= XenApp (aka WinFrame aka MetaFrame aka CPS)
  • On-Demand Desktops ~= XenDesktop
  • Compliance ~= XenApp & XenDesktop
  • Onboarding ~= Project Kensho
  • Disaster Recovery and Dev & Test ~= suites of above

At the end of the day Simon Crosby (one of the Xen guys who presumably helped convince Citrix an open source hypervisor was somehow worth $1/2bn) has repeatedly stated that Citrix OpenCloud™ is (and I quote) “100% open source software”, only to backtrack by saying “any layer of the open stack you can use a proprietary compoent” (sic) when quizzed about NetScaler, “another key component of the OpenCloud platform”, and @Citrix_Cloud helpfully clarified that “OPEN means it’s plug-compatible with other options, like some open-source gear you cobble together with mobo from Fry’s”.

Maybe they’re just getting started down the open road (I hope so), but this isn’t my idea of “open” or “cloud” – and certainly not enough to justify calling it “OpenCloud”.

How I tried to keep OCCI alive (and failed miserably)

I was going to let this one slide but following a calumniatory missive to his “followers” by the Open Cloud Computing Interface‘s self-proclaimed “Founder & Chair”, Sun refugee Thijs Metsch, I have little choice but to respond in my defense (particularly as “The Chairs” were actively soliciting followup from others on-list in support).

Basically a debate came to a head that has been brewing on- and off-list for months regarding the Open Grid Forum (OGF)‘s attempts to prevent me from licensing my own contributions (essentially the entire normative specification) under a permissive Creative Commons license (as an additional option to the restrictive OGF license) and/or submit them to the IETF as previously agreed and as required by the OGF’s own policies. This was on the grounds that “Most existing cloud computing specifications are available under CC licenses and I don’t want to give anyone any excuses to choose another standard over ours” and that the IETF has an excellent track record of producing high quality, interoperable, open specifications by way of a controlled yet open process. This should come as no surprise to those of you who know I am and will always be a huge supporter of open cloud, open source and open standards.

The OGF process had failed to deliver after over 12 months of deadline extensions – the current spec is frozen in an incomplete state (lacking critical features like collections, search, billing, security, etc.) as a result of being prematurely pushed into public comment, nobody is happy with it (including myself), the community has all but dissipated (except for a few hard core supporters, previously including myself) and software purporting to implement it actually implements something completely different altogether (see for yourself). There was no light at the end of the tunnel and with both OGF29 and IETF78 just around the corner I yesterday took a desperate gamble to keep OCCI alive (as a CC-licensed spec, an IETF Internet-Draft or both).

I confirmed that I was well within my rights to revoke any copyright, trademark and other rights previously granted (apparently it was amateur hour as OGF had failed to obtain an irrevocable license from me for my contributions) and volunteered to do so if restrictions on reuse by others weren’t lifted and/or the specification submitted to the IETF process as agreed and required by their own policies. Thijs’ colleague (and quite probably his boss at Platform Computing), Christopher Smith (who doubles as OGF’s outgoing VP of Standards) promptly responded, questioning my motives (which I can assure you are pure) and issuing a terse legal threat about how the “OGF will protect its rights” (against me over my own contributions no less). Thijs then followed up shortly after saying that they “see the secretary position as vacant from now on” and despite claims to the contrary I really couldn’t give a rat’s arse about a title bestowed upon me by a past-its-prime organisation struggling (and failing I might add) to maintain relevance. My only concern is that OCCI have a good home and if anything Platform have just captured the sort of control over it that VMware enjoy over DMTF/vCloud, with Thijs being the only remaining active editor.

I thought that would be the end of it and had planned to let sleeping dogs lie until today’s disgraceful, childish, coordinated and most of all completely unnecessary attack on an unpaid volunteer that rambled about “constructive technical debate” and “community driven consensus”, thanking me for my “meaningful contributions” but then calling on others to take up the pitchforks by “welcom[ing] any comments on this statement” on- or off-list. The attacks then continued on Twitter with another OGF official claiming that this “was a consensus decision within a group of, say, 20+ active and many many (300+) passive participants” (despite this being the first any of us had heard of it) and then calling my claims of copyright ownership “genuine bullshit” and report of an implementor instantly pulling out because they (and I quote) “can’t implement something if things are not stable” a “damn lie“, claiming I was “pissed” and should “get over it and stop crying” (needless to say they were promptly blocked).

Anyway as you can see there’s more to it than Thijs’ diatribe would have you believe and so far as I’m concerned OCCI, at least in its current form, is long since dead. I was initially undecided as to whether to revoke OGF’s licenses but have since done so, and it probably doesn’t matter as they agree I retain the copyrights and I think their chance of success is negligible – nobody in their right mind would implement the product of such a dysfunctional group and those who already did have long since found alternatives. That’s not to say the specification won’t live on in another form but now the OGF have decided to go nuclear it’s going to have to be in a more appropriate forum – one that furthers the standard rather than constantly holding it back.

Update: My actions have been universally supported outside of OGF and in the press, but unsurprisingly universally criticised from within – right up to the chairman of the board who claimed it was about trust rather than IPR (BS – I’ve been crystal clear about my intentions from the very beginning). They’ve done a bunch of amateur lawyering and announced that “OCCI is becoming an OGF proposed standard” but have not been able to show that they were granted a perpetual license to my contributions (they weren’t). They’ve also said that “OGF is not really against using Creative Commons” but clearly have no intention to do so, apparently preferring to test my resolve and, if need be, the efficacy of the DMCA. Meanwhile back at the ranch the focus is on bright shiny things (RDF/RDFa) rather than getting the existing specification finished.

Protip: None of this has anything to do with my current employer so let’s keep it that way.