Citrix OpenCloud™ is neither Open nor Cloud

I’ve been busying myself recently establishing the Open Cloud Initiative, which has been working with the community to define a set of principles outlining what it means to be open cloud. As such Citrix’s announcement this week that they were “expanding their leadership in open cloud computing”(?) with the “Citrix OpenCloud™ Infrastructure platform” was somewhat intriguing, particularly for someone who’s worked with Citrix technology for 15 years and actually worked for the company for a few years before leaving to get involved in cloud computing. I was already excited to see them getting involved with OpenStack a few weeks ago as I’m supportive of this project and amazed by the level of community interest and participation, though I was really hoping that they were going to adopt the stack and better integrate it with Xen.

As usual the release itself was fluffy and devoid of clear statements as to what any of this really meant, and it doesn’t help that Citrix rebrands products more often than many change underwear. Armed with their product catalogue and information about their previous attempt to crack into the cloud space with Citrix Cloud Center (C3), I set about trying to decipher the announcement. The first thing that sprang out was the acquisition of VMlogix – a web-based hypervisor management tool targeting lab environments that happens to also support Amazon EC2. Given OpenStack supports the EC2 API, perhaps this is how they plan to manage it as well as Xen “from a single management console”? Also, as Citrix are about to “add [the] intuitive, self-service interface to its popular XenServer® virtualization platform”, it will be interesting to see how the likes of Enomaly feel about having a formidable ($10B+) opponent on their turf… not to mention VMware (but apparently VMware does NOT compete with Citrix – now there’s wishful thinking if I’ve ever seen it!).

Citrix also claim that customers will be able to “seamlessly manage a mix of public and private cloud workloads from a single management console, even if they span across a variety of different cloud providers“. Assuming they’re referring to VMlogix, will it be open sourced? I doubt it… and here’s the thing – I don’t expect them to. Nobody says Citrix has to be open – VMware certainly aren’t and that hasn’t kept them from building a $30B+ business. However, if they want to advertise openness as a differentiator then they should expect to be called to justify their claims. From what I can tell only the Xen hypervisor itself is open source software and it’s not at all clear how they plan to “leverage” Open vSwitch, nor whether OpenStack is even relevant given they’re just planning to manage it from their “single management console”. Even then, in a world where IT is delivered as a service rather than a product, the formats and interfaces are far more important than having access to the source itself; Amazon don’t make Linux or Xen modifications available for example but that doesn’t make them any less useful/successful (which is not to say that an alternative open source implementation like OpenStack isn’t important – it absolutely is).

Then there’s the claim that any of this is “cloud”… Sure, I can use Intel chips to deliver a cloud service, but does that make Intel chips “cloud”? No. How about Linux (which powers the overwhelming majority of cloud services today)? Absolutely not. So far as I can tell most of the “Citrix OpenCloud Framework” is little more than their existing suite of products, rebranded and cloudwashed:

  • CloudAccess ~= Citrix Password Manager
  • CloudBridge ~= Citrix Branch Repeater
  • On-Demand Apps & Demos ~= XenApp (aka WinFrame aka MetaFrame aka CPS)
  • On-Demand Desktops ~= XenDesktop
  • Compliance ~= XenApp & XenDesktop
  • Onboarding ~= Project Kensho
  • Disaster Recovery and Dev & Test ~= suites of above

At the end of the day Simon Crosby (one of the Xen guys who presumably helped convince Citrix an open source hypervisor was somehow worth $1/2bn) has repeatedly stated that Citrix OpenCloud™ is (and I quote) “100% open source software”, only to backtrack by saying “any layer of the open stack you can use a proprietary compoent” (sic) when quizzed about NetScaler, “another key component of the OpenCloud platform”, and @Citrix_Cloud helpfully clarified that “OPEN means it’s plug-compatible with other options, like some open-source gear you cobble together with mobo from Fry’s”.

Maybe they’re just getting started down the open road (I hope so), but this isn’t my idea of “open” or “cloud” – and certainly not enough to justify calling it “OpenCloud”.

How I tried to keep OCCI alive (and failed miserably)

I was going to let this one slide but following a calumniatory missive to his “followers” by the Open Cloud Computing Interface’s self-proclaimed “Founder & Chair”, Sun refugee Thijs Metsch, I have little choice but to respond in my defense (particularly as “The Chairs” were actively soliciting followup from others on-list in support).

Basically a debate that has been brewing on- and off-list for months came to a head, regarding the Open Grid Forum (OGF)’s attempts to prevent me from licensing my own contributions (essentially the entire normative specification) under a permissive Creative Commons license (as an additional option to the restrictive OGF license) and/or submitting them to the IETF as previously agreed and as required by the OGF’s own policies. I pushed for this on the grounds that “Most existing cloud computing specifications are available under CC licenses and I don’t want to give anyone any excuses to choose another standard over ours”, and because the IETF has an excellent track record of producing high quality, interoperable, open specifications by way of a controlled yet open process. This should come as no surprise to those of you who know that I am, and always will be, a huge supporter of open cloud, open source and open standards.

The OGF process had failed to deliver after over 12 months of deadline extensions – the current spec is frozen in an incomplete state (lacking critical features like collections, search, billing, security, etc.) as a result of being prematurely pushed into public comment, nobody is happy with it (including myself), the community has all but dissipated (except for a few hard-core supporters, previously including myself) and software purporting to implement it actually implements something else altogether (see for yourself). There was no light at the end of the tunnel and, with both OGF29 and IETF78 just around the corner, yesterday I took a desperate gamble to keep OCCI alive (as a CC-licensed spec, an IETF Internet-Draft or both).

I confirmed that I was well within my rights to revoke any copyright, trademark and other rights previously granted (apparently it was amateur hour, as OGF had failed to obtain an irrevocable license from me for my contributions) and volunteered to do so if restrictions on reuse by others weren’t lifted and/or the specification submitted to the IETF process as agreed and required by their own policies. Thijs’ colleague (and quite probably his boss at Platform Computing), Christopher Smith (who doubles as OGF’s outgoing VP of Standards), promptly responded, questioning my motives (which I can assure you are pure) and issuing a terse legal threat about how the “OGF will protect its rights” (against me, over my own contributions, no less). Thijs then followed up shortly after saying that they “see the secretary position as vacant from now on” and, despite claims to the contrary, I really couldn’t give a rat’s arse about a title bestowed upon me by a past-its-prime organisation struggling (and failing, I might add) to maintain relevance. My only concern is that OCCI have a good home, and if anything Platform have just captured the sort of control over it that VMware enjoy over DMTF/vCloud, with Thijs being the only remaining active editor.

I thought that would be the end of it and had planned to let sleeping dogs lie until today’s disgraceful, childish, coordinated and most of all completely unnecessary attack on an unpaid volunteer that rambled about “constructive technical debate” and “community driven consensus”, thanking me for my “meaningful contributions” but then calling on others to take up the pitchforks by “welcom[ing] any comments on this statement” on- or off-list. The attacks then continued on Twitter with another OGF official claiming that this “was a consensus decision within a group of, say, 20+ active and many many (300+) passive participants” (despite this being the first any of us had heard of it) and then calling my claims of copyright ownership “genuine bullshit” and report of an implementor instantly pulling out because they (and I quote) “can’t implement something if things are not stable” a “damn lie“, claiming I was “pissed” and should “get over it and stop crying” (needless to say they were promptly blocked).

Anyway, as you can see there’s more to it than Thijs’ diatribe would have you believe, and so far as I’m concerned OCCI, at least in its current form, is long since dead. I was initially undecided as to whether to revoke OGF’s licenses but have since done so; it probably doesn’t matter though, as they agree I retain the copyrights and I think their chances of success are negligible – nobody in their right mind would implement the product of such a dysfunctional group and those who already did have long since found alternatives. That’s not to say the specification won’t live on in another form, but now that the OGF have decided to go nuclear it’s going to have to be in a more appropriate forum – one that furthers the standard rather than constantly holding it back.

Update: My actions have been universally supported outside of OGF and in the press (and here and here and here and here etc.) but unsurprisingly universally criticised from within – right up to the chairman of the board who claimed it was about trust rather than IPR (BS – I’ve been crystal clear about my intentions from the very beginning). They’ve done a bunch of amateur lawyering and announced that “OCCI is becoming an OGF proposed standard” but have not been able to show that they were granted a perpetual license to my contributions (they weren’t). They’ve also said that “OGF is not really against using Creative Commons” but clearly have no intention to do so, apparently preferring to test my resolve and, if need be, the efficacy of the DMCA. Meanwhile back at the ranch the focus is on bright shiny things (RDF/RDFa) rather than getting the existing specification finished.

Protip: None of this has anything to do with my current employer so let’s keep it that way.

Trend Micro abandons Intercloud™ trademark application

Just when I thought we were going to be looking at another trademark debacle not unlike Dell’s attempt at “cloud computing” back in 2008 (see Dell cloud computing™ denied) it seems luck is with us in that Trend Micro have abandoned their application #77018125 for a trademark on the term Intercloud (see NewsFlash: Trend Micro trademarks the Intercloud™). They had until 5 February 2010 to file for an extension and according to USPTO’s Trademark Document Retrieval system they have now well and truly missed the date (the last extension was submitted at the 11th hour, at 6pm on the eve of expiry).

Like Dell, Trend Micro were issued a “Notice of Allowance” on 5 August 2008 (actually Dell’s “Notice of Allowance” for #77139082 was issued less than a month before, on 8 July 2008, and cancelled just afterwards, on 7 August 2008). Unlike Dell though, Trend Micro just happened to be in the right place at the right time rather than attempting to lay claim to an existing, rapidly developing technology term (“cloud computing”).

Having been issued a Notice of Allowance both companies just had to submit a Statement of Use and the trademarks were theirs. With Dell it was just lucky that I happened to discover and reveal their application during this brief window (after which the USPTO cancelled their application following widespread uproar), but with Trend Micro it’s likely they don’t actually have a product today with which to use the trademark.

A similar thing happened to Psion in late 2008, who couldn’t believe their luck when the term “netbook” became popular long after they had discontinued their product line of the same name. Having realised they still held an active trademark, they threatened all and sundry over it, eventually claiming Intel had “unclean hands” and asking for $1.2bn, only to back down when push came to shove. One could argue that as we have “submarine patents”, we also have “submarine trademarks”.

In this case, back on September 25, 2006 Trend Micro announced a product coincidentally called “InterCloud” (see Trend Micro Takes Unprecedented Approach to Eliminating Botnet Threats with the Unveiling of InterCloud Security Service), which they claimed was “the industry’s most advanced solution for identifying botnet activity and offering customers the ability to quarantine and optionally clean bot-infected PCs“. Today’s Intercloud is a global cloud of clouds, in the same way that the Internet is a global network of networks – clearly nothing like what Trend Micro had in mind. It’s also both descriptive (a portmanteau describing interconnected clouds) and generic (in that it cannot serve as a source identifier for a given product or service), which basically means it should be found ineligible for trademark protection should anyone apply again in future.

Explaining further, the Internet has kept us busy for a few decades simply by passing packets between clients and servers (most of the time). It’s analogous to the bare electricity grid, allowing connected nodes to transfer electrical energy between one another (typically from generators to consumers but with alternative energy sometimes consumers are generators too). Cloud computing is like adding massive, centralised power stations to the electricity grid, essentially giving it a life of its own.

I like the term Intercloud, mainly because it takes the focus away from the question of “What is cloud?”, instead drawing attention to interoperability and standards where it belongs. Kudos to Trend Micro for this [in]action – whether intentional or unintentional.

Introducing Planet Cloud: More signal, less noise.

As you are no doubt well aware there is a large and increasing amount of noise about cloud computing, so much so that it’s becoming increasingly difficult to extract a clean signal. This has always been the case but now that even vendors like Oracle (who have previously been sharply critical of cloud computing, in part for exactly this reason) are clambering aboard the bandwagon, it’s nearly impossible to tell who’s worth listening to and who’s just trying to sell you yesterday’s technology under today’s label.

It is with this in mind that I am happy to announce Planet Cloud, a news aggregator for cloud computing articles that is particularly fussy about its sources. Specifically, unless you talk all cloud, all the time (which is rare – even I take a break every once in a while), your posts won’t be included unless you can provide a cloud-specific feed. Fortunately most blogging software supports this capability and many of the feeds included at launch take advantage of it. You can access Planet Cloud at:

http://www.planetcloud.org/ or @planetcloud

Those of you aware of my disdain for SYS-CON’s antics might be surprised that we’ve opted to ask for forgiveness rather than permission, but you’ll also notice that we don’t run ads (nor do we have any plans to – except for a few that come to us via feeds and are thus paid to authors). As such this is a non-profit service to the cloud computing community, intended to filter out much of the noise in the same way that the Clouderati provides a fast track to the heart of the cloud computing discussion on Twitter. An unwanted side effect of this approach is that it is not possible for us to offer the feeds under a Creative Commons license, as would usually be the case for content we own.

Many thanks to Tim Freeman (@timfaas) for his contribution not only of the planetcloud.org domain itself, but also of a comprehensive initial list of feeds (including many I never would have thought of myself). Thanks also to Rackspace Cloud who provide our hosting and who have done a great job of keeping the site alive during the testing period over the last few weeks. Thanks to the Planet aggregator which is simple but effective Python software for collating many feeds. And finally thanks to the various authors who have [been] volunteered for this project – hopefully we’ll be able to drive some extra traffic your way (of course if you’re not into it then that’s fine too – we’ll just remove you from the config file and you’ll vanish within 5 minutes).
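
For those curious about the mechanics, Planet is driven by a single INI-style config file with one section per feed; a minimal sketch of an entry looks something like this (the feed URL and author name below are made up for illustration):

[Planet]
name = Planet Cloud
link = http://www.planetcloud.org/

# one section per (cloud-specific) feed, keyed by its URL
[http://example.com/blog/category/cloud/feed/]
name = Example Author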

Face it Flash, your days are numbered.

It’s no secret that I’m no fan of Adobe Flash.

It should be no surprise then that I’m stoked to see a vigorous debate taking place about the future/fate of Flash well ahead of schedule, and even happier to see Flash sympathisers already resorting to desperate measures, including “playing the porn card” (not to mention Farmville which, in addition to its myriad annoying, invasive and privacy-invading advertisements, I will also be more than happy to see extinct). In my mind this all but proves how dire their situation has become with the sudden onslaught of mobile devices that deliberately ship without the Flash malware*.

Let’s take a moment to talk about statistics. According to analysts there are currently “only” 1.3 billion Internet-connected PCs. To put that into context, there are already almost as many Internet-connected mobile devices. With a growth rate 2.5 times that of PCs, mobiles will soon become the dominant Internet access device. Few of those new devices support Flash (think Android, iPhone), and with good reason – they are designed to be small, simple, performant and to operate for hours or days between charges.

As if that’s not enough, companies with the power to make it happen would very much like for us to have a third device that fills the void between the two – a netbook or a tablet (like the iPad). For the most part (again being powered by Android and iPhone OS) these devices don’t support Flash either. Even if we were to give Adobe the benefit of the doubt in accepting their optimistic (some would say deceptive) claims that Flash is currently “reaching 99% of Internet-enabled desktops in mature markets” (for more on that subject see Lies, damned lies and Adobe’s penetration statistics for Flash), between these two new markets it seems inevitable that their penetration rate will drop well below 50% real soon now.

Here’s the best part though: Flash penetration doesn’t even have to drop below 50% for us to break the vicious cycle of designers claiming “99% penetration” and users then having to install Flash because so many sites arbitrarily depend on it (using Flash for navigation is a particularly heinous offense, as is using it for headings with fancy fonts). Even if penetration were to drop to 95% (I would argue it already did long ago, especially if you dispense with weasel wording like “mature markets”, and even more so if you do away with the arbitrary “desktop” restriction – talk about sampling bias!) that translates to turning away 1 in 20 of your customers. At what point will merchants start to flinch – 1 in 10 (90%)? 1 in 5 (80%)? 1 in 4 (75%)? 1 in 2 (50%)?

As if that’s not enough, according to Rich Internet Application Statistics you would be losing some of your best customers – those who can afford to run Mac OS X (87% penetration) and Windows 7 (around 75% penetration) – not to mention those with iPhones and iPads (neither of which are the cheapest devices on the market). Oh yeah, and you heard that right: according to them, Flash penetration on Windows 7 is an embarrassing 3 in 4 machines; even worse than Oracle’s (formerly Sun’s) Java (though ironically Microsoft’s own Silverlight barely reaches 1 in 2 machines).

While we’re at it, at what point does it become “willful false advertising” for Adobe and their army of Flash designers to claim such deep penetration? Victims who pay $$lots for Flash-based sites only to discover from server logs that a surprisingly large percentage of users are being turned away have every reason to be upset, and ultimately to seek legal recourse. Why hasn’t this already happened? Has it? In any case designers like “Paul Threatt, a graphic designer at Jackson Walker design group, [who] has filed a complaint to the FTC alleging false advertising” ought to think twice before pointing the finger at Apple (accused in this case over a few mockups, briefly shown and since removed, in an iPad promo video).

At the end of the day much of what is annoying about the web is powered by Flash. If you don’t believe me then get a real browser and install Flashblock (for Firefox or Chrome) or ClickToFlash (for Safari) and see for yourself. You will be pleasantly surprised by the absence of annoyances as well as impressed by how well even an old computer can perform when not laden with this unnecessary parasite*. What is less obvious (but arguably more important) is that your security will dramatically improve as you significantly reduce your attack surface (while you’re at it, replace Adobe Reader with Foxit and enjoy even more safety). As someone who has been largely Flash-free for the last 3 months I can assure you life is better on the other side; in addition to huge performance gains I’ve had far fewer crashes since purging my machine – unsurprising given that, according to Apple’s Steve Jobs, “Whenever a Mac crashes more often than not it’s because of Flash”. “No one will be using Flash,” he says. “The world is moving to HTML5.”

So what can Adobe do about this now the horse has long since bolted? If you ask me, nothing. Dave Winer (another fellow who, like myself, “very much care[s] about an open Internet“) is somewhat more positive in posing the question What if Flash were an open standard? and suggesting that “Adobe might want to consider, right now, very quickly, giving Flash to the public domain. Disclaim all patents, open source all code, etc etc.“. Too bad it’s not that simple so long as one of the primary motivations for using Flash is bundled proprietary codecs like H.264 (which the MPEG LA have made abundantly clear will not be open sourced so long as they hold [over 900!] essential patents over it).

Update: Mobile Firefox Maemo RC3 has disabled Flash because “The Adobe Flash plugin used on many sites degraded the performance of the browser to the point where it didn’t meet Mozilla’s standards.” Sound familiar?

Update: Regarding the upcoming CS5 release which Adobe claims will “let you publish ActionScript 3 projects to run as native applications for iPhone“, this is not at all the same thing as the Flash plugin and will merely allow developers to create applications which suck more using a non-free SDK. No thanks. I’m unconvinced Apple will let such applications into the store anyway, citing performance concerns and/or the runtime rule.

Update: I tend to agree with Steven Wei that The best way for Adobe to save Flash is by killing it, but that doesn’t mean it’ll happen, and in any case if they wanted to do that they would have needed to start at least a year or two ago for the project to have any relevance; it’s clear that they’re still busy flogging the binary-plugin dead horse.

Update: Another important factor I neglected to mention above is that Adobe already struggle to maintain up-to-date binaries for a small number of major platforms and even then Mac and Linux are apparently second and third class citizens. If they’re struggling to manage the workload today then I don’t see what will make it any easier tomorrow with the myriad Linux/ARM devices hitting the market (among others). Nor would they want to – if they target HTML5, CSS3, etc. as proposed above then they have more resources to spend on having the best development environment out there.

* You may feel that words like “parasite” and “malware” are a bit strong for Flash, but when you think about it Flash has all the necessary attributes; it consumes your resources, weakens your security and is generally annoying. In short, the cost outweighs any perceived benefits.

HOWTO: Set up OpenVPN in a VPS

If, like me, you want to do any or all of the following things, you’ll want to tunnel your traffic over a VPN to a remote location:

  • Access media services restricted by geography (Hulu, FOX, BBC, etc.)
  • Bypass draconian censorship
  • Conceal your identity/location/etc.
  • Protect your machine from attackers
  • etc.

You could of course use a commercial service like AlwaysVPN, in which case you typically pay $5-10 per month or ~$1 per gigabyte, but many will prefer to run their own service. FWIW AlwaysVPN has worked very well for me but it’s time to move on.

First things first, you’ll want to find yourself a remote Linux server, and the easiest way to do so is to rent a virtual private server (VPS) from one of a myriad of providers. There’s no point spending more than 10 bucks a month on it as you don’t need much in the way of resources (only bandwidth). Check out lowendbox.com for VPS deals under $7/month or just run with a BurstNET VPS starting at $5.95/month for a very reasonable resource allocation (including a terabyte of bandwidth!).

Once you’ve placed your order and passed their fraud detection systems (which include an automated callback on the number you supply) you’ll have to wait 12-24 hours for activation, upon which you’ll receive an email with details for accessing your vePortal control panel as well as the VPS itself (via SSH). You’ll get 2 IP addresses, and I dedicated the second to both inbound and outbound traffic for VPN clients (which live on a 10.x RFC1918 subnet and access the Internet via SNAT).

If you didn’t already do so when signing up then choose a sensible OS in your control panel (“OS Reload”) like Ubuntu 8.04 – a Long Term Support release which means you’ll be getting security fixes for years to come – or better yet, 10.04 if it’s been released by the time you read this (it’s the next LTS release). Do an “apt-get install unattended-upgrades” and you ought to be fairly safe until 2015. You’re also going to need your TUN/TAP device(s) enabled, which involves another trip to the control panel (“Enable Tun/Tap”) and/or a helpdesk ticket (http://support.burst.net/). If /dev/net/tun doesn’t exist then you can create it with “mknod /dev/net/tun c 10 200”.
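
Strung together, that initial pass amounts to something like the following (a sketch – the mkdir is only needed if the /dev/net directory itself is missing):

apt-get update
apt-get install unattended-upgrades
mkdir -p /dev/net
mknod /dev/net/tun c 10 200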

To install OpenVPN it’s just a case of doing “apt-get install openvpn”… you could also download a free 2-user version of OpenVPN-AS from http://openvpn.net/ but I found it had problems trying to load netfilter modules that were already loaded, so YMMV. If you want support or more than 2 users you’ll be looking at a very reasonable $5/user – with the free/open source version you’re on your own, but there are no such limitations either.

OpenVPN uses PKI but rather than go to a certificate authority we’ll set one up ourselves. EasyRSA is included to simplify this process so it’s just a case of doing something like this:

cd /usr/share/doc/openvpn/examples/easy-rsa/2.0
. ./vars
./clean-all
./build-ca
./build-dh
openvpn --genkey --secret ta.key
./build-key-server server
./build-key client1
./build-key client2
./build-key client3

It’ll ask you for a bunch of superfluous information like your country, state, city, organisation, etc. but I just filled these out with ‘.’ (blank rather than the defaults) – mostly so as not to give away information unnecessarily to anyone who asks. The only field that matters is the Common Name, which you probably want to leave as ‘server’, ‘client1’ (or some other username like ‘samj’), etc. When you’re done here you’ll want to “cp keys/* /etc/openvpn” so OpenVPN can see them.
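
Before moving on it’s worth a quick sanity check on what easy-rsa dropped into its keys/ directory (assuming the stock openssl command-line tool is installed, which it normally is):

ls keys/
openssl verify -CAfile keys/ca.crt keys/server.crt keys/client1.crt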

Next you’ll want to configure the OpenVPN server and client(s) based on examples in /usr/share/doc/openvpn/examples/sample-config-files. I’m running two – one “Faster” one for the best performance when I’m on a “clean” connection (which uses udp/1194) and another “Compatible” one for when I’m on a restricted/corporate network (which shares tcp/443 with HTTPS). I did a “zcat server.conf.gz > /etc/openvpn/faster.conf” and edited it so that (when filtered with cat faster.conf | grep -v "^#" | grep -v "^;" | grep -v "^$") it looks something like this:

local 173.212.x.x
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
server 10.9.0.0 255.255.255.0
ifconfig-pool-persist faster-ipp.txt
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
client-to-client
keepalive 10 120
tls-auth ta.key 0
cipher BF-CBC
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
status /var/log/openvpn/faster-status.log
log-append /var/log/openvpn/faster.log
verb 3
mute 20

Noteworthy points:

  • local specifies which IP to bind to – I used the second (of two) that BurstNET had allocated to my VPS so as to keep the first for other servers, but you could just as easily use the first and then put clients behind the second, which would appear to be completely “clean”.
  • We’re using “tun” (tunneling/routing) rather than “tap” (ethernet bridging) because BurstNET use venet interfaces, which lack MAC addresses, rather than veth. I wasn’t able to get bridging up and running as originally intended.
  • There are various hardening options but to keep it simple I just run as nobody:nogroup and use tls-auth (having generated the optional ta.key with “openvpn –genkey –secret ta.key” above).
  • Pushing Google Public DNS addresses to clients as they won’t be able to use their local resolver addresses once connected. Also telling them to route all traffic over the VPN (which would otherwise only intercept traffic for a remote network).
  • Configured separate log files and subnets (10.8.0.0/24 and 10.9.0.0/24) for the “faster” and “compatible” instances.

The “compatible.conf” file varies only with the following lines:

port 443
proto tcp
server 10.8.0.0 255.255.255.0
status /var/log/openvpn/compatible-status.log
log-append /var/log/openvpn/compatible.log
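
With both server configs in /etc/openvpn you can bring the instances up via the stock init script and keep an eye on the logs (a sketch – see the AUTOSTART note below for controlling what comes up at boot):

mkdir -p /var/log/openvpn          # the configs above log here
/etc/init.d/openvpn restart
tail /var/log/openvpn/faster.log /var/log/openvpn/compatible.log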

Next you’ll want to copy over client.conf from /usr/share/doc/openvpn/examples/sample-config-files (but set ‘AUTOSTART="compatible faster"’ in /etc/default/openvpn on the server so the init scripts only start the two server instances and ignore everything else). The result looks something like this:

client
dev tun
proto udp
remote 173.212.x.x 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca burstnet-ca.crt
cert burstnet-client.crt
key burstnet-client.key
ns-cert-type server
tls-auth burstnet-ta.key 1
tls-cipher DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA
cipher BF-CBC
comp-lzo
verb 3

As I’ve got a bunch of different connections on my clients I’ve prepended “burstnet-” to all the files and called the main config files “BurstNET-Faster.conf” and “BurstNET-Compatible.conf” (which appear in the Tunnelblick menu on OS X as “BurstNET-Faster” and “BurstNET-Compatible” respectively – thanks to AlwaysVPN for this idea). The only difference for BurstNET-Compatible.conf is:

proto tcp
remote 173.212.x.x 443

You’re now almost ready for the smoke test (and indeed should be able to connect) but you’ll end up on a 10.x subnet and therefore unable to communicate with anyone. The fix is “iptables -t nat -A POSTROUTING -s 10.8.0.0/255.255.255.0 -j SNAT --to-source 173.212.x.x” (where the source IP is one of those allocated to you).
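
If clients connect but still can’t reach anything even with that rule in place, it’s worth checking that the kernel is actually forwarding packets (a sketch, assuming your VPS lets you set sysctls):

sysctl net.ipv4.ip_forward                        # should print 1
sysctl -w net.ipv4.ip_forward=1                   # enable it for the running kernel
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf  # persist it across reboots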

Being paranoid though I want to lock down my server with a firewall, which for Ubuntu typically means ufw (you’ll need to “apt-get install ufw” if you haven’t already). My ufw rules look something like this:

# ufw status
Status: active

To                         Action  From
--                         ------  ----
Anywhere                   ALLOW   1.2.3.4
1194/udp                   ALLOW   Anywhere
443/tcp                    ALLOW   Anywhere
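
For reference, those rules were added with commands along these lines (1.2.3.4 standing in for my home IP address):

ufw allow from 1.2.3.4
ufw allow 1194/udp
ufw allow 443/tcp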

The first rule allows me to access the server from home via SSH and 1194/udp and 443/tcp allow VPN clients in. To allow the clients to access the outside world we’re going to have to rewrite their traffic to come from a public IP (which is called “SNAT”), but first you’ll want to enable forwarding by setting DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw. Then it’s just a case of adding something like this to /etc/ufw/before.rules:

# nat Table rules
*nat
:POSTROUTING ACCEPT [0:0]
# SNAT traffic from VPN subnet.
-A POSTROUTING -s 10.8.0.0/255.255.255.0 -j SNAT --to-source 173.212.x.x
-A POSTROUTING -s 10.9.0.0/255.255.255.0 -j SNAT --to-source 173.212.x.x
# don't delete the 'COMMIT' line or these nat table rules won't be processed
COMMIT

You may need to enable UFW (“ufw enable”) and if you lose access to your server you can always disable UFW (“ufw disable”) using the rudimentary “Console” function of vePortal.

On the client side you’ve got support for (at least) Linux (e.g. “openvpn --config /etc/openvpn/BurstNET-Faster.conf”), Mac and Windows, and there are various GUIs (including OpenVPN GUI for Windows and Tunnelblick for Mac OS X). I’m (only) using Tunnelblick, and after copying Tunnelblick.app to /Applications I just need to create a ~/Library/openvpn directory and drop these files in there:

  • BurstNET-Compatible.conf
  • BurstNET-Faster.conf
  • burstnet-ca.crt
  • burstnet-client.key
  • burstnet-client.crt
  • burstnet-ta.key
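
One way to get those into place from the Mac’s Terminal (a sketch – the renaming simply follows the “burstnet-” convention above, ‘client1’ is whatever Common Name you chose, and the two BurstNET-*.conf files are the client configs edited earlier):

mkdir -p ~/Library/openvpn
# adjust the source paths if you left the client keys or ta.key in the easy-rsa directory
scp root@173.212.x.x:/etc/openvpn/ca.crt      ~/Library/openvpn/burstnet-ca.crt
scp root@173.212.x.x:/etc/openvpn/ta.key      ~/Library/openvpn/burstnet-ta.key
scp root@173.212.x.x:/etc/openvpn/client1.crt ~/Library/openvpn/burstnet-client.crt
scp root@173.212.x.x:/etc/openvpn/client1.key ~/Library/openvpn/burstnet-client.key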

When Tunnelblick’s running I have a little black tunnel symbol in the top right corner of my screen from which I can connect & disconnect as necessary.

I think that’s about it – hopefully there’s nothing critical I’ve missed, but feel free to follow up in the comments if you’ve anything to add. I’m now happily streaming from Hulu and Fox in the US, downloading Amazon MP3s (using my US credit card), and have a reasonable level of anonymity. If I were in Australia I’d have little to fear from censorship (and there’s virtually nothing they can do to stop me), and as my machine has a private IP I’m effectively firewalled.
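
A quick way to confirm traffic really is leaving via the VPS once the tunnel is up (ifconfig.me is just one example of a what’s-my-IP service – any will do):

curl http://ifconfig.me    # should print the VPS address (the 173.212.x.x above)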

Update: It seems that my VPS is occasionally restarted (which is not all that surprising) and forgets about its tun/tap device (which is). The device node itself is still visible in the filesystem, but with no driver to connect to in the kernel it doesn’t work and OpenVPN doesn’t start. You can test if your tun device is working using cat:

WORKING:

cat /dev/net/tun

cat: /dev/net/tun: File descriptor in bad state

NOT WORKING:

cat /dev/net/tun

cat: /dev/net/tun: No such device

I’ve also noticed that ufw may need to be manually started with a ‘ufw enable’. Hope that saves you some time diagnosing problems!
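
If you’d rather be nagged about this than find out when the tunnel won’t come up, something like the following sketch can be run from cron or /etc/rc.local (the vePortal “Enable Tun/Tap” step itself still has to be done by hand):

#!/bin/sh
# warn if the tun driver went missing after a VPS restart
cat /dev/net/tun 2>&1 | grep -q 'bad state' || \
    echo 'tun device not working - re-enable Tun/Tap in vePortal' | logger -t vpn-check
# ufw sometimes needs re-enabling too
ufw status | grep -q 'Status: active' || \
    echo 'ufw is not active - run ufw enable' | logger -t vpn-check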

NoSQL “movement” roadblocks HTML5 WebDB

Today’s rant is coming between me and a day of skiing so I’ll keep it brief. While trying to get to the bottom of why I can’t enjoy offline access to Google Apps & other web-based applications with Gears on Snow Leopard I came across a post noting Chrome, Opera to support html5 webdb, FF & IE won’t. This seemed curious as HTML5 is powering on towards last call and there are already multiple implementations of both applications and clients that run them. Here’s where we’re at:

  • Opera: “At opera, we implemented web db […] it’s likely we will [ship it] as people have built on it”
  • Google [Chrome]: “We’ve implemented WebDB … we’re about to ship it”
  • Microsoft [IE]: “We don’t think we’ll reasonably be able to ship an interoperable version of WebDB”
  • Mozilla [Firefox]: “We’ve talked to a lot of developers, the feedback we got is that we really don’t want SQL […] I don’t think mozilla plans to ship it.”

Of these, Microsoft’s argument (aside from being disproven by existing interoperable implementations) can be summarily dismissed because offline web applications are a direct competitor to desktop applications and therefore Windows itself. As if that’s not enough, they have their own horse in this race that they don’t have to share with anyone in the form of Silverlight. As such it’s completely understandable (however lame) for them to spread interoperability FUD about competing technology.

Mozilla’s argument that “we really don’t want SQL” is far more troublesome and posts like this follow an increasingly common pattern:

  1. Someone proposes SQL for something (given we’ve got 4 decades of experience with it)
  2. Religious zealots trash talk SQL, offering a dozen or so NoSQL alternatives (all of which are in varying stages of [early] development)
  3. “My NoSQL db is bigger/better/faster than yours” debate ensues
  4. Nobody does anything

Like it or not, SQL is a sensible database interface for web applications today. It’s used almost exclusively on the server side already (except perhaps for the largest of sites, and even these tend to use SQL for some components) so developers are very well equipped to deal with it. It has been proven to work (and work well) by demanding applications including Gmail, Google Docs and Google Calendar, and is anyway independent of the underlying database engine. Ironically work has already been done to provide SQL interfaces to “NoSQL” databases (which just goes to show the “movement” completely misses the point) so those who really don’t like SQLite (which happens to drive most implementations today) could conceivably create a drop-in replacement for it. Indeed power users like myself would likely appreciate a browser with embedded MySQL as a differentiating feature.

In any case the API [cs]hould be versioned so we can offer alternatives like WebSimpleDB in the future. Right now though the open web is being held back by outdated standards and proprietary offerings controlled by single vendors (e.g. Adobe’s AIR and Microsoft’s Silverlight) are lining up to fill in the gap. Those suggesting “it’s worth stepping back” because “there are other options that should be considered” which “might serve those needs better” would want to take a long, hard look at whether their proposed alternatives are really ready for prime time, or indeed even necessary. To an outsider trying to solve real business problems today a lot of it looks like academic wankery.