RIP Adobe Flash (1996-2011) – now let’s bury the dead

Adobe kills mobile Flash, giving Steve Jobs the last laugh, reports The Guardian’s Charles Arthur following the late Steve Jobs’ epic Thoughts on Flash rant 18 months ago. It’s been about 2.5 years since I too got sick of Flash bringing my powerful Mac to its knees, so I went after the underlying lie that perpetuates the problem, explaining why Adobe Flash penetration is more like 50% than 99%. I even made progress Towards a Flash free YouTube killer, only it ended up being YouTube themselves who eventually started testing a YouTube HTML5 Video Player (while you’re there please do your bit for the open web by clicking “Join the HTML5 Trial” at the bottom of that page).

“I heard a sound as though a million restaurant websites cried out at once” – Charles Arthur

You see, armed with this heavily manipulated statistic, armies of developers are to this day fraudulently duping their paying clients into deploying a platform that will invariably turn away a percentage of their business at the door, in favour of annoying flaming logos and other atrocities that blight the web:

How much business can you tolerate losing? If you’ve got 95% penetration then you’re turning away 1 in 20 customers. At 90% you’re turning away 1 in 10. At 50% half of your customers won’t even get to see your product. I don’t know too many businesses who can afford to turn away any customers in this economic climate.

In my opinion the only place Flash technology has in today’s cloud computing environment is as a component of the AIR runtime for building (sub-par) cross-platform applications, and even then I’d argue that they should be using HTML5. As an Adobe Creative Suite Master Collection customer I’m very happy to see them dropping support for this legacy technology to focus on generating interactive HTML5 applications, and look forward to a similar announcement for desktop versions of the Flash player in the not too distant future.

In any case, with the overwhelming majority of devices being mobile today and with more and more of them including browser functionality, the days of Flash were numbered even before Adobe put the mobile version out of its misery. Let’s not drag this out any longer than we have to, and bury the dead by uninstalling Flash Player. Here are instructions for Mac OS X and Windows, and if you’re not ready to take the plunge into an open standards based HTML5 future then at least install FlashBlock for Chrome or Firefox (surely you’re not still using IE?).

Update: Flash for TV is dead too, as if killing off mobile wasn’t enough: Adobe Scrapping Flash for TV, Too‎

Update: Rich Internet Application (RIA) architectures in general are in a lot of trouble — Microsoft are killing off Silverlight as well: Mm, Silverlight, what’s that smell? Yes, it’s death

Update: In a surprising move that will no doubt be reversed, RIM announced it would continue developing Flash on the PlayBook (despite almost certainly lacking the ability to do so): RIM vows to keep developing Flash for BlackBerry PlayBook – no joke

VDI: Virtually Dead Idea?

I’ve been meaning to give my blog some attention (it’s been almost a year since my last post, and a busy one at that) and Simon Crosby’s (@simoncrosby) VDwhy? post seems as good a place to start as any. Simon and I are both former Citrix employees (“Citrites”) and we’re both interested in similar topics — virtualisation, security and cloud computing to name a few. It’s no surprise then that I agree with his sentiments about Virtual Desktop Infrastructure (VDI) and must admit to being perplexed as to why it gets so much attention, generally without question.

History
Windows NT (“New Technology”), the basis for all modern Microsoft desktop operating systems, was released in 1993 and shortly afterwards Citrix (having access to the source code) added the capability to support multiple graphical user interfaces concurrently. Windows NT’s underlying architecture allowed for access control lists to be applied to every object, which made it far easier for this to be done securely than might have been possible on earlier versions. They also added their own proprietary ICA (“Independent Computing Architecture”) network protocol such that these additional sessions could be accessed remotely, over the network, from various clients (Windows, Linux, Mac and now devices like iPads, although the user experience is, as Simon pointed out, subpar). This product was known as Citrix WinFrame and was effectively a fork of Windows NT 3.51 (I admit to having been an NT/WinFrame admin in a past life, but mostly focused on Unix/Linux integration). It is arguably what put Citrix (now a $2bn revenue company) on the map, and it still exists today as XenApp.

Terminal Services
It turns out this was a pretty good idea. So good, in fact, that (according to Wikipedia) “Microsoft required Citrix to license their MultiWin technology to Microsoft in order to be allowed to continue offering their own terminal services product, then named Citrix MetaFrame, atop Windows NT 4.0”. Microsoft introduced their own “Remote Desktop Protocol” and, armed with only a Windows NT 4.0 Terminal Server Edition beta CD, Matthew Chapman (who went to the same college, university and workplace as me and is to this day one of the smartest guys I’ve ever met) cranked out rdesktop, if I remember correctly over the course of a weekend. I was convinced that this was the end of Citrix so imagine my surprise when I ended up working for them, on the other side of the world (Dublin, Ireland), almost a decade later!

VDI
About the time I left Citrix for a startup opportunity in Paris, France (2006) we were tinkering with a standalone ICA listener that could be deployed on a desktop operating system (bearing in mind that by now even Windows XP included Terminal Services and an RDP listener). I believe there was also a project working on the supporting infrastructure for cranking up and tearing down single-user virtual machines (rather than multiple Terminal Services sessions based on a single Windows Server, as was the status quo at the time), but I didn’t get the point and never bothered to play with it.

Even then I was curious as to what the perceived advantage was — having spent years hardening desktop and server operating systems at the University of New South Wales to “student proof” them I considered it far easier to have one machine servicing many users than many machines servicing many users. Actually there’s still one machine, only the virtualisation layer has been moved from between the operating system and user interface — where it arguably belongs — to between the bare metal and the operating system. As such it was now going to be necessary to run multiple kernels and multiple operating systems (with all the requisite configurations, patches, applications, etc.)!

Meanwhile there was work being done on “application virtualisation” (Project Tarpon) whereby applications are sandboxed by intercepting Windows’ I/O Request Packets (IRPs) and rewriting them as required. While this was a bit of a hack (Windows doesn’t require developers to follow the rules, so they don’t and write whatever they want pretty much anywhere), it was arguably a step in the right — rather than wrong — direction.

Multitenancy
At the end of the day the issue is simply that it’s better to share infrastructure (e.g. costs) between multiple users. In this case, why would I want to have one kernel and operating system dedicated to a single user (and exacting a toll in computing and human resources) when I can have one dedicated to many? In fact, why would I want to have an operating system at all, given it’s now essentially just a life support system for the browser? The only time I ever interact with the operating system is when something goes wrong and I have to fix it (e.g. install/remove software, modify configurations, move files, etc.) so I’d much rather have just enough operating system than one for everyone and then a bunch more on servers to support them!

This is essentially what Google Chrome OS (one of the first client-side cloud operating environments) does, and I can’t help but wonder whether the chromoting feature isn’t going to play a role in this market (actually I doubt it but it’s early days).

The RightWay™
Five years ago (as I had one foot out the door of Citrix with my eye on a startup opportunity in Paris) I met with product strategist Will Harwood at the UK office and explained my vision for the future of Citrix products. I’d been working on the Netscaler acquisition (among others) and had a pretty good feeling for the direction things were going — I’d even virtualised the various appliances on top of Xen to deliver a common appliance platform long before it was acquired (and was happy to be there to see Citrix CEO Mark Templeton announce this product as Netscaler SDX at Interop).

It went something like this: the MultiWin → WinFrame → MetaFrame → Presentation Server → XenApp lineage is a mature, best-of-breed product that had (and probably still has) some serious limitations. Initially the network-based ICA Browser service was noisy, flaky and didn’t scale, so Independent Management Architecture (IMA) was introduced — a combination of a relational data store (SQL Server or Oracle) and a mongrel “IMA” protocol over which the various servers in a farm could communicate about applications, sessions, permissions, etc. Needless to say, centralised relational databases have since gone out of style in favour of distributed “NoSQL” databases, but more to the point — why were the servers trying to coordinate between themselves when the Netscaler was designed from the ground up to load balance network services?

My proposal was simply to take the standalone ICA browser and apply it to multi-user server operating systems rather than single-user client operating systems, ditching IMA altogether and delegating the task of (global) load balancing, session management, SSL termination, etc. to the Netscaler. This would be better/faster/cheaper than the existing legacy architecture, it would be more reliable in that failures would be tolerated and best of all, it would scale out rather than up. While the Netscaler has been used for some tasks (e.g. SSL termination), I’m surprised we haven’t seen anything like this (yet)… or have we?

Caveat
I can think of at least one application where VDI does make sense — public multi-tenant services (like Desktone) where each user needs a high level of isolation and customisation.

For everyone else I’d suggest taking a long, hard look at the pros and cons because any attempt to deviate from the status quo should be very well justified. I use a MacBook Air and have absolutely no need nor desire to connect to my desktop from any other device, but if I did I’d opt for shared infrastructure (Terminal Services/XenApp) and for individual “seamless” applications rather than another full desktop. If I were still administering and securing systems I’d just create a single image and deploy it over the network using PXE — I’d have to do this for the hypervisor anyway so there’s little advantage in adding yet another layer of complexity and taking the hit (and cost) of virtualisation overhead. Any operating system worth its salt includes whole disk encryption so the security argument is largely invalidated too.

I can think of few things worse than having to work on remote applications all day, unless the datacenter is very close to me (due to the physical constraints of the speed of light and the interactive/real-time nature of remote desktop sessions) and the network performance is strictly controlled/guaranteed. We go to great lengths to design deployments that are globally distributed with an appropriate level of redundancy, while being close enough to the end users to deliver the strict SLAs demanded by interactive applications — if you’re not going to bother to do it properly then you might not want to do it at all.

Citrix OpenCloud™ is neither Open nor Cloud

I’ve been busying myself recently establishing the Open Cloud Initiative which has been working with the community to establish a set of principles outlining what it means to be open cloud. As such Citrix’s announcement this week that they were “expanding their leadership in open cloud computing“(?) with the “Citrix OpenCloud™ Infrastructure platform” was somewhat intriguing, particularly for someone who’s worked with Citrix technology for 15 years and actually worked for the company for a few years before leaving to get involved in cloud computing. I was already excited to see them getting involved with OpenStack a few weeks ago as I’m supportive of this project and amazed by the level of community interest and participation, though I was really hoping that they were going to adopt the stack and better integrate it with Xen.

As usual the release itself was fluffy and devoid of clear statements as to what any of this really meant, and it doesn’t help that Citrix rebrands products more often than many change underwear. Armed with their product catalogue and information about their previous attempt to crack into the cloud space with Citrix Cloud Center (C3) I set about trying to decipher the announcement. The first thing that sprang out was the acquisition of VMlogix – a web-based hypervisor management tool targeting lab environments that happens to also support Amazon EC2. Given OpenStack supports the EC2 API, perhaps this is how they plan to manage it as well as Xen “from a single management console”? Also, as Citrix are about to “add [the] intuitive, self-service interface to its popular XenServer® virtualization platform” it will be interesting to see how the likes of Enomaly feel about having a formidable ($10B+) opponent on their turf… not to mention VMware (but apparently VMware does NOT compete with Citrix – now there’s wishful thinking if I’ve ever seen it!).

Citrix also claim that customers will be able to “seamlessly manage a mix of public and private cloud workloads from a single management console, even if they span across a variety of different cloud providers“. Assuming they’re referring to VMlogix, will it be open sourced? I doubt it… and here’s the thing – I don’t expect them to. Nobody says Citrix has to be open – VMware certainly aren’t and that hasn’t kept them from building a $30B+ business. However, if they want to advertise openness as a differentiator then they should expect to be called to justify their claims. From what I can tell only the Xen hypervisor itself is open source software and it’s not at all clear how they plan to “leverage” Open vSwitch, nor whether OpenStack is even relevant given they’re just planning to manage it from their “single management console”. Even then, in a world where IT is delivered as a service rather than a product, the formats and interfaces are far more important than having access to the source itself; Amazon don’t make Linux or Xen modifications available for example but that doesn’t make them any less useful/successful (which is not to say that an alternative open source implementation like OpenStack isn’t important – it absolutely is).

Then there’s the claim that any of this is “cloud”… Sure I can use Intel chips to deliver a cloud service but does that make Intel chips “cloud”? No. How about Linux (which powers the overwhelming majority of cloud services today)? Absolutely not. So far as I can tell most of the “Citrix OpenCloud Framework” is little more than their existing suite of products cloudwashed (sorry, rebranded):

  • CloudAccess ~= Citrix Password Manager
  • CloudBridge ~= Citrix Branch Repeater
  • On-Demand Apps & Demos ~= XenApp (aka WinFrame aka MetaFrame aka CPS)
  • On-Demand Desktops ~= XenDesktop
  • Compliance ~= XenApp & XenDesktop
  • Onboarding ~= Project Kensho
  • Disaster Recovery and Dev & Test ~= suites of above

At the end of the day Simon Crosby (one of the Xen guys who presumably helped convince Citrix an open source hypervisor was somehow worth $1/2bn) has repeatedly stated that Citrix OpenCloud™ is (and I quote) “100% open source software”, only to backtrack by saying “any layer of the open stack you can use a proprietary compoent” (sic) when quizzed about NetScaler, “another key component of the OpenCloud platform”, and @Citrix_Cloud helpfully clarified that “OPEN means it’s plug-compatible with other options, like some open-source gear you cobble together with mobo from Fry’s”.

Maybe they’re just getting started down the open road (I hope so), but this isn’t my idea of “open” or “cloud” – and certainly not enough to justify calling it “OpenCloud”.

How I tried to keep OCCI alive (and failed miserably)

I was going to let this one slide but following a calumniatory missive to his “followers” by the Open Cloud Computing Interface‘s self-proclaimed “Founder & Chair”, Sun refugee Thijs Metsch, I have little choice but to respond in my defense (particularly as “The Chairs” were actively soliciting followup from others on-list in support).

Basically a debate came to a head that has been brewing on- and off-list for months regarding the Open Grid Forum (OGF)‘s attempts to prevent me from licensing my own contributions (essentially the entire normative specification) under a permissive Creative Commons license (as an additional option to the restrictive OGF license) and/or submit them to the IETF as previously agreed and as required by the OGF’s own policies. This was on the grounds that “Most existing cloud computing specifications are available under CC licenses and I don’t want to give anyone any excuses to choose another standard over ours” and that the IETF has an excellent track record of producing high quality, interoperable, open specifications by way of a controlled yet open process. This should come as no surprise to those of you who know I am and will always be a huge supporter of open cloud, open source and open standards.

The OGF process had failed to deliver after over 12 months of deadline extensions – the current spec is frozen in an incomplete state (lacking critical features like collections, search, billing, security, etc.) as a result of being prematurely pushed into public comment, nobody is happy with it (including myself), the community has all but dissipated (except for a few hard core supporters, previously including myself) and software purporting to implement it actually implements something completely different altogether (see for yourself). There was no light at the end of the tunnel and with both OGF29 and IETF78 just around the corner I yesterday took a desperate gamble to keep OCCI alive (as a CC-licensed spec, an IETF Internet-Draft or both).

I confirmed that I was well within my rights to revoke any copyright, trademark and other rights previously granted (apparently it was amateur hour as OGF had failed to obtain an irrevocable license from me for my contributions) and volunteered to do so if restrictions on reuse by others weren’t lifted and/or the specification submitted to the IETF process as agreed and required by their own policies. Thijs’ colleague (and quite probably his boss at Platform Computing), Christopher Smith (who doubles as OGF’s outgoing VP of Standards) promptly responded, questioning my motives (which I can assure you are pure) and issuing a terse legal threat about how the “OGF will protect its rights” (against me over my own contributions no less). Thijs then followed up shortly after saying that they “see the secretary position as vacant from now on” and despite claims to the contrary I really couldn’t give a rat’s arse about a title bestowed upon me by a past-its-prime organisation struggling (and failing I might add) to maintain relevance. My only concern is that OCCI have a good home and if anything Platform have just captured the sort of control over it that VMware enjoy over DMTF/vCloud, with Thijs being the only remaining active editor.

I thought that would be the end of it and had planned to let sleeping dogs lie until today’s disgraceful, childish, coordinated and most of all completely unnecessary attack on an unpaid volunteer that rambled about “constructive technical debate” and “community driven consensus”, thanking me for my “meaningful contributions” but then calling on others to take up the pitchforks by “welcom[ing] any comments on this statement” on- or off-list. The attacks then continued on Twitter with another OGF official claiming that this “was a consensus decision within a group of, say, 20+ active and many many (300+) passive participants” (despite this being the first any of us had heard of it) and then calling my claims of copyright ownership “genuine bullshit” and report of an implementor instantly pulling out because they (and I quote) “can’t implement something if things are not stable” a “damn lie“, claiming I was “pissed” and should “get over it and stop crying” (needless to say they were promptly blocked).

Anyway as you can see there’s more to it than Thijs’ diatribe would have you believe and so far as I’m concerned OCCI, at least in its current form, is long since dead. I have since revoked OGF’s licenses, but it probably doesn’t matter as they agree I retain the copyrights and I think their chance of success is negligible – nobody in their right mind would implement the product of such a dysfunctional group and those who already did have long since found alternatives. That’s not to say the specification won’t live on in another form but now the OGF have decided to go nuclear it’s going to have to be in a more appropriate forum – one that furthers the standard rather than constantly holding it back.

Update: My actions have been universally supported outside of OGF and in the press (and here and here and here and here etc.) but unsurprisingly universally criticised from within – right up to the chairman of the board who claimed it was about trust rather than IPR (BS – I’ve been crystal clear about my intentions from the very beginning). They’ve done a bunch of amateur lawyering and announced that “OCCI is becoming an OGF proposed standard” but have not been able to show that they were granted a perpetual license to my contributions (they weren’t). They’ve also said that “OGF is not really against using Creative Commons” but clearly have no intention to do so, apparently preferring to test my resolve and, if need be, the efficacy of the DMCA. Meanwhile back at the ranch the focus is on bright shiny things (RDF/RDFa) rather than getting the existing specification finished.

Protip: None of this has anything to do with my current employer so let’s keep it that way.

Trend Micro abandons Intercloud™ trademark application

Just when I thought we were going to be looking at another trademark debacle not unlike Dell’s attempt at “cloud computing” back in 2008 (see Dell cloud computing™ denied) it seems luck is with us in that Trend Micro have abandoned their application #77018125 for a trademark on the term Intercloud (see NewsFlash: Trend Micro trademarks the Intercloud™). They had until 5 February 2010 to file for an extension and according to USPTO’s Trademark Document Retrieval system they have now well and truly missed the date (the last extension was submitted at the 11th hour, at 6pm on the eve of expiry).

Like Dell, Trend Micro were issued a “Notice of Allowance” on 5 August 2008 (actually Dell’s “Notice of Allowance” for #77139082 was issued less than a month before, on 8 July 2008, and cancelled just afterwards, on 7 August 2008). Unlike Dell though, Trend Micro just happened to be in the right place at the right time rather than attempting to lay claim to an existing, rapidly developing technology term (“cloud computing”).

Having been issued a Notice of Allowance both companies just had to submit a Statement of Use and the trademarks were theirs. With Dell it was just lucky that I happened to discover and reveal their application during this brief window (after which the USPTO cancelled their application following widespread uproar), but with Trend Micro it’s likely they don’t actually have a product today with which to use the trademark.

A similar thing happened to Psion in late 2008, who couldn’t believe their luck when the term “netbook” became popular long after they had discontinued their product line by the same name. Having realised they still held an active trademark, they threatened all and sundry over it, eventually claiming Intel had “unclean hands” and asking for $1.2bn, only to back down when push came to shove. One could argue that as we have “submarine patents”, we also have “submarine trademarks”.

In this case, back on September 25, 2006 Trend Micro announced a product coincidentally called “InterCloud” (see Trend Micro Takes Unprecedented Approach to Eliminating Botnet Threats with the Unveiling of InterCloud Security Service), which they claimed was “the industry’s most advanced solution for identifying botnet activity and offering customers the ability to quarantine and optionally clean bot-infected PCs“. Today’s Intercloud is a global cloud of clouds, in the same way that the Internet is a global network of networks – clearly nothing like what Trend Micro had in mind. It’s also both descriptive (a portmanteau describing interconnected clouds) and generic (in that it cannot serve as a source identifier for a given product or service), which basically means it should be found ineligible for trademark protection should anyone apply again in future.

Explaining further, the Internet has kept us busy for a few decades simply by passing packets between clients and servers (most of the time). It’s analogous to the bare electricity grid, allowing connected nodes to transfer electrical energy between one another (typically from generators to consumers but with alternative energy sometimes consumers are generators too). Cloud computing is like adding massive, centralised power stations to the electricity grid, essentially giving it a life of its own.

I like the term Intercloud, mainly because it takes the focus away from the question of “What is cloud?”, instead drawing attention to interoperability and standards where it belongs. Kudos to Trend Micro for this [in]action – whether intentional or unintentional.

Introducing Planet Cloud: More signal, less noise.


As you are no doubt well aware there is a large and increasing amount of noise about cloud computing, so much so that it’s becoming increasingly difficult to extract a clean signal. This has always been the case but now that even vendors like Oracle (who have previously been sharply critical of cloud computing, in part for exactly this reason) are clambering aboard the bandwagon, it’s nearly impossible to tell who’s worth listening to and who’s just trying to sell you yesterday’s technology under today’s label.

It is with this in mind that I am happy to announce Planet Cloud, a news aggregator for cloud computing articles that is particularly fussy about its sources. In particular, unless you talk all cloud, all the time (which is rare – even I take a break every once in a while) then your posts won’t be included unless you can provide a cloud-specific feed. Fortunately most blogging software supports this capability and many of the feeds included at launch take advantage of it. You can access Planet Cloud at:

http://www.planetcloud.org/ or @planetcloud

Those of you aware of my disdain for SYS-CON’s antics might be surprised that we’ve opted to ask for forgiveness rather than permission, but you’ll also notice that we don’t run ads (nor do we have any plans to – except for a few that come to us via feeds and are thus paid to authors). As such this is a non-profit service to the cloud computing community, intended to filter out much of the noise in the same way that the Clouderati provides a fast track to the heart of the cloud computing discussion on Twitter. An unwanted side effect of this approach is that it is not possible for us to offer the feeds under a Creative Commons license, as would usually be the case for content we own.

Many thanks to Tim Freeman (@timfaas) for his contribution not only of the planetcloud.org domain itself, but also of a comprehensive initial list of feeds (including many I never would have thought of myself). Thanks also to Rackspace Cloud who provide our hosting and who have done a great job of keeping the site alive during the testing period over the last few weeks. Thanks to the Planet aggregator, a simple but effective piece of Python software for collating many feeds. And finally thanks to the various authors who have [been] volunteered for this project – hopefully we’ll be able to drive some extra traffic your way (of course if you’re not into it then that’s fine too – we’ll just remove you from the config file and you’ll vanish within 5 minutes).
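
For the curious, Planet’s configuration is just an ini file with one section per feed, so adding (or removing) someone looks roughly like this (illustrative entries only; the exact keys can vary between Planet versions):

[Planet]
name = Planet Cloud
link = http://www.planetcloud.org/

# one section per feed, ideally a cloud-specific (tag/category) feed
[http://example.com/category/cloud-computing/feed/]
name = Example Author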

Face it Flash, your days are numbered.

It’s no secret that I’m no fan of Adobe Flash:

It should be no surprise then that I’m stoked to see a vigorous debate taking place about the future/fate of Flash well ahead of schedule, and even happier to see Flash sympathisers already resorting to desperate measures including “playing the porn card” (not to mention Farmville which, in addition to the myriad annoying, invasive and privacy-invading advertisements, I will also be more than happy to see extinct). In my mind this all but proves how dire their situation has become with the sudden onslaught of mobile devices that deliberately ship without Flash malware*.

Let’s take a moment to talk about statistics. According to analysts there are currently “only” 1.3 billion Internet-connected PCs. To put that into context, there are already almost as many Internet-connected mobile devices. With a growth rate 2.5 times that of PCs, mobiles will soon become the dominant Internet access device. Of those new devices few support Flash (think Android, iPhone), and with good reason – they are designed to be small, simple, performant and to operate for hours/days between charges.

As if that’s not enough, companies with the power to make it happen would very much like for us to have a third device that fills the void between the two – a netbook or a tablet (like the iPad). For the most part (again being powered by Android and iPhone OS) these devices don’t support Flash either. Even if we were to give Adobe the benefit of the doubt in accepting their optimistic (some would say deceptive) claims that Flash is currently “reaching 99% of Internet-enabled desktops in mature markets” (for more on that subject see Lies, damned lies and Adobe’s penetration statistics for Flash), between these two new markets – which together will soon outnumber PCs – it seems inevitable that their overall penetration rate will drop well below 50% real soon now.

Here’s the best part though, Flash penetration doesn’t even have to drop below 50% for us to break the vicious cycle of designers claiming “99% penetration” and users then having to install Flash because so many sites arbitrarily depend on it (using Flash for navigation is a particularly heinous offense, as is using it for headings with fancy fonts). Even if penetration were to drop to 95% (I would argue it already has long ago, especially if you dispense with weasel wording like “mature markets” and even moreso if you do away with the arbitrary “desktop” restriction – talk about sampling bias!) that translates to turning away 1 in 20 of your customers. At what point will merchants start to flinch – 1 in 10 (90%)? 1 in 5 (80%)? 1 in 4 (75%)? 1 in 2 (50%)?

As if that’s not enough, according to Rich Internet Application Statistics, you would be losing some of your best customers – those who can afford to run Mac OS X (87% penetration) and Windows 7 (around 75% penetration) – not to mention those with iPhones and iPads (neither of which are the cheapest devices on the market). Oh yeah and you heard it right: according to them, Flash penetration on Windows 7 is an embarrassing 3 in 4 machines; even worse than Sun (now Oracle) Java (though ironically Microsoft’s own Silverlight barely reaches 1 in 2 machines).

While we’re at it, at what point does it become “willful false advertising” for Adobe and their army of Flash designers to claim such deep penetration? Victims who pay $$lots for Flash-based sites only to discover from server logs that a surprisingly large percentage of users are being turned away have every reason to be upset, and ultimately to seek legal recourse. Why hasn’t this already happened? Has it? In any case designers like “Paul Threatt, a graphic designer at Jackson Walker design group, [who] has filed a complaint to the FTC alleging false advertising” ought to think twice before pointing the finger at Apple (accused in this case over a few mockups, briefly shown and since removed, in an iPad promo video).

At the end of the day much of what is annoying about the web is powered by Flash. If you don’t believe me then get a real browser and install Flashblock (for Firefox or Chrome) or ClickToFlash (for Safari) and see for yourself. You will be pleasantly surprised by the absence of annoyances as well as impressed by how well even an old computer can perform when not laden with this unnecessary parasite*. What is less obvious (but arguably more important) is that your security will dramatically improve as you significantly reduce your attack surface (while you’re at it replace Adobe Reader with Foxit and enjoy even more safety). As someone who has been largely Flash-free for the last 3 months I can assure you life is better on the other side; in addition to huge performance gains I’ve had far fewer crashes since purging my machine – unsurprising given that, according to Apple’s Steve Jobs, “Whenever a Mac crashes more often than not it’s because of Flash”. “No one will be using Flash,” he says. “The world is moving to HTML5.”

So what can Adobe do about this now the horse has long since bolted? If you ask me, nothing. Dave Winer (another fellow who, like myself, “very much care[s] about an open Internet“) is somewhat more positive in posing the question What if Flash were an open standard? and suggesting that “Adobe might want to consider, right now, very quickly, giving Flash to the public domain. Disclaim all patents, open source all code, etc etc.“. Too bad it’s not that simple so long as one of the primary motivations for using Flash is bundled proprietary codecs like H.264 (which the MPEG LA have made abundantly clear will not be open sourced so long as they hold [over 900!] essential patents over it).

Update: Mobile Firefox Maemo RC3 has disabled Flash because “The Adobe Flash plugin used on many sites degraded the performance of the browser to the point where it didn’t meet Mozilla’s standards.” Sound familiar?

Update: Regarding the upcoming CS5 release which Adobe claims will “let you publish ActionScript 3 projects to run as native applications for iPhone“, this is not at all the same thing as the Flash plugin and will merely allow developers to create applications which suck more using a non-free SDK. No thanks. I’m unconvinced Apple will let such applications into the store anyway, citing performance concerns and/or the runtime rule.

Update: I tend to agree with Steven Wei that The best way for Adobe to save Flash is by killing it, but that doesn’t mean it’ll happen, and in any case if they wanted to do that they would have needed to start at least a year or two ago for the project to have any relevance, and it’s clear that they’re still busy flogging the binary plugin dead horse.

Update: Another important factor I neglected to mention above is that Adobe already struggle to maintain up-to-date binaries for a small number of major platforms and even then Mac and Linux are apparently second and third class citizens. If they’re struggling to manage the workload today then I don’t see what will make it any easier tomorrow with the myriad Linux/ARM devices hitting the market (among others). Nor would they want to – if they target HTML5, CSS3, etc. as proposed above then they have more resources to spend on having the best development environment out there.

* You may feel that words like “parasite” and “malware” are a bit strong for Flash, but when you think about it Flash has all the necessary attributes; it consumes your resources, weakens your security and is generally annoying. In short, the cost outweighs any perceived benefits.

HOWTO: Set up OpenVPN in a VPS

If, like me, you want to do any or all of the following things, you’ll want to tunnel your traffic over a VPN to a remote location:

  • Access media services restricted by geography (Hulu, FOX, BBC, etc.)
  • Bypass draconian censorship
  • Conceal your identity/location/etc.
  • Protect your machine from attackers
  • etc.

You could of course use a commercial service like AlwaysVPN in which case you typically pay ($5-10) per month or (~$1) per gigabyte, but many will prefer to run their own service. FWIW AlwaysVPN has worked very well for me but it’s time to move on.

First things first, you’ll want to find yourself a remote Linux server, and the easiest way to do so is to rent a virtual private server (VPS) from one of a myriad of providers. No point spending more than 10 bucks a month on it as you don’t need much in the way of resources (only bandwidth). Check out lowendbox.com for VPS deals under $7/month or just run with a BurstNET VPS starting at $5.95/month for a very reasonable resource allocation (including a terabyte of bandwidth!).

Once you’ve placed your order and passed their fraud detection systems (which includes an automated callback on the number you supply) you’ll have to wait 12-24 hours for activation, upon which you’ll receive an email with details for accessing your vePortal control panel as well as the VPS itself (via SSH). You’ll get 2 IP addresses and I dedicated the second to both inbound and outbound traffic for VPN clients (which live on a 10.x RFC1918 subnet and access the Internet via SNAT).

If you didn’t already do so when signing up then choose a sensible OS in your control panel (“OS Reload”) like Ubuntu 8.04 – a Long Term Support release which means you’ll be getting security fixes for years to come – or better yet, 10.04 if it’s been released by the time you read this (it’s the next LTS release). Do an “apt-get install unattended-upgrades” and you ought to be fairly safe until 2015. You’re also going to need your TUN/TAP device(s) enabled which involves another trip to the control panel (“Enable Tun/Tap”) and/or a helpdesk ticket (http://support.burst.net/). If /dev/net/tun doesn’t exist then you can create it with “mknod /dev/net/tun c 10 200”.
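
For reference, the basic prep boils down to something like this (a sketch assuming a Debian/Ubuntu image as above; adjust to taste):

apt-get update
apt-get install -y unattended-upgrades   # keep the box patched automatically

# make sure the TUN device node exists (some VPS containers lose it)
test -c /dev/net/tun || (mkdir -p /dev/net && mknod /dev/net/tun c 10 200)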

To install OpenVPN it’s just a case of doing “apt-get install openvpn”… you could also download a free 2-user version of OpenVPN-AS from http://openvpn.net/ but I found it had problems trying to load netfilter modules that were already loaded so YMMV. If you want support or > 2 users you’ll be looking at a very reasonable $5/user – you’re on your own with the free/open source version but there are no such limitations either.

OpenVPN uses PKI but rather than go to a certificate authority we’ll set one up ourselves. EasyRSA is included to simplify this process so it’s just a case of doing something like this:

cd /usr/share/doc/openvpn/examples/easy-rsa/2.0
. ./vars
./clean-all
./build-ca
./build-dh
openvpn --genkey --secret ta.key
./build-key-server server
./build-key client1
./build-key client2
./build-key client3

It’ll ask you a bunch of superfluous information like your country, state, city, organisation, etc. but I just filled these out with ‘.’ (blank rather than the defaults) – mostly so as not to give away information unnecessarily to anyone who asks. The only field that matters is the Common Name which you probably want to leave as ‘server’, ‘client1’ (or some other username like ‘samj’), etc. When you’re done here you’ll want to “cp keys/* /etc/openvpn” so OpenVPN can see them.

Next you’ll want to configure the OpenVPN server and client(s) based on examples in /usr/share/doc/openvpn/examples/sample-config-files. I’m running two server instances – a “Faster” one for the best performance when I’m on a “clean” connection (which uses udp/1194) and a “Compatible” one for when I’m on a restricted/corporate network (which shares tcp/443 with HTTPS). I did a “zcat server.conf.gz > /etc/openvpn/faster.conf” and edited it so that (when filtered with cat faster.conf | grep -v "^#" | grep -v "^;" | grep -v "^$") it looks something like this:

local 173.212.x.x
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
server 10.9.0.0 255.255.255.0
ifconfig-pool-persist faster-ipp.txt
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
client-to-client
keepalive 10 120
tls-auth ta.key 0
cipher BF-CBC
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
status /var/log/openvpn/faster-status.log
log-append /var/log/openvpn/faster.log
verb 3
mute 20

Noteworthy points:

  • local specifies which IP to bind to – I used the second (of two) that BurstNET had allocated to my VPS so as to keep the first for other servers, but you could just as easily use the first and then put clients behind the second, which would appear to be completely “clean”.
  • We’re using “tun” (tunneling/routing) rather than “tap” (ethernet bridging) because BurstNET use venet interfaces (which lack MAC addresses) rather than veth. I wasn’t able to get bridging up and running as originally intended.
  • There are various hardening options but to keep it simple I just run as nobody:nogroup and use tls-auth (having generated the optional ta.key with “openvpn --genkey --secret ta.key” above).
  • Pushing Google Public DNS addresses to clients as they won’t be able to use their local resolver addresses once connected. Also telling them to route all traffic over the VPN (which would otherwise only intercept traffic for a remote network).
  • Configured separate log files and subnets (10.8.0.0/24 and 10.9.0.0/24) for the “faster” and “compatible” instances.

The “compatible.conf” file varies only with the following lines:

port 443
proto tcp
server 10.8.0.0 255.255.255.0
status /var/log/openvpn/compatible-status.log
log-append /var/log/openvpn/compatible.log

Next you’ll want to copy over client.conf from /usr/share/doc/openvpn/examples/sample-config-files (but set AUTOSTART="compatible faster" in /etc/default/openvpn so the init scripts only start the two server instances and ignore the client config).

client
dev tun
proto udp
remote 173.212.x.x 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca burstnet-ca.crt
cert burstnet-client.crt
key burstnet-client.key
ns-cert-type server
tls-auth burstnet-ta.key 1
tls-cipher DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA
cipher BF-CBC
comp-lzo
verb 3

As I’ve got a bunch of different connections on my clients I’ve prepended “burstnet-” to all the files and called the main config files “BurstNET-Faster.conf” and “BurstNET-Compatible.conf” (which appear in the Tunnelblick menu on OS X as “BurstNET-Faster” and “BurstNET-Compatible” respectively – thanks to AlwaysVPN for this idea). The only difference for BurstNET-Compatible.conf is:

proto tcp
remote 173.212.x.x 443

You’re now almost ready for the smoke test (and indeed should be able to connect) but you’ll end up on a 10.x subnet and therefore unable to communicate with anyone. The fix is “iptables -t nat -A POSTROUTING -s 10.8.0.0/255.255.255.0 -j SNAT --to-source 173.212.x.x” (where the source IP is one of those allocated to you).

Being paranoid though I want to lock down my server with a firewall, which for Ubuntu typically means ufw (you’ll need to “apt-get install ufw” if you haven’t already). My ufw rules look something like this:

# ufw status
Status: active
To                         Action  From
--                         ------  ----
Anywhere                   ALLOW   1.2.3.4
1194/udp                   ALLOW   Anywhere
443/tcp                    ALLOW   Anywhere
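
If you’re starting from scratch, rules like those above can be created with something along these lines (where 1.2.3.4 stands in for your home IP, as in the listing):

ufw allow from 1.2.3.4     # SSH (and anything else) from home
ufw allow 1194/udp         # the "faster" OpenVPN instance
ufw allow 443/tcp          # the "compatible" OpenVPN instance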

The first rule allows me to access the server from home via SSH and 1194/udp and 443/tcp allow VPN clients in. To allow the clients to access the outside world we’re going to have to rewrite their traffic to come from a public IP (which is called “SNAT”), but first you’ll want to enable forwarding by setting DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw. Then it’s just a case of adding something like this to /etc/ufw/before.rules:

# nat Table rules
*nat
:POSTROUTING ACCEPT [0:0]
# SNAT traffic from VPN subnet.
-A POSTROUTING -s 10.8.0.0/255.255.255.0 -j SNAT --to-source 173.212.x.x
-A POSTROUTING -s 10.9.0.0/255.255.255.0 -j SNAT --to-source 173.212.x.x
# don't delete the 'COMMIT' line or these nat table rules won't be processed
COMMIT
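
Depending on the image you may also need to turn on kernel IP forwarding, otherwise the SNAT rules will never see any packets. A sketch, assuming Ubuntu’s ufw layout (double-check the paths on your system):

# enable forwarding now (doesn't survive a reboot by itself)
sysctl -w net.ipv4.ip_forward=1

# and persistently, via ufw's own sysctl file:
# uncomment/add this line in /etc/ufw/sysctl.conf
# net/ipv4/ip_forward=1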

You may need to enable UFW (“ufw enable”) and if you lose access to your server you can always disable UFW (“ufw disable”) using the rudimentary “Console” function of vePortal.

On the client side you’ve got support for (at least) Linux (e.g. “openvpn --config /etc/openvpn/BurstNET-Faster.conf“), Mac and Windows and there’s various GUIs (including OpenVPN GUI for Windows and Tunnelblick for Mac OS X). I’m (only) using Tunnelblick, and after copying Tunnelblick.app to /Applications I just need to create a ~/Library/openvpn directory and drop these files in there:

  • BurstNET-Compatible.conf
  • BurstNET-Faster.conf
  • burstnet-ca.crt
  • burstnet-client.key
  • burstnet-client.crt
  • burstnet-ta.key

When Tunnelblick’s running I have a little black tunnel symbol in the top right corner of my screen from which I can connect & disconnect as necessary.
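
Once connected it’s worth a quick sanity check that traffic really is leaving via the tunnel; something along these lines should do (ifconfig.me is just one of many “what’s my IP” services, substitute your favourite):

# did we get a 10.x address on the tunnel interface?
ifconfig tun0

# is the default route now pointing at the VPN?
netstat -rn | head

# does the outside world see the VPS's address instead of yours?
curl ifconfig.me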

I think that’s about it – hopefully there’s nothing critical I’ve missed but feel free to follow up in the comments if you’ve anything to add. I’m now happily streaming from Hulu and Fox in the US, downloading Amazon MP3s (using my US credit card), and have a reasonable level of anonymity. If I was in Australia I’d have little to fear from censorship (and there’s virtually nothing they can do to stop me) and as my machine has a private IP I’m effectively firewalled.

Update: It seems that my VPS is occasionally restarted (which is not all that surprising) and forgets about its tun/tap device (which is). The device node itself is still visible in the filesystem, but with no driver to connect to in the kernel it doesn’t work and OpenVPN doesn’t start. You can test if your tun device is working using cat:

WORKING:

cat /dev/net/tun

cat: /dev/net/tun: File descriptor in bad state

NOT WORKING:

cat /dev/net/tun

cat: /dev/net/tun: No such device

I’ve also noticed that ufw may need to be manually started with a ‘ufw enable’. Hope that saves you some time diagnosing problems!
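
If these resets become a nuisance, a small boot-time check can at least flag the problem and bring the firewall back up before you notice anything’s wrong. A rough sketch, assuming your image runs /etc/rc.local at boot:

#!/bin/sh
# The device node can survive a restart while the driver behind it is gone;
# 'cat' then reports "No such device" and the fix is host-side (vePortal/helpdesk).
cat /dev/net/tun 2>&1 | grep -q "No such device" && \
    logger "openvpn: TUN driver missing - re-enable Tun/Tap for this VPS"

# ufw sometimes comes back disabled after a restart, so switch it back on
ufw --force enable

exit 0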