HTTP2 Expression of Interest

Here’s my (rather rushed) personal submission to the Internet Engineering Task Force (IETF) in response to their Call for Expressions of Interest in new work around HTTP; specifically, a new wire-level protocol for the semantics of HTTP (i.e., what will become HTTP/2.0), and new HTTP authentication schemes. You can also review the submissions of Facebook, Firefox, Google, Microsoft, Twitter and others.

[The views expressed in this submission are mine alone and not (necessarily) those of Citrix, Google, Equinix or any other current, former or future client or employer.]

My primary interest is in the consistent application of HTTP to (“cloud”) service interfaces, with a view to furthering the goals of the Open Cloud Initiative (OCI); namely widespread and ideally consistent interoperability through the use of open standard formats and interfaces.

In particular, I strongly support the use of the existing metadata channel (headers) over envelope overlays (SOAP) and alternative/ancillary representations (typically in JSON/XML), as this should greatly simplify interfaces while ensuring consistency between services. The current approach to cloud “standards” calls on vendors to define their own formats and interfaces and to maintain client libraries for the myriad languages du jour. In an application calling on multiple services this can result in a small amount of business logic calling on various bulky, often poorly written and/or unmaintained libraries. The usual counter to the interoperability problems this creates is to write “adapters” (à la ODBC) which expose a lowest-common-denominator interface, thus hiding functionality and creating an “impedance mismatch”. Ultimately this gives rise to performance, security, cost and other issues.

By using HTTP as intended it is possible to construct (cloud) services that can be consumed using nothing more than the built-in, standards-compliant HTTP client. I’m not writing to discuss whether this is a good idea, but to expose a use case that I would like to see considered, and one that we have already applied with some success in the Open Cloud Computing Interface (OCCI).

To illustrate the original scope, early versions of HTTP (RFC 2068) included not only the Link header (recently revived by Mark Nottingham in RFC 5988) but also LINK and UNLINK verbs to manipulate it (recently proposed for revival by James Snell). Unfortunately hypertext, and in particular HTML (which includes linking in-band rather than out-of-band), arguably stole HTTP’s thunder, leaving the overwhelming majority of formats that lack in-band linking (images, documents, virtual machines, etc.) high and dry and resulting in inconsistent linking styles (HTML vs XML vs PDF vs DOC etc.). This limited the extent of web linking as well as the utility of HTTP for innovative applications including APIs. Indeed HTTP could easily and simply meet the needs of many “Semantic Web” applications, but that is beyond the scope of this particular discussion.

To illustrate by way of example, consider the following synthetic request/response for an image hosting site which incorporates Web Linking (RFC 5988), Web Categories (draft-johnston-http-category-header) and Web Attributes (yet to be published):

GET /1.jpg HTTP/1.0

HTTP/1.0 200 OK
Content-Length: 69730
Content-Type: image/jpeg
Link: <http://creativecommons.org/licenses/by-sa/3.0/>; rel="license"
Link: </2.jpg>; rel="next"
Category: dog; label="Dog"; scheme="http://example.org/animals"
Attribute: name="Spot"
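
To make the point concrete, here is a rough sketch (mine, not part of any specification) of how such a response could be consumed with nothing more than a standard HTTP client, in this case the fetch API in TypeScript; the example.org host and the deliberately simplified parsing rules are assumptions rather than a definitive implementation:

interface WebLink {
  target: string;
  rel?: string;
}

// Split a (possibly comma-combined) Link header such as
//   <http://...>; rel="license", </2.jpg>; rel="next"
// into its individual links. Simplified; not a full RFC 5988 parser.
function parseLinkHeader(value: string): WebLink[] {
  return value.split(/,(?=\s*<)/).map((part) => {
    const [, target = ""] = part.match(/<([^>]*)>/) ?? [];
    const [, rel] = part.match(/rel="?([^";]+)"?/) ?? [];
    return { target, rel };
  });
}

async function describeImage(url: string): Promise<void> {
  const response = await fetch(url);
  const links = parseLinkHeader(response.headers.get("Link") ?? "");
  console.log("license:", links.find((l) => l.rel === "license")?.target);
  console.log("next:", links.find((l) => l.rel === "next")?.target);
  console.log("category:", response.headers.get("Category"));
  console.log("attribute:", response.headers.get("Attribute"));
}

describeImage("http://example.org/1.jpg"); // hypothetical host serving the response above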

In order to “animate” resources, consider the use of the Link header to start a virtual machine in the Open Cloud Computing Interface (OCCI):

Link: </compute/123;action=start>; rel="http://schemas.ogf.org/occi/infrastructure/compute/action#start"
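
Assuming, as in OCCI’s HTTP rendering, that the advertised action is invoked by POSTing to the target of that Link (the base URL below is hypothetical), a client could start the machine without any bespoke library; again, this is only a sketch:

const START_REL =
  "http://schemas.ogf.org/occi/infrastructure/compute/action#start";

async function startCompute(resourceUrl: string): Promise<void> {
  // Discover the action from the resource's metadata channel (headers).
  const head = await fetch(resourceUrl, { method: "HEAD" });
  const links = (head.headers.get("Link") ?? "").split(/,(?=\s*<)/);
  const action = links.find((link) => link.includes(`rel="${START_REL}"`));
  const target = action?.match(/<([^>]*)>/)?.[1];
  if (!target) throw new Error("start action not advertised by the server");

  // Trigger the action; no bespoke client library required.
  const response = await fetch(new URL(target, resourceUrl), { method: "POST" });
  console.log("start requested:", response.status);
}

startCompute("http://example.org/compute/123"); // hypothetical OCCI endpoint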

The main objection to the use of the metadata channel in this fashion (beyond the application of common sense in determining what constitutes data vs metadata) is implementation issues (e.g. arbitrary limitations, i18n, handling of multiple headers, etc.), which could be largely resolved through specification. For example, the (necessary) use of RFC 2231 encoding for header values (but not keys) in RFC 5988 Web Linking gives rise to unnecessary complexity that may lead to interoperability, security and other issues, all of which could be resolved by specifying Unicode for keys and/or values. Another concern is the absence of features such as a standardised ability to return a collection (e.g. multiple responses). I originally suggested that HTTP 2.0 incorporate such ideas in 2009.
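
To illustrate the kind of complexity I mean: an RFC 2231/5987 “extended” parameter arrives as charset'language'percent-encoded-octets, so even a trivial non-ASCII value forces every client to carry a decoder along the lines of the sketch below (the example value is invented), whereas a plain Unicode header value would need none of this:

// Decode an RFC 2231/5987 extended value of the form charset'language'percent-octets.
function decodeExtValue(extValue: string): string {
  const [charset, /* language */, encoded = ""] = extValue.split("'");
  if (charset?.toUpperCase() !== "UTF-8") {
    throw new Error(`unsupported charset: ${charset}`);
  }
  return decodeURIComponent(encoded); // assumes the octets are UTF-8
}

// title*=UTF-8''%c2%a3%20and%20%e2%82%ac%20rates  ->  "£ and € rates"
console.log(decodeExtValue("UTF-8''%c2%a3%20and%20%e2%82%ac%20rates"));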

I’ll leave the determination of what would ultimately be required for such applications to the working group (should this use case be considered interesting by others), and while better support for performance, scalability and mobility is obviously required, this has already been discussed at length. I strongly support Poul-Henning Kamp’s statement that “I think it would be far better to start from scratch, look at what HTTP/2.0 should actually do, and then design a simple, efficient and future proof protocol to do just that, and leave behind all the aggregations of badly thought out hacks of HTTP/1.1” (and agree that we should incorporate the concept of an “HTTP Router”), as well as Tim Bray’s statement that “I’m inclined to err on the side of protecting user privacy at the expense of almost all else” (and believe that we should prevent eavesdroppers from learning anything about an encrypted transaction; something we failed to do with DNSSEC even given alternatives like dnscurve that ensure confidentiality as well as integrity).

RIP Adobe Flash (1996-2011) – now let’s bury the dead

Adobe kills mobile Flash, giving Steve Jobs the last laugh, reports The Guardian’s Charles Arthur following the late Steve Jobs’ epic Thoughts on Flash rant 18 months ago. It’s been about 2.5 years since I too got sick of Flash bringing my powerful Mac to its knees, so I went after the underlying lie that perpetuates the problem, explaining why Adobe Flash penetration is more like 50% than 99%. I even made progress Towards a Flash free YouTube killer, only it ended up being YouTube themselves who eventually started testing a YouTube HTML5 Video Player (while you’re there please do your bit for the open web by clicking “Join the HTML5 Trial” at the bottom of that page).

“I heard a sound as though a million restaurant websites cried out at once” – Charles Arthur

You see, armed with this heavily manipulated statistic, armies of developers are to this day fraudulently duping their paying clients into deploying a platform that will invariably turn away a percentage of their business at the door, in favour of annoying flaming logos and other atrocities that blight the web:

How much business can you tolerate losing? If you’ve got 95% penetration then you’re turning away 1 in 20 customers. At 90% you’re turning away 1 in 10. At 50% half of your customers won’t even get to see your product. I don’t know too many businesses who can afford to turn away any customers in this economic climate.

In my opinion the only place Flash technology has in today’s cloud computing environment is as a component of the AIR runtime for building (sub-par) cross-platform applications, and even then I’d argue that they should be using HTML5. As an Adobe Creative Suite Master Collection customer I’m very happy to see them dropping support for this legacy technology to focus on generating interactive HTML5 applications, and look forward to a similar announcement for desktop versions of the Flash player in the not too distant future.

In any case, with the overwhelming majority of devices being mobile today and with more and more of them including browser functionality, the days of Flash were numbered even before Adobe put the mobile version out of its misery. Let’s not drag this out any longer than we have to, and bury the dead by uninstalling Flash Player. Here are instructions for Mac OS X and Windows, and if you’re not ready to take the plunge into an open standards based HTML5 future then at least install FlashBlock for Chrome or Firefox (surely you’re not still using IE?).

Update: Flash for TV is dead too, as if killing off mobile wasn’t enough: Adobe Scrapping Flash for TV, Too

Update: Rich Internet Application (RIA) architectures in general are in a lot of trouble — Microsoft are killing off Silverlight as well: Mm, Silverlight, what’s that smell? Yes, it’s death

Update: In a surprising move that will no doubt be reversed, RIM announced it would continue developing Flash on the PlayBook (despite almost certainly lacking the ability to do so): RIM vows to keep developing Flash for BlackBerry PlayBook – no joke

How I tried to keep OCCI alive (and failed miserably)

I was going to let this one slide but following a calumniatory missive to his “followers” by the Open Cloud Computing Interface‘s self-proclaimed “Founder & Chair”, Sun refugee Thijs Metsch, I have little choice but to respond in my defense (particularly as “The Chairs” were actively soliciting followup from others on-list in support).

Basically a debate came to a head that has been brewing on- and off-list for months regarding the Open Grid Forum (OGF)‘s attempts to prevent me from licensing my own contributions (essentially the entire normative specification) under a permissive Creative Commons license (as an additional option to the restrictive OGF license) and/or submit them to the IETF as previously agreed and as required by the OGF’s own policies. This was on the grounds that “Most existing cloud computing specifications are available under CC licenses and I don’t want to give anyone any excuses to choose another standard over ours” and that the IETF has an excellent track record of producing high quality, interoperable, open specifications by way of a controlled yet open process. This should come as no surprise to those of you who know I am and will always be a huge supporter of open cloud, open source and open standards.

The OGF process had failed to deliver after over 12 months of deadline extensions – the current spec is frozen in an incomplete state (lacking critical features like collections, search, billing, security, etc.) as a result of being prematurely pushed into public comment, nobody is happy with it (including myself), the community has all but dissipated (except for a few hard core supporters, previously including myself) and software purporting to implement it actually implements something else altogether (see for yourself). There was no light at the end of the tunnel, and with both OGF29 and IETF78 just around the corner, yesterday I took a desperate gamble to keep OCCI alive (as a CC-licensed spec, an IETF Internet-Draft or both).

I confirmed that I was well within my rights to revoke any copyright, trademark and other rights previously granted (apparently it was amateur hour, as OGF had failed to obtain an irrevocable license from me for my contributions) and volunteered to do so if restrictions on reuse by others weren’t lifted and/or the specification submitted to the IETF process as agreed and required by their own policies. Thijs’ colleague (and quite probably his boss at Platform Computing), Christopher Smith (who doubles as OGF’s outgoing VP of Standards), promptly responded, questioning my motives (which I can assure you are pure) and issuing a terse legal threat about how the “OGF will protect its rights” (against me, over my own contributions, no less). Thijs then followed up shortly after saying that they “see the secretary position as vacant from now on”, and despite claims to the contrary I really couldn’t give a rat’s arse about a title bestowed upon me by a past-its-prime organisation struggling (and failing, I might add) to maintain relevance. My only concern is that OCCI has a good home, and if anything Platform have just captured the sort of control over it that VMware enjoys over DMTF/vCloud, with Thijs being the only remaining active editor.

I thought that would be the end of it and had planned to let sleeping dogs lie until today’s disgraceful, childish, coordinated and most of all completely unnecessary attack on an unpaid volunteer that rambled about “constructive technical debate” and “community driven consensus”, thanking me for my “meaningful contributions” but then calling on others to take up the pitchforks by “welcom[ing] any comments on this statement” on- or off-list. The attacks then continued on Twitter with another OGF official claiming that this “was a consensus decision within a group of, say, 20+ active and many many (300+) passive participants” (despite this being the first any of us had heard of it) and then calling my claims of copyright ownership “genuine bullshit” and report of an implementor instantly pulling out because they (and I quote) “can’t implement something if things are not stable” a “damn lie“, claiming I was “pissed” and should “get over it and stop crying” (needless to say they were promptly blocked).

Anyway, as you can see there’s more to it than Thijs’ diatribe would have you believe, and so far as I’m concerned OCCI, at least in its current form, is long since dead. I am undecided as to whether to revoke OGF’s licenses at this time, but it probably doesn’t matter as they agree I retain the copyrights and I think their chance of success is negligible – nobody in their right mind would implement the product of such a dysfunctional group, and those who already did have long since found alternatives. That’s not to say the specification won’t live on in another form, but now that the OGF have decided to go nuclear it’s going to have to be in a more appropriate forum – one that furthers the standard rather than constantly holding it back.

Update: My actions have been universally supported outside of OGF and in the press (and here and here and here and here etc.) but unsurprisingly universally criticised from within – right up to the chairman of the board who claimed it was about trust rather than IPR (BS – I’ve been crystal clear about my intentions from the very beginning). They’ve done a bunch of amateur lawyering and announced that “OCCI is becoming an OGF proposed standard” but have not been able to show that they were granted a perpetual license to my contributions (they weren’t). They’ve also said that “OGF is not really against using Creative Commons” but clearly have no intention to do so, apparently preferring to test my resolve and, if need be, the efficacy of the DMCA. Meanwhile back at the ranch the focus is on bright shiny things (RDF/RDFa) rather than getting the existing specification finished.

Protip: None of this has anything to do with my current employer so let’s keep it that way.

Face it, Flash, your days are numbered.

It’s no secret that I’m no fan of Adobe Flash.

It should be no surprise then that I’m stoked to see a vigorous debate taking place about the future/fate of Flash well ahead of schedule, and even happier to see Flash sympathisers already resorting to desperate measures including “playing the porn card” (not to mention Farmville which, in addition to the myriad annoying and privacy-invading advertisements, I will also be more than happy to see extinct). In my mind this all but proves how dire their situation has become with the sudden onslaught of mobile devices that deliberately ship without Flash malware*.

Let’s take a moment to talk about statistics. According to analysts there are currently “only” 1.3 billion Internet-connected PCs. To put that into context, there are already almost as many Internet-connected mobile devices. With a growth rate 2.5 times that of PCs, mobiles will soon become the dominant Internet access device. Of those new devices, few of them support Flash (think Android, iPhone), and with good reason – they are designed to be small, simple, performant and operate for hours/days between charges.

As if that’s not enough, companies with the power to make it happen would very much like for us to have a third device that fills the void between the two – a netbook or a tablet (like the iPad). For the most part (again being powered by Android and iPhone OS) these devices don’t support Flash either. Even if we were to give Adobe the benefit of the doubt in accepting their deceptively optimistic claims that Flash is currently “reaching 99% of Internet-enabled desktops in mature markets” (for more on that subject see Lies, damned lies and Adobe’s penetration statistics for Flash), between these two new markets it seems inevitable that their penetration rate will drop well below 50% real soon now.

Here’s the best part though: Flash penetration doesn’t even have to drop below 50% for us to break the vicious cycle of designers claiming “99% penetration” and users then having to install Flash because so many sites arbitrarily depend on it (using Flash for navigation is a particularly heinous offense, as is using it for headings with fancy fonts). Even if penetration were to drop to 95% (I would argue it already has, long ago, especially if you dispense with weasel wording like “mature markets” and even more so if you do away with the arbitrary “desktop” restriction – talk about sampling bias!) that translates to turning away 1 in 20 of your customers. At what point will merchants start to flinch – 1 in 10 (90%)? 1 in 5 (80%)? 1 in 4 (75%)? 1 in 2 (50%)?

As if that’s not enough, according to Rich Internet Application Statistics, you would be losing some of your best customers – those who can afford to run Mac OS X (87% penetration) and Windows 7 (around 75% penetration) – not to mention those with iPhones and iPads (neither of which are the cheapest devices on the market). Oh yeah, and you heard it right: according to them, Flash penetration on Windows 7 is an embarrassing 3 in 4 machines; even worse than Sun (now Oracle) Java (though ironically Microsoft’s own Silverlight barely reaches 1 in 2 machines).

While we’re at it, at what point does it become “willful false advertising” for Adobe and their army of Flash designers to claim such deep penetration? Victims who pay $$lots for Flash-based sites only to discover from server logs that a surprisingly large percentage of users are being turned away have every reason to be upset, and ultimately to seek legal recourse. Why hasn’t this already happened? Has it? In any case designers like “Paul Threatt, a graphic designer at Jackson Walker design group, [who] has filed a complaint to the FTC alleging false advertising” ought to think twice before pointing the finger at Apple (accused in this case over a few mockups, briefly shown and since removed, in an iPad promo video).

At the end of the day much of what is annoying about the web is powered by Flash. If you don’t believe me then get a real browser and install Flashblock (for Firefox or Chrome) or ClickToFlash (for Safari) and see for yourself. You will be pleasantly surprised by the absence of annoyances as well as impressed by how well even an old computer can perform when not laden with this unnecessary parasite*. What is less obvious (but arguably more important) is that your security will dramatically improve as you significantly reduce your attack surface (while you’re at it, replace Adobe Reader with Foxit and enjoy even more safety). As someone who has been largely Flash-free for the last 3 months I can assure you life is better on the other side; in addition to huge performance gains I’ve had far fewer crashes since purging my machine – unsurprising given that, according to Apple’s Steve Jobs, “Whenever a Mac crashes more often than not it’s because of Flash”. “No one will be using Flash,” he says. “The world is moving to HTML5.”

So what can Adobe do about this now the horse has long since bolted? If you ask me, nothing. Dave Winer (another fellow who, like myself, “very much care[s] about an open Internet“) is somewhat more positive in posing the question What if Flash were an open standard? and suggesting that “Adobe might want to consider, right now, very quickly, giving Flash to the public domain. Disclaim all patents, open source all code, etc etc.“. Too bad it’s not that simple so long as one of the primary motivations for using Flash is bundled proprietary codecs like H.264 (which the MPEG LA have made abundantly clear will not be open sourced so long as they hold [over 900!] essential patents over it).

Update: Mobile Firefox Maemo RC3 has disabled Flash because “The Adobe Flash plugin used on many sites degraded the performance of the browser to the point where it didn’t meet Mozilla’s standards.” Sound familiar?

Update: Regarding the upcoming CS5 release which Adobe claims will “let you publish ActionScript 3 projects to run as native applications for iPhone“, this is not at all the same thing as the Flash plugin and will merely allow developers to create applications which suck more using a non-free SDK. No thanks. I’m unconvinced Apple will let such applications into the store anyway, citing performance concerns and/or the runtime rule.

Update: I tend to agree with Steven Wei that The best way for Adobe to save Flash is by killing it, but that doesn’t mean it’ll happen; in any case, if they wanted to do that they would have needed to start at least a year or two ago for the project to have any relevance, and it’s clear that they’re still busy flogging the dead horse of the binary plugin.

Update: Another important factor I neglected to mention above is that Adobe already struggle to maintain up-to-date binaries for a small number of major platforms, and even then Mac and Linux are apparently second and third class citizens. If they’re struggling to manage the workload today then I don’t see what will make it any easier tomorrow with the myriad Linux/ARM devices hitting the market (among others). Nor would they want to – if they targeted HTML5, CSS3, etc. as proposed above then they would have more resources to spend on building the best development environment out there.

* You may feel that words like “parasite” and “malware” are a bit strong for Flash, but when you think about it Flash has all the necessary attributes; it consumes your resources, weakens your security and is generally annoying. In short, the cost outweighs any perceived benefits.

NoSQL “movement” roadblocks HTML5 WebDB

Today’s rant is coming between me and a day of skiing so I’ll keep it brief. While trying to get to the bottom of why I can’t enjoy offline access to Google Apps & other web-based applications with Gears on Snow Leopard I came across a post noting Chrome, Opera to support html5 webdb, FF & IE won’t. This seemed curious as HTML5 is powering on towards last call and there are already multiple implementations of both applications and clients that run them. Here’s where we’re at:

  • Opera: “At opera, we implemented web db […] it’s likely we will [ship it] as people have built on it”
  • Google [Chrome]: “We’ve implemented WebDB … we’re about to ship it”
  • Microsoft [IE]: “We don’t think we’ll reasonably be able to ship an interoperable version of WebDB”
  • Mozilla [Firefox]: “We’ve talked to a lot of developers, the feedback we got is that we really don’t want SQL […] I don’t think mozilla plans to ship it.”

Of these, Microsoft’s argument (aside from being disproven by existing interoperable implementations) can be summarily dismissed because offline web applications are a direct competitor to desktop applications and therefore Windows itself. As if that’s not enough, they have their own horse in this race that they don’t have to share with anyone in the form of Silverlight. As such it’s completely understandable (however lame) for them to spread interoperability FUD about competing technology.

Mozilla’s argument that “we really don’t want SQL” is far more troublesome and posts like this follow an increasingly common pattern:

  1. Someone proposes SQL for something (given we’ve got 4 decades of experience with it)
  2. Religious zealots trash talk SQL, offering a dozen or so NoSQL alternatives (all of which are in varying stages of [early] development)
  3. “My NoSQL db is bigger/better/faster than yours” debate ensues
  4. Nobody does anything

Like it or not, SQL is a sensible database interface for web applications today. It’s used almost exclusively on the server side already (except perhaps for the largest of sites, and even these tend to use SQL for some components) so developers are very well equipped to deal with it. It has been proven to work (and work well) by demanding applications including Gmail, Google Docs and Google Calendar, and is anyway independent of the underlying database engine. Ironically work has already been done to provide SQL interfaces to “NoSQL” databases (which just goes to show the “movement” completely misses the point) so those who really don’t like SQLite (which happens to drive most implementations today) could conceivably create a drop-in replacement for it. Indeed power users like myself would likely appreciate a browser with embedded MySQL as a differentiating feature.
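
For those who haven’t used it, the WebDB (Web SQL Database) API at issue here is small and familiar; the following is a minimal sketch of the interface as shipped by WebKit-based browsers, with the type declarations supplied by hand since the draft was never standardised (the table and database names are of course made up):

// Hand-supplied declarations for the never-standardised Web SQL API.
declare function openDatabase(
  name: string,
  version: string,
  displayName: string,
  estimatedSize: number
): Database;

interface Database {
  transaction(callback: (tx: SQLTransaction) => void): void;
}

interface SQLTransaction {
  executeSql(
    sql: string,
    args?: unknown[],
    onSuccess?: (tx: SQLTransaction, results: SQLResultSet) => void
  ): void;
}

interface SQLResultSet {
  rows: { length: number; item(index: number): any };
}

// Names and sizes below are illustrative only.
const db = openDatabase("notes", "1.0", "Offline notes", 2 * 1024 * 1024);

db.transaction((tx) => {
  tx.executeSql("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)");
  tx.executeSql("INSERT INTO notes (body) VALUES (?)", ["works offline"]);
  tx.executeSql("SELECT body FROM notes", [], (_tx, results) => {
    for (let i = 0; i < results.rows.length; i++) {
      console.log(results.rows.item(i).body);
    }
  });
});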

In any case the API [cs]hould be versioned so we can offer alternatives like WebSimpleDB in the future. Right now though the open web is being held back by outdated standards and proprietary offerings controlled by single vendors (e.g. Adobe’s AIR and Microsoft’s Silverlight) are lining up to fill in the gap. Those suggesting “it’s worth stepping back” because “there are other options that should be considered” which “might serve those needs better” would want to take a long, hard look at whether their proposed alternatives are really ready for prime time, or indeed even necessary. To an outsider trying to solve real business problems today a lot of it looks like academic wankery.

An open letter to the NoSQL community

Following some discussion on Twitter today I posted this thread to the nosql-discussion group. You can see the outcome for yourself (essentially, and unsurprisingly I might add, “please feel free to take your software and call it whatever you want“).

While I don’t want to mess with their momentum (it’s a good cause, if branded with an unfortunate name) this isn’t the first time the issue’s been raised and I doubt it will be the last. I do however think that “no SQL” is completely missing the point and that the core concern is trading consistency for scalability. At the end of the day developers and users will deploy what is most appropriate for the task at hand.

There’s already been a question about alternatives to SQL, and knowing how Structured Query Language (SQL) came to be (consider the interfaces before it existed and compare that to what we have today) I figure it’s only a matter of time before history repeats itself and we end up creating something like Cloud Query Language (CQL) (a deliberate play on words). The closer this is to ANSI SQL the better it will be, both in terms of technology reuse and of the bags of bones that need to understand how it works… for the same reason the Open Cloud Computing Interface (OCCI) tries very hard to be as close as possible to HyperText Transfer Protocol (HTTP).

———- Forwarded message ———-
From: Sam Johnston
Date: Tue, Oct 27, 2009 at 3:33 PM
Subject: An open letter to the NoSQL community
To: NoSQL

Afternoon NoSQLers,

I write to you as a huge fan of next generation databases, but also as someone who doesn’t associate in any way with the “NoSQL” moniker. I don’t particularly care for SQL and appreciate the contrived contention it creates, but I think it misses the point somewhat and alienates people like myself who might otherwise have been drawn to the project.

I assume that by “NoSQL” we’re referring to the next generation of [generally cloud-based] databases such as Google’s BigTable, Amazon’s SimpleDB, Facebook’s Cassandra, etc., in which case the issue is more the underlying model (e.g. ACID vs BASE), where we are ultimately trading consistency for scalability.

To me this has nothing to do with the query language (which would still arguably be useful for many applications and which may as well be [something like] SQL, albeit adapted), nor the relational (as opposed to navigational) nature of the data (which is still the case today – it’s just represented as pointers rather than separate “relation” tables), and to focus on either attribute is missing the point. This is particularly true with today’s announcement of Amazon RDS.

Perhaps it’s too late already, but I’d like to think we can come up with a more representative name to which everyone can associate (and which isn’t so scary for fickle enterprise customers). There’s already been a couple of decent suggestions, including alt.db, db-ng, NRDB[MS], etc.

Sam
https://samj.net/

Twitter’s down for the count. What are we going to do about it?

What’s wrong with this picture?

  • There’s not a single provider for telephony (AT&T, T-Mobile, etc.)
  • There’s not a single provider for text messaging (AT&T, T-Mobile, etc.)
  • There’s not a single provider for instant messaging (GTalk, MSN, AIM, etc.)
  • There’s not a single provider for e-mail (GMail, Hotmail, Yahoo!, etc.)
  • There’s not a single provider for blogging (Blogger, WordPress, etc.)
  • There’s not a single provider for “mini” blogging (Tumblr, Posterous, etc.)
  • There IS a single provider for micro blogging (Twitter)
  • And it’s down for the count (everything from the main site to the API is inaccessible)
  • And it’s been down for an Internet eternity (the best part of an hour and counting)

What are we going to do about it?