HTTP2 Expression of Interest

Here’s my (rather rushed) personal submission to the Internet Engineering Task Force (IETF) in response to their Call for Expressions of Interest in new work around HTTP; specifically, a new wire-level protocol for the semantics of HTTP (i.e., what will become HTTP/2.0), and new HTTP authentication schemes. You can also review the submissions of Facebook, Firefox, Google, Microsoft, Twitter and others.

[The views expressed in this submission are mine alone and not (necessarily) those of Citrix, Google, Equinix or any other current, former or future client or employer.]

My primary interest is in the consistent application of HTTP to (“cloud”) service interfaces, with a view to furthering the goals of the Open Cloud Initiative (OCI); namely widespread and ideally consistent interoperability through the use of open standard formats and interfaces.

In particular, I strongly support the use of the existing metadata channel (headers) over envelope overlays (SOAP) and alternative/ancillary representations (typically in JSON/XML) as this should greatly simplify interfaces while ensuring consistency between services. The current approach to cloud “standards” calls on vendors to define their own formats and interfaces and to maintain client libraries for the myriad languages du jour. In an application calling on multiple services this can result in a small amount of business logic calling on various bulky, often poorly written and/or unmaintained libraries. The usual counter to the interoperability problems this creates is to write “adapters” (à la ODBC) which expose a lowest-common-denominator interface, thus hiding functionality and creating an “impedance mismatch”. Ultimately this gives rise to performance, security, cost and other issues.

By using HTTP as intended it is possible to construct (cloud) services that can be consumed using nothing more than the built-in, standards compliant HTTP client. I’m not writing to discuss whether this is a good idea, but to expose a use case that I would like to see considered, and one that we have already applied with an amount of success in the Open Cloud Computing Interface (OCCI).

To illustrate the original scope, early versions of HTTP (RFC 2068) included not only the Link header (recently revived by Mark Nottingham in RFC 5988) but also LINK and UNLINK verbs to manipulate it (recently proposed for revival by James Snell). Unfortunately hypertext, and in particular HTML (which includes linking in-band rather than out-of-band), arguably stole HTTP’s thunder, leaving the overwhelming majority of formats that lack in-band linking (images, documents, virtual machines, etc.) high and dry and resulting in inconsistent linking styles (HTML vs XML vs PDF vs DOC etc.). This limited the extent of web linking as well as the utility of HTTP for innovative applications including APIs. Indeed HTTP could easily and simply meet the needs of many “Semantic Web” applications, but that is beyond the scope of this particular discussion.

To illustrate by way of example, consider the following synthetic request/response for an image hosting site which incorporates Web Linking (RFC 5988), Web Categories (draft-johnston-http-category-header) and Web Attributes (yet to be published):

GET /1.jpg HTTP/1.0

HTTP/1.0 200 OK
Content-Length: 69730
Content-Type: image/jpeg
Link: <...>; rel="license"
Link: </2.jpg>; rel="next"
Category: dog; label="Dog"; scheme="..."
Attribute: name="Spot"
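To see how little machinery a client needs to consume such a response, here is a minimal sketch (mine, not part of the original submission) of an RFC 5988-style Link header parser; the URIs used below are hypothetical, and a production parser would also need to handle commas inside quoted strings and bracketed URIs:

```python
def parse_link_header(value):
    """Parse 'Link: <uri>; rel="next", <uri2>; rel="license"' into a list of dicts.

    Illustrative only: naively splits on ',' and ';', which breaks if a URI
    or quoted parameter itself contains those characters.
    """
    links = []
    for part in value.split(","):
        segments = part.split(";")
        target = segments[0].strip()
        if not (target.startswith("<") and target.endswith(">")):
            continue  # skip malformed entries rather than guessing
        link = {"uri": target[1:-1]}
        for param in segments[1:]:
            if "=" in param:
                key, _, val = param.partition("=")
                link[key.strip()] = val.strip().strip('"')
        links.append(link)
    return links

# Hypothetical header resembling the synthetic response above:
links = parse_link_header('</2.jpg>; rel="next", <http://example.org/license>; rel="license"')
```

Nothing here is service-specific: the same few lines would follow the “next” link on an image host or discover the license of a virtual machine image, which is precisely the consistency argument.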

In order to “animate” resources, consider the use of the Link header to start a virtual machine in the Open Cloud Computing Interface (OCCI):

Link: </compute/123;action=start>; rel=""

The main objection to the use of the metadata channel in this fashion (beyond the application of common sense in determining what constitutes data vs metadata) is implementation issues (e.g. arbitrary limitations, i18n, handling of multiple headers, etc.) which could be largely resolved through specification. For example, the (necessary) use of e.g. RFC 2231 encoding for header values (but not keys) in e.g. RFC 5988 Web Linking gives rise to unnecessary complexity that may lead to interoperability, security and other issues which could be resolved through the specification of Unicode for keys and/or values. Another concern is the absence of features such as a standardised ability to return a collection (e.g. multiple responses). I originally suggested that HTTP 2.0 incorporate such ideas in 2009.
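The RFC 2231 complexity mentioned above can be seen in miniature with Python’s standard library; this sketch (mine, with an invented parameter value) shows the two-step decode a client must perform for a single non-ASCII header parameter such as `title*=UTF-8''N%C3%A4chstes%20Bild`:

```python
import urllib.parse
from email.utils import decode_rfc2231

# An RFC 2231-style encoded parameter value (invented for illustration),
# as might appear in: Link: <...>; title*=UTF-8''N%C3%A4chstes%20Bild
encoded = "UTF-8''N%C3%A4chstes%20Bild"

# Step 1: split off the charset and language tags...
charset, language, value = decode_rfc2231(encoded)
# Step 2: ...then percent-decode the value in that charset.
title = urllib.parse.unquote(value, encoding=charset)  # "Nächstes Bild"
```

Two decoding passes, a charset negotiation and an optional language tag for one parameter value, while the key remains ASCII-only, which is exactly the asymmetry that specifying Unicode throughout would eliminate.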

I’ll leave the determination of what would ultimately be required for such applications to the working group (should this use case be considered interesting by others), and while better support for performance, scalability and mobility are obviously required this has already been discussed at length. I strongly support Poul-Henning Kamp’s statement that “I think it would be far better to start from scratch, look at what HTTP/2.0 should actually do, and then design a simple, efficient and future proof protocol to do just that, and leave behind all the aggregations of badly thought out hacks of HTTP/1.1.” (and agree that we should incorporate the concept of an “HTTP Router”) as well as Tim Bray’s statement that: “I’m inclined to err on the side of protecting user privacy at the expense of almost all else” (and believe that we should prevent eavesdroppers from learning anything about an encrypted transaction; something we failed to do with DNSSEC even given alternatives like DNSCurve that ensure confidentiality as well as integrity).

RIP Adobe Flash (1996-2011) – now let’s bury the dead

Adobe kills mobile Flash, giving Steve Jobs the last laugh, reports The Guardian’s Charles Arthur following the late Steve Jobs’ epic Thoughts on Flash rant 18 months ago. It’s been about 2.5 years since I too got sick of Flash bringing my powerful Mac to its knees, so I went after the underlying lie that perpetuates the problem, explaining why Adobe Flash penetration is more like 50% than 99%. I even made progress Towards a Flash free YouTube killer, only it ended up being YouTube themselves who eventually started testing a YouTube HTML5 Video Player (while you’re there please do your bit for the open web by clicking “Join the HTML5 Trial” at the bottom of that page).

“I heard a sound as though a million restaurant websites cried out at once” – Charles Arthur

You see, armed with this heavily manipulated statistic, armies of developers are to this day fraudulently duping their paying clients into deploying a platform that will invariably turn away a percentage of their business at the door, in favour of annoying flaming logos and other atrocities that blight the web:

How much business can you tolerate losing? If you’ve got 95% penetration then you’re turning away 1 in 20 customers. At 90% you’re turning away 1 in 10. At 50% half of your customers won’t even get to see your product. I don’t know too many businesses who can afford to turn away any customers in this economic climate.

In my opinion the only place Flash technology has in today’s cloud computing environment is as a component of the AIR runtime for building (sub-par) cross-platform applications, and even then I’d argue that they should be using HTML5. As an Adobe Creative Suite Master Collection customer I’m very happy to see them dropping support for this legacy technology to focus on generating interactive HTML5 applications, and look forward to a similar announcement for desktop versions of the Flash player in the not too distant future.

In any case, with the overwhelming majority of devices being mobile today and with more and more of them including browser functionality, the days of Flash were numbered even before Adobe put the mobile version out of its misery. Let’s not drag this out any longer than we have to, and bury the dead by uninstalling Flash Player. Here are instructions for Mac OS X and Windows, and if you’re not ready to take the plunge into an open standards based HTML5 future then at least install FlashBlock for Chrome or Firefox (surely you’re not still using IE?).

Update: Flash for TV is dead too, as if killing off mobile wasn’t enough: Adobe Scrapping Flash for TV, Too

Update: Rich Internet Application (RIA) architectures in general are in a lot of trouble — Microsoft are killing off Silverlight as well: Mm, Silverlight, what’s that smell? Yes, it’s death

Update: In a surprising move that will no doubt be reversed, RIM announced it would continue developing Flash on the PlayBook (despite almost certainly lacking the ability to do so): RIM vows to keep developing Flash for BlackBerry PlayBook – no joke

How I tried to keep OCCI alive (and failed miserably)

I was going to let this one slide but following a calumniatory missive to his “followers” by the Open Cloud Computing Interface’s self-proclaimed “Founder & Chair”, Sun refugee Thijs Metsch, I have little choice but to respond in my defense (particularly as “The Chairs” were actively soliciting followup from others on-list in support).

Basically a debate came to a head that has been brewing on- and off-list for months regarding the Open Grid Forum (OGF)’s attempts to prevent me from licensing my own contributions (essentially the entire normative specification) under a permissive Creative Commons license (as an additional option to the restrictive OGF license) and/or submit them to the IETF as previously agreed and as required by the OGF’s own policies. This was on the grounds that “Most existing cloud computing specifications are available under CC licenses and I don’t want to give anyone any excuses to choose another standard over ours” and that the IETF has an excellent track record of producing high quality, interoperable, open specifications by way of a controlled yet open process. This should come as no surprise to those of you who know I am and will always be a huge supporter of open cloud, open source and open standards.

The OGF process had failed to deliver after over 12 months of deadline extensions – the current spec is frozen in an incomplete state (lacking critical features like collections, search, billing, security, etc.) as a result of being prematurely pushed into public comment, nobody is happy with it (including myself), the community has all but dissipated (except for a few hard core supporters, previously including myself) and software purporting to implement it actually implements something completely different altogether (see for yourself). There was no light at the end of the tunnel and with both OGF29 and IETF78 just around the corner I yesterday took a desperate gamble to keep OCCI alive (as a CC-licensed spec, an IETF Internet-Draft or both).

I confirmed that I was well within my rights to revoke any copyright, trademark and other rights previously granted (apparently it was amateur hour as OGF had failed to obtain an irrevocable license from me for my contributions) and volunteered to do so if restrictions on reuse by others weren’t lifted and/or the specification submitted to the IETF process as agreed and required by their own policies. Thijs’ colleague (and quite probably his boss at Platform Computing), Christopher Smith (who doubles as OGF’s outgoing VP of Standards) promptly responded, questioning my motives (which I can assure you are pure) and issuing a terse legal threat about how the “OGF will protect its rights” (against me over my own contributions no less). Thijs then followed up shortly after saying that they “see the secretary position as vacant from now on” and despite claims to the contrary I really couldn’t give a rat’s arse about a title bestowed upon me by a past-its-prime organisation struggling (and failing I might add) to maintain relevance. My only concern is that OCCI have a good home and if anything Platform have just captured the sort of control over it that VMware enjoys over DMTF/vCloud, with Thijs being the only remaining active editor.

I thought that would be the end of it and had planned to let sleeping dogs lie until today’s disgraceful, childish, coordinated and most of all completely unnecessary attack on an unpaid volunteer that rambled about “constructive technical debate” and “community driven consensus”, thanking me for my “meaningful contributions” but then calling on others to take up the pitchforks by “welcom[ing] any comments on this statement” on- or off-list. The attacks then continued on Twitter with another OGF official claiming that this “was a consensus decision within a group of, say, 20+ active and many many (300+) passive participants” (despite this being the first any of us had heard of it) and then calling my claims of copyright ownership “genuine bullshit” and report of an implementor instantly pulling out because they (and I quote) “can’t implement something if things are not stable” a “damn lie“, claiming I was “pissed” and should “get over it and stop crying” (needless to say they were promptly blocked).

Anyway as you can see there’s more to it than Thijs’ diatribe would have you believe and so far as I’m concerned OCCI, at least in its current form, is long since dead. I have revoked OGF’s licenses but it probably doesn’t matter as they agree I retain the copyrights and I think their chance of success is negligible – nobody in their right mind would implement the product of such a dysfunctional group and those who already did have long since found alternatives. That’s not to say the specification won’t live on in another form but now the OGF have decided to go nuclear it’s going to have to be in a more appropriate forum – one that furthers the standard rather than constantly holding it back.

Update: My actions have been universally supported outside of OGF and in the press (and here and here and here and here etc.) but unsurprisingly universally criticised from within – right up to the chairman of the board who claimed it was about trust rather than IPR (BS – I’ve been crystal clear about my intentions from the very beginning). They’ve done a bunch of amateur lawyering and announced that “OCCI is becoming an OGF proposed standard” but have not been able to show that they were granted a perpetual license to my contributions (they weren’t). They’ve also said that “OGF is not really against using Creative Commons” but clearly have no intention to do so, apparently preferring to test my resolve and, if need be, the efficacy of the DMCA. Meanwhile back at the ranch the focus is on bright shiny things (RDF/RDFa) rather than getting the existing specification finished.

Protip: None of this has anything to do with my current employer so let’s keep it that way.

Face it Flash, your days are numbered.

It’s no secret that I’m no fan of Adobe Flash:

It should be no surprise then that I’m stoked to see a vigorous debate taking place about the future/fate of Flash well ahead of schedule, and even happier to see Flash sympathisers already resorting to desperate measures including “playing the porn card” (not to mention Farmville which, in addition to the myriad annoying, invasive and privacy-invading advertisements, I will also be more than happy to see extinct). In my mind this all but proves how dire their situation has become with the sudden onslaught of mobile devices deliberately absent Flash malware*.

Let’s take a moment to talk about statistics. According to analysts there are currently “only” 1.3 billion Internet-connected PCs. To put that into context, there are already almost as many Internet-connected mobile devices. With a growth rate 2.5 times that of PCs, mobiles will soon become the dominant Internet access device. Of those new devices, few of them support Flash (think Android, iPhone), and with good reason – they are designed to be small, simple, performant and operate for hours/days between charges.

As if that’s not enough, companies with the power to make it happen would very much like for us to have a third device that fills the void between the two – a netbook or a tablet (like the iPad). For the most part (again being powered by Android and iPhone OS) these devices don’t support Flash either. Even if we were to give Adobe the benefit of the doubt in accepting their “optimistic” claims that Flash is currently “reaching 99% of Internet-enabled desktops in mature markets” (for more on that subject see Lies, damned lies and Adobe’s penetration statistics for Flash), between these two new markets it seems inevitable that their penetration rate will drop well below 50% real soon now.

Here’s the best part though: Flash penetration doesn’t even have to drop below 50% for us to break the vicious cycle of designers claiming “99% penetration” and users then having to install Flash because so many sites arbitrarily depend on it (using Flash for navigation is a particularly heinous offense, as is using it for headings with fancy fonts). Even if penetration were to drop to 95% (I would argue it already has long ago, especially if you dispense with weasel wording like “mature markets” and even more so if you do away with the arbitrary “desktop” restriction – talk about sampling bias!) that translates to turning away 1 in 20 of your customers. At what point will merchants start to flinch – 1 in 10 (90%)? 1 in 5 (80%)? 1 in 4 (75%)? 1 in 2 (50%)?

As if that’s not enough, according to Rich Internet Application Statistics, you would be losing some of your best customers – those who can afford to run Mac OS X (87% penetration) and Windows 7 (around 75% penetration) – not to mention those with iPhones and iPads (neither of which are the cheapest devices on the market). Oh yeah and you heard it right, according to them, Flash penetration on Windows 7 is an embarrassing 3 in 4 machines; even worse than Sun (now Oracle) Java (though ironically Microsoft’s own Silverlight barely reaches 1 in 2 machines).

While we’re at it, at what point does it become “willful false advertising” for Adobe and their army of Flash designers to claim such deep penetration? Victims who pay $$lots for Flash-based sites only to discover from server logs that a surprisingly large percentage of users are being turned away have every reason to be upset, and ultimately to seek legal recourse. Why hasn’t this already happened? Has it? In any case designers like “Paul Threatt, a graphic designer at Jackson Walker design group, [who] has filed a complaint to the FTC alleging false advertising” ought to think twice before pointing the finger at Apple (accused in this case over a few mockups, briefly shown and since removed, in an iPad promo video).

At the end of the day much of what is annoying about the web is powered by Flash. If you don’t believe me then get a real browser and install Flashblock (for Firefox or Chrome) or ClickToFlash (for Safari) and see for yourself. You will be pleasantly surprised by the absence of annoyances as well as impressed by how well even an old computer can perform when not laden with this unnecessary parasite*. What is less obvious (but arguably more important) is that your security will dramatically improve as you significantly reduce your attack surface (while you’re at it replace Adobe Reader with Foxit and enjoy even more safety). As someone who has been largely Flash-free for the last 3 months I can assure you life is better on the other side; in addition to huge performance gains I’ve had far fewer crashes since purging my machine – unsurprising given that, according to Apple’s Steve Jobs, “Whenever a Mac crashes more often than not it’s because of Flash”. “No one will be using Flash,” he says. “The world is moving to HTML5.”

So what can Adobe do about this now the horse has long since bolted? If you ask me, nothing. Dave Winer (another fellow who, like myself, “very much care[s] about an open Internet“) is somewhat more positive in posing the question What if Flash were an open standard? and suggesting that “Adobe might want to consider, right now, very quickly, giving Flash to the public domain. Disclaim all patents, open source all code, etc etc.“. Too bad it’s not that simple so long as one of the primary motivations for using Flash is bundled proprietary codecs like H.264 (which the MPEG LA have made abundantly clear will not be open sourced so long as they hold [over 900!] essential patents over it).

Update: Mobile Firefox Maemo RC3 has disabled Flash because “The Adobe Flash plugin used on many sites degraded the performance of the browser to the point where it didn’t meet Mozilla’s standards.” Sound familiar?

Update: Regarding the upcoming CS5 release which Adobe claims will “let you publish ActionScript 3 projects to run as native applications for iPhone“, this is not at all the same thing as the Flash plugin and will merely allow developers to create applications which suck more using a non-free SDK. No thanks. I’m unconvinced Apple will let such applications into the store anyway, citing performance concerns and/or the runtime rule.

Update: I tend to agree with Steven Wei that The best way for Adobe to save Flash is by killing it, but that doesn’t mean it’ll happen, and in any case if they wanted to do that they would have needed to start at least a year or two ago for the project to have any relevance, and it’s clear that they’re still busy flogging the binary plugin dead horse.

Update: Another important factor I neglected to mention above is that Adobe already struggle to maintain up-to-date binaries for a small number of major platforms and even then Mac and Linux are apparently second and third class citizens. If they’re struggling to manage the workload today then I don’t see what will make it any easier tomorrow with the myriad Linux/ARM devices hitting the market (among others). Nor would they want to – if they target HTML5, CSS3, etc. as proposed above then they have more resources to spend on having the best development environment out there.

* You may feel that words like “parasite” and “malware” are a bit strong for Flash, but when you think about it Flash has all the necessary attributes; it consumes your resources, weakens your security and is generally annoying. In short, the cost outweighs any perceived benefits.

NoSQL “movement” roadblocks HTML5 WebDB

Today’s rant is coming between me and a day of skiing so I’ll keep it brief. While trying to get to the bottom of why I can’t enjoy offline access to Google Apps & other web-based applications with Gears on Snow Leopard I came across a post noting Chrome, Opera to support html5 webdb, FF & IE won’t. This seemed curious as HTML5 is powering on towards last call and there are already multiple implementations of both applications and clients that run them. Here’s where we’re at:

  • Opera: “At opera, we implemented web db […] it’s likely we will [ship it] as people have built on it”
  • Google [Chrome]: “We’ve implemented WebDB … we’re about to ship it”
  • Microsoft [IE]: “We don’t think we’ll reasonably be able to ship an interoperable version of WebDB”
  • Mozilla [Firefox]: “We’ve talked to a lot of developers, the feedback we got is that we really don’t want SQL […] I don’t think mozilla plans to ship it.”

Of these, Microsoft’s argument (aside from being disproven by existing interoperable implementations) can be summarily dismissed because offline web applications are a direct competitor to desktop applications and therefore Windows itself. As if that’s not enough, they have their own horse in this race that they don’t have to share with anyone in the form of Silverlight. As such it’s completely understandable (however lame) for them to spread interoperability FUD about competing technology.

Mozilla’s argument that “we really don’t want SQL” is far more troublesome and posts like this follow an increasingly common pattern:

  1. Someone proposes SQL for something (given we’ve got 4 decades of experience with it)
  2. Religious zealots trash talk SQL, offering a dozen or so NoSQL alternatives (all of which are in varying stages of [early] development)
  3. “My NoSQL db is bigger/better/faster than yours” debate ensues
  4. Nobody does anything

Like it or not, SQL is a sensible database interface for web applications today. It’s used almost exclusively on the server side already (except perhaps for the largest of sites, and even these tend to use SQL for some components) so developers are very well equipped to deal with it. It has been proven to work (and work well) by demanding applications including Gmail, Google Docs and Google Calendar, and is anyway independent of the underlying database engine. Ironically work has already been done to provide SQL interfaces to “NoSQL” databases (which just goes to show the “movement” completely misses the point) so those who really don’t like SQLite (which happens to drive most implementations today) could conceivably create a drop-in replacement for it. Indeed power users like myself would likely appreciate a browser with embedded MySQL as a differentiating feature.
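To make the familiarity argument concrete, here is a minimal sketch (mine, with an invented schema) of the kind of SQLite usage that sits behind most WebDB implementations; the browser API wraps essentially this, just asynchronously and behind an origin-scoped database name:

```python
import sqlite3

# SQLite is the engine driving most WebDB implementations today; the same
# handful of SQL statements a developer already uses server-side work
# unchanged against an embedded, local store.
db = sqlite3.connect(":memory:")  # an offline web app would use a persistent file
db.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)")
db.execute("INSERT INTO messages (body) VALUES (?)", ("queued while offline",))
rows = db.execute("SELECT body FROM messages").fetchall()
```

Four decades of accumulated SQL knowledge transfer directly; the equivalent with a bespoke NoSQL API would mean learning (and shipping) a new query model per browser.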

In any case the API [cs]hould be versioned so we can offer alternatives like WebSimpleDB in the future. Right now though the open web is being held back by outdated standards and proprietary offerings controlled by single vendors (e.g. Adobe’s AIR and Microsoft’s Silverlight) are lining up to fill in the gap. Those suggesting “it’s worth stepping back” because “there are other options that should be considered” which “might serve those needs better” would want to take a long, hard look at whether their proposed alternatives are really ready for prime time, or indeed even necessary. To an outsider trying to solve real business problems today a lot of it looks like academic wankery.

An open letter to the NoSQL community

Following some discussion on Twitter today I posted this thread to the nosql-discussion group. You can see the outcome for yourself (essentially, and unsurprisingly I might add, “please feel free to take your software and call it whatever you want“).

While I don’t want to mess with their momentum (it’s a good cause, if branded with an unfortunate name) this isn’t the first time the issue’s been raised and I doubt it will be the last. I do however think that “no SQL” is completely missing the point and that the core concern is trading consistency for scalability. At the end of the day developers and users will deploy what is most appropriate for the task at hand.

There’s already been a question about alternatives to SQL, and knowing how Structured Query Language (SQL) came to be (consider the interfaces before it existed and compare that to what we have today) I figure it’s only a matter of time before history repeats itself and we end up creating something like Cloud Query Language (CQL) (a deliberate play on words). The closer this is to ANSI SQL the better it will be, both in terms of technology reuse and of the bags of bones that need to understand how it works… for the same reason the Open Cloud Computing Interface (OCCI) tries very hard to be as close as possible to HyperText Transfer Protocol (HTTP).

———- Forwarded message ———-
From: Sam Johnston
Date: Tue, Oct 27, 2009 at 3:33 PM
Subject: An open letter to the NoSQL community

Afternoon NoSQLers,

I write to you as a huge fan of next generation databases, but also as someone who doesn’t associate in any way with the “NoSQL” moniker. I don’t particularly care for SQL and appreciate the contrived contention it creates, but I think it misses the point somewhat and alienates people like myself who might otherwise have been drawn to the project.

I assume that by “NoSQL” we’re referring to the next generation of [generally cloud-based] databases such as Google’s BigTable, Amazon’s SimpleDB, Facebook’s Cassandra, etc., in which case the issue is more the underlying model (e.g. ACID vs BASE), where we are ultimately trading consistency for scalability.

To me this has nothing to do with the query language (which would still arguably be useful for many applications and which may as well be [something like] SQL, albeit adapted), nor the relational (as opposed to navigational) nature of the data (which is still the case today – it’s just represented as pointers rather than separate “relation” tables), and to focus on either attribute is missing the point. This is particularly true with today’s announcement of Amazon RDS.

Perhaps it’s too late already, but I’d like to think we can come up with a more representative name to which everyone can associate (and which isn’t so scary for fickle enterprise customers). There’s already been a couple of decent suggestions, including alt.db, db-ng, NRDB[MS], etc.


Twitter’s down for the count. What are we going to do about it?

What’s wrong with this picture?

  • There’s not a single provider for telephony (AT&T, T-Mobile, etc.)
  • There’s not a single provider for text messaging (AT&T, T-Mobile, etc.)
  • There’s not a single provider for instant messaging (GTalk, MSN, AIM, etc.)
  • There’s not a single provider for e-mail (GMail, Hotmail, Yahoo!, etc.)
  • There’s not a single provider for blogging (Blogger, WordPress, etc.)
  • There’s not a single provider for “mini” blogging (Tumblr, Posterous, etc.)
  • There IS a single provider for micro blogging (Twitter)
  • And it’s down for the count (everything from the main site to the API is inaccessible)
  • And it’s been down for an Internet eternity (the best part of an hour and counting)

What are we going to do about it?

“Bare Metal” cloud infrastructure “compute” services arrive

Earlier in the year during the formation of the Open Cloud Computing Interface (OCCI) working group I described three types of cloud infrastructure “compute” services:

  • Physical Machines (“Bare Metal”) which are essentially dedicated servers provisioned on a utility basis (e.g. hourly), whether physically independent or just physically isolated (e.g. blades)
  • Virtual Machines which nowadays use hypervisors to split the resources of a physical host amongst various guests, where both the host and each of the guests run a separate operating system instance. For more details on emulation vs virtualisation vs paravirtualisation see a KB article I wrote for Citrix a while back: CTX107587 Virtual Machine Technology Overview
  • OS Virtualisation (e.g. containers, zones, chroots) which is where a single instance of an operating system provides multiple isolated user-space instances.

While the overwhelming majority of cloud computing discussions today focus on virtual machines, the reason for my making the distinction was so that the resulting API would be capable of dealing with all possibilities. The clouderati are now realising that there’s more to life than virtual machines and that the OS is like “a cancer that sucks energy (e.g. resources, cycles), needs constant treatment (e.g. patches, updates, upgrades) and poses significant risk of death (e.g. catastrophic failure) to any application it hosts”. That’s some good progress – now if only the rest of the commentators would quit referring to virtualisation as private cloud so we can focus on what’s important rather than maintaining the status quo.

Anyway such cloud services didn’t exist at the time but in France at least we did have providers like Dedibox and Kimsufi who would provision a fixed configuration dedicated server for you pretty much on the spot starting at €20/month (<€0.03/hr or ~$0.04/hr). I figured there was nothing theoretically stopping this being fully automated and exposed via a user (web) or machine (API) interface, in which case it would be indistinguishable from a service delivered via VM (except for a higher level of isolation and performance). Provided you’re billing as a utility (that is, users can consume resources as they need them and are billed only for what they use) rather than monthly or annually and taking care of all the details “within” the cloud there’s no reason this isn’t cloud computing. After all, as an end user I needn’t care if you’re providing your service using an army of monkeys, so long as you are. PCI compliance anyone?

Virtually all of the cloud infrastructure services people talk about today are based on virtual machines and the market price for a reasonably capable one is $0.10/hr or around $72.00 per month. That’s said to be 3-5x more than cost at “cloud scale” (think Amazon) so expect that price to drop as the market matures. Rackspace Cloud are already offering small Xen VMs for 1.5c/hr or ~$10/month. I won’t waste any more time talking about these offerings as everyone else already is. This will be a very crowded space thanks in no small part to VMware’s introduction of vCloud (which they claim turns any web hoster into a cloud provider) but with the hypervisor well and truly commoditised I assure you there’s nothing to see here.

On the lightweight side of the spectrum, VPS providers are a dime a dozen. These guys generally slice Linux servers up into tens if not hundreds of accounts for only a few dollars a month and take care of little more than the (shared) kernel, leaving end users to install the distribution of their choice as root. Solaris has zones, and even Windows has MultiWin built in nowadays (that’s the technology, courtesy of Citrix, that allows multiple users, each with their own GUI session, to coexist on the same machine – it’s primarily used for Terminal Services & Fast User Switching, but applications and services can also run in their own context). This delivers most of the benefits of a virtual machine, only without the overhead and cost of running and managing multiple operating systems side by side. Unfortunately nobody’s really doing this yet in cloud, but if they were you’d be able to get machines for tasks like mail relaying, spam filtering, DNS, etc. for literally a fraction of a penny per hour (VPSs start at <$5/m or around 0.7c/hr).

So the reason for my writing this post today is that SoftLayer this week announced the availability of “Bare Metal Cloud” starting at $0.15 per hour. I’m not going to give them any props for having done so, thanks to their disappointing attempt to trademark the obvious and generic term “bare metal cloud” and to unattractive hourly rates that are almost four times the price of the monthly packages once you take data allowances into account. I will however say that it’s good to see this prophecy (however predictable) fulfilled.

I sincerely hope that the attention will continue to move further away from overpriced and inefficient virtual machines and towards more innovative approaches to virtualisation.

“Twitter” Trademark in Trouble Too

Yesterday I apparently struck a nerve with Twitter’s “Tweet” Trademark Torpedoed. The follow-up commentary both on this blog and on Twitter itself was interesting and insightful, revealing that in addition to likely losing “tweet” (assuming you accept that it was ever theirs to lose), the recently registered Twitter trademark itself (#77166246) and pending registrations for the Twitter logo (#77721757, #77721751) are also on very shaky ground.

Trademarks 101

Before we get into the details of how this could happen, let’s start with some background. A trademark is one of three main types of intellectual property (the others being copyrights and patents) in which society grants a monopoly over a “source identifier” (e.g. a word, logo, scent, etc.) in return for some guarantee of quality (e.g. I know what I’m getting when I buy a bottle of black liquid bearing the Coke® branding). Anybody can claim to have a trademark, but generally they are registered, which makes the process of enforcing the mark much easier. The registration process itself is thus more of a sanity check – making sure everything is in order, fees are paid, the mark is not obviously broken (that is, unable to function as a source identifier) and, perhaps most importantly, that it doesn’t clash with other marks already issued.

Trademarks are also jurisdictional in that they apply to a given territory (typically a country, but also US states), but to make things easier it’s possible to use the Madrid Protocol to extend a valid trademark in one territory to any number of others (including the EU, where it is known as a “Community Trademark”). Of course if the first trademark fails (within a certain period of time) then those dependent on it are also jeopardised. Twitter have also filed applications using this process.

Moving right along, there are a number of different types of trademarks, starting with the strongest and working back:

  • Fanciful marks are created specifically to be trademarks (e.g. Kodak) – these are the strongest of all marks.
  • Arbitrary marks have a meaning but not in the context in which they are used as a trademark. We all know what an apple is but when used in the context of computers it is meaningless (which is how Apple Computer is protected, though they did get in trouble when they started selling music and encroached on another trademark in the process). Similarly, you can’t trademark “yellow bananas” but you’d probably get away with “blue bananas” or “cool bananas” because they don’t exist.
  • Suggestive marks hint at some quality or characteristic without describing the product (e.g. Coppertone for sun-tan lotion)
  • Descriptive marks describe some quality or characteristic of the product and are unregistrable in most trademark offices and unprotectable in most courts. “Cloud computing” was found to be both generic and descriptive by USPTO last year in denying Dell. Twitter is likely considered a descriptive trademark (but one could argue it’s now also generic).
  • Generic marks cannot be protected as the name of a product or service cannot function as a source identifier (e.g. Apple in the context of fruits, but not in the context of computers and music)


Twitter’s off to a bad start already in their selection of names – while Google is a deliberate misspelling of the word googol (suggesting the enormous number of items indexed), the English word twitter has a well established meaning that relates directly to the service Twitter, Inc. provides. It’s the best part of 700 years old too, derived around 1325–75 from ME twiteren (v.); akin to G zwitschern:

– verb (used without object)

1. to utter a succession of small, tremulous sounds, as a bird.
2. to talk lightly and rapidly, esp. of trivial matters; chatter.
3. to titter, giggle.
4. to tremble with excitement or the like; be in a flutter.

– verb (used with object)

5. to express or utter by twittering.

– noun

6. an act of twittering.
7. a twittering sound.
8. a state of tremulous excitement.

Although the primary meaning people associate with it these days is that of a bird, it cannot be denied that “twitter” also means “to talk lightly and rapidly, esp. of trivial matters; chatter”. The fact it is now done over the Internet matters not, in the same way that one can “talk” or “chat” over it (and telephones for that matter) despite the technology not existing when the words were conceived. Had Twitter tried to obtain a monopoly over more common words like “chatter” and “chat” there’d have been hell to pay, but that’s not to say they should get away with it now.

Let’s leave the definition at that for now, as Twitter have managed to secure registration of their trademark (which does not imply that it is enforceable). The point is that this is the weakest type of trademark already, and some (including myself) would argue that it a) should never have been allowed and b) will be impossible to enforce. To make matters worse, Twitter itself has gained an entry in the dictionary as both a noun (“a website where people can post short messages about their current activities“) and a verb (“to write short messages on the Twitter website“), as well as the AP Stylebook for good measure. This could constitute “academic credibility” or “trademark kryptonite” depending how you look at it.


This brings us to the more pertinent point, trademark enforcement, which can essentially be summed up as “use it or lose it”. As of today I have not been able to find any reference whatsoever, anywhere, to any trademark rights claimed by Twitter, Inc. Sure they assert copyright (“© 2009 Twitter”) but that’s something different altogether – I have never seen this before and to be honest I can’t believe my eyes. I expect they will fix this promptly in the wake of this post by sprinkling disclaimers and [registered®] trademark (TM) and servicemark (SM) symbols everywhere, but the Internet Archive never lies, so once again it’s likely too little too late. If you don’t tell someone it’s a trademark then how are they supposed to avoid infringing it?

Terms of Service

The single reference to trademarks (but not “twitter” specifically) I found was in the terms of service (which are commendably concise):

We reserve the right to reclaim usernames on behalf of businesses or individuals that hold legal claim or trademark on those usernames.

That of course didn’t stop them suspending @retweet shortly after filing for the ill-fated “tweet” trademark themselves, but that’s another matter altogether. The important point is that they don’t claim trademark rights and so far as I can tell, never have.


To rub salt in the (gaping) wound they (wait for it, are you sitting down?) offer their high resolution logos for anyone to use, with no mention whatsoever of how they should and shouldn’t be used (“Download our logos“) – a huge no-no for trademarks, which must be associated with some form of quality control. Again there is no trademark claim, no ™ or ® symbols, and for the convenience of invited infringers, no fewer than three different high quality source formats (PNG, Adobe Illustrator and Adobe Photoshop):


Then there’s the advertising, oh the advertising. Apparently Twitter HQ didn’t get the memo about exercising extreme caution when using your trademark; woe betide the trademark holder who refers to her product or service as a noun or a verb, but Twitter does both, even in 3rd-party advertisements (good luck trying to get an AdWords ad containing the word “Google”):

Internal Misuse

Somebody from Adobe or Google please explain to Twitter why it’s important to educate users that they don’t “google” or “photoshop”, rather “search using Google®” and “edit using Photoshop®”. Here are some more gems from the help section:

  • Now that you’re twittering, find new friends or follow people you already know to get their twitter updates too.
  • Wondering who sends tweets from your area?
  • @username + message directs a twitter at another person, and causes your twitter to save in their “replies” tab.
  • FAV username marks a person’s last twitter as a favorite.
  • People write short updates, often called “tweets” of 140 characters or fewer.
  • Tweets with @username elsewhere in the tweet are also collected in your sidebar tab; tweets starting with @username are replies, and tweets with @username elsewhere are considered mentions.
  • Can I edit a tweet once I post it?
  • What does RT, or retweet, mean? RT is short for retweet, and indicates a re-posting of someone else’s tweet. This isn’t an official Twitter command or feature, but people add RT somewhere in a tweet to indicate that part of their tweet includes something they’re re-posting from another person’s tweet, sometimes with a comment of their own. Check out this great article on re-tweeting, written by a fellow Twitter user, @ruhanirabin. <- FAIL x 7


According to this domain search there are currently 6,263 domains using the word “twitter”, almost all in connection with microblogging. To put that number in perspective, if Twitter wanted to take action against these registrants, given current UDRP rates for a single panelist we’re talking $9,394,500 in filing fees alone (or around 1.5 billion Nigerian naira if that’s not illustrative enough for you). That’s not including the cost of preparing the filings, representation, etc. that their lawyers (Fenwick & West LLP) would likely charge them.
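For what it’s worth, the arithmetic behind that figure is simple enough to check, assuming (as above) a $1,500 single-panelist UDRP filing fee paid separately for each domain:

```python
# Back-of-the-envelope check of the UDRP filing-fee figure above.
# Assumes a US$1,500 single-panelist fee, filed once per domain.
DOMAINS = 6263
FEE_PER_FILING = 1500  # USD

total = DOMAINS * FEE_PER_FILING
print("US$" + format(total, ","))  # US$9,394,500
```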

If you (like Doug Champigny) happen to have been on the receiving end of one of these letters recently, you might just want to politely but firmly point them at the UDRP and have them prove, among other things, that you were acting in bad faith (don’t bother coming crying to me if they do though – this post is just one guy’s opinion and IANAL remember ;).

I could go on but I think you get the picture – Twitter has done such a poor job of protecting the Twitter trademark that they run the risk of losing it forever and becoming a law school textbook example of what not to do. There are already literally thousands of products and services [ab]using their brand, and while some have recently succumbed to the recent batch of legal threats they may well have more trouble now that people know their rights and the problem is being actively discussed. Furthermore, were it not for being extremely permissive with the Twitter brand from the outset they arguably would not have had anywhere near as large a following as they do now. It is only with the dedicated support of the users and developers they are actively attacking that they have got as far as they have.

The Problem: A Microblogging Monopoly

Initially it was my position that Twitter had built their brand and deserved to keep it, but that they had gone too far with “tweet”. Then, in the process of writing this story, I re-read the now infamous May The Tweets Be With You post that prompted the USPTO to reject their application hours later, and it changed my mind too. Most of the media coverage took the money quote out of context, but here it is in its entirety (emphasis mine):

We have applied to trademark Tweet because it is clearly attached to Twitter from a brand perspective but we have no intention of “going after” the wonderful applications and services that use the word in their name when associated with Twitter.

Do you see what’s happening here? I can’t believe I missed it on the first pass. Twitter are happy for you to tweet to your heart’s content provided you use their service. That is, they realised that outside of the network effects of having millions of users all they really do is push 1’s and 0’s around (and poorly at that). They go on to say:

However, if we come across a confusing or damaging project, the recourse to act responsibly to protect both users and our brand is important.

Today’s batch of microblogging clients are hard wired to Twitter’s servers and as a result (or vice versa) they have an effective microblogging monopoly. Twitter, Inc has every reason to be happy with that outcome and is naturally seeking to protect it – how better than to have an officially sanctioned method with which to beat anyone who dares stray from the path by allowing connections to competitors? That’s exactly what they mean with the “when associated with Twitter” language above, and by “confusing or damaging” they no doubt mean “confusing or damaging [to Twitter, Inc]”.

The Solution: Distributed Social Networking

Distributed social networking and open standards in general (in the traditional rather than Microsoft sense) are set to change that, but not if the language society uses (and has used for hundreds of years) is granted under an official monopoly to Twitter, Inc – it’s bad enough that they effectively own the @ namespace when there are existing open standards for it. Just imagine if email were a centralised system and everything went through one [unreliable] service – that brings a new meaning to “email is down”! Well, that’s Twitter’s [now not so] secret strategy: to be the “pulse of the planet” (their words, not mine).

Don’t get me wrong – I think Twitter’s great and I will continue to twitter and tweet as @samj so long as it’s the best microblogging platform around – but I don’t want to be forced to use it because it’s the only one there is. Twitter, Inc had ample chance to secure “twitter” as a trademark and so far as I am concerned they have long since missed it (despite securing dubious and likely unenforceable registrations). Now they need to compete on a level playing field and focus on being the best service there is.

Update: Before I get falsely accused of brand piracy let me clarify one important point: so far as I am concerned while Twitter can do what they like with their logo (despite continuing to give it away to the entire Internet no strings attached), the words “twitter” and “tweet” are fair game as they have been for the last 700+ years and will be for the next 700. From now on “twitter” for me means “generic microblog” and “tweet” means “microblog update”.

If I had a product interesting enough for Twitter, Inc to send me one of their infamous C&D letters I would waste no time whatsoever in scanning it, posting it here and making fun of them for it. I’m no thief but I am a fervent believer in open standards.

Organising the Internet with Web Categories

In order to scratch an itch relating to the Open Cloud Computing Interface (OCCI) I submitted my first Internet-Draft to the IETF this week: Web Categories (draft-johnston-http-category-header).

The idea’s fairly simple and largely inspired by the work of others (most notably the original HTTP and Atom authors, and a guy down under who’s working on another draft). It defines an intuitive mechanism for web servers to express flexible category information for any resource (including opaque/binary/non-HyperText formats) in the HTTP headers, allowing users to categorise web resources into vocabularies or “schemes” and assign human-friendly “labels” in addition to the computer-friendly “terms”.

This approach to taxonomies was lifted directly from (and is thus 100% compatible with) Atom, and is another step closer to being able to render individual resources natively over HTTP rather than encoded and wrapped in XML (which gets unwieldy when you’re dealing with multi-gigabyte virtual machines, as we are with OCCI).
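As a rough illustration (not part of the draft itself), the Category header could be produced and consumed with a few lines of Python. The scheme URI below is a made-up example, and the parser is a toy that assumes well-formed input with no commas or semicolons inside quoted strings:

```python
def format_category_header(categories):
    """Serialise a list of category dicts into a Category header value,
    following the draft's ABNF sketch:
    category-value = term *( ";" category-param )."""
    values = []
    for cat in categories:
        parts = [cat["term"]]
        if "scheme" in cat:
            parts.append('scheme="%s"' % cat["scheme"])
        if "label" in cat:
            parts.append('label="%s"' % cat["label"])
        values.append("; ".join(parts))
    return ", ".join(values)


def parse_category_header(value):
    """Parse a Category header value back into a list of dicts.
    Toy parser: assumes no commas/semicolons inside quoted strings."""
    categories = []
    for cat_value in value.split(","):
        parts = [p.strip() for p in cat_value.split(";")]
        entry = {"term": parts[0]}
        for param in parts[1:]:
            key, _, val = param.partition("=")
            entry[key.strip()] = val.strip().strip('"')
        categories.append(entry)
    return categories


# Hypothetical scheme URI, for illustration only.
cats = [{"term": "dog", "scheme": "http://purl.example.org/animals", "label": "Canine"}]
header = format_category_header(cats)
print(header)  # dog; scheme="http://purl.example.org/animals"; label="Canine"
assert parse_category_header(header) == cats
```

Round-tripping like this is exactly what makes the header attractive for OCCI: the same category metadata travels with the resource whether it is rendered as Atom or served natively over HTTP.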

It’s anybody’s guess where the document will go from here – it’s currently marked “Experimental” but with any luck it will pique the interest of the standards and/or semantic web community and go on to live a long and happy life.

Internet Engineering Task Force                              S. Johnston
Internet-Draft                               Australian Online Solutions
Intended status: Experimental                               July 1, 2009
Expires: January 2, 2010

                             Web Categories

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at

   The list of Internet-Draft Shadow Directories can be accessed at

   This Internet-Draft will expire on January 2, 2010.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (
   Please review these documents carefully, as they describe your rights
   and restrictions with respect to this document.


   This document specifies the Category header-field for HyperText
   Transfer Protocol (HTTP), which enables the sending of taxonomy
   information in HTTP headers.

Johnston                 Expires January 2, 2010                [Page 1]

Internet-Draft               Web Categories                    July 2009

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . . . 3
     1.1.  Requirements Language . . . . . . . . . . . . . . . . . . . 3
   2.  Categories  . . . . . . . . . . . . . . . . . . . . . . . . . . 3
   3.  The Category Header Field . . . . . . . . . . . . . . . . . . . 4
     3.1.  Examples  . . . . . . . . . . . . . . . . . . . . . . . . . 4
   4.  IANA Considerations . . . . . . . . . . . . . . . . . . . . . . 5
     4.1.  Category Header Registration  . . . . . . . . . . . . . . . 5
   5.  Security Considerations . . . . . . . . . . . . . . . . . . . . 5
   6.  Internationalisation Considerations . . . . . . . . . . . . . . 5
   7.  References  . . . . . . . . . . . . . . . . . . . . . . . . . . 6
     7.1.  Normative References  . . . . . . . . . . . . . . . . . . . 6
     7.2.  Informative References  . . . . . . . . . . . . . . . . . . 6
   Appendix A.  Notes on use with HTML . . . . . . . . . . . . . . . . 7
   Appendix B.  Notes on use with Atom . . . . . . . . . . . . . . . . 7
   Appendix C.  Acknowledgements . . . . . . . . . . . . . . . . . . . 8
   Appendix D.  Document History . . . . . . . . . . . . . . . . . . . 8
   Appendix E.  Outstanding Issues . . . . . . . . . . . . . . . . . . 8
   Author's Address  . . . . . . . . . . . . . . . . . . . . . . . . . 9


1.  Introduction

   A means of indicating categories for resources on the web has been
   defined by Atom [RFC4287].  This document defines a framework for
   exposing category information in the same format via HTTP headers.

   The atom:category element conveys information about a category
   associated with an entry or feed.  A given atom:feed or atom:entry
   element MAY have zero or more categories which MUST have a "term"
   attribute (a string that identifies the category to which the entry
   or feed belongs) and MAY also have a scheme attribute (an IRI that
   identifies a categorization scheme) and/or a label attribute (a
   human-readable label for display in end-user applications).

   Similarly a web resource may be associated with zero or more
   categories as indicated in the Category header-field(s).  These
   categories may be divided into separate vocabularies or "schemes"
   and/or accompanied with human-friendly labels.

   [[ Feedback is welcome on the mailing list,
   although this is NOT a work item of the HTTPBIS WG. ]]

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14, [RFC2119], as
   scoped to those conformance targets.

   This document uses the Augmented Backus-Naur Form (ABNF) notation of
   [RFC2616], and explicitly includes the following rules from it:
   quoted-string, token.  Additionally, the following rules are included
   from [RFC3986]: URI.

2.  Categories

   In this specification, a category is a grouping of resources by
   'term', from a vocabulary ('scheme') identified by an IRI [RFC3987].
   It is comprised of:

   o  A "term" which is a string that identifies the category to which
      the resource belongs.

   o  A "scheme" which is an IRI that identifies a categorization scheme
      (optional).


   o  A "label" which is a human-readable label for display in end-user
      applications (optional).

   A category can be viewed as a statement of the form "resource is from
   the {term} category of {scheme}, to be displayed as {label}", for
   example "'Loewchen' is from the 'dog' category of 'animals', to be
   displayed as 'Canine'".

3.  The Category Header Field

   The Category entity-header provides a means for serialising one or
   more categories in HTTP headers.  It is semantically equivalent to
   the atom:category element in Atom [RFC4287].

   Category           = "Category" ":" #category-value
   category-value     = term *( ";" category-param )
   category-param     = ( ( "scheme" "=" <"> scheme <"> )
                      | ( "label" "=" quoted-string )
                      | ( "label*" "=" enc2231-string )
                      | ( category-extension ) )
   category-extension = token [ "=" ( token | quoted-string ) ]
   enc2231-string     = <extended-value, see [RFC2231], Section 7>
   term               = token
   scheme             = URI

   Each category-value conveys exactly one category but there may be
   multiple category-values for each header-field and/or multiple
   header-fields per [RFC2616].

   Note that schemes are REQUIRED to be absolute URLs in Category
   headers, and MUST be quoted if they contain a semicolon (";") or
   comma (",") as these characters are used to separate category-params
   and category-values respectively.

   The "label" parameter is used to label the category such that it can
   be used as a human-readable identifier (e.g. a menu entry).
   Alternately, the "label*" parameter MAY be used to encode this label in
   a different character set, and/or contain language information as per
   [RFC2231].  When using the enc2231-string syntax, producers MUST NOT
   use a charset value other than 'ISO-8859-1' or 'UTF-8'.

3.1.  Examples

   NOTE: Non-ASCII characters used in prose for examples are encoded
   using the format "Backslash-U with Delimiters", defined in Section
   5.1 of [RFC5137].


   For example:
   Category: dog

   indicates that the resource is in the "dog" category.
   Category: dog; label="Canine"; scheme=""

   indicates that the resource is in the "dog" category, from the
   "" scheme, and should be displayed as "Canine".

   The example below shows an instance of the Category header encoding
   multiple categories, and also the use of [RFC2231] encoding to
   represent both non-ASCII characters and language information.
   Category: dog; label="Canine"; scheme="",
             lowchen; label*=UTF-8'de'L%c3%b6wchen

   Here, the second category has a label encoded in UTF-8, uses the
   German language ("de"), and contains the Unicode code point \u'00F6'.

4.  IANA Considerations

4.1.  Category Header Registration

   This specification adds an entry for "Category" in HTTP to the
   Message Header Registry [RFC3864] referring to this document:
   Header Field Name: Category
   Protocol: http
   Status: standard
   Author/change controller:
       IETF (
       Internet Engineering Task Force
   Specification document(s):
       [ this document ]

5.  Security Considerations

   The content of the Category header-field is not secure, private or
   integrity-guaranteed, and due caution should be exercised when using
   it.

6.  Internationalisation Considerations

   Category header-fields may be localised depending on the Accept-
   Language header-field, as defined in section 14.4 of [RFC2616].

   Scheme IRIs in atom:category elements may need to be converted to
   URIs in order to express them in serialisations that do not support
   IRIs, as defined in section 3.1 of [RFC3987].  This includes the
   Category header-field.

7.  References

7.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2231]  Freed, N. and K. Moore, "MIME Parameter Value and Encoded
              Word Extensions: Character Sets, Languages, and
              Continuations", RFC 2231, November 1997.

   [RFC2616]  Fielding, R., Gettys, J., Mogul, J., Frystyk, H.,
              Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext
              Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.

   [RFC3864]  Klyne, G., Nottingham, M., and J. Mogul, "Registration
              Procedures for Message Header Fields", BCP 90, RFC 3864,
              September 2004.

   [RFC3986]  Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
              Resource Identifier (URI): Generic Syntax", STD 66,
              RFC 3986, January 2005.

   [RFC3987]  Duerst, M. and M. Suignard, "Internationalized Resource
              Identifiers (IRIs)", RFC 3987, January 2005.

   [RFC4287]  Nottingham, M. and R. Sayre, "The Atom Syndication
              Format", RFC 4287, December 2005.

   [RFC5137]  Klensin, J., "ASCII Escaping of Unicode Characters",
              RFC 5137, February 2008.

7.2.  Informative References

   [OCCI]     Open Grid Forum (OGF), Edmonds, A., Metsch, T., Johnston,
              S., and A. Richardson, "Open Cloud Computing Interface
              (OCCI)".

   [RFC2068]  Fielding, R., Gettys, J., Mogul, J., Nielsen, H., and T.
              Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1",


              RFC 2068, January 1997.

   [W3C.REC-html401-19991224]
              Raggett, D., Le Hors, A., and I. Jacobs, "HTML 4.01
              Specification", W3C Recommendation, December 1999.

   [W3C.WD-html5-20090423]
              Hyatt, D. and I. Hickson, "HTML 5", W3C Working Draft,
              April 2009.

   [draft-nottingham-http-link-header]
              Nottingham, M., "Web Linking",
              draft-nottingham-http-link-header-05 (work in progress),
              April 2009.

   [rel-tag-microformat]
              Celik, T., Marks, K., and D. Powazek, "rel="tag"
              Microformat".

Appendix A.  Notes on use with HTML

   In the absence of a dedicated category element in HTML 4
   [W3C.REC-html401-19991224] and HTML 5 [W3C.WD-html5-20090423],
   category information (including user supplied folksonomy
   classifications) MAY be exposed using HTML A and/or LINK elements by
   concatenating the scheme and term:
   category-link = scheme term
   scheme        = URI
   term          = token

   These category-links MAY form a resolvable "tag space" in which case
   they SHOULD use the "tag" relation-type per [rel-tag-microformat].

   Alternatively META elements MAY be used:

   o  where the "name" attribute is "keywords" and the "content"
      attribute is a comma-separated list of term(s)

   o  where the "http-equiv" attribute is "Category" and the "content"
      attribute is a comma-separated list of category-value(s)

Appendix B.  Notes on use with Atom

   Where the cardinality is known to be one (for example, when
   retrieving an individual resource) it MAY be preferable to render the


   resource natively over HTTP without Atom structures.  In this case
   the contents of the atom:content element SHOULD be returned as the
   HTTP entity-body and metadata including the type attribute and atom:
   category element(s) via HTTP header-field(s).

   This approach SHOULD NOT be used where the cardinality is not
   guaranteed to be one (for example, search results, which MAY return
   more than one result).

Appendix C.  Acknowledgements

   The author would like to thank Mark Nottingham for his work on Web
   Linking [draft-nottingham-http-link-header] (on which this document
   was based) and the authors of [RFC2068] for their specification of
   the Link: header-field.

   The author would like to thank members of the OGF's Open Cloud
   Computing Interface [OCCI] working group for their contributions and
   others who commented upon, encouraged and gave feedback to this
   document.

Appendix D.  Document History

   [[ to be removed by the RFC editor should document proceed to
   publication as an RFC. ]]


      *  Initial draft based on draft-nottingham-http-link-header-05

Appendix E.  Outstanding Issues

   [[ to be removed by the RFC editor should document proceed to
   publication as an RFC. ]]

   The following issues are outstanding and should be addressed:

   1.  Is extensibility of Category headers necessary as is the case for
       Link: headers?  If so, what are the use cases?

   2.  Is supporting multi-lingual representations of the same
       category(s) necessary?  If so, what are the risks of doing so?

   3.  Is a mechanism for maintaining Category header-fields required?
       If so, should it use the headers themselves or some other
       mechanism?

   4.  Does this proposal conflict with others in the same space?  If
       so, is it an improvement on what exists?

Author's Address

   Sam Johnston
   Australian Online Solutions
   GPO Box 296
   Sydney, NSW  2001

