Introducing rel=”shortlink”: a better alternative to URL shorteners

Yesterday I wrote rather critically about a surprisingly successful drive to implement a deprecated “rev” relationship, which developed virtually overnight in response to the growing “threat” (in terms of linkrot, security, etc.) posed by URL shorteners and their ilk.

The idea is simple: allow sites to specify short URLs in the document/feed itself, generated either automatically (from a [compressed] unique identifier, timestamp, “initials” of the title, etc.) or manually (using a human-friendly slug). That way, when people need to reference the URL in a space-constrained environment (e.g. microblogging like Twitter), or anywhere it needs to be manually entered (e.g. printed or spoken), they can do so in a fashion that will continue to work as long as the target does, and which reveals information about the content (such as its owner and a concise handle).

Examples of such short URLs include:

The idea is sound but the proposed implementation is less so. There is (or at least was) provision for “rev”erse link references, but these have been deprecated in HTML 5. There is also a way of hinting at the canonical URI by specifying a rel=”canonical” link. This makes a lot of sense because the same document can often be referred to by an infinite number of URIs (e.g. in search results, with sort orders, aliases, categories, etc.). Combine the two and you’ve got a way of saying “I am the canonical URI and this other URI happens to point at me too” – only this can only ever (safely) work for the canonical URL itself, and it doesn’t make sense to single out one arbitrary URL when there could be an infinite number.
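Side by side, the two relations might be declared like this (the URLs here are hypothetical placeholders, not from any real site):

```html
<!-- The page at http://blog.example/2009/04/example-post declares: -->
<link rel="canonical" href="http://blog.example/2009/04/example-post">
<link rel="shortlink" href="http://blog.example/s/a5">
```

The canonical link collapses the infinite inbound URLs down to one; the short link then advertises a single preferred compact reference for it.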

Another suggestion was to use rel=”alternate shorter”, but the problem here is that the content at the short URL should be identical (except for superficial formatting changes such as highlighting and sort order), while “alternate” means “an alternate version of the resource” itself – e.g. a PDF version. Clients that understand “alternate” versions should not list the short URL among them, as the content itself is (usually) the same.

Ben Ramsay got closest to the mark with A rev=”canonical” rebuttal but missed the “alternate” problem above, nonetheless suggesting a new rel=”shorter” relation. The problem there is that the “short” URI is not guaranteed to be the “shortest”, or indeed even “shorter” – it can still make sense to serve a “short” URI that is longer than the URL the user is currently viewing, because it conveys information about the content in addition to its host.

Accordingly I have updated WHATWG RelExtensions and will shortly submit the following to the IESG for addition to the IANA Atom Link Relations registry:

shortlink

A short URI that refers to the same document.

Expected Display Characteristics:
This relation may be used as a concise reference to the document. It will
typically be shorter than other URIs (including the canonical URI) and may
rely on a [compressed] unique identifier or a human readable slug. It is
useful for space constrained environments such as email and microblogs as
well as for URIs that need to be manually entered (e.g. printed, spoken).
The referenced document may differ superficially from the original (e.g.
sort order, highlighting).

Security Considerations:
Automated agents should take care when this relation crosses administrative domains (e.g., the URI has a different authority than the current document). Such agents should also avoid circular references by resolving only once.
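These considerations are easy to sketch in code. The helper below is illustrative only (the function and pattern names are mine, not part of the proposal): it extracts a rel="shortlink" from a page and flags targets whose authority differs from the current document's, leaving the caller to resolve such a link at most once and go no further.

```python
# Sketch of a cautious shortlink consumer (helper names are hypothetical).
# It finds a rel="shortlink" <link> in a page and reports whether the
# target crosses administrative domains; a careful agent would resolve
# such a link only once to avoid circular references.
import re
from urllib.parse import urlparse

SHORTLINK = re.compile(
    r'<link\s[^>]*rel=["\']shortlink["\'][^>]*href=["\']([^"\']+)["\']',
    re.IGNORECASE)

def find_shortlink(html, page_url):
    """Return (short_url, crosses_domain) or None if none is advertised."""
    match = SHORTLINK.search(html)
    if match is None:
        return None
    short = match.group(1)
    crosses = urlparse(short).netloc != urlparse(page_url).netloc
    return short, crosses
```

A real agent would of course also handle reversed attribute order and multi-valued rel attributes, but the cross-domain check is the important part.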

Note that in the interim “” can be used. Bearing in mind that you should be liberal in what you accept, and conservative in what you send, servers should use the interim identifier for now and clients should accept both. Nobody should be accepting or sending rev=”canonical” or rel=”alternate shorter” given the problems detailed above.

Update: It seems there are still a few sensible people out there, like Robert Spychala with his Short URL Auto-Discovery document. Unfortunately he proposes a term with an underscore (short_url) when it should be a space and causes the usual URI/URL confusion. Despite people like Bernhard Häussner claiming that “short_url is best, it’s the only one that does not sound like shortened content“, I don’t get this reading from a “short” link… seems pretty obvious to me and you can always still use relations like “abstract” for that purpose. In any case it’s a valid argument and one that’s easily resolved by using the term “shortcutlink” instead (updated accordingly above). Clients could fairly safely use any link relation containing the string “short”.

Update: You can follow the discussion on Twitter at #relshortcut, #relshort and #revcanonical.

Update: I forgot to mention again that the HTTP Link: header can be used to allow clients to find the shortlink without having to GET and parse the page (e.g. by doing a HEAD request):

Link: <>; rel="shortlink"
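Parsing that header on the client side is straightforward. A minimal sketch (it handles the common comma-separated multi-link form, but none of the more exotic corners of the header syntax):

```python
# Minimal parser for the HTTP Link: header, enough to pull a
# rel="shortlink" (or any other relation) out of a HEAD response.
import re

def parse_link_header(value):
    """Return a {relation: uri} dict from a Link: header value."""
    links = {}
    for match in re.finditer(r'<([^>]*)>\s*;\s*rel="?([^",;]+)"?', value):
        uri, rel = match.groups()
        links[rel] = uri
    return links
```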

Update: Both Andy Mabbett and Stan Vassilev also independently suggested rel=shortcut, which leads me to believe that we’re on a winner. Stan adds that we’ve other things to consider in addition to the semantics and Google’s Matt Cutts points out why taking rather than giving canonical-ness (as in RevCanonical) is a notoriously bad idea.

Update: Thanks to the combination of Microsoft et al recommending the use of “shortcut icon” for favicon.ico (after stuffing our logs by quietly introducing this [mis]feature) and HTML link types being a space separated list (thanks @amoebe for pointing this out – I’d been looking at the Atom RFCs and assuming they used the single link type semantics), the term “shortcut” is effectively scorched earth. Not only is there a bunch of sites that already have “shortcut” links (even if the intention was that “shortcut icon” be atomic), but there’s a bunch of code that looks for “shortcut”, “icon” or “shortcut icon”. FWIW HTML 5 specifies the “icon” link type. Moral of the story: get consensus before implementing code.

As I still have problems with the URI/URL confusion (thus ruling out “shorturl”) but have come around to the idea that this should be a noun rather than an adjective, I now propose “shortlink” as a suitable, self-explanatory, impossible-to-confuse term.

Update: I’ve created a shortlink Google Group and kicked off a discussion with a view to reaching a consensus. I’ve also created a corresponding Google Code project and modified the shorter links WordPress plugin to implement shortlinks.

rev=”canonical” considered harmful (complete with sensible solution)

URL shortening sites provide a very simple service: turning unwieldy but information-rich URLs into something short and manageable. This was traditionally useful for emails, with some clients mangling long URLs, but it also makes sense for URLs in documents, on TV, radio, etc. (basically anywhere a human has to manually enter them). Shorteners are a dime a dozen now – there are over 90 of them listed here alone… and I must confess to having created one myself a few years back (the idea being that you could buy a TV-friendly URL). Not a bad idea, but there were other more important things to do at the time and I was never going to be able to buy my first island from the proceeds. Unfortunately there are many problems with adding yet another layer of indirection, and the repercussions could be quite serious (bearing in mind that even the more trustworthy sites tend to come and go).

So a while back I whipped up a thing called “springboard” for Google Apps/AppEngine (having got bored with maintaining text files for use with Apache’s mod_rewrite), which allowed users to create redirect URLs under their own domain (and which was apparently a good idea, because now Google have their own version called short links). This is the way forward – you can tell at a glance who’s behind the link from the domain, and you even get an idea of what you’re clicking through to from the path (provided you’re not being told fibs). When you click on such a link you get flicked over to the real (long) URL with an HTTP redirect – probably a 301, which means “Moved Permanently” – so the browsers know what’s going on too. If your domain goes down then chances are the target will be out of action too (much the same story as with third-party DNS), so there’s a lot less risk. It’s all good news, and if you’re using a CMS like Drupal it can be completely automated and transparent – you won’t even know it’s there, and clients looking for a short URL won’t have to go ask a third party for one.

So the problem is that nowadays you’ve got every man and his dog wanting to feed your nice clean (but long) URLs through the mincer in order to post them on Twitter. Aside from being a security nightmare (the resulting URLs are completely opaque, though now clients like Nambu are taking to resolving them back again!?!), it breaks all sorts of things, from analytics to news sites like Digg. Furthermore, there are much better ways to achieve this. If you have to do a round trip to shorten the URL anyway, why not ask the site for a shorter version of its canonical URL (that being the primary or ‘ideal’ URL for the content – usually quite long and optimised for SEO)? In the case of Drupal at least, every node has an ID, so you can immediately boil URLs down to a node-ID path, or even use something like base32 to get even shorter URLs.
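As a sketch of how little code that takes (the alphabet is my choice – RFC 4648's base32 letters, lower-cased – any unambiguous alphabet would do just as well):

```python
# Encode a numeric node ID as a short base32 slug and back again.
# Alphabet is an assumption: RFC 4648's base32 set, lower-cased.
ALPHABET = "abcdefghijklmnopqrstuvwxyz234567"

def encode(n):
    """Encode a non-negative integer (e.g. a Drupal node ID) as base32."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 32)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def decode(slug):
    """Reverse of encode(): recover the node ID from its slug."""
    n = 0
    for char in slug:
        n = n * 32 + ALPHABET.index(char)
    return n
```

A six-digit node ID boils down to a four-character slug, so even a large site's short URLs stay short – and they resolve locally, with no third party in the loop.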

So how do we express this to clients? The simplest way is to embed LINK tags in the HEAD section of the HTML and specify a sensible relation (“rel”). Normally these are used to specify alternative versions of the content, icons, etc., but there’s nothing to stop a page advertising its own preferred short URL the same way. That’s right: rel=”short”, not rel=”alternate shorter” or other such rubbish (“alternate” refers to alternate content, usually in a different mime-type, not just an alternate URL – here the content is likely to be exactly the same). It can be performance-optimised somewhat too, by setting a header (e.g. X-Rel-Short) so that users (e.g. Twitter clients) can resolve a long URL to the preferred short URL via an HTTP HEAD request, without having to retrieve and parse the HTML.
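A Twitter client could then do something like the following with the response headers from a HEAD request. Note that X-Rel-Short is the ad-hoc header proposed here (not a registered one), and the helper name is mine:

```python
# Pick the advertised short URL out of HEAD response headers,
# preferring a standard Link: header over the ad-hoc X-Rel-Short.
import re

def short_url_from_headers(headers):
    """Return the advertised short URL, or None if none is present."""
    match = re.search(r'<([^>]*)>\s*;\s*rel="?short',
                      headers.get("Link", ""))
    if match:
        return match.group(1)
    return headers.get("X-Rel-Short")
```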

Another, even less sensible, alternative being peddled by various individuals (and being discussed here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here, here and of course here) is [ab]using the rightly deprecated and confusing rev attribute, a la rev=”canonical”. Basically this is saying “I am the authoritative/canonical URL and this other URL happens to point here too”, without saying anything whatsoever about the other URL actually being short. There could be an infinite number of such inbound URLs, and this only ever works for the one canonical URL itself. Essentially this idea is stillborn, and I sincerely hope that when people come back to work next week it will be promptly put out of its misery.

So in summary someone’s got carried away and started writing code (RevCanonical) without first considering all the implications. Hopefully they will soon realise this isn’t such a great idea after all and instead get behind the proposal for rel=”short” at the WHATWG. Then we can all just add links like this to our pages:

<link href="" rel="short">

Incidentally, I say “short” and not “shorter” because the short URL may not in fact be the shortest URL for a given resource – an opaque shortener URL could well also map back to the same page, but that URL is meaningless. And I leave out “alternate” because it’s not alternate content, rather just an alternate URL – a subtle but significant difference.

Let’s hope sanity prevails…

Update: The HTTP Link: header is a much more sensible solution for the HTTP header optimisation than an ad-hoc header:

Link: <>; rel="short"

An open letter to the community regarding “Open Cloud”

I write this letter in order to be 100% transparent with you about a new initiative that could prove critical to the development of computing and the Internet: the protection of the term “Open Cloud” with a certification trademark (like British Standards’ Kitemark® and the FAIRTRADE symbol) as well as its definition via an open community consensus process.

Cloud computing users will soon be able to rest assured that offerings bearing the “Open Cloud” brand are indeed “open” in that critical freedoms (such as the right to access one’s own data in an open format via an open interface) are strongly protected. It will also ensure a level playing field for all vendors while keeping the barriers to enter the marketplace low. Offerings also bearing the “Open Source” mark will have additional freedoms relating to the use, modification and distribution of the underlying software itself.

Cloud computing is Internet (“cloud”) based development and use of computer technology (“computing”). It is the first significant paradigm shift since the introduction of the PC three decades ago and it is already changing our lives. Not only is it helping to deliver computing to “the other 3 billion” people, but also facilitating communication and collaboration, slashing costs and improving reliability by delivering computing as a utility (like electricity).

The Open Source industry is built around the Open Source Definition (OSD), which is itself maintained by the non-profit Open Source Initiative (OSI). The fledgling “Open Cloud” industry should be built on a similar set of well-defined Open Cloud Principles (OCP) and the associated Open Cloud Initiative (OCI) will closely follow their example. The proposed mission is simply “To define and protect ‘Open Cloud’” and the body will be developed from inception via an open process. Even if USPTO eventually reject our pending registration, by drawing attention to this critical issue now we may have already won.

I need your help, which is why I have called on individuals like Joi Ito and Bruce Perens, as well as established vendors including Google and Amazon (and their respective developer communities) for assistance. By way of this open letter, I commit to donate assets held in trust (domains, trademarks, etc.) to a non-profit similar in spirit to the Open Source Initiative which acts to protect the rights of the number one stakeholder: You.

Sam Johnston

Introducing the Open Cloud Principles (OCP)

In light of the rapidly increasing (and at times questionable) use of the term “Open Cloud” I hereby propose the following (draft) set of principles, inspired by the Open Source Initiative (OSI) with their Open Source Definition (OSD).

I would be interested to hear any feedback people have with a view to reaching a community consensus for what constitutes “Open Cloud” (in the same way that we have had clear guidelines for what constitutes “Open Source” for many years). You can do so in reply to this post, on the document’s talk page or by being bold and editing directly – if I don’t hear from you I’ll assume you’re satisfied.

Examples of uses today include:

For the latest version of the document please refer to

Open Cloud Principles (OCP)

In order to stem the abuse of the term “Open Cloud” the community is forming a set of principles which should be met by any entity that wishes to use it, similar in spirit to the OSI‘s Open Source Definition for free software licenses.

  • No Barriers to Entry: There must be no obstacles in the path of an entity that make it difficult to enter. For example, membership fees, disproportionate capital expenditure relative to operational expenditure or dependencies on non-compliant products.
  • Rationale: Open Cloud offerings should be available to the maximum number and diversity of persons and groups. Competition must not be restricted.
  • No Barriers to Exit: There must be no obstacles in the path of an entity that make it difficult to leave. For example, a user must be able to obtain their data in a usable, machine-readable form on a self-service basis.
  • Rationale: Obstacles that prevent entities from abandoning one offering for another reduce competition, which must not be restricted. If the barriers to exit are significant, a firm may be forced to continue competing in a market, as the costs of leaving may be higher than those incurred by continuing to compete.
  • No Discrimination: There must be no discrimination against any person or group of persons, or against any specific field of endeavor. For example, the offering may not be restricted from being used in certain countries, by certain people, by a commercial endeavour, or for genetic research.
  • Rationale: All users should be allowed to participate without arbitrary screening.
  • Note: Some countries, including the United States, have export restrictions for certain types of products. An OCP-conformant product may warn users of applicable restrictions and remind them that they are obliged to obey the law; however, it may not incorporate such restrictions itself.
  • Interoperability: Where an appropriate standard exists for a given function it must be used rather than a proprietary alternative. Standards themselves must be clean and minimalist so as to be easily implemented and consumed. For example, if there is a suitable existing standard for single sign-on then it must be used by default, although including support for alternative interfaces is permissible.
  • Rationale: Standards foster interoperability and competition, giving rise to a fairer marketplace. The absence of standards – and, to a lesser extent, complex standards – has the opposite effect.
  • Licensing Freedom: Any material that is conveyed to users must be conveyed under a free license: one approved by the Open Source Initiative (OSI) based on their Open Source Definition (OSD) in the case of software, and a Creative Commons license (except NonCommercial and/or NoDerivatives versions) for everything else.
  • Rationale: Free licenses impose no significant legal restriction relative to people’s freedom to use, redistribute, and produce modified versions of and works derived from the content.
  • Technological Neutrality: No provision of any license or agreement may be predicated on any individual technology or style of interface. For example, it may not require that network clients run a certain operating system or be written in a certain programming language.
  • Rationale: Such restrictions limit the utility of the solution and freedom of the user by preventing them from using their preferred solution.
  • Transparency: All related processes should be transparent and subject to public scrutiny from inception. Feedback from stakeholders should be solicited and incorporated with a view to reaching a community consensus. Conflicts of interest must be disclosed and should be further mitigated.
  • Rationale: Transparency implies openness, communication, and accountability and prevents unfairly advantaging or disadvantaging certain parties.


See also

Cloud Standards Roadmap

Almost a year ago in “Cloud Standards: not so fast…” I explained why standardisation efforts were premature. A lot has happened in the interim and it is now time to start intensively developing standards, ideally by deriving the “consensus” of existing implementations.

To get the ball rolling I’ve written a Cloud Standards Roadmap, which is intended as an authoritative source of information spanning the various standardisation efforts (including identification of areas where effort is required).

Currently it looks like this:

Cloud Standards Roadmap
The cloud standards roadmap tracks the status of relevant standards efforts underway by established multi-vendor standards bodies.

| Layer | Description | Group | Project | Status | Due |
|---|---|---|---|---|---|
| Client | ? | ? | ? | ? | ? |
| Software (SaaS) | Operating environment | W3C | HTML 5 | Draft | 2008 |
| | Event-driven scripting language | ECMA | ECMAScript | Mature | 1997 |
| | Data-interchange format | IETF | JSON (RFC4627) | Mature | 2006 |
| Platform (PaaS) | Management API | ? | ? | ? | ? |
| Infrastructure (IaaS) | Management API | OGF | Cloud Infrastructure API (CIA) | Formation | 2009 |
| | Container format for virtual machines | DMTF | Open Virtualisation Format (OVF) | Complete | 2009 |
| | Descriptive language for resources | DMTF | CIM | Mature | 1999 |
| Fabric | ? | ? | ? | ? | ? |
Other standards efforts
Vendor-owned standards
Other resources

Approaching cloud standards with *vendor* focus only is full of fail

So I was taking stock of the cloud standards situation and found an insightful article (Cloudy clouds and standards) over at ComputerWorld via a colourful counterpoint over at f5 (Approaching cloud standards with end-user focus only is full of fail), hence the title. I made a comment which quickly turned into a blog post of its own (and was held for moderation anyway) so here goes:

I followed a link to this “short-sighted and selfish” view from Lori @ f5’s Approaching cloud standards with end-user focus only is full of fail rant and have to say that as an independent consultant representing the needs of large enterprise clients it’s not surprising that I should agree with you (representing the needs of end users in general) rather than a vendor.

Cloud computing is a paradigm shift (like mainframe to client-server) and attempting to document it all in one rigid “ontology” is a futile exercise, as evidenced by the epic failure of attempts to do so thus far. A bird’s-eye view of the landscape is possible, but only in retrospect. One of the great things about cloud computing is that it is user-centric – for once the end user has an opportunity to call the shots rather than being told what to do by vendors.

My various efforts (writing the Wikipedia article, setting up the Cloud Computing Community and more recently working on cloud standards starting with Platform as a Service) have all involved looking at what innovation is taking place in the industry and determining the consensus. Now is a very good time to do so as well because there are enough data points but no de facto proprietary standards (though the EC2 API is worryingly close to becoming one).

I tend to take advice from vendors on this topic with a grain of salt because most of their input tends to involve pulling the resulting “open standard” closer towards their particular offering – the Unified Cloud Interface (UCI) for example not only focuses on VM provisioning but goes so far as to include them specifically alongside Amazon and Google.

The user doesn’t [need to] care about this level of detail any more than they need to care about how a coal-fired power station works to turn on a light. The whole point of the cloud is that it conceals or “abstracts” details that ultimately become somebody else’s problem. Using the power analogy again, our “interfaces” to the electricity grid are very well standardised (2-4 pins and a certain voltage cycling at a certain frequency) and “The Cloud” needs similar interfaces (for example for storing data and uploading and managing workloads).

Once we have that, computing will be quickly commoditised, which is every user’s dream and every vendor’s worst nightmare (except for the few, like Amazon and Google, who will still have a seat after the computer industry’s next round of musical chairs).

In summary, cloud computing is finally an opportunity to shift the focus from the vendor to the user, where it arguably belongs. Vendors don’t like this of course (and anything they say on the subject should be viewed accordingly) and are doing everything they can to stake a claim in what is the equivalent of a gold rush. Only this time (unlike the dotcom bust) it’s real gold we’re talking about (not fools’ gold), and a large, sustainable (albeit heavily consolidated) industry of “computer power stations” and associated “megacomputer” supply chains will result.

Towards a Flash free YouTube killer (was: Adobe Flash penetration more like 50%)

A couple of days ago I wrote about Why Adobe Flash penetration is more like 50% than 99%, which resulted in a bunch of comments as well as a fair bit of discussion elsewhere including commentary from Adobe’s own John Dowdell. It’s good to see some healthy discussion on this topic (though it’s a shame to see some branding it “more flash hate” and an AC poster asking “How much did M$ pay you for this”).

Anyway everyone likes a good demonstration so I figured why not create a proof-of-concept YouTube killer that uses HTML 5’s video tag?

Knowing that around 20% of my visitors already have a subset of HTML 5 support (either via Safari/WebKit or Firefox 3.1 beta), and that this figure will jump to over 50% shortly after Firefox 3.1 drops (over 50% of my readers use Firefox, and over 90% of them run the most recent versions), I would already be considering taking advantage of the new VIDEO tag were I to add videos to the site (even though, as a Google Apps Premier Edition user, I already have a white-label version of YouTube).

Selecting the demo video was easy – my brother, Michael Johns, did a guest performance on American Idol last Wednesday and as per usual it’s already popped up on YouTube (complete with an HD version). Normally YouTube uses Flash’s FLV codec, but for HD they sensibly opted for H.264, which is supported by Safari (which supports anything QuickTime supports – including Ogg for users with Perian installed). Getting the video file itself is just a case of browsing to the YouTube page, going to Window->Activity and double clicking the long, digitally signed video URL, which should result in the video.mp4 file being downloaded (though now that Google are offering paid downloads they’re working hard to stop unsanctioned downloading).

On the other hand, Firefox 3.1 currently only supports Ogg (Vorbis/Theora) for licensing/patent reasons, as even Reasonable and Non-Discriminatory (RAND) licensing is unreasonable and discriminatory for free and open source software. Unfortunately the W3C working group infamously removed a recommendation that implementors ‘should’ support Ogg Vorbis and Theora for audio and video respectively, and a codec recommendation is currently conspicuously absent from the HTML 5 working draft. So what’s a developer to do but make both Ogg and H.264 versions available? Fortunately transcoding MP4 to Ogg (and vice versa) is easy enough with VLC, resulting in a similar-quality but 10% smaller file (video.ogg).

The HTML code itself is quite straightforward. It demonstrates:

  • A body onLoad function to switch to Ogg for Firefox users
  • YouTube object fallback for unsupported browsers (which in turn falls back to embed)
  • A simple JavaScript Play/Pause control (which could easily be fleshed out to a slider, etc.)
  • A simple JavaScript event handler to show an alert when the video finishes playing
<?xml version="1.0" encoding="UTF-8"?>

<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<title>Towards a Flash free YouTube killer…</title>
</head>

<!-- Basic test for Firefox switches to Ogg Theora -->
<!-- Test could be arbitrarily complex and/or run on the server side -->
<body onload="if (/Firefox/.test(navigator.userAgent)) { document.getElementsByTagName('video')[0].src = 'video.ogg'; }">
<h1>Michael Johns &amp; Carly Smithson – The Letter</h1>
<p>(Live At American Idol 02/18/2009) HD
(from <a href="">YouTube</a>)</p>

<!-- Supported browsers will use the video tag and ignore the rest -->
<video src="video.mp4" autoplay="true" width="630" height="380">
  <!-- If the video tag is unsupported by your browser the legacy
       YouTube object fallback (which in turn falls back to embed)
       goes here -->
</video>

<!-- Here's a script to give some basic playback control -->
<script type="text/javascript">
var myVideo = document.getElementsByTagName('video')[0];

function playPause() {
  if (myVideo.paused)
    myVideo.play();
  else
    myVideo.pause();
}

// Here's an event handler which will tell us when the video finishes
myVideo.addEventListener('ended', function () {
  alert('video playback finished');
}, false);
</script>

<p><input type="button" onclick="playPause()" value="Play/Pause"></p>

<p>By <a href="">Sam Johnston</a> of
<a href="">Australian Online Solutions</a></p>
</body>
</html>

This file (index.html) and the two video files above (video.mp4 and video.ogg) were then uploaded to Amazon S3 and made available via the Amazon CloudFront content delivery network. And finally you can see for yourself (bearing in mind that, to keep the code clean, no attempt was made to check the ready states, so either download the files locally or be patient!):

Towards a Flash free YouTube killer…