HTTP2 Expression of Interest

Here’s my (rather rushed) personal submission to the Internet Engineering Task Force (IETF) in response to their Call for Expressions of Interest in new work around HTTP; specifically, a new wire-level protocol for the semantics of HTTP (i.e., what will become HTTP/2.0), and new HTTP authentication schemes. You can also review the submissions of Facebook, Firefox, Google, Microsoft, Twitter and others.

[The views expressed in this submission are mine alone and not (necessarily) those of Citrix, Google, Equinix or any other current, former or future client or employer.]

My primary interest is in the consistent application of HTTP to (“cloud”) service interfaces, with a view to furthering the goals of the Open Cloud Initiative (OCI); namely widespread and ideally consistent interoperability through the use of open standard formats and interfaces.

In particular, I strongly support the use of the existing metadata channel (headers) over envelope overlays (SOAP) and alternative/ancillary representations (typically in JSON/XML), as this should greatly simplify interfaces while ensuring consistency between services. The current approach to cloud “standards” calls on vendors to define their own formats and interfaces and to maintain client libraries for the myriad languages du jour. In an application calling on multiple services this can result in a small amount of business logic depending on various bulky, often poorly written and/or unmaintained libraries. The usual counter to the interoperability problems this creates is to write “adapters” (à la ODBC) which expose a lowest-common-denominator interface, thus hiding functionality and creating an “impedance mismatch”. Ultimately this gives rise to performance, security, cost and other issues.

By using HTTP as intended it is possible to construct (cloud) services that can be consumed using nothing more than the built-in, standards compliant HTTP client. I’m not writing to discuss whether this is a good idea, but to expose a use case that I would like to see considered, and one that we have already applied with an amount of success in the Open Cloud Computing Interface (OCCI).

To illustrate the original scope, early versions of HTTP (RFC 2068) included not only the Link header (recently revived by Mark Nottingham in RFC 5988) but also LINK and UNLINK verbs to manipulate it (recently proposed for revival by James Snell). Unfortunately hypertext, and in particular HTML (which includes linking in-band rather than out-of-band), arguably stole HTTP’s thunder, leaving the overwhelming majority of formats that lack in-band linking (images, documents, virtual machines, etc.) high and dry and resulting in inconsistent linking styles (HTML vs XML vs PDF vs DOC etc.). This limited the extent of web linking as well as the utility of HTTP for innovative applications including APIs. Indeed HTTP could easily and simply meet the needs of many “Semantic Web” applications, but that is beyond the scope of this particular discussion.

To illustrate by way of example, consider the following synthetic request/response for an image hosting site which incorporates Web Linking (RFC 5988), Web Categories (draft-johnston-http-category-header) and Web Attributes (yet to be published):

GET /1.jpg HTTP/1.0

HTTP/1.0 200 OK
Content-Length: 69730
Content-Type: image/jpeg
Link: <http://creativecommons.org/licenses/by-sa/3.0/>; rel="license"
Link: </2.jpg>; rel="next"
Category: dog; label="Dog"; scheme="http://example.org/animals"
Attribute: name="Spot"
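
To show just how little tooling this requires, here’s a minimal sketch (mine, not part of the original submission) consuming such a response with nothing but Python’s built-in HTTP client; the host and path are hypothetical:

import http.client

conn = http.client.HTTPConnection("images.example.org")
conn.request("GET", "/1.jpg")
resp = conn.getresponse()

# Repeated headers such as Link are returned comma-joined by getheader()
links = resp.getheader("Link", "")          # licence and "next" relations
category = resp.getheader("Category", "")   # dog; label="Dog"; scheme="..."
attribute = resp.getheader("Attribute", "") # name="Spot"
body = resp.read()                          # the representation itself (the JPEG bytes)

print(links)
print(category, attribute, len(body))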

In order to “animate” resources, consider the use of the Link header to start a virtual machine in the Open Cloud Computing Interface (OCCI):

Link: </compute/123;action=start>; rel="http://schemas.ogf.org/occi/infrastructure/compute/action#start"
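
And a hedged sketch of what a client might do next, assuming (as in OCCI’s HTTP rendering) that the action is invoked by POSTing to the target of that Link; the host and any required Category or authentication headers are omitted:

import http.client

conn = http.client.HTTPConnection("occi.example.org")
conn.request("POST", "/compute/123;action=start")  # target of the action Link above
resp = conn.getresponse()
print(resp.status, resp.reason)  # success indicates the compute resource is starting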

The main objection to the use of the metadata channel in this fashion (beyond the application of common sense in determining what constitutes data vs metadata) concerns implementation issues (arbitrary size limits, i18n, handling of multiple headers, etc.) which could be largely resolved through specification. For example, the (necessary) use of RFC 2231 encoding for header values (but not keys) in RFC 5988 Web Linking gives rise to unnecessary complexity that may lead to interoperability, security and other issues, all of which could be resolved by specifying Unicode for keys and/or values. Another concern is the absence of features such as a standardised way to return a collection (e.g. multiple responses). I originally suggested that HTTP 2.0 incorporate such ideas in 2009.
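
To illustrate the kind of complexity I mean, here’s a sketch (header and values are illustrative only) of decoding an RFC 2231/5987-style encoded Link parameter by hand; a plain Unicode-capable header syntax would make this unnecessary:

from urllib.parse import unquote

# RFC 5988 allows non-ASCII parameter values only via the encoded title* form,
# while the keys themselves remain ASCII.
link = "</2.jpg>; rel=\"next\"; title*=UTF-8''N%C3%A4chstes%20Bild"

encoded = link.split("title*=", 1)[1]
charset, _language, value = encoded.split("'", 2)
print(unquote(value, encoding=charset))  # -> Nächstes Bild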

I’ll leave the determination of what would ultimately be required for such applications to the working group (should this use case be considered interesting by others); better support for performance, scalability and mobility is obviously required too, but that has already been discussed at length. I strongly support Poul-Henning Kamp’s statement that “I think it would be far better to start from scratch, look at what HTTP/2.0 should actually do, and then design a simple, efficient and future proof protocol to do just that, and leave behind all the aggregations of badly thought out hacks of HTTP/1.1.” (and agree that we should incorporate the concept of an “HTTP Router”), as well as Tim Bray’s statement that “I’m inclined to err on the side of protecting user privacy at the expense of almost all else” (and believe that we should prevent eavesdroppers from learning anything about an encrypted transaction; something we failed to do with DNSSEC, even given alternatives like dnscurve that ensure confidentiality as well as integrity).

Leaving Google+


Ironically many Google employees have even given up on Google+ (though plenty still post annoying “Moved to Google+” profile pics on other social networks)

One of those sneaky tweets that links to Google+ just tricked me into wading back into the swamp that it’s become, hopefully for the last time (I say “hopefully” because in all likelihood I’ll be forced back onto it at some point — it’s already apparently impossible to create a Google Account for any Google services without also landing yourself a Google+ profile and Gmail account and it’s very likely that the constant prompting for me to “upgrade” to Google+ will be more annoying than the infamous red notification box). Here’s what I saw in my stream:

  • 20 x quotes/quotepics/comics
  • 8 x irrelevant news articles & opeds
  • 1 x PHP code snippet
  • 3 x blatant ads
  • 2 x Google+ fanboi posts (including this little chestnut: “Saying nobody uses Google+ is like a virgin saying sex is boring. They’ve never actually tried it.” — you just failed at life by comparing Google+ to sex my friend).
  • 2 x random photos

That’s pretty much 0% signal and 100% noise, and before you jump down my throat about who I’m following, it’s a few hundred generally intelligent people (though I note it is convenient that the prevalent defense for Google+ being a ghost town, or worse, a cesspool, is that your experience depends not only on who you’re following, but what they choose to share with you — reminds me of the kind of argument you regularly hear from religious apologists).

Google+ Hangouts

My main gripe with Google+ this week though was the complete failure of Google+ Hangouts (which should arguably be an entirely separate product) for Rishidot Research’s Open Conversations: Cloud Transparency on Monday. The irony of holding an open/transparency discussion on a closed platform aside, we were plagued with technical problems from the outset. First it couldn’t find my MacBook Air’s camera, so I had to move from my laptop to my iMac (which called for heavy furniture to be moved to get a clean background). When I joined we started immediately (albeit late, and sans 2-3 of the half dozen attendees), but it wasn’t long before one of the missing attendees joined and repeatedly interrupted the first half of the meeting with audio problems. The final attendee never managed to join, though their name and a blank screen appeared each of the 5-10 times they tried. We then inexplicably lost two attendees, and by the time they managed to re-join I too got a “Network failure for media packets” error:

Then there was “trouble connecting with the plugin”, which called for me to refresh the page and then reinstall the plugin:

Eventually I made it back in, only to discover that we had now lost the host(!?!) and before long it was down to just me and one other attendee. We struggled through the last half of the hour but it was only afterwards that we discovered we were talking to ourselves because the live YouTube stream and recording stopped when the host was kicked out. Needless to say, Google+ Hangouts are not ready for the prime time, and if you invite me to join one then don’t be surprised if I refer you to this article.

Hotel California

To leave Google+ head over to Google Takeout and download your Circles (I grabbed data for other services too for good measure, and exported this blog separately since my profile is now Google+ integrated). You might want to see who’s following you, Actions->Select All and dump them into a circle first, otherwise you’ll probably lose that information when you close your account.

When you go to the Google+ “downgrade” page and select “Delete your entire Google profile” you’ll get a warning sufficiently complicated to scare most people back into submission, but the most concerning part for me was this unhelpful help advising “Other Google products which require a profile will be impacted”:

Fortunately, for YouTube and Blogger at least, you can check and revert (respectively) your decision to use a Google+ profile, but you’ll immediately be told to “Connect to Google+” once you unplug:

After that it’s just a case of checking “I understand that deleting this service can’t be undone and the data I delete can’t be restored.” and clicking “Remove selected services” (what “selected services”? I just want to be rid of Google+!). I’ll let you know how that goes once my friends on Google+ have had a chance to read this.

Flash/Silverlight: How much business can you afford to turn away?

Tim Anderson was asking about the future of Silverlight on Twitter today, so here are my thoughts on the subject, in the context of earlier posts on the future of Flash:

  • 2009: Why Adobe Flash penetration is more like 50% than 99%
  • 2010: Face it Flash, your days are numbered.
  • 2011: RIP Adobe Flash (1996-2011) – now let’s bury the dead

In the early days of the Internet, a lack of native browser support for “advanced” functionality (particularly video) created a vacuum that propelled Flash to near ubiquity. It was the only plugin to achieve such deep penetration, though I would argue never as high as 99% (which Adobe laughably advertise to this day). As a result, developers were able to convince clients to adopt the platform for all manner of interactive sites (including, infamously, many/most restaurants).

The impossible challenge for proprietary browser plugins is staying up-to-date and secure across myriad hardware and software platforms — it was hard enough trying to support multiple browsers on multiple versions of Windows on one hardware platform (x86), but with operating systems like Linux and Mac OS X now commanding non-negligible shares of the market it’s virtually impossible. Enter mobile devices, which by Adobe’s own reckoning outnumber PCs by 3 to 1. Plugin vendors now face an extremely diverse ecosystem of hardware (AMD, Intel, etc.) and software (Android, iOS, Symbian, Windows Phone 7, etc.) and an impossibly large number of permutations to support. Meanwhile browser engines (e.g. WebKit, which is the basis for Safari and Chrome on the desktop and iOS, Android and webOS on mobile devices) have added native support for the advanced features whose absence created a demand for Flash.

Unsurprisingly, not only is Flash in rapid decline — as evidenced by Adobe recently pulling out of the mobile market (and thus 3 in 4 devices) — but it would be virtually impossible for any competitor to reach its level of penetration. As such, Silverlight had (from the outset) a snowflake’s chance in hell of achieving an “acceptable” level of penetration.

What’s an “acceptable level of penetration” you ask? That’s quite simple — it’s determined by the proportion of customers that businesses are prepared to turn away in order to use “advanced” functionality that is now natively supported in most browsers. At Adobe’s claimed 99% penetration you’re turning away 1 in 100 customers. At 90% you’re turning away 1 in 10. According to http://riastats.com, if you’re deploying a Flash site down under then you’re going to be turning away 13%, or a bit more than 1 in 8. For Silverlight it’s even worse — almost half of your customers won’t even get to see your site without having to install a plugin (which they are increasingly less likely to do).
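
The arithmetic is as blunt as it sounds; a trivial sketch with illustrative penetration figures:

# Share of visitors turned away is simply 1 - penetration.
for penetration in (0.99, 0.95, 0.90, 0.87, 0.55):
    turned_away = 1 - penetration
    print(f"{penetration:.0%} penetration = turning away about 1 in "
          f"{round(1 / turned_away)} customers")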

How much revenue can your business tolerate losing? 1%? 10%? 50%? And for what benefit?

Infographic: Diffusion of Social Networks — Facebook, Twitter, LinkedIn and Google+

Social networking market

They say a picture’s worth a thousand words and much digital ink has been spilled recently on impressive sounding (yet relatively unimpressive) user counts, so here’s an infographic showing the diffusion of social networks as at last month to put things in perspective.

There are 7 billion people on the planet, of which 2 billion are on the Internet. Given Facebook are now starting to make inroads into the laggards (e.g. parents/grandparents) with 800 million active users already under their belt, I’ve assumed that the total addressable market (TAM) for social media (that is, those likely to use it in the short-medium term) is around a billion Internet users (i.e. half) and growing — both with the growth of the Internet and as a growing fraction of Internet users. That gives social media market shares of 80% for Facebook, 20% for Twitter and <5% for Google+. In other words, Twitter is 5x the size of Google+ and Facebook is 4x the size of Twitter (i.e. 20x the size of Google+).

It’s important to note that while some report active users, Google report total (i.e. best case) users — only a percentage of the total users are active at any one time. I’m also hesitant to make direct comparisons with LinkedIn as while everyone is potentially interested in Facebook, Twitter and Google+, the total addressable market for a professional network is limited, by definition, to professionals — I would say around 200 million and growing fast given the penetration I see in my own professional network. This puts them in a similar position to Facebook in this space — up in the top right chasing after the laggards rather than the bottom left facing the chasm.
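
For those who want to check my working, here’s the back-of-the-envelope calculation using the figures from the sources below and my assumed 1 billion TAM:

tam = 1_000_000_000  # assumed total addressable market (half of ~2bn Internet users)
users = {"Facebook": 800_000_000, "Twitter": 200_000_000, "Google+": 40_000_000}
for name, count in users.items():
    print(f"{name}: {count / tam:.0%} of the assumed TAM")
print(f"Twitter is {users['Twitter'] // users['Google+']}x Google+, "
      f"Facebook is {users['Facebook'] // users['Twitter']}x Twitter")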

Diffusion of innovations

The graph shows Rogers’ theory on the diffusion of innovations, documented in his book Diffusion of Innovations, where diffusion is the process by which an innovation is communicated through certain channels over time among the members of a social system.

There are 5 stages:

  1. Knowledge is when people are aware of the innovation but don’t know (and don’t care) about it.
  2. Persuasion is when people are interested in learning more.
  3. Decision is when people decide to accept or reject it.
  4. Implementation is when people employ it to some degree for testing (e.g. create an account).
  5. Confirmation is when people finally decide to use it, possibly to its full potential.

I would suggest that the majority of the total addressable market are at stage 1 or 2 for Google+ and Twitter, and stage 4 or 5 for Facebook and LinkedIn (with its smaller TAM). Of note, users’ decisions to reject an innovation at the decision or implementation phase may be semi-permanent — to quote Slate magazine’s Google+ is Dead article, “by failing to offer people a reason to keep coming back to the site every day, Google+ made a bad first impression. And in the social-networking business, a bad first impression spells death.” The same could be said for many users of Twitter, who sign up but fail to engage sufficiently to realise its true value. Facebook, on the other hand, often exhibits users who leave only to subsequently return due to network effects.

Social networking is also arguably a natural monopoly given, among other things, dramatically higher acquisition costs once users’ changing needs have been satisfied by the first mover (e.g. Facebook). Humans have been using social networking forever, only until recently it’s been manual and physiologically limited to around 150 connections (Dunbar’s number, named after British anthropologist Robin Dunbar). With the advent of technology that could displace traditional systems like business cards and rolodexes came a new demand for pushing the limits for personal and professional reasons — I use Facebook and LinkedIn extensively to push Dunbar’s number out an order of magnitude to ~1,500 contacts for example, and Twitter to make new contacts and communicate with thousands of people. I don’t want to maintain 4 different social networks any more than I want to have to search 4 different directories to find a phone number — I already have 3 which is 2 too many!

Rogers’ 5 factors

How far an innovation ultimately progresses depends on 5 factors:

  1. Relative Advantage — Does it improve substantially on the status quo (e.g. Facebook)?
  2. Compatibility — Can it be easily assimilated into an individual’s life?
  3. Simplicity or Complexity — Is it too complex for your average user?
  4. Trialability — How easy is it to experiment?
  5. Observability — To what extent is it visible to others (e.g. for viral adoption)?

Facebook, which started as a closed community at Harvard and other colleges and grew from there, obviously offered significant relative advantage over MySpace. I was in California at the time and it seemed like everyone had a MySpace page while only students (and a few of us in local/company networks) had Facebook. It took off like wildfire when they solved the trialability problem by opening the floodgates and a critical mass of users was quickly drawn in due to the observability of viral email notifications, the simplicity of getting up and running and the compatibility with users’ lives (features incompatible with the unwashed masses — such as the egregiously abused “how we met” form — are long gone and complex lists/groups are there for those who need them but invisible to those who don’t). Twitter is also trivial to get started but can be difficult to extract value from initially.

Network models

Conversely, the complexity of getting started on Google+ presents a huge barrier to entry and as a result we may see the circles interface buried in favour of a flat “follower” default like that of Twitter (the “suggested user list” has already appeared), or automated. Just because our real-life social networks are complex and dynamic does not imply that your average user is willing to invest time and energy in maintaining a complex and dynamic digital model. The process of sifting through and categorising friends into circles has been likened to the arduous process of arranging tables for a wedding and for the overwhelming majority of users it simply does not offer a return on investment:

In reality we’re most comfortable with concentric rings, which Facebook’s hybrid model recently introduced by way of “Close Friends”, “Acquaintances” and “Restricted” lists (as well as automatically maintained lists for locations and workplaces — a feature I hope gets extended to other attributes). By default Facebook is simple/flat — mutual/confirmed/2-way connections are “Friends” (though they now also support 1-way follower/subscriber relationships ala Twitter). Concentric rings then offer a greater degree of flexibility for more advanced users and the most demanding users can still model arbitrarily complex networks using lists:

In any case, if you give users the ability to restrict sharing you run the risk of their actually using it, which is a sure-fire way to kill off your social network — after all, much of the value derived from networks like Facebook is from “harmless voyeurism”. That’s why Google+ is worse than a ghost town for many users (including myself, though as a Google Apps user I was excluded from the landrush phase) while being too noisy for others. Furthermore, while Facebook and Twitter have a subscribe/follow (“pull”) model which allows users to be selective of what they hear, when a publisher shares content with circles on Google+ other users are explicitly notified (“push”) — this is important for “observability” but can be annoying for users.

Nymwars

The requirement to provide and/or share your real name, sex, date of birth and a photo also presents a compatibility problem with many users’ expectations of privacy and security, as evidenced by the resulting protests over valid use cases for anonymity and pseudonymity. For something that was accepted largely without question with Facebook, the nymwars appear to have caused irreparable harm to Google+ in the critically important innovator and early adopter segments, for reasons that are not entirely clear to me. I presume that there is a greater expectation of privacy for Google (to whom people entrust private emails, documents, etc.) than for Facebook (which people use specifically and solely for controlled sharing).

Adopter categories

Finally, there are 5 classes of adopters (along the X axis) varying over time as the innovation attains deeper penetration:

  1. Innovators (the first 2.5%) are generally young, social, wealthy, risk tolerant individuals who adopt first.
  2. Early Adopters (the next 13.5%) are opinion leaders who adopt early enough (but not too early) to maintain a central communication position.
  3. Early Majority (the next 34%, to 50% of the population) take significantly longer to adopt innovations.
  4. Late Majority (the next 34%) adopt innovations after the average member of society and tend to be highly sceptical.
  5. Laggards (the last 16%) show little to no opinion leadership and tend to be older, more reclusive and have an aversion to change-agents.

I’ve ruled out wealth because while buying an iPhone is expensive (and thus a barrier to entry), signing up for a social network is free. The peak of the bell curve is the point at which the average user (e.g. 50% of the market) has adopted the technology, and it is very difficult both to climb the curve as a new technology and to displace an existing technology that is over the hump.

The Chasm

The chasm (which exists between Early Adopters and Early Majority, i.e. at 16% penetration) refers to Moore’s argument in Crossing the Chasm that there is a gap between early adopters and the mass market which must be crossed by any innovation which is to be successful. Furthermore, thanks to accelerating technological change they must do so within an increasingly limited time for fear of being equaled by an incumbent or disrupted by another innovation. The needs of the mass market differ — often wildly — from the needs of early adopters and innovations typically need to adapt quickly to make the transition. I would argue that MySpace, having achieved ~75 million users at peak, failed to cross the chasm by finding appeal in the mass market (ironically due in no small part to their unfettered flexibility in customising profiles) and was disrupted by Facebook. Twitter on the other hand (with some 200 million active users) has crossed the chasm, as evidenced by the presence of mainstream icons like Bieber, Spears and Obama as well as their fans. LinkedIn (for reasons explained above) belongs at the top right rather than the bottom left.

Disruptive innovations

The big question today is whether Google+ can cross the chasm too and give Facebook a run for its money. Facebook, having achieved “new-market disruption” with almost a decade head start in refining the service with a largely captive audience, now exhibits extremely strong network effects. It would almost certainly take another disruptive innovation to displace them (that is, according to Clayton Christensen, one that develops in an emerging market and creates a new market and value network before going on to disrupt existing markets and value networks), in the same way that Google previously disrupted the existing search market a decade ago.

In observing that creating a link to a site is essentially a vote for that site (“PageRank”), Google implemented a higher quality search engine that was more efficient, more scalable and less susceptible to spam. In the beginning Google (then Backrub) was nothing special and the incumbents (remember Altavista?) were continuously evolving — they had little to fear from Google and Google had little to fear from them as it simply wasn’t worth their while chasing after potentially disruptive innovations like Backrub. They were so uninterested, in fact, that Yahoo! missed an opportunity to acquire Google for $3bn in the early days. Like most disruptive technologies, PageRank was technologically straightforward and far simpler than trying to determine relevance from the content itself. It was also built on a revolutionary hardware and software platform that scaled out rather than up, distributing work between many commodity PCs, thus reducing costs and causing “low-end disruption”. Its initial applications were trivial, but it quickly outpaced the sustaining innovation of the incumbents and took the lead, which it has held ever since:

Today Facebook is looking increasingly disruptive too, only in their world it’s no longer about links between pages, but links between people (which are arguably far more valuable). Last year while working at Google I actively advocated the development of a “PageRank for people” (which I referred to as “PeopleRank” or “SocialRank”), whereby a connection to a person was effectively a vote for that person and the weight of that vote would depend on the person’s influence in the community, in the same way that a link from microsoft.com is worth more than one from viagra.tld (which could actually have negative value in the same way that hanging out with the wrong crowd negatively affects reputation). I’d previously built what I’d call a “social metanetwork” named “meshed” (which never saw the light of day due to cloud-related commitments) and the idea stemmed from that, but I was busy running tape backups for Google, not building social networks on the Emerald Sea team.
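
For the curious, here’s a toy sketch (mine alone, nothing to do with Google’s actual code) of the “a link is a vote” idea behind PageRank, which applies just as well when the nodes are people and the edges are connections:

def pagerank(graph, damping=0.85, iterations=50):
    """graph maps each node to the list of nodes it links to (votes for)."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for node, outlinks in graph.items():
            targets = outlinks or nodes  # dangling nodes spread their weight evenly
            for target in targets:
                new_rank[target] += damping * rank[node] / len(targets)
        rank = new_rank
    return rank

# Illustrative graph: everyone "votes" for A, so A ends up with the most weight.
print(pagerank({"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A", "B"]}))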

With the wealth of information Google has at its fingertips — including what amounts to a pen trace of users’ e-mail and (courtesy of Android and Google Voice) phone calls and text messages — it should have been possible for them to completely automate the process of circle creation, in the same way that LinkedIn Maps can identify clusters of contacts. But they didn’t (perhaps because they got it badly wrong with Buzz), and they’re now on the sustaining innovation treadmill with otherwise revolutionary differentiating features being quickly co-opted by Facebook (circles vs lists, hangouts vs Skype, etc).
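
By way of illustration only (the clustering LinkedIn Maps or Google actually use isn’t public), grouping contacts into candidate circles can be as simple as finding connected clusters in the graph of who knows whom:

from collections import defaultdict

def suggest_circles(edges):
    """edges: (person, person) pairs; returns clusters of mutually connected contacts."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, circles = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, circle = [start], set()
        while stack:
            person = stack.pop()
            if person in seen:
                continue
            seen.add(person)
            circle.add(person)
            stack.extend(graph[person] - seen)
        circles.append(circle)
    return circles

# Colleagues who know each other end up in one circle, family in another.
print(suggest_circles([("alice", "bob"), ("bob", "carol"), ("mum", "dad")]))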

Another factor to consider is that Google have a massive base of existing users in a number of markets that they can push Google+ to, and they’re not afraid to do so (as evidenced by its appearance in other products and services including Android, AdWords, Blogger, Chrome, Picasa, Maps, News, Reader, Talk, YouTube and of course the ubiquitous sandbar and gratuitous blue arrow which appeared on Google Search). This strategy is not without risk though, as if successful it will almost certainly attract further antitrust scrutiny, in the same way that Microsoft found itself in hot water for what was essentially putting an IE icon on the desktop. Indeed I had advocated the deployment of Google+ as a “social layer” rather than an isolated product (à la the defunct Google Buzz), but stopped short of promoting an integrated product to rival Facebook — if only to maintain a separation of duties between content production/hosting and discovery.

The Solution

While I’m happy to see some healthy competition in the space, I’d rather not see any of the social networks “win” as if any one of them were able to cement a monopoly then us users would ultimately suffer. At the end of the day we need to remember that for any commercial social network we’re not the customer, we’re the product being sold:

As such, I strongly advocate the adoption of open standards for social networking, whereby users select a service or host a product that is most suitable for their specific needs (e.g. personal, professional, branding, etc) which is interoperable with other, similar products.

What we’re seeing today is similar to the early days of Internet email, where the Simple Mail Transfer Protocol (SMTP) broke down the barriers between different silos — what we need is an SMTP for social networking.


Sources:

  • Facebook: 800 million users (active) [source]
  • Twitter: 200 million users (active) [source]
  • LinkedIn: 135 million users (total) [source]
  • MySpace: 75.9 million users (peak) [source]
  • Google+: 40 million users (total) [source]

RIP Adobe Flash (1996-2011) – now let’s bury the dead

Adobe kills mobile Flash, giving Steve Jobs the last laugh, reports The Guardian’s Charles Arthur following the late Steve Jobs’ epic Thoughts on Flash rant 18 months ago. It’s been about 2.5 years since I too got sick of Flash bringing my powerful Mac to its knees, so I went after the underlying lie that perpetuates the problem, explaining why Adobe Flash penetration is more like 50% than 99%. I even made progress Towards a Flash free YouTube killer, only it ended up being YouTube themselves who eventually started testing a YouTube HTML5 Video Player (while you’re there please do your bit for the open web by clicking “Join the HTML5 Trial” at the bottom of that page).

“I heard a sound as though a million restaurant websites cried out at once” – Charles Arthur

You see, armed with this heavily manipulated statistic, armies of developers are to this day fraudulently duping their paying clients into deploying a platform that will invariably turn away a percentage of their business at the door, in favour of annoying flaming logos and other atrocities that blight the web:

How much business can you tolerate losing? If you’ve got 95% penetration then you’re turning away 1 in 20 customers. At 90% you’re turning away 1 in 10. At 50% half of your customers won’t even get to see your product. I don’t know too many businesses who can afford to turn away any customers in this economic climate.

In my opinion the only place Flash technology has in today’s cloud computing environment is as a component of the AIR runtime for building (sub-par) cross-platform applications, and even then I’d argue that they should be using HTML5. As an Adobe Creative Suite Master Collection customer I’m very happy to see them dropping support for this legacy technology to focus on generating interactive HTML5 applications, and look forward to a similar announcement for desktop versions of the Flash player in the not too distant future.

In any case, with the overwhelming majority of devices being mobile today and with more and more of them including browser functionality, the days of Flash were numbered even before Adobe put the mobile version out of its misery. Let’s not drag this out any longer than we have to, and bury the dead by uninstalling Flash Player. Here are instructions for Mac OS X and Windows, and if you’re not ready to take the plunge into an open standards based HTML5 future then at least install FlashBlock for Chrome or Firefox (surely you’re not still using IE?).

Update: Flash for TV is dead too, as if killing off mobile wasn’t enough: Adobe Scrapping Flash for TV, Too‎

Update: Rich Internet Application (RIA) architectures in general are in a lot of trouble — Microsoft are killing off Silverlight as well: Mm, Silverlight, what’s that smell? Yes, it’s death

Update: In a surprising move that will no doubt be reversed, RIM announced it would continue developing Flash on the PlayBook (despite almost certainly lacking the ability to do so): RIM vows to keep developing Flash for BlackBerry PlayBook – no joke

Face it Flash, your days are numbered.

It’s no secret that I’m no fan of Adobe Flash:

It should be no surprise then that I’m stoked to see a vigorous debate taking place about the future/fate of Flash well ahead of schedule, and even happier to see Flash sympathisers already resorting to desperate measures including “playing the porn card” (not to mention Farmville which, in addition to the myriad annoying, invasive and privacy-invading advertisements, I will also be more than happy to see extinct). In my mind this all but proves how dire their situation has become with the sudden onslaught of mobile devices deliberately absent Flash malware*.

Let’s take a moment to talk about statistics. According to analysts there are currently “only” 1.3 billion Internet-connected PCs. To put that into context, there are already almost as many Internet-connected mobile devices. With a growth rate 2.5 times that of PCs, mobiles will soon become the dominant Internet access device. Of those new devices, few of them support Flash (think Android, iPhone), and with good reason – they are designed to be small, simple, performant and operate for hours/days between charges.

As if that’s not enough, companies with the power to make it happen would very much like for us to have a third device that fills the void between the two – a netbook or a tablet (like the iPad). For the most part (again being powered by Android and iPhone OS) these devices don’t support Flash either. Even if we were to give Adobe the benefit of the doubt in accepting their deceptively optimistic claims that Flash is currently “reaching 99% of Internet-enabled desktops in mature markets” (for more on that subject see Lies, damned lies and Adobe’s penetration statistics for Flash), between these two new markets it seems inevitable that their penetration rate will drop well below 50% real soon now.

Here’s the best part though, Flash penetration doesn’t even have to drop below 50% for us to break the vicious cycle of designers claiming “99% penetration” and users then having to install Flash because so many sites arbitrarily depend on it (using Flash for navigation is a particularly heinous offense, as is using it for headings with fancy fonts). Even if penetration were to drop to 95% (I would argue it already has long ago, especially if you dispense with weasel wording like “mature markets” and even moreso if you do away with the arbitrary “desktop” restriction – talk about sampling bias!) that translates to turning away 1 in 20 of your customers. At what point will merchants start to flinch – 1 in 10 (90%)? 1 in 5 (80%)? 1 in 4 (75%)? 1 in 2 (50%)?

As if that’s not enough, according to Rich Internet Application Statistics, you would be losing some of your best customers – those who can afford to run Mac OS X (87% penetration) and Windows 7 (around 75% penetration) – not to mention those with iPhones and iPads (neither of which are the cheapest devices on the market). Oh yeah and you heard it right, according to them, Flash penetration on Windows 7 is an embarrassing 3 in 4 machines; even worse than Sun (now Oracle) Java (though ironically Microsoft’s own Silverlight barely reaches 1 in 2 machines).

While we’re at it, at what point does it become “willful false advertising” for Adobe and their army of Flash designers to claim such deep penetration? Victims who pay $$lots for Flash-based sites only to discover from server logs that a surprisingly large percentage of users are being turned away have every reason to be upset, and ultimately to seek legal recourse. Why hasn’t this already happened? Has it? In any case designers like “Paul Threatt, a graphic designer at Jackson Walker design group, [who] has filed a complaint to the FTC alleging false advertising” ought to think twice before pointing the finger at Apple (accused in this case over a few mockups, briefly shown and since removed, in an iPad promo video).

At the end of the day much of what is annoying about the web is powered by Flash. If you don’t believe me then get a real browser and install Flashblock (for Firefox or Chrome) or ClickToFlash (for Safari) and see for yourself. You will be pleasantly surprised by the absence of annoyances as well as impressed by how well even an old computer can perform when not laden with this unnecessary parasite*. What is less obvious (but arguably more important) is that your security will dramatically improve as you significantly reduce your attack surface (while you’re at it replace Adobe Reader with Foxit and enjoy even more safety). As someone who has been largely Flash-free for the last 3 months I can assure you life is better on the other side; in addition to huge performance gains I’ve had far fewer crashes since purging my machine – unsurprising given that, according to Apple’s Steve Jobs, “Whenever a Mac crashes more often than not it’s because of Flash”. “No one will be using Flash,” he says. “The world is moving to HTML5.”

So what can Adobe do about this now the horse has long since bolted? If you ask me, nothing. Dave Winer (another fellow who, like myself, “very much care[s] about an open Internet“) is somewhat more positive in posing the question What if Flash were an open standard? and suggesting that “Adobe might want to consider, right now, very quickly, giving Flash to the public domain. Disclaim all patents, open source all code, etc etc.“. Too bad it’s not that simple so long as one of the primary motivations for using Flash is bundled proprietary codecs like H.264 (which the MPEG LA have made abundantly clear will not be open sourced so long as they hold [over 900!] essential patents over it).

Update: Mobile Firefox Maemo RC3 has disabled Flash because “The Adobe Flash plugin used on many sites degraded the performance of the browser to the point where it didn’t meet Mozilla’s standards.” Sound familiar?

Update: Regarding the upcoming CS5 release which Adobe claims will “let you publish ActionScript 3 projects to run as native applications for iPhone“, this is not at all the same thing as the Flash plugin and will merely allow developers to create applications which suck more using a non-free SDK. No thanks. I’m unconvinced Apple will let such applications into the store anyway, citing performance concerns and/or the runtime rule.

Update: I tend to agree with Steven Wei that The best way for Adobe to save Flash is by killing it, but that doesn’t mean it’ll happen, and in any case they would have needed to start at least a year or two ago for the project to have any relevance; it’s clear that they’re still busy flogging the dead horse of the binary plugin.

Update: Another important factor I neglected to mention above is that Adobe already struggle to maintain up-to-date binaries for a small number of major platforms, and even then Mac and Linux are apparently second and third class citizens. If they’re struggling to manage the workload today then I don’t see what will make it any easier tomorrow with the myriad Linux/ARM devices hitting the market (among others). Nor would they want to – if they target HTML5, CSS3, etc. as proposed above then they would have more resources to spend on building the best development environment out there.

* You may feel that words like “parasite” and “malware” are a bit strong for Flash, but when you think about it Flash has all the necessary attributes; it consumes your resources, weakens your security and is generally annoying. In short, the cost outweighs any perceived benefits.

A word on the Australian Internet censorship scandal


I’ve had a quick scan over Senator Stephen Conroy‘s infamous, long-awaited report on the efficacy of current Internet filtering technology and find it to be nothing short of scandalous. Without getting into the nitty gritty details (for example, how a filtering solution can achieve the impossible by improving rather than degrading the performance of encrypted, random transfers), it reads like it’s a whitepaper for one of the various purveyors of censorship technology.

The cynic in me insisted I take a quick look at who these Enex Pty Ltd jabbers are anyway – they could be an industry lobby group for all we know. Sure enough, a quick look at their corporate client list reveals (based on some quick Google searching) over a dozen companies who make a living selling commercial censorship technology:

  • Anthology Solutions
  • Content Keeper Technologies
  • Content Watch
  • F-Secure Corporation
  • Internet Sheriff Technology
  • Manaccom
  • MessageLabs
  • NetBox Blue
  • Netgear
  • Netsweeper
  • PC Tools Software
  • Raritan (?)
  • Secure Computing Corporation (McAfee)
  • Symantec
  • Trend Micro

To put things in perspective, this represents around a quarter of their published client list, and that’s not including half a dozen or so service providers that could arguably be thrown in with this bunch. Who in their right mind would risk upsetting one in four of their paying customers by writing a report critical of their products? And does anyone really believe that these vendors resisted the urge to apply pressure? Or that there were not personal relationships involved? I don’t, not for a second. In my opinion this report was rigged from the outset to succeed, and in doing so deprive Australians of essential civil liberties.

The report itself is fatally flawed; the error margins are significant (e.g. “a conservative +/-10 percent”), critical controls were missing (e.g. “as much as 40 percent of an internet service performance could be lost [due to factors outside of our control]”), outrageous assumptions were used (e.g. “performance impact is considered minimal if between 10 and 20 percent”) and perhaps most importantly of all, its creator has an obvious conflict of interest. I don’t consider it to be worth the paper it’s [not] printed on.

Another deeply concerning development is government grants that would encourage ISPs to go beyond the mandatory filters, despite all censorship systems tested reporting 2.5-3.5% false positive rates (that is, where innocuous/legitimate content is filtered). To put that in perspective, the best part of a billion legitimate pages would be improperly filtered (according to Wikipedia stats), or around 1 page in 30.
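
For the record, the arithmetic looks like this (the web-size figure is my own rough assumption, not from the report):

indexed_pages = 25_000_000_000  # assumed order of magnitude for the indexable web
for fp_rate in (0.025, 0.035):
    blocked = fp_rate * indexed_pages
    print(f"{fp_rate:.1%} false positives = ~1 page in {round(1 / fp_rate)}, "
          f"or roughly {blocked / 1e9:.1f} billion legitimate pages")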

Speaking of Wikipedia, many of the systems are hybrid, which means that hosts known to be clean would be ignored by IP (which is much more efficient). If, however, even a single page were problematic then the entire site (and all others sharing its IPs) would be forced through a filtering proxy. This would affect some of the most popular sites on the Internet (such as Wikipedia and YouTube), not to mention other increasingly useful services like WikiLeaks (no doubt silencing such services is seen as a fringe benefit to our self-appointed censors). Need I remind you that similar filters in Britain caused severe problems for Wikipedia over a single CD cover only last year?
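
A sketch of the hybrid behaviour described above; hostnames and lists are illustrative:

BLACKLIST = {"bigsite.example/one-bad-page"}
HOSTS_WITH_LISTED_PAGES = {url.split("/")[0] for url in BLACKLIST}

def route(url):
    host = url.split("/")[0]
    if host not in HOSTS_WITH_LISTED_PAGES:
        return "pass through by IP (no added latency)"
    if url in BLACKLIST:
        return "blocked"
    return "forced through the filtering proxy (added latency for the whole site)"

print(route("cleanhost.example/any-page"))        # untouched
print(route("bigsite.example/one-bad-page"))      # blocked
print(route("bigsite.example/every-other-page"))  # proxied despite being legitimate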

Another consideration that has not been covered anywhere near enough is the performance impact on cloud computing services. Web interfaces like Facebook, Twitter and Gmail are extremely sensitive to latency introduced by proxies and raw computing services like Amazon’s S3 are sensitive to bandwidth limitations. Then you have the problem of platforms like Google App Engine, Google Sites & Microsoft Web Office which are both difficult to identify (they have many IPs which are not disclosed and difficult if not impossible to enumerate) and which host content for a massive number of customers. If even one person shares a document deemed obnoxious to their sensibilities then the performance will be reduced to unacceptable levels for everyone until it is removed (and then some).

It is my contention that censorship is completely incompatible with cloud computing, and that this alone is reason enough to scuttle it. In the meantime Electronic Frontiers Australia (EFA) has just landed themselves a new life member and I encourage anyone who cares about their future and that of their children to join as well (my friends in the USA may want to take a look at the EFF and Europeans the FFII).

Thanks to Gizmodo Australia for the image above, used without permission but with thanks. No thanks to Gizmodo for breaking the link.