Flash/Silverlight: How much business can you afford to turn away?

Tim Anderson was asking about the future of Silverlight on Twitter today, so here are my thoughts on the subject, in the context of earlier posts on the future of Flash:

  • 2009: Why Adobe Flash penetration is more like 50% than 99%
  • 2010: Face it Flash, your days are numbered.
  • 2011: RIP Adobe Flash (1996-2011) – now let’s bury the dead

In the early days of the Internet, a lack of native browser support for “advanced” functionality (particularly video) created a vacuum that propelled Flash to near ubiquity. It was the only plugin to achieve such deep penetration, though I would argue never as high as 99% (which Adobe laughably advertise to this day). As a result, developers were able to convince clients to adopt the platform for all manner of interactive sites (including, infamously, many/most restaurants).

The impossible challenge for proprietary browser plugins is staying up-to-date and secure across myriad hardware and software platforms — it was hard enough trying to support multiple browsers on multiple versions of Windows on one hardware platform (x86), but with operating systems like Linux and Mac OS X now commanding non-negligible shares of the market it’s virtually impossible. Enter mobile devices, which by Adobe’s own reckoning outnumber PCs by 3 to 1. Plugin vendors now face an extremely diverse ecosystem of hardware (AMD, Intel, ARM, etc.) and software (Android, iOS, Symbian, Windows Phone 7, etc.) and an impossibly large number of permutations to support. Meanwhile browser engines (e.g. WebKit, which is the basis for Safari and Chrome on the desktop and iOS, Android and webOS on mobile devices) have added native support for the advanced features whose absence created the demand for Flash.

Unsurprisingly, not only is Flash in rapid decline — as evidenced by Adobe recently pulling out of the mobile market (and thus 3 in 4 devices) — but it would be virtually impossible for any competitor to reach its level of penetration. As such, Silverlight had (from the outset) a snowflake’s chance in hell of achieving an “acceptable” level of penetration.

What’s an “acceptable level of penetration”, you ask? That’s quite simple — it’s set by the share of customers a business is prepared to turn away in exchange for “advanced” functionality that is now natively supported in most browsers anyway. At Adobe’s claimed 99% penetration you’re turning away 1 in 100 customers. At 90% you’re turning away 1 in 10. According to http://riastats.com, if you’re deploying a Flash site down under then you’re going to be turning away 13%, or a bit more than 1 in 8. For Silverlight it’s even worse — almost half of your customers won’t even get to see your site without first installing a plugin (which they are increasingly unlikely to do).
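To put numbers on it, here’s a trivial sketch of the trade-off. Only the penetration figures come from the discussion above; the traffic, conversion and order-value figures are hypothetical:

    def customers_turned_away(penetration, monthly_visitors, conversion_rate, avg_order_value):
        """Estimate the cost of requiring a plugin that only a fraction of visitors have."""
        missing = 1.0 - penetration                 # visitors without the plugin installed
        lost_visitors = monthly_visitors * missing  # turned away at the door
        lost_revenue = lost_visitors * conversion_rate * avg_order_value
        return round(lost_visitors), round(lost_revenue, 2)

    # Penetration figures from the discussion above; traffic and sales figures are made up.
    for name, penetration in [("Flash (down under)", 0.87), ("Silverlight", 0.55)]:
        visitors, revenue = customers_turned_away(penetration, monthly_visitors=100_000,
                                                  conversion_rate=0.02, avg_order_value=50.0)
        print(f"{name}: {visitors} visitors and ${revenue} turned away per month")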

How much revenue can your business tolerate losing? 1%? 10%? 50%? And for what benefit?

A word on the future of Europe (without the United Kingdom)

It’s rare that I rant about politics, but given the train wreck that we’ve woken up to here in Europe I thought I’d make an exception, as this is important for all of us — both here in the 27-member European Union (Switzerland, while part of Europe, is not part of the European Union nor the 17-member Eurozone as it has its own currency, but we’re landlocked by it and affected by its instability) and abroad, including in the United States.

I’m no expert on European politics, but having been a resident of the region for almost a decade now and having lived and/or worked in three member states (in addition to Switzerland), I have the unusual advantage of having seen it from many angles:

  • From Ireland, which has been (and is to this day) a beneficiary of the union by way of support for its relatively small economy and its inexplicably generous 12.5% corporate tax rate.
  • From France, which along with Germany is one of the powerhouses of the European economy with the most to lose if things go awry.
  • From Switzerland, which is an independent, neutral country that happens to be in the center of Europe and only recently joined the Schengen Agreement (relaxing its borders with France, Germany, Austria and Italy).
  • From the United Kingdom, which is a member state outside of the Eurozone with its own currency (British Pounds) that is isolated from the mainland by sea and apparently sees this as a reason to get special treatment.

The United Kingdom is a large and important economy in the region, but even down to the grassroots level they see themselves as independent and assess every single decision solely on the basis of what it will do for them — there are regularly mini scandals in the papers about their relationship with their fellow Europeans (who are typically seen to be somehow benefiting at their expense). One particularly shortsighted tweet captured the sentiment nicely.

As a prime example, the Common Agricultural Policy, which is designed “to provide farmers with a reasonable standard of living, consumers with quality food at fair prices and to preserve rural heritage”, tends to redistribute funds from more urbanised countries like the Netherlands and the United Kingdom to those where agriculture actually takes place. It’s an important (albeit changing) function and it commands almost half of the EU’s budget.

Another example of unnecessary friction is their [self-]exclusion from the Schengen Agreement, which creates a borderless area within Europe, thus facilitating transport and commerce. You still have to pass border control when you enter or leave the Schengen area — including when travelling to or from the Common Travel Area (consisting of the United Kingdom and Ireland, which share a land border on the island of Ireland between the Republic of Ireland and Northern Ireland) — but you can travel freely within it once you’re there, and there are visas which cover the entire region.

Cutting to the chase, it is no surprise then that the Brits would be stubborn when it came to changing the treaty by unanimous vote — indeed I’d been predicting that for a while and was certain it would happen a few days ago. What is a surprise though is just how belligerent and childish they’ve been about it — a sentiment echoed by a Frenchman commenting on the video accompanying The Telegraph’s excellent article EU suffers worst split in history as David Cameron blocks treaty change.

Other users tweeted in agreement, and Simon Wardley summed it up nicely.

From my point of view the Brits are [allowing their representatives to get away with] acting like petulant children, benefiting from the European Union when it suits them and taking their toys home when it doesn’t. Their argument that the very establishment that got us into this mess must absolutely be protected above all else is weak — and the claim that this is in the interests of the City, let alone the entire country, is deceptive.

They “very doggedly” (their words) sought “a ‘protocol’ giving the City of London protection from a wave of EU financial service regulations related to the eurozone crisis”. That’s right, they didn’t want to play by the same rules as everyone else, and exercised their veto when it became apparent that was the only option.

To add insult to injury, they “warned the new bloc that it would not be able to use the resources of the EU, raising real doubts as to whether the eurozone would be able to enforce fiscal rules in order to calm the markets”. So not only are they going to not participate in cleaning up the mess they played a key role in creating, but they’re going to do their best to make sure nobody else can either.

Fortunately there’s light at the end of the tunnel: “Cameron was clumsy in his manoeuvring,” a senior EU diplomat said. “It may be possible that Britain will shift its position in the days ahead if it discovers that isolation really is not a viable course of action.” Please take a moment today to express your discontent with this decision: sometimes, in order to serve your own interests, you also need to consider those of others — much as with the tragedy of the commons (where, in this case, the commons is the European and global markets).

Update: Another great [opinion] piece from The Telegraph: Cameron: the bulldog has no teeth:

Cameron (and Britain) are now in a no-win situation. If the eurozone countries start to rally, then we shall be isolated from the new bloc and stuck in the slow lane of Europe. Should the euro problems deepen, then we shall bear the consequences in full. As George Osborne has indicated, a disorderly collapse of the euro would drag a voiceless Britain into depression.

In France and Germany, Cameron will be blamed for exacerbating a crisis by leaders who will brand him the pariah of Europe. Overnight, Britain has changed from a major player to an isolated outpost which, if this goes on, will become about as significant on the global stage as the Isle of Mull. Churchill would be turning in his grave.


Infographic: Diffusion of Social Networks — Facebook, Twitter, LinkedIn and Google+

Social networking market

They say a picture’s worth a thousand words, and much digital ink has been spilled recently on impressive-sounding (yet relatively unimpressive) user counts, so here’s an infographic showing the diffusion of social networks as of last month to put things in perspective.

There are 7 billion people on the planet, of which 2 billion are on the Internet. Given Facebook are now starting to make inroads into the laggards (e.g. parents/grandparents) with 800 million active users already under their belt, I’ve assumed that the total addressable market (TAM) for social media (that is, those likely to use it in the short-to-medium term) is around a billion Internet users (i.e. half) and growing — both with the growth of the Internet and as a growing fraction of Internet users. That gives social media market shares of 80% for Facebook, 20% for Twitter and <5% for Google+. In other words, Twitter is 5x the size of Google+ and Facebook is 4x the size of Twitter (i.e. 20x the size of Google+).

It’s important to note that while some report active users, Google report total (i.e. best-case) users — only a percentage of the total users are active at any one time. I’m also hesitant to make direct comparisons with LinkedIn as, while everyone is potentially interested in Facebook, Twitter and Google+, the total addressable market for a professional network is limited, by definition, to professionals — I would say around 200 million and growing fast given the penetration I see in my own professional network. This puts them in a similar position to Facebook in this space — up in the top right chasing after the laggards rather than down in the bottom left facing the chasm.
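The shares quoted above are simply the reported user counts (see the Sources list at the end of this post) divided by my assumed one-billion-user TAM — a quick sanity check:

    # User counts from the Sources list below; the 1-billion-user TAM is my assumption.
    TAM = 1_000_000_000
    users = {"Facebook": 800_000_000, "Twitter": 200_000_000, "Google+": 40_000_000}

    for network, count in users.items():
        print(f"{network}: {count / TAM:.0%} of the assumed TAM")

    print(f"Twitter is {users['Twitter'] / users['Google+']:.0f}x the size of Google+")
    print(f"Facebook is {users['Facebook'] / users['Twitter']:.0f}x the size of Twitter")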

Diffusion of innovations

The graph shows Rogers’ theory on the diffusion of innovations, documented in Diffusion of Innovations, where diffusion is the process by which an innovation is communicated through certain channels over time among the members of a social system.

There are 5 stages:

  1. Knowledge is when people are first exposed to the innovation but lack information about it (and aren’t yet motivated to seek it out).
  2. Persuasion is when people are interested in learning more.
  3. Decision is when people decide to accept or reject it.
  4. Implementation is when people employ it to some degree for testing (e.g. create an account).
  5. Confirmation is when people finally decide to use it, possibly to its full potential.

I would suggest that the majority of the total addressable market are at stage 1 or 2 for Google+ and Twitter, and stage 4 or 5 for Facebook and LinkedIn (with its smaller TAM). Of note, users’ decisions to reject an innovation at the decision or implementation phase may be semi-permanent — to quote Slate magazine’s Google+ is Dead article, “by failing to offer people a reason to keep coming back to the site every day, Google+ made a bad first impression. And in the social-networking business, a bad first impression spells death.” The same could be said for many users of Twitter, who sign up but fail to engage sufficiently to realise its true value. Facebook, on the other hand, often sees users leave only to return later due to network effects.

Social networking is also arguably a natural monopoly given, among other things, dramatically higher acquisition costs once users’ changing needs have been satisfied by the first mover (e.g. Facebook). Humans have been using social networking forever, only until recently it was manual and cognitively limited to around 150 connections (Dunbar’s number, named after British anthropologist Robin Dunbar). With the advent of technology that could displace traditional systems like business cards and Rolodexes came a new demand for pushing the limits, for personal and professional reasons — I use Facebook and LinkedIn extensively to push Dunbar’s number out an order of magnitude to ~1,500 contacts, for example, and Twitter to make new contacts and communicate with thousands of people. I don’t want to maintain 4 different social networks any more than I want to have to search 4 different directories to find a phone number — I already have 3, which is 2 too many!

Rogers’ 5 factors

How far an innovation ultimately progresses depends on 5 factors:

  1. Relative Advantage — Does it improve substantially on the status quo (e.g. Facebook)?
  2. Compatibility — Can it be easily assimilated into an individual’s life?
  3. Simplicity or Complexity — Is it too complex for your average user?
  4. Trialability — How easy is it to experiment?
  5. Observability — To what extent is it visible to others (e.g. for viral adoption)?

Facebook, which started as a closed community at Harvard and other colleges and grew from there, obviously offered significant relative advantage over MySpace. I was in California at the time and it seemed like everyone had a MySpace page while only students (and a few of us in local/company networks) had Facebook. It took off like wildfire when they solved the trialability problem by opening the floodgates and a critical mass of users was quickly drawn in due to the observability of viral email notifications, the simplicity of getting up and running and the compatibility with users’ lives (features incompatible with the unwashed masses — such as the egregiously abused “how we met” form — are long gone and complex lists/groups are there for those who need them but invisible to those who don’t). Twitter is also trivial to get started but can be difficult to extract value from initially.

Network models

Conversely, the complexity of getting started on Google+ presents a huge barrier to entry, and as a result we may see the circles interface buried in favour of a flat “follower” default like that of Twitter (the “suggested user list” has already appeared), or automated. Just because our real-life social networks are complex and dynamic does not mean that your average user is willing to invest time and energy in maintaining a complex and dynamic digital model. The process of sifting through and categorising friends into circles has been likened to the arduous process of arranging tables for a wedding, and for the overwhelming majority of users it simply does not offer a return on investment.

In reality we’re most comfortable with concentric rings, which Facebook’s hybrid model recently introduced by way of “Close Friends”, “Acquaintances” and “Restricted” lists (as well as automatically maintained lists for locations and workplaces — a feature I hope gets extended to other attributes). By default Facebook is simple/flat — mutual/confirmed/2-way connections are “Friends” (though they now also support 1-way follower/subscriber relationships à la Twitter). Concentric rings then offer a greater degree of flexibility for more advanced users, and the most demanding users can still model arbitrarily complex networks using lists.
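For the sake of illustration, here’s a rough sketch of the three models just described — flat friends, concentric rings and arbitrary lists — with all names and data entirely hypothetical:

    # Hypothetical illustration of the three sharing models described above.
    friends = {"alice", "bob", "carol", "dave"}                      # flat, mutual connections

    # Concentric rings: each ring implicitly includes the rings inside it.
    rings = ["close_friends", "friends", "acquaintances", "restricted"]
    membership = {"alice": "close_friends", "bob": "friends",
                  "carol": "acquaintances", "dave": "restricted"}

    def ring_audience(post_ring):
        """Share with everyone at or inside the chosen ring."""
        allowed = set(rings[: rings.index(post_ring) + 1])
        return {person for person, ring in membership.items() if ring in allowed}

    # Arbitrary lists (the most demanding users): any overlapping sets you like.
    lists = {"work": {"bob", "carol"}, "family": {"alice"}, "book_club": {"carol", "dave"}}

    print(ring_audience("friends"))          # {'alice', 'bob'}
    print(lists["work"] | lists["family"])   # share with a union of lists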

In any case, if you give users the ability to restrict sharing you run the risk of their actually using it, which is a sure-fire way to kill off your social network — after all, much of the value derived from networks like Facebook comes from “harmless voyeurism”. That’s why Google+ is worse than a ghost town for many users (including myself, though as a Google Apps user I was excluded from the landrush phase) while being too noisy for others. Furthermore, while Facebook and Twitter have a subscribe/follow (“pull”) model which allows users to be selective about what they hear, when a publisher shares content with circles on Google+ other users are explicitly notified (“push”) — this is important for “observability” but can be annoying for users.

Nymwars

The requirement to provide and/or share your real name, sex, date of birth and a photo also presents a compatibility problem with many users’ expectations of privacy and security, as evidenced by the resulting protests over valid use cases for anonymity and pseudonymity. For something that was accepted largely without question with Facebook, the nymwars appear to have caused irreparable harm to Google+ in the critically important innovator and early adopter segments, for reasons that are not entirely clear to me. I presume that there is a greater expectation of privacy for Google (to whom people entrust private emails, documents, etc.) than for Facebook (which people use specifically and solely for controlled sharing).

Adopter categories

Finally, there are 5 classes of adopters (along the X axis) varying over time as the innovation attains deeper penetration:

  1. Innovators (the first 2.5%) are generally young, social, wealthy, risk tolerant individuals who adopt first.
  2. Early Adopters (the next 13.5%) are opinion leaders who adopt early enough (but not too early) to maintain a central communication position.
  3. Early Majority (the next 34%, to 50% of the population) take significantly longer to adopt innovations.
  4. Late Majority (the next 34%) adopt innovations after the average member of society and tend to be highly sceptical.
  5. Laggards (the last 16%) show little to no opinion leadership and tend to be older, more reclusive and have an aversion to change-agents.

I’ve ruled out wealth because while buying an iPhone is expensive (and thus a barrier to entry), signing up for a social network is free.

The peak of the bell curve is the point at which the average user (i.e. 50% of the market) has adopted the technology, and it is very difficult both to climb the curve as a new technology and to displace an existing technology that is over the hump.
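Those adopter-category percentages are simply bands of a normal distribution (the mean plus or minus one and two standard deviations); a quick check with nothing but the standard library reproduces Rogers’ rounded figures:

    from math import erf, sqrt

    def Phi(z):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1 + erf(z / sqrt(2)))

    bands = {
        "Innovators":     Phi(-2),            # more than 2 sd ahead of the mean, ~2.5%
        "Early Adopters": Phi(-1) - Phi(-2),  # between 1 and 2 sd ahead, ~13.5%
        "Early Majority": Phi(0)  - Phi(-1),  # within 1 sd ahead of the mean, ~34%
        "Late Majority":  Phi(1)  - Phi(0),   # within 1 sd behind the mean, ~34%
        "Laggards":       1 - Phi(1),         # more than 1 sd behind, ~16%
    }
    for name, share in bands.items():
        print(f"{name}: {share:.1%}")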

The Chasm

The chasm, which exists between Early Adopters and Early Majority (i.e. at 16% penetration), refers to Moore’s argument in Crossing the Chasm that there is a gap between early adopters and the mass market which must be crossed by any innovation that is to be successful. Furthermore, thanks to accelerating technological change, they must do so within an increasingly limited time for fear of being equalled by an incumbent or disrupted by another innovation. The needs of the mass market differ — often wildly — from the needs of early adopters, and innovations typically need to adapt quickly to make the transition. I would argue that MySpace, having achieved ~75 million users at its peak, failed to cross the chasm by finding appeal in the mass market (ironically due in no small part to its unfettered flexibility in customising profiles) and was disrupted by Facebook. Twitter on the other hand (with some 200 million active users) has crossed the chasm, as evidenced by the presence of mainstream icons like Bieber, Spears and Obama, as well as their fans. LinkedIn (for reasons explained above) belongs at the top right rather than the bottom left.

Disruptive innovations

The big question today is whether Google+ can cross the chasm too and give Facebook a run for its money. Facebook, having achieved “new-market disruption” with almost a decade’s head start in refining the service with a largely captive audience, now exhibits extremely strong network effects. It would almost certainly take another disruptive innovation to displace them (that is, according to Clayton Christensen, one that develops in an emerging market and creates a new market and value network before going on to disrupt existing markets and value networks), in the same way that Google disrupted the existing search market a decade ago.

In observing that creating a link to a site is essentially a vote for that site (“PageRank”), Google implemented a higher quality search engine that was more efficient, more scalable and less susceptible to spam. In the beginning Google (née Backrub) was nothing special and the incumbents (remember AltaVista?) were continuously evolving — they had little to fear from Google and Google had little to fear from them, as it simply wasn’t worth their while chasing after potentially disruptive innovations like Backrub. They were so uninterested, in fact, that Yahoo! missed an opportunity to acquire Google for $3bn in the early days. Like most disruptive technologies, PageRank was technologically straightforward and far simpler than trying to determine relevance from the content itself. It was also built on a revolutionary hardware and software platform that scaled out rather than up, distributing work between many commodity PCs, thus reducing costs and causing “low-end disruption”. Its initial applications were trivial, but it quickly outpaced the sustaining innovation of the incumbents and took the lead, which it has held ever since.
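The “a link is a vote” idea is simple enough to sketch in a few lines of power iteration — a toy version for illustration, not Google’s production algorithm — and the same scheme works if the nodes are people rather than pages:

    def pagerank(links, damping=0.85, iterations=50):
        """Toy PageRank: a page's score is spread evenly across the pages it links to."""
        pages = list(links)
        rank = {p: 1 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                if not outlinks:                       # dangling page: share its score evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / len(pages)
                else:
                    for target in outlinks:
                        new_rank[target] += damping * rank[page] / len(outlinks)
            rank = new_rank
        return rank

    # Hypothetical mini-web: several pages "vote" for page a by linking to it.
    print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a", "b"]}))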

Today Facebook is looking increasingly disruptive too, only in their world it’s no longer about links between pages, but links between people (which are arguably far more valuable). Last year while working at Google I actively advocated the development of a “PageRank for people” (which I referred to as “PeopleRank” or “SocialRank”), whereby a connection to a person was effectively a vote for that person and the weight of that vote would depend on the person’s influence in the community, in the same way that a link from microsoft.com is worth more than one from viagra.tld (which could actually have negative value in the same way that hanging out with the wrong crowd negatively affects reputation). I’d previously built what I’d call a “social metanetwork” named “meshed” (which never saw the light of day due to cloud-related commitments) and the idea stemmed from that, but I was busy running tape backups for Google, not building social networks on the Emerald Sea team.

With the wealth of information Google has at its fingertips — including what amounts to a pen trace of users’ e-mail and (courtesy of Android and Google Voice) phone calls and text messages — it should have been possible for them to completely automate the process of circle creation, in the same way that LinkedIn Maps can identify clusters of contacts. But they didn’t (perhaps because they got it badly wrong with Buzz), and they’re now on the sustaining innovation treadmill, with otherwise revolutionary differentiating features being quickly co-opted by Facebook (circles vs lists, hangouts vs Skype, etc.).
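Automating circle creation is essentially community detection on the contact graph. Here’s a crude label-propagation sketch to illustrate the idea — purely hypothetical, and certainly not what LinkedIn Maps or Google actually do:

    import random

    def detect_circles(contacts, rounds=20, seed=42):
        """Crude label propagation: each contact repeatedly adopts the most common
        label among the contacts they're connected to, so dense clusters converge."""
        rng = random.Random(seed)
        labels = {person: person for person in contacts}     # everyone starts in their own circle
        people = list(contacts)
        for _ in range(rounds):
            rng.shuffle(people)
            for person in people:
                neighbours = contacts[person]
                if not neighbours:
                    continue
                counts = {}
                for n in neighbours:
                    counts[labels[n]] = counts.get(labels[n], 0) + 1
                labels[person] = max(counts, key=counts.get)  # adopt the majority label
        return labels

    # Hypothetical contact graph with two obvious clusters (work vs. family).
    graph = {"ann": ["bob", "cat"], "bob": ["ann", "cat"], "cat": ["ann", "bob"],
             "mum": ["dad", "sis"], "dad": ["mum", "sis"], "sis": ["mum", "dad"]}
    print(detect_circles(graph))   # members of each triangle end up sharing a label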

Another factor to consider is that Google have a massive base of existing users in a number of markets that they can push Google+ to, and they’re not afraid to do so (as evidenced by its appearance in other products and services including Android, AdWords, Blogger, Chrome, Picasa, Maps, News, Reader, Talk and YouTube, and of course the ubiquitous sandbar and gratuitous blue arrow which appeared on Google Search). This strategy is not without risk though, as if successful it will almost certainly attract further antitrust scrutiny, in the same way that Microsoft found itself in hot water for what was essentially putting an IE icon on the desktop. Indeed I had advocated the deployment of Google+ as a “social layer” rather than an isolated product (à la the defunct Google Buzz), but stopped short of promoting an integrated product to rival Facebook — if only to maintain a separation of duties between content production/hosting and discovery.

The Solution

While I’m happy to see some healthy competition in the space, I’d rather not see any of the social networks “win” — if any one of them were able to cement a monopoly then we users would ultimately suffer. At the end of the day we need to remember that for any commercial social network we’re not the customer, we’re the product being sold.

As such, I strongly advocate the adoption of open standards for social networking, whereby users select a service or host a product that is most suitable for their specific needs (e.g. personal, professional, branding, etc) which is interoperable with other, similar products.

What we’re seeing today is similar to the early days of Internet email, where the Simple Mail Transfer Protocol (SMTP) broke down the barriers between different silos — what we need is an SMTP for social networking.
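To make the analogy concrete, here’s what a minimal federated “social envelope” might look like — every field, address and function here is invented for illustration; it’s not an existing standard:

    # A purely hypothetical "social envelope", analogous to an SMTP message:
    # the user's home service relays it to the recipient's service, whichever that is.
    post = {
        "from": "acct:sam@socialnetwork-a.example",
        "to": ["acct:alice@socialnetwork-b.example"],   # recipients may be on other networks
        "visibility": "followers",
        "body": "Interoperability means I pick the service, not the silo.",
    }

    def relay(envelope):
        """Sketch of SMTP-style store-and-forward between social networks."""
        for recipient in envelope["to"]:
            domain = recipient.split("@", 1)[1]
            print(f"would deliver to the server responsible for {domain}")

    relay(post)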

Sources:

  • Facebook: 800 million users (active) [source]
  • Twitter: 200 million users (active) [source]
  • LinkedIn: 135 million users (total) [source]
  • MySpace: 75.9 million users (peak) [source]
  • Google+: 40 million users (total) [source]

RIP Adobe Flash (1996-2011) – now let’s bury the dead

Adobe kills mobile Flash, giving Steve Jobs the last laugh, reports The Guardian’s Charles Arthur following the late Steve Jobs’ epic Thoughts on Flash rant 18 months ago. It’s been about 2.5 years since I too got sick of Flash bringing my powerful Mac to its knees, so I went after the underlying lie that perpetuates the problem, explaining why Adobe Flash penetration is more like 50% than 99%. I even made progress Towards a Flash free YouTube killer, only it ended up being YouTube themselves who eventually started testing a YouTube HTML5 Video Player (while you’re there please do your bit for the open web by clicking “Join the HTML5 Trial” at the bottom of that page).

“I heard a sound as though a million restaurant websites cried out at once” — Charles Arthur

You see, armed with this heavily manipulated statistic, armies of developers are to this day fraudulently duping their paying clients into deploying a platform that will invariably turn away a percentage of their business at the door, in favour of annoying flaming logos and other atrocities that blight the web.

How much business can you tolerate losing? If you’ve got 95% penetration then you’re turning away 1 in 20 customers. At 90% you’re turning away 1 in 10. At 50% half of your customers won’t even get to see your product. I don’t know too many businesses who can afford to turn away any customers in this economic climate.

In my opinion the only place Flash technology has in today’s cloud computing environment is as a component of the AIR runtime for building (sub-par) cross-platform applications, and even then I’d argue that developers should be using HTML5. As an Adobe Creative Suite Master Collection customer I’m very happy to see Adobe dropping support for this legacy technology to focus on generating interactive HTML5 applications, and I look forward to a similar announcement for desktop versions of the Flash player in the not too distant future.

In any case, with the overwhelming majority of devices being mobile today, and with more and more of them including browser functionality, the days of Flash were numbered even before Adobe put the mobile version out of its misery. Let’s not drag this out any longer than we have to — bury the dead by uninstalling Flash Player. Here are instructions for Mac OS X and Windows, and if you’re not ready to take the plunge into an open-standards-based HTML5 future then at least install FlashBlock for Chrome or Firefox (surely you’re not still using IE?).

Update: Flash for TV is dead too, as if killing off mobile wasn’t enough: Adobe Scrapping Flash for TV, Too‎

Update: Rich Internet Application (RIA) architectures in general are in a lot of trouble — Microsoft are killing off Silverlight as well: Mm, Silverlight, what’s that smell? Yes, it’s death

Update: In a surprising move that will no doubt be reversed, RIM announced it would continue developing Flash on the PlayBook (despite almost certainly lacking the ability to do so): RIM vows to keep developing Flash for BlackBerry PlayBook – no joke

VDI: Virtually Dead Idea?

I’ve been meaning to give my blog some attention (it’s been almost a year since my last post, and a busy one at that) and Simon Crosby’s (@simoncrosby) VDwhy? post seems as good a place to start as any. Simon and I are both former Citrix employees (“Citrites”) and we’re both interested in similar topics — virtualisation, security and cloud computing to name a few. It’s no surprise then that I agree with his sentiments about Virtual Desktop Infrastructure (VDI), and I must admit to being perplexed as to why it gets so much attention, generally without question.

History
Windows NT (“New Technology”), the basis for all modern Microsoft desktop operating systems, was released in 1993, and shortly afterwards Citrix (having access to the source code) added the capability to support multiple concurrent graphical user sessions. Windows NT’s underlying architecture allowed access control lists to be applied to every object, which made it far easier for this to be done securely than would have been possible on earlier versions of Windows. Citrix also added their own proprietary ICA (“Independent Computing Architecture”) network protocol so that these additional sessions could be accessed remotely, over the network, from various clients (Windows, Linux, Mac and now devices like iPads, although the user experience is, as Simon pointed out, subpar). This product was known as Citrix WinFrame and was effectively a fork of Windows NT 3.51 (I admit to having been an NT/WinFrame admin in a past life, but mostly focused on Unix/Linux integration). It is arguably what put Citrix (now a $2bn revenue company) on the map, and it still exists today as XenApp.

Terminal Services
It turns out this was a pretty good idea. So good, in fact, that (according to Wikipedia) “Microsoft required Citrix to license their MultiWin technology to Microsoft in order to be allowed to continue offering their own terminal services product, then named Citrix MetaFrame, atop Windows NT 4.0”. Microsoft introduced their own Remote Desktop Protocol (RDP) and, armed with only a Windows NT 4.0 Terminal Server Edition beta CD, Matthew Chapman (who went to the same college, university and workplace as me and is to this day one of the smartest guys I’ve ever met) cranked out rdesktop — if I remember correctly, over the course of a weekend. I was convinced that this was the end of Citrix, so imagine my surprise when I ended up working for them, on the other side of the world (Dublin, Ireland), almost a decade later!

VDI
About the time I left Citrix for a startup opportunity in Paris, France (2006) we were tinkering with a standalone ICA listener that could be deployed on a desktop operating system (bearing in mind that by now even Windows XP included Terminal Services and an RDP listener). I believe there was also a project working on the supporting infrastructure for cranking up and tearing down single-user virtual machines (rather than multiple Terminal Services sessions based on a single Windows Server, as was the status quo at the time), but I didn’t get the point and never bothered to play with it.

Even then I was curious as to what the perceived advantage was — having spent years hardening desktop and server operating systems at the University of New South Wales to “student proof” them I considered it far easier to have one machine servicing many users than many machines servicing many users. Actually there’s still one machine, only the virtualisation layer has been moved from between the operating system and user interface — where it arguably belongs — to between the bare metal and the operating system. As such it was now going to be necessary to run multiple kernels and multiple operating systems (with all the requisite configurations, patches, applications, etc.)!

Meanwhile there was work being done on “application virtualisation” (Project Tarpon), whereby applications are sandboxed by intercepting Windows’ I/O Request Packets (IRPs) and rewriting them as required. While this was a bit of a hack (Windows doesn’t require developers to follow the rules, so they don’t, and they write whatever they want pretty much anywhere), it was arguably a step in the right — rather than wrong — direction.

Multitenancy
At the end of the day the issue is simply that it’s better to share infrastructure (and costs) between multiple users. In this case, why would I want to have one kernel and operating system dedicated to a single user (and exacting a toll in computing and human resources) when I can have one dedicated to many? In fact, why would I want to have an operating system at all, given it’s now essentially just a life support system for the browser? The only time I ever interact with the operating system is when something goes wrong and I have to fix it (e.g. install/remove software, modify configurations, move files, etc.), so I’d much rather have just enough operating system than one for every user and then a bunch more on servers to support them!

This is essentially what Google Chrome OS (one of the first client-side cloud operating environments) does, and I can’t help but wonder whether the chromoting feature isn’t going to play a role in this market (actually I doubt it, but it’s early days).
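A back-of-the-envelope comparison shows the shape of the argument (all figures are hypothetical):

    # Hypothetical figures purely to illustrate the shape of the argument.
    users = 100
    os_footprint_gb = 1.5        # memory consumed by each kernel + OS services instance
    per_user_workload_gb = 0.5   # memory the user's actual applications need

    vdi = users * (os_footprint_gb + per_user_workload_gb)   # one OS per user
    shared = os_footprint_gb + users * per_user_workload_gb  # one OS, many sessions

    print(f"VDI: {vdi:.0f} GB of RAM, shared sessions: {shared:.0f} GB of RAM")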

The RightWay™
Five years ago (as I had one foot out the door of Citrix with my eye on a startup opportunity in Paris) I met with product strategist Will Harwood at the UK office and explained my vision for the future of Citrix products. I’d been working on the Netscaler acquisition (among others) and had a pretty good feeling for the direction things were going — I’d even virtualised the various appliances on top of Xen to deliver a common appliance platform long before it was acquired (and was happy to be there to see Citrix CEO Mark Templeton announce this product as Netscaler SDX at Interop).

It went something like this: the MultiWin → WinFrame → MetaFrame → Presentation Server → XenApp line is a mature, best-of-breed product that had (and probably still has) some serious limitations. Initially the network-based ICA Browser service was noisy, flaky and didn’t scale, so Independent Management Architecture (IMA) was introduced — a combination of a relational data store (SQL Server or Oracle) and a mongrel “IMA” protocol over which the various servers in a farm could communicate about applications, sessions, permissions, etc. Needless to say, centralised relational databases have since gone out of style in favour of distributed “NoSQL” databases, but more to the point — why were the servers trying to coordinate between themselves when the Netscaler was designed from the ground up to load balance network services?

My proposal was simply to take the standalone ICA browser and apply it to multi-user server operating systems rather than single-user client operating systems, ditching IMA altogether and delegating the tasks of (global) load balancing, session management, SSL termination, etc. to the Netscaler. This would be better, faster and cheaper than the existing legacy architecture; it would be more reliable, in that failures would be tolerated; and best of all, it would scale out rather than up. While the Netscaler has been used for some of these tasks (e.g. SSL termination), I’m surprised we haven’t seen anything like this (yet)… or have we?
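To illustrate the kind of decision being delegated — not how a Netscaler is actually configured, and with all host names invented — here’s a toy least-connections broker for a pool of session hosts:

    class SessionBroker:
        """Toy least-connections broker for a pool of multi-user session hosts."""
        def __init__(self, hosts):
            self.sessions = {host: 0 for host in hosts}   # active session count per host
            self.healthy = set(hosts)

        def mark_down(self, host):
            self.healthy.discard(host)                    # failed hosts are simply routed around

        def connect(self, user):
            host = min(self.healthy, key=lambda h: self.sessions[h])
            self.sessions[host] += 1
            return f"{user} -> {host}"

    broker = SessionBroker(["xenapp-01", "xenapp-02", "xenapp-03"])
    broker.mark_down("xenapp-02")
    for user in ["amy", "ben", "cho", "dev"]:
        print(broker.connect(user))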

Caveat
I can think of at least one application where VDI does make sense — public multi-tenant services (like Desktone) where each user needs a high level of isolation and customisation.

For everyone else I’d suggest taking a long, hard look at the pros and cons because any attempt to deviate from the status quo should be very well justified. I use a MacBook Air and have absolutely no need nor desire to connect to my desktop from any other device, but if I did I’d opt for shared infrastructure (Terminal Services/XenApp) and for individual “seamless” applications rather than another full desktop. If I were still administering and securing systems I’d just create a single image and deploy it over the network using PXE — I’d have to do this for the hypervisor anyway so there’s little advantage in adding yet another layer of complexity and taking the hit (and cost) of virtualisation overhead. Any operating system worth its salt includes whole disk encryption so the security argument is largely invalidated too.

I can think of few things worse than having to work on remote applications all day, unless the datacenter is very close to me (due to the physical constraints of the speed of light and the interactive/real-time nature of remote desktop sessions) and the network performance is strictly controlled/guaranteed. We go to great lengths to design deployments that are globally distributed with an appropriate level of redundancy, while being close enough to the end users to deliver the strict SLAs demanded by interactive applications — if you’re not going to bother to do it properly then you might not want to do it at all.