
VDI: Virtually Dead Idea?

This content is 12 years old and may not reflect reality today nor the author’s current opinion. Please keep its age in mind as you read it.

I’ve been meaning to give my blog some attention (it’s been almost a year since my last post, and a busy one at that) and Simon Crosby’s (@simoncrosby) VDwhy? post seems as good a place to start as any. Simon and I are both former Citrix employees (“Citrites”) and we’re both interested in similar topics — virtualisation, security and cloud computing to name a few. It’s no surprise then that I agree with his sentiments about Virtual Desktop Infrastructure (VDI), and I must admit to being perplexed as to why it gets so much attention, generally without question.

History
Windows NT (“New Technology”), the basis for all modern Microsoft desktop operating systems, was released in 1993, and shortly afterwards Citrix (having access to the source code) added the capability to support multiple concurrent graphical user sessions. Windows NT’s underlying architecture allowed for access control lists to be applied to every object, which made it far easier for this to be done securely than would have been possible on earlier versions. Citrix also added their own proprietary ICA (“Independent Computing Architecture”) network protocol so that these additional sessions could be accessed remotely, over the network, from various clients (Windows, Linux, Mac and now devices like iPads, although the user experience is, as Simon pointed out, subpar). This product was known as Citrix WinFrame and was effectively a fork of Windows NT 3.51 (I admit to having been an NT/WinFrame admin in a past life, but mostly focused on Unix/Linux integration). It is arguably what put Citrix (now a $2bn revenue company) on the map, and it still exists today as XenApp.

Terminal Services
It turns out this was a pretty good idea. So good, in fact, that (according to Wikipedia) “Microsoft required Citrix to license their MultiWin technology to Microsoft in order to be allowed to continue offering their own terminal services product, then named Citrix MetaFrame, atop Windows NT 4.0”. Microsoft introduced their own “Remote Desktop Protocol” (RDP), and armed with only a Windows NT 4.0 Terminal Server Edition beta CD, Matthew Chapman (who went to the same college, university and workplace as me and is to this day one of the smartest guys I’ve ever met) cranked out rdesktop, if I remember correctly over the course of a weekend. I was convinced that this was the end of Citrix, so imagine my surprise when I ended up working for them, on the other side of the world (Dublin, Ireland), almost a decade later!

VDI
About the time I left Citrix for a startup opportunity in Paris, France (2006) we were tinkering with a standalone ICA listener that could be deployed on a desktop operating system (bearing in mind that by now even Windows XP included Terminal Services and an RDP listener). I believe there was also a project working on the supporting infrastructure for cranking up and tearing down single-user virtual machines (rather than multiple Terminal Services sessions based on a single Windows Server, as was the status quo at the time), but I didn’t get the point and never bothered to play with it.

Even then I was curious as to what the perceived advantage was — having spent years hardening desktop and server operating systems at the University of New South Wales to “student proof” them, I considered it far easier to have one machine servicing many users than many machines servicing many users. Actually there’s still just one physical machine; only the virtualisation layer has moved from between the operating system and the user interface — where it arguably belongs — to between the bare metal and the operating system. As such it now becomes necessary to run multiple kernels and multiple operating systems (with all the requisite configurations, patches, applications, etc.)!

Meanwhile there was work being done on “application virtualisation” (Project Tarpon), whereby applications are sandboxed by intercepting Windows’ I/O Request Packets (IRPs) and rewriting them as required. While this was a bit of a hack (Windows doesn’t require developers to follow the rules, so they don’t, and write whatever they want pretty much anywhere), it was arguably a step in the right — rather than wrong — direction.
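To illustrate the idea (and only the idea; this is not how Tarpon was implemented, and a real product does this in a kernel-mode filter driver rather than in user-mode Python), the sketch below rewrites file accesses aimed at shared system locations into a per-application sandbox directory. The paths, sandbox layout and hook function are assumptions made up for the example.

```python
# Illustrative sketch only: user-mode path redirection approximating what an
# IRP-rewriting filter does in kernel mode. All paths below are assumptions.
import os

SANDBOX_ROOT = r"C:\Sandbox\MyApp"   # hypothetical per-application sandbox
REDIRECTED_PREFIXES = (
    r"C:\Program Files",
    r"C:\Windows",
    r"C:\ProgramData",
)

def redirect(path: str) -> str:
    """Rewrite accesses aimed at shared system locations into the sandbox."""
    for prefix in REDIRECTED_PREFIXES:
        if path.lower().startswith(prefix.lower()):
            relative = path[len(prefix):].lstrip("\\")
            return os.path.join(SANDBOX_ROOT, relative)
    return path  # everything else passes through untouched

def sandboxed_open(path: str, mode: str = "r"):
    """Stand-in for the hook an app-virtualisation layer places on file I/O."""
    return open(redirect(path), mode)

# The application thinks it is writing to Program Files; it actually touches
# a file under C:\Sandbox\MyApp and leaves the real system directories alone.
print(redirect(r"C:\Program Files\MyApp\settings.ini"))
```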

Multitenancy
At the end of the day the issue is simply that it’s better to share infrastructure (and therefore costs) between multiple users. In this case, why would I want to have one kernel and operating system dedicated to a single user (and exacting a toll in computing and human resources) when I can have one dedicated to many? In fact, why would I want to have an operating system at all, given it’s now essentially just a life support system for the browser? The only time I ever interact with the operating system is when something goes wrong and I have to fix it (e.g. install/remove software, modify configurations, move files, etc.), so I’d much rather have just enough operating system than one for every user plus a bunch more on servers to support them! This is essentially what Google Chrome OS (one of the first client-side cloud operating environments) does, and I can’t help but wonder whether the chromoting feature isn’t going to play a role in this market (actually I doubt it, but it’s early days).
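A back-of-envelope comparison makes the sharing argument concrete. Every figure below is an assumption picked purely for illustration, not a measurement from any real deployment:

```python
# Back-of-envelope sketch: memory cost of one OS instance per user (VDI) versus
# one shared multi-user instance (Terminal Services/XenApp). Every number here
# is an assumption chosen for illustration only.
users = 100
os_overhead_gb = 1.5      # assumed idle footprint of a single desktop OS instance
per_user_apps_gb = 0.5    # assumed working set of one user's applications

vdi_total = users * (os_overhead_gb + per_user_apps_gb)
shared_total = os_overhead_gb + users * per_user_apps_gb

print(f"VDI (one OS per user):       {vdi_total:.0f} GB")    # 200 GB
print(f"Shared (one OS, many users): {shared_total:.1f} GB")  # 51.5 GB
# Under these assumptions roughly 150 GB goes to duplicated operating systems
# before a single user application is even launched.
```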

The RightWay™
Five years ago (as I had one foot out the door of Citrix with my eye on a startup opportunity in Paris) I met with product strategist Will Harwood at the UK office and explained my vision for the future of Citrix products. I’d been working on the Netscaler acquisition (among others) and had a pretty good feeling for the direction things were going — I’d even virtualised the various appliances on top of Xen to deliver a common appliance platform long before it was acquired (and was happy to be there to see Citrix CEO Mark Templeton announce this product as Netscaler SDX at Interop).

It went something like this: the MultiWin → WinFrame → MetaFrame → Presentation Server → XenApp line is a mature, best-of-breed product that had (and probably still has) some serious limitations. Initially the network-based ICA Browser service was noisy, flaky and didn’t scale, so Independent Management Architecture (IMA) was introduced — a combination of a relational data store (SQL Server or Oracle) and a mongrel “IMA” protocol over which the various servers in a farm could communicate about applications, sessions, permissions, etc. Needless to say, centralised relational databases have since gone out of style in favour of distributed “NoSQL” databases, but more to the point — why were the servers trying to coordinate between themselves when the Netscaler was designed from the ground up to load balance network services?

My proposal was simply to take the standalone ICA listener and apply it to multi-user server operating systems rather than single-user client operating systems, ditching IMA altogether and delegating the task of (global) load balancing, session management, SSL termination, etc. to the Netscaler. This would be better/faster/cheaper than the existing legacy architecture; it would be more reliable in that failures would be tolerated; and best of all, it would scale out rather than up. While the Netscaler has been used for some tasks (e.g. SSL termination), I’m surprised we haven’t seen anything like this (yet)… or have we?
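Sketched very roughly in code (this is just my illustration of the idea, not a description of any shipped Citrix or NetScaler feature), the brokering problem reduces to something a network load balancer already does well: health-check the listeners and send each new session to the least-loaded one. The hostnames, port, health check and least-connections policy below are all assumptions made for the example.

```python
# Minimal sketch: choose a healthy ICA listener with the fewest active sessions,
# the way a least-connections load balancer would, instead of having the servers
# coordinate state amongst themselves over IMA. Hosts and policy are assumptions.
import socket
from dataclasses import dataclass

@dataclass
class Listener:
    host: str
    port: int = 1494          # the well-known ICA port
    active_sessions: int = 0

def is_healthy(listener: Listener, timeout: float = 1.0) -> bool:
    """Simple TCP health check: can we open a connection to the listener?"""
    try:
        with socket.create_connection((listener.host, listener.port), timeout):
            return True
    except OSError:
        return False

def pick_listener(farm: list) -> Listener:
    """Least-connections selection over the healthy members of the farm."""
    healthy = [l for l in farm if is_healthy(l)]
    if not healthy:
        raise RuntimeError("no healthy ICA listeners available")
    return min(healthy, key=lambda l: l.active_sessions)

if __name__ == "__main__":
    # Placeholder hostnames: with no real terminal servers behind them this will
    # simply report that nothing is healthy, which is the expected outcome here.
    farm = [Listener("ts1.example.com"), Listener("ts2.example.com")]
    try:
        target = pick_listener(farm)
        target.active_sessions += 1
        print(f"route new session to {target.host}:{target.port}")
    except RuntimeError as exc:
        print(exc)
```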

Caveat
I can think of at least one application where VDI does make sense — public multi-tenant services (like Desktone) where each user needs a high level of isolation and customisation.

For everyone else I’d suggest taking a long, hard look at the pros and cons, because any attempt to deviate from the status quo should be very well justified. I use a MacBook Air and have absolutely no need or desire to connect to my desktop from any other device, but if I did I’d opt for shared infrastructure (Terminal Services/XenApp) and for individual “seamless” applications rather than another full desktop. If I were still administering and securing systems I’d just create a single image and deploy it over the network using PXE — I’d have to do this for the hypervisor anyway, so there’s little advantage in adding yet another layer of complexity and taking the hit (and cost) of virtualisation overhead. Any operating system worth its salt includes whole disk encryption, so the security argument is largely invalidated too.

I can think of few things worse than having to work on remote applications all day, unless the datacenter is very close to me (due to the physical constraints of the speed of light and the interactive/real-time nature of remote desktop sessions) and the network performance is strictly controlled/guaranteed. We go to great lengths to design deployments that are globally distributed with an appropriate level of redundancy, while being close enough to the end users to deliver the strict SLAs demanded by interactive applications — if you’re not going to bother to do it properly then you might not want to do it at all.