How Open Cloud could have saved Sidekick users’ skins

The cloud computing scandal of the week is shaping up to be the catastrophic loss of millions of Sidekick users’ data. It is an unfortunate and completely avoidable event that Microsoft’s Danger subsidiary and T-Mobile (along with the rest of the cloud computing community) will very soon come to regret.

There are plenty of theories as to what went wrong – the most credible being that a SAN upgrade was botched, possibly by a large outsourcing contractor, and that no backups were taken despite space being available (though presumably not on the same SAN!). Note that while most cloud services exceed the capacity/cost ceiling of SANs and therefore employ cheaper horizontal scaling options (like the Google File System), this is, or should I say was, a relatively small amount of data. As such there is no excuse whatsoever for not having reliable, off-line backups – particularly given Danger is owned by Microsoft (which even I previously considered one of the “big 4” cloud companies). It was a paid-for service too (~$20/month, or $240/year?), which makes even the most expensive cloud offerings like Apple’s MobileMe look like a bargain (though if it’s any consolation, the fact that the service was paid for rather than free may well come back to bite them by way of the inevitable class action lawsuits).
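To put that in perspective, even a crude nightly dump shipped somewhere off the SAN would have covered the worst of this scenario. A minimal sketch of the idea (the paths and hosts are hypothetical, not Danger’s actual environment):

    # Hypothetical nightly off-line backup: archive the user data store and
    # push it to a machine that is not on the same SAN, let alone in the
    # same data centre.
    STAMP=$(date +%Y%m%d)
    tar cf - /srv/userdata | gzip > /backup/userdata-$STAMP.tar.gz
    scp /backup/userdata-$STAMP.tar.gz backup@offsite.example.com:/archive/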

“Real” cloud storage systems transparently ensure that multiple copies of data are automatically maintained on different nodes, at least one of which is ideally geographically independent. That is to say, the fact that the term “SAN” appears in the conversation at all suggests this was a legacy architecture far more likely to fail; in the same way that today’s aircraft are far safer than yesterday’s and today’s electricity grids far more reliable than earlier ones (Sidekick apparently predates Android and the iPhone by some years, after all). It’s hard to say with any real authority what is and what is not cloud computing, beyond saying that “I know it when I see it, and this ain’t it”.

Whatever the root cause, the result is the same – users who were given no choice but to store their contacts, calendars and other essential day-to-day data on Microsoft’s servers appear to have lost it irretrievably. Friends, family, acquaintances and loved ones – even (especially?) the boy/girl you met at the bar last night – may be gone for good. People will miss appointments, lose business deals and in the worst cases could face real hardship as a result (for example, I’m guessing parole officers don’t take kindly to missed appointments with no contact!). The cost of this failure will (at least initially) be borne by the users, and yet there was nothing they could have done to prevent it short of choosing another service or manually transcribing their details.

The last hope for them is that Microsoft can somehow reverse the caching process in order to remotely retrieve copies from the devices (which are effectively dumb terminals) before they lose power; good luck with that. While synchronisation is hard to get right, having a single cloud-based “master” and a local cache on the device (as opposed to a full, first-class citizen copy) is a poor design decision. I have an iPhone (actually I have a 1G, 3G, 3GS and an iPod Touch) and they’re all synchronised together via two MacBooks, and in turn to both a Time Machine backup and Mozy online backup. As if that’s not enough, all my contacts are in sync with Google Apps’ Gmail over the air too, so I can take your number and then pretty much immediately drop my phone in a beer without concern for data loss. Even this proprietary system protects me from such failures.

The moral of the story is that externalised risk is a real problem for cloud computing. Most providers [try to] avoid responsibility by way of terms of service that strip away users’ rights, but it’s a difficult problem to solve because enforcing liability for anything but gross negligence can exclude smaller players from the market. That is why users absolutely must have control over their data and be encouraged, if not forced, to take responsibility for it.

Open Cloud simply requires open formats and open APIs – that is to say, users must have access to their data in a transparent format. Even if it doesn’t make sense to maintain a local copy on the users’ computer, there’s nothing stopping providers from pushing it to a third party storage service like Amazon S3. In fact it makes a lot of sense for applications to be separated from storage entirely. We don’t expect our operating system to provide all the functionality we’ll ever need (or indeed, any of it) so we install third party applications which use the operating system to store data. What’s to stop us doing the same in the cloud, for example having Google Apps and Zoho both saving back to a common Amazon S3 store which is in turn replicated locally or to another cloud-based service like Rackspace Cloud Files?
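Even without provider support, here is a rough sketch of what taking that responsibility could look like from the user’s side (the export URL and bucket name are hypothetical, and s3cmd is just one convenient tool for the job):

    # Pull an export of my data from the application, keep a copy in a bucket
    # that I control, then mirror the bucket down to local disk as well.
    curl -o contacts.csv https://example-app.com/export/contacts.csv
    s3cmd put contacts.csv s3://my-own-backups/contacts.csv
    s3cmd sync s3://my-own-backups/ ~/cloud-backups/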

In any case perhaps it’s time for us to dust off and revisit the Cloud Computing Bill of Rights?

Windows 7: Windows Vista Lite?

There’s no denying that Vista was a failure. A complete and utter disappointment. An unmitigated disaster. Microsoft have essentially admitted it themselves, finally accepting what users, reviewers and wary businesses have been saying since before it even hit the shelves. It just didn’t bring enough benefit for its significant cost (early estimates were talking about $5k per seat to upgrade by the time you deliver new hardware, support it and train users), users hated it and some have even called it the most serious technical misstep in computing history. The fluff (transparent windows et al) exacted a heavy toll on the hardware, and the delicate minimum requirements ‘balance’ was way off – set it too high and nobody can afford your software; too low and those who do buy it complain about inadequate performance. Plus the long overdue security system was invasive and yet still largely ineffective.

The reality is that while XP has been ‘good enough’ for most users, Google and friends have been quietly shifting the playing field from the corpse-littered battlefields of operating systems and file formats to (now) mostly standardised browsers. It simply doesn’t matter now what your operating system is, and between Firefox’s rise to fame and so many heterogeneous mobile devices converging on the Internet it’s long since been impossible for webmasters to deny admittance to non-IE (and therefore non-Windows) clients.

In arriving at this point Free & Open Source Software (FOSS) has proven itself a truly disruptive force. Without it there would be no Google and no Amazon Web Services (and quite possibly no Amazon!). While Linux on the desktop may be a pipe dream, it’s carved a large slice out of the server market (powering the vast majority of cloud computing infrastructure) and its adoption is steadily rising on connected devices from mobiles and netbooks to television sets. There are multiple open source browsers, multiple open source scripting engines (to power web based applications), a new breed of client architecture emerging (thanks in no small part to Google Chrome) and even Microsoft are now talking about unleashing IE on the open source community (for better or worse).

So how did we get to Windows 7 (and back onto a sensible version numbering scheme) anyway? Here’s a look from an architecture point of view:

  • Windows 1/2: Rudimentary text-based environment, didn’t introduce mouse/arrow keys until 2.x. Something like XTree Gold (which was my preferred environment at the time).
  • Windows 3: A revolutionary step and the first version of Windows that didn’t suck and that most people are likely to remember.
  • Windows 95/98/ME: Evolution of 3.x and the first real mainstream version of Windows.
  • Windows NT 3.5x/4.0: Another revolutionary step with the introduction of the vastly superior NT (‘New Technology’) kernel.
  • Windows 2000/XP: Refinement of NT and the result of recombining separate development streams for business and home users.
  • Windows Vista: Bloat, bloat and more bloat. Available in at least half a dozen different (expensive and equally annoying) versions, but many (most?) of its sales were for downgrade rights to XP.
  • Windows 7: Tomorrow’s Windows. Vista revisited.

Before I explain why Windows 7 is to Vista what Windows Millennium Edition (WinMe) was to Windows 98 (and why that isn’t necessarily such a bad thing), let’s talk quickly about Microsoft’s MinWin project. Giving credit where credit is due, the NT kernel is really quite elegant and was far ahead of its time when unleashed on the world over a dozen years ago. It’s stable, extensible, performant and secure (when implemented properly). It’s also been steadily improved through the 3.51, 4.0, 2000, XP and Vista releases. It must be quite annoying for the bearded boffins to see their baby struggling under the load heaped on it by their fellow developers, and therein lies the problem.

That’s why the MinWin project (which seeks to deliver the minimum set of dependencies for a running system, albeit without even a graphics interface) is interesting both from a client, and especially from a cloud computing point of view. While MinWin weighs in at forty-something megabytes, Vista is well over a thousand (and usually a few gigabytes), but the point is that Microsoft now know how to be slim when they need to be.

Now that the market has spoken with its feet, Microsoft are paying attention, and Windows 7 lies somewhere on the Vista side of the MinWin-to-Vista bloat scale. The interface is a significant departure from Vista, borrowing much from other wildly successful operating systems like OS X, and like OS X it will be simpler, faster and easier to use. This is very similar to Windows ME’s notoriously unsuccessful bolting of the Windows 2000 interface onto Windows 98, only this time, rather than putting a silk shirt on a pig, we should end up with a product actually worth having. This is good news, especially for business users who by this time will have already been waiting too long to move on from XP.

Conversely, Azure (their forthcoming cloud computing OS) is on the MinWin side of the bloat scale. It is almost certainly heavily based on the Windows 2008 Server Core (which follows Novell’s example by evicting the unwanted GUI from the server), needing to do little more than migrate the management functions to a service oriented architecture. If (and only if) they get the management functions right then they will have a serious contender in the cloud computing space. That means sensible, scalable protocols which follow Amazon and Google’s examples (where machines are largely independent, talking to their peers for state information) rather than simply a layer on top of the existing APIs. Unfortunately Microsoft Online Services (MOS) currently feels more like the latter (even falling back to the old-school web management tools for some products), but with any luck this will improve with time.

Provided they find the right balance for both products, this is good for IT architects (like myself), good for Microsoft, and most importantly, good for users. Perhaps the delay was their strategy all along, and why not when you can extract another year or two of revenue from the golden goose of proprietary software? In any case we’re at the dawn of a new era, and it looks like Microsoft will be coming to the party after all.

Compiling bash-3.0 on Interix

So you’ve followed my instructions for updating config.guess for Interix 5.2 (the version shipping with Windows 2003 Server R2) and now you want to compile something. Interix ships with C Shell (csh) and Korn Shell (ksh) but lacks the Bourne Again Shell (bash) – the shell most Linux users will be familiar with – so why not start there? From Start->Programs->Subsystem for UNIX-based Applications start either ‘Korn Shell’ or ‘C Shell’. You’ll end up in ‘/dev/fs/C/Documents and Settings/’ (this is your home directory, ‘~’) and the root is ‘%SystemRoot%\SUA’. Download bash-3.0 and extract it somewhere sensible (like /usr/src). You’ll need to ‘gunzip bash-3.0.tar.gz’ first and then ‘tar xf bash-3.0.tar’, as Interix tar is not GNU tar and doesn’t understand the ‘z’ (gzip) and ‘j’ (bzip2) options. Change to the ‘bash-3.0’ directory and ‘./configure --prefix=/usr/local/bash-3.0’, then ‘make’ and ‘make install’. Now it’s just a case of creating a link to ‘%SystemRoot%\posix.exe /u /c /usr/local/bash-3.0/bin/bash -l’ in the start menu. When you click on this link you’ll end up with a window that looks and behaves like a command window, only with a red/yellow/blue logo.
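In condensed form the whole exercise looks something like this (a sketch assuming the bash-3.0 tarball has already been downloaded to /usr/src):

    cd /usr/src
    gunzip bash-3.0.tar.gz          # Interix tar is not GNU tar, so no 'z'/'j' options
    tar xf bash-3.0.tar
    cd bash-3.0
    ./configure --prefix=/usr/local/bash-3.0
    make
    make install
    # Start menu shortcut target (all on one line):
    #   %SystemRoot%\posix.exe /u /c /usr/local/bash-3.0/bin/bash -l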

You may get errors like ‘error retrieving current directory: getcwd: cannot access parent directories: Undefined error: 0’ – I suspect these are due to permissions problems, or issues with spaces in paths. I’d be interested if someone has a better explanation, especially if it came with a fix.

Compiling on Interix 5.2 (Windows 2003 Server R2 SUA)

The soon-to-be-released Windows 2003 Server R2 includes features that were previously shipped as Services for Unix (SFU) – perhaps the most interesting of which is Subsystem for UNIX-based Applications (SUA).

At the time of writing, if you want to see what R2 is all about you’ll need to download the Windows Server 2003 R2 Release Candidate 1 (RC1) Software, which can only be installed on a trial version of Windows 2003 Server (available as a download with R2). After you’ve installed the ‘Subsystem for UNIX-based Applications’ Windows component (Add/Remove Programs applet) you will need to download and install 200MB or so of ‘Utilities and SDK for UNIX-based Applications’. See Installing and Using Utilities and SDK for UNIX-based Applications. Be sure to do a custom install, as the default doesn’t install the GNU utilities (among other things).

Once you’ve got it all installed you’ll probably want to start compiling software (like bash), but when you run configure you’ll get a message like this:

 checking build system type... ./support/config.guess: unable to guess system type
    
    This script, last modified 2005-09-19, has failed to recognize
    the operating system you are using. It is advised that you
    download the most up to date version of the config scripts from
    
      http://savannah.gnu.org/cgi-bin/viewcvs/*checkout*/config/config/config.guess and
      http://savannah.gnu.org/cgi-bin/viewcvs/*checkout*/config/config/config.sub
    
    If the version you run (./support/config.guess) is already up to date, please
    send the following data and any information you think might be
    pertinent to config-patches@gnu.org in order to provide the needed
    information to handle your system.
    
    config.guess timestamp = 2005-09-19
    
    uname -m = x86
    uname -r = 5.2
    uname -s = Interix
    uname -v = SP-9.0.3790.2049
    
    /usr/bin/uname -p = Intel_x86_Family15_Model4_Stepping8
    /bin/uname -X     =
    System = Interix
    Node = aosdubvsvr03
    Release = 5.2
    Version = SP-9.0.3790.2049
    Machine = x86
    Processor = Intel_x86_Family15_Model4_Stepping8
    HostSystem = Windows
    HostRelease = SP1
    HostVersion = 5.2
    
    hostinfo               =
    /bin/universe          =
    /usr/bin/arch -k       =
    /bin/arch              =
    /usr/bin/oslevel       =
    /usr/convex/getsysinfo =
    
    UNAME_MACHINE = x86
    UNAME_RELEASE = 5.2
    UNAME_SYSTEM  = Interix
    UNAME_VERSION = SP-9.0.3790.2049
    configure: error: cannot guess build type; you must specify one

This is because the config.guess script only knows about Interix versions 3 and 4 – you’ll have to tell it about version 5:

$ diff config.guess config.guess.new
    782c782
    <     x86:Interix*:[34]*)
    ---
    >     x86:Interix*:[345]*)
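If you’d rather not edit the file by hand, a quick substitution along these lines should do the same job (a sketch assuming the original line matches the pattern shown in the diff):

    # Teach config.guess about Interix 5 by extending the version glob.
    sed 's/x86:Interix\*:\[34\]\*)/x86:Interix*:[345]*)/' support/config.guess > config.guess.new
    cp config.guess.new support/config.guess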

Once you’ve done this you should be able to start compiling…