29 May 2009

A Clear[er] Future for CloudCamp (and cloud computing in general)

Earlier today Reuven Cohen posted about A Bright Future for CloudCamp in which he publicly stated that he "will happily transfer all related IP, domains, etc to the control of the [CloudCamp] organization" in response to (if not necessarily as a result of) my Enomaly, Inc. owns CloudCamp™ - has it jumped the shark? post. The details of the organisation were light and there was indeed some confusion over the trademark but Dave Nielsen confirmed that "CloudCamp will be turned into a for-profit ... OVER MY COLD, DEAD BODY! Also, as you know, I have spent a lot of time researching the formation of CloudCamp as a non-profit (which is only fitting, since no-one has received any compensation ;-)."

Some key details are missing, such as how (or indeed whether) the official(s) will be elected and what form of non-profit will be formed - a section 501(c)(3) organisation created on the basis of educational and/or scientific services may also offer tax exemptions for sponsors in the US, for example. I'm expecting these to be clarified at the anniversary CloudCamp event on 24 June 2009, and with lots of eyes on the details there will be no room whatsoever for shenanigans - as I won't be there, be sure to ask plenty of questions if anything still seems out of place. In particular it should be burnt into the memorandum and articles of association that the organisation cannot be sold or have its assets transferred to another entity that is not similar in spirit (e.g. non-profit), and that the objective of the organisation should be to educate about cloud computing rather than to promote commercial interests (e.g. the infamous trade association). The officials should also be elected by organisers and/or participants (within say the first year, if not immediately at launch), who should be true members of/subscribers to the organisation and thus able to vote; that way if our illustrious leaders lose interest (or their minds) then the community can continue. These requirements remove all temptation and make us less of a target for subversion, thus safeguarding CloudCamp's continued viability (at least until we're so successful that it's no longer relevant and "cloud" fades into the background like "client-server" did a few decades ago).

In order to further improve transparency around the handling of money I will set a good example by being 100% transparent with CloudCampParis. That is, I commit to making available all details of money received and spent for public scrutiny. So far we have a number of €250 and €500 sponsorships confirmed or in the works, and quotes for €1,200 and €1,500 in catering (depending on whether we go for bags or buffet - I'd still prefer beer & pizza though), as well as something like €750 in flights to get Dave there for this first French event - we're on track to break even. I have already committed to sponsors that all funds raised will be spent on the event itself, and encourage other organisers to follow suit. For those who raised concerns about potential improprieties, I encourage you to challenge the organisers to justify their expenses with receipts and hope that they will do so proactively in future; with complete transparency there is no need for trust (or angst when trust is lost).

The real news of the day though follows on from my being (again) silenced by a "consensus" (which included many of those implicated by my earlier allegations) and then immediately afterwards flat out accused of "only becom[ing] involved for one reason -- to try to fork the community" (while deprived of the right of reply). This is clearly BS: if I wanted to fork the community I would simply have done so by feeding the growing unrest rather than pushing the committee to put its cards on the table; I have about as much interest in being at the "top" of what I believe should be a completely flat structure as I do in contributing to something which I believe could/will eventually be subverted for the enrichment of a few individuals. Although I've been sharply critical at times (more often than I would like), everything I post is [believed to be] true and almost always links back to a primary source; this was a flat out lie and it resulted in a flat out threat should it not promptly be proven or retracted with an apology.

Immediately after Reuven forwarded my message clearly marked CONFIDENTIAL to the public list, he and I got on the phone for half an hour (the first time we've actually spoken) and discussed our differences. He does a good job of summarising it so I'll just quote him:
I just got off the phone with Sam. After almost a year of public feuding, we finally actually spoke in person. First let me say that email probably isn't the best method for dispute resolution. I probably should have called Sam long ago. It's clear we share the same passions for open cloud computing. In regards to my previous statements about Sam's intention to fork CloudCamp, he has assured me that isn't the case and he is committed to making the Paris CloudCamp event a success we can all share. I believe him.

Going forward we agreed that continuing our feud is childish and does more harm than good. We are going to actively work to strengthen our relationship and put this ridiculous feud behind us. My request to Sam is that in future, if he does have a grievance, he call me directly before we take our frustrations public; we both agree this is a better approach than a public battle.
Unfortunately those of you who found all this rather entertaining will have to go back to watching WWF, as we're finally going to get on with furthering the interests of cloud computing rather than [in]fighting (which makes no sense whatsoever given we're not even competitors) - or "inside baseball", as one article put it. As TheOtherSam pointed out:
Reuven recently wrote about two watershed epochs in the development of the cloud industry. Given the energy and passion of these two individuals, this event might mark a third!
Given that things like the ill-fated Open Cloud Alliance now have some chance of seeing the light of day, that duplicate initiatives like the Unified Cloud Interface (UCI) and Open Cloud Computing Interface (OCCI) can work together, and that fiascos like the Open Cloud Manifesto are less likely to occur behind closed doors, this may well prove correct - one thing you can be sure of is that where I'm involved there will be NoBullshit™.

So let's close this chapter and get on with it...

26 May 2009

Enomaly, Inc. owns CloudCamp™ - has it jumped the shark?

So Reuven Cohen's company, Enomaly, Inc. effectively owns CloudCamp... you heard it here first.

Here's the backstory:

As you're no doubt already aware I recently stepped up to bring CloudCamp to Paris on 11 June 2009, which seemed like a good idea at the time and a nice opportunity to kickstart the community over here (we already have almost 100 registrations!). You also likely followed my coverage of previous Enomaly-related fiascos, including the CCIF goat rodeo, and appreciate that I have a very low tolerance for bulls--t in anything I'm involved with (I still can't for the life of me work out why Enomaly insists on involving itself in this stuff rather than focusing on its fledgling business). What you probably don't know is that the CCIF and CloudCamp organisations are (or at least were to be) one and the same, were it not for backlash from local organisers and my premature uncovering of the ill-fated [Open] Cloud [Computing] Alliance just in the nick of time. I figured the shenanigans and tomfoolery were in the past and that we'd moved on, but apparently not...

So we held our first organisers' meeting a few weeks back and hit the ground running with an agenda, venue, sponsors and a handful of registrations in an Eventbrite site that we set up. As we expect a mixed audience, and bearing in mind we're in Central Europe rather than the US, we went for a more formal structure than usual with a combination of set talks and an "unpanel". This apparently wasn't the CloudCamp-approved format so the agenda was overhauled, only to be rejected by the venue and restored to something more like what we started with. The Eventbrite site was also handed over without question to Dave Nielsen, who claimed it would be better on his account for cross-marketing purposes. That was fine until we wanted to offer sponsorship slots to a few specific registrants but were denied access to our own list on the basis of a "no-spam policy" (if we can't trust our own organisers then who can we trust, bearing in mind BarCamp lists are public, albeit obfuscated?). Needless to say my patience was already being tested because things I needed (documentation and a sponsorship kit) were absent while things I didn't (interference) were plentiful.

Naturally cynical and somewhat unsettled by our brushes with the self-appointed CloudCamp committee (which obnoxiously lists Reuven as "instigator" while failing to acknowledge any of the European contributors - including Alexis Richardson, Chris Purrington and Simon Wardley - who were equally critical to its success, not to mention BarCamp itself, on which the whole thing is based), I took advantage of being at the Cloud Computing Expos in Prague and London to talk candidly with some of the other European organisers. Sure enough I'm not the only one who's anxious about the future (of course the future of CloudCamp is looking bright when you know you own the thing!) and it seems there is some well-earned and deep-seated distrust going around. I'm also not the only one concerned about the hard work of the many potentially resulting in the unjust enrichment of the few, and my attempts to convince Dave (in a 3 hour call no less) that everyone who's ever organised or even attended a CloudCamp event is both stakeholder and beneficiary have thus far fallen on deaf ears. It's becoming increasingly clear to me that a small group of people I've previously referred to as the Mighty Morphin' Power Rangers believe they "own" the community (more "pwn" than "own" if you ask me).

Everyone I spoke to agreed that the best way forward would be to take care of registering the trademark (something that should have been done long ago anyway), to be handed over to a suitable non-profit organisation run by elected representative(s). This mail was drafted to announce the contribution, which should really have been the end of the story:
Afternoon all,

As you know I've been active in protecting all things cloud computing w.r.t. trademarks.

I've just discovered the term CloudCamp is not protected and as one of a large and growing list of stakeholders (on which I include everyone from participants to organisers, sponsors and "instigators") I am concerned that we are unnecessarily (significantly) exposed. I bumped into Tom Leyden at the Cloud Computing Expo in Prague (who's organised a bunch more CloudCamps than I have) and he shares my concerns, as do a handful of other organisers I have spoken to.

As such (given the significant lead times and expenses usually associated with trademark registration) I've taken the liberty of registering the trademark with the USPTO, which I will gladly transfer to a 501(c)(3) non-profit established to further the interests of cloud computing and run by elected officials. If we're not (eventually) reimbursed then Tom and I will cover the costs personally as a donation/sponsorship.

Sam
The problem was that when we did a worldwide search last week, with a view to registering the trademark, we found that Reuven Cohen (with the help of Deeth Williams Wall lawyers) had already done so in March in the name of his own company, Enomaly, Inc. Even more curiously, when I raised the trademark issue on my recent call with Dave he knew nothing about it, so either he's being taken for a ride along with the rest of us or he's telling fibs too. Naturally the excuse will be that this was done to protect the community while waiting for the formation of CloudCamp, Inc., but I don't buy it - the application occurred contemporaneously with a brash attempt by a vendor to buy the whole lot and I don't believe for one second that this was a coincidence.

I don't plan to dwell on this point (I don't have the time anyway) and my primary/only concern is the ongoing viability and stability of the community we have all contributed to in some way (even if just as a participant). The last thing I want to see is a for-profit company being formed and run by self-appointed dictators only to be sold to a vendor - such a thing would be the antithesis of BarCamp, on which the group is based - and whatever is set up should be structured so as to make this impossible (e.g. a non-profit democracy).

I'm not the first to accuse CloudCamp of jumping the shark, and we've seen it all before (right down to the silly puff pieces promoting individuals and the obnoxious "instigator" title) when MashupCamp jumped the shark a few years back. However I believe it's not yet too late to avoid forking the community (and yes, if the organisers don't come to the party then everyone I've spoken to agrees there will be a fork), as I'm fairly sure they plan to announce the new regime they've been busy nutting out with their lawyers at the anniversary CloudCamp on 24 June 2009.

As a starting point for the "Future of CloudCamp" here's a mail I wrote at the start of the month, only to have it moderated and deleted. Let's try to work out what we need from any central CloudCamp organisation (and indeed if we need one at all) and then take it from there:
---------- Forwarded message ----------
From: Sam Johnston
Date: Mon, May 4, 2009 at 7:28 PM
Subject: Future of CloudCamp
To: [email protected]

Evening all,

There was apparently a "future of cloudcamp" call with European organisers a few weeks back and putting aside the question as to why I and the other CloudCampParis organisers I've spoken to weren't invited, was someone planning to at least post some minutes to the list?

So far as I am concerned CloudCamp is a good (albeit blatantly obvious) idea and is essentially a franchise shared between anyone who has contributed to its growth (from "instigators" to organisers to sponsors to attendees). Those of you entangled in the CCIF goat rodeo will be acutely aware of my fervour for transparency and as such I don't like having to ask for it, but I know I'm not the only one who wants to see more of it.

That in mind, by kicking off this thread I'm hoping we (the stakeholders of CloudCamp) can collaboratively and openly define the direction of the organisation. First things first (as I'm busy organising CloudCampParis as we speak): I'd like to give Dave some ideas as to how he can best assist local organisers. Here are some ideas to get the ball rolling:

  • Sponsorship Kit to facilitate the selling of sponsorships (maybe just a PDF and/or web page explaining why it's a good idea), probably offering a basic level (@ ~$350/€250) including mentions on the event minisite, at the event, etc. and a more advanced level including a lightning talk. For bonus points offer a "bronze" level for cashed-up attendees. Details TBD but you get the idea - it makes for an easier sell.
  • Branding Kit with logos, colours, PDFs, etc. which local organisers can use to have some sort of consistency (even a PDF of a sign with an arrow on it saves time).
  • Global Sponsors who commit to pay a certain amount per event (say €100-500 or around €5-20k/annum) and who get a mention on the main site and at each event for it. Currently cloudcamp.com has a laundry list of sponsors including pretty much anyone who's ever had anything to do with cloud computing and their mothers - that makes it essentially worthless and difficult to sell... bronze/silver/gold/platinum sponsors would be a better idea.
  • Organisation to take money, issue invoices, etc. but only if it's a 501(c)(3) as it's too easy to take the piss with other forms and this has significant tax advantages (read: easier to sell sponsorships and everything is cheaper). Regional organisers should be organisation members and the direction should be set by them democratically. Among other things that would save people like me having to bother our accountants about collecting money on behalf of the organisation.
  • Support in terms of joining conference calls, mailing lists and even attending the events where possible/feasible. This is a two way street though so I guess local organisers should offer accommodation/entertainment/etc. where possible to reduce costs.
  • Web Site optimised for creating and advertising individual events. This should probably be something like the Drupal CMS and organisers should be able to create and edit events without having to bother anyone else. It doesn't need to be fancy - a Wiki would probably do too (this works rather nicely for BarCamp). This is something I'd be more than happy to help out with, especially if we could get it in place quickly (in time for Paris).
I'm sure there's plenty of other things we could do but the point is to get some sort of discussion underway and get people involved in the governance rather than provide an exhaustive list.

Cheers,

Sam
Update: Forgot to mention that Canadian Trademark Application #1431094 has a priority date of 4 March 2009, which likely means that for an additional expense it can be extended to other Madrid Protocol countries at any time in the next 6 months (i.e. until 4 September 2009). Just because it doesn't show up in USPTO yet doesn't mean it won't in due course.

25 May 2009

Is HTTP the HTTP of cloud computing?

Ok so after asking Is OCCI the HTTP of cloud computing? I realised that the position may have already been filled and that the question was more Is AtomPub already the HTTP of cloud computing?

After all, my strategy for OCCI was to follow Google's example with GData by adding some necessary functionality (a search interface, caching directives, resource-specific attributes, etc.). Most of the heavy lifting was actually being done by AtomPub, thus avoiding a huge amount of tedious and error-prone protocol writing (around 20,000 words of it) - something which OGF and the OCCI working group aren't really geared up for anyway. This is clearly a workable and well-proven approach as it has been adopted strategically by both Microsoft and Google and also tactically by Salesforce and IBM, among others. Best of all, adding things like queries and versioning is a manageable workload while starting from scratch most certainly is not.

But what if there were an easier way? Recall that the problem we are trying to solve is exposing a flexible interface to an arbitrarily large collection of interconnected compute, storage and network resources. We need to be able to describe and manipulate the resources (CRUD), associate them with each other via rich links (e.g. links with attributes like local identifiers - eth0, sda, etc.) and change their state (start, stop, restart, etc.), among other things.
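For reference, the conventional RESTful mapping of those CRUD operations onto HTTP's verbs (assuming the usual collection/item URL layout) looks something like this:
  • Create -> POST to the collection (or PUT to a client-chosen URL)
  • Retrieve -> GET on the resource
  • Update -> PUT of a modified representation
  • Delete -> DELETE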

Representational State Transfer (REST)

Actually we're not talking about exposing the resources themselves (that would be impossible) but rather various representations of those resources - like Plato's shadows on the cave wall - hence the "REpresentational" in "REpresentational State Transfer (REST)". There's an infinite number of possible representations, so it's impossible to capture them all now, but here are some examples:
  • An Open Virtualisation Format (OVF) serialisation of a compute resource
  • A platform-specific descriptor file (e.g. VMX)
  • A complete archive of the virtual machine with its dependencies (OVA)
  • A graphical image of the console at a given point in time ('snapshot')
  • A video stream of the console for archiving/audit purposes (à la Citrix's Project Iris)
  • The console itself (e.g. SSH, ICA, RDP, VNC)
  • Build documentation (e.g. PDF, ODF)
  • Esoteric enterprise requirements (e.g. NMS configuration)
It doesn't take a rocket scientist to spot the correlation between this and HTTP's existing content negotiation functionality (whereby a client can ask for a specific representation of a given resource - e.g. HTML vs PDF), so this is already pretty much solved for us (see HTTP's Accept: header for the details). For bonus points this information should also be exposed in the URI, as it's not always possible or convenient to set headers, à la:
  • http://example.com/.atom (using filename extensions)
  • http://example.com/;content-type=text/html (using the full Internet media type)
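To make that concrete, a hypothetical exchange (the OVF media type here is invented for illustration) might look like:

GET /compute/123 HTTP/1.1
Host: example.com
Accept: application/ovf+xml

HTTP/1.1 200 OK
Content-Type: application/ovf+xml

(an OVF serialisation of the compute resource follows)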
Web Linking

But what about the links? As I explained yesterday the web is built on links embedded in HTML documents using the A tag. Atom also provides enhanced linking functionality via the LINK element, where it is also possible to specify content types, languages, etc. In this case however we want to allow resources to be arbitrary types and more often than not we won't have the ability to link within the payload itself. This leaves us with two options: put the links in the payload anyway by relying on a meta-model like Atom (or one we roll ourselves) or find some way to represent them within HTTP itself.

Enter HTTP headers, which are also extensible and, as it turns out, in the process of being extended (or at least refined) to handle this very requirement by fellow Australian Mark Nottingham. See the "Web Linking" IETF Internet-Draft (draft-nottingham-http-link-header, version 05 at the time of writing) for the nitty gritty details and the ietf-http-wg list for some current discussions. Basically it clarifies the existing Link: headers and the result looks something like this:
Link: <http://example.com/TheBook/chapter2>; rel="previous"; title="previous chapter"
The Link: header itself is also extensible so we can faithfully represent our model by adding e.g. the local device name when linking storage and network resources to compute resources, along with other requisite attributes. It would be helpful if the content type were also specified (Atom allows for multiple links of the same relation provided the content type differs, for example) but language is already covered by HTTP (it doesn't seem useful to advertise French links to someone who has already asked to speak English).
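For example, a compute resource might advertise an attached storage resource with something like the following (the "attachment" relation and "device" attribute are hypothetical extensions):

Link: <http://example.com/storage/456>; rel="attachment"; device="sda"; title="root volume"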

It's also interesting to note that earlier versions of the HTTP RFCs actually [poorly] specified both the Link: header and LINK and UNLINK methods for maintaining links between web resources. John Pritchard had a crack at clarification in the Efficient HyperLink Maintenance for HTTP I-D, but like most I-Ds this one seems to have died after 6 months, and with it the methods themselves. It seems to me that adding HTTP methods at this time is a drastic (and almost certainly infeasible) action, especially for something that could just as easily be accomplished via headers à la Set-Cookie: (too bad the I-D doesn't specify how to add/delete/modify links!). In the simplest sense a Link: header appearing in a PUT or POST could replace the existing one(s), but something more elegant for acting on individual links would be nice - probably a discussion worth having on the ietf-http-wg list.

Organisation of Information

Looking back to Atom for a second we're still missing some key functionality:
  • Atom id -> HTTP URL
  • Atom updated -> HTTP Last-Modified: Header
  • Atom title and summary -> Atom/HTTP Slug: Header or equivalent
  • Atom link -> HTTP Link: Header
  • Atom category -> ???
Houston, we have a problem. OCCI use cases range from embedded hypervisors exposing a single resource through to a single entry point for an entire enterprise or the "Great Global Grid" - we need a way to organise, categorise and search for the information, likely including:
  • Free text search via a Google-style "?q=firewall" syntax
  • Taxonomy via categories (already done for Atom) for things like "Operating System" and "Data Center"
  • Folksonomy via [user] tags (already done for Atom and bearing in mind that tag spaces are cool) for things like "testlab"
Fortunately the good work already done in this area for Atom would be relatively easy to port to a Category: HTTP header, following the Link: header example above. In the meantime a standard search interface (including category support) is trivial and, thanks to Google, already done.
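Borrowing the term/scheme/label structure of Atom's category element, such a header might look something like this (the values are purely illustrative):

Category: linux; scheme="http://example.com/categories/os"; label="Operating System"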

Structured Data Formats

HTML also resolves another pressing issue - what format to use for submitting key-value pairs (which constitutes a large part of what we need to do with OCCI). It gives us two options:
  • application/x-www-form-urlencoded - simple key=value pairs, as submitted by the vast majority of web forms today
  • multipart/form-data - for submissions that include binary content such as file uploads
The advantages of being able to create a resource from a web form simply by POSTing to the collection of resources (e.g. http://example.com/compute), and with HTML 5 by PUTting the resource in place directly (e.g. http://example.com/compute/<uuid>) are immediately obvious. Not only does this help make the human and programmable web one and the same (which in turn makes it much easier for developers/users to kick the tyres and understand the API) but it means that scripting even advanced tasks with curl/wget would be trivial. Plus there's no place for time-wasting religious arguments about angle brackets (XML) over curly braces (JSON).
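As a sketch (the URL, parameters and identifiers are invented), creating a compute resource with a form-encoded POST could then look like this:

POST /compute HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded

name=web01&cores=2&memory=2048

HTTP/1.1 201 Created
Location: http://example.com/compute/123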

RESTful State Machines

Something else which hadn't sat well with me until I spent the weekend ingesting the RESTful Web Services book (by Leonard Richardson and Sam Ruby) was the "actuator" concept we picked up from the Sun Cloud APIs. This breaks away from RESTful principles by exposing an RPC-style API for triggering state changes (e.g. start, stop, restart). Granted it's an improvement on the alternative (GETting a resource and PUTting it back with an updated state), as Tim Bray explains in RESTful Casuistry (to which Roy Fielding and Bill de hÓra also responded), but it still "feels funky". Sure, it doesn't make any sense to try to "force" a monitored status to some other value (for example setting a "state" attribute to "running"), especially when we can't be sure that's the state we'll end up in (maybe there will be an error, or the transition will depend on some outcome over which we have no control). Similarly it doesn't make much sense to treat states as nouns, for example adding a "running" state to a collection of states (even if a resource can be "running" and "backing up" concurrently). But is using URLs as "buttons" representing verbs/transitions the best answer?

What makes more sense [to me] is to request a transition and check back for updates (e.g. by polling or HTTP server push). If it's RESTful to POST comments to an article (which in addition to its own contents acts as a collection of zero or more comments) then POSTing a request to change state to a [sub]resource also makes sense. As a bonus these requests can be parametrised (for example a "resize" request can be accompanied by a "size" parameter, and a "stop" request sent with clarification as to whether an "ACPI Off" or "Pull Cord" is required). Transitions that take a while, like "format" on a storage resource, can simply return HTTP 202 Accepted, so we've got support for asynchronous actions as well - indeed some requests (e.g. "backup") may not even be started immediately. We may also want to consider using something like Post Once Exactly (POE) to ensure that requests like "restart" aren't executed repeatedly, and that we can cancel requests the system hasn't had a chance to deal with yet.

Exactly how this should look in terms of URL layout I'm not sure (perhaps http://example.com/<resource>/requests) but being able to enumerate the possible actions as well as acceptable parameters (e.g. an enum for variations on "stop" or a range for "resize") would be particularly useful for clients.
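By way of illustration, a "stop" request against such a layout might look like this (all names hypothetical), with the client polling the returned URL for the outcome:

POST /compute/123/requests HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded

action=stop&mode=acpi-off

HTTP/1.1 202 Accepted
Location: http://example.com/compute/123/requests/42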

Collections

This is all well and good for individual resources, but collections are still a serious problem. There are many use cases which involve retrieving an arbitrarily large number of resources, and making an HTTP request for each (as well as requests for enumeration etc.) doesn't make sense. More importantly, it doesn't scale - particularly in enterprise environments where requests via proxies and filters can suffer from high latency (if not low bandwidth).

One potential solution is to strap multiple HTTP message entities together as a multipart document, but that's hardly clean and results in some hairy coding on the client side (e.g. manual manipulation of HTTP messages that would otherwise be fully automated). The best solution we currently have for this problem (as evidenced by widespread deployment) is AtomPub so I'm still fairly sure it's going to have to make an appearance somewhere, even if it doesn't wrap all of the resources by default.

24 May 2009

Is AtomPub already the HTTP of cloud computing?

A couple of weeks ago I asked Is OCCI the HTTP of cloud computing? I explained the limitations of HTTP in this context, which basically stem from the fact that the payloads it transfers are opaque. That's fine when they're [X]HTML because you can express links between resources within the resources themselves, but what about when they're some other format - like OVF describing a virtual machine as may well be the case for OCCI? If I want to link between a virtual machine and its network(s) and/or storage device(s) then I'm out of luck... I need to either find an existing meta-model or roll my own from scratch.

That's where Atom (or more specifically, AtomPub) comes in... in the simplest sense it adds a light, RESTful XML layer to HTTP which you can extend as necessary. It provides for collections (a 'feed' of multiple resources or 'entries' in a single HTTP message) as well as a simple meta-model for linking between resources, categorising them, etc. It also gives some metadata relating to unique identifiers, authors/contributors, caching information, etc., much of which can be derived from HTTP (e.g. URL <-> Atom ID, Last-Modified <-> updated).
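As a rough sketch (the identifiers, dates and storage link below are invented for illustration), a feed wrapping a single compute resource might look like this:

<feed xmlns="http://www.w3.org/2005/Atom">
  <id>http://example.com/compute</id>
  <title>Compute Resources</title>
  <updated>2009-05-24T12:00:00Z</updated>
  <author><name>admin</name></author>
  <entry>
    <id>http://example.com/compute/123</id>
    <title>web01</title>
    <updated>2009-05-24T11:00:00Z</updated>
    <category term="linux" label="Operating System"/>
    <link rel="alternate" type="application/ovf+xml" href="http://example.com/compute/123.ovf"/>
    <link rel="related" href="http://example.com/storage/456"/>
  </entry>
</feed>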

Although it was designed with syndication in mind, it is a very good fit for creating APIs, as evidenced by its extensive use by Google (GData) and Microsoft, and more tactically by the likes of Salesforce and IBM.
I'd explain in more detail but Mohanaraj Gopala Krishnan has done a great job already in his AtomPub, Beyond Blogs presentation.

The only question that remains is whether or not this is the best we can do... stay tuned for the answer. The biggest players in cloud computing seem to think so (except Amazon, whose APIs predate Google's and Microsoft's) but maybe there's an even simpler approach that's been sitting right under our noses the whole time.

14 May 2009

Bragging rights: Valeo's 30,000 Google Apps users announced

It's been a long time in coming but I can finally tell you all about what originally brought me to France. Back in 2007 as a strategic consultant I designed, delivered and demonstrated a proof of concept of a complete cloud computing user environment (before it was even called cloud computing) to Valeo in a competitive tender, before handing over to CapGemini for deployment later that year.

What's particularly noteworthy (aside from the sheer scale) is that while many cloud computing deployments are tactical, with a view to reaching a specific goal (e.g. mail scanning, web security, shared calendaring, video hosting, etc.), this one was a high-level strategy to replace as much of the existing infrastructure as possible. I also installed three Google Search Appliances as part of the solution and integrated them with a complex Active Directory and Lotus Notes infrastructure.

Granted this hasn't been a big secret for a while now but it's the first time the full details have officially emerged. Sergey Brin first bragged about it on Google's Q2 earnings call last year while talking about Google Apps' successes:
Just to give you color on what some of these businesses include, in this past quarter Valeo, one of the world’s leading automotive suppliers now has 32,000 users using Google Apps, including of course Gmail, Calendar, Docs and so forth.
Congratulations to everyone at CapGemini, Google and of course Valeo for making this a success.
Valeo launches an innovative initiative with Google to reduce administrative expenses
Wednesday, 13 May 2009

Valeo today announced that the Group's 30,000 Internet-connected employees now have access to a new communication and collaborative working platform based on Google Apps Premier Edition and supported by Capgemini.

The progressive roll-out of the new system is giving employees access to a suite of online products which will increase administrative efficiency and improve collaboration between the 193 Valeo entities in 27 countries.

"We were searching for an innovative way to reduce significantly our office infrastructure costs while simultaneously improving user collaboration and productivity," said André Gold, Valeo's Technical Senior Vice-President. "Our pilot projects demonstrate that this target is achievable."

Valeo is deploying Google Apps, supported by Google’s partner Capgemini, in a phased approach throughout 2009. As a first step, users are being given access to Google sites, on-line documents, video management and instant messaging, including voice and video chat, in order to improve teamwork. The new system will then offer applications to further enhance the company's efficiency, such as an Enterprise directory and workflow tools to automate administrative processes. In the final stage, users will benefit from Google mail, calendar, search and on-line translation solutions to reinforce personal efficiency. They will be able to access the applications from a desktop, laptop or other mobile device.

"The cost savings and innovation made possible by cloud computing help businesses better respond to a global and mobile workforce – especially in today's difficult economic environment," said Dave Girouard, President, Google Enterprise. "We're thrilled Valeo has selected Google."

Valeo is an independent industrial Group fully focused on the design, production and sale of components, integrated systems and modules for cars and trucks. Valeo ranks among the world's top automotive suppliers. The Group has 122 plants, 61 R&D centers, 10 distribution platforms and employs around 49,000 people in 27 countries worldwide.

For additional information, please contact:
Antoine Balas, Valeo Corporate Communications, Tel.: +33.1.40.55.29.36
Malgosia Rigoli, Corporate Communications, Google Enterprise EMEA, Tel.: +44207881 4537, [email protected]

For more information about the Group and its activities, please visit our web site www.valeo.com
Update 1: Google France's Laurent Guiraud (who worked closely with me on the proof of concept and who I have to thank for most of what I know about the Google Search Appliance) has written about it on the Official Google Blog: 30,000 new Google Apps business users at Valeo.

Update 2: The story is now featured on the Official Google Enterprise Blog as well.

Update 3: Said to be Google's "biggest enterprise deal yet".

Update 4: ReadWriteWeb have picked up the story: Google Apps Continues Push Into Enterprise: 30,000 New Users at Valeo.

Update 5: So have TechCrunchIT: Google Cloud:1. MS Office: 0. 

Update 6: ComputerWeekly report Google Apps gets first global customer

Update 7: BusinessWeek talk about it while asking What's Holding Back Google Apps?

Update 8: InfoWeek report that Google's Cloud Evangelism Converts Enterprise Customers

Update 9: The Register (incorrectly) report that "Cap Gemini has sold what it believes is the largest ever contract for Google's online suite of software products", only the deal was as good as done by the time they got it.

Update 10: Computer Business Review writes Google secures biggest ever apps contract

Update 11: CNET states With Valeo deal, Google Apps gains business cred

05 May 2009

Is OCCI the HTTP of Cloud Computing?

The Web is built on the Hypertext Transfer Protocol (HTTP), a client-server protocol that simply allows client user agents to retrieve and manipulate resources stored on a server. It follows that a single protocol could prove similarly critical for Cloud Computing, but what would that protocol look like?

The first place to look for the answer is limitations in HTTP itself. For a start the protocol doesn't care about the payload it carries (beyond its Internet media type, such as text/html), which doesn't bode well for realising the vision of the [Semantic] Web as a "universal medium for the exchange of data". Surely it should be possible to add some structure to that data in the simplest way possible, without having to resort to carrying complex, opaque file formats (as is the case today)?

Ideally any such scaffolding added would be as light as possible, providing key attributes common to all objects (such as updated time) as well as basic metadata such as contributors, categories, tags and links to alternative versions. The entire web is built on hyperlinks so it follows that the ability to link between resources would be key, and these links should be flexible such that we can describe relationships in some amount of detail. The protocol would also be capable of carrying opaque payloads (as HTTP does today) and for bonus points transparent ones that the server can seamlessly understand too.

Like HTTP, this protocol would not impose restrictions on the type of data it could carry, but it would be seamlessly (and safely) extensible so as to support everything from contacts to contracts, biographies to books (or entire libraries!). Messages should be able to be serialised for storage and/or queuing, as well as signed and/or encrypted to ensure security. Furthermore, despite the significant performance improvements introduced in HTTP 1.1, it would need to be able to stream many (possibly millions of) objects as efficiently as possible in a single request. Already we're asking a lot from something that must be extremely simple and easy to understand.

XML

It doesn't take a rocket scientist to work out that this "new" protocol is going to be XML-based, building on top of HTTP in order to take advantage of the extensive existing infrastructure. Those of us who know even a little about XML will be ready to point out that the "X" in XML means "eXtensible", so we need to be specific about the schema for this assertion to mean anything. This is where things get interesting. We could of course go down the WS-* route and try to write our own, but surely someone else has crossed this bridge before - after all, organising and manipulating objects is one of the primary tasks for computers.

Who better to turn to for inspiration than a company whose mission is to "organize the world's information and make it universally accessible and useful": Google. They use a single protocol for almost all of their APIs, GData, and while most people don't bother to look under the hood (no doubt thanks to the myriad client libraries made available under the permissive Apache 2.0 license), when you do you may be surprised at what you find: everything from contacts to calendar items, and pictures to videos, is a feed (with some extensions for things like searching and caching).

OCCI

Enter the OGF's Open Cloud Computing Interface (OCCI), whose (initial) goal is to provide an extensible interface to Cloud Infrastructure Services (IaaS). To do so it needs to allow clients to enumerate and manipulate an arbitrary number of server-side "resources" (from one to many millions), all via a single entry point. These compute, network and storage resources need to be able to be created, retrieved, updated and deleted (CRUD), and links need to be able to be formed between them (e.g. virtual machines linking to storage devices and network interfaces). It is also necessary to manage state (start, stop, restart) and retrieve performance and billing information, among other things.
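For the sake of discussion, such a single entry point might expose a URL layout along these lines (all paths hypothetical):

http://example.com/                  (entry point enumerating the collections)
http://example.com/compute           (the collection of compute resources)
http://example.com/compute/<uuid>    (an individual compute resource)
http://example.com/network/<uuid>    (an individual network resource)
http://example.com/storage/<uuid>    (an individual storage resource)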

The OCCI working group basically has two options now in order to deliver an implementable draft this month as promised: follow Amazon or follow Google (all the while keeping an eye on other players including Sun and VMware). Amazon use a simple but sprawling XML-based API with a PHP-style flat namespace, and while there is growing momentum around it, it's not without its problems. Not only do I have my doubts about its scalability outside of a public cloud environment (calls like 'DescribeImages' would certainly choke with anything more than a modest number of objects, and we're talking about potentially millions) but there are a raft of intellectual property issues as well:
  • Copyrights (specifically section 3.3 of the Amazon Software License) prevent the use of Amazon's "open source" clients with anything other than Amazon's own services.
  • Patents pending like #20070156842 cover the Amazon Web Services APIs, and Amazon have been known to use patents offensively against competitors.
  • Trademarks like #3346899 prevent us from even referring to the Amazon APIs by name.
While I wish the guys at Eucalyptus and Canonical well, and don't have a bad word to say about Amazon Web Services, this is something I would be bearing in mind while actively seeking alternatives, especially as Amazon haven't worked out whether the interfaces are IP they should protect. Even if these issues were resolved via royalty-free licensing it would be very hard for a single vendor to compete with truly open standards (RFC 4287: Atom Syndication Format and RFC 5023: Atom Publishing Protocol) which were developed at the IETF by the community, based on loose consensus and running code.

So what does all this have to do with an API for Cloud Infrastructure Services (IaaS)? In order to facilitate future extension my initial designs for OCCI have been as modular as possible. In fact the core protocol is completely generic, describing how to connect to a single entry point, authenticate, search, create, retrieve, update and delete resources, etc. all using existing standards including HTTP, TLS, OAuth and Atom. On top of this are extensions for compute, network and storage resources as well as state control (start, stop, restart), billing, performance, etc. in much the same way as Google have extensions for different data types (e.g. contacts vs YouTube movies).
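In the same way that GData layers extension elements over plain Atom, a compute extension might look something like this (the namespace and element names are entirely illustrative):

<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:compute="http://example.com/occi/compute">
  <id>http://example.com/compute/123</id>
  <title>web01</title>
  <updated>2009-05-05T10:00:00Z</updated>
  <compute:cores>2</compute:cores>
  <compute:memory>2048</compute:memory>
  <compute:state>running</compute:state>
</entry>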

Simply by standardising at this level OCCI may well become the HTTP of Cloud Computing.

04 May 2009

Apple iPad to be Steve Jobs' welcome back gift?

You may recall the Crystal Ball: Apple's $599 "iPad Touch" Netbook (with pictures) article last year in which I mocked up the forthcoming Apple iPad. Although it turns out I wasn't the first with this idea (further proof it's got legs) I wasn't the last either, with the Wall Street Journal claiming in Jobs Maintains Grip at Apple that "people privy to the company's strategy say Apple is working on new iPhone models and a portable device that is smaller than its current laptop computers but bigger than the iPhone or iPod Touch". Add to that the mysterious 10-inch touchscreen order and you arrive at something not too far from that mockup.

When will all this happen? Well, Steve Jobs is due back in June, coincidentally the same time as the Worldwide Developer Conference (WWDC) 2009, which runs June 8-12 in San Francisco. In addition to Snow Leopard, then, it looks like us Mac junkies will have a[t least one] new toy to play with. This is great news, if only because while surfing the Internet on the iPhone is possible, it's hardly pleasurable.