Trend Micro abandons Intercloud™ trademark application

Just when I thought we were going to be looking at another trademark debacle not unlike Dell’s attempt at “cloud computing” back in 2008 (see Dell cloud computing™ denied), it seems luck is with us: Trend Micro have abandoned their application #77018125 for a trademark on the term Intercloud (see NewsFlash: Trend Micro trademarks the Intercloud™). They had until 5 February 2010 to file for an extension and, according to USPTO’s Trademark Document Retrieval system, they have now well and truly missed the date (the last extension was submitted at the 11th hour, at 6pm on the eve of expiry).

Like Dell, Trend Micro were issued a “Notice of Allowance” on 5 August 2008 (actually Dell’s “Notice of Allowance” for #77139082 was issued less than a month before, on 8 July 2008, and cancelled just afterwards, on 7 August 2008). Unlike Dell though, Trend Micro just happened to be in the right place at the right time rather than attempting to lay claim to an existing, rapidly developing technology term (“cloud computing”).

Having been issued a Notice of Allowance, both companies just had to submit a Statement of Use and the trademarks were theirs. With Dell it was simply luck that I happened to discover and reveal their application during this brief window (after which the USPTO cancelled their application following widespread uproar), but with Trend Micro it’s likely they don’t actually have a product today with which to use the trademark.

A similar thing happened to Psion in late 2008, who couldn’t believe their luck when the term “netbook” became popular long after they had discontinued their product line of the same name. Having realised they still held an active trademark, they threatened all and sundry over it, eventually claiming Intel had “unclean hands” and asking for $1.2bn, only to back down when push came to shove. One could argue that just as we have “submarine patents”, we also have “submarine trademarks”.

In this case, back on September 25, 2006, Trend Micro announced a product coincidentally called “InterCloud” (see Trend Micro Takes Unprecedented Approach to Eliminating Botnet Threats with the Unveiling of InterCloud Security Service), which they claimed was “the industry’s most advanced solution for identifying botnet activity and offering customers the ability to quarantine and optionally clean bot-infected PCs”. Today’s Intercloud is a global cloud of clouds, in the same way that the Internet is a global network of networks – clearly nothing like what Trend Micro had in mind. It’s also both descriptive (a portmanteau describing interconnected clouds) and generic (in that it cannot serve as a source identifier for a given product or service), which basically means it should be found ineligible for trademark protection should anyone apply again in future.

Explaining further, the Internet has kept us busy for a few decades simply by passing packets between clients and servers (most of the time). It’s analogous to the bare electricity grid, allowing connected nodes to transfer electrical energy between one another (typically from generators to consumers, though with alternative energy consumers are sometimes generators too). Cloud computing is like adding massive, centralised power stations to the electricity grid, essentially giving it a life of its own.

I like the term Intercloud, mainly because it takes the focus away from the question of “What is cloud?”, instead drawing attention to interoperability and standards where it belongs. Kudos to Trend Micro for this [in]action – whether intentional or unintentional.

Introducing Planet Cloud: More signal, less noise.


As you are no doubt well aware, there is a large and increasing amount of noise about cloud computing – so much so that it’s becoming increasingly difficult to extract a clean signal. This has always been the case, but now that even vendors like Oracle (who have previously been sharply critical of cloud computing, in part for exactly this reason) are clambering aboard the bandwagon, it’s nearly impossible to tell who’s worth listening to and who’s just trying to sell you yesterday’s technology under today’s label.

It is with this in mind that I am happy to announce Planet Cloud, a news aggregator for cloud computing articles that is particularly fussy about its sources. In particular, unless you talk all cloud, all the time (which is rare – even I take a break every once in a while), your posts won’t be included unless you can provide a cloud-specific feed. Fortunately most blogging software supports this capability, and many of the feeds included at launch take advantage of it. You can access Planet Cloud at:

http://www.planetcloud.org/ or @planetcloud

Those of you aware of my disdain for SYS-CON’s antics might be surprised that we’ve opted to ask for forgiveness rather than permission, but you’ll also notice that we don’t run ads (nor do we have any plans to – except for a few that come to us via feeds and are thus paid to authors). As such this is a non-profit service to the cloud computing community, intended to filter out much of the noise in the same way that the Clouderati provides a fast track to the heart of the cloud computing discussion on Twitter. An unwanted side effect of this approach is that it is not possible for us to offer the feeds under a Creative Commons license, as would usually be the case for content we own.

Many thanks to Tim Freeman (@timfaas) for his contribution not only of the planetcloud.org domain itself, but also of a comprehensive initial list of feeds (including many I never would have thought of myself). Thanks also to Rackspace Cloud, who provide our hosting and who have done a great job of keeping the site alive during the testing period over the last few weeks. Thanks to the Planet aggregator, which is simple but effective Python software for collating many feeds. And finally thanks to the various authors who have [been] volunteered for this project – hopefully we’ll be able to drive some extra traffic your way (of course if you’re not into it then that’s fine too – we’ll just remove you from the config file and you’ll vanish within 5 minutes).
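For the curious, here is roughly what the “cloud-specific feed” test amounts to. This is a hypothetical sketch in Python using the feedparser library – the feed URL, tag list and 90% threshold are illustrative only, and Planet itself simply reads whatever feed URLs appear in its config file:

```python
# Hypothetical sketch: check whether a candidate feed is genuinely "all cloud"
# before adding it to the aggregator's config. The tag list and threshold are
# illustrative assumptions, not Planet Cloud's actual admission criteria.
import feedparser

CLOUD_TERMS = {"cloud", "cloud computing", "cloudcomputing", "iaas", "paas", "saas"}

def is_cloud_specific(feed_url, threshold=0.9):
    """Return True if at least `threshold` of recent entries carry a cloud tag."""
    feed = feedparser.parse(feed_url)
    if not feed.entries:
        return False
    tagged = 0
    for entry in feed.entries:
        tags = {(t.get("term") or "").lower() for t in entry.get("tags", [])}
        if tags & CLOUD_TERMS:
            tagged += 1
    return tagged / len(feed.entries) >= threshold

if __name__ == "__main__":
    # e.g. a per-tag feed such as https://example.com/blog/tag/cloud/feed (placeholder URL)
    print(is_cloud_specific("https://example.com/blog/tag/cloud/feed"))
```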

Press Release: Cloud computing consultancy condemns controversial censorship conspiracy

SYDNEY, 24 December 2009: Sydney-based Australian Online Solutions today condemned the government’s plans to introduce draconian Internet censorship laws in Australia.

Senator Stephen Conroy (Minister for Broadband, Communications and the Digital Economy) recently announced the introduction of mandatory Internet Service Provider (ISP) level filtering of Refused Classification (RC)-rated content, as well as grants to encourage ISPs to filter wider categories of content. This would require the implementation of complicated, expensive and unreliable, yet trivially circumvented, filtering technology at the cost of the taxpayer and Internet user, despite a strong message having been sent that this is both unwanted and unwarranted. Reader polls conducted by the Sydney Morning Herald and The Age newspapers showed a staggering 95% of some 25,000 readers reject the federal government’s plans to censor the Internet in Australia, on the basis that it impinges on their freedom. “There are better and safer ways to tackle the problem, such as educating parents, teachers and children, offering customisable filtering as a value-added option and improving law enforcement (including cooperation with other countries),” said Sam Johnston, Australian Online Solutions’ Founder & CTO.

The full frontal assault on civil liberties aside, Australian Online Solutions has also raised some serious technical concerns about the program. “At a time when individuals and businesses are looking to shed expensive legacy systems in favour of cheap, scalable Internet-based services, any action which can only impair performance and reliability while threatening to strangle Australia’s connectivity with the outside world calls for extensive justification,” said Johnston. “Cloud computing, which delivers computing services over the Internet on a utility basis – like electricity – gives its users a significant advantage over competitors. However, web-based applications such as Facebook, Gmail, Hotmail and Twitter are extremely sensitive to the bandwidth and latency constraints introduced by censorship technology,” added Johnston. “The proposed law threatens to exclude Australia from this large and growing industry altogether, both as provider and consumer, at a time when it could emerge as a market leader. Would you buy an Internet-based service from China or Iran, or even use one if you were based there?” Analysts Merrill Lynch and Gartner estimate that the cloud computing market will reach $175 billion in the coming years.

Trials commissioned by Senator Conroy and conducted by “highly reputable and independent testing company” Enex Testlab were also called into question, on both technical and conflict-of-interest grounds. Enex Testlab, a supplier of “independent” evaluation, purchasing advice and product review services, boasts a corporate client list with over a dozen vendors of filtering technology, including Content Keeper Technologies, Content Watch and Internet Sheriff Technology (accounting for around one quarter of all clients listed), and offers formal certification for content filters. As such it is believed they have strong motivation to avoid releasing a report directly or indirectly critical of their clients’ offerings.

Furthermore, the scope of the testing was artificially constrained, critical controls (such as connection consistency) were missing and success criteria were poorly defined or non-existent from the outset, in a trial that appears to have been a manufactured success. Nonetheless, unflattering results which highlighted serious deficiencies in the proposal were disingenuously touted by Senator Conroy as showing “100 percent accuracy” with “negligible impact on internet speed”.

Other problems with the fatally flawed and heavily criticised report include:

  • Proof that “a technically competent user could circumvent the filtering technology” while “circumvention prevention measures can result in greater degradation of internet performance”.
  • Admission that all filters were “not effective in the case of non-web based protocols such as instant messaging, peer-to-peer or chat rooms”.
  • False positive rates (over-blocking of legitimate/innocuous content) of up to 3.4% (over 5.1 billion pages per Internet Archive estimates), with failure rates as high as 2% (3 billion pages) considered “low” (see the worked figures after this list).
  • False negative rates (passing of inappropriate content) exceeding 20% (over 30 billion pages) with failure rates as high as 30% considered “reasonable by industry standards” (45 billion pages).
  • Admission that 100% accuracy is “unlikely to be achieved” and that the false positive rate increases with sensitivity, with no attempt to scientifically determine acceptable failure rates.
  • Faults being perceptible to end users, with some customers reporting “over-blocking and/or under-blocking of content during the pilot” while considering “mechanisms for self-management” and “improved visibility of the filter in action” to be “important”.
  • Unjustified assumptions including that “performance impact is minimal if between 10 and 20 percent”, while at least one system “displayed a noticeable performance impact”. Some customers “believe they experienced some speed degradation”.
  • Admission of “uncontrollable variables”, including ones that could result in “40 percent performance degradation over theoretical maximum line-rate, or more in some cases”, even at speeds less than 1/12 that of the proposed National Broadband Network (NBN).
  • Admission that recognition of IP addresses to be filtered is unreliable (indeed often impossible), particularly for large-scale websites that use load balancing (e.g. most cloud computing solutions).
  • Results that were “irregular/incorrect” and “highly anomalous with reasonable expectations” (such as physically impossible improvements in performance when transferring encrypted, random payloads).
  • Complete absence of quantitative cost analysis (e.g. what financial load will be borne by both the taxpayer and Internet subscriber, both up front and on an ongoing basis), as well as any secondary costs such as decreased efficiency.
  • Overall results indicating that 1 in 5 customers’ needs were not met, with 1 in 3 opting out of continued use of the filtered service.
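For what it’s worth, the page counts quoted above are simply the stated error rates applied to an Internet Archive estimate of roughly 150 billion web pages; the following back-of-the-envelope sketch, assuming that figure, reproduces them:

```python
# Back-of-the-envelope check of the page counts quoted in the list above,
# assuming the Internet Archive's estimate of roughly 150 billion web pages.
TOTAL_PAGES = 150e9

rates = {
    "false positives (up to)":        0.034,  # -> ~5.1 billion pages over-blocked
    "false positives ('low')":        0.02,   # -> ~3 billion pages
    "false negatives (exceeding)":    0.20,   # -> ~30 billion pages passed
    "false negatives ('reasonable')": 0.30,   # -> ~45 billion pages
}

for label, rate in rates.items():
    print(f"{label:32s} {rate:5.1%} = {rate * TOTAL_PAGES / 1e9:5.1f} billion pages")
```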

In addition to contacting local representatives, Australian Online Solutions encourages concerned individuals and businesses to join and support organisations including Electronic Frontiers Australia (EFA), GetUp and The Pirate Party Australia. The immediate availability of a limited number of sponsorships for founding members of The Pirate Party Australia is also announced for those who want to get involved but, for whatever reason, cannot afford the membership fees in this difficult economic environment. To take advantage of this opportunity please contact membership@pirateparty.org.au with a brief explanation of your situation.

“Anyone who cares about their future and that of their children and grandchildren should take action now”, said Johnston, who applied to both The Pirate Party Australia and Electronic Frontiers Australia (EFA) in response to Senator Conroy’s announcement. “The government’s gift to us this Christmas was draconian censorship, so let’s return the favour in helping The Pirate Party Australia attain official status by acquiring 500 exclusive members”.

###

About Australian Online Solutions Pty Ltd
Australian Online Solutions is a boutique consultancy that specialises in cloud computing solutions for large enterprise, government and education clients throughout Australia, Europe and the USA. Founded in 1998, Australian Online Solutions has over a decade of experience delivering next generation Internet-based systems and is a pioneer in the cloud computing space, whereby technology previously delivered as hardware and software products is delivered as services over the Internet. Cloud computing is Internet (‘cloud’) based development and use of computer technology (‘computing’). For more information refer to http://www.aos.net.au/

About The Pirate Party Australia
The Pirate Party Australia (http://www.pirateparty.org.au/) is a political party with a serious platform of intellectual property law reform and protection of privacy rights and freedom of speech. The Pirate Party Australia aims to protect civil liberties and promote culture and innovation, primarily through:

  • Decriminalisation of non-commercial copyright infringement
  • Protection of freedom of speech rights
  • Protection of privacy rights
  • Opposition to internet censorship
  • Support for an R18+ rating for games
  • Reforming the life + 70 years copyright length
  • Providing parents with the tools to run their own families.

About Electronic Frontiers Australia (EFA)
Electronic Frontiers Australia (EFA) is a non-profit national organisation representing Internet users concerned with on-line freedoms and rights. The EFA is the organisation responsible for the “No Clean Feed” (http://nocleanfeed.com/) grassroots movement to stop Internet censorship in Australia. They are also dealing with related issues such as the Anti-Counterfeiting Trade Agreement (ACTA) and censorship of computer games. Individual memberships start at $27.50 and organisational memberships are available. For more information refer to http://www.efa.org.au/

About GetUp
GetUp is an independent, grass-roots community advocacy organisation that is actively tackling this and other pertinent issues including climate change. For more information about how to get involved refer to http://www.getup.org.au

About Sam Johnston
Sam Johnston, Australian Online Solutions’ Founder and CTO, is a prominent blogger on cloud computing, security and open source topics. He maintains a blog at https://samj.net/

Press Contact:
Sam Johnston
+61 2 8898 9090
Australian Online Solutions Pty Ltd

For the latest version of this release please refer to http://tinyurl.com/cloudcensor

If it’s dangerous it’s NOT cloud computing

Having written something similar over the weekend myself (How Open Cloud could have saved Sidekick users’ skins) I was getting ready to compliment this post, but fear-mongering title aside (Cloud Computing is Dangerous) I was dismayed to see this:

“Let’s call it what it is, it’s a cloud app — your data when using a Sidekick is hosted in some elses data center.”

I simply can not and will not accept this, and I’m not the only one:

“Help me out here. I’m seeing really smart people I totally respect jump on this T-Mobile issue as a ‘Cloud’ failure. Am I losing my mind?”

For a start, Sidekicks predate cloud by half a dozen years, with the first releases back in 2001. Are we saying that they were so far ahead (like Google) that we just hadn’t come up with a name for their technology yet? No. Is Blackberry cloud? No, it isn’t either. This was a legacy n-tier Internet-facing application that catastrophically failed, as many such applications do. It was NOT cloud. As Alexis Richardson pointed out to Redmonk’s James Governor, “if it loses your data – it’s not a cloud”.

While I know that this analogy is inconvenient for some vendors, it works and it’s the best we have: cloud is resilient in the same way that the electricity grid is resilient. Power stations do fail and we (generally) don’t hear about it. Similarly datacenters fail, get disconnected, overheat, flood, burn to the ground and so on, but these events should not cause any more than a minor interruption for end users. Otherwise how are they different from “legacy” web applications? Sure, occasionally we’ll have cloud computing “blackouts”, but we’ll learn to live with them just as we do today when the electricity goes out.

As a more specific example, if an Amazon DC fails you’ll lose your EC2 instances (the cost/performance hit of running lock-step across high latency links is way too high for live redundancy). However the virtual machine image itself should be automagically replicated across multiple geographically independent availability zones by S3 so it’s just a case of starting them again. If you’re using S3 directly (or Gmail for that matter) you should never need to know that something went wrong.
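To illustrate – and only to illustrate, since this uses the modern boto3 SDK with placeholder image, region and zone values rather than anything Amazon prescribed at the time – recovery amounts to little more than launching a replacement instance from the stored image in a healthy availability zone:

```python
# Illustration only: relaunching an EC2 instance from its stored image (AMI)
# in a different availability zone after a data centre failure. Uses the
# boto3 SDK; the AMI ID, region and zone are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # image persisted independently of the failed DC
    InstanceType="m1.small",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": "us-east-1b"},  # pick a healthy zone
)
print(response["Instances"][0]["InstanceId"])
```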

But Salesforce predates cloud by almost a decade, you say? This data point was a thorn in my side until I found this article (Salesforce suffers gridlock as database collapses) and the associated Oracle press release (Salesforce.com’s 267,000 Subscribers To Go On Demand With Oracle® Grid). With wording like “one of its four data hubs collapsed” in what “appears to be a database cluster crash”, I’m starting to question whether Salesforce really is as “cloudy” as they claim (and are assumed) to be. Indeed the URL I’m staring at as I use Salesforce.com now (https://na1.salesforce.com/home/home.jsp – emphasis mine) would suggest that it is anything but. NA1 is one of half a dozen different data centers, and their “cloud” only appears as a single point when you log in (http://login.salesforce.com/), at which time you are redirected to the one that hosts your data. Is it any wonder then that it’s Google and Amazon that are topping the surveys now rather than Microsoft and Salesforce?

Don’t get me wrong – Salesforce.com is a great company with a great product suite that I use and recommend every day. They may well be locked in to a legacy n-tier architecture but they do a great job of keeping it running at large scale and I almost can’t believe it’s not cloud. I see it as “Software. As a Service”, bearing in mind that it’s replacing some piece of software that traditionally would have run on the desktop by delivering it over the Internet via the browser. SaaS is, if anything, a subset of cloud and I’m sure that nobody here would suggest that any old LAMP application constitutes cloud. But we digress…

I honestly thought we had this issue resolved last year, having spent an inordinate amount of time discussing, blogging, writing Wikipedia articles and generally trying to extract sense (and consensus) from the noise. I was apparently wrong as even our self-appointed spokesman has foolishly conceded that what can only really be described as gross negligence in IT operations and a crass act of stupidity is somehow a failure of the cloud computing model itself. I agree completely with Chris Hoff in that “This T-Mobile debacle is a good thing. It will help further flush out definitions and expectations of Cloud. (I can dream, right?)” – it’s high time for us to revisit and nail the issue of what is (and more importantly, what is not) cloud once and for all.

How Open Cloud could have saved Sidekick users’ skins

The cloud computing scandal of the week is shaping up to be the catastrophic loss of millions of Sidekick users’ data. This is an unfortunate and completely avoidable event that Microsoft’s Danger subsidiary and T-Mobile (along with the rest of the cloud computing community) will surely very soon come to regret.

There are plenty of theories as to what went wrong – the most credible being that a SAN upgrade was botched, possibly by a large outsourcing contractor, and that no backups were taken despite space being available (though presumably not on the same SAN!). Note that while most cloud services exceed the capacity/cost ceiling of SANs and therefore employ cheaper horizontal scaling options (like the Google File System), this is, or should I say was, a relatively small amount of data. As such there is no excuse whatsoever for not having reliable, off-line backups – particularly given Danger is owned by Microsoft (previously considered one of the “big 4” cloud companies, even by myself). It was a paid-for service too (~$20/month or $240/year?), which makes even the most expensive cloud offerings like Apple’s MobileMe look like a bargain (though if it’s any consolation, the fact that the service was paid for rather than free may well come back to bite them by way of the inevitable class action lawsuits).

“Real” cloud storage systems transparently ensure that multiple copies of data are automatically maintained on different nodes, at least one of which is ideally geographically independent. The fact that I see the term “SAN” appearing in the conversation suggests that this was a legacy architecture far more likely to fail; purpose-built cloud architectures are more reliable in the same way that today’s aircraft are far safer than yesterday’s and today’s electricity grids far more reliable than earlier ones (the Sidekick apparently predates Android and the iPhone by some years, after all). It’s hard to say with any real authority what is and what is not cloud computing, though, beyond saying that “I know it when I see it, and this ain’t it”.
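To make that concrete, here’s a toy sketch – emphatically not any particular vendor’s implementation – of what “transparently maintaining multiple copies” means: a write only succeeds once enough replicas, including at least one in another region, have acknowledged it.

```python
# Toy illustration of replicated writes: each object is written to several
# nodes, at least one of them in a different region, and the write is only
# acknowledged once a minimum number of replicas succeed. Purely a sketch.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    region: str
    store: dict = field(default_factory=dict)

def replicated_put(nodes, key, value, min_copies=3):
    written = []
    for node in nodes:
        node.store[key] = value            # in reality an RPC that can fail
        written.append(node)
    if len(written) < min_copies:
        raise RuntimeError("not enough replicas acknowledged the write")
    if len({n.region for n in written}) < 2:
        raise RuntimeError("no geographically independent copy")
    return [n.name for n in written]

nodes = [
    Node("node-a", "eu-west"),
    Node("node-b", "eu-west"),
    Node("node-c", "us-east"),             # the geographically independent copy
]
print(replicated_put(nodes, "contacts.vcf", b"..."))
```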

Whatever the root cause, the result is the same – users who were given no choice but to store their contacts, calendars and other essential day-to-day data on Microsoft’s servers appear to have irretrievably lost it. Friends, family, acquaintances and loved ones – even (especially?) the boy/girl you met at the bar last night – may be gone for good. People will miss appointments, lose business deals and in the most extreme cases could face severe hardship as a result (for example, I’m guessing parole officers don’t take kindly to missed appointments with no contact!). The cost of this failure will (at least initially) be borne by the users, and yet there was nothing they could have done to prevent it short of choosing another service or manually transcribing their details.

The last hope for them is that Microsoft can somehow reverse the caching process in order to remotely retrieve copies from the devices (which are effectively dumb terminals) before they lose power; good luck with that. While synchronisation is hard to get right, having a single cloud-based “master” and a local cache on the device (as opposed to a full, first-class citizen copy) is a poor design decision. I have an iPhone (actually I have a 1G, 3G, 3GS and an iPod Touch) and they’re all synchronised together via two MacBooks and in turn to both a Time Machine backup and Mozy online backup. As if that’s not enough all my contacts are in sync with Google Apps’ Gmail over the air too so I can take your number and pretty much immediately drop it in a beer without concern for data loss. Even this proprietary system protects me from such failures.

The moral of the story is that externalised risk is a real problem for cloud computing. Most providers [try to] avoid responsibility by way of terms of service that strip away users’ rights, but it’s a difficult problem to solve because enforcing liability for anything but gross negligence can exclude smaller players from the market. That is why users absolutely must have control over their data and be encouraged, if not forced, to take responsibility for it.

Open Cloud simply requires open formats and open APIs – that is to say, users must have access to their data in a transparent format. Even if it doesn’t make sense to maintain a local copy on the users’ computer, there’s nothing stopping providers from pushing it to a third party storage service like Amazon S3. In fact it makes a lot of sense for applications to be separated from storage entirely. We don’t expect our operating system to provide all the functionality we’ll ever need (or indeed, any of it) so we install third party applications which use the operating system to store data. What’s to stop us doing the same in the cloud, for example having Google Apps and Zoho both saving back to a common Amazon S3 store which is in turn replicated locally or to another cloud-based service like Rackspace Cloud Files?
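A minimal sketch of that separation, assuming Amazon’s S3 (via the modern boto3 SDK) as the primary store and a hypothetical S3-compatible endpoint standing in for the second provider (Cloud Files itself speaks a different API), might look like this:

```python
# Sketch of the "application separate from storage" idea: the application
# saves a document to a primary object store and mirrors it to a second,
# independent provider. boto3/S3 is real; the secondary endpoint, bucket
# names and key are hypothetical placeholders.
import boto3

primary = boto3.client("s3")
secondary = boto3.client("s3", endpoint_url="https://objects.example-provider.net")  # hypothetical

def save_document(key, body):
    primary.put_object(Bucket="my-app-data", Key=key, Body=body)
    secondary.put_object(Bucket="my-app-data-mirror", Key=key, Body=body)

save_document("documents/report.odt", b"...")
```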

In any case perhaps it’s time for us to dust off and revisit the Cloud Computing Bill of Rights?

“Bare Metal” cloud infrastructure “compute” services arrive

Earlier in the year during the formation of the Open Cloud Computing Interface (OCCI) working group I described three types of cloud infrastructure “compute” services:

  • Physical Machines (“Bare Metal”) which are essentially dedicated servers provisioned on a utility basis (e.g. hourly), whether physically independent or just physically isolated (e.g. blades)
  • Virtual Machines, which nowadays use hypervisors to split the resources of a physical host amongst various guests, where both the host and each of the guests run a separate operating system instance. For more details on emulation vs virtualisation vs paravirtualisation see a KB article I wrote for Citrix a while back: CTX107587 Virtual Machine Technology Overview
  • OS Virtualisation (e.g. containers, zones, chroots) which is where a single instance of an operating system provides multiple isolated user-space instances.

While the overwhelming majority of cloud computing discussions today focus on virtual machines, the reason for my making the distinction was so that the resulting API would be capable of dealing with all possibilities. The clouderati are now realising that there’s more to life than virtual machines and that the OS is like “a cancer that sucks energy (e.g. resources, cycles), needs constant treatment (e.g. patches, updates, upgrades) and poses significant risk of death (e.g. catastrophic failure) to any application it hosts”. That’s some good progress – now if only the rest of the commentators would quit referring to virtualisation as private cloud so we can focus on what’s important rather than maintaining the status quo.
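To illustrate the point about one API covering all three – this is a hypothetical sketch, not the actual OCCI schema – the “kind” of compute resource is just another attribute alongside the usual sizing and billing parameters:

```python
# Hypothetical sketch (not the actual OCCI schema) of a single compute
# resource model that covers bare metal, virtual machines and OS-level
# containers alike: the "kind" is just another attribute.
from dataclasses import dataclass
from enum import Enum

class ComputeKind(Enum):
    BARE_METAL = "bare-metal"
    VIRTUAL_MACHINE = "virtual-machine"
    CONTAINER = "container"

@dataclass
class Compute:
    kind: ComputeKind
    cores: int
    memory_gb: float
    hourly_rate: float   # utility billing, whatever the underlying kind

fleet = [
    Compute(ComputeKind.BARE_METAL, cores=8, memory_gb=32, hourly_rate=0.15),
    Compute(ComputeKind.VIRTUAL_MACHINE, cores=2, memory_gb=4, hourly_rate=0.10),
    Compute(ComputeKind.CONTAINER, cores=1, memory_gb=0.5, hourly_rate=0.007),
]
```

The point being that a client shouldn’t have to care which kind it gets, so long as the resources and the metering behave as requested.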

Anyway, such cloud services didn’t exist at the time, but in France at least we did have providers like Dedibox and Kimsufi who would provision a fixed-configuration dedicated server for you pretty much on the spot, starting at €20/month (<€0.03/hr or ~$0.04/hr). I figured there was nothing theoretically stopping this being fully automated and exposed via a user (web) or machine (API) interface, in which case it would be indistinguishable from a service delivered via VM (except for a higher level of isolation and performance). Provided you’re billing as a utility (that is, users can consume resources as they need them and are billed only for what they use) rather than monthly or annually, and taking care of all the details “within” the cloud, there’s no reason this isn’t cloud computing. After all, as an end user I needn’t care if you’re providing your service using an army of monkeys, so long as you are. PCI compliance anyone?

Virtually all of the cloud infrastructure services people talk about today are based on virtual machines and the market price for a reasonably capable one is $0.10/hr or around $72.00 per month. That’s said to be 3-5x more than cost at “cloud scale” (think Amazon) so expect that price to drop as the market matures. Rackspace Cloud are already offering small Xen VMs for 1.5c/hr or ~$10/month. I won’t waste any more time talking about these offerings as everyone else already is. This will be a very crowded space thanks in no small part to VMware’s introduction of vCloud (which they claim turns any web hoster into a cloud provider) but with the hypervisor well and truly commoditised I assure you there’s nothing to see here.
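For those keeping score, the hourly and monthly figures quoted throughout this post convert roughly as follows (a quick sketch assuming a 720-hour month):

```python
# Quick conversion between the hourly and monthly prices quoted in this post,
# assuming a ~720-hour (30-day) month.
HOURS_PER_MONTH = 24 * 30

offers = {
    "typical VM":            ("hourly", 0.10),    # -> ~$72/month
    "Rackspace Cloud small": ("hourly", 0.015),   # -> ~$10.80/month
    "Dedibox-style server":  ("monthly", 20.0),   # EUR -> ~EUR 0.028/hour
    "entry-level VPS":       ("monthly", 5.0),    # -> ~$0.007/hour (0.7c)
}

for name, (unit, price) in offers.items():
    if unit == "hourly":
        print(f"{name:24s} {price:.3f}/hr   = {price * HOURS_PER_MONTH:6.2f}/month")
    else:
        print(f"{name:24s} {price:6.2f}/month = {price / HOURS_PER_MONTH:.4f}/hr")
```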

On the lightweight side of the spectrum, VPS providers are a dime a dozen. These guys generally slice Linux servers up into tens if not hundreds of accounts for only a few dollars a month and take care of little more than the (shared) kernel, leaving end users to install the distribution of their choice as root. Solaris has zones, and even Windows has MultiWin built in nowadays (that’s the technology, courtesy of Citrix, that allows multiple users each having their own GUI session to coexist on the same machine – it’s primarily used for Terminal Services & Fast User Switching, but applications and services can also run in their own context). This delivers most of the benefits of a virtual machine, only without the overhead and cost of running and managing multiple operating systems side by side. Unfortunately nobody’s really doing this yet in the cloud, but if they were you’d be able to get machines for tasks like mail relaying, spam filtering, DNS, etc. for literally a fraction of a penny per hour (VPSs start at <$5/m or around 0.7c/hr).

So the reason for my writing this post today is that SoftLayer this week announced the availability of “Bare Metal Cloud” starting at $0.15 per hour. I’m not going to give them any props for having done so, thanks to their disappointing attempt to trademark the obvious and generic term “bare metal cloud” and to unattractive hourly rates that are almost four times the price of the monthly packages by the time you take into account data allowances. I will however say that it’s good to see this prophecy (however predictable) fulfilled.

I sincerely hope that the attention will continue to move further away from overpriced and inefficient virtual machines and towards more innovative approaches to virtualisation.