25 July 2008

The future of cloud computing - an army of monkeys?

I don't care if my cloud computing architecture is powered by a grid, a mainframe, my neighbour's desktop or an army of monkeys, so long as it's fast, cheap and secure.
There's been a good deal of confusion of late between the general concept of cloud computing, which I define as "Internet ('Cloud') based development and use of computer technology ('Computing')", and its various components (autonomic, grid & utility computing, SaaS etc.). Some of this confusion is understandable, given that issues get complex quickly once you start peeling off the layers, but much of it comes from the very same opportunistic hardware & software vendors who somehow convinced us years ago that clusters had become grids. These same people are now trying to convince us that grids have become clouds in order to sell us their version of a 'private cloud' (which we would be led to believe is any large, intelligent and/or reliable cluster)[1].

Let's not forget that much of the value of The Cloud (remember, like the Highlander "there can be only one") comes from characteristics that simply cannot be replicated internally, like not having to engineer for peak loads and being able to build on top of the ecosystem's existing services. Yes, you can build a cloud computing architecture with large, intelligent clusters that are second-rate citizens or 'clients' of The Cloud (as most of these 'private clouds' will be), but calling them 'clouds' is a stretch at best and deceptive at worst - let's call a spade a spade, shall we?
The Cloud is what The Grid could have been.
The term 'Grid' was coined by the likes of Ian Foster in the 90s to describe technologies that would 'allow consumers to obtain computing power on demand', following on from John McCarthy's 1961 prediction that 'computation may someday be organized as a public utility'. While it is true that much of the existing cloud infrastructure is powered by large clusters (what these vendors call grids), there are some solid, successful counterexamples including:
  • BitTorrent, which shares files between a 'cloud' of clients
  • SETI@home, which distributes computational tasks between volunteers
  • Skype, which has minimal centralised infrastructure for tasks like account creation and authentication, delegating what it can to 'supernode' clients
By focusing on batched computational workloads and maximizing peak processing performance rather than efficiently servicing individual requests, grid computing has painted itself into a corner (or at least solves a different set of problems) thus creating a void for The Cloud to fill.
The Cloud is like the electricity network, only photons are more convenient than electrons, so the emergence of a single global provider is a possibility (some would say a threat).
Perhaps Thomas J. Watson Sr. (then president of IBM) was right when he was famously [mis]quoted as predicting a worldwide market for 5 computers back in 1943. On one hand, without the physical constraints of electrons (eg attenuation, crosstalk) it is conceivable that our needs could be serviced by photons channelled over fibre optics to one massive, centralised computing fabric. We don't have national water grids simply because water is too heavy, and even electrons get unmanageable on this scale (though many problems were solved by standardising and moving to alternating current), but weightless photons have no such limitation. At the other end of the scale we distribute the load across relatively tiny devices which may well outnumber their masters (pun intended). The reality will almost certainly fall somewhere in between, perhaps not too far from what we have today: a handful of global providers, scores of regional outfits and then the army of niche players. The forces of globalisation, unusually free of geographic constraints, will almost certainly affect how this plays out by drawing in providers from emerging economies.
The Cloud equivalent of an electron could be a standardised workload consisting of a small bundle of (encrypted) data and the code required to perform an action upon it.
Much of the infrastructure is already in place, but in order to better approximate the electricity grid we need a 'commodity' analogous to the electron. Today we transfer relatively large workloads (eg virtual machines, application bundles, data sets) to our providers, who run them for a relatively long time (days, weeks, months); however it's possible to conceive of far more granular alternatives for many applications. These could be processed by networked computing resources in much the same way as the Cell processors that power the PlayStation 3 handle 'apulets'.
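To make the analogy concrete, here's a purely illustrative Python sketch of such a workload bundle - the class, field names and toy XOR 'encryption' are hypothetical stand-ins, not a proposed standard:

import hashlib

def xor(data, key):
    # Toy stand-in for real encryption, purely to keep the sketch self-contained
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Workload(object):
    """A small bundle of (encrypted) data plus the code required to act upon it."""
    def __init__(self, data, key, action):
        self.blob = xor(data, key)          # opaque to the processing node
        self.action = action                # in practice, portable code or bytecode
        self.digest = hashlib.sha256(self.blob).hexdigest()  # integrity check

# The owner prepares a workload; any node in the fabric can run the action
# against the opaque blob without ever seeing the plaintext or the key
w = Workload(b"hello world", b"secret", action=lambda blob: len(blob))
print(w.action(w.blob), w.digest[:8])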

These resources could be anything from massive centralised data centres to their modular counterparts or indeed your neighbour's idle computer (which would pump 'computing resources' into the cloud in the same way as enterprising individuals can receive rebates for negative net consumption of electricity). Assuming you were to be billed at all, it would likely be per unit (eg MIPS and RAM time rather than kWh) and at prices set by a marketplace not unlike the existing electricity markets. There may be more service specifications than voltage and frequency (eg security, speed, latency), and compliance with the service contract(s) would be constantly validated by the marketplace. In any case, given Moore's law and rapid advances in computing hardware (particularly massively parallel processing) it is impossible to predict accurately more than a few years out how these resources and marketplaces will look, but we need to start thinking outside the box now.
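As a back-of-the-envelope illustration of per-unit billing, here's a toy Python calculation - the unit names and spot prices are invented for the example, not any provider's actual schedule:

# Hypothetical metered usage for one workload
usage = {"mips_hours": 120.0, "gb_ram_hours": 4.0, "gb_transferred": 1.5}

# Hypothetical spot prices, as a computing marketplace might publish them
prices = {"mips_hours": 0.0001, "gb_ram_hours": 0.05, "gb_transferred": 0.10}

bill = sum(units * prices[resource] for resource, units in usage.items())
print("Total: $%.4f" % bill)   # $0.3620 at these made-up rates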

For those that are looking for more background information, or a more formal comparison between the different components, check out Wikipedia's cloud computing article which I have been giving a much needed overhaul:
Cloud computing is often confused with grid computing (a form of distributed computing whereby a "super and virtual computer" is composed of a cluster of networked, loosely-coupled computers acting in concert to perform very large tasks), utility computing (the packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity) and autonomic computing (computer systems capable of self-management)[4]. Indeed many cloud computing deployments are today powered by grids, have autonomic characteristics and are billed like utilities, but cloud computing is rather a natural next step from the grid-utility model[5]. Some successful cloud architectures have little or no centralised infrastructure or billing systems whatsoever, including peer-to-peer networks like BitTorrent and Skype and volunteer computing projects like SETI@home.
Updated: 4 August 2008 16:00 UTC to expand on electricity grid comparison, for republication in the Cloud Computing Journal
Updated: 5 August 2008 06:00 UTC to clarify the source of this definition, per Is a Grid a Cloud? Probably not, but... [1]

24 July 2008

The Cloud and Cloud Computing consensus definition?


"Cloud Computing is the realisation of Internet ('Cloud') based development and use of computer technology ('Computing') delivered by an ecosystem of providers."
It's amazing that such a simple concept has caused so much confusion, but having spent the last few days reviewing the recent discussions it seems many are falling into the trap of trying to align Cloud Computing with (or contrast it against) existing terminology like SaaS and Utility Computing. It is in fact far more suitable as an umbrella term encompassing all of these related components.

'The Cloud'

While there can be multiple definitions of Cloud Computing, for The Cloud itself 'there can be only one', as it's a metaphor for the Internet; people talking about clouds (plural) are probably confusing it with grids. Yes, you can replicate some of this in a 'private cloud', but it will always be exactly that: a replica, and it will likely be somehow connected to (and therefore part of) the real cloud anyway. Remember, much of the value of Cloud Computing comes from leveraging other services in The Cloud for a result greater than the sum of its parts.

Why 'The Cloud'?

Remember all those network diagrams with a fluffy cloud in the middle? Why a cloud and not a black box or some other device? Because we simply don't know, and better yet don't need to know, what goes on in there - we just pass a packet down our pipe and (most of the time) it arrives at its destination. This is an abstraction (in reality the Internet is an incredibly complex beast) but an important one: it significantly reduces the complexity of our systems. A good example is the way relatively simple VPNs have quickly displaced many complex WANs.

Definition

Let's break down my definition (which I came to by collating the assertions that were in line with my view and then boiling the result down to the basic common elements):

"Cloud Computing...
  • ...is the realisation of...
    While many of the requisite components have been available in various forms for some time (eg Software as a Service, Utility Computing, Web Services, Web 2.0, etc.) it is only now, as they reach critical mass, that the Cloud Computing concept is working its way into the mainstream. Being more a collection of trends (a 'metatrend') than a single technology, it still has some way to go, but Cloud Computing solutions are a reality today and will rapidly mature and expand into virtually every corner of our lives and enterprises.
  • ...Internet ('Cloud') based...
    Although some have [ab]used the 'Cloud Computing' term in reference to infrastructure (particularly grid computing, like Amazon's pioneering Elastic Compute Cloud), much of its value is derived from the universal connectivity of the Internet; between businesses (B2B e.g. Web Services like Amazon Web Services), businesses and consumers (B2C e.g. Web 2.0 like Google Apps) and between consumers themselves (C2C e.g. peer to peer like BitTorrent). Many of us are now connected to 'The Cloud' where we work (office), rest (home) and play (mobile) and there are solutions (eg Gears) for when we are not.
  • ...development and use of computer technology ('Computing')...
    An accepted, all-encompassing definition of computing - there are very few areas which will not be affected in some way by Cloud Computing, so I've gone for the broadest possible definition.
  • ...delivered by an ecosystem of providers."
    While it is possible to enjoy some of the advantages using a single provider (eg Google), it is hard to imagine a functionally complete solution which does not draw on multiple providers (in much the same way as we install task-specific applications onto our legacy computers). Your electricity is almost certainly generated by wholesale providers who pump it into the grid, and similarly Cloud Computing will typically be delivered by layered (eg SmugMug on Amazon S3) and/or interconnected (eg Facebook<->Twitter) systems.
Cloud Computing Architecture

Cloud Computing is typically universally accessible, massively scalable (with vast pools of multi-tenant 'on-demand' resources), highly reliable (see my TrustSaaS site for proof that the main services are up over 99% of the time), cost effective and utility priced with low barriers to entry (eg capital expenditure, professional services), but none of these attributes are absolute requirements (no, not even massive scalability - even an esoteric web service may still be an absolute requirement for a small handful of users and thus an important part of the ecosystem).

Cloud Computing architecture looks something like this, with layers similar to the OSI networking stack:
Services
  • Client: consumes these applications via a browser and/or programmatically
  • Composite (Composite Applications or Mashups): linked together using APIs like REST (eg TrustSaaS), in much the same way as 'pipes' are used in Unix to create arbitrarily complex systems from simple tools (see the sketch after this stack)
Software
  • Application: ideally follows the proven Unix philosophy of 'do one thing and do it well', but may grow quite complex
  • Platform: on which applications are built, including the language itself (eg Java, Python) as well as supporting systems like storage
Hardware
  • Infrastructure: consisting of the physical computing resources (and virtualisation layer(s) at the hardware and/or operating system layers)
  • Networking: courtesy of the existing Internet (eg TCP/IP)
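As a sketch of the 'pipes' analogy, composing two REST services in Python looks much like chaining commands in a shell - the endpoints below are hypothetical, for illustration only:

import urllib.parse
import urllib.request

def get(url):
    # One 'pipe' stage: GET a URL and return its body as text
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

# Hypothetical services: fetch a quote, then feed it to a translator,
# much like `fetch | translate` in a Unix shell
quote = get("http://quotes.example.com/random")
print(get("http://translate.example.com/?text=" + urllib.parse.quote(quote)))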

Cloud Computing Components

Although many of these are solutions to the same problems, most of them are actually components of Cloud Computing, rather than Cloud Computing itself (working from the ground up):
  • Grid computing, any network of loosely-coupled computers acting in concert, is mostly concerned with tackling complexity and improving manageability of computing resources (for example, production servers not being taken down by server failures or routine maintenance). You'll find grids outside of Cloud Computing architectures, though there is a [vendor driven] tendency to confuse the two (particularly where some intelligent/autonomic management aspects are involved). Don't make this mistake yourself; although many Cloud Computing systems are based on grids because their scalability needs can only be satisfied by horizontal scaling (usually involving thousands of commodity grade PCs), these are very different animals.
  • Virtualisation (in the Cloud Computing context), which allows you to deploy a virtual server where you might otherwise have provisioned physical hardware, is an enabler for Infrastructure as a Service (IaaS). Increased automation of operating system and application deployment is pushing the interface further and further up towards the application layer itself (eg Desktone's Desktop as a Service).
  • Infrastructure as a Service (IaaS) (Amazon EC2, GoGrid, AppNexus): While Internet ('cloud') connected grids are particularly useful (and a natural progression for the virtualisation and SOA solutions being rolled out en masse in enterprises today), implying that this is somehow equivalent to cloud computing is too narrow a view. Integrate a SaaS/utility style billing system into a traditional grid and you've got Infrastructure as a Service (IaaS). These are more cost effective, reliable, scalable and user friendly than their disconnected counterparts and are one big step closer to the panacea of autonomic computing. Expect to see existing 'virtual infrastructure' providers like VMware and Citrix seamlessly complementing on-premises solutions with cloud based services.
  • Platform as a Service (PaaS) (Google's AppEngine, Salesforce's force.com, Heroku, Joyent, Rackspace's Mosso): takes grid computing to the next level of abstraction by pushing the interface up to the platform or 'stack' on which applications themselves are built (eg Django, Ruby on Rails, Apex Code). This is primarily interesting for developers and power users and is an increasingly important component of the cloud computing ecosystem. It allows them to focus on development without the overhead of hardware and operating system maintenance, database tuning, load balancing, network connectivity etc. while exposing technology like BigTable (and massive scalability) which might not otherwise be available to them. More importantly, it eliminates capital expenditure requirements, allowing boutique Independent Software Vendors like us to 'stay in the game'.
  • Utility Computing (Amazon S3) is more about a 'utility' (gas, water, electricity) pricing model, yet one can derive the benefits of cloud computing with a more traditional pricing model, or indeed without having to pay at all (consider Google's AppEngine for example, where its utility-style pricing applies only to the more demanding users).
  • Web Services (Amazon Web Services), 'the glue that holds cloud computing components together', are finally maturing and being adopted en masse thanks in no small part to simplification by way of protocols like REST, commercialisation by providers like Amazon (Jeff Bezos' Risky Bet) and the abundance of web toolkits (e.g. Ruby on Rails) which lower the barrier to entry by providing native support. You can do everything from payments to 'human intelligence tasks' with Web Services now, and mashups rely on them heavily to make products that are greater than the sum of their parts. Companies like Ariba and Rearden Commerce are taking this to the extreme.
  • Web 2.0 (Wikipedia, Facebook, WebEx) which while a force in itself, deals more with making the web 'read/write', shifting power towards the consumer and leveraging their collective energy. While AJaX does a lot to make this environment more user friendly, the underlying theme is turning the 'reader' into a 'contributor'. Most of the players in cloud computing exhibit Web 2.0 attributes.
  • Software as a Service (SaaS): (Google Apps, Salesforce CRM) falls under the cloud computing umbrella and is a primary component, but to align the two definitions is too narrow a view. SaaS is typically sold per user as pizza is per slice, but what is more important is that it is implemented and maintained by a provider who handles much of the complexity of running software on your behalf (eg scaling, backups, updates, etc.).
  • 'Cloud' System Integrators (Australian Online Solutions) and consultancies deploy the various components, make them work in concert (using services like RightScale), integrate them with each other and with legacy systems using the exposed APIs, and migrate data (eg email, calendars, contacts, documents, etc.) so that users can 'hit the ground running' and continue to collaborate efficiently with those who have not yet migrated 'to the cloud'. Seamless migration is a reality today, and a critical component for cloud adoption.
Cloud Computing Today
The Cloud Computing revolution is upon us. Expect it to rapidly permeate your enterprise, with much of the drive coming from individual grassroots users (who are almost certainly already improving operational efficiency with Web 2.0 tools like Google, Salesforce and WebEx), so plan accordingly. It must be embraced for competitiveness rather than resisted (in much the same way as the PC was embraced decades ago), but it also requires careful governance and change management by experts. Low risk, high return offerings like messaging and web security are available for those who want to 'test the water' without opting for a complete Enterprise 2.0 deployment.

The draw of loosely coupled, massively scalable services will eventually result in most enterprises being swallowed by the cloud (or by more agile, possibly 'digital native' competitors who already have been), or at least becoming nodes on it; indeed many already have. Barriers to adoption (eg offline support, security and compliance services) are being torn down every day and practical solutions exist for those that remain (eg encryption), so there are fewer and fewer reasons to sit on the sidelines.

Even the largest of enterprises are now starting to jump (typically having completed controlled pilots), and just as company officers would have difficulty explaining downtime losses caused by continuing to generate their own power after cheap, reliable utility electricity became available, shareholders will not accept companies wasting resources on commoditised infrastructure rather than focusing on their core competencies.

Thanks to Jeff Kaplan, Markus Klems, Reuven Cohen, Douglas Gourlay, Praising Gaw, Jimmy Pike, Damon Edwards, Brian de Haaf, Ben Kepes, Jack van Hoof, Kirill Sheynkman, Ken Ostreich, James Urquhart, Thorsten von Eicken, Omar Sultan, Nick Carr and others for their inadvertent contributions.


This article is also available as a Google Knol: Cloud Computing.

22 July 2008

DNS is dead... long live DNS!

Most of us rely heavily (more heavily than we realise, and indeed should) on this rickety old thing called DNS (the Domain Name System), which was never intended to scale as it did, nor to defend against the kinds of attacks it is subjected to today.

The latest DNS related debacle is (as per usual) related to cache poisoning, which is where your adversary manages to convince your resolver (or more specifically, one of the caches between your resolver and the authoritative servers for the site/service you intend to connect to) that they are in fact the one you want to be talking to. Note that these are not man-in-the-middle (MitM) attacks; if someone can see your DNS queries you're already toast - these are effective, remote attacks that can be devastating:
Consider for example your average company using POP3 to retrieve mail from their mail server every few minutes, in conjunction with single sign on; convince their cache that you are their mail server and you will have everyone's universal cleartext password in under 5 minutes.
The root of the problem(s) is that the main security offered in a DNS transaction is the query ID (QID), for which there are only 16 bits (ie 65,536 possible values). Even when properly randomised (as was already the case for sensible implementations like djbdns, but not for earlier attempts which foolishly used sequential numbering), fast computers and links can make a meal of this in no time (read: seconds), given enough queries. Fortunately you typically only get one shot for a given name (for any given TTL period - usually 86,400 seconds; 1 day), and even then you have to beat the authoritative nameserver with the (correct) answer. Unfortunately, if you can convince your victim to resolve a bunch of different domains (a.example.com, b.example.com ... aa.example.com and so on) then you'll eventually (read: seconds) manage to slip one in.
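To get a rough feel for the numbers, here's a short Python sketch; the forged-packets-per-query figure is an assumption for illustration:

# With a 16-bit QID the attacker has a 1-in-65,536 chance per forged response;
# forcing many distinct lookups gives many independent chances to win the race
QID_SPACE = 2 ** 16
forged_per_query = 100    # spoofed answers raced against the real one (assumed)
p_miss = 1.0 - float(forged_per_query) / QID_SPACE

for lookups in (1, 10, 100, 1000):
    p_win = 1.0 - p_miss ** lookups
    print("%4d forced lookups -> %5.1f%% chance of poisoning" % (lookups, 100 * p_win))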

So what, you say? You've managed to convince a caching server that azgsewd.victim.com points at your IP - big deal. But what happens if you slipped in extra resource records (RRs) for, say, www.victim.com or mail.victim.com? A long time ago you might have been able to get away with this attack simply by smuggling unsolicited answers for victim.com queries along with legitimate answers to legitimate queries, but we've been discarding unsolicited answers (at least those that were not 'in-bailiwick'; ie from the same domain) for ages. Here, however, you've got a seemingly legitimate answer to a seemingly legitimate question plus extra RRs from the same 'in-bailiwick' domain, which can be accepted by the cache as legitimate and served up to all the clients of that cache for the duration specified by the attacker.
This is a great example of multiple seemingly benign vulnerabilities being [ab]used together such that the result is greater than the sum of its parts, and is exactly why you should be very, very sure about discounting vulnerabilities (for example, a local privilege escalation vulnerability on a machine with only trusted users can be turned into a nightmare if coupled with a buffer overrun in an unprivileged daemon).
Those who still think they're safe because an attacker needs to be able to trigger queries are sadly mistaken too. Are your caching DNS servers secure (bearing in mind UDP queries can be trivially forged)? Are your users' machines properly secured? What about the users themselves? Will they open an email offering free holidays (containing images which trigger resolutions) or follow a URL on a flyer handed to them at the local metro station, café or indeed, right outside your front door? What about your servers - is there any mechanism to generate emails automatically? Do you have a wireless network? VPN clients?

Ok so if you're still reading, you've either patched already or you were secure beforehand, as we were at Australian Online Solutions, given our DNS hosting platform doesn't cache; we separate authoritative from caching nameservers, and our caches have used random source ports from the outset. This increases the search space from 16 bits (65k combinations) to just shy of 32 bits (4+ billion combinations), since some ports are out of bounds. If you're not secure, or indeed not sure if you are, then contact us to see how we can help you.
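For the record, the arithmetic behind that claim is simple (the number of usable source ports varies by configuration, so this is approximate):

QIDS = 2 ** 16            # 16-bit query ID
PORTS = 2 ** 16 - 1024    # usable source ports, roughly (assumed)

print("QID only:       %d combinations" % QIDS)
print("QID + src port: %d combinations" % (QIDS * PORTS))  # ~4.2 billion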

20 July 2008

Writing Valid XHTML 1.1

There are a lot of good reasons to write valid XHTML (even if the vast majority of sites don't bother):
  • Your site will render better, faster and more consistently across all browsers.
  • Your layout will be pushed from tables and tags to CSS, separating data from presentation and reducing maintenance costs.
  • Computers (most notably, search engines) will be able to parse and make sense of your content more easily than they otherwise could.
  • You're supporting standards compliance (which translates to freedom for you and your users) and you can advertise valid XHTML using the W3C logos:


Once you've gone to the effort of writing valid XHTML and CSS, and the W3C Markup Validation Service (http://validator.w3.org/) is happy with your efforts, you'll still want to make sure you're serving your content with the right mime-type: application/xhtml+xml - but only to browsers that support it (and ask for it via the HTTP Accept: header)... most notably not IE6 :|

It's unfortunate that those who care about standards compliance have to jump through hoops by implementing content negotiation, but it's not too hard to do... for example, in PHP you can do something like this:

<?php
// Tell intermediary caches that the response varies with the Accept header
header("Vary: Accept");
// Serve the XHTML mime-type only to clients that claim to accept it; the W3C
// validator doesn't send an Accept header by default, so match its user agent
if (stristr($_SERVER["HTTP_ACCEPT"], "application/xhtml+xml") ||
    stristr($_SERVER["HTTP_USER_AGENT"], "W3C_Validator")) {
    header("Content-Type: application/xhtml+xml; charset=utf-8");
} else {
    header("Content-Type: text/html; charset=utf-8");
}
?>

Notice that the validator won't send an Accept header by default. You can force it to, but I'm just checking for the user agent; if you don't you'll get a warning about the mime-type even if the document is valid (and you're serving it correctly).

Note also that neither this blog nor the Australian Online Solutions blog are valid XHTML (and I'm not rewriting Blogger templates to make them compliant), but TrustSaaS.com is and validates cleanly.

18 July 2008

Apple iPhone 2.0: The real story behind MobileMe Push Mail and Jabber/XMPP Chat

So those of you who anticipated a Jabber/XMPP chat client on the iPhone (and iPod Touch) after TUAW rumoured that 'a new XMPP framework has been spotten(sic) in the latest iPhone firmware' back in April were close... but no cigar. Same applies for those who hypothesised about P-IMAP or IMAP IDLE being used by MobileMe for push mail.

The real story, as it turns out, is that Jabber (the same open protocol behind many instant messaging networks including Google Talk) is actually being used for delivering push mail notifications to the iPhone. That's right, you heard it here first. This would explain not only why the libraries were curiously private (in that they are not exposed to developers) but also why IMAP IDLE support only works while Mail.app is open (it's a shame because Google Apps/Gmail supports IMAP IDLE already).

While it's in line with Apple's arguments about background tasks hurting user experience (eg performance and battery life), cluey developers have noted that the OS X (Unix) based iPhone has many options to safely enable this functionality (eg via resource limiting) and that the push notification service for developers is only a partial solution. It's no wonder though, with the exclusive carrier deals built on cellular voice calls and SMS traffic, both of which could be eroded away entirely if products like Skype and Google Talk were given free rein (presumably this is also why Apple literally hangs onto the keys for the platform). If you want more freedom you're going to have to wait for Google Android, or for ultimate flexibility, one of the various Linux based offerings. We digress...

So without further ado, here's the moment we've all been waiting for: a MobileMe push mail notification (using XMPP's pubsub protocol) from aosnotify.mac.com:5223 over SSL:


<message from="pubsub.aosnotify.mac.com" to="[email protected]/5e60ad2e47da9fca36de59244f25c9b1cd8e0cb8" id="/protected/com/apple/mobileme/samnsofi/mail/[email protected]__3gK4m">
<event xmlns="http://jabber.org/protocol/pubsub#event">
<items node="/protected/com/apple/mobileme/samnsofi/mail/Inbox">
<item id="5WE7I82L5bdNGm2">
<plistfrag xmlns="plist-apple">
<key>maild</key>
<string>E1B537</string>
</plistfrag>
</item>
</items>
</event>
<x xmlns="jabber:x:delay" stamp="2008-07-18T01:11:11.447Z"/>
</message>

<message from="pubsub.aosnotify.mac.com" to="[email protected]/5e60ad2e47da9fca36de59244f25c9b1cd8e0cb8" id="/protected/com/apple/mobileme/samnsofi/mail/[email protected]__NterM">
<event xmlns="http://jabber.org/protocol/pubsub#event">
<items node="/protected/com/apple/mobileme/samnsofi/mail/Inbox">
<item id="8ATABX9e6satO6Y">
<plistfrag xmlns="plist-apple">
<key>maild</key>
<string>544FE17</string>
</plistfrag>
</item>
</items>
</event>
<headers xmlns="http://jabber.org/protocol/shim">
<header name="pubsub#subid">3DEpJ055dXgB2gLRTQYvW4qGh91E36y2n9e27G3X</header>
</headers>
</message>
I'll explain more about the setup I used to get my hands on this in another post later on. So what's the bet that this same mechanism will be used for the push notification service to be released later in the year?
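In the meantime, for the curious, here's a minimal Python sketch (standard library only) of pulling the interesting bits out of a stanza like those above; the stanza is trimmed down for brevity:

import xml.etree.ElementTree as ET

# One of the captured stanzas above, trimmed to the parts we care about
stanza = """
<message from="pubsub.aosnotify.mac.com" to="user@example">
  <event xmlns="http://jabber.org/protocol/pubsub#event">
    <items node="/protected/com/apple/mobileme/samnsofi/mail/Inbox">
      <item id="5WE7I82L5bdNGm2"/>
    </items>
  </event>
</message>
"""

NS = "{http://jabber.org/protocol/pubsub#event}"
items = ET.fromstring(stanza).find("%sevent/%sitems" % (NS, NS))
print("Mailbox node:", items.get("node"))
for item in items:
    print("New mail, item id:", item.get("id"))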

Proof Gmail IMAP (Gimap) supports IMAP IDLE

So for those of you with capable mail clients (like OS X Mail.app), here's proof that IMAP IDLE works for delivering push mail:

$ openssl s_client -connect imap.gmail.com:993 -crlf
* OK Gimap ready for requests from 1.2.3.4 0123456789abcdef
. capability
* CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA XLIST CHILDREN XYZZY
. OK Thats all she wrote! 0123456789abcdef
. login [email protected] letmein
. OK [email protected] authenticated (Success)
. examine inbox
* FLAGS (\Answered \Flagged \Draft \Deleted \Seen)
* OK [PERMANENTFLAGS ()]
* OK [UIDVALIDITY 2]
* 4498 EXISTS
* 0 RECENT
* OK [UNSEEN 1431]
* OK [UIDNEXT 25141]
. OK [READ-ONLY] inbox selected. (Success)
. idle
+ idling
---mail sent and deleted here---
* 4499 EXISTS
* 4499 EXPUNGE
* 4498 EXISTS

This is largely why some clients 'feel' more responsive than others, and why you should find an IMAP IDLE capable client.
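If you'd rather reproduce the transcript programmatically than via openssl, here's a minimal Python sketch speaking the protocol by hand over SSL (credentials are placeholders; error handling omitted):

import socket
import ssl

ctx = ssl.create_default_context()
conn = ctx.wrap_socket(socket.create_connection(("imap.gmail.com", 993)),
                       server_hostname="imap.gmail.com")

def send(line):
    conn.sendall(line.encode("ascii") + b"\r\n")

print(conn.recv(4096))                    # * OK Gimap ready ...
send(". login user@gmail.com letmein")    # placeholder credentials
print(conn.recv(4096))
send(". examine inbox")
print(conn.recv(4096))
send(". idle")
print(conn.recv(4096))                    # + idling
while True:                               # the server now pushes EXISTS/EXPUNGE
    print(conn.recv(4096))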

17 July 2008

Single command Django installer for OS X

So you want to see what all the fuss around Django is about? To get the latest bleeding edge snapshot (as discussed here, here, here, here, here, here, here and here) you just need to run these commands (as root), per the official install instructions:

/usr/bin/svn co http://code.djangoproject.com/svn/django/trunk/ /usr/local/django-trunk
ln -s /usr/local/django-trunk/django /Library/Python/2.5/site-packages/django
ln -s /usr/local/django-trunk/django/bin/django-admin.py /usr/local/bin
For the lazy, get root (eg sudo -s) and run:
curl https://s3.amazonaws.com/media.samj.net/devel/django.sh | sh
If it's already there then this script will update to the latest revision.
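I'm not reproducing django.sh itself here, but a rough Python equivalent of the checkout-or-update logic (paths and URL as per the manual steps above) might look like this:

import os
import subprocess

TRUNK = "http://code.djangoproject.com/svn/django/trunk/"
DEST = "/usr/local/django-trunk"

if os.path.isdir(os.path.join(DEST, ".svn")):
    subprocess.check_call(["/usr/bin/svn", "up", DEST])        # update existing checkout
else:
    subprocess.check_call(["/usr/bin/svn", "co", TRUNK, DEST])

for src, dst in ((os.path.join(DEST, "django"),
                  "/Library/Python/2.5/site-packages/django"),
                 (os.path.join(DEST, "django/bin/django-admin.py"),
                  "/usr/local/bin/django-admin.py")):
    if not os.path.lexists(dst):
        os.symlink(src, dst)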

01 July 2008

Using Pingdom's Web Services API with NuSOAP

So you want to use Pingdom's excellent Web Services API [WSDL] [Documentation] but you don't have SOAP in your PHP? All is not lost as you can still use NuSOAP to achieve essentially the same thing, but you'll need to modify their examples, per pingdom-nusoap.diff.