HOWTO: Reverse engineer the iPhone protocols

A few months back (‘Apple iPhone 2.0: The real story behind MobileMe Push Mail and Jabber/XMPP Chat‘) I analysed how the iPhone interacted with the new MobileMe service with a view to offering the same features to Google Apps customers. Unfortunately this is not yet possible (the APIs don’t exist on both sides of the fence) but we learnt a lot in the process.

For those of you who have been living under a rock, MobileMe (previously known as .Mac) is Apple’s foray into cloud computing. It offers some impressive synchronisation and push services, but for a relatively steep annual subscription. One of the most coveted features is push mail, which makes e-mail behave more like instant messaging; as soon as the mail hits Apple’s servers they notify the clients which then retrieve the item. Technically that’s ‘pull’ with notifications rather than ‘push’ per se, but the result is the same; the user experience for email improves dramatically. They do similar things with contacts and calendar items. Due to popular demand (and making good on my promise to elaborate), here’s a brief explanation of how it was I got ‘under the hood’ of the iPhone’s encrypted communications with the MobileMe servers.

The first problem was to see what it was talking to. We’ve got a Mac household and a bunch of VMs (Windows, Linux and some other strange stuff) so I set up internet sharing on one of them and installed Wireshark. This allowed me to capture the (encrypted) traffic, which was terminating at an ‘aosnotify’ server on Apple’s network. Although we couldn’t decipher the traffic itself we already knew a fair bit from the server name, port number and traffic patterns; whenever a test mail arrived there was a small amount of traffic on this connection followed immediately by an IMAP poll and message retrieval. ‘AOS’ presumably stands for ‘Apple Online Services‘ (a division that’s at least 15 years old assuming it still exists) rather than Australian Online Solutions (which is what I translate ‘AOS’ to) and ‘notify’ tells us that they have a specific notification service, which reconciles with what we observed. Most importantly though, network port tcp/5223 was traditionally used by Jabber (XMPP) chat servers for encrypted (SSL) traffic; that reconciled too because Wireshark dissectors were able to peer into the SSL handshakes (but obviously not the data itself, at least not without the private keys stored safely on Apple’s servers and/or SSL accelerators).

The next problem was to tell the iPhone to talk to us rather than Apple. There are a number of ways to skin this particular cat but if I remember correctly I went with dnsmasq running in a Linux VM and simply configured the DHCP server to use this DNS server rather than those of the ISP. Then it was just a case of overriding the notification server’s IP with the address of the VM by adding a line to the /etc/hosts file. That worked nicely and I could not only see the iPhone hitting the server when it started up, but could see that it still worked when I wired port 5223 up to the real Apple servers using a rudimentary Perl/Python proxy script. At this point I could capture the data, but it was still encrypted.
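A "rudimentary Perl/Python proxy script" along these lines would do the trick; this is a minimal sketch rather than the original, and the upstream address is a placeholder:

```python
import socket
import threading

LISTEN = ("0.0.0.0", 5223)          # where the redirected iPhone now connects
UPSTREAM = ("203.0.113.10", 5223)   # placeholder for the real Apple server

def pump(src, dst, label):
    """Relay bytes one way until the connection closes, logging volume."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(f"{label}: {len(data)} bytes")
        dst.sendall(data)
    dst.close()

def handle(client):
    """Bridge one phone connection through to the real server."""
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pump, args=(client, upstream, "phone->apple"),
                     daemon=True).start()
    pump(upstream, client, "apple->phone")

if __name__ == "__main__":
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN)
    server.listen(5)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

This confirms the connection works end-to-end and shows traffic volumes and timing, even while the payload itself remains opaque.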

I now needed to convince the iPhone that it was talking to Apple when in fact it was talking to me; without a valid certificate for the server this wasn’t going to work unless Apple programmers had foolishly ignored handshake failures (that said, stranger things have happened). Assuming this wasn’t the case and knowing that this would be easily fixed with a patch (thereby blowing my third-party integration service out of the water) I started looking at the options. My iPhone was jailbroken so I could have hacked OS X or the iPhone software (eg by binary patching with aosnotify.<mydomain>.com) but I wanted a solution that would work OOTB.

Conveniently Apple had provided the solution in the iPhone 2.0 software by way of the new enterprise deployment features. All I needed to do was create my own certification authority and inject the root certificate into the iPhone by creating an enterprise configuration which contained it. I’d already played with both the iPhone Configuration Utility for OS X and the official Ruby on Rails iPhone Configuration Web Utility so this was a walk in the park. Deploying it was just a case of attaching the .mobileconfig file to an email and sending it to the iPhone (you can also deploy over HTTP I believe). Now I just needed to create a fake certificate, point the proxy script at it and start up the iPhone.
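The SSL-terminating version of the proxy can be sketched like so; the certificate paths and upstream hostname are hypothetical stand-ins, and the relay loop itself is the same byte-pumping as the plain TCP proxy:

```python
import socket
import ssl

UPSTREAM = ("notify.example.com", 5223)   # placeholder for the real server

def make_contexts(cert_file, key_file):
    """Present our home-grown certificate (signed by the CA whose root
    was injected into the iPhone) to the phone, while still verifying
    the genuine server's certificate on the upstream leg."""
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain(cert_file, key_file)
    client_ctx = ssl.create_default_context()
    return server_ctx, client_ctx

def intercept(raw_client, server_ctx, client_ctx):
    """Terminate the phone's SSL with the fake cert and re-encrypt
    upstream; everything relayed between the two sockets is then
    visible in cleartext for Wireshark-style analysis."""
    phone = server_ctx.wrap_socket(raw_client, server_side=True)
    apple = client_ctx.wrap_socket(socket.create_connection(UPSTREAM),
                                   server_hostname=UPSTREAM[0])
    return phone, apple   # hand these to the byte-pumping loop
```

The key point is that the phone completes its handshake against our certificate without complaint, because the enterprise configuration made our root CA trusted.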

Sure enough this worked on the first attempt and I was even able to peer into the live, cleartext connection with Wireshark. Although I was surprised by the choice of XMPP’s publish/subscribe functionality, it makes a good deal of sense to use an existing protocol and what better choice than this one (though I wonder if Cisco’s acquisition of Jabber will have any effect). This discovery would have come as a disappointment for those touting the discovery of the private XMPP libraries (who naturally assumed that this translated to an Instant Messaging client like iChat), but it is interesting for us cloud architects.

Privacy and cloud computing

There has been a good deal of talk of late on the important topic of security and privacy in relation to cloud computing. Indeed there are some legitimate concerns and some work that needs to be done in this area in general, but I’m going to focus today on the latter term (indeed they are distinct – as a CISSP, security is my forte, but I will talk more on this separately):

Privacy is the ability of an individual or group to seclude themselves or information about themselves and thereby reveal themselves selectively.

Traditionally privacy has been maintained by physically controlling access to sensitive data, be it by hiding one’s diary under one’s mattress or by installing elaborate security systems. Access is then selectively restricted to trusted associates as required, often without surrendering physical control over the object. In a world of 1’s and 0’s it’s a similar story, only involving passwords, encryption, access control lists, etc.

Occasionally however we do need to surrender information to others in order to transact and as part of everyday life; be it to apply for a drivers license or passport, or subscribe to a commercial service. In doing so we hope that they (‘the controller’ in European Union parlance) will take care of it as if it were their own, but this is rarely the case unless economics and/or regulations dictate:

Externalisation leaves the true cost of most breaches to be borne by the data subject rather than the controller; the victim rather than the perpetrator.

Currently even the largest breaches go relatively unpunished, in that corporations typically only face limited reputational damage and (depending on the jurisdiction) the cost of notifying victims, while the affected individuals themselves can face permanent financial ruin and associated problems. According to the Data Loss Database, only days ago arrests were made over 11,000,000 records copied by a call center worker, and the hall of shame is topped by TJX with almost 100m customer records (including credit card numbers). Often though the data is simply ‘lost’, on a device or backup media which has been stolen, misplaced or sold on eBay.

Personal information has similar properties to nuclear waste; few attributes are transient (account balance), most have long half-lives (address, telephone), many can outlive the owner (SSN) and some are by definition immutable (DoB, eye colour).

In an environment of rampant consumer credit being foisted on us by credit providers who have little in the way of authentication beyond name, address and date of birth these losses can be devastating. This imbalance will need to be leveled by lawmakers (for example by imposing a per-record penalty for losses that would transform minor annoyances into serious financial disincentives), but this is tangential to the special case of cloud computing, rather serving to give background into the prevalent issues.

Cloud computing is relatively immune to traditional privacy breaches; there is no backup media to lose, laptop based databases to steal, unencrypted or unauthenticated connections to sniff or hijack, etc.

The fact is that many (likely most) of these breaches could have been avoided in a cloud computing environment. Data is stored ‘in the cloud’ and accessed by well-authenticated users over well-secured connections. Authentication is typically via passwords and/or tokens (we even have a prototype smart card authentication product) and encryption usually over Transport Layer Security (TLS), centrally enforced by the cloud applications and cloud services. A well-configured cloud computing architecture (with a secure client supporting strong authentication and encryption) is a hacker’s worst nightmare. Granted we still have some tweaking to do (eg the extended validation certificates farce) but the attack surface area can be reduced to a single port (tcp/443) which is extremely antisocial until it is satisfied that you are who you say you are (and vice versa).

A well-configured cloud computing architecture is a hacker’s worst nightmare. Conversely, a poorly configured cloud computing architecture is a hacker’s best dream.

On the other hand, one of the best ways to keep information safe is not to collect it in the first place; by consolidating the data the reward for a successful attack increases significantly. Fortunately the defences typically improve at least proportionally, with vendors (whose businesses are built on trust) deploying armies of security boffins that an individual entity could only dream of. The risk is similar to that of a monoculture, the same term that has been used to describe the Windows monopoly (and we have seen the effects of this in the form of massive distributed botnets); the Irish can tell you why putting all your eggs in one basket is a particularly bad idea.

In summary the potential for enhanced privacy protection in a cloud computing environment is clear, provided the risks are properly and carefully mitigated. We are making good progress in this area and overall the news is good, but we need to tread carefully and keep a very close eye on the spectre of ubiquitous surveillance (Big Brother), large scale privacy breaches and targeted attacks.

Cloud computing has the technology and many of the systems in place already; now it is up to the lawmakers to step up to the plate.

The Cloud Computing Doghouse: Nirvanix (aka Streamload aka MediaMax aka The Linkup)

Although Dell have been denied the ill-fated cloud computing trademark (that’s lowercase please, hold the ™) and moved on to more interesting things, they’re yet to concede defeat and withdraw their application. Even though the double-decker bus has disappeared from the moon, that leaves us with 6 months of uncertainty before the USPTO consider it abandoned, during which time they can appeal the decision. Although it is generally accepted that they would have a snowball’s chance in hell of succeeding, I would have preferred they take it out the back and put it out of its misery, and they can stay in the doghouse until they do (or it expires).

On the other hand there’s a backlog of crass acts of stupidity in the cloud computing space so they’re going to have to shove over and make room in the doghouse for someone (or something) new; the inaugural member can’t monopolise it forever. And who better than a ‘new’ company associated with “the meltdown of an online storage service that will leave about 20,000 paying subscribers without their digital music, video, and photo files”: Nirvanix.

First and foremost (given they have apparently threatened to sue one of their own founders) this is an opinion piece based on what little information I have been able to scratch together from various online sources – draw your own conclusions and do your own research before you rely on anything here. It is more a commentary on one of the inherent but easily mitigated risks of cloud computing – unreliable providers – than on Nirvanix itself.

Let’s start with some background and basic maths:

Today you can buy a terabyte (1Tb) hard drive with a 5 year (60 month) warranty for $150 retail in single-unit quantities. Meanwhile the going rate for cloud storage is about $0.15/Gb/month. Ignoring complications like formatting losses, servers (which are cheap and can host many drives), bandwidth, etc., simply by wiring these up to the cloud one could get a return on investment in a month ($0.15 x 1000Gb/m = $150) and over the life of the $150 drive you can make a whopping $9,000.

Admittedly a gross simplification, but to remote users looking down relatively narrow pipes it can be very difficult to tell the difference between a cheap desktop hard drive and an expensive enterprise SAN (which runs at about $20/Gb/year, over an order of magnitude more expensive than cloud storage). At least it is until the thing loses their precious 1’s and 0’s, in which case you hope it was run (or at least backed up) by a large storage vendor from redundant datacenters rather than a long-haired 16 year old from his basement. Herein lies the problem; presumably Nirvanix/Streamload/MediaMax/The Linkup (or whatever they’re calling themselves today) fall somewhere between the two extremes (hopefully the former rather than the latter), but it’s hard to tell where.
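The back-of-the-envelope maths above, spelled out (using the figures quoted in the text):

```python
# Figures from the article: a $150, 1Tb drive with a 60-month warranty,
# rented out at the going cloud storage rate of $0.15/Gb/month.
drive_cost = 150.00
capacity_gb = 1000
cloud_rate = 0.15          # $/Gb/month
warranty_months = 60

monthly_revenue = capacity_gb * cloud_rate          # ~$150/month
payback_months = drive_cost / monthly_revenue       # ~1 month
lifetime_revenue = monthly_revenue * warranty_months  # ~$9,000 over the warranty

# Enterprise SAN storage at ~$20/Gb/year vs cloud at $0.15 x 12 per Gb/year:
san_rate_year = 20.00
cloud_rate_year = cloud_rate * 12
markup = san_rate_year / cloud_rate_year            # over an order of magnitude

print(round(payback_months, 2), round(lifetime_revenue), round(markup, 1))
```

Which is exactly why a remote user can't easily tell, from price alone, whether they're buying redundant enterprise storage or a desktop drive in a basement.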

If the various articles (especially this one) are to be believed, the whole sorry saga goes something like this:

  • Steve Iverson (a uni student at the time) develops “adaptive data compression algorithms” for his thesis in 1998
  • Shortly after graduation he founded Streamload to “easily and securely send, store, move, receive and access their digital files”
  • By 2005 Streamload was hosting about half a petabyte (425Tb) of data for “well over 20,000 users”
  • Streamload was rebadged (after receiving some investment) to Streamload MediaMax™ (as distinct from MediaMax, Inc. which did not exist at the time) on the DEMOfall 05 stage as “a suite of ultra-high capacity online services that helps you manage, share, and access all the files and digital media in your life.”
  • However by December 2006 it was losing money and Patrick Harr (current Nirvanix CEO) replaced Steve (with his blessing) as CEO and Steve became CTO. After 60 days assessment the new CEO “advocated letting it ‘gracefully die’ and creating a new company selling ‘cloud’ storage to paying enterprise customers“.
  • Disaster struck on June 15 2007 when “a Streamload system administrator’s script accidently misidentified and deleted ‘good data’ along with the ‘dead data’ of some 3.5 million former user accounts and files”
  • Two weeks later Streamload’s board of directors pressed on with Harr’s strategy and “split the company into two independent businesses. Streamload changed its name to Nirvanix. It kept many of the former company’s physical assets [including all the servers and data] and employees, and secured $12 million in initial venture funding.”
  • Meanwhile “The MediaMax consumer product and its disgruntled customers went to Iverson as CEO of a ‘new’ business” along with “only about $500,000 in working capital” while Nirvanix managed to scratch together a cool $18m from the likes of Intel.
  • After a botched upgrade to MediaMax v5 (which by Steve’s own admission introduced a bunch of features users didn’t want) they changed their name again to The Linkup which was marketed as “a social networking site based around storage“, only to also botch the migration to 20% more expensive (at $5.95/$11.95 per month) paid-only services.
  • Users of the free service were given three weeks (which was extended due to problems with the ‘mover’ script) to upgrade or permanently lose their data. Curiously, the data was stored the whole time on Nirvanix servers and was being migrated to their new enterprise Storage Delivery Network.
  • Late in July Nirvanix published ‘Nirvanix Clarifies False Information in Blogosphere’, a blog post buried in their developer site.
  • MediaMax/The Linkup closed its doors on 8/8/08, having given users 30 days notice to retrieve their (remaining) data.

As of today the various angry masses are waiting for Nirvanix to give them access to (what remains, apparently about half of) their data, which Nirvanix assures us “remain[s] secure in the old Streamload/MediaMax storage system” (although it is not clear whether the files migrated to The Linkup were deleted 8 days after the 8/8/08 closure). They also claim “access to those files requires the MediaMax application front-end and database” (roping SAVVIS, who apparently maintained the frontend, into the fray) but MediaMax claim to have offered it to them, noting that “if they could have got the files back, they would have”. Steve goes on to say:

Fundamentally, MediaMax is responsible because you are our customer, and the biggest mistake we made was to trust Nirvanix to manage our customer data – yes, it was on the “old Streamload system”, and not their new Nirvanix SDN, but I believe the care and attention that was required was not there and was beyond unprofessional.

Here’s where it gets really interesting. In Nirvanix’s own words:

Are Nirvanix Inc. and MediaMax Inc. the same company?

No. Nirvanix and MediaMax split out of the same company, Streamload, Inc. in July 2007. Each company would be independently formed with separate ownership, oversight and investors. The companies were subsequently split off in July 2007 and have been separate and distinct entities since that time.

Did Nirvanix delete user data?

No, Nirvanix has not deleted any customer data.

Did a storage problem occur at Streamload?

As documented on the MediaMax blog in July 2007, a storage problem did occur at Streamload on the Streamload/MediaMax storage system in June 2007. This occurred prior to the formation of Nirvanix Inc. and was completely independent of the Nirvanix Storage Delivery Network which was not launched until October 2007.

The problem with these denials, and in particular the claim that the mass deletions at the start of the death spiral “occurred prior to the formation of Nirvanix Inc.”, is that it conflicts not only with what investors, ex-partners, users, etc. say but also with the California Secretary of State, who list Nirvanix, Inc. as a “merged out” California corporation (C2111900) filed on 15 June 1998 (conveniently the exact same month Streamload was founded; almost a decade before they claim it came into existence) and as a Delaware corporation (C3051094) filed on 16 October 2007. Incidentally MediaMax, Inc. (C2998020) was filed earlier, on 16 May 2007. In case you’re wondering what “merged out” means (even though I had to learn all this as CAcert‘s Organisation Assurance Officer, I still had to look it up), here’s the definition:

The limited partnership or limited liability company has merged out of existence in California into another business entity. The name of the surviving entity can be obtained by requesting a status report.

Thus it appears that Streamload, Inc. changed its name to Nirvanix, Inc. which then “merged out” of existence in California, “into” Nirvanix, Inc. (Delaware)… the corporate equivalent of moving house (it would be good if someone in the US could get a status report to confirm).

A murderer changing her name after the crime and then claiming immunity on the grounds that it happened before she existed would spend the rest of her life in jail.

Even if they were a different legal entity as claimed they still apparently have the same staff, same 525 B Street, San Diego address, even the same CEO (which I’ll bet a judge would find interesting). If they are one and the same then is it not actually Nirvanix, Inc. who still has a binding contract with all those customers (at the very least the ones who didn’t migrate to The Linkup)? Did the original Streamload terms allow for a transfer from Streamload/Nirvanix to MediaMax? Did the customers agree? Indeed, was it not then a Streamload/Nirvanix system administrator who ordered the deletion of the data? (Update: According to a comment MediaMax claim it was, which reconciles with the dates above.)

So why have Nirvanix thus far managed to escape culpability in the form of public (PR) execution and class action lawsuits? This appears to be no accident, rather the result of a sustained [dis]information campaign. For example, most of this information is from the Nirvanix article in Wikipedia which was recently nominated for deletion, apparently by Matthew Harvey at JPR Communications (Nirvanix’s PR firm) who already blanked it twice before being blocked for doing it a third time as a sock puppet. Jonathan Buckley (Nirvanix’s Chief Marketing Officer) also weighed in with a Strong Delete vote (that was largely ignored as a conflict of interest) and the article was unsurprisingly kept and remains to give a voice to the disenfranchised masses. They have also apparently been fairly active with the bloggers, calling their posts “inaccurate and libelous”, a post by an investor “suspect and untrue”, again claiming “Nirvanix was not even incorporated in June of 2007”, and you can bet there’s plenty more going on that we don’t hear about (Update: including press censorship, astroturfing and blaming the victims, claiming they “are all software pirates and porn addicts”).

The more cynical reader could be forgiven for believing that this was planned (but I think it was more a case of incompetence and gross negligence):

  • Develop interesting technology
  • Build reputation by servicing users for free
  • Get millions in investment
  • Float said users off on a leaky liferaft with $1 in $37 ($500k for MediaMax vs $18m for Nirvanix), and the inventor himself
  • $$$Profit$$$

Why do I care? I don’t particularly (at least not about this specific situation) but like the rest of the fledgling cloud computing industry I do find articles that could have been easily avoided (like “Storms in the cloud leave users up creek without a paddle“) difficult to swallow. I’ve never used their services and I don’t compete with them; if anything I may end up recommending them to my consulting clients if they are the best fit for a problem. I do however feel for the 20,000 or so people who lost irreplaceable photographs, video, music and other data through acts that can only be described as gross negligence; as a long-time professional system administrator I find occurrences like the June 2007 accidental deletion extremely hard to accept. The story of a disenfranchised inventor having been parted with his invention is oh-so-common too. Finally, I just don’t like coverups:

Trust is (for now) an essential component in cloud computing infrastructure and victims of outages, data loss, privacy breaches, breakins, etc. have every right to full transparency.

Were this another storage provider (eg Amazon S3) there would have been a clear demarcation point (the APIs) and it would have been possible to demonstrate that the client either called for the destruction of data or did not. Accordingly, immutable audit logs should be maintained and made available to cloud computing users (this is not always the case today – often they are kept but not accessible). There should also be protection against accidental deletions (in that they should not be immediately committed unless purging is required and requested, eg to satisfy a privacy policy or other legal requirement). Nirvanix notes that (for the SDN at least) “at any point during this eight-day [deletion] process, the file can be fully recovered” and other providers have similar checks and balances (this is almost certainly why you can’t recreate a Google Apps user for 5 days, for example).
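The safeguards described above (immutable audit logs, tombstoned deletions with a recovery window, purge-on-expiry only) can be sketched as a toy object store; everything here is illustrative, not any vendor's actual API:

```python
import time

RETENTION_SECONDS = 8 * 24 * 3600   # e.g. an eight-day recovery window

class SoftDeleteStore:
    """Toy object store: deletes are tombstones first, purged later,
    with an append-only audit log of who asked for what, and when."""

    def __init__(self):
        self._objects = {}      # key -> data
        self._tombstones = {}   # key -> deletion timestamp
        self.audit_log = []     # append-only (timestamp, actor, action, key)

    def put(self, key, data, who):
        self._objects[key] = data
        self._tombstones.pop(key, None)
        self.audit_log.append((time.time(), who, "PUT", key))

    def delete(self, key, who):
        """Mark deleted; the data itself survives for the retention window."""
        self._tombstones[key] = time.time()
        self.audit_log.append((time.time(), who, "DELETE", key))

    def undelete(self, key, who):
        if key in self._tombstones and key in self._objects:
            del self._tombstones[key]
            self.audit_log.append((time.time(), who, "UNDELETE", key))

    def get(self, key):
        if key in self._tombstones:
            raise KeyError(key)
        return self._objects[key]

    def purge_expired(self, now=None):
        """Only here is data irrecoverably destroyed, and it's logged."""
        now = time.time() if now is None else now
        for key, when in list(self._tombstones.items()):
            if now - when >= RETENTION_SECONDS:
                del self._objects[key]
                del self._tombstones[key]
                self.audit_log.append((now, "system", "PURGE", key))
```

Had a scheme like this been in place, an errant administrator's script would have produced a pile of recoverable tombstones and a damning audit trail rather than 3.5 million permanently lost accounts.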

So where to from here? If Nirvanix do have the data as they claim, then they should stop the ‘internal’ bickering and do everything within their power to get as much of the property (data) as possible back to its rightful owners, or give a full and transparent explanation for why this is impossible. If they are in fact the same legal entity the users contracted with initially (Streamload, Inc., as appears to be the case) then they should take responsibility for their [in]actions, apologise and offer a refund. That being the case, customers should hold them to this, both directly (by phone on 619.764.5650) and with the help of organisations like the Better Business Bureau or, if necessary, the courts.

In the mean time they can stay in the doghouse, with Dell…

Google Chrome: Cloud Operating Environment

Google Chrome is a lot more than a next generation browser; it’s a prototype Cloud Operating Environment.

Rather than blathering on to the blogosphere about the superficial features of Google’s new Chrome browser I’ve spent the best part of my day studying the available material and [re]writing a comprehensive Wikipedia article on the subject which I intend for anyone to be free to reuse under a Creative Commons Attribution 3.0 license (at least this version anyway) rather than Wikipedia’s usual strong copyleft GNU Free Documentation License (GFDL). This unusual freedom is extended in order to foster learning and critical analysis, particularly in terms of security.

My prognosis is that this is without doubt big news for cloud computing and, as a CISSP watching the poor state of web browser security with disdain, big news for the security community too. Here’s why:

Surfing the Internet today is like unprotected sex with strangers; Chrome is the condom of the cloud.

The traditional model of a monolithic browser is fundamentally and fatally flawed (particularly with the addition of tabs). Current generation browsers lump together a myriad of trusted and untrusted software components (yes, many web sites these days are more software than content) running in the same memory address space. Even with the best of intentions this is intolerable, as performance problems in one area can cause problems (and even data loss) in others. It’s the web equivalent of the bad old days where one rogue process would take down the whole system. Add nefarious characters to the mix and it’s like living in a bad neighbourhood with no locks.

Current generation browsers are like jails without cells.

Chrome introduces a revolutionary new software architecture, based on components from other open source software, including WebKit and Mozilla, and is aimed at improving stability, speed and security, with a simple and efficient user interface.

The first intelligent thing Chrome does is split each task into a separate process (‘sandbox’), thus delegating to the operating system which has been very good at process isolation since we introduced things like pre-emptive multitasking and memory protection. This exacts a fixed per-process resource cost but avoids memory fragmentation issues that plague long-running browsers. Every web site gets its own tab complete with its own process and WebKit rendering engine, which (following the principle of least privilege) runs with very low privileges. If anything goes wrong the process is quietly killed and you get a Sad Mac-style ‘sad tab’ icon rather than an error reporting dialog for the entire browser.
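The process-per-tab idea can be demonstrated in a few lines; this toy sketch (hypothetical site names, a deliberately crashing "renderer") shows a crash in one tab leaving the others untouched:

```python
import multiprocessing as mp

def render_tab(site, conn):
    """Stand-in for a per-tab rendering process."""
    if site.startswith("buggy"):
        raise RuntimeError("renderer crashed")   # simulated bad page
    conn.send(f"rendered {site}")
    conn.close()

def open_tabs(sites):
    """One OS process per tab: the kernel, not the browser, isolates them."""
    results = {}
    procs = []
    for site in sites:
        parent, child = mp.Pipe()
        p = mp.Process(target=render_tab, args=(site, child))
        p.start()
        procs.append((site, p, parent))
    for site, p, parent in procs:
        p.join()
        # A non-zero exit kills only this tab -- the 'sad tab', not the browser.
        results[site] = "sad tab" if p.exitcode != 0 else parent.recv()
    return results

if __name__ == "__main__":
    outcome = open_tabs(["news.example.com", "buggy.example.com",
                         "mail.example.com"])
    for site, result in outcome.items():
        print(site, "->", result)
```

The browser plays the role the kernel plays for ordinary applications: supervising, reaping and reporting on its children.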

Chrome enforces a simple computer security model whereby there are two levels of multilevel security (user and sandbox) and the sandbox can only respond to communication requests initiated by the user. Plugins like Flash which often need to run at or above the security level of the browser itself are also sandboxed in their own relatively privileged processes. This simple, elegant combination of compartments and multilevel security is a huge improvement over the status quo, and it promises to further improve as plugins are replaced by standards (eg HTML 5 which promises to displace some plugins by introducing browser-native video) and/or modified to work with restricted permissions. There are also (publicly accessible) blacklists for warning users about phishing and malware and an “Incognito” private browsing mode.

Tabs displace windows as first-class citizens and can migrate between them like an archipelago of islands.

The user interface follows the simplification trend, and much of the frame or “browser chrome” (hence the name) can be hidden altogether so as to seamlessly blend web applications (eg Gmail) with the underlying operating system. Popups are confined to their source tab unless explicitly dragged to freedom, the “Omnibox” simplifies (and remembers) browsing habits and searches and the “New Tab Page” replaces the home page with an Opera style speed dial interface along with automatically integrated search boxes (eg Google, Wikipedia). Gears remains as a breeding ground for web standards and the new V8 JavaScript engine promises to improve performance of increasingly demanding web applications with some clever new features (most notably dynamic compilation to native code).

Just add Linux and cloud storage and you’ve got a full-blown Cloud Operating System (“CloudOS”).

What is perhaps most interesting though (at least from a cloud computing point of view) is the full-frontal assault on traditional operating system functions like process management (with a task manager that allows users to “see what sites are using the most memory, downloading the most bytes and abusing (their) CPU”). Chrome is effectively a Cloud Operating Environment for any (supported) operating system in the same way that early releases of Windows were GUIs for DOS. All we need to do now is load it on to a (free) operating system like Linux and wire it up to cloud storage (à la Mozilla Weave) for preferences (eg bookmarks, history) and user files (eg uploads, downloads) and we have a full-blown Cloud Operating System!


DNS is dead… long live DNS!

Most of us rely heavily (more heavily than we realise, and indeed should) on this rickety old thing called DNS (the Domain Name System), which was never intended to scale as it did, nor to defend against the kinds of attacks it is subjected to today.

The latest DNS related debacle is (as per usual) related to cache poisoning, which is where your adversary manages to convince your resolver (or more specifically, one of the caches between your resolver and the site/service you are intending to connect to) that they are in fact the one you want to be talking to. Note that these are not man-in-the-middle (MitM) attacks; if someone can see your DNS queries you’re already toast – these are effective, remote attacks that can be devastating:

Consider for example your average company using POP3 to retrieve mail from their mail server every few minutes, in conjunction with single sign on; convince their cache that you are their mail server and you will have everyone’s universal cleartext password in under 5 minutes.

The root of the problem(s) is that the main security offered in a DNS transaction is the query ID (QID), for which there are only 16 bits (i.e. 65,536 combinations). Even when properly randomised (as was already the case for sensible implementations like djbdns, but not for earlier attempts which foolishly used sequential numbering), fast computers and links can make a meal of this in no time (read, seconds), given enough queries. Fortunately you typically only get one shot for a given name (for any given TTL period – usually 86,400 seconds; 1 day), and even then you have to beat the authoritative nameserver with the (correct) answer. Unfortunately, if you can convince your victim to resolve a bunch of different names (random subdomains of the target, and so on) then you’ll eventually (read, seconds) manage to slip one in.
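The arithmetic of that race can be sketched as follows; the flood rate of 100 forged answers per induced query is an illustrative assumption, not a measured figure:

```python
# The attacker must match the 16-bit query ID and, once patched resolvers
# randomise it, a 16-bit source port as well.
QID_SPACE = 2 ** 16
PORT_SPACE = 2 ** 16

def expected_races(id_space, forged_per_query=1):
    """Expected number of induced queries before one forged answer matches,
    assuming forged_per_query spoofed responses beat the real one each time."""
    p = min(1.0, forged_per_query / id_space)
    return 1 / p

# One forgery per query against a bare QID: ~65k races on average.
print(expected_races(QID_SPACE))
# A flood of 100 forgeries per induced query: only ~655 races.
print(expected_races(QID_SPACE, 100))
# Add source-port randomisation and the space grows to ~2^32:
print(expected_races(QID_SPACE * PORT_SPACE, 100))
```

With each race taking milliseconds, the difference between ~655 attempts and ~43 million is the difference between "seconds" and "impractical".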

So what, you say? You’ve managed to convince a caching server that some random subdomain points at your IP – big deal. But what happens if you slipped in extra resource records (RRs) for, say, the victim’s web or mail servers? A long time ago you might have been able to get away with this attack simply by smuggling unsolicited answers along with legitimate answers to legitimate queries, but we’ve been discarding unsolicited answers (at least those that were not ‘in-bailiwick’; eg from the same domain) for ages. However here you’ve got a seemingly legitimate answer to a seemingly legitimate question and extra RRs from the same ‘in-bailiwick’ domain, which can be accepted by the cache as legitimate and served up to all the clients of that cache for the duration specified by the attacker.
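A toy version of the bailiwick rule described above makes the loophole clear; real resolvers apply considerably more nuance than this simplification, and the domain names are illustrative:

```python
def in_bailiwick(record_name, zone):
    """Roughly the check caches apply: only accept extra records whose
    names fall at or below the zone the answering server speaks for."""
    record_name = record_name.rstrip(".").lower()
    zone = zone.rstrip(".").lower()
    return record_name == zone or record_name.endswith("." + zone)

# The decoy query for a random subdomain passes...
print(in_bailiwick("xk3f9q.example.com", "example.com"))   # True
# ...and so do forged extras for the names the attacker actually wants:
print(in_bailiwick("www.example.com", "example.com"))      # True
print(in_bailiwick("mail.example.com", "example.com"))     # True
# Out-of-bailiwick extras have been discarded for ages:
print(in_bailiwick("www.other.com", "example.com"))        # False
```

The forged extras are in-bailiwick by construction, which is precisely why the random-subdomain decoy queries are so dangerous.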

This is a great example of multiple seemingly benign vulnerabilities being [ab]used together such that the result is greater than the sum of its parts, and is exactly why you should be very, very sure before discounting vulnerabilities (for example, a local privilege escalation vulnerability on a machine with only trusted users can be turned into a nightmare if coupled with a buffer overrun in an unprivileged daemon).

Those who still think they’re safe because an attacker needs to be able to trigger queries are sadly mistaken too. Are your caching DNS servers secure (bearing in mind UDP queries can be trivially forged)? Are your users’ machines properly secured? What about the users themselves? Will they open an email offering free holidays (containing images which trigger resolutions) or follow a URL on a flyer handed to them at the local metro station, café or indeed, right outside your front door? What about your servers – is there any mechanism to generate emails automatically? Do you have a wireless network? VPN clients?

Ok so if you’re still reading you’ve either patched already or you were secure beforehand, as we were at Australian Online Solutions given our DNS hosting platform doesn’t cache; we separate authoritative from caching nameservers, and our caches have used random source ports from the outset. This increases the namespace from 16 bits (65k combinations) to (just shy of, since some ports are out of bounds) 32 bits (4+ billion combinations). If you’re not secure, or indeed not sure if you are, then contact us to see how we can help you.
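To put numbers on that, a quick sketch (Python; treating the 1,024 reserved low ports as the ‘out of bounds’ ones is my simplifying assumption):

```python
import math

QIDS = 2 ** 16            # 65,536 possible query IDs
PORTS = 65_536 - 1_024    # ephemeral ports, excluding the reserved low ones

# Combining a random QID with a random source port multiplies the spaces
combinations = QIDS * PORTS
print(f"{combinations:,} combinations, ~{math.log2(combinations):.1f} bits")
```

That’s roughly four billion combinations the attacker now has to race through inside a single response window, instead of 65,536.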

Apple iPhone 2.0: The real story behind MobileMe Push Mail and Jabber/XMPP Chat

So those of you who anticipated a Jabber/XMPP chat client on the iPhone (and iPod Touch) after TUAW rumoured that ‘a new XMPP framework has been spotten(sic) in the latest iPhone firmware‘ back in April were close… but no cigar. The same applies to those who hypothesised about P-IMAP or IMAP IDLE being used by MobileMe for push mail.

The real story, as it turns out, is that Jabber (the same open protocol behind many instant messaging networks including Google Talk) is actually being used for delivering push mail notifications to the iPhone. That’s right, you heard it here first. This would explain not only why the libraries were curiously private (in that they are not exposed to developers) but also why IMAP IDLE support only works while the Mail application is open (it’s a shame because Google Apps/Gmail supports IMAP IDLE already).

While it’s in line with Apple’s arguments about background tasks hurting user experience (eg performance and battery life), cluey developers have noted that the OS X (Unix) based iPhone has many options to safely enable this functionality (eg via resource limiting) and that the push notification service for developers is only a partial solution. It’s no wonder, though, given the exclusive carrier deals which are built on cellular voice calls and SMS traffic, both of which could be eroded away entirely if products like Skype and Google Talk were given free rein (presumably this is also why Apple literally hangs onto the keys for the platform). If you want more freedom you’re going to have to wait for Google Android, or for ultimate flexibility one of the various Linux based offerings. We digress…

So without further ado, here’s the moment we’ve all been waiting for: a MobileMe push mail notification (using XMPP’s pubsub protocol) from aosnotify.mac.com over SSL:

<message from="" to="" id="/protected/com/apple/mobileme/sam/mail/Inbox__sam@aosnotify.mac.com__3gK4m">
<event xmlns="">
<items node="/protected/com/apple/mobileme/sam/mail/Inbox">
<item id="5WE7I82L5bdNGm2">
<plistfrag xmlns="plist-apple">
<x xmlns="jabber:x:delay" stamp="2008-07-18T01:11:11.447Z"/>

<message from="" to="" id="/protected/com/apple/mobileme/sam/mail/Inbox__sam@aosnotify.mac.com__NterM">
<event xmlns="">
<items node="/protected/com/apple/mobileme/sam/mail/Inbox">
<item id="8ATABX9e6satO6Y">
<plistfrag xmlns="plist-apple">
<headers xmlns="">
<header name="pubsub#subid">3DEpJ055dXgB2gLRTQYvW4qGh91E36y2n9e27G3X</header>
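For the curious, here’s a rough sketch of pulling the interesting bit out of such a notification. The xmlns and address values above were stripped from the capture, so the stanza below is a simplified reconstruction with placeholder addresses; assuming the standard XMPP pubsub events namespace (XEP-0060) makes it parse cleanly:

```python
import xml.etree.ElementTree as ET

# Standard XEP-0060 event namespace (an assumption; the capture's xmlns was stripped)
PUBSUB_EVENT_NS = "http://jabber.org/protocol/pubsub#event"

# Simplified reconstruction of a notification; addresses are placeholders
STANZA = """\
<message from="pubsub.example.com" to="sam@example.com" id="abc123">
  <event xmlns="http://jabber.org/protocol/pubsub#event">
    <items node="/protected/com/apple/mobileme/sam/mail/Inbox">
      <item id="5WE7I82L5bdNGm2"/>
    </items>
  </event>
</message>"""

def mailbox_from_notification(xml_text):
    """Return the pubsub node (here, the mailbox path) being notified."""
    items = ET.fromstring(xml_text).find(f".//{{{PUBSUB_EVENT_NS}}}items")
    return items.get("node")

print(mailbox_from_notification(STANZA))
```

On receipt, the client would map that node back to an IMAP mailbox and poll it, which matches the traffic pattern described earlier: a small burst on the notification connection, followed immediately by an IMAP retrieval.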

I’ll explain more about the setup I used to get my hands on this in another post later on. So what’s the bet that this same mechanism will be used for the push notification service to be released later in the year?

Making SSL work with Apache 2 on Mac OS X with CAcert

Getting SSL up and running on OS X is not too difficult these days. First you need to tell it to read the SSL config file (removing the lines prefixed with ‘-’, adding the lines prefixed with ‘+’):

--- /etc/apache2/httpd.conf.dist 2008-06-11 03:42:25.000000000 +0200
+++ /etc/apache2/httpd.conf 2008-06-11 04:15:15.000000000 +0200
@@ -470,7 +470,7 @@
 #Include /private/etc/apache2/extra/httpd-default.conf

 # Secure (SSL/TLS) connections
-#Include /private/etc/apache2/extra/httpd-ssl.conf
+Include /private/etc/apache2/extra/httpd-ssl.conf
 # Note: The following must must be present to support
 # starting without SSL on platforms with no /dev/random equivalent
Then you need to fix this config file for your environment:

--- /private/etc/apache2/extra/httpd-ssl.conf.dist 2008-06-11 03:43:21.000000000 +0200
+++ /private/etc/apache2/extra/httpd-ssl.conf 2008-06-11 04:03:50.000000000 +0200
@@ -22,9 +22,9 @@
 # Manual for more details.
 #SSLRandomSeed startup file:/dev/random 512
-#SSLRandomSeed startup file:/dev/urandom 512
+SSLRandomSeed startup file:/dev/urandom 512
 #SSLRandomSeed connect file:/dev/random 512
-#SSLRandomSeed connect file:/dev/urandom 512
+SSLRandomSeed connect file:/dev/urandom 512

@@ -75,8 +75,8 @@

 # General setup for the virtual host

 DocumentRoot "/Library/WebServer/Documents"
 ErrorLog "/private/var/log/apache2/error_log"
 TransferLog "/private/var/log/apache2/access_log"

@@ -125,6 +125,7 @@

 # Makefile to update the hash symlinks after changes.

 #SSLCACertificatePath "/private/etc/apache2/ssl.crt"
 #SSLCACertificateFile "/private/etc/apache2/ssl.crt/ca-bundle.crt"
+SSLCACertificateFile "/private/etc/apache2/server-ca.crt"

 # Certificate Revocation Lists (CRL):
 # Set the CA revocation path where to find CA CRLs for client

@@ -143,6 +144,8 @@

 # issuer chain before deciding the certificate is not valid.

 #SSLVerifyClient require
 #SSLVerifyDepth 10
+SSLVerifyClient require
+SSLVerifyDepth 2

 # Access Control:
 # With SSLRequire you can do per-directory access control based

Notice that I’m using client certificates for authentication, but you can comment out the SSLCACertificateFile, SSLVerifyClient and SSLVerifyDepth options if you don’t need this. If you do, you’ll want to grab the root from CAcert:

# curl -o server-ca.crt

You’ll want to generate random numbers (key) and a certificate signing request (csr) in order to get a certificate (crt) file, and despite most information on the topic, this can be done in one command:

# openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr
Generating a 2048 bit RSA private key
writing new private key to 'server.key'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:New South Wales
Locality Name (eg, city) []:Sydney
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Australian Online Solutions Pty Ltd
Organizational Unit Name (eg, section) []:Security
Common Name (eg, YOUR name) []
Email Address []

Please enter the following ‘extra’ attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Actually, in the case of CAcert everything except the common name is ignored, so you can leave the rest as defaults.

For testing we’ll use a script which prints all the environment variables (this is what I was after for my certificate authentication anyway):

# cat /Library/WebServer/CGI-Executables/printenv
#!/bin/sh
echo "Content-type: text/plain"
echo ""
env

(Don’t forget to make it executable with chmod +x.)
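If you’d rather have Python than shell (handy once you start acting on the variables rather than just printing them), a minimal equivalent sketch:

```python
#!/usr/bin/env python
# Minimal CGI: emit a text/plain header followed by every environment
# variable Apache set, one KEY=value pair per line.
import os

def response(environ=None):
    env = os.environ if environ is None else environ
    lines = ["Content-type: text/plain", ""]
    lines += ["%s=%s" % (k, v) for k, v in sorted(env.items())]
    return "\n".join(lines)

if __name__ == "__main__":
    print(response())
```

Drop it in CGI-Executables alongside the shell version and make it executable in the same way.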

And when you browse to your machine you should see something like this:

HTTP_USER_AGENT=Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-GB; rv:1.9) Gecko/2008053008 Firefox/3.0
SSL_SERVER_I_DN=/O=CAcert Inc./OU= Class 3 Root
SSL_CLIENT_S_DN=/CN=Sam Johnston/
SSL_CLIENT_V_START=Jun 11 01:38:03 2008 GMT
SSL_CLIENT_I_DN_CN=CA Cert Signing Authority
SSL_CLIENT_V_END=Jun 11 01:38:03 2010 GMT
SERVER_SOFTWARE=Apache/2.2.8 (Unix) mod_ssl/2.2.8 OpenSSL/0.9.7l DAV/2
SSL_SERVER_V_END=Jun 11 01:48:07 2010 GMT
SSL_CLIENT_I_DN=/O=Root CA/OU= Cert Signing Authority/
SSL_SERVER_I_DN_CN=CAcert Class 3 Root
SSL_SERVER_V_START=Jun 11 01:48:07 2008 GMT
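And since the whole point of this exercise was certificate authentication, here’s a sketch of turning the client’s subject DN into a username. The parse_dn helper and the example DN value are my own illustration (the real values were elided above), not anything mod_ssl provides:

```python
# Parse a mod_ssl distinguished name of the form "/key=value/key=value/..."
# into a dict, then use the CN as the authenticated username.
def parse_dn(dn):
    parts = [p for p in dn.split("/") if "=" in p]
    return dict(p.split("=", 1) for p in parts)

# Hypothetical SSL_CLIENT_S_DN value, as a CGI script would receive it
client_dn = "/O=Root CA/CN=Sam Johnston"
user = parse_dn(client_dn).get("CN")
print(user)  # Sam Johnston
```

Naturally a DN containing an escaped ‘/’ would need smarter parsing than a naive split; this is just a sketch of the idea.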

That’s it for this morning’s lesson.