A new service was just announced at the RSA conference that takes an interesting approach to hashing passwords. Called “Blind Hashing,” from TapLink, the technology is fully buzzword-compliant, promising to “completely secure your passwords against offline attack.” Pretty grandiose claims, but from what I’ve been able to see in their patent so far, it seems like it has some promise. With a few caveats.
Traditionally, passwords are hashed and stored in place. First we had the Unix crypt() function, which, though it was specifically designed to be “slow” on systems at the time, is now hopelessly outdated and should be killed with fire at every opportunity. That gave way to unsalted MD5-based hashes (also a candidate for immediate incendiary measures), salted SHA hashes, and today’s state-of-the-art functions bcrypt, scrypt, and PBKDF2. The common goal throughout this progression of algorithms has been to make the hashing function expensive, in either CPU time or memory requirements (or both), thus making a brute force attack to guess a user’s password prohibitive.
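The “make it expensive” idea is easy to demonstrate from Python’s standard library. Here’s a minimal sketch using PBKDF2 (the iteration count is purely illustrative; bcrypt and scrypt add memory-hardness on top of this, but PBKDF2 is the easiest to show without third-party packages):

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=600_000):
    """PBKDF2-HMAC-SHA256: the high iteration count is what makes
    each brute-force guess expensive."""
    if salt is None:
        salt = os.urandom(16)  # unique, random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

# Verification recomputes the hash with the stored salt and compares.
salt, stored = hash_password("correct horse battery staple", iterations=10_000)
_, attempt = hash_password("correct horse battery staple", salt, iterations=10_000)
assert attempt == stored
```

The attacker’s cost scales linearly with the iteration count, which is exactly the knob these algorithms turn.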
So far, we seem to have accomplished that goal, but a downside is that a slow hash is still, well, slow. Which can potentially add up, when you’ve got a site that processes huge numbers of logins every day.
The “Blind Hashing” system takes a different approach. Rather than handling the entire hash locally, the user’s password is, essentially, hashed a second time using data from a cloud-based service. Here’s an excerpt from the patent summary:
A blind hashing system and method are provided in which blind hashing is used for data encryption and secure data storage such as in password authentication, symmetric key encryption, revocable encryption keys, etc. The system and method include using a hash function output (digest) as an index or pointer into a huge block of random data, extracting a value from the indexed location within the random data block, using that value to salt the original password or message, and then hashing it to produce a second digest that is used to verify the password or message, encrypt or decrypt a document, and so on. A different hash function can be used at each stage in the process. The blind hashing algorithm typically runs on a dedicated server and only sees the digest and never sees the password, message, key, or the salt used to generate the digest.
Thinking through the process, here’s one way this might work:
The user provides their userid and password to the system.
The password is hashed (optionally using a locally-stored salt, unique to the user)
Traditionally, the hash is then stored locally, and this is what’s used to compare against the identically-generated hash at next login.
In Blind Hashing, the hash is then sent to a remote service.
This service uses the hash as an index into a massive (petabyte-sized) database, to retrieve a random number. Each hash thus points to some unique random number.
The number is returned to the server, and used as a salt to hash the password a second time.
This second hash is stored locally and used for future logins.
Put in a more functional notation, this might look like:
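Something like this (my own sketch, not TapLink’s: the remote service’s petabyte data block is faked with a small local buffer, and Salt1/Hash1/Salt2/Hash2 are my names for the values discussed below):

```python
import hashlib
import hmac
import os

# Hypothetical stand-in for the remote service's huge random data block.
DATA_BLOCK = os.urandom(1 << 20)  # 1 MiB here; petabytes in the real service

def blind_lookup(digest):
    """Remote side: use the digest as an index into the data block and
    return the value found there (Salt2)."""
    index = int.from_bytes(digest, "big") % (len(DATA_BLOCK) - 32)
    return DATA_BLOCK[index:index + 32]

def enroll(password):
    salt1 = os.urandom(16)                       # stored locally, per user
    hash1 = hashlib.sha256(salt1 + password.encode()).digest()
    salt2 = blind_lookup(hash1)                  # round trip to the service
    hash2 = hashlib.sha256(salt2 + password.encode()).digest()
    return salt1, hash2                          # only these are stored locally

def verify(password, salt1, hash2):
    hash1 = hashlib.sha256(salt1 + password.encode()).digest()
    salt2 = blind_lookup(hash1)                  # every check needs the service
    candidate = hashlib.sha256(salt2 + password.encode()).digest()
    return hmac.compare_digest(candidate, hash2)
```

The key point is that Salt2 never touches local storage; every verification, legitimate or brute-forced, requires a round trip.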
In the event of a compromise on the server, the attacker may recover all the Salt1 and Hash2 values. However, they will not be able to retrieve Salt2 without the involvement of the remote blind hash service. So a brute force attack will require cycling through all possible passwords and, for each password tested, requesting Salt2 from the remote service. This should, in theory, be significantly slower than a local hash / salt computation, and can also be rate-limited at the service to further protect against attacks.
On its surface, this seems a pretty solid idea. The second salt is deterministically derived from the first hash, but not in an algorithmic manner, so there isn’t a short-circuit that allows for immediate recovery of the salt. The database used to store Salt2 values is too large to be copied by an attacker. And the round trip process is (presumably) too slow to be practical for a brute force attack. Finally, the user’s password isn’t actually sent to the blind hash lookup service, only a hash of the password (salted with a value that is not sent to the service).
An attacker who compromises the (website) server gains only a collection of password hashes that are uncrackable without the correct password and the cooperation of the blind hash service. If they are able to collect all blind hash responses, they could build a dictionary of secondary salts to use in brute force attacks, but that would still be very slow (for a large site), as each password tested would be multiplied by the length of this secondary salt list. (Of course, if they can intercept the blind hash response data, then the attacker can probably also intercept the initial login process and just grab the passwords in plaintext.) Finally, an attacker who compromises the blind hash service gains access to a database too large to exfiltrate, and to an inbound stream of passwords hashed with unknown salts.
So in theory, at least, I can’t see anything seriously wrong with the idea.
But is it worth it? The only argument I’ve heard against “slow” hash algorithms like bcrypt or scrypt is that it may present too big a load to busy sites. But wouldn’t the constant communication with the blind hash service also present a fairly large load, both for CPU and especially for network traffic? What happens if the remote service goes down, for example, because of a DDOS attack, or network problems? This service protects against future breakthroughs that make modern hash algorithms easy to brute force, but I think we already know how to deal with that eventuality.
I think the biggest problem we have today, with regards to securely hashing passwords, isn’t the technology available, but the fact that sites still use the older, less secure approaches. If a site cares enough to move to a blind hash service, they’d certainly be able to move to bcrypt. If they haven’t already moved away from MD5 or SHA hashes, then I really don’t see them paying for a blind hashing service, either.
In the end, though I think it’s a very interesting and intriguing idea, I’m just not sure I see anything to recommend this over modern bcrypt, scrypt, or PBKDF-based password hashes.
Arguably one of the more interesting developments (aside from the SIM thing, which I’m not even going to touch) was the decision by Lenovo to pwn all of their customers with a TLS Man-In-The-Middle attack. The problem here was two-fold: That Lenovo was deliberately snooping on their customers’ traffic (even “benignly,” as I’m sure they’re claiming), and that the method used was trivial to put to malicious use.
Which has me thinking again about the nature of the Certificate Authority infrastructure. In this particular case, Lenovo laptops are explicitly trusting sites signed with a private key that’s now floating around in the wild, ready to be abused by just about anyone. But it’s more than just that — our browsers are already incredibly trusting.
On my Mac OS X Yosemite box, I count (well, the Keychain app counts, but whatever) 214 different trusted root certificate authorities. That means that any website signed by any of those 214 authorities, or by anyone those authorities have delegated as trustworthy, or by anyone those delegates have in turn trusted, will be trusted by my system.
That’s great, if you trust the CAs. But we’ve seen many times that we probably shouldn’t. And even if you do trust the root CAs on your system, there are other issues, like if a corporation or wifi provider prompts the user to install a custom MITM CA cert. (Or just MITMs without even bothering with a real cert).
I’ve been trying to bang the drum on certificate pinning for a while, and I still think that’s the best approach to security in the long run. But there’s just no easy way for end users to handle it at the browser level. Some kind of “Trust on First Use” model would seem to make sense, where the browser tracks the certificate (or certificates) seen when you first visit a site, and warns if they change. Of course, you have to be certain your connection wasn’t intercepted in the first place, but that’s another problem entirely.
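The bookkeeping for a trust-on-first-use model is genuinely simple; here’s my own minimal sketch of the idea (just the fingerprint tracking, no real browser or TLS integration):

```python
import hashlib

def fingerprint(cert_der):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def check_tofu(store, host, cert_der):
    """Trust-on-first-use check: store maps hostname -> fingerprint.
    A real browser would persist this store across sessions."""
    fp = fingerprint(cert_der)
    known = store.get(host)
    if known is None:
        store[host] = fp                       # first visit: remember the cert
        return "trusted-on-first-use"
    if known == fp:
        return "match"                         # same cert as before
    return "WARNING: certificate changed"      # possible MITM; warn the user
```

All the hard parts are elsewhere: legitimate certificate rotation looks identical to an attack, and (as noted above) the very first connection is taken on faith.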
Some will inevitably argue that ubiquitous certificate pinning will break applications in a corporate environment, and yes, that’s true. If an organization feels they have the right to snoop on all their users’ TLS-secured traffic, then pinned certificates on mobile apps or browsers will be broken by those proxies. Oh, well. Either they’ll stop their snooping, or people will stop using those apps at work. (I’m hoping that the snooping goes away, but I’m probably being naïve).
When a bunch of CA-related hacks and breaches happened in 2011, we saw a flurry of work on “replacements,” or at least enhancements, of the current CA system. A good example is Convergence, a distributed notary system to endorse or disavow certificates. There’s also Certificate Transparency, which is more of an open audited log. I think I’ve even seen something akin to SPF proposed, where a specific pinned certificate fingerprint could be put into a site’s DNS record. (Of course, this re-opens the whole question of trusting DNS, but that’s yet another problem).
But as far as I know, none of these ideas have reached mainstream browsers yet. And they’re certainly not something that non-security-geeks are going to be able to set up and use.
So in the meantime, I thought back to my post from 2011, where I have a script that dumps out all the root CAs used by the TLS sites you’ve recently visited. Amazingly enough, the script still works for me, and also interestingly, the results were about the same. In 2011, I found that all the sites I’ve visited eventually traced back to 20 different root certificate authorities. Today, it’s 22. (and in both cases, some of those are internal CAs that don’t really “count”). (It’s also worth noting — in that blog post, I reported that I had 175 roots on my OS X Lion system. So nearly 40 new roots have been added to my certificate store in just 3 years).
So of the 214 roots on my system, I could “safely” remove 192. Or probably somewhat fewer, since the history file I pulled from probably isn’t that comprehensive (and my script didn’t pull from Safari too). But still, it helps to demonstrate that a significantly large percentage (like on the order of 90%) of the trust my computer has in the rest of the Internet is unnecessary in my usual daily use.
Now, if I remove those 190ish superfluous roots, what happens? I won’t be quite as vulnerable to malware or MITM attacks using certs signed by, say, an attacker using China’s CA. Or maybe the next time I visit Alibaba I’ll get a warning. But I’d bet that most of the time, I’ll be just fine. Of course, if I do hit a site that uses a CA I’ve removed, I’d like the option to put it back, which simply brings me back to the “Trust on First Use” certificate option I mentioned earlier. If we’re to go that route, might just as well set it up to allow for site-level cert pinning, rather than adding their cert provider’s CA, to “limit the damage” as it were. (Otherwise, over time, you’d just be back to trusting every CA on the planet again).
And of course, even if I wanted to do this, there’s no (easy) way to do it on my iOS devices. And the next time I get a system update, I’d bet the root store on my system would be restored to its original state anyway (well, original plus some annual delta of new root certs).
Last Saturday, I gave a talk at ShmooCon detailing the results of a short survey of iOS applications, and the way they handled (and secured) network-based authentication. For a quick summary of my talk, read on. If you’d like to follow along with the slides, they can be downloaded here. If you’d like a very detailed white paper explaining everything I said in the talk and more, well, you’ll have to wait a little longer. But I’m working on it.
As part of my “day job,” I frequently review the security of iOS applications. In most cases, these applications do not exist only within the confines of any given device, but connect to dedicated back-end services, authenticated with a username and password (or something very similar). We as consumers place a fair amount of trust in that relationship, between iPhone app and the server, and it occurred to me that it might be interesting to see how well-founded that trust is. This is an especially interesting question for applications which handle sensitive data, like a banking or healthcare app.
So I looked at the apps on my phone. Out of over 200 applications, I dropped apps which either didn’t use internet-based servers, or for which I didn’t actually have an active account. This left me with about 50 apps, of which about 40 were actually reviewed. The applications came from (what I feel to be) a fairly representative cross-section of applications, including: Banking, healthcare, travel, cloud storage, and social networking.
The review was fairly simple, focused exclusively on authentication, and didn’t allow me the time to perform deep reviews of any single application. Some few apps appeared much more complex than the rest, and I could easily have spent days examining each of them to fully understand how they worked. But most of the time, I spent between 30 minutes and a couple of hours (per application) to gather the information I needed.
I wanted to focus on four specific areas of interest:
Secure Network - Are network communications properly protected?
Secure Login - Is the network-based login performed in a way that resists attack?
Secure Session - Are login credentials properly handled for ongoing application use, or after the app has been quit and restarted?
Secure Storage - Are login credentials stored securely on the device (if at all)?
To complete the survey, I used a jailbroken iPhone running iOS 8.1.2, and Burp Suite Pro, a man-in-the-middle proxy tool. The proxy allowed me to collect and observe traffic between the applications and their servers, while the jailbroken phone allowed me to easily access and review data stored on the device. I made four complete passes across the list of applications, focusing on a different behavior or device configuration for each pass.
For the first pass at all the applications, I did not have Burp’s MITM proxy certificate installed on the device. This allowed me to measure whether the applications noticed that their communications were being intercepted, and also to get a feel for how well errors were reported to the user. I’m happy to report that most (all but 2) applications did in fact detect the intercepted communication, and refused to proceed. However, of those 38 applications, only one had a decent error message, while all the rest were cryptic, unhelpful, or flat-out misleading.
Of the two applications which didn’t detect the MITM interception, one continued to communicate over TLS as if nothing was wrong, while the other never noticed because it wasn’t even using TLS in the first place: all communications with this one app happened over HTTP.
I then installed the proxy’s CA certificate onto my test device, which caused all the TLS communications to suddenly become “trusted,” and allowed for interception, and inspection, of nearly all the application traffic.
I say nearly all because four applications appeared to use certificate pinning. For these apps, it was not enough that the TLS connection be certified with a trusted certificate; the connection required a specific, known certificate. Since my MITM cert was trusted by the OS, it made it past the first check, but because it was my cert, it wasn’t known by the application to be the right certificate, and so these four apps refused to continue communicating.
I was able to bypass this certificate pinning on two of the applications, while the other two I had to set aside, and due to time constraints, I was unable to review them any further.
(It bears mentioning that one of the cert-pinned apps, which was the only application to provide a useful certificate error message to the user, was not a bank, health care, or even social media app, but simply a podcast player. Kudos to the developer for taking security so seriously, even for low-impact data like podcast list synchronization.)
With a trusted and operational MITM proxy functioning, I was able to review the actual passing of credentials and security tokens between the applications and their servers. Most applications sent credentials (username, password) as parameters in an HTTP POST request. A few passed the credentials via HTTP headers (for example, as an Authorization: Basic header), while two sent credentials in the URL (one of which didn’t even bother to obscure the password — it was sent in human-readable form).
With the initial login observed and understood, I then used each application for a little while, to create plenty of traffic with which to observe session authentication. In most cases, the continuing session authentication was carried via some form of security token. Most of these tokens were static in nature (unchanging from request to request), while a few were dynamic, either changing with each request, or actually being cryptographically tied to the request (for example, signing each request). These tokens were generally sent in HTTP headers, but a few were sent as URL parameters or in POST data.
Two applications didn’t use tokens, but instead simply re-sent the userid and password with every single request.
Finally, I reviewed the application sandbox for each app to see whether credential information (userid, password, tokens) was being deliberately, or accidentally, saved to the device. For this stage I looked in the system keychain, at the app’s preferences file, the HTTP cache and Cookie files, and any other developer-created files in the Documents or Library folders (I found nothing in any app’s /tmp folder).
I found userids stored just about everywhere (preferences, cookies, application-specific files in /Documents, and the keychain), passwords in a few locations (5 in the keychain and 4 stored elsewhere in the app’s filesystem), and tokens all over the place (14 in the keychain, 27 in the applications’ filesystem storage).
About a week after completing the initial collection and review of data, I relaunched every single app (after having force-quit each during pass two), to determine how they reacted after being out of use for a few days. Most apps simply sent a stored token, while a few re-sent the userid and password, and 6 asked for either the user’s password or both their userid and password. [It should be noted that this likely wasn’t enough time to measure the expiration rate of static tokens, which was not a target of the review.] This also allowed me a chance to re-observe the traffic and generally look for things I may have missed, and to verify my data.
Finally, I force-quit all the apps again, and removed the Burp CA certificate from the device, then relaunched everything. This was mostly to see if the TLS errors caused by the untrusted MITM connection returned (I could imagine some applications only checking for the trusted certificate during login, and ignoring the error from then on). All applications behaved as they did during the first pass, though a few appeared to function normally, but were instead simply displaying locally cached data. Upon forcing a network refresh, TLS errors were reported in these applications as well (and, again, most of the error messages were unhelpful).
Summary of Findings
In the end, my general conclusion was that (for the 40 apps I reviewed) security was “Not bad, but could be better.” Of the 38 applications which completed review (remember, two were pinned and I couldn’t bypass):
12 had only minor issues (insecurely stored userid, certificate pinning not in use)
6 had at least one major issue (password stored insecurely, application ignores TLS errors or doesn’t use TLS)
0 (ZERO) had no issues at all
In most cases, a few simple fixes are all that’s needed to improve the security stance of these applications.
Note that I’m calling lack of certificate pinning “minor”, because as much as I feel it’s necessary, I haven’t seen its use become anywhere near commonplace, which was certainly reflected in my findings here. Applications rely on the strength of the TLS connection (OAuth 2.0 makes this reliance explicit), but with bugs and certificate authority issues, that reliance may be misplaced. Certificate pinning remains a very easy way to increase the reliability of that connection.
Top 5 Suggestions
I concluded my talk with five suggestions that I feel would greatly improve the security of any iOS application using network-based servers to host personalized data (whether sensitive or not):
Use TLS certificate pinning.
Store credential components (password, tokens, and if possible, the userid) only in the keychain.
Always use strong “hash” constructs (PBKDF, HMAC, etc., as appropriate).
Take steps to avoid leaked storage of credentials in cache and cookie files.
If possible, use one-time (nonce / timestamp based) tokens. Even better, tie these tokens to the request contents via a signature.
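As a sketch of that last suggestion (my own illustration, not any specific app’s scheme): a nonce and timestamp make each token effectively single-use, and an HMAC over the request contents ties the token to that exact request, so a captured token can’t be replayed against a different endpoint or payload:

```python
import hashlib
import hmac
import os
import time

def sign_request(key, method, path, body):
    """Build per-request auth headers: nonce + timestamp resist replay,
    and the HMAC binds the signature to this exact request."""
    nonce = os.urandom(8).hex()
    ts = str(int(time.time()))
    msg = b"\n".join([method.encode(), path.encode(),
                      ts.encode(), nonce.encode(), body])
    sig = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return {"X-Timestamp": ts, "X-Nonce": nonce, "X-Signature": sig}

def verify_request(key, method, path, body, headers):
    """Server side: rebuild the message and compare signatures in
    constant time."""
    msg = b"\n".join([method.encode(), path.encode(),
                      headers["X-Timestamp"].encode(),
                      headers["X-Nonce"].encode(), body])
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])
```

A real server would additionally reject stale timestamps and remember recently seen nonces to fully close the replay window.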
More details, especially describing attack vectors and rationale for these suggestions, as well as detailed summary tables of all the findings, are in the slides, available here.
Apple released iOS 8.1.1 yesterday, and with it, a small flurry of bugs were patched (including, predictably, most (all?) of the bugs used in the Pangu jailbreak). One bug fix in particular caught my eye:
Available for: iPhone 4s and later, iPod touch (5th generation) and later, iPad 2 and later
Impact: An attacker in possession of a device may exceed the maximum number of failed passcode attempts
Description: In some circumstances, the failed passcode attempt limit was not enforced. This issue was addressed through additional enforcement of this limit.
CVE-2014-4451 : Stuart Ryan of University of Technology, Sydney
We’ve seen lock screen “bypasses” before (that somehow kill some of the screen locking application and allow access to some data, even while the phone is locked). But this is the first time I’ve seen anything that could claim to bypass the passcode entry timeout or avoid incrementing the failed attempt count. What exactly was this doing? I reached out to the bug reporter on Twitter (@StuartCRyan), and he assured me that a video would come out shortly.
Enter a bad passcode several times, until you have a “disabled for 1 minute” warning.
Wait a minute, and enter one more bad passcode. Now you should have to wait 5 minutes to try again.
As soon as the “iPhone is Disabled” message appears, hold down the power and home buttons until the phone reboots.
Once you see the Apple logo, release the power button, but keep holding Home.
After four seconds, release Home as well, and the phone should continue rebooting.
Once it’s rebooted, go back to the passcode screen and you’ll see that it’s enabled and there’s no entry lockout delay.
This doesn’t appear to reset the attempt count to zero, but it keeps you from waiting between attempts (which can be up to a 60 minute lockout). It doesn’t appear to increment the failure count, either, which means that if you’re currently at a 15 minute delay, the device will never go beyond that, and never trigger an automatic memory wipe.
Combining this with something like iSEC Partners’ R2B2 Button Basher could easily yield something that could just carefully hammer away at PINs 24x7 until a hit is found (though it’d be SLOW, like 1-2 minutes per attempt…)
Why this even works, I’m not sure. I had presumed that a flag is set somewhere, indicating how long a timeout is required before the next unlock attempt is permitted, which even persists through reboots (under normal conditions). One would think that this flag would be set immediately after the last failed attempt, but apparently there’s enough of a delay that, working at human timescales, you can reboot the phone and prevent the timeout from being written.
Presumably, the timeout and incorrect attempt count are now being updated as close to the passcode rejection as possible, blocking this demonstrated bug.
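If that’s right, the fix is mostly a matter of ordering: persist the failure count and lockout before anything user-visible happens. Here’s a sketch of that idea (purely my speculation about the internals; the lockout schedule shown is illustrative, not Apple’s actual values):

```python
def lockout_seconds(failures):
    # Illustrative escalation schedule, roughly mirroring observed behavior.
    schedule = {6: 60, 7: 300, 8: 900}
    if failures >= 9:
        return 3600
    return schedule.get(failures, 0)

def handle_passcode_attempt(entered, stored, state, save):
    """Persist the failure count and lockout *before* any delay or UI
    update, so a mid-delay reboot can no longer discard them."""
    if entered == stored:
        state["failures"] = 0
        state["lockout_until"] = 0
        save(state)
        return True
    state["failures"] += 1
    state["lockout_until"] = lockout_seconds(state["failures"])
    save(state)  # write immediately: this ordering is the fix
    return False
```

The bug, in these terms, would be a `save()` that happened after the “iPhone is Disabled” message was already on screen, leaving a human-scale window for the reboot trick.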
I may try some other devices in the house later, to see how far back I can repeat the bug. So far, I’ve personally verified it on an iPhone 5S running 8.1.0, and an iPad 2 on 7.0.3. Update: I was not able to make this work on an iPod Touch 4th generation, with iOS 6.1.6, but it’s possible this was just an issue with hitting the buttons just right (many times it seemed to take a screenshot rather than starting up the reboot). On the other hand, the same iOS version (6.1.6) did work on an iPhone 3GS, though again, it took a few tries to make it work.
I just voted, even though pundits and statisticians have proven fairly definitively that my particular vote won’t matter. My district has had a Republican congressman for 30 years and his hand-picked heir is likely to win, and I don’t live in one of the 6 states all the news organizations tell me will decide control of the Senate. I voted because it’s the right thing to do, and because if I don’t vote, I lose the moral right to complain about the idiots in power (and anyone who knows me knows I love to complain.)
But why I hate voting isn’t the issues, or the parties, or the polarized electorate, or the aforementioned futility of my particular involvement. It’s the process. The process makes my blood boil.
For months, we are subjected to constant attack ads, literally he-said-she-said finger pointing about which candidate is the bigger idiot for siding with whichever other idiots are in power.
For weeks, the candidates clutter the countryside with illegally placed campaign signs that aren’t just an eyesore, but can seriously impede traffic safety simply by blocking drivers’ view of oncoming traffic. (Though to be fair, this has gotten much better in Fairfax County over the last few years…I don’t know how they got the candidates to stop, but I’m glad they did it).
I work at home, in my basement. When the doorbell rings, I answer it. Which means I have to interrupt my work, walk upstairs, and attend to whoever is at the door. And then get annoyed when it’s just someone stumping for a politician I don’t care about (or even one I do like). And then they get annoyed when I’m annoyed at them — as if they weren’t the ones being rude by disturbing me in the first place.
Then, finally, election day. That’s the worst.
Rather than experiencing relief that it’s all about to be over, my annoyance level spikes to new highs. First, I drop the kids off at their school (for school-provided daycare while the school is closed for election day). There’s no way to get through the front door without running a gauntlet of partisan party representatives handing you their “Sample Ballots” (which conveniently exclude all other parties — not actually a sample at all, but I suppose we’re used to the lies). Sure, there’s a “50 foot exclusion zone” around the entrance, but it’s not possible to park within that zone. So all they have to do is hover around the perimeters and they get you.
But at this point I’m not even there to vote — I’m just there to drop off my kids. (In fact, two Republican candidates even had people camped out in front of the school on Back to School night this year, so even then we weren’t able to escape their harassment). Why the school system doesn’t kick these people off their property is beyond me. (And don’t tell me it’s because of First Amendment rights — politicians can still express their views…they just shouldn’t be allowed to interrupt voters on their way to the polls).
It’s even worse today, because I’ll have to sneak past the same people for parent/teacher conferences this afternoon.
Then when I actually do go to vote, I have to navigate a different set of politicians’ antagonists (because my polling place is in a different school). And I have to present an ID to vote, because there’s an astronomically small chance that someone could be trying to vote illegally (which Never Ever Happens. Seriously.) And after I present my ID, the poll workers ask me to tell them my address — as if it weren’t already printed on my ID. Somehow, going to vote where the poll workers can’t even read the address on my ID doesn’t fill me with confidence.
(No, I know it’s because they want to be sure that I really know my address and am not simply taking someone else’s identity. It’s still bullshit. Next year, I’m reading the address from my ID before I even hand it to them. See what happens then.)
So by the time I’m done, I’ve been harassed by politicians on the radio, on the TV, in my mail, at my front door, on the way to drop off the kids, on my way to conferences with my kids’ teachers, on the way to actually vote, and then while voting, I’m told pretty clearly that the state doesn’t think I’m actually me and am trying to fraudulently cast a ballot. All this after being told again and again by, well, Science, that my vote really doesn’t matter.
In June of 2013, a few videos started circulating showing people unlocking cars without authorization. Basically, people walking directly up to a car and just opening it, or walking by cars on the street. One of the more interesting videos (watch at about 30 seconds in) showed a thief walking along the street, grabbing a handle in passing, and stopping short when the car unlocked. (interestingly, all the videos I found this morning showed attackers reaching for the passenger side door, which may just be a coincidence…)
Predictably, this was picked up by news organizations all over the world, who talked about the “big problem” this is in the US. Then I didn’t hear much again for a while.
It’s not even a particularly new thing. This story about BMW thefts in 2012 mentions key fob reprogramming, and also work presented by Don Bailey at Black Hat 2011 (in which he discussed starting cars using a text message).
But none of these reports really shed any light on what’s actually happening, though I suspect there are a couple of different problems at play. The more recent articles included some clues:
In a statement, Jaguar Land Rover said vehicle theft through the re-programming of remote-entry keys was an on-going problem which affected the whole industry.
“The challenge remains that the equipment being used to steal a vehicle in this way is legitimately used by workshops to carry out routine maintenance … We need better safeguards within the regulatory framework to make sure this equipment does not fall into unlawful hands and, if it does, that the law provides severe penalties to act as an effective deterrent.”
This sounds a lot like the current spate of articles are referring to key fob reprogramming via the OBDII port. Basically, if you get physical access to the car, you can connect something to the diagnostic port and program a new key to work with the car. Bingo, instant key, stolen car.
Then they seem to say that “this attack can be easily mitigated by simply ensuring that thieves don’t get the tightly controlled equipment to reprogram the car.” Heh. Right.
This attack relies on a manufacturer-installed backdoor designed for trusted third parties to do authorized work on the vehicle, and instead is being exploited by thieves. Sound familiar?
I’m actually surprised it’s this simple. I haven’t given it a lot of thought, but I’d bet there are ways this could be improved. Maybe a unique code given to the purchaser of the vehicle that they would keep at home (NOT in the glovebox!) and can be used to program new keys. If they lose that, some kind of trusted process between a dealer and the automaker could retrieve the code from some central store. Of course, that opens up social engineering attacks (a bit harder) and also attacks against the database itself (which only need to succeed once).
Again, this seems like a good real-world example of why backdoors are hard (perhaps nearly impossible) to do safely.
But what about the videos from last year? Those thieves certainly weren’t breaking a window and reprogramming keys…they just touched the car and it opened. For those attacks, something much more insidious seems to be happening, and frankly, I’m amazed that we haven’t figured it out yet.
The thieves might be hitting a button on some device in their pockets (or it’s just automatically spitting out codes in a constant stream) and occasionally they get one right. That seems possible, but improbable. The kinds of rolling codes some remotes use aren’t perfect (especially if the master seed is compromised) but I don’t think they can work that quickly, and certainly not that reliably. (But I could certainly be wrong — it’s been a while since I looked into this).
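For context, a rolling-code remote and its receiver share a secret seed and a counter, and the receiver accepts any code falling within a small look-ahead window (so the fob can be pressed a few times out of range without desynchronizing). Here's a minimal sketch of that scheme; the HMAC-based code derivation, the seed, and the window size are all invented for illustration, not taken from any real fob:

```python
import hmac, hashlib

def rolling_code(seed: bytes, counter: int) -> str:
    """Derive the code for a given counter value from the shared seed."""
    msg = counter.to_bytes(8, "big")
    return hmac.new(seed, msg, hashlib.sha256).hexdigest()[:8]

class Receiver:
    """Car-side verifier: accepts codes within a small look-ahead window."""
    def __init__(self, seed: bytes, window: int = 16):
        self.seed = seed
        self.counter = 0
        self.window = window

    def try_unlock(self, code: str) -> bool:
        for c in range(self.counter + 1, self.counter + 1 + self.window):
            if hmac.compare_digest(code, rolling_code(self.seed, c)):
                self.counter = c   # resync so older codes are rejected
                return True
        return False

seed = b"shared-master-seed"   # hypothetical; a real fob uses per-device secrets
car = Receiver(seed)
fob_counter = 3                # fob a few presses ahead of the car
assert car.try_unlock(rolling_code(seed, fob_counter))      # within window
assert not car.try_unlock(rolling_code(seed, fob_counter))  # replay rejected
```

The asymmetry is the point: with the master seed, an attacker can mint valid codes at will, but blind guessing only wins with probability (window size / code space) per attempt, which doesn't square with thieves opening doors quickly and reliably.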
Also, in these videos, the car didn’t respond until the thief actually touched the door handle. In a couple of cases, they held the handle and then appeared to pause while they (perhaps) activated something in their other hand. I’ve wondered whether this is exploiting some of the newer “passive” keyless entry systems, where the fob stays in your pocket and is only activated when the car (prompted by a hand on the handle) queries the fob remotely.
It’s possible there’s a backdoor or some unintended vulnerability in this keyfob exchange, and that’s what’s being exploited. Or even just a hardware-level glitch, like a “whitenoise attack” that simply overwhelms the receiver (as suggested to me this morning by @munin). I’ve also wondered how feasible a “proxy” attack might be against a fob that’s almost, but not quite, nearby. For example, if the attacker touches the door handle, and the car asks “are you there, trusted fob?” the fob, currently sitting on the kitchen counter, isn’t within range of the car and so won’t respond. But if the attacker has a stronger radio in their backpack, could they intercept the signal and replay it at a much stronger level, then use a sensitive receiver to collect the response from inside the house and relay it back to the car?
This seems kind of far-fetched, and there are probably a great many reasons (not least, Physics) why this might not work. Then again, we’ve demonstrated “near proximity” RFID over fairly large distances, too. And many people probably hang their keys next to the door to the garage, pretty close (within tens of feet) to the car.
It would also be reasonably easy to demonstrate. Too bad we had to sell our Prius to buy a minivan.
The bottom line is this: We’ve seen pretty solid evidence of thefts and break-ins against cars using keyless entry technology. The press love these stories as they drum up eyeballs every 6 months or so. But the public at large really doesn’t get any useful information other than “keyless is bad, mmkay?”
It’d be nice if we could figure out what’s going on and actually fix things.
Lots of discussion the last few days about Rite Aid and CVS (and possibly other merchants) actually disabling existing NFC point of sale functionality simply because they were suddenly getting used (by Apple Pay).
NFC payments are nothing new — Android has supported them for a couple years now (on select phones, though not without some complicated political shenanigans between manufacturers and carriers). Not a lot of places support such contactless payments, though I’ve certainly been seeing more and more POS terminals with NFC-looking logos lately. And I’ve seen a LOT of new POS terminals going in recently (at Panera and Target, in particular) which definitely support the upcoming EMV (“Chip and PIN”) cards, and I believe also support NFC.
So it was a bit of a surprise when suddenly Rite Aid and CVS stopped processing NFC payments. The rumor is that they have some agreement with the Merchant Customer Exchange to specifically prohibit Apple Pay, which is silly. Apple Pay isn’t anything new (as far as I can tell), it’s just implemented on a very popular phone and so is actually getting traction today.
In a way, disabling all NFC payments is almost like saying “okay, yes, that’s a valid payment method, but we don’t take magnetic stripe cards anymore, sorry.” It would seem to me that the card networks (Visa, MasterCard, etc.) should be able to have a say in this, but I’m not sure they’ve spoken up yet.
This is all made much worse by the apparent reasoning behind the new policy: They want to push their own solution. Which itself has several problems:
It’s much clunkier (you have to launch an app, scan a code, accept the charge, and then display a new code on the phone for the cashier to scan back)
It’s not as secure (on iPhones, the Apple Pay data is stored in a separate, secure chip. For these apps, it’s stored in the phone’s filesystem)
It links directly to your checking account (not to bank credit cards)
It’s not as private (one big goal of the system is to “encourage loyalty” by providing customers with targeted offers and coupons)
It’s not even available yet
Obviously, the “friction” that such a cumbersome interface presents is a big reason that I think this will eventually fail. But I’m far more worried about the direct links to bank accounts. And much more annoyed about the “loyalty” and privacy aspects.
If a merchant wants my loyalty, they can build it, strongly, in a very simple manner: Have the products I want, at prices I find reasonable, and offered in an environment with a pleasant shopping experience. Fail on any of those three criteria and I’ll only shop at your store grudgingly. Succeed on all three, and your locations will always be at the top of my list.
In addition to the great Tech Crunch article linked by this post, there’s also some good commentary from Gruber (which links to some other articles), and I think the simplest description of the problem in this great image from Dan Frommer. “Can’t wait for the mobile payments app from the company that designed this receipt.”
For once, I’m glad that my iPhone is a year out-of-cycle. By the time I’ve upgraded to the iPhone 6S (or whatever it’ll be called), hopefully the MCX thing will have died a very swift and public death, and Apple Pay (along with Android-based NFC payments) will simply work.
As @BenedictEvans said:
Few things are more predictable than the failure of a tech product made by an industry consortium of non-tech companies.
The recent release of iOS 8 brought with it several cool new features, especially some which more tightly integrate the iOS world with the OS X desktop world. Some of these are limited by physical proximity (like handing off email drafts among devices), while others require being on the same local subnet (forwarding phone calls to the desktop).
However, one feature apparently Just Works all the time, and that’s SMS message forwarding. If you have an iPhone, running iOS 8, then you can send and receive normal text messages (to your “Green bubble friends”) from your iPad or Yosemite desktop. Even if the phone is the next town over.
This is actually pretty cool — I use text messaging a lot, and while most of the people I communicate with use iPhones, a fair number (especially customers) don’t. If I need to send them something securely, like a password to a document I just emailed them, I have to manually type the password into my iPhone and hope I don’t mess it up. With SMS messages bridged between the systems, now I can just copy out of my password safe and paste right into iMessage.
However, this does raise one possible security issue. Many services which offer Two-Factor Authentication (2FA, or as many prefer to call this particular brand of 2FA, “two-step authentication”), send the 2FA confirmation codes over SMS. The theory being that only the authorized user will have access to that user’s cell phone, and so the SMS will only be seen by the intended person.
But if your SMS messages are also copied to your iPad (which you left on your desk at work) or your laptop or desktop (which, likewise, may be left in the office, out of your control) then password reset messages sent over SMS will appear on those devices too.
Which means that your [fr]enemies at work may be able to easily gain control over some of your accounts, simply by requesting a password reset while you’re at lunch. And, since you’re really enjoying your three-bourbon lunch, you don’t even notice the messages appearing on your phone until it’s too late (at which point you’re alerted, not by the Twitter account reset, but by dozens of replies to the “I’m an idiot!” tweet your co-workers posted on your behalf.)
Fortunately, there’s an easy way to correct this.
In OS X Yosemite, go into the System Preferences application and select “Notifications.” Then go down to “Messages,” and where it says “Show message preview” make sure the pop-up is “when unlocked,” not “always.” If this is set to “when unlocked,” then the contents of SMS messages won’t be displayed when the desktop is locked, only a “you got a message” sort of notification. You might also consider disabling the “Show notifications on lock screen” button just above it, which will even disable the notification of the notification.
In iOS, a similar setting can be found in Settings, also under Notifications:
However, the control here isn’t quite as fine-grained — you can either show notifications on the lock screen or not, and if they’re shown at all, then the contents will be displayed as well.
You might even consider preventing SMS notifications from displaying on your primary phone when locked, but if it’s almost never out of your control, then perhaps that’s not a big risk to worry about.
Note that both of these settings apply to iMessages as well as SMS messages.
If you never use SMS messages for account validation (whether you call them 2FA or 2SV or just “validation messages”), then you might not need to worry about this at all. Though it’s probably a good idea to at least consider disabling these notifications anyway…
Recent reports (and a slew of tweets) have circulated about the new Spotlight search on OS X Yosemite. Rene Ritchie at iMore explains the concerns, facts, and back-and-forth of the situation pretty well.
Especially damning was this lede from The Washington Post:
Apple has begun automatically collecting the locations of users and the queries they type when searching for files with the newest Mac operating system, a function that has provoked backlash for a company that portrays itself as a leader on privacy.
Ignoring for a moment the fact that this feature was first demonstrated months ago at WWDC, and also that the exact same feature debuted last month on iPhones to nary a whisper… the use of the word “collecting” is particularly loaded.
Yes, the data is technically being collected. That is, it’s received from the user’s device by Apple. But it’s not stored for more than 15 minutes, so “collected” is probably a bit of a strong word. In that respect, Siri has been doing the same thing for over three years.
The first time you use the new feature, the privacy implications are pretty well explained, and it’s also described at a high level on Apple’s privacy overview pages. Additional details on how the feature works are given in the iOS Security Guide (see page 39).
I’m lucky to live near a really good information security group, NoVA Hackers. We meet once a month, and usually have 6-10 speakers of all levels, speaking on just about anything they’d like.
I thought this might be an audience who’d be interested in learning how the recent iOS security changes actually worked, and so threw together a quick talk based mostly on my blog post of a week before.
It was well received, and I had a lot of really good questions during and after the talk. One question I didn’t have the answer for at the time was:
What AES mode is used for file data protection?
A quick review of the iOS Security guide from Apple reveals:
Per-file encryption: AES-256-CBC, IV “is calculated with the block offset into the file, encrypted with SHA-1 hash of the per-file key”
Filesystem: (not specified, Sogeti slides say AES-256-CBC)
Keychain items: AES-128-GCM
Wrapped encryption keys: RFC 3394 (AES wrapping)
Encrypted encryption keys: AES-256 (Again, Sogeti slides say AES-256-CBC)
Another question regarded the new “5-second” delay for repeated failed passcode attempts. Quick testing during the talk showed that we’re not seeing the delay (which I’d previously presumed would be “invisible” to the user as they take time to enter a new passcode). Apple’s documentation states:
On a device with an A7 or later A-series processor, the key operations are performed by the Secure Enclave, which also enforces a 5-second delay between repeated failed unlocking requests. This provides a governor against brute-force attacks in addition to safeguards enforced by iOS.
It may be that the word “repeated” is key here — perhaps the delay doesn’t kick in until after a few attempts (perhaps even until after the iOS UI would have locked the user out anyway). It might be good for Apple to clarify this.
Finally, there was some discussion about the UID “fused” into the System on a Chip (SoC), and how we know that Apple doesn’t retain a copy of it. Well, so far the only answer (which people hate me for giving) is “Because Apple said they don’t.” I think in general, Apple’s been very above-board with regards to security and privacy, especially with the iOS Security documentation, and I’m (personally) inclined to take them at their word.
Nonetheless, there remains a possibility that a UID database may be maintained somewhere. Perhaps an undocumented CPU instruction will reveal a chip-ID that can be used to look it up, for example. How that database would be produced as chips are rolled out of the factory seems logistically quite difficult, though.
But one intriguing possibility occurred to me after I’d left the meeting. The Apple docs say:
The device’s unique ID (UID) and a device group ID (GID) are AES 256-bit keys fused (UID) or compiled (GID) into the application processor during manufacturing.
What exactly does “fused” mean, in regards to chips? I believe that, for FPGAs and other field-programmable devices, “fusing” refers to the act of burning a configuration or data directly into the chip, and is a one-time, irreversible action. What if “during manufacturing” doesn’t mean “as we actually fabricate the chip” but instead “after it rolls off the line and goes through quality checking”?
The chip could be built with an all-blank UID area, and the first time it powers up, it generates a random number and fuses that into the UID. In that way, neither Apple nor the chip manufacturer could ever have a way to determine the UID (aside from destroying the chip to visually identify the fuses directly).
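That self-fusing idea can be modeled in a few lines. To be clear, this is pure speculation about how such hardware might behave; the `FuseBank` class and its interface are invented for illustration:

```python
import secrets

class FuseBank:
    """Toy model of a write-once (one-time-programmable) UID register."""
    def __init__(self):
        self._uid = None          # blank as it leaves the fab

    def program(self, value: bytes) -> None:
        if self._uid is not None:
            raise RuntimeError("fuses already blown; the UID is immutable")
        self._uid = value

def first_power_up(fuses: FuseBank) -> None:
    """On first boot, fuse a random UID that no factory database ever saw."""
    if fuses._uid is None:
        fuses.program(secrets.token_bytes(32))   # 256-bit key

fuses = FuseBank()
first_power_up(fuses)
first_power_up(fuses)             # already programmed: a no-op
try:
    fuses.program(bytes(32))      # any later overwrite attempt fails
except RuntimeError as e:
    print("write-once:", e)
```

The key property is that the random value is generated and burned entirely inside the part, so there is never a moment when Apple or the fab could have recorded it.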
Again, I’d love it if Apple could clarify this, as really the security of the UID is the linchpin for the entire ecosystem’s security.
Thanks to the NoVAHackers crowd for the great discussion afterwards, which clearly led directly to new ideas and questions that are definitely worth exploring.
If you’d like to see the slides, you can download them here (or click on the title link for this blog entry).
Many in law enforcement are upset that Apple is “unilaterally” removing a key tool in their investigations (whether that tool has ever been truly “key” is another debate). Some privacy experts hail it as a great step forward. Others say “it’s about time.” And still others debate whether it’s quite as absolute a change as Apple’s making it sound.
I wrote extensively about this earlier this week, trying to pull together technical details from Apple’s “iOS Security” whitepaper and some key conference presentations. What’s amusing, now that I look through my archives, is that I said a lot of the same things 18 months ago.
As I was finishing this weekend’s post, Matthew Green posted a very good explanation as well, a bit higher on the readability scale without losing too many of the technical details. He later referred to my own post (thanks!) with an accurate note that we don’t know for certain whether the “5 second delay” in consecutive attempts can be overridden by Apple with a new Secure Enclave firmware.
Also later on Monday, Julian Sanchez published a less technical, much more analytic piece that’s worth reading for some of the bigger picture issues. His Cato Institute post is also a good read, to help understand why backdoors in general are a bad idea, and how this may turn out to be a rerun of the 1990’s Crypto Wars.
And just this morning, Joseph Bonneau posted a great practical analysis of the implications of self-chosen passcodes on the Freedom to Tinker blog. This latest story shows how even though, at a technical level, some strong passcodes may take years to break, in practical terms users don’t pick passcodes that are “random enough”. It even has a pretty graph.
One final suggestion made in Mr. Bonneau’s post (and also voiced by many others in posts or on twitter, including myself) is that a hardware-level “wrong passcode count” seems like a great idea. I’d been concerned about how to integrate that count with the user interface, but then he estimates that “A hard limit of 100 guesses would leave about 3% of users vulnerable” (based on the statistics he presents).
This almost throwaway comment made me wonder — if the user interface is (typically) configured to completely lock, or even wipe, a phone after 10 guesses, then why not let OS-level brute force attempts (initiated through the mythical Apple-signed external boot image) continue until 20 attempts? Then the hardware can simply refuse to attempt any further passcode key derivations, and not even worry about what to do with the phone (lock, wipe, or whatever). If the user has already hit 10 attempts through the UI, this count will never be reached in hardware anyway.
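Here’s a toy model of such a hardware guess counter. To keep it self-contained, this sketch does the passcode verification inside the element itself rather than trusting the operating system; the class name, the PBKDF2 parameters, and the limit of 20 are all illustrative assumptions, not Apple’s design:

```python
import hmac, hashlib, os

class SecureElement:
    """Sketch of a hardware guess counter: refuse any further key
    derivations after a hard limit of failures, independent of the OS."""
    HARD_LIMIT = 20

    def __init__(self, passcode: str):
        self._uid = os.urandom(32)        # device-unique, never leaves the chip
        self._salt = os.urandom(16)
        self._failures = 0
        self._verifier = self._derive(passcode)

    def _derive(self, passcode: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(),
                                   self._uid + self._salt, 50_000)

    def try_passcode(self, passcode: str):
        if self._failures >= self.HARD_LIMIT:
            raise PermissionError("hardware limit reached; no more derivations")
        key = self._derive(passcode)
        if hmac.compare_digest(key, self._verifier):
            self._failures = 0            # the tricky part: resetting securely
            return key
        self._failures += 1
        return None

se = SecureElement("9713")
assert se.try_passcode("0000") is None
assert se.try_passcode("9713") is not None     # success resets the counter
for _ in range(se.HARD_LIMIT):
    se.try_passcode("1111")                    # burn through the hard limit
try:
    se.try_passcode("9713")                    # even the right code is refused
except PermissionError as e:
    print(e)
```

If the UI locks the user out at 10 attempts, the 20-attempt hardware floor is only ever reached by something bypassing the UI, which is exactly the case it’s meant to stop.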
The only hard part about this idea would be finding a secure way for the secure element to know that the passcode was properly entered. If we rely on the operating system to verify the passcode and then notify the secure element, then that notification may be subject to spoofing by an attacker. This may look like an intractable problem, but I’m confident that it isn’t, and that a workable (or even elegant) solution can be found.
If Apple could add that level of protection, then even a 4-digit numeric passcode could be “strong enough” (provided they stay away from the top-50 or so bad passcodes). And at that point, it would absolutely be “technically infeasible” for Apple to do anything with a locked phone, other than retrieve totally unencrypted data.
A story hit the press this morning about a company installing Bluetooth beacons (in the iOS world, known as “iBeacons”) on phone booths in New York City. The fear is that these could be used to track users and send unwanted advertisements to their phones.
This article on Forbes does a pretty good job of explaining the situation, far better than the lengthy blog post I tried to write this morning (one really long post from me in a day is probably more than enough).
A very good quote in the article, which comes from Jules Polonetsky, is that “beacons don’t track you; you track beacons.”
Beacons themselves don’t collect any data. They do not send marketing messages to your phone. They broadcast location marks that your phone and apps using your phone can take advantage of to understand more precisely where you are.
The fact is, beacons only do one thing — “beacon.” When a phone which has been configured to listen for a particular beacon (by installing a store’s app, for example) comes within range, it hears the beacon and may react accordingly. Frequently, apps will reach back to their own servers, which may then respond with a location-specific offer or other such “enhancements” to the “consumer shopping experience.”
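The division of labor can be sketched in a few lines: the beacon broadcasts only an identifier, and nothing at all happens unless an app on the phone has registered interest in that identifier. The UUID, region values, and app name below are made up, and this is a conceptual model, not the actual Core Location API:

```python
# Toy model: the beacon only broadcasts an identifier. Any reaction comes
# from an app on the phone that registered interest in that identifier.
ADVERTISEMENT = ("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 100, 3)  # uuid, major, minor

registered_regions = {
    # (uuid, major) -> app that asked to be woken for this region
    ("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 100): "SomeStoreApp",
}

def on_beacon_heard(uuid: str, major: int, minor: int):
    app = registered_regions.get((uuid, major))
    if app is None:
        return None            # no app installed/registered: nothing happens
    return f"{app} notified: aisle {minor}"

print(on_beacon_heard(*ADVERTISEMENT))
```

Note the beacon itself never learns anything from this exchange; any data collection happens in the app (and on its servers) after the phone decides to react.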
Don’t want this to happen? Don’t install the app, or deny it the use of push notifications or location services. Unfortunately, I couldn’t find any way to enable iBeacons within an app while disabling them when the app isn’t running, which seems like an ideal compromise.
Or, worst case, just turn off Bluetooth, except when you’re actively using it.
On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data. So it's not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.
What exactly does this mean? And what was Apple doing before to support law enforcement? Well, to really understand that, we have to go kind of deep into how iOS encryption works. It’s complicated, and I’m not always the best at explaining things, but I’ll try my best.
To really understand it all, I highly recommend Apple’s iOS Security whitepaper. First released in May of 2012, with updates in October 2012, February 2014, and September 2014, the newest version (dated October 2014) includes changes to iOS 8. Also a great reference is “iPhone data protection in depth” by Jean-Baptiste Bédrune and Jean Sigwald of Sogeti, presented at HITB Amsterdam in 2011. Keep these open in other windows as I struggle to explain things. To truly understand what’s going on, those two references are your best bet.
In fact, I’ll probably find it convenient to refer back to these from time to time. I’ll call the iOS paper “ISG” and the HITB presentation “Sogeti.” I’m sure I’m totally butchering the proper way to cite sources, but I’ve been out of school for over 20 years, so give me a break, okay?
Or go read Matthew Green’s much simpler explanation, which was posted after I’d finished writing my first draft of this post…
Where to begin?
Let me start by saying that I’m going to gloss over a lot of stuff here. This isn’t a formal presentation, it’s not a whitepaper, it’s not even meant to be a serious reference. This is in response to frustration trying to discuss this on twitter in 140-character bites.
So this is my “Tweet Longer” response. Think of it as the conversation (well, more like endless monologue you’re too polite to extricate yourself from) that I’d have with you if I ran into you at a con and you asked me how all this works.
Full Disk Encryption
Data on iPhones is encrypted.
Okay, glad that’s cleared up.
Well, it wasn’t at first. But starting with iOS 3.0 and the iPhone 3GS, the full filesystem was encrypted. The key for this encryption is not user-selectable, but depends on a UID which is “burned” into the phone’s chips at the factory (Sogeti, pp 4 and 5, also ISG, p 9). The UID is a 256-bit key “fused into the application processor during manufacturing.” Apple further states that “no software or firmware can read them directly” but can only see the results of encryption using those keys.
The UID key is used to create a key called “key0x89b.” Key0x89b is used in encrypting the device’s flash disk. Because this key is unique to the device, and cannot be extracted from the device, it is impossible to remove the flash memory from one iPhone and transfer it to another, or to read it offline. (And when I say “Impossible,” what I really mean is “Really damned hard because you’d have to brute force a 256-bit AES key.”)
The exact mechanisms used to encrypt the storage are pretty complicated (see Sogeti, pp 31-39). The over-simplified answer is something like this:
A random key is generated and used as basis for encrypting the entire disk
This key is itself encrypted using key0x89b, and stored in a special form of memory called “effaceable storage”
Because key0x89b is tied to the device (being derived from UID), this disk key can’t be decrypted on any other device — the memory chips must stay in the device they were formatted on — unless you extract key0x89b
(which may be possible on jailbroken devices — but if you’ve jailbroken the device, you already have access to the decrypted filesystem anyway)
To wipe the device, simply wipe the effaceable storage, and the key goes away.
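Those steps can be sketched in code. Python’s standard library has no AES, so a SHA-256 counter-mode keystream stands in for it here, and the key0x89b derivation is invented — the point is only the structure: wrap the random disk key with a device-tied key, and erasing the tiny wrapped copy destroys the whole disk:

```python
import hashlib, hmac, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Stand-in for AES (the stdlib has none): SHA-256 counter-mode keystream."""
    out = bytearray()
    for off in range(0, len(data), 32):
        ks = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[off:off + 32], ks))
    return bytes(out)

UID = secrets.token_bytes(32)    # fused into this device; can't be extracted
key0x89b = hmac.new(UID, b"key0x89b", hashlib.sha256).digest()  # hypothetical KDF

disk_key = secrets.token_bytes(32)                # step 1: random disk key
effaceable = keystream_xor(key0x89b, disk_key)    # step 2: wrapped + stored
ciphertext = keystream_xor(disk_key, b"filesystem contents")

# Normal boot: unwrap the disk key on-device and read the disk.
assert keystream_xor(keystream_xor(key0x89b, effaceable), ciphertext) \
       == b"filesystem contents"

# Wipe: erase just the tiny effaceable storage. The disk key (and with it
# the whole disk) is gone, because no other device can derive key0x89b.
effaceable = None
```

This also shows why moving the flash chips to another phone is useless: the other phone’s UID produces a different wrapping key, so the stored blob unwraps to garbage.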
Of course, all of this is fully automatic. If you can get access to the file system, you can read the data — the decryption “just works.” The primary protections this adds are:
Can’t move the disk from one phone to another (not that this was a huge concern)
Can completely wipe the disk pretty much instantly
But the data itself isn’t terribly well protected. If the device is unlocked, or if you can boot off an external drive and read the filesystem, then you can read everything.
Data Protection API
In iOS 4, Apple introduced the Data Protection API (DPAPI). Under DPAPI, several classes of protection were introduced for both files and keychain entries. I’ll focus on just files, but the concepts map relatively cleanly to keychain data as well.
Each file is individually encrypted with a “Class Key.” The class key is simply another random key, but which is applied to any and all files which share the same DPAPI level. For example, all files marked as “FileProtectionComplete” use class 1 (Sogeti, p 15). There are currently four file protection classes, and four (nearly) analogous keychain protection classes (ISG, pp 10-13). The three classes most important to this discussion are:
Complete (locked when the device is locked)
Complete until First Authentication (locked, until the user’s unlocked the device once after the most recent reboot)
None (no additional protections)
The keys for all these classes are stored in a “keybag.” There are several keybags — one on the device (the “system keybag”), another stored on trusted computers (for syncing and backing up devices), and another stored on MDM servers (to remotely unlock a device, in the event you’ve forgotten your passcode). (ISG, pp 14-15).
When a file is encrypted under (for example) “Complete” protection, the system extracts the appropriate class key from the keybag, and encrypts the file using that key. To decrypt the file, the key is again read from the keybag, and the file decrypted.
When you set (or change) a passcode, a key is derived using the system UID, and this key is then used to encrypt individual class keys within the keybag. The key derivation process is complicated, but essentially expands the passcode and a salt, using multiple rounds designed to take about 80 milliseconds, no matter the device. On newer devices (A7 or later processors, which is to say, iPhone 5S, 6, and 6+, the iPad Air, and the Retina iPad Mini) this is augmented with a 5-second delay between failed requests. This delay is added at the hardware level, while the escalating delays seen by the user at the lock screen are all part of the operating system. (ISG, p 11).
Because the passcode key is “entangled” with the UID, it’s not possible to simply extract the encrypted keybag and brute force the passcode on a fast password cracking machine. The key must be decrypted on the device itself, which requires either a jailbroken device or a trusted external boot image (more on those later).
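Here’s a rough sketch of a UID-entangled, time-calibrated derivation using PBKDF2. The real derivation described in the ISG differs, and the entangling step below is invented; it’s only meant to show why the work must happen on-device and why the iteration count is tuned per device:

```python
import hashlib, os, time

UID = os.urandom(32)          # device-unique key; never leaves the hardware

def calibrate_rounds(target_seconds: float = 0.08) -> int:
    """Pick an iteration count so one derivation costs ~80 ms on THIS device."""
    probe = 10_000
    t0 = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"probe", b"salt", probe)
    per_round = (time.perf_counter() - t0) / probe
    return max(1, int(target_seconds / per_round))

def passcode_key(passcode: str, salt: bytes, rounds: int) -> bytes:
    # Entangle with the UID so the derivation only works on this device.
    entangled_salt = hashlib.sha256(UID + salt).digest()  # hypothetical step
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), entangled_salt, rounds)

rounds = calibrate_rounds()
salt = os.urandom(16)
k1 = passcode_key("1234", salt, rounds)
assert k1 == passcode_key("1234", salt, rounds)   # deterministic on this device
assert k1 != passcode_key("1235", salt, rounds)   # any wrong guess costs ~80 ms
```

Because the UID is folded into the salt, a cracker who copies the keybag to a GPU rig still can’t test guesses there: without the UID, the derivation can’t even be attempted.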
When you lock the device, the decrypted “complete protection” class key is wiped from memory. So the device can no longer read any file encrypted with that protection level, until it’s been unlocked again.
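In miniature, the class-key mechanism looks something like this (class names abbreviated and keys random, purely for illustration):

```python
import secrets

# Hypothetical per-class keys; on a real device these live in the system keybag.
keybag = {
    "Complete": secrets.token_bytes(32),
    "CompleteUntilFirstAuth": secrets.token_bytes(32),
    "None": secrets.token_bytes(32),
}
in_memory = dict(keybag)          # class keys currently decrypted in RAM

def class_key(protection: str) -> bytes:
    key = in_memory.get(protection)
    if key is None:
        raise PermissionError(f"class key for {protection!r} is not available")
    return key

# Device locks: only the "Complete" class key is wiped from memory...
in_memory.pop("Complete")

class_key("CompleteUntilFirstAuth")   # ...so these files stay readable,
try:
    class_key("Complete")             # ...but Complete-protected files are sealed
except PermissionError as e:
    print(e)
```

The file data itself never gets re-encrypted on lock; only the small class key disappears, which is what makes locking instantaneous.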
Default Data Protection
When Apple debuted the DPAPI, it was entirely an optional feature, and virtually no applications took advantage of it. For some time, the only Apple application which used any data protection was the Mail app. Under iOS 7, Apple changed the default to “Complete until first authentication”, and any new applications should use this protection level automatically.
Unfortunately, though 3rd party apps would inherit somewhat better protections, it wasn’t the best possible mode. Perhaps Apple left that as an option for developers to avoid making background use too difficult, or perhaps there were other reasons. And Apple opted to exclude most of their own applications from this new default.
Support to Law Enforcement
So what exactly was Apple doing to support police who need access to data on a seized iPhone or iPad? This has never been terribly clear. Every few months, a story seemed to hit the press about Apple unlocking phones for the police, but details were scarce, and speculation rampant.
As far as I have been able to guess, Apple had three avenues for extracting data from a seized iPhone:
Extract unprotected data over USB using forensic tools
Boot the device using a signed external disk image and extract data directly from the filesystem
Boot the device from the signed external image and brute force the passcode
As for the first item, I don’t work in forensics and so can’t really speak to what these tools can do. But several open-source tools exist (in particular, the iphone-dataprotection kit by our friends at Sogeti) which can illustrate some of what’s possible.
However, much of the data extracted by these tools may be limited when the device is locked, and no forensic tool can directly bypass encryption provided on a locked device by DPAPI. (This changes if the forensic examiner has access to a desktop used to sync the device, but that’s a whole different blog post.)
Booting a Trusted Image
The second is a bit more complicated. Essentially, you’re booting the device using an external drive as the operating system. But since you’re still “on” the device, the locally-stored keys and UID are still available, and so the entire filesystem can be mounted and read. To prevent just anyone from doing this, iOS devices require the external image to be signed by Apple (so we can’t simply create our own drive and boot off that).
Fortunately (or unfortunately, depending on your point of view) there was a bug in the bootrom on several early iOS devices that allowed an attacker to bypass this signature requirement. So up until (and including) iPhone 4 and iPad 1, it was possible for anyone to perform this attack and extract any non-encrypted data (DPAPI protection level “None”) from the phone. Because the phone has to be rebooted in order to perform this attack, however, in-memory keys for the “complete” and “complete until first authentication” are lost, and so any data protected in those modes cannot be read, even using this approach.
Even Apple, booting from a trusted external image, can’t unlock those protected files. The class keys needed for decryption are stored in the system keybag, which is encrypted using the user’s passcode.
Brute Forcing a Passcode
Well, what about brute forcing a passcode? As I said above, older devices could be booted from an external drive, allowing full access to unencrypted files on the device filesystem. Also available are the library routines needed to decrypt the system keybag. And these routines (as far as we know) don’t have any rate limiting, escalating delays, or lockouts, so a program with access to the filesystem and this API can brute force as long as it needs to.
But the key derivation still needs to happen on the device itself, and each device has its work factor tailored so this takes about 80 ms per guess. So though a weak four-digit number could be cracked in 20 minutes or less, a strong alphanumeric passcode could still take months, years, or centuries to break.
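The arithmetic is easy to check. At 80 ms per guess, exhausting all 10,000 four-digit PINs takes a bit over 13 minutes, while even a modest 6-character alphanumeric passcode pushes the worst case past a century:

```python
# Back-of-the-envelope: worst-case time to exhaust a passcode space when
# each guess costs ~80 ms on-device (figure from the discussion above).
SECONDS_PER_GUESS = 0.080

def worst_case_seconds(keyspace: int) -> float:
    return keyspace * SECONDS_PER_GUESS

four_digit = worst_case_seconds(10 ** 4)        # all 4-digit PINs
six_alnum = worst_case_seconds(62 ** 6)         # 6 chars of [A-Za-z0-9]

print(f"4-digit PIN:   {four_digit / 60:.0f} minutes")          # ~13 minutes
print(f"6-char alnum:  {six_alnum / (86400 * 365):.0f} years")  # ~144 years
```

Of course these are worst-case figures; real users cluster on a handful of common PINs, so attacks ordered by popularity finish far sooner.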
And once Apple patched the bootrom hole (beginning with the iPhone 4S and iPad 2), this became impossible for anyone outside of Apple to do anyway.
However, because the possibility remained that Apple could crack the passcode, most iOS security experts still recommend that users choose a strong passcode. Just in case.
There’s no evidence that Apple ever actually offered this as a service to law enforcement. I could see where they might, under the right circumstances, but I can also understand where they might be reluctant to ever offer such a service, for fear of it being abused (or just over-requested).
But once the passcode has been brute forced, all of the data protection is rendered useless.
What changed with iOS 8?
Two things changed with iOS 8:
A 5 second delay was added, at the hardware level, for passcode attempts (newer hardware only)
The default data protection mode was extended to more of Apple’s apps
The first change only affects anyone trying to brute force a passcode directly on a device, which means this only affects Apple (unless law enforcement, forensics teams, or wily hackers have access to a signed image).
The second change is somewhat more significant, under certain circumstances. A device which has been unlocked once already (since the last reboot) will behave exactly the same as iOS 7. That is, anything which is under the “Complete until first authentication” protection level will be (essentially) unencrypted, since the keys will remain in memory after the first time the user unlocks the device.
Much of the built-in application data was moved under the stricter controls: photos, messages, contacts, call history, etc. — all items which were described in the privacy message quoted way back when I started this. This data, once the phone’s been unlocked once, may still be available using third-party forensic tools.
However, and here’s what I think is probably key: Once the phone is rebooted, that class key is lost, and the data is unreadable until the user enters their passcode again.
So even Apple, with a trusted external boot image, can’t access the data unless they crack the user’s passcode.
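The key lifecycle described above can be captured in a toy model. The protection-level names mirror Apple’s, but the key handling here is purely illustrative, not Apple’s actual implementation:

```python
# Toy model of DPAPI class-key availability across lock/unlock/reboot.
# Protection-level names mirror Apple's terminology; everything else
# is a simplified illustration.

class Device:
    def __init__(self):
        self.in_memory_keys = set()  # decrypted class keys held in RAM

    def unlock(self):
        # The passcode decrypts the system keybag, releasing class keys.
        self.in_memory_keys |= {"complete", "complete-until-first-auth"}

    def lock(self):
        # Only the "complete" class key is discarded when the device locks.
        self.in_memory_keys.discard("complete")

    def reboot(self):
        # All in-memory keys are lost; everything waits on the passcode.
        self.in_memory_keys.clear()

    def can_read(self, protection_class: str) -> bool:
        return protection_class == "none" or protection_class in self.in_memory_keys

phone = Device()
phone.reboot()
assert not phone.can_read("complete-until-first-auth")  # e.g. Contacts: unreadable
phone.unlock()
phone.lock()
assert phone.can_read("complete-until-first-auth")      # readable until next reboot
assert not phone.can_read("complete")                   # wiped on lock
```

The last three assertions are the whole story: a reboot closes everything, a lock closes only the “complete” class, and nothing short of the passcode reopens the keybag.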
I think this is what Apple referred to when they said that it is “not technically feasible” to respond to warrants for this data.
Could Apple still attempt to brute force the passcode? Yes, possibly: if they’ve ever done it before, they could presumably do it again.
It’s also possible that Apple could add (or may already have added) brute-force protections to newer iOS hardware, which would prevent even Apple from breaking a user’s passcode. They’ve already added the 5-second delay (for A7-based devices), but whether the hardware enforces escalating delays for consecutive bad attempts has not been disclosed. I suspect that if it were the case, we’d have heard by now, but I’ve not personally tried brute forcing passcodes on anything newer than an iPad 1. Maybe this was added in the iPhone 6 — but we’ll probably have to wait until it’s been jailbroken to be sure.
A Quick Demo
If you’d like to see the difference between iOS 7 and iOS 8 for yourself, here’s a simple test.
Get an iPhone running iOS 7, and another running iOS 8.
Get a landline (or a 3rd cell phone) and make sure each iPhone has that number in its Contacts database (add a name, picture, etc.)
Reboot both phones. Do not unlock either phone.
Call the iOS 7 phone from the 3rd phone. You should see not only the phone’s number, but also the name and picture you put into the Contacts database.
Call the iOS 8 phone. You should see the phone number, and nothing else (since most cellular providers don’t provide name over caller ID).
Unlock the iOS 8 phone, then lock it again.
Call the iOS 8 phone again, and this time, you should see the Contacts entry appear on the locked screen.
This shows how the Contacts database is locked with “complete until first authentication.” After rebooting, the phone simply does not have access to the Contacts database, because the class key is still safely encrypted in the system keybag. Once you unlock it, however, the key is extracted, decrypted, and retained in memory, so the next time you call, the phone can read Contacts and display the information.
(This is also why you can’t connect to your home Wi-Fi after rebooting the phone, but after you’ve unlocked it once, it remains connected even when locked. The keychain entries for Wi-Fi are stored with “complete until first authentication” (ISG, p. 13).)
So what about law enforcement?
Well, if the assumptions I am making here are correct (and it’s not just me — I believe many in the iOS security community have come to the same conclusion), then Apple simply cannot provide much beyond very basic forensic-level data on phones, especially once they’ve been powered off.
But to my mind, any such service from Apple was always just a matter of convenience anyway. If a warrant can be issued to seize the data on a phone, then one can be issued to compel the owner of the phone to unlock it. (Yes, I’m aware that this is a legally murky area. Some recent decisions have upheld such orders, while at least one other has struck such an order down. One might be able to fight a court-imposed unlock demand, but it’d almost certainly take a lot of time, and be quite expensive in the long run). (And it should go without saying that there are very many good reasons that I am not a lawyer).
So if the police absolutely need access to the data on a phone, they can (probably) compel the owner to unlock it, and so Apple’s inability to help becomes irrelevant.
They can also (presumably) serve warrants to collect any computers belonging to that user, extract the pairing records from them, and then collect just about anything they want from the phone over USB.
Finally, much of the data on an iOS device will also exist “in the cloud” somewhere. So police can certainly go after those providers as well.
So the bottom line here is that, even without Apple’s help, law enforcement still has many ways to get at the data on a locked iOS device.
Even with the very good documentation from Apple, several questions remain.
Where exactly does the low-level encryption happen? The iOS Security guide says that the UID is “fused” into the application co-processor, and that “no software or firmware can read them directly” (p 9), but on page 6 it says it’s stored in the “Secure Enclave” (which is not to be confused with the “Secure Element” used by NFC and Apple Pay), and that the Secure Enclave has its own software update process.
So is the UID available to the software within the Secure Enclave (SE)? Or are operations utilizing the UID handled by a “black box” within the SE, and only the results of these operations visible to SE firmware (and, as moved from SE to the main processor, to the OS in general)?
Or is it possible that the UID in the context of the Secure Enclave is not the UID used to generate the file protection keys? (If so, I sincerely hope Apple updates their naming system to clarify this, because it’s obviously confusing the heck out of a lot of us).
Has Apple ever provided brute-force passcode breaking services to law enforcement? Has iOS been modified to restrict or eliminate this attack? If not, can Apple theoretically perform such an attack today? If asked, would they refuse? Could they?
What does it all mean?
So, what’s the bottom line? Is there a “TL;DR” summary?
Since the iPhone 3GS, all iOS devices have used a hardware-based AES full-disk encryption that prevents storage from being moved from one device to another, and facilitates a fast wipe of the disk.
Since iOS 4 (iPhone 4), additional protections have been available using the Data Protection API (DPAPI).
The DPAPI allows files to be flagged such that they are always encrypted when the device is locked, or encrypted after a reboot but readable again once the user has entered their passcode for the first time.
On older devices (up to and including iPhone 4 and iPad 1), a bootrom bug could be exploited to boot off an external image and brute force weak passcodes, as well as read non-encrypted data from the filesystem.
It remains possible (but unproven) that Apple retains the capability to do this on modern devices using a trusted external boot image.
Once a user locks a device with a passcode, the class keys for “complete” encryption are wiped from memory, and so that data is unreadable, even when booting from a trusted external image.
Once a user reboots a device, the “complete until first authentication” keys are lost from memory, and any files under that DPAPI protection level will be unreadable even when booted from an external image.
Under iOS 8, many built-in applications received the “complete until first authentication” protection.
This new protection level means that even when booting from a trusted external image, Apple cannot read data encrypted using that protection, unless they have the user’s passcode.
It may still be possible to use forensic tools to extract data from a locked device. In many ways, iOS 8 behaves exactly like iOS 7 for such tools, if the user has unlocked it at least once after a reboot.
The entire hierarchy of encryption keys, class keys, and keybags is entangled with a device-specific UID that can neither be extracted from the device nor accessed by on-device software.
Many of the keys are further protected by a key derived from the passcode (and the internal UID).
It is not entirely clear whether the Secure Enclave can be manipulated by Apple or an attacker to bypass any or all of the encryption key hierarchy by gaining direct or indirect access to the UID or derived keys.
It is also unknown whether current devices are vulnerable to a passcode brute force attack by Apple (or anyone with access to a trusted external boot image).
Many of these protections are rendered (somewhat) irrelevant if law enforcement (or a determined adversary) have access to a trusted computer used to sync the device, or potentially to copies of the data existing in cloud-based services.
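To illustrate the UID entanglement mentioned in the summary, here’s a rough sketch using PBKDF2 as a stand-in. Apple’s actual construction tangles the passcode with the UID inside the hardware AES engine, and the iteration count is calibrated per device to hit roughly 80 ms per guess; the UID value and iteration count here are made up for illustration:

```python
# Sketch of a passcode-derived key tied to a device UID.
# PBKDF2 is a stand-in: Apple's real construction runs the tangling
# inside the hardware AES engine, and the iteration count is tuned
# per device. The UID below is hypothetical; the real one never
# leaves the hardware.
import hashlib

DEVICE_UID = bytes.fromhex("00" * 32)  # hypothetical device-unique value

def derive_passcode_key(passcode: str, iterations: int = 50_000) -> bytes:
    # Salting with the UID ties the result to this specific device:
    # the same passcode on different hardware yields a different key.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID, iterations)

key = derive_passcode_key("1234")
print(key.hex())
```

The design consequence is the point of the whole scheme: because the derivation is salted with a UID that never leaves the device, an attacker who copies the encrypted keybag to faster hardware still has to run every guess through this device (or somehow obtain its UID).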
The bottom line, the real “too long, didn’t read”:
Apple does seem to have made it much more difficult for anyone to get at data on a locked phone
Some of these protections are reduced once you’ve unlocked the phone once
Many of these protections are aimed at attacks requiring a reboot of the device
Apple may be able to access the filesystem on a device, but this would require a reboot
So unless they crack your passcode, they won’t be able to read the protected files.
Apple may be able to crack a passcode
But each attempt takes at least 80 ms, and as much as 5 seconds on newer devices
A strong passcode (6 or more letters or 8 or more numbers) can take years to break
To sum it up in one sentence:
Use a strong passcode, and power off your device whenever it’s out of your control.