The lock screen has been vulnerable to bypass exploits on several occasions, so caution is probably a good idea. Way too many times to give me any sort of confidence.
As it happens, Secure Transport (edit: on OS X) is open source. I just spent the last hour rummaging through source code ... It was later renamed to "libsecurity_ssl" when it landed on iOS. It's been around since OS X launched. They also have a folder with about four dozen regression tests and a test app, not that I've had the chance to inspect either that closely. The tarballs are spread across two folders in the tarballs index: Security (look for multi-MB file sizes) and libsecurity_ssl. The 5-digit numbers appear to be sequential revision IDs; any single-digit numbers seem historical. (E.g. libsecurity_ssl-6 is before libsecurity_ssl-10 rather than after libsecurity_ssl-55471.)
It appears they haven't posted newer source than this. The most recent timestamp I could find was Oct 11, 2013 in 55471, which corresponds to iOS 7.0; my 10.9 system has the same version number for Security.framework -- the same bundle version of 55471 for 10.9.1 aka 13A581. Earlier version numbers don't appear to be as well maintained. I don't expect a newer release to be posted until the next OS X release, as the source was only published for 10.9, not iOS. Additionally, there's no mention of iOS 7 on http://www.opensource.apple.com/
I couldn't easily find the bug without more to go on: the code is spread across a few components, and I'm really not an expert in TLS. It appears to have been largely unchanged from roughly 2000 to 2006. TLS 1.2 brought quite a few changes, but it was neat to browse through the "FIXME" and "TODO" comments, as well as various diffs between releases. And neat to see how much code today still goes back to 1999-2001 -- sometimes all they did was add a 'k' in front of a few variable names, or delete the line in the first README saying the server code wasn't tested against Windows ;-)
It sounds like when 10.9.2 is released, or at worst when 10.10 comes along, you'll see a new push of code to the opensource site. We can all diff 55471 against what comes next to see the changes. (If someone's already running 10.9.2 and it's unaffected by the bug exhibited via curl, open /System/Library/Frameworks/Security.framework/Versions/Current/Resources/Info.plist and post the Bundle version.)
But, are you sure the 10.8.5 (-55179.13) version isn't in some sense a later, patched maintenance branch compared to a 10.9 (-55471) that might have been frozen earlier? The release dates are very close (~2013-10-03 for 10.8.5; ~2013-10-22 for 10.9), and they might have already been separate branches.
Good point. I just checked that file in every release since it was open-sourced in 10.8, and it stays exactly the same throughout all of 10.8.x (10.8 - 10.8.5), only changing in 10.9. (You can check for yourself here: http://opensource.apple.com/)
It doesn't seem like that's the case here, but obviously we're still missing any changesets that may have been committed between 10.8.5 (Security-55179.13) and 10.9 (Security-55471). It'd be really interesting to do a git-blame on that file.
EDIT: Nevermind, that file wasn't open-sourced in 10.8. It's actually really old. (Look for directories starting with libsecurity_ssl in pre-10.8 OS X versions.) Didn't find anything particularly interesting in the old versions though.
At that point the variable 'err' is still zero (noErr). So the function returns noErr, the caller thinks everything is good, and the communication is allowed.
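For reference, the relevant stretch of SSLVerifySignedServerKeyExchange in Apple's published sslKeyExchange.c looks like this (lightly trimmed; the second, duplicated goto is the bug):

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;    /* always taken, with err still 0 from the call above */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...
    fail:
        SSLFreeBuffer(&signedHashes);
        SSLFreeBuffer(&hashCtx);
        return err;

The elided part is the actual signature check (sslRawVerify), which gets skipped along with everything else after the stray goto.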
From a different point of view, part of the problem is the undefined state of the code at the "fail" label.
Execution will arrive there somehow, but the 'how' is unclear. The word "fail" implies you should reach that point only if there was an error, but that is a bad assumption in this case.
If the real answer to 'how did we get here?' were checked, the bug could not hide in that undefined state. A dangling goto could then never produce a false positive, and a false negative will get someone's attention when their web page doesn't load.
Something like this could remove the undefined state:
    goto pass;
    fail:
        if ( err == 0 ) {
            assert( err != 0 ); // BUG! variable must contain an error number
            err = kSomeAppropriateErrorNumber;
        }
    pass:
        SSLFreeBuffer(&signedHashes);
        SSLFreeBuffer(&hashCtx);
        return err;
Haha, love the whitespace comment. This also makes you wonder about the static source-code analysis done at Apple. Surely static tools would have picked this up, no...?
Clang currently does not warn about this, but I'd wager that Xcode will boast a feature in the next version that detects dead code due to early returns/gotos.
Clang may not warn about whitespace issues, but it certainly does warn about unreachable code if -Wunreachable-code is on. This code would not compile in my project, because the unconditional goto leaves the following code unreachable.
Really? I did something in Python for the first time a while ago, and indentation as the code block is something I find very elegant. I can't fathom why people would find something wrong with it.
It's relatively harder to refactor code: you can't just select between { and }. Otherwise, no idea. Python is my weapon of choice, so I may be biased.
Here's a little bit of a conspiracy theory for you. There's an old saying: "there are no coincidences on Wall Street". Not with the amounts of money that are involved.
I think something similar should apply to critical security code. This reminds me of when someone tried to add the following to the Linux kernel:
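That was the 2003 incident where two lines appeared in wait4() on a compromised CVS mirror of the kernel tree; as widely reported at the time, the added check was:

    if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
            retval = -EINVAL;

Note the single '=': disguised as a validity check for a bogus flag combination, it silently sets the caller's uid to 0 (root) whenever those flags are passed. It was reportedly caught only because the change didn't correspond to anything in the primary BitKeeper tree.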
True, but in languages with operator overloading you (may) end up with other issues. Unless you look at the source you have no idea what the '==' operator will do.
Because it is often really convenient, particularly with a simple macro preprocessor like cpp but in other circumstances too. I like gcc's approach of forcing you (if you have the right warning enabled) to put an extra pair of unnecessary parentheses around assignments in expressions.
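A minimal sketch of how that looks in practice (read_byte is a hypothetical helper; compile with gcc -Wall):

    extern int read_byte(void);

    int demo(void) {
        int err;
        if (err = read_byte())    /* warning: suggest parentheses around
                                     assignment used as truth value */
            return err;
        if ((err = read_byte()))  /* the doubled parentheses tell the compiler
                                     the assignment is intentional: no warning */
            return err;
        return 0;
    }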
Good question. Python doesn't even allow assignment in an expression, but for this reason I put the constant on the left side of the comparison no matter what language I use.
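For example (a sketch; checkThing is a stand-in):

    OSStatus err = checkThing();
    if (noErr == err) {
        /* constant on the left: mistyping '==' as '=' here, i.e.
           `if (noErr = err)`, is rejected by the compiler ("lvalue
           required") instead of silently compiling as an assignment. */
    }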
Don't C compilers warn about code that is never executed?
I'm actually pretty impressed that this bug was not caught before. The compiler warning is just one thing that came to mind, but test coverage should have been the strongest hint. A linter could also have made things easier for code review. These parts of the code should probably follow stricter coding guidelines.
> Don't C compilers warn about code that is never executed?
Not by default. On both GCC and Clang this kind of error isn't caught using -Wall or even -Wextra -- you have to explicitly opt in using -Wunreachable-code. It'd really be nice if that changed, and Clang basically had close to -Weverything on by default and required you to opt out, preferably with something like `#pragma ALLOW_SHODDY_<category>` per file, to put some pressure on C programmers to fix errors rather than just continue ignoring them.
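Incidentally, a per-file escape hatch of roughly that shape already exists in both compilers, so the change would mostly be about the defaults:

    /* opt out of one warning category for the rest of this file */
    #pragma GCC diagnostic ignored "-Wunreachable-code"

(Clang accepts the GCC spelling as well as its own #pragma clang diagnostic.)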
In the latest GCC releases, -Wunreachable-code has been removed (the option still exists but is silently ignored for compat with existing Makefiles), as its output varied so much between releases (depending on what got optimized away).
I'd hope after this someone brings it back in some form – it's an incredibly useful tool and a diversity of implementations would be great for security-critical code.
You're 100% right: LLVM should fire a dead-code warning here. However, the later code is used as a jump target for a goto, so it's quite hard for the compiler to infer this (or is it because it's in a different scope?).
Either way, definitely avoidable.
Looks typical of a merge cock-up to me: the indentation is preserved, suggesting it's a duplicated line, or that there was an if statement on the previous line that got removed.
Does anyone else look at this file and suddenly have a greater appreciation that their company uses a cumbersome, time-consuming code review process which enforces an annoying, rigid code style?
We've switched to doing pull requests from branches within our own repo, because many of us work remotely and we want the eyeballs. Your teammates have to approve the code, merge it, and close your branch; you can't do it yourself.
I thought I would hate this model but I absolutely love it. It's the next best thing to pair programming and somewhat less exhausting.
Conspiracy theories aside, every programmer has probably made that mistake. The better ones learn to just avoid if statements without blocks ;)
But one thing that pisses me off is that they go and implement SSL and don't have any automated tests for it!!
For a company of Apple's size, that can only be called grossly negligent. There is no excuse. And they probably don't have any automated testing for most of their other code either.
What I would expect is that SSL code is tested against all the different ways of spoofing a certificate -- and in an automated manner, on every build.
My sentiments exactly. For them to have a piece of code that returns OK/notOK, in SSL of all things, and not have tests for both branches is inexcusable.
Not only is this code untested, obviously, but the calling code (which should have two branches based on the result of this call) must be untested too. Bleagh!
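To make that concrete, here's a self-contained toy model of the control flow (not Apple's actual code) together with the two-branch test that would have caught the bug the first time it ran:

    #include <assert.h>
    #include <stdio.h>

    typedef int OSStatus;
    enum { noErr = 0, errVerifyFailed = -1 };

    /* toy stand-in for the buggy verify routine */
    static OSStatus verify(int hash_ok, int sig_ok) {
        OSStatus err = noErr;
        if (!hash_ok) { err = errVerifyFailed; goto fail; }
            goto fail;                   /* the duplicated goto */
        if (!sig_ok)  { err = errVerifyFailed; goto fail; }
    fail:
        return err;                      /* still noErr on the bad-signature path */
    }

    int main(void) {
        assert(verify(1, 1) == noErr);   /* happy path: passes */
        assert(verify(1, 0) != noErr);   /* negative path: this assert fires,
                                            exposing the bug immediately */
        puts("both branches behave");
        return 0;
    }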
It's not a mistake, though. The second goto line was added separately.
I could see it if they were copying & pasting goto statements all over the place and happened to paste one twice by accident. But that's not how it occurred: the second one was added in a separate commit.
Of all the comments, I think yours is most to the point:
we cannot rely on developers not making mistakes, but some kind of process (tools, QA testing) should have caught an error like this one.
Oops. I was doing this from an iPad, and iOS Safari is extremely bad at rendering source code (it's plain text, so it wraps it; it also chooses an incredibly large font size, so lines wrap at 50 characters or so in portrait mode). That made me bail out early, after reading this and not spotting the SRVR = SERVER abbreviation:
I wonder about the code style, too, but on a different level.
You don't need curly braces (though surely you should always use them), whitespace sensitivity, or source-code indentation beautification if you use the appropriate programming technique for this situation.
Such endless error checking followed by releasing of resources at the end is a case for try / catch / finally.
I wonder when people will start using decent languages for important code.
CloudFlare just open-sourced "Red October", which implements the two-man rule for any cryptographic checkout; it could be really good for code check-ins too, to catch things like this.
Damn, what kind of half-assed developers do they have at Apple these days? They broke 2 of my (admittedly many) cardinal rules of C development:
1 - There is never ever a valid reason for using goto. It should not be part of the language.
2 - Always enclose statement blocks in {}, even if it's only a single line.
This is so bad that it's hard to imagine how it could have escaped notice until now. Apple really needs to beef up its security competence. Let's hope that malevolent hackers were similarly asleep.
Because it's not a blatant regression; it's a complicated bug that requires a custom-patched TLS stack on the server to be exploited. As Adam Langley put it:
> A test case could have caught this, but it's difficult because it's so deep into the handshake. One needs to write a completely separate TLS stack, with lots of options for sending invalid handshakes. In Chromium we have a patched version of TLSLite to do this sort of thing but I cannot recall that we have a test case for exactly this. (Sounds like I know what my Monday morning involves if not.)
The really scary thing is that there is basically no good testsuite for SSL/TLS in existence. I would not be surprised if other stupid bugs showed up in other implementations given one…
This is a good example of why static analysis is so useful. Testing this in QA is a non-trivial problem but simply adding a compiler flag would report it.
Odd. Apple updated iOS 7 and iOS 6 but didn't update Secure Transport in iOS 5? I wonder if the bug was introduced between iOS 5 and 6.
Background on Secure Transport:
"At the bottom of the TLS stack on both iOS and Mac OS X is a component known as Secure Transport. Secure Transport maintains a per-process TLS session cache. When you connect via TLS, the cache stores information about the TLS negotiation so that subsequent connections can connect more quickly. The on-the-wire mechanism is described at the link below.
FWIW, the first time I went to https://notinthecert.imperialviolet.org in Mobile Safari on iOS 5.1.1, I got a "Cannot Verify Server Identity" error with Cancel, Details, and Continue buttons. After pressing Continue, it remembered my choice and did not alert me the second time.
Looks like they weren't checking the hostname in the certificate. This would allow anyone with a certificate signed by a trusted CA to do a MITM attack on iOS devices. Very very bad vulnerability.
It seems to be a bit more than that, as I've verified that iOS 7.0.4 and Mac OS X 10.9.1 both refuse to connect to a server with a certificate for a different hostname. Sounds like verification is somehow different when connecting to a raw IP address. Safari treats that case differently, anyway: if I try to connect to a hostname that's different from what the certificate says, it simply refuses to establish the connection at all. If I try to connect to the raw IP address, it says the certificate is wrong, but gives me the choice to ignore the error and proceed anyway. Odd stuff.
Looking at the code, this bug probably doesn't happen on connections which use TLS 1.2, which any properly configured server should support these days. (There's a separate codepath for TLS 1.2 connections.) Note that this doesn't provide any protection against an attacker exploiting the vulnerability, since they get to choose what TLS version is used.
Potentially. This exploit was known to Apple for some period of time; if the NSA has access to the internal Apple bug tracker, then they could certainly exploit the bug.
Not at all: that's the bug. It's not properly verifying that the other end of a TLS session is the entity able to sign with the private key declared in the certificate.
Most likely hrrsn was referring to code-signing keys. Even if you can successfully MITM a software update connection, iOS won't run your trojan unless it's got a valid signature.
Of course the jailbreaking community knows well that there have been many ways around that...
If I believe [1], 4% of all iOS devices are still on versions earlier than 6 and will not be patched for this specific issue. This is pretty severe. I wonder (but presume not) whether Apple is going to issue patches for earlier versions.
Really old devices are probably out of luck, though. I think that would encompass the original iPhone, the 3G, the corresponding iPods Touch, and (probably most importantly) the first generation iPad. That's assuming, of course, that the bug is in iOS 4/5 in the first place, but if it dates back to iOS 6 I'd give good odds that it dates back farther still.
If 96% of the installed base gets this patched more or less immediately, the situation is endlessly better than a similar one would be for Android, given most manufacturers' and operators' lack of interest in providing software updates.
Sounds like a classic Man-in-the-Middle (MITM) attack.
Just a guess, but from the short description I suspect that if you have control over DHCP you can get iOS to use your proxy. From there you can use something like mitmproxy (http://mitmproxy.org/) to forge SSL certificates on the fly and intercept and decrypt SSL traffic without any warnings showing up on the iOS device.
You can do that, but you'll be throwing certificate errors everywhere if they're self-signed. By the sounds of it, this is a bypass or a method of getting around the CA altogether.
In this case Apple is not performing the domain-validity checks on the presented cert. This allows an attacker performing a MITM attack to present a valid cert for another domain and establish an SSL connection with the victim.
Wow. Maybe the inconsistent indentation and brackets-optional formatting helped the bug both arrive and persist?
Perhaps a preferable practice for security-conscious code would be to set a success value only after all checks have passed, rather than trusting intervening logic to reset a default-success value to an error value before returning.
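A sketch of that inversion, using toy types rather than Apple's real ones:

    typedef int OSStatus;
    enum { noErr = 0, errSSLProtocol = -9800 };

    static OSStatus verify(int hash_ok, int sig_ok) {
        OSStatus err = errSSLProtocol;   /* assume failure by default */

        if (!hash_ok)
            goto fail;
        if (!sig_ok)
            goto fail;

        err = noErr;                     /* success is set in exactly one place,
                                            after every check has passed */
    fail:
        return err;
    }

With this shape, a stray duplicated `goto fail;` produces noisy connection failures (immediately noticed) instead of silent acceptance.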
So, I'm not sure about that one. Apparently s_client ignores the error and completes the connection because it's intended to be used for debugging.
> Currently the verify operation continues after errors so all the problems with a certificate chain can be seen. As a side effect the connection will never fail due to a server certificate verify failure.
I don't know what you think that pastebin shows, but that error is not specific to OS X or to ssl.apple.com. OpenSSL is failing to validate the server certificate because you forgot to specify the -CAfile option.
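For anyone reproducing this, pointing s_client at a trust store explicitly looks like the following (the bundle path is illustrative):

    openssl s_client -connect ssl.apple.com:443 -CAfile /path/to/ca-bundle.pem

With a CA file supplied, the "unable to get local issuer certificate" error goes away and the verify return code reflects the certificate itself -- though, per the man page excerpt above, s_client still completes the connection either way.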
Oh wow, you're right, sorry. I saw the error code at the top, and missed the fact that it was reporting success anyway at the bottom. That's... pretty terrible.
Yeah, the biggest problem I see with mobile devices, and one that desktop applications have less often, is crypto. It will get better as time goes on.
Holy shit!
So they really are not checking the CN, and have known about it since 2013-11-28.
I've now lost the last bit of respect I had for Apple (which was mostly for building WebKit).
(I deleted my GP comment, because it's pretty much obsolete now with your full disclosure. Thanks!)
Wow, this sounds bad. It sure would be nice if Apple gave us a little more information, like when the bug was introduced and what exactly was not being verified. Until we get more, you should assume the worst: that all your iOS TLS traffic might be compromised.
I'll manage by installing an OS that is:
1. audited by a community of volunteers I generally trust, and
2. auditable (read: modifiable) by me: I can compile it from scratch.
Apple's OS offerings provide neither 1 nor 2.
But, go figure, Apple's OS offerings are based on OS code that satisfies both 1 and 2.
Like a "Linux distro", Apple gives me a lot of stuff I do not want or need, and makes it very difficult if nt impossible to remove.
I want a very minimal BSD or Plan9 OS running on my Apple hardware.
And nothing more.
Hardware we purchase should come with full documentation for writing drivers.
If enough users demand this, maybe someday it will.
Since the source code is available, might it be possible to produce a hot patch to the binary, so that those of us running Mavericks won't have to go through the next few days or weeks with our pants down? It would be a simple matter of finding the JMP instruction generated by the second goto and replacing it with no-ops. How hard could it be, at least for someone who actually knows their way around OS X binary file formats?
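Conceptually, at least (byte values below are the standard x86 encodings; the actual offsets would have to be located in the shipped binary):

    /* an unconditional goto typically compiles to one of:
         EB xx             jmp short (2 bytes)
         E9 xx xx xx xx    jmp near  (5 bytes)
       overwriting each byte of the *second* jump with 0x90 (nop)
       would make execution fall through to the next check instead
       of jumping to the fail label with err still noErr. */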
It's actually the reverse: Safari at least performs additional checks and does not appear to be affected, which would explain why this wasn't noticed faster. The big question is whether those checks are specific to Safari or to a higher-level API which most Cocoa apps would use.
"congrats to the Apple iOS team on adding SSL/TLS hostname checking in their latest update! very cool feature."
https://twitter.com/will_sargent/status/436985812878491648