About the security content of iOS 7.0.6 (support.apple.com)
205 points by davepeck on Feb 21, 2014 | hide | past | favorite | 145 comments



Heh...

"congrats to the Apple iOS team on adding SSL/TLS hostname checking in their latest update! very cool feature."

https://twitter.com/will_sargent/status/436985812878491648


Comedic derision appreciated :)

However, this is a pretty damn serious oversight.

I've just shut down my MacBook and picked up my ThinkPad.


Mac OS is not vulnerable.


It appears to be, per https://gist.github.com/rmoriz/fb2b0a6a0ce10550ab73 (and my own testing on OS X 10.9.1).


Not in 10.9.2 which is using the same curl version: http://pastebin.com/AZ38WYaB


Why wasn't it mentioned in Apple's release notes?


The patch isn't ready for OS X. It will be in the next minor OS update.

Bad Apple.


I'm able to reproduce your results with cURL. However, Safari on OS X correctly shows a warning. Can anybody explain that?


OK, the answer to that is at the end of Adam Langley's analysis: https://www.imperialviolet.org/2014/02/22/applebug.html

The lack of hostname checking for IP addresses in Apple's cURL is a completely different problem.


cURL uses OpenSSL, Safari uses Apple's Secure Transport.


That's not correct. The cURL version shipped with OS X uses SecureTransport.


No, but it says something about the quality I can expect from the black boxes that Apple provides me with. And it's not a good thing.


The lock screen has been vulnerable to a bypass exploit on several occasions, so caution is probably a good idea. Way too many times to give me any sort of confidence.


As it happens, Secure Transport (edit: on OS X) is open source. I just spent the last hour rummaging through source code ... It was later renamed to "libsecurity_ssl" when it landed on iOS. It's been around since OS X launched. They also have a folder with about four dozen regression tests and a test app, not that I've had the chance to inspect either that closely. The tarballs are spread across two folders in the tarballs index: Security (look for multi-MB file sizes) and libsecurity_ssl. The 5-digit numbers appear to be sequential revision IDs, any single-digit numbers seem historical. (E.g. libsecurity_ssl-6 is before libsecurity_ssl-10 rather than after libsecurity_ssl-55471.)

It appears they haven't posted newer source than this. The most recent timestamp I could find was Oct 11, 2013 in 55471, which corresponds to 7.0 and my 10.9 system has the same version number for Security.framework -- same bundle version of 55471 for 10.9.1 aka 13A581. Previous version numbers don't appear to be as well-maintained. I don't expect a newer release to be posted until the next OS X release, as the source was only published under 10.9, not iOS. Additionally, there's no mention of iOS 7 on http://www.opensource.apple.com/

I couldn't easily find the bug without more to go on, because the code is spread across a few components and really, I'm not an expert in TLS. It appears to have been largely unchanged from 2000-2006 or so. TLS 1.2 brought quite a few changes, but it was neat to browse through the lines of "FIXME" and "TODO" comments, as well as various diffs between releases. And neat to see how much code today still goes back to 1999-2001, sometimes all they did was add a 'k' in front of a few variable names or delete the line in the first README saying the server code wasn't tested against Windows ;-)

It sounds like when 10.9.2 is released, or at worst when 10.10 comes along, you'll see a new push of code to the opensource site. We can all diff 55471 against what comes next to see the changes. (If someone's already running 10.9.2 and its unaffected by the bug exhibited via curl, open /System/Library/Frameworks/Security.framework/Versions/Current/Resources/Info.plist and post the Bundle version.)


Take a look at http://opensource.apple.com/source/Security/Security-55471/l...

specifically check the function SSLVerifySignedServerKeyExchange

I leave the joy of spotting it to you. It is obvious, and if you know C you'll see it. (You don't need any knowledge of crypto.)


Here's a diff of that file from OS X 10.8.5 (Security-55179.13) to 10.9 (Security-55471): https://gist.github.com/alexyakoubian/9151610/revisions

Check line 631. It appears out of nowhere.


Good find!

But, are you sure the 10.8.5 (-55179.13) version isn't in some sense a later, patched maintenance branch compared to a 10.9 (-55471) that might have been frozen earlier? The release dates are very close (~2013-10-03 for 10.8.5; ~2013-10-22 for 10.9), and they might have already been separate branches.

(Is there a 10.8.4 version to compare?)


Good point. I just checked that file ever since it was open-sourced in 10.8 and it stays exactly the same throughout all of 10.8.x (10.8 - 10.8.5) and only changes in 10.9. (You can check for yourself here: http://opensource.apple.com/)

It doesn't seem like what you said is the case here but obviously we're still missing changesets that may have been committed between 10.8.5 (Security-55179.13) and 10.9 (Security-55471). It'd be really interesting to do a git-blame on that file.

EDIT: Nevermind, that file wasn't open-sourced in 10.8. It's actually really old. (Look for directories starting with libsecurity_ssl in pre-10.8 OS X versions.) Didn't find anything particularly interesting in the old versions though.


Thanks for clarifying. I know silly errors like this can slip in, but I hope Apple does a deep x-ray on all circumstances surrounding the change.


This bug of an extra duplicate line looks like a merge issue to me.


So, what does that mean, the "goto fail" portion of the code? It seems like it will jump there no matter what. What is the end outcome?

Thanks in advance


At that point the variable 'err' is still zero (noErr). So the function returns noErr, the caller thinks everything is good, and the communication is allowed.


Goto considered harmful, indeed!


Not so much a problem of `goto` per se; rather a problem of if conditions used without code blocks.

Others might say it's a problem of whitespace insensitive languages ;)


From a different point of view, part of the problem is the undefined state of the code at the "fail" label.

Execution will arrive there somehow, but the 'how' is unclear. The word "fail" implies you should reach that point only if there was an error, but that is a bad assumption in this case.

If the real answer to 'how did we get here?' was checked, then the bug could not hide in the undefined behavior. This would not allow a dangling goto to result in a false positive. A false negative will get someone's attention when their web page doesn't load.

Something like this could remove the undefined state:

      goto pass;
  fail:
      if (err == 0) {
          assert(err != 0);  // BUG! variable must contain an error number
          err = kSomeAppropriateErrorNumber;
      }
  pass:
      SSLFreeBuffer(&signedHashes);
      SSLFreeBuffer(&hashCtx);
      return err;


Haha, love the whitespace comment. This also makes you wonder about the static source-code analysis done at Apple. Surely static tools would have picked this up, no...?


Clang currently does not warn about this, but I'd wager that Xcode will boast a feature in the next version that detects dead code due to early returns/gotos.


Clang may not warn about whitespace issues, but it certainly does warn about unreachable code if -Wunreachable-code is on. This code would not compile in my project because the unconditional goto leaves the following code unreachable.


Even if there were braces, the bug would still exist if the extra goto was outside the braces.

It might be more noticeable, but then, the original bug existed because no one noticed.


    return ERRCODE;
would have produced the exact same bug.


People bitch about indentation in Python. But in this case it would have prevented the bug!

Of course Python doesn't even have a goto, so the code would have needed to be structured differently to start with.

If I were a betting man I'd wager that this bug wasn't really an accident. It would be really interesting to check commit logs to see the history.


> People bitch about indentation in Python

Really? I did something in Python for the first time a while ago, and indentation-as-code-block is something I find very elegant. I can't fathom why people would find something wrong with this.


It's relatively harder to refactor code - you can't just select between { and }. Otherwise, no idea - Python is my weapon of choice, so I may be biased.


Can't really write one-liners. And the otherwise great interactive shell is not as good to use (not that bracketed blocks would help).


A style guideline of using curly braces with ALL if statements, even one-liners, would have gone a long way toward preventing this.


Code reviews and comprehensive test suites for critical code would have prevented this and many other kinds of mistakes besides.


Here's a little bit of a conspiracy theory for you. There's an old saying: "there are no coincidences on Wall Street". Not with the amounts of money that are involved.

I think something similar should apply to critical security code. This reminds me of when someone tried to add the following to the Linux kernel:

    if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
        retval = -EINVAL;

Oops, is that "uid == 0", or is that "uid = 0"? Yet another "typo" that couldn't happen in Python.


How could this not happen in Python?


Python does not allow an assignment to occur within an expression. They deliberately chose that restriction, to avoid that hard-to-see bug.


True, but in languages with operator overloading you (may) end up with other issues. Unless you look at the source you have no idea what the '==' operator will do.


I have always wondered why all languages don't enforce this.


Because, in C for example, such an incompatible change would alienate most developers used to the very idiomatic

        if((val = function(args)) != expected)
            return val;


Because it is often really convenient, particularly with a simple macro preprocessor like cpp but in other circumstances too. I like gcc's approach of forcing you (if you have the right warning enabled) to put an extra pair of unnecessary parentheses around assignments in expressions.


A better solution is gcc's which forces you (if you enable the restriction) to put an extra pair of parens around inline assignments.


The parentheses in this attack are sufficient to prevent that warning.


Good question. Even though Python doesn't allow assignment in an expression, for exactly this reason I put the constant on the left side of the comparison no matter what language I use.


Because assigning in if would be a syntax error in Python.


Don't C compilers warn about code that is never executed?

I'm actually pretty amazed that this bug was not caught before. The compiler warning is just one thing that came to mind, but test coverage should have been the strongest hint. A linter could also have made things easier for a code review. Probably these parts of the code should have a stricter coding guideline.


> Don't C compilers warn about code that is never executed?

Not by default. On both GCC and Clang this kind of error isn't caught using -Wall or even -Wextra - you have to explicitly opt-in using -Wunreachable-code. It'd really be nice if that changed and Clang basically had close to -Weverything on by default and required you to opt out, preferably with something like `#pragma ALLOW_SHODDY_<category>` per file to put some pressure on C programmers not to just continue ignoring errors rather than fixing them.


In the latest GCC releases, -Wunreachable-code has been removed (the option still exists but is silently ignored for compat with existing Makefiles), as its output varied so much between releases (depending on what got optimized away).


I'd hope after this someone brings it back in some form – it's an incredibly useful tool and a diversity of implementations would be great for security-critical code.


You are 100% right. LLVM should fire a dead-code warning here. However, later code is used as a jump target for goto, so it's quite hard for the compiler to infer this (or is it because it's a different scope?).

Either way, definitely avoidable.

Looks typical of a merge cock-up to me, as the indentation is preserved, suggesting it's a duplicate line, or there was an if statement on the previous line that was removed.


The basic blocks from just after the unconditional goto up to the "fail:" label are all unreachable, so a dead code warning could be issued.

> However later code is used as a jump target for goto so it's quite hard for the compiler to infer this

Nope, because a jump target always starts a new basic block.


Looks to me like Apple aren't running code-coverage tests on their OS. That's very scary.


I agree that they should. This is one of the types of errors that lint was traditionally used to find.


(Spoiler) This is another reason why you should never use if statements without curly braces.

Thanks for the hint. It was amusing to spot the actual bug.


I partially agree. Curly braces won't fix a typo or a rushed code review.


Does anyone else look at this file and suddenly have a greater appreciation that their company uses a cumbersome, time-consuming code review process which enforces an annoying, rigid code style?


We've gone to doing pull requests from branches with our code within our own repo because many of us work remotely and we want the eyeballs. Your teammates have to approve the code and merge it and close your branch, you can't do it yourself.

I thought I would hate this model but I absolutely love it. It's the next best thing to pair programming and somewhat less exhausting.


That's hilarious. Took me a while as I was looking for something more subtle. Thanks for the clues.

Another clue: indentation fail.


Conspiracy theories aside, every programmer has probably made that mistake. The better ones learn to just avoid if statements without blocks ;)

But one thing that pisses me off is that they go and implement SSL and don't have any automated tests for it!!

For a company of Apple's size, that can only be called grossly negligent. There is no excuse. And they probably don't have any automated testing for most of their other code either.

What I would expect is that SSL code is tested with all the different ways of spoofing a certificate, and that in an automated manner, on every build.


My sentiments exactly. For them to have a piece of code that returns OK/notOK, in SSL of all things, and not have tests for both branches is inexcusable. Not only is this code untested, obviously, but the calling code (which should have two branches based on the result of this call) must be untested too. Bleagh!


It's not a mistake, though. The second goto line was added separately.

I could see it if they were copying & pasting goto statements all over the place and happened to paste it twice by accident. But that's not how it occurred. The second one was added in a separate commit.


Of all the comments, I think yours is most to the point: we cannot rely on developers not making mistakes, but some kind of processes (tools, QA testing) should have caught an error like this one.


For the likely error I spot, you don't even need to know C. Anything remotely Algol-like will do.

I wonder whether the software from the guys at viva64.com would spot that.


You don't need fancy software, compilers warn about this kind of thing.


Just eyeballs and a clue stick!

However, I can't see a valid case for this pattern, so a static-analyser rule to knock it on the head would be a good idea.


Oops. I was doing this from an iPad, and iOS Safari is extremely bad at rendering source code (it's plain text, so it wraps; it also chooses an incredibly large font size, so lines wrap at 50 characters or so in portrait mode). That made me bail out early, after reading this and not spotting the SRVR = SERVER abbreviation:

    clientRandom.data = ctx->clientRandom;
    clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
    serverRandom.data = ctx->serverRandom;
    serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
That is something that I think static analysis tools could signal. It would be a red herring, though.


Was there a more fundamental problem in the assumptions of the code?

The state of the operation is "success" (err==zero) until some step changes the state to "failure" (non-zero).

The state should be assumed to be "failure" until the last step changes it to "success".


Whoops. goto fail indeed


Just from looking the code it seems pretty sloppy, not what I would expect from security sensitive code.

Mixed tabs and spaces, inconsistent indentation, two empty lines in a row, sometimes "if (...)" and sometimes "if(...)".


I wonder about the code style, too, but on a different level. You don't need curly braces (though surely you should always use them), whitespace sensitivity, or source-code indentation beautification if you use the appropriate programming technique for this situation. Such endless error checking followed by releasing of resources at the end is a case for try / catch / finally. I wonder when people will start using decent languages for important coding.


CloudFlare just open-sourced "Red October", which implements the two-man rule for cryptographic operations, but it could be really good for code check-ins too, to catch things like this.

http://blog.cloudflare.com/red-october-cloudflares-open-sour...


Damn what kind of half-assed developers do they have at Apple these days? They broke 2 of my (admittedly many) cardinal rules of C development: 1 - There is never ever a valid reason for using goto. It should not be part of the language. 2 - Always enclose script blocks in {} even if it's only a single line.


What about these?

  - don't leave local variables uninitialized.
  - don't try to hand-optimize lines unnecessarily. 
  - don't mix assignments into boolean expressions. (Which is really an unneeded optimization.)


This does not look like a merge error. The added line stands alone and it's very well located for an exploit.


This is so bad that it is hard to imagine how it could have escaped notice until now; Apple really needs to beef up its security competence. Let's hope that malevolent hackers were similarly asleep.


Indeed. My first thought upon seeing this: "how in the FUCK did automated testing not catch this IMMEDIATELY!?"

Absolutely terrifying that crypto safety is such a low QA priority that something like this could ever leave the building.


Because it's not a blatant regression; it's a complicated bug that requires a custom-patched TLS stack on the server to be exploited. As Adam Langley put it:

> A test case could have caught this, but it's difficult because it's so deep into the handshake. One needs to write a completely separate TLS stack, with lots of options for sending invalid handshakes. In Chromium we have a patched version of TLSLite to do this sort of thing but I cannot recall that we have a test case for exactly this. (Sounds like I know what my Monday morning involves if not.)


Except: ensuring that the server is signing with the right, certificate-certified private key is the major thing that TLS is supposed to provide.

So no matter how strange or malicious the server-side stack would need to be... not having a test for such a deviation is a major oversight.


The really scary thing is that there is basically no good testsuite for SSL/TLS in existence. I would not be surprised if other stupid bugs showed up in other implementations given one…


This is a good example of why static analysis is so useful. Testing this in QA is a non-trivial problem but simply adding a compiler flag would report it.


Same with openssh on OS X. For some unknown reason, Apple chose to disable ECDSA on ssh. You can generate the keys but you can't use them. Try it.


Odd. Apple updated iOS 7 and iOS 6 but didn't update Secure Transport in iOS 5? I wonder if the bug was introduced between iOS 5 and 6.

Background on Secure Transport:

"At the bottom of the TLS stack on both iOS and Mac OS X is a component known as Secure Transport. Secure Transport maintains a per-process TLS session cache. When you connect via TLS, the cache stores information about the TLS negotiation so that subsequent connections can connect more quickly. The on-the-wire mechanism is described at the link below.

http://en.wikipedia.org/wiki/Transport_Layer_Security#Resume...

"This presents some interesting gotchas, especially while you're debugging." More at: https://developer.apple.com/library/ios/samplecode/AdvancedU...


FWIW, the first time I went to https://notinthecert.imperialviolet.org in Mobile Safari on iOS 5.1.1, I got a "Cannot Verify Server Identity" error with Cancel, Details, and Continue buttons. After pressing Continue, it remembered my choice and did not alert me the second time.


Apple does not support iOS versions two releases back. They expect you to upgrade to a newer, slower version of iOS every time one is released.

You won't be able to get the new iOS 6.1.6 on your iPhone unless it supports iOS 6 but not iOS 7; it is basically only for the 4th-gen iPod touch.


Seems the bug was introduced in iOS 6. https://gotofail.com has been offering a test for the bug, and my iOS 5 devices pass the test.


Looks like they weren't checking the hostname in the certificate. This would allow anyone with a certificate signed by a trusted CA to do a MITM attack on iOS devices. Very very bad vulnerability.


It seems to be a bit more than that, as I've verified that iOS 7.0.4 and Mac OS X 10.9.1 both refuse to connect to a server with a certificate for a different hostname. Sounds like verification is somehow different when connecting to a raw IP address. Safari treats that case differently, anyway: if I try to connect to a hostname that's different from what the certificate says, it simply refuses to establish the connection at all. If I try to connect to the raw IP address, it says the certificate is wrong, but gives me the choice to ignore the error and proceed anyway. Odd stuff.


Looking at the code, this bug probably doesn't happen on connections which use TLS 1.2, which any properly configured server should support these days. (There's a separate codepath for TLS 1.2 connections.) Note that this doesn't provide any protection against an attacker exploiting the vulnerability, since they get to choose what TLS version is used.


Maybe due to lots of appliances that run HTTPS with a self-signed cert to "productname", but are often accessed via an IP?


Adam Langley has written up the details of the issue here: https://www.imperialviolet.org/2014/02/22/applebug.html


Just to be clear:

/usr/bin/curl on Mavericks suffers from this problem:

    ~ /usr/bin/curl https://imperialviolet.org:1266
    If you can see this message then[...]
Keep that in mind when you download an installer using curl and pipe it to bash.


Unless curl uses Secure Transport on OS X, then it's a separate bug in curl/libcurl.

Edit: apparently curl does use Secure Transport on OS X as of 7.27:

http://curl.haxx.se/mail/lib-2012-06/0334.html

http://daniel.haxx.se/blog/2012/06/28/darwin-native-ssl-for-...


So, was this plus automatic updates over SSL one of the ways the NSA could access "any iOS device"?


Potentially. This exploit was known to Apple for some period of time; if the NSA has access to the internal Apple bug tracker, then they could certainly have exploited the bug.


They'd also need access to Apple's private keys.


Not at all: that's the bug. It's not properly verifying that the other end of a TLS session is the entity able to sign with the certificate-declared private key.


Most likely hrrsn was referring to code-signing keys. Even if you can successfully MITM a software update connection, iOS won't run your trojan unless it's got a valid signature.

Of course the jailbreaking community knows well that there have been many ways around that...


No.


If I believe [1], 4% of all iOS devices are still on versions earlier than 6 and will not be patched for this specific issue. This is pretty severe. I wonder (but presume not) whether Apple is going to issue patches for earlier versions.

[1] http://appleinsider.com/articles/13/12/31/ios-7-now-installe...


iOS 6 gets a fix too: http://arstechnica.com/apple/2014/02/apple-releases-ios-7-0-...

Really old devices are probably out of luck, though. I think that would encompass the original iPhone, the 3G, the corresponding iPods Touch, and (probably most importantly) the first generation iPad. That's assuming, of course, that the bug is in iOS 4/5 in the first place, but if it dates back to iOS 6 I'd give good odds that it dates back farther still.


If 96% of the installed base gets this patched more or less immediately, the situation is endlessly better than a similar situation would be for Android, given most manufacturers and operators' lack of interest in providing software updates.


Sounds like a classic Man-in-the-Middle (MITM) attack.

Just a guess, but from the short description I suspect if you have control over DHCP you can get iOS to use your proxy. From there you can use something like mitmproxy ( http://mitmproxy.org/) to forge SSL certificates on the fly and intercept and decrypt SSL traffic without any warnings showing up on the iOS device.


You can do that but you'll be throwing certificate errors everywhere if they're self signed. By the sounds of it this is a bypass or method of getting around the CA altogether.


Yes, normally certificate errors would be thrown.

In this case Apple is not performing the domain-validity checks on the presented cert. This allows an attacker performing an MITM attack to present a valid cert for another domain and establish an SSL connection with the victim.


Ugh, that's unbelievably awful.


SSLVerifySignedServerKeyExchange in http://opensource.apple.com/source/Security/Security-55471/l...

If you want to see my favorite SSL bug ever.


Oh god, that's horrifying. Get ready to check certificate validity, then report success before actually checking validity!


Wow. Maybe the inconsistent indentation and brackets-optional formatting helped the bug both arrive and persist?

Perhaps a preferable practice for security-conscious code would be to set a success value only after all checks have passed, rather than trusting intervening logic to reset a default-success value to an error value before returning.


It's not just cURL. It's much bigger, and much worse.

http://pastebin.com/Yb6pUvr6


So, I'm not sure about that one. Apparently s_client ignores the error and completes the connection because it's intended to be used for debugging.

> Currently the verify operation continues after errors so all the problems with a certificate chain can be seen. As a side effect the connection will never fail due to a server certificate verify failure.

https://www.openssl.org/docs/apps/s_client.html

https://www.mail-archive.com/openssl-users@openssl.org/msg71...


The s_client connection continues but should still report a verify error. On Linux:

http://pastebin.com/QWpSrR5p


I don't know what you think that pastebin shows, but that error is not specific to OS X or to ssl.apple.com. OpenSSL is failing to validate the server certificate because you forgot to specify the -CAfile option.

EDIT: I'm an idiot, see below.


Except it isn't failing.


Oh wow, you're right, sorry. I saw the error code at the top, and missed the fact that it was reporting success anyway at the bottom. That's... pretty terrible.


Doesn't affect any app using SSL pinning.

I have been trying to work on an implementation of TACK to mitigate headaches involved in pinning. Wish I had more free time.

http://tack.io/index.html


Yeah, the biggest problem I see with mobile devices, which desktop applications have less often, is crypto problems. It will get better as time goes on.


Bad as it may be, the fix Apple is pushing will reach roughly 95% of the installed base more or less immediately.

I can't help wonder how much worse a similar situation would be for the Android ecosystem, with the poor update track record of operators and OEMs.


Google would just push an updated Chrome app which soft fixes it for many use-cases.


Gruber, is that you?


[deleted]


https://gist.github.com/rmoriz/fb2b0a6a0ce10550ab73

http://i.imgur.com/CoALymQ.png

(I've not checked that on iOS or Apple TV, just on OS X. Maybe it's another issue, but the update description fits too well ;-)


Holy shit! So they are really not checking the CN, and have known it since 2013-11-28. I've lost the last bit of respect I had for Apple (that was mostly for building WebKit) now.

(I deleted my GP, because it's pretty much obsolete now with your full disclosure. Thanks!)


FYI: This curl bug is totally unrelated to the bug just patched.


If they aren't, it's quite a coincidence ;-) Curl relies on third-party libraries for SSL.

http://daniel.haxx.se/blog/2012/06/28/darwin-native-ssl-for-...

Secure Transport by Apple is also known as Darwin/SSL.


Interesting: bare IP addresses as you demonstrated but they do still appear to catch mismatched hostnames:

https://213.133.107.227.xip.io/

Still an epic QA failure but much less of a threat if it doesn't allow arbitrary MITM attacks.


Doesn't seem to do it in 10.9.2 which is using the same curl version: http://pastebin.com/AZ38WYaB


So OS X as well as iOS?


I think so but I only discovered the OSX issue.


Good find. What about 10.8 and earlier? Have you heard anything about iOS6?


Nope. I'm just a Ruby developer, not a security researcher :)


Wow, this sounds bad. It sure would be nice if Apple gave us a little more information, like when the bug was introduced and what exactly was not being verified. Until we get more information, you should assume the worst: all your iOS TLS traffic might be compromised.


Maybe Apple should stick to hardware.

Just sell me the beautiful hardware.

I'll manage installing an OS that's 1. audited by a community of volunteers I generally trust and 2. can be audited (read: modified) by me: I can compile it from scratch.

Apple's OS offerings provide neither 1 nor 2.

But, go figure, Apple's OS offerings are based off of OS code that is both 1 and 2.

Like a "Linux distro", Apple gives me a lot of stuff I do not want or need, and makes it very difficult if not impossible to remove.

I want a very minimal BSD or Plan9 OS running on my Apple hardware.

And nothing more.

Hardware we purchase should come with full documentation for writing drivers.

If enough users demand this, maybe someday it will.


What's stopping you from installing FreeBSD on Apple's hardware?

Everything has bugs, community volunteers are not infallible. It's good to be able to audit the things you use, but most people can't or won't do it.


Since the source code is available, might it be possible to produce a hot patch to the binary so that those of us running Mavericks won't have to go through the next few days or weeks with our pants down? It would be a simple matter of finding the JMP instruction generated by the second GOTO and replacing it with no-ops. How hard could it be, at least for someone who actually knows their way around OS X binary file formats?


"It would be a simple matter of finding the JMP instruction generated by the second GOTO and replacing it with no-ops."

Only if they used a truly stupid compiler. Chances are the unreachable code behind it got optimized away.


Oh, right. (duh)


Does anyone know if this is an issue at the Foundation API level (affecting any apps using NSURLConnection, for example) or if it's a Safari issue?


It's actually the reverse: Safari at least performs additional checks and does not appear to be affected, which would explain why this wasn't noticed faster. The big question is whether those checks are specific to Safari or to a higher-level API which most Cocoa apps would use.


How do you not notice this? Maybe a lot of SSL interception is done with self signed certs and not ones that are signed by a valid CA?


Does this contain patches for the jailbreak?


No, apparently. See https://twitter.com/winocm/status/436923366147375104 and https://twitter.com/winocm/status/436923608762695680 for a jailbreak developer saying that it doesn't seem to fix any jailbreak bugs. He's not one of the developers of the iOS 7 jailbreak, but he knows what he's talking about.



Are there any known software updaters that rely only on Secure Transport hostname checking for update authenticity?

If you use NSS-based browsers exclusively on OS X, can you have been pwned via fake updates due to this bug?


lol oops



