I'm not going to comment on the linked article, other than to say that I think it contains serious mistakes in describing what OCSP is for, along with errors in other statements.
In the background there is a war going on, but it isn't what you think. It is a war between malware creators and OS creators.
From my perspective it goes like this:
To "identify" malware you need signatures. Signatures need valid certificates. Signing keys get lost (or worse stolen), bad actors need to get identified and their identity revoked.
If you don't have signatures you will end up with polymorphic malware. (Brew, pip, Ruby gems, and npm make me uneasy with how much external trust they require!)
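To make that chain concrete, here is a minimal sketch (Python, hypothetical file paths, assuming an RSA developer certificate - not Apple's actual implementation) of the kind of check an OS performs before trusting a binary. The signature must verify against a certificate, and even then the certificate's validity and revocation status still need checking, which is where OCSP enters:

```python
# Minimal sketch, not Apple's actual mechanism: verify a detached
# signature on a binary against a developer certificate (assumes RSA).
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_binary(binary_path: str, sig_path: str, cert_path: str) -> bool:
    data = open(binary_path, "rb").read()        # hypothetical paths
    signature = open(sig_path, "rb").read()
    cert = x509.load_pem_x509_certificate(open(cert_path, "rb").read())
    try:
        cert.public_key().verify(signature, data,
                                 padding.PKCS1v15(), hashes.SHA256())
        return True   # signature OK; cert validity/revocation still unchecked
    except InvalidSignature:
        return False
```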
To stop persistence you need some system immutability and secure startup / boot.
If you don't have secure startup / boot, you will allow persistence in ways that may be impossible to remove.
If you don't have an encrypted disk protected by a passphrase, a lost device means your personal data is potentially now "public". If you don't also have the other protections above, a passphrase at this point may be meaningless.
I use an iPhone because it boots from ROM and a DFU Restore wipes all mutable firmware. The flash, where my data is stored, is encrypted. When I can, I will also use an Apple Silicon Mac, because it works the same way.
Hardware products and operating systems from other vendors and sources do not have these protections. Some attempt them, but the implementations are flawed or poor - I choose not to use them with my personal information.
For me, Apple products really are designed to protect my personal information.
I think this is complete FUD, to be honest. There is a war in the market, driven by economic incentives. There are some criminals, but they don't target your workstation specifically.
Previously, a hacker might have gotten access to your computer and maybe installed a keylogger, which could compromise you a great deal. Today, so much of your info is not on your local device that you have countless additional attack vectors. Where does people's info get leaked? Mostly in cloud services.
edit: And of course your info is not safe. Information about the programs you run gets sent to Apple. That is nothing other than a data leak.
You're absolutely right, and I use a Pixelbook and an iPad Pro for much the same reason. The cryptographic protections are great. (They'd be even better if I could blow my own bootrom CA into the fuses.)
The phone-home is the issue, however. I've long understood the issue with certificate validity periods and the tradeoffs between short notAfter/frequent reissue and revocation check intervals.
The side effect is that it functions as telemetry, regardless of what the original intent of OCSP is or was. Additionally, even though the OCSP responses are signed, it's borderline negligent that the OCSP requests themselves aren't encrypted, allowing anyone on the network to see what apps you're launching and when.
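As a rough illustration (Python, using the `cryptography` library, with hypothetical certificate paths), here is what an OCSP request carries - and since it travels as plain HTTP, any passive observer can decode it and recover the serial number of the certificate being checked:

```python
# Sketch of what travels in the clear: an OCSP request is plain HTTP and
# names the issuer and serial number of the certificate being checked.
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

# Hypothetical certificate files for illustration
cert = x509.load_pem_x509_certificate(open("developer_cert.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer_cert.pem", "rb").read())

builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
request = builder.build()

# Anyone sniffing the wire can decode the DER blob and recover the serial:
der = request.public_bytes(serialization.Encoding.DER)
decoded = ocsp.load_der_ocsp_request(der)
print(decoded.serial_number)  # identifies which certificate (i.e. which signer)
```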
Many things function as telemetry, even when not originally intended as so. The intelligence services that spy on everyone they can take advantage of this when and where it occurs, regardless of intent.
It's not worth putting everyone in a society under surveillance to defeat, for example, violent terrorism, and it's not worth putting everyone on a platform under the same surveillance to defeat malware. You throw out the baby with the bathwater when, in your effort to produce a secure platform, you produce a platform that is inherently insecure due to a lack of privacy.
I don't think that snooping on the connection explicitly reveals what app is being launched - it just reveals the identity of the developer that signed it. But I agree that if the developer only has one app, then it does sort of give the game away.
Yes, and DNS is another example of telemetry. What is Google doing with the data they get from their DNS services?
I have in the past examined other devices - a QNAP NAS, for example, phones home, sending the last part of the device MAC address. It did this once a minute from when it was turned on. I stopped using it - I do not know if this has changed in recent QNAP versions.
Having OCSP encrypted would cause a chicken-and-egg problem... OCSP is supposed to validate a certificate, but how do you check the validity of the certificate if the OCSP endpoint itself also requires validation?
You could ignore the validity of the TLS certificate when checking OCSP. That way, passive listeners are foiled, and only active MITM would be able to see which certificates you're checking. It's better than HTTP plaintext, which is how it works now.
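A rough sketch of that idea (Python with `requests`; the responder URL is illustrative, and `der_request` stands for a DER-encoded OCSP request):

```python
# Sketch of the suggestion above: send the OCSP query over TLS but skip
# certificate verification, so passive listeners see only ciphertext.
import requests

response = requests.post(
    "https://ocsp.example.com/",          # hypothetical OCSP responder
    data=der_request,                     # a DER-encoded OCSP request
    headers={"Content-Type": "application/ocsp-request"},
    verify=False,  # deliberately unverified to avoid the chicken-and-egg
)
# The OCSP *response* is itself signed by the CA, so tampering by an
# active MITM is still detectable when that signature is checked.
```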
Most of the bulk surveillance, pattern-of-life IC stuff is passive, not active.
Ultimately, though, I should be able to opt out of app/binary signing (and associated certificate checking) entirely if I so desire, ideally with a preference setting, or at least with Little Snitch. It looks like I'm going to have to compromise platform security overall to disable it, or use external network filtering hardware.
Additionally, it seems excessive to check every time. Checking once a day would be enough (if it needs to be global and immediate, Apple could push a kill hash to Gatekeeper, as I understand it). The volume of queries would greatly exceed the size of downloading CRL updates every day (or even every hour!). Indeed, OCSP stapling is essentially a cache of the signed proof of validity.
It also seems like a Bloom filter could be used instead.
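A toy sketch of that approach: ship clients a compact Bloom filter of revoked certificate serials once a day, so lookups stay local and only a rare false positive needs an online confirmation. (This illustrates the data structure only; it is not anything Apple actually does.)

```python
# Toy Bloom filter: a compact, downloadable set of *revoked* serials.
# Lookups are purely local; false positives are possible, misses are not.
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

revoked = BloomFilter()
revoked.add(b"serial:0xDEADBEEF")                    # hypothetical revoked serial
print(revoked.might_contain(b"serial:0xDEADBEEF"))   # True
print(revoked.might_contain(b"serial:0xCAFE"))       # False (with high probability)
```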
It really points to Apple being quietly satisfied to have this massive stream of usage data. And it's available to every ISP and snooper along the way, too.
>In the background there is a war going on, but it isn't what you think. It is a war between malware creators and OS creators.
No, this is _exactly_ the war we think. It is a war between Apple and malware over control of _my_ personal(!) computer.
At this point both malware and Apple leak personal data from _my_ personal computer. So from the perspective of controlling MY personal computer, they are both malware, and the differences are negligible if I do not even have a choice to install my own OS.
Secure boot is secure if I(!) can secure it, not Apple or any other external authority.
>For me, Apple products really are designed to protect my personal information.
Apple products are designed to take _control_ of your personal information away from you, and thus to control you and your behavior.
The computer is no longer a "Personal Computer" with all of these developments. It's simply a terminal of some big machine, where your 'personal' space isn't even really yours.
It is probably a good idea to recall what "Personal Computer" meant and what it should mean.
For me it is My control over My computer and My data, if I wish it, and this is a minimum for My privacy.
And privacy is a minimum for a society that respects the freedom and rights of the person.
How much private entities get to shape this stuff is a sensible concern but this sort of absolutism is how pretty much nothing actually works. You can't drive an unsafe car on public roads. You can't build an unsafe house, even on "your" land, etc. It's not crazy to regulate what kind of computer you can put on the public network.
It has nothing to do with absolutism; we are talking about the basic human right to privacy, massively disrespected recently. That fact doesn't make it less valuable, relevant, or required for a free society to function.
Maybe for you it's not even crazy to regulate what kind of paper a person uses for their own notes, because who knows, maybe they will bring those notes to the public one day. But for me this communist and totalitarian disrespect for personal rights and freedoms crosses a red line. I've been in such situations, and I saw the results of such disrespect in person, so I think those who are willing to attack personal freedoms should learn some history, and shouldn't be surprised at what people can do to protect their freedoms when words do not work.
No, you're trying to elevate some flaky security feature to a debate about human rights. It's not particularly meaningful or conducive to any kind of insight because once you make that jump, you just get to fulminate endlessly about principles and oppression. It's like starting to recite the Universal Declaration of Human Rights every time you stub your toe instead of just saying 'shit!' and thinking about maybe moving that table out of the way.
The broader topic is a debate about human rights that's actively being fought in elections, parliaments, and courts. The right to privacy and the right to repair, for example, and a nascent debate about the right to run whatever software you choose.
"Your computer isn't yours" is a powerful way to express the essence of those debates. Who has the right to do what they want with a computer, the company that built it or the person who owns it?
Software signing is admittedly a relatively minor battle in that war, but it's not completely separate from the broader issues.
>you're trying to elevate some flaky security feature to a debate about human rights.
It is not a "flaky security feature". It is the main thing that defines who controls the personal computer.
Accessing and controlling a personal computer with personal notes is the same as accessing and controlling personal notes written on paper.
There is no elevation here from my side; it is literally about the human right to privacy, and I am not the one who tried to elevate the issue to unrelated areas like cars and houses.
>It is the main thing that defines who controls the personal computer
It's a flaky feature that people had a workaround for in minutes. Nobody lost control of their computer.
>There is no elevation here from my side; it is literally about the human right
My secondary school had a bigass Universal Declaration of Human Rights mounted on a wall behind plexiglass. Whenever I got detention I'd protest citing Article 9. At 12, this seemed both funny and trenchant commentary on my predicament.
So what? There are risks everywhere in life. You buy a new car and you decide to go on a road trip and there is a dangerous road ahead. The car company should stop your car remotely?
When you are driving a car, the dangers you encounter are immediately perceptible, and your driver’s license means you specifically trained for and are certified as able to deal with them.
With our laptops full of PII, corporate secrets, cryptocurrency wallet keys and so on, without proper security measures one can be taken advantage of without detection over long periods of time, and only a tiny percentage of the user base can adequately assess potential attack vectors (even the scope of possible damage is not intuitively obvious).
There may be issues with specific implementations, and the degree of trust one has to put in their device's manufacturer could probably be lower, but without those measures a typical user would be like a 5-year-old driving a Ferrari on a winding mountain road. Yes, I'd be okay if my car automatically stopped before a cliff that I couldn't see, even if I drove right at it.
This all sounds well and good. But why not just use signed binaries? Have a user-editable keystore that includes the accepted signatures. The default would be Apple; installing Chrome would require accepting the Google signature, Photoshop would require the Adobe key. Then users could add their own for brew, Firefox, or whatever community software they use.
This would give good protection against hackers, and not require uploading a signature for every binary run.
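A minimal sketch of such a trust store (a hypothetical design, not an Apple API; Python, assuming RSA publisher keys and illustrative key file names):

```python
# Sketch of a user-editable trust store: map publisher names to accepted
# public keys, and only run binaries whose signature verifies against one.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

# User-managed store, e.g. loaded from a dotfile (illustrative key files)
trusted_keys = {
    "apple":  load_pem_public_key(open("apple.pub", "rb").read()),
    "google": load_pem_public_key(open("google.pub", "rb").read()),
    "brew":   load_pem_public_key(open("brew.pub", "rb").read()),
}

def is_trusted(binary: bytes, signature: bytes) -> bool:
    for name, key in trusted_keys.items():
        try:
            key.verify(signature, binary, padding.PKCS1v15(), hashes.SHA256())
            return True   # signed by an accepted publisher; no network call
        except InvalidSignature:
            continue
    return False
```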
Mac users can already add any certificate to the Apple keychain and authorize it for code signing. The outage today, which is what the article was about, was caused by the OCSP.APPLE.COM service not responding. The OCSP service was likely being used to validate whether an Apple Developer certificate was still valid.
Operating a "trusted" Certificate Authority generally requires operating under some rules. For example, the "Certification Authority Browser Forum" requires operating a CRLs (now considered bad) or a live OCSP endpoint.
Let's Encrypt does this, as does every other certificate issuer.
As is being discussed, separate OCSP is bad from a privacy standpoint - an OCSP check gives telemetry on when you are trying to validate a certificate, and anyone who can see the traffic can tell which certificate is being checked.
FWIW, there is an OCSP stapling method of attaching "recent" OCSP responses inside TLS handshakes, so that a TLS client doesn't have to make a separate request to an OCSP service.
And for websites, there's a better way to do OCSP. The web server using the certificate can get an OCSP response for itself (usually once every few minutes) and attach it to all TLS handshakes for that same domain ("OCSP stapling"). That way, clients get an up-to-date OCSP response, but without having to reveal their browsing behavior to the OCSP server.
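Server-side, that pattern is just a refresh loop plus a cache; a rough Python sketch (the responder URL, refresh interval, and the `der_request` blob are all illustrative):

```python
# Server-side sketch of the stapling pattern: periodically fetch a fresh,
# CA-signed OCSP response for our own certificate and cache it, so the TLS
# stack can attach it to every handshake. Nothing here is a real endpoint.
import time
import requests

OCSP_URL = "http://ocsp.example-ca.com/"   # hypothetical responder
stapled_response = None                    # handed out on each TLS handshake

def refresh_staple(der_request: bytes, interval: int = 300) -> None:
    """Run in a background thread; der_request is our own DER-encoded OCSP request."""
    global stapled_response
    while True:
        r = requests.post(OCSP_URL, data=der_request,
                          headers={"Content-Type": "application/ocsp-request"})
        if r.ok:
            stapled_response = r.content   # signed by the CA, safe to hand out
        time.sleep(interval)
```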
Unfortunately, there is no obvious way to carry over this behavior to application binaries, since we're not dealing with a client-server architecture here.
Don't forget that Apple is a multinational megacorp, and is user centric only when it suits them. Consider Tim Cook speaking at the conference used by the Chinese government to promote internet regulation, saying that the vision of the conference is one that Apple shares.
Sorry, but there are no details about how PRISM works.
The 2012 date is also suspicious - it is the same year that a new Apple datacenter in Prineville came online.
I personally think that PRISM works by externally intercepting data communication lines running to / from the unwitting companies.
The NSA has previously tapped lines (AT&T), but they made the mistake of doing it inside the AT&T building. That eventually leaked out -- so I think it is most likely that PRISM is implemented without the knowledge of anyone except the NSA.
[edit] I started doing more research. Facebook also has a datacenter right next door. I wonder who else is there and where do all the network cables go.
> Sorry, but please can you point to any real evidence that Apple allows state-level actors access? From what I can see it is exactly the opposite.
> Sorry, but there are no details about how PRISM works.
> I personally think that PRISM works by externally intercepting data communication lines running to / from the unwitting companies.
Unless you can prove you're ex-NSA or worked in SIGINT, sharing your opinion is not proof. Snowden's leaks are proof.
Why not give weight to the arguments and proof put forth by Edward Snowden? There have been no attempts by the US govt. or the NSA to disprove his leaks. No counter-evidence presented.
My question is: what do you benefit from continuing to deny and refuse to accept Snowden's whistleblowing?
I'm not ex-NSA or current NSA, and I have not worked in SIGINT for any country. I cannot demonstrate proof of this.
I do not believe that the Snowden slides show proof of collusion by any of those companies. All the slides do is indicate that data is being collected from those companies and that the program is called PRISM.
If, for example, there was also a slide that indicated payments were being made in exchange for the data, then that would be an entirely different thing. That would indicate collusion and be more believable.
I am not denying or refusing to accept Snowden's whistleblowing. I think it is highly likely that PRISM exists. What I refute are the speculations that the companies listed are complicit.
I will also add that PRISM and "beam splitting" are a bit too close for coincidence.
Room 641A at 611 Folsom Street, SF is where "beam splitting" was done. That info leaked. The NSA isn't stupid, I doubt they wanted to repeat that sort of discovery - which is why I think it is believable and likely that the companies listed on the slides have no idea what has been done.
[edited to fix grammar and to note that more than one slide was leaked]
They aren't, since they are proprietary software. And taking into account Apple's history, and the track record of proprietary software in a relative position of power - such as the software keeping tabs on our entire lives - the odds are against users at best.
And the government is designed to protect you but it can also come after you.
If you are not going to be allowed to run "malware" on your computer, you're giving Apple the power to label anything they don't want you to run as "malware".
They already have a walled garden on phones and tablets, which has greatly benefited them. They can deny anyone access to publish software, and they have exhibited behavior that is obviously anti-competitive.
Now, they're bringing it to laptops.
Honestly, I don't see anything happening to Apple anytime soon. They're way too well off and they have very important government officials in their pockets. The best thing that may happen will probably come from the outside... like the EU antitrust investigation that is going on. Even so, it's not going to stop, not anytime soon anyway.