Too right, it was far more problematic than they ever made out.
> The UK government's demand came through a "technical capability notice" under the Investigatory Powers Act (IPA), requiring Apple to create a backdoor that would allow British security officials to access encrypted user data globally. The order would have compromised Apple's Advanced Data Protection feature, which provides end-to-end encryption for iCloud data including Photos, Notes, Messages backups, and device backups.
One scenario would be somebody in an airport having their device searched by security officials under the Counter Terrorism Act (where you don't even have the right to legal advice, or the right to remain silent). You may be a British person, but you could also be a foreign person moving through the airport. There's no time limit on when you may be searched, so anyone who has ever travelled through British territory could be searched by officials.
Let that sink in for a moment. We're talking about the largest back door I've ever heard of.
What concerns me more is that Apple is the only company audibly making a stand. I have an Android device beside me that regularly asks me to back my device up to the cloud (and makes it difficult to opt out); you think Google didn't already sign up to this? You think Microsoft didn't?
Then consider for a moment that most 2FA goes directly through a large tech company or your mobile. We're just outright handing over the keys to all of our accounts. Your accounts have never been less protected. The battle for privacy and security is being lost.
> you think Google didn't already sign up to this?
My understanding is that Android's Google Drive backup has had an E2E encryption option for many years (they blogged about it at https://security.googleblog.com/2018/10/google-and-android-h...), and that the key is only stored locally in the Titan Security Module.
If they are complying with the IPA, wouldn't that mean that they must build a mechanism into Android to exfiltrate the key? And wouldn't this breach be discoverable by security research, which tends to be much simpler on Android than it is on iOS?
> My assumption is that Google has keys to everything in its kingdom
If that were true, then their claims to support E2E encrypted backups are simply false, and they would have been subject to warrants to unlock backups, just like Apple had been until they implemented their "Advanced Data Protection" in 2022.
Wouldn't there have been some evidence of that in the past 7 years, either through security research, or through convictions that hinged on information that was gotten from a supposedly E2E-protected backup?
It is possible to set up end-to-end encryption where two different keys unlock your data. Your key, and a government key. I assume Google does this.
1. encrypt data with special key
2. encrypt special key with user's key, and
3. encrypt special key with government key
Anyone with the special key can read the data. The user key or the government key can be used to get the special key.
This two-step process can be done for good or bad purposes. A user can have their key on their device, and a second backup key could be on a USB stick locked in a safe, so if you lose your phone you can get your data back using the second key (a minimal sketch of the scheme follows below).
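A minimal sketch of that scheme (purely illustrative; it assumes Python's `cryptography` package, and none of the names reflect any vendor's real implementation):

```python
# Dual-access ("key escrow") encryption as described above: the data is
# encrypted once with a random content key, and that content key is then
# wrapped separately for the user and for the escrow/government holder.
from cryptography.fernet import Fernet

# 1. Encrypt the data with a "special" (content) key.
special_key = Fernet.generate_key()
ciphertext = Fernet(special_key).encrypt(b"backup contents")

# 2. Encrypt the special key with the user's key...
user_key = Fernet.generate_key()
wrapped_for_user = Fernet(user_key).encrypt(special_key)

# 3. ...and encrypt the special key with the government/escrow key.
government_key = Fernet.generate_key()
wrapped_for_gov = Fernet(government_key).encrypt(special_key)

# Either key holder can unwrap the special key and read the data,
# without ever needing the other party's key.
recovered = Fernet(government_key).decrypt(wrapped_for_gov)
print(Fernet(recovered).decrypt(ciphertext))  # b'backup contents'
```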
"…two different keys…. Your key, and a government key. I assume google does this."
With the present state of politics—lack of both government and corporate ethics, deception, availability of much fake news, etc.—there's no guarantee that you could be certain of the accuracy of any information about this no matter what its source or apparent authenticity.
I'd thus suggest it'd be foolhardy to assume that total privacy is assured on any of these services.
BTW, I don't have need of these E2E services and don't use them, nor would I ever use them intentionally to send encrypted information. That said, occasionally I'll send a PDF or such to, say, a relative containing some personal info, and to minimize it being skimmed off by all-and-sundry (data brokers, etc.) I'll encrypt it, but I always do so on the assumption that the government can read it (that's if it's bothered to do so).
Only fools ought to think otherwise. Clearly, those in the know who actually require unbreakable encryption use other systems that are able to be better audited. If I were ever in their position, then I'd still be suspicious and only out of sheer necessity/desperation would I send an absolute minimum of information.
Yes. There is no way to know one way or the other whether Google and similar services retain a secondary way to access the decryption key. In light of this, the only option is to _assume_ they have the capability.
Given the carefully crafted way companies describe their encryption services, it seems more likely than not they have master keys of some sort.
That would definitely be a safe assumption: that Google can look into anything it owns or anything stored on what it owns. It's not like they are strong privacy advocates, or that they don't already cooperate with any state apparatus they see as profitable or to their benefit.
> …there's no guarantee that you could be certain of the accuracy of any information about this no matter what its source or apparent authenticity.
In any case like this, the only thing you could truly trust would be the source code and even then you’d have to be on the lookout for backdoors, which would definitely be beyond my own capability to spot.
In other words, the best bet is to probably only use open source solutions that have been audited and have a good track record, wherever available. Not that there are that many options when it comes to mobile OSes, although at least there are some for file storage and encryption.
Obviously, that's the ideal course of action, but I'd reckon that in practice the set of people who have both a good understanding of the code and of the intricacies/strengths of encryption algorithms, and who also need to send encrypted messages, is vanishingly small, except perhaps for some well-known government agencies.
Just because something you do today is legal and not a cause for scrutiny does not mean the same will be true tomorrow.
We have seen this many times throughout history, where people like academics, researchers, teachers, people of particular faith, etc are targeted and each of them has some sort of “evidence” produced as to some sort of crime they have committed either in the present or past to justify their arrest.
The group who needs it today may be small, but having it on and secure by default for all is a far better protection than any justification that the current need is small.
E2EE means only your intended recipients can access the plaintext. Unless you intend to give the government access to your plaintext, what you described isn’t E2EE.
Is that Google's definition or your definition? Not being rude, but it's pretty easy to get tricky about this.
Since you are sending the data to google, isn't google an intended recipient? Google has to comply with a variety of laws, and it is likely that they are doing the best they can under the legal constraints. The law just doesn't allow systems like this.
History has already proved you wrong. Companies offering backdoors to abusive law enforcement are never sued.
They also employ things like exempt cases. For example, WhatsApp advertises E2E... but connect for the first time with a business account and you'll see all the caveats, which in plain language just mean "Meta will sign your messages from this point on with a dozen keys".
You are extremely naive if you think a company the size of Google or Microsoft or Apple will face any serious consequence from lying about E2EE actually being open to various governments.
They have lawyers aplenty, and governments would file amicus briefs "explaining" E2EE and so on. Worst case, they'll settle for a pittance.
So all you’ve got is hypotheticals that coincidentally confirm your biases? These are giant companies. Show me where a civil suit for lying about a product’s security was defended by this kind of claim.
Oh thanks. I've never done that before. I'll try that, it'll be very interesting to see those disclaimers.
I guess for consumer use all that stuff is hidden in the T&C legalese which is unreadable for normal people. I know the EU was trying to enforce that there must be a TL;DR in normal language but I haven't seen much effect of that yet.
> E2EE means only your intended recipients can access the plaintext.
No, it does not. It means that only endpoints - not intermediaries - handle plaintext. It says nothing about who those endpoints are or who the software is working for.
No, it is not. This is precisely why we have the term E2EE. An escrow agent having your keys but pinky promising not to touch them is indistinguishable from the escrow agent simply having your plaintext.
Unless you’re fine with the escrow agent and anybody they’re willing to share the keys with being a member of your group chat, in which case my original point still stands.
Edit: I think you might be confusing your personal intention (ie I wanted this to be private but didn't realize the service provider retained a copy of the keys) with the intention of the protocol (ie what the system is designed to send where). Key escrow is "by design" whereas E2EE protects against both system intrusions (very much not by design) as well as things like bugs in server software or human error when handling data.
> is indistinguishable
Technically correct (with respect to the escrow agent specifically) but rather misleading. With E2EE intermediary nodes serving or routing a request do not have access to it. This protects you against compromise of those systems. That's the point of E2EE - only authorized endpoints have access.
The entire point of key escrow is that the escrow agent is authorized. So, yes, the escrow agent has access to your stuff. That doesn't somehow make it "not E2EE". The point of E2EE is that you don't have to trust the infra. You do of course have to trust anyone who has the keys, which includes any escrow agents.
If we used the definition "only your intended recipients can access the plaintext" ... well let's be clear here, an escrow agent is very much an "intended recipient", so there's no issue.
But let's extrapolate that definition. That would make E2EE a property of the session rather than the implementation. For example, if my device is compromised and my (E2EE) chat history leaks, suddenly that history would no longer be considered E2EE... even though the software and protocol haven't changed. It's utterly nonsensical.
> I think you might be confusing your personal intention with the intention of the protocol
So what would be the name for a mechanism where escrow is deliberately not a part of the design and nobody aside from the sender and recipient can access the plaintext data, no 3rd parties whatsoever, as long as those two participants aren't compromised?
I’m not disagreeing with you but I’ve heard people talk about E2EE while actually thinking it’s more like the above. There is probably a term for truly private communication but I’m sleepy and it eludes me.
The literal answer to your question would be "E2EE without key escrow" I guess. Or E2EE between just me and this single party.
However I don't think that's so much a technical mechanism as it is a statement of preference or understanding about who you intend to have access to something.
To that end, you'll need to define "intended recipient" pretty carefully. After all, your intended recipient could take a screenshot and share it. Or there could be someone in a group chat who isn't participating and you forgot was there. Etc.
> There is probably a term for truly private communication
I'd argue that E2EE is "truly private" between the intended recipients, and that understanding who exactly those are is entirely the responsibility of the user.
Of course I recognize that we're talking past each other at that point. Your concern seems to be users not realizing an escrow agent is present. To the extent they might have been deceived about the implementation I'd point out that "snuck in an escrow agent" is just the tip of the security iceberg. They could also have been deceived about the implementation itself. And even if they weren't deceived initially, a binary or web app could be intentionally updated with a malicious version. Does it count as "truly private" if you didn't compile it yourself?
> Of course I recognize that we're talking past each other at that point. Your concern seems to be users not realizing an escrow agent is present. To the extent they might have been deceived about the implementation I'd point out that "snuck in an escrow agent" is just the tip of the security iceberg. They could also have been deceived about the implementation itself. And even if they weren't deceived initially, a binary or web app could be intentionally updated with a malicious version. Does it count as "truly private" if you didn't compile it yourself?
All of these are good points, thanks for taking the time to respond! I think that to a certain degree this means that, for the average layperson and someone with more skills and knowledge, there are still a bunch of challenges and attack vectors to contend with.
It probably involves more of something in the category of OpenPGP (or just Signal, I guess) where you yourself are in control of the keys, and less of counting on various web apps to do right by the users. That said, E2EE with escrow is still helpful against certain risks and is a net positive, even if I've seen a lot of that misunderstanding about what it actually does.
No problem! The more people conscious of this stuff the better off we all are in the long run.
Anything that you can either audit or compile yourself is generally a good bet. You might add Matrix, XMPP with OMEMO, Briar, and Cwtch to your list.
Proprietary stuff isn't an entirely bad deal though. If you assume they aren't blatantly fraudulent then presumably your data is better protected than it would have been without even an attempt at E2EE.
Same for key escrow schemes. Even if the agent was literally the NSA you'd still most likely be better off than the much more vulnerable alternative. The fewer entities with access and the more deliberate that access is the better.
Well, WhatsApp backups claim they are E2E encrypted, but there’s a flow that uses their HSM for the encryption key, which still feels like some escrow system.
True but you can choose to store the key completely yourself. That fixes a big backdoor that's been around for ages.
The biggest problem remaining to me is that you don't chat alone. You're always chatting with one or more people. Right now there's no way of knowing how they handle their backups and thus the complete history of your chats with them.
It's the same thing as trying to avoid big tech reading your emails by setting up your own mailserver. Technically you can do it but in practice it's pointless because 95% of your emails go to users of Microsoft or Google anyway these days.
Those would be end-to-end encrypted x how many recipients you intend for. Very different from (end-to-end-encrypted x how many recipients you intend for) + an arbitrary amount of recipients you don't intend for.
Presumably there are a finite number of escrow agents who are known to you. Worrying that they will pass your messages along to others is the same as worrying that the people you're chatting with do the same. It's always on you to assess the trustworthiness of the other parties; key escrow is no exception to that.
To be clear, I'm not a fan of large-scale key escrow schemes and am not going to willingly use one outside of a corporate setting. But let's have accurate use of terminology while discussing these things.
Surely a company with auditing requirements running their own key escrow would still be considered E2EE? If not E2EE then what would you suppose to call that and where would you draw the line?
> Worrying that they will pass your messages along to others is the same as worrying that the people you're chatting with do the same.
This makes absolutely _no sense_. If I do not trust my end user to not propagate the message I send them, then I will not send them that message. There is no need for a third party here to make that mistake. It _is_ that black and white. Adding another end user is compromising your promise on the secure communication you established. There is no workaround to that.
Similarly, if you do not trust a particular escrow agent then do not use that escrow agent.
I can imagine a likely objection. "But I'm forced to use this particular agent by [ tech company | employer | government ]!" I don't see how that's any different from needing to communicate with a particular person. If I need to communicate with someone and I don't trust them not to share things then I will (must!) compose my correspondence accordingly.
If the government is forcing this on you, well, what is the alternative? Is point to point encryption somehow better in that scenario? Either way they're getting copies of everything you write assuming that the service you're using abides by the law. With key escrow that snooping is more explicit and there are fewer unknowns for the end user.
Manufacturers have lied about E2EE since the beginning. Some claim that having the key doesn't change that it's e2ee. Others claim that using https = e2ee, because it's encrypted from one end to the other, you see? (A recent example is Anker Eufy)
The point is that the dictionary definition of E2EE really doesn't matter. Being pedantic about it doesn't help. The only thing that matters is how the vendor describes what they call E2EE.
Yes, but going by that, most messaging services advertised as "E2EE" are already not E2EE by default. You trust them to give you the correct public keys for peer users, unless you verify your peers in-person. Some like iMessage didn't even have that feature until recently.
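To make the "verify your peers in person" step concrete, here is a rough sketch of the idea behind safety numbers: each side hashes the public key their app claims to be using for the peer, and the two people compare the result over a channel the provider doesn't control. The key bytes below are placeholders, and real apps format the fingerprint differently:

```python
# Out-of-band key verification: if the provider's key server swapped in a
# different public key, the fingerprints the two users read to each other
# over a trusted channel (in person, phone call) will not match.
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group into short chunks so the value is easy to read aloud.
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

key_my_app_uses_for_peer = b"<peer public key as served to my app>"   # placeholder
key_peer_actually_holds = b"<public key the peer generated>"          # placeholder

print(fingerprint(key_my_app_uses_for_peer))
print(fingerprint(key_peer_actually_holds))
# A mismatch means something between you (e.g. the key server) substituted a key.
```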
To call it lying is just arguing about the meanings of words. This is literally what lawyers are paid to do. The data payload can be called end-to-end encrypted. You can easily say to the user that "your emails are encrypted from end to end; they are encrypted before they leave your computer and decrypted on the receiver's computer" without talking about how your key server works.
Systems that incorporate a method to allow unlocking using multiple keys don't usually advertise the fact that this is happening. People may even be legally obligated to not tell you.
“End-to-end encryption (E2EE) is a method of implementing a secure communication system where only communicating users can participate. No one else, including the system provider, telecom providers, Internet providers or malicious actors, can access the cryptographic keys needed to read or send messages.”
So if you send another set of keys to someone else, it’s obviously not E2E.
I agree completely that it is wrong in spirit. But wikipedia's text is a definition, not the only existing one. And for practical use even the most obvious definitions have legal caveats.
For example, asking for 10 gallons of soda at a restaurant advertising unlimited refills will not fly, even though virtually everyone will agree on the definition of the term "unlimited". My 2c.
I expect this is what they are all doing tbh, although isn't Google open source? Should be checkable, if the binaries they distribute match the source... oh...
"a special key" afaik is where instead of using 2 large primes for a public key, it uses 1 large prime and the other is a factor of 2 biggish primes, where 1 of the biggish is known, knowing one of the factors lets you factor any public key with a not insignificant but still more compute than most people have access to.
UK has also invested in some serious compute that would appear dedicated to exactly this task.
basically if you dont have full control over the key generation mechansim and enc/dec mechansim it is relatively trivial for states to backdoor anything they want.
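Hedging on the exact construction above, the simplest version of a "known factor" backdoor is easy to show: if whoever controlled key generation knows one prime factor of your RSA modulus, recovering your private key is just arithmetic (toy-sized numbers below; real moduli are 2048 bits or more):

```python
# If key generation leaks (or deliberately escrows) one prime factor p of the
# public modulus N, a third party can derive the private exponent directly.
p = 61                      # prime known to the backdoor holder (toy size)
q = 53                      # the other prime, chosen normally
N, e = p * q, 17            # public key

phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)         # owner's private exponent (Python 3.8+ modular inverse)

# The backdoor holder sees only (N, e) plus the known factor p:
q_recovered = N // p
d_recovered = pow(e, -1, (p - 1) * (q_recovered - 1))
assert d_recovered == d

ciphertext = pow(42, e, N)
print(pow(ciphertext, d_recovered, N))   # 42, decrypted without the owner's help
```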
A trivial method for circumventing code review is to simply push a targeted update of the firmware to devices subject to a government search order.
There are no practical end-user protections against this vector.
PS: I strongly suspect that at least a few public package distribution services are run by security agencies to enable this kind of attack. They can distribute clean packages 99.999% of the time, except for a handful of targeted servers in countries being spied upon. A good example is Chocolatey, which popped up out of nowhere, had no visible source of funding, no mention of their ownership structure anywhere, and was incorporated along with hundreds of other companies in a small building in the middle of nowhere. It just screams of being a CIA front, but obviously that's hard to prove.
Chocolatey assuredly did not "pop up out of nowhere" - it was a labour of love from Rob Reynolds to make Windows even barely usable. It likely existed for years before you ever heard of it.
> had no visible source of funding
Rob was employed by Puppet Labs to develop it until he started the commercial entity which now backs it.
> a small building in the middle of nowhere.
As I recall, Rob lives in Topeka, Kansas. It follows that his business would be incorporated there, no?
There was no evidence of any of this on the website until recently (maybe 2 or 3 years ago?), and I did look at every page on there. Similarly, I searched on Google for a while and raised the question in more than a few forums. I dug through the business registration records, etc... and found none of the above.
Look at it from the perspective of a paranoid sysadmin half way around the world raising a quizzical eyebrow when random Reddit posts mention how convenient it is, but it's distributing binaries to servers with absolutely no obvious links back to any organisations, people, or even a legitimate looking business building.
The end user protection is to sign updates and publish the fingerprints. It should not be possible for one device to get a different binary than everyone else.
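As a rough illustration of "publish the fingerprints" (the file name and hash below are hypothetical): before installing, check that the update you received hashes to the same value the vendor published and that other users report seeing. This doesn't help if the vendor signs a malicious build for everyone, but it makes a per-device targeted swap detectable:

```python
# Verify a downloaded update against a published SHA-256 fingerprint.
import hashlib

PUBLISHED_SHA256 = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"  # hypothetical value

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("update.bin") == PUBLISHED_SHA256:        # "update.bin" is a placeholder
    print("matches the published fingerprint")
else:
    print("mismatch: do not install")
```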
How exactly do you plan on implementing this as an end user?
Even if you somehow manage to ensure 100% consistency with other users for updates you manually “pull” from the vendor, the vendor could simply have your device automatically reach out and update itself with a stealth update.
Or everyone can get the same exact binary, but it has a hash code check on it that activates the evil bits only on your device.
> They were curious to learn which open source libraries are integrated to the Telegram app. You know, on the client side," Durov said. "And they were trying to persuade him to use certain open source tools that he would then integrate into the Telegram code
Is the source code for every binary blob present on an Android device available for inspection, and is the code running on every Android device verifiable as having been built from that source?
> or through convictions
If they wanted to use this evidence for a normal criminal case, they would just do parallel construction.
Could it be that they feel the revelation of this backdoor would be too big a loss, so that any of these theoretical cases over the past 7 years used parallel construction to avoid revealing that the encrypted data had been viewed?
It's worth noting that what the security services don't have access to is as secret as what they do have access to. According to the late Ross Anderson, for many years the police were unable to trace calls (or was it internet access?) on one of the major UK mobile networks, because it had been designed without that and in such a way that it was hard to retrofit. This was considered highly confidential, lest all the drug dealers etc switch to that network.
> Wouldn't there have been some evidence of that in the past 7 years, either through security research, or through convictions that hinged on information that was gotten from a supposedly E2E-protected backup?
I wouldn't count on it. The main way we'd know about it would be a whistleblower at Google, and whistleblowers are extremely rare. Evidence and court records that might expose a secret backdoor or that the government was getting data from Google that was supposed to be private could easily be kept hidden from the public by sealing it all away for "national security reasons" or by obscuring it though parallel construction.
There were a lot of people working for the NSA besides Snowden, but none of them blew the whistle even though some of the programs he exposed had been around for 12 years. There were a whole lot of people working at AT&T, but employees weren't lining up to tell us about Room 641A (https://en.wikipedia.org/wiki/Room_641A) before Mark Klein. How did everyone else manage to keep quiet? The details of MKUltra and the Manhattan Project were successfully kept secret for decades before eventually being declassified.
It'd be a huge mistake to look at the instances where somebody did come forward and spill a secret and assume that it means secrets aren't possible to keep, or that there are no secrets being kept right now. It may not be easy to keep a secret, but governments and corporations are extremely well practiced and have many documented successes.
You have a point, but a major reason that the examples you cited above were kept secret was because knowledge about them was compartmentalized. As knowledge leaks, so does the possibility of whistleblowers. It’s an unstable equilibrium. My argument (which admittedly is based on an anecdata about how undisciplined large tech corporations are) is that it’s uniquely hard to keep secrets in modern tech companies because by design, knowledge is not compartmentalized. Modern large tech companies have replaced fiefdoms of knowledge with fiefdoms of operational expertise, if that makes sense.
Anyway, there have been hundreds, perhaps thousands of whistleblowers in the past and the examples you picked I think are representative of the upper bound, rather than the lower bound of the secret keeping capacity of organizations.
They might have keys to everything in their kingdom, but only if you look through the right len$$
--
As one who helped build the total awareness apparatus, I don't care about my privacy, only as a defeatist.
The only weapon, again, is transparency of the Entanglements (recall that term, about AI entanglements?) -- What is unclear, WRT these current revelations/confirmations (DOGE, etc), is: are these institutions being untangled and removed, or squeezed out of their territory?
They are so used to bending reality that they could easily call it E2E encryption even if the key was generated by Google, or had a skew that made it vulnerable with some extra knowledge that they have or will have in the next sync.
I don't know the particulars, but in general, silence around a massive tech company on warrants does not mean "they said no and the feds decided to leave them alone"
I doubt it. Much to my annoyance they moved Google Maps Timeline from their database to an encrypted copy on my phone specifically so if law enforcement asks for the records of where you were at a given time and place they can say dunno, can't tell. If they had the keys it would wreck their legal strategy not to get hassled every time law enforcement are trying to track someone.
The linked article makes a lot of assumptions about the "Massive Digital Data Systems Program". It seems this program existed. For example, here is a 1996 paper [1] about research funded by the "Massive Digital Data Systems (MDDS) Program, through the Department of Defense."
But it's not clear that funding for early research into data warehousing (back when a terabyte was a lot of data) has anything to do with whether or not Google uses end-to-end encryption? Lots of research got funded through the Department of Defense.
Without having relevant evidence, this is just "let's assume X is true, therefore X is true."
Google didn't announce that they could no longer process geofence warrants because they no longer stored a copy of user location data on their servers until last October.
How much good does an encrypted device backup do when harvesting user data and storing it on your servers (to make ad sales more profitable) is your entire business model?
This would mean no independent security researcher has ever taken a look at Google Drive's E2EE on Android. Or those that did missed the part where the key is uploaded.
It's possible to decrypt this network traffic and see if the key is sent. It may be obfuscated though.
> What concerns me more is that Apple is the only company audibly making a stand.
But still Apple operates in China and Google does not. This is weird to me. Google left China when the government wanted all keys to the citizens data. Apple is making a stand when it's visible and does not threaten their business too much.
Apple is not really in the business of protecting your data, they are just good at marketing and keeping their image.
> Google left China when the government wanted all keys to the citizens data.
Google left China after China started hacking into Google's servers.
> In January, Google said it would no longer cooperate with government censors after hackers based in China stole some of the company’s source code and even broke into the Gmail accounts of Chinese human rights advocates.
They were working to reenter the China market on China's terms many years later, when Google employees leaked the effort to the press. Google eventually backed down.
China feels like an important difference here though. Google leaving China doesn't protect Chinese citizens' data any more than Apple turning off ADP in the UK does. As far as I know, Apple isn't pretending that the data of Chinese users is encrypted from their government, and the way they're complying with the Chinese laws shouldn't impact the security of users outside of China.
Apple pulling ADP from UK users is similar - the UK has passed an ill-considered law that Apple doesn't think it can win a court case over, so they're complying in a way that minimally affects the security of people outside the UK. If, as someone outside the UK, I travel to the UK with ADP turned on, my understanding is it won't disable itself.
Would you have been more satisfied if Apple just pulled out of the UK entirely? Bricked every iPhone ever purchased there? Google doesn't seem to have made any stand for security ever - them pulling out of China feels more to do with it meaning they wouldn't have had access to Chinese users' data, which is what they really want.
> Would you have been more satisfied if Apple just pulled out of the UK entirely? Bricked every iPhone ever purchased there?
The request/law would be rolled back in minutes in that case. They wouldn't dare though. (wouldn't even have to be bricking - just disable services like icloud)
Apple has 40 retail stores in the UK with thousands of employees. They have a big new HQ in London where they have engineering, etc there.
I cannot see Apple completely shutting down in the UK, firing thousands of staff, selling off any property, and cancelling leases, just for a week long bargaining chip.
>iCloud in China is operated by a local subsidiary
It's not operated by an Apple subsidiary. It's operated by a government owned company. I'm not aware of any local laws that require this particular arrangement.
It’s different. Apple follows Chinese law to operate their services in China, just like Microsoft.
With Google, their services are way broader. Operating a hunk of their search business with a third party Chinese firm just isn’t viable for their services, which are way more complex.
The government, with antitrust laws, could easily force this issue. On the other hand, they really love how few places they have to go with FISA warrants to just take anyone's data. This is the long tail of the American security state. So it's really ironic that China takes most of the blame.
Perhaps Apple has greater leverage in China due to its outsized manufacturing presence. And it's likely they already don't offer ADP to Chinese citizens.
> Perhaps Apple has a greater leverage in China due to its outsized manufacturing presence.
Perhaps china has greater leverage over apple in this case...
China had been an important area of growth for many companies during the 2010s. Apple bent over backwards to cater to that market. It was discussed in every financial release, and they obviously made tons of concessions for iCloud.
The UK just comparatively isn't that much revenue, and not worth the fallout.
Apple's revenue from China has been super dependent on the new iPhone looking different, and has been steadily declining or flat for years, except for a few quarters when Huawei was sanctioned.
Chinese money was absolutely the forbidden temptress that continues to screw businesses. Luxury goods, cars, electronics, etc. were all banking on China's economic rise to grow their revenue, and the post-COVID recovery saw all that money stay domestic.
China won’t oust Apple because twisting Tim Cook’s arm is way more useful. Same with Tesla and any other company that makes a big bet there. But they absolutely won’t be giving American companies an equal chance at success.
Eh Google had pretty good reasons to not operate in China (not seeing them in this thread, don't recall the details precisely enough to relate here)
Apple is deeply embedded in China (manufacturing) and benefits from a decent (but shrinking) userbase in the country. China isn't asking for the keys to all iphone user data, just data stored in China.
> Doesn't the US have access to all the data of non US citizens whose data is stored in the US without any oversight?
Totally agree. Having this discussion so US centred just makes us miss the forest for the trees. Apart from data owned by US citizens, my impression is that data stored in the US is fair game for three letter agencies, and I really doubt most companies would spend more than five minutes agreeing with law enforcement if asked for full access to their database on non-US nationals.
Also, remember that WhatsApp is the go-to app for communication in most of the world outside the US. And although it's end-to-end encrypted, it's always nudging you to back up your data to Google or Apple storage. I can't think of a better target for US intelligence to get a glimpse of conversations about their targets in real time, without needing to hack each individual phone. If WhatsApp were a Chinese app, this conversation about E2E and backup restrictions would have happened a long time ago. It's the same as how the TikTok algorithm suddenly had a strong influence on steering public opinion, and instead of fixing the game we banned the player.
International users that have Advanced Protection enabled would in theory be safe from all of the 3-letter agencies (like safe from those agencies getting the data from Apple...not safe generally).
Realistically we are talking about FISA here, so in theory if the FBI gets a FISA court order to gather "All of the Apple account data" for a non-us person, Apple would either hand over the encrypted data OR just omit that....
Based on the stance Apple is taking here, it's reasonable to assume they would do the same in the US (disable the feature if the USG asked for a backdoor or attempted to compel them to decrypt).
It's worth pointing out that just because the FBI didn't have the access they wanted, it doesn't mean that other agencies don't, or that the FBI couldn't get the data they wanted by other means (which was exactly what they ended up doing in that specific case). It just means that they wanted Apple to make it easier for them to get the data.
It's good that Apple refused them, but I wouldn't count that as evidence that the data is secure from the US government.
It's also worth noting that the US courts have long held that computer code is speech.
Apple's legal argument that the government's demand that they insert a backdoor into iOS was tantamount to compelled speech (in violation of the first amendment) was going over a little too well in court.
The Feds will often find an excuse to drop cases that would set a precedent they want to avoid.
Would your answer be the same if this encrypted data was stored in China instead of US?
I don't think messages should ever leave the device, if you want to migrate to a different device this could be covered by that user flow directly. Maybe you want to sync media like photos or videos shared on a group chat and I'm fine with that compromise but I see more risks than benefits on backing up messages on the cloud, no matter if it's encrypted or not.
I think the average human will disagree with you. They want to preserve their data and aren't technically competent and organized enough to maintain their own backups with locally hosted hardware. Even the technically literate encourage _offsite_ backups of your data.
Know your threat model and what actions you're trying to defend against.
Typical humans need trusted vendors that put in actual effort to make themselves blind to your personal data.
This is different IMO. When you buy Apple you buy an American product and you know the company is beholden to US law. Snowden has made perfectly clear how much they can be trusted. When you buy it anyway it's an informed choice.
Here a country that has no ties with most of apple's customers is just butting in and claiming access to all of them.
So what's next. Are we also giving access to everyone's data to Russia? Iran?
Agree in principle, though WhatsApp backups are encrypted with a user provided password, so ostensibly inaccessible to Google or whoever you use as backup
What makes you think WhatsApp backups don’t have a secondary way to unlock the encryption key? Wouldn’t it be more logical to assume the encryption key for whatsapp backups can also be unlocked by an alternate “password”
If the US is willing to build an entire data center in Outback Australia to allow warrantless access to US citizen data, why wouldn’t they be forcing WhatsApp backups to be unlockable?
> Totally agree. Having this discussion so US centred just makes us miss the forest for the trees. Apart from data owned by US citizens, my impression is that data stored in the US is fair game for three letter agencies, and I really doubt most companies would spend more than five minutes agreeing with law enforcement if asked for full access to their database on ̶n̶o̶n̶-̶U̶S̶ ̶n̶a̶t̶i̶o̶n̶a̶l̶s̶ anyone.
Android data isn't encrypted at rest (or at least not in a way where Google doesn't have the key). If the UK gov has a warrant, they can ask Google to provide your Google Drive content. The whole point of this issue is that Apple specifically designed ADP so they couldn't do that.
So not hugely secure for most people if they use 4-6 decimal digits, but possible to make secure if you set a longer passphrase.
I don't know what Google's going to do about this UK business.
edit: Ah it looks like they have a Titan HSM involved as well. Have to take Google's word for it, but an HSM would let you do rate limits and lockouts. If that's in place, it seems all right to me.
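Rough numbers behind that, as a sketch (the guessing rate and HSM attempt limit are assumptions for illustration, not documented Titan parameters):

```python
# Offline brute force vs. an HSM-enforced attempt limit.
pin_space = 10 ** 6                     # 6 decimal digits
passphrase_space = 26 ** 16             # 16 random lowercase letters (illustrative)
guesses_per_second = 10 ** 9            # assumed offline guessing rate

print(f"6-digit PIN:  {pin_space / guesses_per_second:.3f} s to exhaust offline")
print(f"passphrase:   {passphrase_space / guesses_per_second / 3.15e7:,.0f} years to exhaust")

# With a hardware-enforced attempt limit, even the weak PIN holds up,
# because the attacker only ever gets a handful of guesses in total.
attempt_limit = 10                      # assumed limit, for illustration
print(f"chance of hitting the PIN in {attempt_limit} attempts: {attempt_limit / pin_space:.4%}")
```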
I wonder how hard it would be for the US government to force Google to just get the lockscreen pin off of your device or for them to just infect your device with something to capture it themselves.
Wrong. Google Android user cloud backups are E2EE by default. There is no option to opt out. Use Google's backup service and your data is encrypted at rest, in transit, and on device, aka end-to-end.
It's not just Google saying it. Google Cloud encryption is independently verified.
i think people focus on whether backups are encrypted too much. it really doesn't matter when the government has remote access equivalent to your live phone when it's in an unencrypted state, which they almost certainly do.
> Doesn't the US have access to all the data of non US citizens whose data is stored in the US without any oversight?
Er, no...? I'm not sure where you get that idea. Access requires a warrant, and companies are not compelled to build systems which enable them to decrypt all data covered by the warrant.
See, for example, the Las Vegas shooter case, where Apple refused to create an iOS build that would bypass iCloud security.
I asked if your Android backup is encrypted. Implies I'm talking about unencrypted data.
> See, for example, the Las Vegas shooter case
I am not in Las Vegas or anywhere else in the US. So as far as I know, all the data about me that is stored in the US is easily accessible without a warrant unless it's encrypted with a key that's not available with the storage.
> companies are not compelled to build systems which enable them to decrypt all data covered by the warrant
Again, not what I was talking about.
I'm merely pointing out that your data is not necessarily encrypted, and that the "rest of the world" was already unprotected vs at least one state. The UK joining in would just add another.
This is why Apple, and more recently Google, create systems where they don't have access to your unencrypted data on their servers.
> Google Maps is changing the way it handles your location data. Instead of backing up your data to the cloud, Google will soon store it locally on your device.
Which is why Apple takes the stance that the users device shouldn't be sending data to the mothership at all, if it isn't absolutely necessary.
Compare Apple Maps and Google Maps.
Google initially hoovered up all your location data and kept it forever. They learned from Waze that one use case for location data was keeping your map data updated.
Apple figured out how to accomplish the goal of keeping map data updated without storing private user data that could be subject to a subpoena.
> “We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data — when we do it — in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B.
The segments that he is referring to are sliced out of any given person’s navigation session. Neither the beginning or the end of any trip is ever transmitted to Apple. Rotating identifiers, not personal information, are assigned to any data sent to Apple... Apple is working very hard here to not know anything about its users.
Google or Apple could be forced by authorities to perform correlation on the map tiles being requested by users under investigation. Not as accurate as GPS coordinates but probably useful nonetheless.
One more reason to prefer offline maps for those who value privacy.
Given that you can browse map data for any location, not just where you happen to be, I'm betting that triangulation data from your carrier would be more accurate.
Sure, triangulation of carrier signals could lead to more accurate position estimates, but if the carrier isn't based in the US they are under no obligation to make this data available to US authorities.
Apple and Google are based in the US so are bound by the CLOUD Act to provide any and all data they have upon request, no matter where in the world it is being collected or stored.
Google had "created a system where they don't have access to your data on their servers" a couple of years BEFORE Apple. Android 10 introduced it in 2019.
Google didn't announce plans to stop storing a copy of user location data on their servers until the middle of last year.
See the story linked above.
They didn't announce that they could no longer access user location data on their servers to respond to geofence warrants until the last quarter of 2024.
We're talking about protecting your personal data from government overreach, and Google's entire business model is to collect as much of your personal data as possible and store it on their servers to make ad sales more profitable.
Apple does its best not to collect personal data in the first place.
> all the data about me that is stored in the US is easily accessible without a warrant
No, law enforcement needs a warrant to legally access any data. This is why Prism was illegal, and why companies like Google are pushing back against overly broad geofence search warrants.
Also, I wondered if by complying with British law that they may somehow be breaking laws of another country?
Hypothetically, if Apple just provided a backdoor to the data they have on US Senators, for instance, then providing that information might be considered treason by the US.
That's a totally made up example, and I have no idea, but it seems like it's possibly an issue.
Which is all about the issues around data sovereignty I suppose!
> Treason is the only crime defined in the constitution, and it is quite a high bar.
Well, it's defined, or bounded above, in the constitution. It's not exactly a high bar:
> Treason against the United States, shall consist only in levying War against them, or in adhering to their Enemies, giving them Aid and Comfort.
So, if you happened to know Nicolas Maduro, thought he was looking stressed, and bought him some food, that would qualify as treason. There's no requirement that you act against the interests of the United States. The constitution will stop you from being prosecuted for treason for sleeping with Melania Trump. It won't stop you from being prosecuted for treason for completely spurious reasons.
Treason is a very heavy charge and as far as I know it applies more to individuals. Can a company be prosecuted for treason? I guess it depends on the country and I don't know US law well (never even visited there)
But I'm sure local laws conflict heavily between countries yes. I'm often wondering how multinationals manage to navigate this maze. This is why we have such a big legal department I guess :) And the company I work for is a pretty honest one, I've never seen any skullduggery going on with eg privacy or media manipulation. In fact employees are urged to report such things and I have to do a course on responsible behaviour yearly. Probably a result of being purely B2B. But anyway I digress, just wanted to say that getting away with stuff does not seem to be the reason for us having a big legal dept.
But just look at the laws of e.g. the EU and Iran. Pretty diametrically opposed on many topics. There's no way to satisfy them both.
I think what helps to make this happen is that most countries don't try to push their laws outside of their jurisdiction. Which the UK is trying to do here.
> One scenario would be somebody in an airport and security officials are searching your device
No Heathrow connection necessary. “The law has extraterritorial powers, meaning UK law enforcement would have been able to access the encrypted iCloud data of Apple customers anywhere in the world, including in the US” [1].
Spot on, 727 comments, most probably by Americans, and only 2 (including yours) bringing up the CLOUD Act, the much worse US equivalent. Incredible ignorance.
Providing encrypted data and not providing encryption are two different things. The CLOUD Act requires you to hand over data, which could be encrypted. The UK government is asking for data to be handed over in a form that is not encrypted. The two are not the same. Note: not American.
It's all lip service, because the UK Govt wouldn't ask them that. WhatsApp messages are E2EE. They probably already hand over all the metadata surrounding those messages.
With almost everyone's backups stored in plain text, making it all a little silly.
Think about it for a second: you can re-establish your WA account on a new device using only the SIM card from your old device. SIM cards don't have a storage area for random applications' encryption keys, and even if they did, a SIM card cannot count as "end-to-end" anymore. Same goes for whatever mobile cloud platform those backups might be stored on. And you'd hope Apple or Google aren't happily sending off your cloud decryption keys to any app that wants them. Though maybe they are?
Reestablishing your WhatsApp account on a new device doesn't give access to your old chat messages, you need to restore a WhatsApp backup for that. The backup doesn't need to be stored in the cloud, you can choose to create a local file and manually transfer that to your new device.
In any case, as soon as you start using WhatsApp on a new device, users in the chats you participate in will receive a message informing them that your encryption keys have changed.
I don't really understand your comment to be honest. Section 3 of the Regulation of Investigatory Powers Act 2000 allows for compelled key disclosure (disclosure of the information sought instead of the key is also possible). Schedule 7 of the Counter-Terrorism Act allows 9 hour detention, questioning and device search at the border. With these powers it isn't necessary to get access to iCloud backups, as you can get the device and/or the data.
I don't think the e2e icloud backup is problematic under existing legislation / before the TCN. While you can't disclose the key because it lives in the secure enclave, you can disclose the information that is requested because you can log into your apple account and retrieve it. IANAL, but I believe this to be sufficient (and refusing would mean jail).
The Investigatory Powers Act allows for technical capability notices, and the TCN in this case says (as far as we know) "allow us a method to be able to get the contents of any iCloud backup that is protected by E2EE for any user worldwide". This means that there is no need to ask the target to disclose information and if implemented as asked, also means that any user worldwide could be a target of the order, even if they'd never been to the UK.
I imagine they want the ability to look at someone's iCloud backups without notifying the owner that they are doing so or they want to do it when the owner is unwilling or unable to provide keys.
For the latter, there are a lot of cases where jail isn't much a threat (e.g. the person is dead or not in the country).
Also, given automatic iPhone backups, the backup might contain information they want as part of an investigation that they'd otherwise have to demand key disclosure for (if cloud backup didn't exist)... Absolutely.
The jail time for failure to comply with key disclosure is 2 years unless it is national security, then it is 5. But if you're organised crime and facing who knows what for being a snitch it might be better simply to do the time.
I can see why they want it. I just don't understand why the person I'm replying to said the feature (I think) was problematic. Not really a criticism, I'm just struggling to identify the tone and why 'too right' and 'more problematic than they let on'.
You have no laws when travelling through immigration. That's true in the US too. There was an article (trying to look for it, could be Ars Technica or The Verge, I don't remember where) once where a US citizen journalist was detained at the border for hours while travelling into the US and questioned. You can be held in immigration for hours or even longer until you give out what they demand, which can involve your unlocked phone and password. There are no laws protecting you.
> Apple is the only company audibly making a stand
Apple's stand is false; they take with one hand and give with the other. There have been many times when Apple has been caught giving user data to governments at their request, lied about it, and then later admitted it once it had leaked from another source.
This whole 'we will never make a backdoor' is a complete whitewash marketing stunt, why do they need to make a backdoor when they are providing any and all metadata to any government on request.
> There have been many times that Apple have been caught giving user data to governments at their request, lied about it, then later on admitted it once it had leaked from another source.
In other words, Apple complies with legal government orders, as they are required to. The government can compel them with a warrant to hand over data that they have, and can prohibit them from talking about it. That's the whole reason for the push towards end-to-end encryption and for not collecting any data Apple doesn't need to operate the products. This also ties into things like photo landmark identification, where Apple designed it such that they don't get any information about the requests and so they don't have any information that they could be compelled to hand to the government.
Irrespective of political leanings, a lot of British people are saying this. They stand for it because they have to. It's a government that was voted in by a large margin only six months ago. Disquiet, if that's the word, is pretty much universal and I am not sure we've been quite in this position before. Keir Starmer's decline in approval ratings 'marks the most substantial post-election fall for any British prime minister in recent history'.
By a large margin with their seat count doubling off a 1.6% swing in their favour. The decline in approval ratings should have been entirely predictable to them.
> What concerns me more is that Apple is the only company audibly making a stand.
Dropping the functionality for a particular market hardly equals making a stand. Sure, they haven't added a backdoor that would expose all users' data, but UK iCloud users simply lose the protection, so in the end UK residents didn't win anything.
And who knows if they simply have an agreement with US gov to have a backdoor only available to them and not the other govs.
"technical capability notice" under the Investigatory Powers Act (IPA)
Sounds a lot like the godawful "assistance and access" laws that were rushed through in Australia a couple of years ago, right down to the name of the secret instrument sent to the entity who gets forced into building the intercept capability.
Now that Apple has caved once, I expect to see other providers strongarmed in the same way, as well as the same move tried in other countries.
For photos, it's probably best to use an open-source (also self-hostable) service like Ente. For files it's best to self-host Nextcloud or similar. And rely on other people's computers as little as possible. Sadly, operating systems are very complex and mostly composed of proprietary blobs nowadays so there is still a risk of it leaking data but people can still do at least something.
Not exactly. It generates the keys for you and stores them on device in the Secure Enclave. You cannot "bring your own" encryption key, but the primary benefit of doing so--that Apple does not have access to it--is intentionally accomplished anyway by the implementation.
I’m not sure I appreciate the value of literally bringing your own keys. My device generating them on my behalf as part of a setup process seems sufficient. You’d use openssl or something and defer to software to actually do keygen no matter what.
I agree it seems sort of academic at first blush, but I'm going to venture a guess it's the idea that you own them, instead of Apple.
So you can eg. keep a backup on your own (secure) infrastructure. Transfer them when switching devices or even mirror on two different ones*. Extract your own secret enclave contents. Improve confidence they were generated securely. And depending on implementation, perhaps reduce the ease with which Apple might "accidentally" vacuum the keys up as a result of an update / order.
*Not sure how much these two make sense in the iOS ecosystem. I know on the Android side I'd absolutely love to maintain a "hot standby" phone that is an exact duplicate of my daily driver, so if I drop it in the ocean I can be up and running again in a heartbeat with zero friction (without need to restore backups, reliance on nerfed backup API's outside the ones Google uses, having to re-setup 2FA, etc. and without ever touching Google's creepy-feeling cloud).
You would need to have a completely trusted software and hardware stack to actually own the keys. And that is already hard enough to get on a PC where ownership still means something, it is not going to happen on most mobile devices. To whatever extent you trust any of the stack already, the Secure Enclave is a better bet than BYOK. The real risk, as you imply, is if Apple is able to compromise the security coprocessor with an OTA firmware update, but they can definitely already push a regular OS update that exfiltrates any key you type in.
Just make an airgapped Linux device on a DIY FPGA CPU. This part is not that difficult compared to persuading commercial vendors to let you use your own cloud and your own encryption/backup mechanisms.
Yeah... unfortunately it ought to be the other way around. They should have a hard time persuading us to trust them enough to use theirs.
If your phone company asked you to give them the key to your house, in perpetuity, how would you feel about that? (Particularly if they insisted you sign a 15 page Terms of Use first that disclaims all their liability if anything goes missing).
It depends what kind of backdoor the UK is asking for but "encryption backdoor" sounds like cryptographic compromise. I don't know if that's what it means but either way the only way to be sure your keys are secure is to generate them yourself.
BYOK does not provide any additional security over the Secure Enclave (and similar security coprocessors). In fact, unless the Secure Enclave were to directly accept your input and bypass the OS, BYOK is worse because the software can just upload your key to a server as soon as you type it in. Whereas, a key generated on the Secure Enclave stays there, because there exists no operation to export it.
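To make the "no export operation" point concrete, here's roughly what asking the Secure Enclave for a key looks like via CryptoKit (a sketch that needs SE hardware to run; real ADP key management is more involved). You only ever get an opaque handle back, never raw key bytes:

```swift
import CryptoKit
import Foundation

// Sketch: ask the Secure Enclave to generate a P-256 signing key. The
// private key material is created inside the SE and never leaves it.
func enclaveKeyDemo() throws {
    let key = try SecureEnclave.P256.Signing.PrivateKey()

    // You can use the key (here, to sign something)...
    _ = try key.signature(for: Data("hello".utf8))

    // ...and you can persist a representation for later, but that blob is
    // an SE-wrapped token only this device's enclave can use -- not the key.
    let wrapped = key.dataRepresentation
    print("stored \(wrapped.count) bytes of wrapped key reference")

    // There is no accessor that returns the raw private scalar, which is
    // exactly the "no operation to export it" property.
}
```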
I don't believe it's the SE itself that encrypts user data so it must already be the case that the key is generated outside the SE, sent to it for storage, and is retrieved if the user is authenticated.
So the difference between Apple generating the key on device and storing it in the SE and the user generating it and storing it in the SE is that the user can use a known-secure key generation algo. If Apple generates the key you can't be sure it's cryptographically secure and doesn't have a backdoor.
The SE's AES engine encrypts and decrypts data to and from flash in-line, and the SEP is responsible for generating all keys.
At this point, the people who claim they can’t trust Apple’s key generation should also distrust Intel or AMD or any other vendor’s key generation as well. Might as well generate keys by hand.
But if you don't trust Apple, how do you get the key into the Secure Enclave to begin with? Doesn't Apple control the software on your device that provides the interface into the Secure Enclave from outside of it?
Yes Apple controls the device so you're right, you can never be sure what it's doing. My thinking is that an encryption backdoor means the key generation algo is compromised. In that case you want to bypass that by generating the key yourself.
If the backdoor is some other method of getting your key off the device then all bets are off.
> One scenario would be somebody in an airport and security officials are searching your device under the Counter Terrorism Act
No, it's much broader than that. The UK is asking for a backdoor to your data and backups in the cloud, not on your device. Why bother with searching physical devices when they can just issue a secret subpoena to any account they want?
It's actually pretty amazing that Apple made ADP possible for the general public. This is the culmination of a major breakthrough in privacy architecture about ten years ago.
Traditionally you had to make a choice between end-to-end encryption and data recoverability. If you went with E2EE, it's only useful if you use a strong password, but if you forget it then Apple can't help you recover your account (no password reset possible). So that was totally unsuitable for precious memories like photos for the average user.
Apple's first attempt to make this feasible was a recovery key that you print out and stuff in a drawer somewhere. But you might lose this. The trusted contact feature is not totally reliable either, because chances are it's your spouse and they might lose their device at the same time as you (for example in a house fire).
So while recovery keys and trusted contacts help, the solution that really made the breakthrough for ADP was iCloud Keychain Backup. This thing is low-key so cool and kind of rips up the previous assumptions about E2EE.
iCloud Keychain Backup makes it possible to recover your data with a simple, weak 6 digit passcode that you are virtually guaranteed never to forget, yet you are also protected from brute force attacks on the server. It is specifically designed to work on "adversarial clouds" that are being actively attacked. This is... sort of not supposed to be possible in the traditional thinking. But they added something called hardware security modules to limit the number of guesses an attacker can make before it wipes your key.
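A toy model of that guess-limiting idea, to show why a weak passcode can still hold up against an adversarial cloud (my own sketch; the real protocol uses SRP against clusters of tamper-resistant HSMs, not a bare KDF like this):

```swift
import CryptoKit
import Foundation

// Toy escrow record: the recovery key wrapped under a key derived from a
// short passcode, plus a guess budget the "HSM" enforces.
struct EscrowRecord {
    let salt: Data
    let wrappedRecoveryKey: Data   // AES-GCM (combined form) of the recovery key
    var guessesLeft: Int = 10
}

// Derive a wrapping key from the 6-digit passcode. Sketch only: a real
// design would use SRP or a memory-hard KDF, not bare HKDF over a passcode.
func wrappingKey(from passcode: String, salt: Data) -> SymmetricKey {
    HKDF<SHA256>.deriveKey(inputKeyMaterial: SymmetricKey(data: Data(passcode.utf8)),
                           salt: salt,
                           info: Data("escrow-wrap".utf8),
                           outputByteCount: 32)
}

func enroll(recoveryKey: SymmetricKey, passcode: String) throws -> EscrowRecord {
    let salt = Data((0..<16).map { _ in UInt8.random(in: .min ... .max) })
    let raw = recoveryKey.withUnsafeBytes { Data($0) }
    let box = try AES.GCM.seal(raw, using: wrappingKey(from: passcode, salt: salt))
    return EscrowRecord(salt: salt, wrappedRecoveryKey: box.combined!)
}

// Each wrong passcode burns a guess; once the budget is gone the record is
// destroyed, so a 6-digit code cannot be brute-forced server-side.
func recover(_ record: inout EscrowRecord, passcode: String) -> SymmetricKey? {
    guard record.guessesLeft > 0 else { return nil }           // already wiped
    let key = wrappingKey(from: passcode, salt: record.salt)
    if let box = try? AES.GCM.SealedBox(combined: record.wrappedRecoveryKey),
       let raw = try? AES.GCM.open(box, using: key) {
        return SymmetricKey(data: raw)                          // correct passcode
    }
    record.guessesLeft -= 1
    if record.guessesLeft == 0 {                                // wipe on exhaustion
        record = EscrowRecord(salt: Data(), wrappedRecoveryKey: Data(), guessesLeft: 0)
    }
    return nil
}
```

The whole trick is that the wrapped key and the guess budget live inside hardware the operator cannot open or reset, so even Apple can't brute-force the six digits.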
And crucially it ensures you don't forget this passcode because it's your device passcode which the OS keeps in sync with the backup key. This is part of the reason your iPhone asks you to enter your passcode now and then even though your biometrics work just fine.
It is a true secret that only you know, that you can keep in your brain even when your house burns down, and that nobody (hopefully) can derive from anything they can research about you. This didn't really exist for the general populace until smartphones came along. And that ultimately was the breakthrough that allowed the conventional wisdom on E2EE to change.
iCloud Keychain Backup came out about a decade ago and it has taken this long to gradually test the feasibility of going 100% E2EE without significantly risking customer data loss. The UK is kind of panicking but when people see how well ADP protects their most personal data from breaches, I think they will demand it. It just wasn't practical before.
> No, it's much broader than that. The UK is asking for a backdoor to your data and backups in the cloud, not on your device. Why bother with searching physical devices when they can just issue a secret subpoena to any account they want?
My point was that there was already a clear chain in place that would give them access to the data of foreign nationals. It's not just a "UK problem", but actually the ramifications are further reaching.
Another thing to consider is that these cookie alerts on sites were for EU countries only, but ended up everywhere. If Apple were to comply, this cloud backdoor could end up in other countries too, with the keys sitting there ready for collection.
To make things more complex still, they would need to support dual/multiple nationality. It probably ends up looking like a dual-key E2E system where there is a unique key for the end-user and then one for a third party. Key revocation would likely be difficult, so it would likely be the cloud provider decrypting and re-encrypting the files per request, throwing E2E out the window entirely.
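To illustrate why: a dual-key scheme is usually envelope encryption, i.e. one random content key wrapped separately for each party who may decrypt (a rough sketch below, with a hypothetical "escrow" recipient standing in for the third party). Revoking a recipient means re-encrypting everything under a fresh content key, which is exactly the provider-side re-encryption problem just mentioned.

```swift
import CryptoKit
import Foundation

// Envelope encryption: the payload is sealed once under a random content
// key (CK); CK is then wrapped separately for every allowed recipient.
struct Envelope {
    let ciphertext: Data               // AES-GCM (combined) of the payload under CK
    var wrappedKeys: [String: Data]    // recipient id -> AES-GCM (combined) of CK
}

func seal(_ payload: Data, for recipients: [String: SymmetricKey]) throws -> Envelope {
    let contentKey = SymmetricKey(size: .bits256)
    let ckBytes = contentKey.withUnsafeBytes { Data($0) }
    var wrapped: [String: Data] = [:]
    for (id, key) in recipients {
        wrapped[id] = try AES.GCM.seal(ckBytes, using: key).combined!
    }
    return Envelope(ciphertext: try AES.GCM.seal(payload, using: contentKey).combined!,
                    wrappedKeys: wrapped)
}

func unseal(_ envelope: Envelope, as id: String, with key: SymmetricKey) throws -> Data? {
    guard let blob = envelope.wrappedKeys[id] else { return nil }
    let ck = SymmetricKey(data: try AES.GCM.open(AES.GCM.SealedBox(combined: blob), using: key))
    return try AES.GCM.open(AES.GCM.SealedBox(combined: envelope.ciphertext), using: ck)
}

// Hypothetical usage: the user's key and a per-country escrow key both
// unlock the same blob.
// let env = try seal(photo, for: ["user": userKey, "uk-escrow": escrowKey])
```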
Your smartphone cannot be considered a private device. You as the owner don’t have sufficient control over its operating system and applications to ever make that claim.
In theory you have the likes of the PinePhone where you can run a full Linux kernel [1]. You could then use something like Waydroid to run Android apps [2].
I think the biggest concern is that many of the important apps are anti-emulation, for example banking apps and authentication apps.
It's amusing to think of Apple as a "monopoly" (if anything they have a monopsony on TSMC production) but let's just replace that with "giant" for purposes of discussion.
Tech giants typically devolve local operations to small companies to avoid liability - think petroleum suppliers not owning gas stations (because those typically end up as superfund sites). Not sure if this analogy works for Google Android and all the manufacturers that deploy it on their smartphones too.
So corporations have been doing this forever, trying to find legal loopholes where they can have their cake and eat it too.
> There's no time limit on when you may be searched, so all people who ever travelled through British territory could be searched by officials.
> Let that sink in for a moment. We're talking about the largest back door I've ever heard of.
Codename 'Krasnov' is the largest backdoor I have ever heard of. And, we only need to look at his behavior.
These E2EE systems from the USA can be tainted in so many ways, and FAMAG sits on so much data, that codename 'Krasnov' can abuse them to target whoever he wants in the West. Because everyone you know is or has been in the ecosystem of Apple, Google, or Microsoft.
Whataboutism! Fair. From my PoV, as European, the UK government is (still) one of the good guys who will protect Europe from adversaries such as those who pwn codename 'Krasnov'. Such protection may come with a huge price.
The real prescient threat in that movie was the predictive AI algorithm that tracked individual behaviors and identified potential threats to the regime. In the movie they had a big airship with guns that would kill them on sight, but a more realistic threat is the AI deciding to feed them individualized propaganda to curtail their behavior. This is the villain's plot in Metal Gear Solid 2, which is another great story.
> Your persona, experiences, triumphs, and defeats are nothing but byproducts. The real objective was ensuring that we could generate and manipulate them.
It's really brilliant to use a video game to deliver the message of the effectiveness of propaganda. 'Game design' as a concept is just about manipulation and hijacking dopamine responses. I don't think another medium can as effectively demonstrate how systems can manipulate people's behavior.
Life is imitating too many dystopian books, movies, etc these days. I think we need to put an end to all creative works before the timeline becomes irrecoverably destroyed.
It's always hilarious to see how far people here are ready to go to twist some bad Apple news into something which might be considered good.
I mean seriously. Apple making a stand? What stand? They are ripping security out of their customers hands. Customers which are already dependent on the company's decision in their locked in environment.
There is absolutely nothing good about it, and you dragging Android into it and making it look like it's even worse is suspicious. You can have full control over your Android device. Something impossible on an Apple phone. You can make your Android device safer than your iPhone.
The government forced them to pull the feature. Would you rather they left a toggle switch that doesn't actually do anything? Or are you thinking they should just pull out of the UK altogether?
No, this tells the customer that backups to iCloud are not secure from the government. Adding the back door would make people think that there was more security than there was. Transparency is always better than deception.
Dropping the feature that the UK was targeting allows their customers to keep using everything else Apple offers. Leaving the UK altogether is the nuclear option, denying their customers everything. "Apple should just leave the UK/China" never takes into consideration the millions of customers who bought, or might want to buy in the future. Nobody would be better off if Apple withdrew from a country.
I don't think we both have the same concept of "making a stand".
Yes, it would have been the nuclear option, but this is Apple. Probably most of the most influential people in the UK have an Apple phone. Just saying that you leave would cause an avalanche of influence targeted at this law. Maybe other companies would have joined them.
This is just a face-saving dance, and I wish they'd pay for it, but they won't and they know it. People locked into the Apple bubble only change if it REALLY hurts. This doesn't hurt the average Apple user, and those who really care have moved on to a system they can control themselves.
vs. taking their phone away??? Idk if you're trolling or what but I would be incredibly pissed at Apple if they deprecated my phone over something like this.
Yes, imagine the outrage among the rich and influential in the UK if Apple seriously threatened to leave the country over this. They would get the law fixed, which would help everybody.
But instead. They run away.
Selling this as "making a stand" is ridiculous. Nothing more.
Making a stand would be displaying a full-screen notification about why they cannot provide protection for British users' data and which party voted for this.
No. Making a stand would be to threaten to leave and watch all those influential iPhone users scramble to get this law rolled back. Everything else is marketing and cowardice.
What I find 'amusing' is the swap between Left vs Right.
'Back in the day' it was the "Right" that wanted total access/total control over everything. So people turned a bit "left". Now the "Left" government is seeking totalitarian-style control ('because paedophiles/drugs/etc.').
As a reminder, both Right and Left extremes went from 'liberal/conservatives' to "we don't need elections ever again - trust me!".
I saw this happening in the US, in Saudi (e.g. Blackberry 'keys'). Now I see it in the UK. So I interpret this in two ways:
1) The "Left is the new Right" (or "Right is the new Left")
2) Left and Right are irrelevant terms when it comes down to "we need to exert control over people/knowledge/data/information/etc." And the 'guise' of Left/Right is just about fiscal policies. So the UK has been playing around with the 'snoopers' charter', but at 'that' time Apple's encryption was not on the table.
Apple (I don't blame them - very much - just a little) does what a company does: makes money. And they would rather sell out their clients' data and keep that money than lose it.
So... yeah.. if your data is in someone else's server, that happens.
If you go too far right or left, both types of authoritarianism are difficult to distinguish. I think this just makes the case that every election you need to be a swing voter, make sure your politicians still overlap with your ideals.
Apple today appear to be on the 'correct side of history', but even then you need to be a swing consumer.
> What concerns me more is that Apple is the only company audibly making a stand.
They are not making a stand. They roll over without a peep. And this is concerning users' privacy which they say is the core of the company.
Compare that to them fighting every government tooth and nail over every single little thing concerning the "we don't know if it's profitable and we don't keep meeting records" App Store.
"Not making a stand" would be leaving everything as is, and handing your encryption keys over to the government. By loudly disabling ADP and saying this feature is illegal in the UK (they really should have said "illegal" instead of "unavailable" so people would know it was the government), they are at least making half a stand. By leaving it enabled in other regions and for visitors from other regions to the UK, they're making three quarters of a stand.
For example if I encrypt a file locally, a zip file containing images, am I not permitted to upload that zip file to a cloud service in the UK?
Even if the UK's demands were "access to encrypted cloud services", does that also mean encrypted files within encrypted storage? It all seems so messy. Anyone who really wants to hide their files, can do so regardless of demands for backdoors.
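Mechanically there's nothing stopping anyone, which is part of what makes the demand so incoherent. A few lines of client-side code (a sketch with hypothetical file paths) and the cloud provider only ever stores an opaque blob:

```swift
import CryptoKit
import Foundation

// Encrypt an already-zipped archive locally before it ever touches a cloud
// provider; the provider only stores ciphertext it cannot read.
func encryptArchive(at input: URL, to output: URL) throws -> SymmetricKey {
    let key = SymmetricKey(size: .bits256)          // keep this yourself, offline
    let zipData = try Data(contentsOf: input)
    let sealed = try AES.GCM.seal(zipData, using: key)
    try sealed.combined!.write(to: output)
    return key
}

// Hypothetical usage:
// let key = try encryptArchive(at: URL(fileURLWithPath: "photos.zip"),
//                              to: URL(fileURLWithPath: "photos.zip.enc"))
```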
What are you talking about? This is literally them doing the opposite, and there are multiple other public instances of them making a stand, not to mention in the design of their systems.
They had two paths to comply with the law. Silently backdoor the worldwide cloud serving every Apple device, or loudly tell people in the UK they don't get to have security because their government prohibits them. Between these two options, this is clearly "making a stand".
It's not so much "making a stand" as telling a major government, one under whose jurisdiction you hold substantial seizable assets and whose market you want to stay in, that you're not going to do the thing its laws say you are required to do. But it's hardly simple compliance either: instead of doing what the government wants, they are making sure there is blowback.
Whether to try to fight it in court likely depends on details of case law and the wording of the laws they'd be contesting, I imagine much of the delay in their response to the demand was asking their lawyers how well they think they would fare in court.
This doesn't affect only people in the UK. It allows access to all Apple users' data globally:
> No Heathrow connection necessary. “The law has extraterritorial powers, meaning UK law enforcement would have been able to access the encrypted iCloud data of Apple customers anywhere in the world, including in the US” [1].