
I appreciate the explanation. However, I think you do not address the main problem, which is that my data is being sent off my device by default and without any (reasonable) notice. Many users may agree to such a feature (as you say, it may be very secure), but to assume that everyone ought to be opted in by default is the issue.


I'm not sure I agree -- asking users about every single minor feature is (a) incredibly annoying, and (b) quickly causes request-blindness in even reasonably security-conscious users. So reserving the nagging for only risky or particularly invasive things makes sense to me.

Maybe they should lump its default state into something that already exists? E.g. assume that if you already have location access enabled for Photos (it does ask!), you've already indicated that you're okay with something about this identifying information being sent to Apple whenever you take a picture.

My understanding is that Location Services will, among other things, send a hash of local WiFi network SSIDs and signal strengths to a database Apple maintains, and use that to triangulate a possible position for you. This seems loosely analogous to what's going on here with the compute-a-vector thing.
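For what it's worth, here is a minimal sketch of what that kind of client-side fingerprint could look like, purely to illustrate the idea described above; the field names, hashing, and truncation are assumptions, not Apple's actual protocol:

    import hashlib

    def wifi_fingerprint(scan_results):
        # scan_results: list of (bssid, rssi_dbm) pairs the device can see.
        # Only truncated hashes of the identifiers plus coarse signal strengths
        # would leave the device; the server matches them against its own
        # database of known access points to estimate a position.
        fingerprint = []
        for bssid, rssi in scan_results:
            digest = hashlib.sha256(bssid.encode()).hexdigest()[:16]
            fingerprint.append({"ap": digest, "rssi": int(rssi)})
        return fingerprint

    # Two hypothetical access points observed by the phone
    print(wifi_fingerprint([("a4:2b:b0:11:22:33", -48), ("f0:9f:c2:aa:bb:cc", -71)]))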


> Maybe they should lump its default state into something that already exists?

It could be tied to iCloud Photos, perhaps, because then you already know that your photos are getting uploaded to Apple.


Insofar as the photos aren't getting uploaded to Apple for this, that seems a bit extreme.

(We could argue about it, but personally I think some kind of hash doesn't qualify.)


What's the Venn diagram of people who both (1) deliberately refrain from enabling iCloud Photos but nonetheless (2) want the Photos app to phone home to Apple in order to identify landmarks in locally stored photos?


It's probably a pretty large set of people, perhaps even the majority, since I'd suspect that most people don't pay for additional iCloud storage and can't fit their photo library into 5GB.

In fact, I'm willing to bet that if they'd added this feature and gated it behind iCloud Photos being enabled, we'd have different articles complaining about Apple making a cash grab by trying to get people to pay for premium storage. :P


> It's probably a pretty large set of people, perhaps even the majority

As the article notes, this new feature is so "popular" that neither Apple nor the Apple media have bothered to mention it. AFAICT it's not even in Apple's document listing all the new features of iOS 18: https://www.apple.com/ios/ios-18/pdf/iOS_18_All_New_Features...


True, but I don't see how that relates to anything? You asked for a hypothetical set of people who'd have iCloud Photos disabled but would accept metadata being sent to Apple for better search. I can't help you if you want to move the goalposts after I give you that.


> You asked for a hypothetical set of people who'd have iCloud Photos disabled but would accept metadata being sent to Apple for better search.

No, I didn't ask for a hypothetical set. I wanted the actual set of people.


Well, neither of us has any way of surveying the public about that, do we? My claim that those people would be okay with it has as much weight as yours that they wouldn't.

I can try to turn down my natural inclination towards cautious phrasing, if you'd like? Get us on the same level. :D


> It's probably a pretty large set of people, perhaps even the majority, since I'd suspect that most people don't pay for additional iCloud storage and can't fit their photo library into 5GB.

Large set? Yes. Majority? No. CIRP says 2/3 of US Apple users pay for iCloud storage[0]. It's this popular for the exact reason you mentioned. Almost no one can fit their photo library into 5GB so they opt in to the cheap 50GB for $0.99/month. 50GB is enough for a lot of people.

[0] https://wccftech.com/paid-icloud-subscription-is-apples-most...


Time Machine does not back up your desktop and other spots that might be essential in case of needing a backup. iCloud does.

I know users who would prefer not to trust Apple for anything, and only pay for and use iCloud to back up the Desktop [and similar locations]. If they were to hear that their opt-in for iCloud means that Apple starts copying random things, they would not be happy.

[OT, I use Arq. But I admit that iCloud is simpler, and it is not apples to apples.]

IMO, the fact that Apple backs up your keychain to the Mothership, and that this is a "default" behavior that will re-enable itself when shut off, reflects an attitude that makes me very distrustful of Apple.


Huh, I'm honestly kind of surprised. Good to learn something!

Well, I'll take back what I said about the majority. I do still think that the remaining 1/3 of users who don't have enough storage to turn on iCloud Photos qualify as what lapcat was asking for, though.


"asking users about every single minor feature is (a) incredibly annoying"

Then why lie and mislead customers that your data stays local?


I don't think that's a fair characterization of what they're doing.


No? There’s literal billboards linked on this thread that say “what happens on your iPhone stays on your iPhone.”

Apple patting itself on the back.


If you one-way encrypt a value, and that value leaves the phone, with no way to recover the original value, then the original data never left the phone.


I'm sure you know that the point of that billboard is to state that your iPhone protects your privacy. That is generally true; Apple is by far the most privacy-focused major phone and software company. Advertising isn't literal, and if we're going to be pedantic here, the photons emitted by your iPhone's screen technically leave your iPhone and definitely contain private information.


It's not pedantic to call out misleading advertising unless you're a shill for the ones doing the misleading.


> asking users about every single minor feature

Then perhaps the system is of poor design and needs further work before being unleashed on users…


Especially for a company which heavily markets how privacy-focused it is,

1) Sending my personal data to them in any way is not a "feature." It's especially not a feature because what it sets out to do is rather unnecessary: every photo has geotagging, time-based grouping, and AI/ML/whatever on-device keyword assignments and OCR. I can open up my phone right now and search for every picture that has grass in it. I can search for "washington" and if I took a picture of a statue of George Washington that shows the plaque, my iPhone already OCR'd that and will show the photo.

2) "Minor" is not how I would ever describe sending data based off my photos to them, regardless of how much it's been stuffed through a mathematical meat grinder.

3) Apple is usually very upfront about this sort of thing, and also loves to mention the most minor, insignificant, who-gives-a-fuck feature addition in the change notes for "point" system updates. We're talking things like "Numbers now supports setting font size in chart legends" (I'm making that up but you get the point.)

This was very clearly an "ask for forgiveness because the data we want is absolutely priceless and we'll get lots of it by the time people notice / word gets out." It's along the lines of Niantic using the massive trove of photos from the Pokémon games to create 3D maps of everywhere.

I specifically use iOS because I value my privacy (and don't want my cell phone data plan, battery power, etc to be a data collection device for Google.) Sending data based off my photos is a hard, do-not-pass-go-fuck-off-and-die line in the sand for me.

It's especially shitty because they've gated a huge amount of their AI shit behind owning the current iPhone model....but apparently my several generation old iPhone is more than good enough to do some AI analysis on all my photos, to upload data for them?

Fuck everyone at Apple who was involved in this.


> This was very clearly an "ask for forgiveness because the data we want is absolutely priceless and we'll get lots of it by the time people notice / word gets out."

It's very clearly not, since they've gone to huge lengths to make sure they can't actually see the data themselves; see the grandparent post.


> It's especially shitty because they've gated a huge amount of their AI shit behind owning the current iPhone model....but apparently my several generation old iPhone is more than good enough to do some AI analysis on all my photos

Hear hear. As if they can do this but not Visual Intelligence, which is just sending a photo to their servers for analysis. Apple has always had artificial limitations but they've been getting more egregious of late.


I think it does address the main problem. What he is saying is that multiple layers of security are used to ensure (mathematically and theoretically proven) that there is no risk in sending the data, because it is encrypted and sent in such a way that Apple or any third party will never be able to read or access it (again, based on theoretically provable math). If there is no risk there is no harm, and then the need for 'by default', opt in/out, notifications, etc. looks different.

The problem with this feature is that we cannot verify that Apple's implementation of the math is correct and without security flaws. Everyone knows there are security flaws in all software, and this implementation is not open (i.e., we cannot review the code, and even if we could review the code we cannot verify that the provided code was the code used in the iOS build). So, we have to trust that Apple did not make any mistakes in their implementation.


Your second paragraph is exactly the point made in the article as the reason why it should be an informed choice and not something on by default.


If you don’t trust Apple to do what they say they do, you should throw your phone in the bin because it has total control here and could still be sending your data even if you opt out.


Bugs have nothing to do with trust. You can believe completely that someone’s intentions are pure and still get screwed by their mistake.


Oh yeah, the well known "blind trust" model of security. Never verify any claims of any vendor! If you don't trust them, why did you buy from them?!


As someone with a background in mathematics I appreciate your point about cryptography. That said, there is no guarantee that any particular implementation of a secure theoretical algorithm is actually secure.


There is also no guarantee that Apple isn't lying about everything.

They could just have the OS batch uploads until a later point e.g. when the phone checks for updates.

The point is that this is all about risk mitigation not elimination.


> There is also no guarantee that Apple isn't lying about everything.

And at that point all the opt-in dialogs in the world don't matter and you should not be running iOS but building some custom Android ROM from scratch.


> There is also no guarantee that Apple isn't lying about everything.

Other than their entire reputation


A reputation has to be earned again and again.


Maybe your threat model can tolerate an "oopsie woopsie". Politically exposed persons probably cannot.


If you don't personally write the software stack on your devices, at some point you have to trust a third party.


I would trust a company more if their random features sending data are opt-in.

A non-advertised feature, which is not independently verified, and which concerns image contents? I would prefer independent verification of their claims.


Agreed, but surely you see a difference between an open source implementation that is out for audit by anyone, and a closed source implementation that is kept under lock & key? They could both be compromised intentionally or unintentionally, but IMHO one shows a lot more good faith than the other.


No. That’s your bias as a nerd. There are countless well-publicised examples of ‘many eyeballs’ not being remotely as effective as nerds make it out to be.


can you provide a relevant example for this context?


That was an entire body of research at the University of Minnesota and the “hypocrite commits” weren’t found until the authors pointed people to them.

https://www.theverge.com/2021/4/30/22410164/linux-kernel-uni...


How long did the Log4j vulnerability exist?

https://www.csoonline.com/article/571797/the-apache-log4j-vu...

What was the other package that had the mysterious .?


And yet they were found. How many such exploits lurk unexamined in proprietary codebases?


yet you say this like Apple or Google or Microsoft has never released an update to address a security vuln


Apple[1], Google[2], and Microsoft[3] you say?

You say this as if being shamed into patching the occasional vuln is equivalent to security best practices.

Open code which can be independently audited is only a baseline for trustworthy code. A baseline none of those three meet. And one which by itself is insufficient to counter a "Reflections on Trusting Trust"-style attack. For that you need open code, diverse open build toolchains, and reproducible builds. None of which is being done by those three.

Are you getting your ideas about security from the marketing department?

1: https://arstechnica.com/security/2024/03/hackers-can-extract... 2: https://www.wired.com/story/google-android-pixel-showcase-vu... 3: https://blog.morphisec.com/5-ntlm-vulnerabilities-unpatched-...


Go ahead and put that cup of kool-aid down for a minute. There are so, so many OSS packages out there that have never been audited. Why not? Because people have better things to do. How many packages have you audited? Personally, I don't have the skillz to do that. The people that do expect to be compensated for their efforts. That's why so many OSS packages have vulns that go unnoticed until after they are exploited, which is the same thing as closed source.

OSS is not the panacea that everyone touts it to be.


> There are so so many OSS packages out there that have never been audited? Why not? Because people have better things to do.

I'm not aware of any major open source projects that haven't experienced some level of auditing. Coverity alone scans everything you're likely to find in a distribution like Debian or Fedora: https://scan.coverity.com/o/oss_success_stories

> How many packages have you audited?

Several on which I depend. And I'm just one pair of eyeballs.

> Personally, I don't have the skillz to do that.

Then why are you commenting about it?

> OSS is not the panacea that everyone touts it to be.

I don't know who's touting it as a panacea, seems like a strawman you've erected. It's a necessary pre-requisite without which best practices aren't possible or verifiable.


The developer-to-user trust required in the context of open-source software is substantially less than in proprietary software. This much is evident.


I’m stealing your information.

Hey! That’s wrong.

But I promise I won’t do anything wrong with it.

Well ok then.


This is still a very dishonest representation of what’s actually happening.


You're welcome to check their implementation yourself:

https://github.com/apple/swift-homomorphic-encryption


Hypothetical scenario: Theo de Raadt and Bruce Schneier are hired to bring Apple products up to their security standards. They are given a public blog, and they are not required to sign an NDA. They fix every last vulnerability in the architecture. Vladimir Putin can buy MacBooks for himself and his generals in Moscow, enable Advanced Data Protection, and collaborate on war plans in total confidence.

Where are the boundaries in this scenario?


Theo de Raadt is less competent than Apple's security team (and its external researchers). The main thing OpenBSD is known for among security people is adding random mitigations that don't do anything because they thought them up without talking to anyone in the industry.


I mean half the reason the mitigations don't do anything is that nobody actually cares to target OpenBSD


Freedom of speech cannot exist without private communications. It is an inalienable right, therefore privacy is as well.


I am pretty sure that if we had those people in charge of stuff like this there would be no bar above which "opt in by default" would happen, so I am unsure of your point?


Except for the fact (?) that quantum computers will break this encryption, so if you wanted to you could hoard the data and just wait a few years and then decrypt?


Quantum computers don't break Differential Privacy. Read the toy example at https://security.googleblog.com/2014/10/learning-statistics-...

>Let’s say you wanted to count how many of your online friends were dogs, while respecting the maxim that, on the Internet, nobody should know you’re a dog. To do this, you could ask each friend to answer the question “Are you a dog?” in the following way. Each friend should flip a coin in secret, and answer the question truthfully if the coin came up heads; but, if the coin came up tails, that friend should always say “Yes” regardless. Then you could get a good estimate of the true count from the greater-than-half fraction of your friends that answered “Yes”. However, you still wouldn’t know which of your friends was a dog: each answer “Yes” would most likely be due to that friend’s coin flip coming up tails.
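That coin-flip scheme is called randomized response, and it is one of the simplest forms of differential privacy. A minimal simulation of the quoted example, showing how the aggregator recovers a population estimate while no individual answer is revealing (the 30% dog rate and population size are made up for the demo):

    import random

    def randomized_response(is_dog: bool) -> bool:
        # Heads (p=0.5): answer truthfully. Tails: always answer "Yes".
        if random.random() < 0.5:
            return is_dog
        return True

    def estimate_true_fraction(answers) -> float:
        # P(Yes) = 0.5*p + 0.5, so p is estimated as 2*(observed yes rate - 0.5).
        yes_rate = sum(answers) / len(answers)
        return max(0.0, 2 * (yes_rate - 0.5))

    # Simulate 100,000 friends, 30% of whom are actually dogs.
    population = [random.random() < 0.30 for _ in range(100_000)]
    reported = [randomized_response(d) for d in population]
    print(round(estimate_true_fraction(reported), 3))  # ~0.3, yet any single "Yes" is deniable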


> Except for the fact (?) that quantum computers will break this encryption […]

Quantum computers will make breaking RSA and Diffie-Hellman public key encryption easier. They will not affect things like AES, nor things like hashing:

> Client side vectorization: the photo is processed locally, preparing a non-reversible vector representation before sending (think semantic hash).

And for RSA and DH, there are algorithms being deployed to deal with that:

* https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...


Quantum computers don't and won't meaningfully exist for a while, and once they do exist, they still won't be able to crack it. Quantum computers aren't this magical "the end is nigh" gotcha to everything and unless you're that deep into the subject, the bigger question you've got to ask yourself is why is a magic future technology so important to you that you just had to post your comment?

Anyway, back to the subject at hand; here's Apple on that subject:

> We use BFV parameters that achieve post-quantum 128-bit security, meaning they provide strong security against both classical and potential future quantum attacks

https://machinelearning.apple.com/research/homomorphic-encry...

https://security.apple.com/blog/imessage-pq3/


I’m a cryptographer and I just learned about this feature today while I’m on a holiday vacation with my family. I would have loved the chance to read about the architecture, think hard about how much leakage there is in this scheme, but I only learned about it in time to see that it had already been activated on my device. Coincidentally on a vacation where I’ve just taken about 400 photos of recognizable locations.

This is not how you launch a privacy-preserving product if your intentions are good, this is how you slip something under the radar while everyone is distracted.


In engineering we distinguish the "how" of verification from the "why" of validation; it looks like much of the disagreement in these comments is about the premise of whether ANY outgoing data counts as a privacy consent issue. It's not a technical issue, it's a premises disagreement issue, and that can be hard to explain to the other side.


The premise of my disagreement is that privacy-preserving schemes should get some outside validation by experts before being turned on as a default. Those experts don’t have to be me, there are plenty of people I trust to check Apple’s work. But as far as I can tell, most of the expert community is learning about this the same way that everyone else is. I just think that’s a bad way to approach a deployment like this.


Apple of course thinks their internal team of experts is enough to validate this.


To play Apple's advocate, this system will probably never be perfect, and will never stand up to full scrutiny from everyone on the planet. And they also need as many people as possible activated, as it's an adversarial feature.

The choice probably looks to them like:

  A - play the game, give everyone a heads up, respond to all feedback, and never ship the feature

  B - YOLO it, weather the storm, have people forget about it after the holiday, and go on with their life.

Whether B works is up for debate, but that was probably their only chance to ship it from their POV.


To give you feedback in your role as Apple's advocate:

"we had to sneak it out because people wouldn't consent if we told them" isn't the best of arguments


Agreed. These past two or three years in particular, there have been more instances where what's best for Apple hasn't been what's best for their users.


Did a variation of A already happen in 2022, with "client-side scanning of photos"?


Yes. That also was a thoroughly botched version of A, but I think even a good version of A won't see them ship anything within this century.

IMO giving up on having it widely used and just shipping it turned off would be the best choice. But it's so obvious, there must be other critical reasons (good or bad) why that's not an option.


I think I'm saying: you're not sending "your data" off device. You are sending a homomorphically encrypted locally differentially private vector (through an anonymous proxy). No consumer can really understand what that means, what the risks are, and how it would compare to the risk of sending someone like Facebook/Google raw data.

I'm asking: what does an opt-in for that really look like? You're not going to be able to give the user enough info to make an educated decision. There's a ton of risk of "privacy washing" ("we use DP" but at a very poor epsilon, or "we use E2E encryption" with side-channel data gathering).

There's no easy answer. "ask the user", when the question requires a phd level understanding of stats to evaluate the risk isn't a great answer. But I don't have another one.
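For readers trying to picture the "locally differentially private vector" part, here is a toy sketch of the general pattern: the device clips an embedding and adds calibrated noise before anything leaves it. This only illustrates the technique; the embedding, epsilon, and clipping bound are invented for the example and are not Apple's actual parameters.

    import numpy as np

    def privatize_embedding(vec: np.ndarray, epsilon: float, clip: float = 1.0) -> np.ndarray:
        # Bound the user's contribution by clipping the L1 norm, then add
        # Laplace noise scaled to the worst-case change between two inputs
        # (at most 2*clip), giving epsilon-local-DP for this single release.
        l1 = np.abs(vec).sum()
        if l1 > clip:
            vec = vec * (clip / l1)
        noise = np.random.laplace(0.0, 2.0 * clip / epsilon, size=vec.shape)
        return vec + noise

    # Hypothetical on-device photo embedding from a local model.
    raw = np.random.rand(64)
    noised = privatize_embedding(raw, epsilon=1.0)
    # Only `noised` (after further encryption and proxying) would ever be sent.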


In response to your second question, opt-in would look exactly like this: don't have the box checked by default, with an option to enable it: "use this to improve local search, we will create an encrypted index of your data to send securely to our servers, etc..." A PhD is not necessary to understand the distinction between storing data locally on a machine vs. on the internet.


Even here with the HN crowd: it's not an index, it's not stored on a server, and it's not typical send-securely encryption (not PK or symmetric "encrypted in transit", but homomorphic "encrypted processing"). Users will think that's all gibberish (ask a user if they want to send an index or vector representation? no clue).

Sure, you can ask users "do you want to use this". But why do we ask that? Historically it's user consent (knowingly opting in), and legal requirements around privacy. We don't have that pop up on any random new feature, it's gated to ones with some risk. There are questions to ask: does this technical method have any privacy risk? Can the user make informed consent? Again: I'm not pitching we ditch opt-in (I really don't have a fix in mind), but I feel like we're defaulting too quickly to "old tools for new problems". The old way is services=collection=consent. These are new privacy technologies which use a service, but the privacy is applied locally before leaving your device, and you don't need to trust the service (if you trust the DP/HE research).

End of the day: I'd really like to see more systems like this. I think there were technically flawed statements in the original blog article under discussion. I think new design methods might be needed when new technologies come into play. I don't have any magic answers.


> I think there were technically flawed statements in the original blog article under discussion.

Such as?


The third choice, after opt-in and opt-out is to force the user to choose on upgrade before they can use their device again. "Can we use an encrypted, low-resolution copy of your photos that even we ourselves can't see?"


Okay except "encrypted, low-resolution copy of your photos" is an incredibly bad explanation of how this feature works. If nobody on HN so far has managed to find an explanation that is both accurate and understandable to the average consumer, any "hey can we do this" prompt for this feature is essentially useless anyways. And, IMO, unnecessary since it is theoretically 100% cryptographically secure.


I think it's sufficiently accurate, why don't you think it is? I don't think the vector vs low-res aspect is particularly material to understanding the key fact that "even we ourselves can't see?"


I think the best response is to make it work how iCloud storage works. The option is keep my stuff on the local device or use iCloud.


Exactly. It's the height of arrogance to insist that normal users just can't understand such complex words and math, and therefore the company should not have to obtain consent from the user. As a normal lay user, I don't want anything to leave my device or computer without my consent. Period. That includes personal information, user data, metadata, private vectors, homomorphic this or locally differential that. I don't care how private Poindexter assures me it is. Ask. For. Consent.

Don't do things without my consent!!! How hard is it for Silicon Valley to understand this very simple concept?


Every TCP session leaks some PRNG state for the ISN. That might leak information about key material.

Every NTP session leaks time desync information, which reveals—on modern hardware—relativistic travel, including long airplane trips.

Every software update leaks a fortune about what you run and when you connect.

I don’t think it’s reasonable to ask that people consent to these; I don’t think they can. I absolutely agree that photo metadata is different and at a way higher level of the stack.


This, 1000x. Thank you for voicing the absurdness of their approach to 'consent'.


The average smartphone is probably doing a hundred things you didn’t knowingly consent to every second.

Should Apple insist that every end user consents to the user agent string sent on every HTTP request?


> The average smartphone is probably doing a hundred things you didn’t knowingly consent to every second.

You've succinctly identified a (maybe the) huge problem in the computing world today. Computers should not do anything without the user's command/consent. This seems like a hopeless and unachievable ideal only because of how far we've already strayed from the light.

Even Linux, supposedly the last bastion of user control... it's a mess. Do a fresh install and type ps ax at a shell. You'll see dozens of processes in the background doing god knows what. I didn't consent to any of this! The distribution's maintainer simply decided on my behalf that I want the computer to be running all these processes. This is totally normalized!

I don't expect my computer to ask for consent again and again for every byte sent over the network, but I do expect it to obtain my consent before generally accessing the network and sending bytes over the network.


"The light" you claim is that users should have the knowledge and discernment to consent to what a computer does.

To me, there's never been a case, except maybe in the first decade or so of the hobby/tinkering PC movement, where most users had this ability.

Should we just not use computers?


> Should we just not use computers?

I don't think "should we just give up?" is a reasonable question to anything.


> I do expect it to obtain my consent before generally accessing the network and sending bytes over the network.

How would that make any difference in this case? Presumably, you'll have long-ago checked the "allow general access to the network" setting, so you've given consent to the "send my photo data" action. Heck, surely connecting to the internet in the first place is implicit consent that you want to send stuff over the network?


If I were actually given the choice, I would not check any checkbox allowing an application broad, unfettered access to the network. But, in most cases I'm not even given that choice!


> I didn't consent to any of this!

Yes you did. You purchased a computer, put this software on it and executed it. If you didn't want it to do whatever it's doing, you should have determined what it would do beforehand and chosen not to do it.


> whatever it's doing

Even assuming that running the software implies my consent (which I would dispute), how do I make the decision about whether I should execute the software if I don't know what it is doing?

This all-or-nothing approach is also problematic. I should not have to allow the developer free rein to do whatever he wants, as a condition of using the software. This is why operating systems are slowly building granular permissions and consent checks.


Installing and booting Linux absolutely implies consent to let it do what it does. It's open source, you can evaluate what it does before booting it. You know it's comprised of many processes, you know it has a networking stack, you connected it to a network. You can't then ask OMG why didn't it ask before sending something?

I agree that all-or-nothing is problematic but even with a flexible permission system the best you can hope for is for all the things apps do to be itemized and set to sane defaults. But even then sanity is subjective. For every person like you (and me fwiw) who values privacy there are 1000 people who will never find the settings, don't care about privacy, and will wonder why stuff isn't working.

Ultimately privacy is similar to security in that it comes down to trust. If you don't trust your OS you're screwed. Your choices are try to exert as much control over it as possible, or don't use it.


That's not how informed consent works.


> You've succinctly identified a (maybe the) huge problem in the computing world today.

And getting downvoted for saying it, which is a fascinating incongruity.


> incongruity

Or signal of non-named stakeholders.


It’s amazing how hostile Silicon Valley (and HN commenters) are to the basic idea of consent. It’s as if simply asking the user for permission is a grave insult to these technologists. “I shouldn’t have to ask permission! It implies I’m doing something bad!” they might be thinking.

If the world was a nightclub, “Silicon Valley” would be a creepy guy who walks up to every woman and says “You’re now dating me. To stop, you need to opt out using a form that I will do my best to make sure you can’t read.”


You're inverting morality and infantilising the consumer. Apple is a corporation. Corporations don't owe you anything morally, except as required by law.

Choosing an Apple product is consent to trusting Apple. Continued use of their products represents ongoing consent. This is an objective fact about all complex connected devices and it cannot possibly be otherwise.


Corporations are driven by people. They're not a separate entity that decides to do things while their owners are sleeping. Every action has someone who suggested it and someone who gave the green light.


Corporations are driven by shareholders, through the board of directors, through the c-suite, which have a fiduciary obligation to maximise profits.


There is significant middle ground between "do it without asking" and "ask about every single thing". A reasonable option would be "ask if the device can send anonymized data to Apple to enable such and such features". This setting can apply to this specific case, as well as other similar cases for other apps.


Asking the user is perfectly reasonable. Apple themselves used to understand and champion that approach.

https://www.youtube.com/watch?v=39iKLwlUqBo


If you can't meaningfully explain what you're doing then you can't obtain informed consent. If you can't obtain informed consent then that's not a sign to go ahead anyway, it's a sign that you shouldn't do it.

This isn't rocket surgery.


+100 for "rocket surgery".

I mostly agree. I'm just annoyed "this new privacy tech is too hard to explain" leads to "you shouldn't do it". This new privacy tech is a huge net positive for users.

Also: from other comments it sounds like it might have been opt-in the whole time. Someone said a fresh install has it off.


> This new privacy tech is a huge net positive for users.

It's a positive compared to doing the same "feature" without the privacy tech. It's not necessarily a positive compared to not forcing the "feature" on the user at all.

The privacy tech isn't necessarily a positive as a whole if it leads companies to take more liberties in the name of "hey you don't need to be able to turn it off because we have this magical privacy tech (that nobody understands and may or may not actually work please don't look into it too hard)".


I don't care if all they collect is the bottom right pixel of the image and blur it up before sending it, the sending part is the problem. I don't want anything sent from MY device without my consent, whether it's plaintext or quantum proof.

You're presenting it as if you have to explain elliptic curve cryptography in order to toggle a "show password" dialogue but that's disingenuous framing, all you have to say is "Allow Apple to process your images", simple as that. Otherwise you can argue many things can't possibly be made into options. Should location data always be sent, because satellites are complicated and hard to explain? Should we let them choose whether they can turn wifi on or off, because you have to explain IEEE 802.11 to them?


> I don't want anything sent from MY device without my consent

Then don’t run someone else’s software on your device. It’s not your software, you are merely a licensee. Don’t delude yourself that you are morally entitled to absolute control over it.

The only way to have absolute control over software is with an RMS style obsession with Free software.


They might not be legally entitled to it, but that's just because of our shitty "intellectual property" laws. Morally speaking, OP is absolutely entitled to have a device that they own not spying on them.


Regardless of one's opinion of intellectual property laws, nobody is morally entitled to demand that someone else build the exact product they want. In fact it is immoral to demand that of other people — and you certainly wouldn’t like it if other people could demand that of you.

Want a phone that doesn’t spy on you? Make it yourself. If you can’t, find some like-minded people and incentivise them (with money or otherwise) to make it for you. If they can’t (or won’t) perhaps contemplate the possibility that large capitalist enterprises might be the only practical way to develop some products.


This is just "might makes right" bullshit with slightly prettier framing.


This has absolutely nothing to do with "might makes right". If a fast food store decides to offer a Vietnamese Peanut Burger and Sugar Cane Juice combo, nut allergy sufferers are not "morally entitled" to a nut-free option and diabetics are not "morally entitled" to a sugar-free juice option. This applies whether the fast food store is a small family run business, or McDonalds.

To suggest that customers are "morally entitled" to a Samsung phone with zero tracking and zero telemetry is similarly absurd. If you don't like Samsung's product, don't buy it.


> If a fast food store decides to offer a Vietnamese Peanut Burger and Sugar Cane Juice combo, nut allergy sufferers are not "morally entitled" to a nut-free option and diabetics are not "morally entitled" to a sugar-free juice option.

Why not? What gives McD the right to make such a decision unilaterally, other than might?

In fact, this is how disability legislation (for example) already tends to work. You don't get to tell disabled people to just go somewhere else, you have to make reasonable accomodations for them.


> What gives McD the right to make such a decision unilaterally

This cannot be a serious question.


> nut allergy suffers are not "morally entitled" to a nut-free option

Restaurants have a legal obligation to warn the customers. AKA "opt-in", which is NOT what Apple is doing. And that's the whole issue with their behavior.


Apple's food scientists have verified the food safety of their new recipe, and they are sufficiently confident that nobody will suffer any allergic reaction. Nobody has disputed their assessment.

That doesn't stop consumers from engaging in Info Wars style paranoia, and grandstanding about the aforementioned paranoia.


That's absurd.

We can regulate these problems.

If the EU can regulate away the lightning connector they can regulate away this kind of stuff.


You're seriously arguing that it's absurd for customers to have "absolute control" over all software?

No EU regulation could regulate away all "moral" concerns over software. More specifically, the EU could regulate, but the overwhelming majority of software companies would either strip significant features out for EU customers, or exit the market altogether.


Lol, they keep threatening that but they still like the money of the europeans.


The EU hasn't threatened granting consumers "absolute control" over all software.


I'd vote for a party that said the only legal license is AGPL :D


The “moral entitlement” has nothing to do with this. The software is legally required to abide by its license agreement (which, by the way, you are supposed to have read, understood, and accepted prior to using said software).


I honestly can’t tell if you’re being sarcastic. A license grants the end user permission to use the software. It is not a series of obligations for how the software operates. This would be excruciatingly obvious if you read any software license.


A license agreement is, well, an agreement between the manufacturer and the consumer which may include a requirement to acknowledge certain aspects of how the software operates (e.g. the user may be required to agree to “share” some data).


Some commercial software licenses may include various disclaimers which exist to ward away litigious assholes. They only serve to protect the vendor against legal complaints, and do not impart responsibilities upon the vendor. Such disclaimers are not necessary but corporate lawyers have a raison d'être, and at a certain scale assholes become inevitable.


Notice is always good and Apple should implement notice.

However, "my data is being sent off my device" is incorrect, as GP explained. Metadata, derived from your data, with noise added to make it irreversible, is being sent off your device. It's the equivalent of sending an MD5 of your password somewhere; you may still object, but it is not factually correct to say your password was transmitted.


> It's the equivalent of sending an MD5 of your password somewhere; you may still object, but it is not factually correct to say your password was transmitted.

Hackers love to have MD5 checksums of passwords. They make it way easier to find the passwords in a brute force attack.

https://en.wikipedia.org/wiki/Rainbow_table
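To make the point concrete (and to connect to the salt discussion below): a fast, unsalted hash of a low-entropy secret can be recovered simply by hashing guesses, which is exactly what rainbow tables pre-compute. A small sketch with a made-up wordlist:

    import hashlib, os

    # Unsalted MD5: anyone holding the digest can test guesses offline.
    leaked = hashlib.md5(b"hunter2").hexdigest()
    wordlist = [b"password", b"letmein", b"hunter2", b"qwerty"]  # stand-in for a real dictionary
    cracked = next((w for w in wordlist if hashlib.md5(w).hexdigest() == leaked), None)
    print(cracked)  # b'hunter2' -- recovered despite being "hashed"

    # A per-record random salt defeats precomputed tables (though a fast hash
    # is still brute-forceable; real systems use slow KDFs like scrypt/Argon2).
    salt = os.urandom(16)
    salted = hashlib.sha256(salt + b"hunter2").hexdigest()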


>> It's the equivalent of […]

> Hackers love to have MD5 checksums of passwords.

Hackers love not understanding analogies. :)


Hackers love to make defective analogies (especially redundant recursive ones) and invite sarcastic corrections to them.


Nobody is responding seriously to this because you seem to have missed the part where GP said "with noise added to make it irreversible", and the third sentence in that Wikipedia article.


Hackers don’t know about salts yet?


Bath salts yes, security salts, not so much.


> However, "my data is being sent off my device" is incorrect, as GP explained. Metadata, derived from your data, with noise added to make it irreversible, is being sent off your device.

Sounds like my data is being sent off my device.

> It's the equivalent of sending an MD5 of your password somewhere

Sounds even worse lol


It does not sound like that at all.

There is plenty of data on your device that isn’t “your data” simply due to existing on your device.


If the information being sent from my device cannot be derived from anything other than my own data then it is my data. I don't care what pretty dress you put on it.


> It's the equivalent of sending an MD5 of your password somewhere

a) MD5 is reversible, it just costs GPU time to brute force

b) It is unproven that their implementation is irreversible


BFV has been proven to be irreversible, and Apple open sourced their Swift library implementing it, so it's not totally unproven.

https://github.com/apple/swift-homomorphic-encryption


Well that's what you're told is happening. As it's all proprietary closed source software that you can't inspect or look at or verify in any manner, you have absolutely zero evidence whether that's what's actually happening or not.


If you can't inspect it that just means you don't know how to use Ghidra/Hopper. ObjC is incredibly easy to decompile and Swift isn't much harder.


"Your data" is not actually being sent off your device, actually, it is being scrambled into completely unusable form for anyone except you.

This is a much greater level of security than what you would expect from a bank, for example, who needs to fully decrypt the data you send it. When using your banking apps over HTTPS (TLS), you are trusting the CA infrastructure, you are trusting all sorts of things. You have fewer points of failure when a key for homomorphic encryption resides only on your device.

"Opting-in by default" is therefore not unsafe.


I guess it depends on what you're calling "your data" -- without being able to reconstruct an image from a noised vector, can we say that that vector in any way represents "your data"? The way the process works, Apple makes their own data that leaves your device, but the photo never does.


It's the same as the CSAM initiative. It doesn't matter what they say they send, you cannot trust them to send what they say they send or trust them not to change it in the future.

Anything that leaves my devices should do so with my opt-IN permission.


Even if they implemented the feature with opt-in permissions, why would you trust this company to honor your negative response to the opt-in?


How would you explain client side vectorization, differential privacy and homomorphic encryption to a layman in a single privacy popup so that they can make an informed choice?

Or is it better to just trust that mathematics works and thus encryption is a viable way to preserve privacy and skip the dialog?


The big mistake here is ownership of your apple devices is an illusion...


Do you consider your data to include non-reversible hashes of your data injected with random noise? I'm not sure I consider that my data. It's also not even really metadata about my data.


Do you use iCloud to store your photos?


I’m not the person you asked, but I agree with them. To answer your question: No, I do not use iCloud to store my photos. Even if I did, consent to store data is not the same as consent to scan or run checks on it. For a company whose messaging is all about user consent and privacy, that matters.

This would be easily solvable: On first run show a window with:

> Hey, we have this new cool feature that does X and is totally private because of Y [link to Learn More]

> Do you want to turn it on? You can change your mind later in Settings

> [Yes] [No]


When iCloud syncs between devices how do you think that happens without storing some type of metadata?

You don’t use iCloud for anything? When you change phones do you start fresh or use your computer for backups? Do you sync bookmarks? Browsing history?

Do you use iMessage?


In response to your question in the parent comment, no, I do not use iCloud. And I do not sync any of the things you mentioned here. If someone already consented to using iCloud to store their photos then I would not consider the service mentioned this post to be such a big issue, because Apple would already have the data on their servers with the user's consent.

edit: I will just add, even if we accept the argument that it's extremely secure and impossible to leak information, then where do we draw the line between "extremely secure" and "somewhat secure" and "not secure at all"? Should we trust Apple to make this decision for us?


> If someone already consented to using iCloud to store their photos then I would not consider the service mentioned this post to be such a big issue, because Apple would already have the data on their servers with the user's consent.

No, if you enable Advanced Data Protection for iCloud[1], the photos stored in Apple Photos are end to end encrypted.

[1] https://support.apple.com/en-us/108756


Do you start fresh with an iOS installation after each upgrade or do you back up your iPhone using your computer and iTunes?


I do not have anything backed up on any cloud servers on any provider. If I had to buy a new phone I would start from a fresh installation and move all of my data locally. It's not that I'm a "luddite", I just couldn't keep track of all of the different ways each cloud provider was managing my data, so I disabled all of them.


If only Apple had a centralized backup service that could store everything automatically at a click of a button so you wouldn’t have to juggle multiple cloud providers…


Not all apps support Apple’s backup solution. Threema and Signal come to mind.


And that is because of policy choices by Signal.


So because of policy choices made by app developers, you have to manage multiple cloud solutions.

Or as the GP suggested, forego the cloud entirely. iCloud and Apple’s built in iOS backup is not a magic bullet unfortunately.


By one lone outlier who decides for "security" that they don't want to support the platform's backup solution. That app wouldn't have to do anything besides store information locally in its sandbox.


Does Signal allow the user to opt-in/opt-out into their policy? Or are they forcing this policy on their users?


No. They do not allow users to opt in


I kinda was somewhat with you until this point.

Apple IS just another cloud provider / centralized backup service. It's not fundamentally different than others, and if you're not in the select group of (whatever the respectful term is) those who stay strictly inside the Apple ecosystem, you will have multiple clouds and multiple data sets and multiple backups that all interact with each other and your heterogeneous devices in unpredictable ways. iCloud will not help you with that any more than Google Cloud or Samsung Cloud etc. They all want to own all of your stuff; none of them is simply a hyper-helpful neutral director.


The “fundamental difference” is that it’s better integrated with your device and can back up the internal state of your device and the apps.

Even if you use Microsoft Office or GSuite and save using the standard file picker, you can save to iCloud. iCloud has a native app for Windows and plug-ins to sync browser bookmarks for Chrome, Edge and Firefox.

And the alternative people are proposing are four or five self hosted solutions?


Again, I think there's an assumption of single device / ecosystem loyalty in your statement? I have an android phone and iOS phone and three android tablets and a bunch of laptops with various operating systems.

iPhone is "just another device". I don't feel iCloud is any better integrated with my Samsung Note than Google is integrated with my iPhone - in fact, the opposite. Google, for example, CAN sync my photos across iPhone and Android and Windows devices. Whereas my wife knows the primeval scream from the home office every 6 months when I try to claw photos out of Apple's greedy, selfish hands :-)

For people who JUST use iPhone, sure, iCloud is the boss, just like for people who JUST use e.g. Samsung Galaxy the Samsung cloud is awesome. But that's not a high bar. I feel we are still lacking empathy here for people like the original poster who may have more than one device in their lives.


And none of these can sync your bookmarks, iOS settings or store the internal state of apps on your iPhone.

And I wouldn’t have the same arguments if they were using Google Cloud. But they are concerned about “privacy” and trust Google?

But my argument is about people thinking that Apple or Google should care about the minuscule number of people who are hosting their own syncing services


None of that is relevant to my point. You seem to be trying to catch people in some kind of gotcha instead of engaging honestly with the problem at hand. But alright, I’ll bite.

Yes, I always start with clean installs, both on iOS and on macOS. Sometimes I even restart fresh on the same device, as I make sure my hardware lasts. I don’t sync bookmarks, I keep them in Pinboard and none of them has any private or remotely identifiable information anyway. I don’t care about saving browser history either, in fact I have it set to periodically auto-clear, which is a feature in Safari.


No I am trying to say with a connected device using online services, the service provider is going to have access to your data that you use to interact with them.

To a first approximation, everyone in 2024 expects their data and settings to be transferred across devices.

People aren’t working as if it is 2010 when you had to backup and restore devices via iTunes. If I’m out of town somewhere and my phone gets lost, damaged or stolen, I can buy another iPhone, log into my account and everything gets restored as it was.

Just as I expect my watch progress to work when I use Netflix between my phone, iPad, Roku devices etc.


And that should rightfully be your informed choice. Just like everyone else should have the right to know what data their devices are sending before it happens and be given the informed choice to refuse. People shouldn’t have to learn that from a random blog post shared on a random website.


In what world is Netflix for instance not going to know your watch history?

How many people are going to say in 2024 that they don’t want continuous cloud backup? You want Windows Vista style pop ups and permissions?


How many times are you going to shift the goalposts? This is getting tiresome, so I’ll make it my last reply.

I don’t have Netflix but neither is that relevant to the point, you’re obviously and embarrassingly grasping at straws.

No one is arguing against continuous cloud backups, they’re arguing about sending data without consent. Which, by the way, is something Apple used to understand not to do.

https://www.youtube.com/watch?v=39iKLwlUqBo

Apple’s OS are already filled with Windows Vista style popups and permissions for inconsequential crap, people have been making fun of them for that for years.


If you are doing continuous cloud backups and using Apple services - you are already giving Apple your data and your solution is to add even more permissions? You are not going to both use any Apple service that requires an online component and keep Apple from having your data.

Isn’t it bad enough that I have a popup every time I copy and paste between apps?


> Isn’t it bad enough that I have a popup every time I copy and paste between apps?

For me, not really no. It reminds me I am copying information and not from some phishing app, I find it informative.

And I'm probably one of the few who actually click "Reject" to the cookie pop ups having to click no on 3742 legitimate consents.

The simple answer is everything should be opt-out. I'll opt-in if I require it because frankly, regardless to how Fort-Knox my data is $CORP still cannot be trusted.


If that’s the case, you aren’t using email either or messaging?


Strictly Signal via self-hosted VPN for messages. My email web client provided by my email server (Zimbra) which are hosted on colocated servers. 3cx for calls via self-hosted PBX.

Video conferencing instead of FaceTime are made via self-hosted Jitsi and if I am to brag all running on FreeBSD.

Out of Apple or Google I trust neither; however, I will align with Apple more than Google. It's as close as I can get to not having data collected by mongrels.


Netflix being unable to know your watch history on their service is exactly the goal of homomorphic encryption. The technology to make that work at that scale does not exist, however for smaller bits of data, eg phone numbers, that's entirely possible!

With PIR, an Apple phone receiving a phone call queries Apple's database with that phone number, but because it's using homomorphic encryption, Apple doesn't know the number that called despite looking it up in their database to provide caller ID info, so they can't tie your phone number and the caller's phone number together.

https://machinelearning.apple.com/research/homomorphic-encry...
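For anyone curious how a lookup can work without revealing the key being looked up, here is a toy sketch of the PIR idea using a from-scratch textbook Paillier scheme (additively homomorphic, tiny insecure parameters, illustration only; Apple's system uses BFV and is far more involved). The client sends an encrypted one-hot query, the server folds it into its database while seeing only ciphertexts, and only the client can decrypt the answer.

    import math, random

    # Textbook Paillier with toy primes (real keys are thousands of bits).
    p, q = 104729, 104723
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because we use g = n + 1

    def encrypt(m: int) -> int:
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c: int) -> int:
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    # Server-side table, e.g. per-number caller-id records (plain ints here).
    database = [1234, 5678, 9012, 3456]

    # Client wants row 2 but must not reveal that: encrypt a one-hot selector.
    query = [encrypt(1 if i == 2 else 0) for i in range(len(database))]

    # Server computes Enc(sum_i query_i * db_i) = Enc(db[2]) from ciphertexts alone,
    # so it never learns which row was requested.
    response = 1
    for c, value in zip(query, database):
        response = (response * pow(c, value, n2)) % n2

    print(decrypt(response))  # 9012 -- recovered only by the client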


As a general principle, I think computers should execute commands that users issue, and then wait for the next command. That's it.

Computers should not be sneakily doing things in the background without my commanding them to do so. But if they insist that the only way they can work is by doing things in the background, then I expect the computer to at the very least obtain my consent before doing those things. And computers should definitely not be exfiltrating anything over to the network without my explicit command to do so. This shit world we are living in where your computer just does whatever the application developer wants it to do rather than what the user wants it to do has to come to an end!


Some iOS apps synchronize data with standard protocols (e.g. IMAP, WebDAV, CalDAV) to cloud or self-hosted services.


And that doesn’t help with internally stored data within apps, settings, which apps you have installed on what screen, passwords, etc


iOS supports local device backups.


[flagged]


Modern MacOS has that functionality included, no iTunes necessary.


Apple iTunes, iMazing (3rd party), Linux libimobiledevice (OSS).


I hate this type of lukewarm take.

"Ah, I see you care about privacy, but you own a phone! How hypocritical of you!"


You’re describing Matt Bors’ Mister Gotcha.

https://thenib.com/mister-gotcha/


If you care about your “privacy” and no external service providers having access to your data - that means you can’t use iCloud - at all, any messages service, any back up service, use Plex and your own hosted media, not use a search engine, etc.


Do you use a phone?


Yes. I also don’t use Plex, have my own file syncing service running, run my own email server, etc.

I also don’t run a private chat server that people log into - I’m like most of the iPhone and Android using world


Maybe lay off the sanctimonious attitude then.


[flagged]


So you spam whataboutism comments here because you just don't care?

We need less sarcasm, not more.


No what we need is for people to realize that no multi trillion dollar company is going to make life harder for 99.999% of their users because of a few outliers


How exactly is a new feature that is not advertised harder for you, or for anyone for that matter?

I bet most of the users behind those made-up numbers of yours have no idea that the feature exists.

A simple screen like they usually do with "What's new in iOS" could easily have let you enable it from the get-go, with the additional benefit that you would have been made aware of it existing.

This iOS 18.2 update had no such screen, I just updated.


Along with the dozens of other ways that Apple services are integrated into iOS?


> Along with the dozens of other ways that Apple services are integrated into iOS?

You're not making any sense.

The question I asked was

> How exactly is a new feature that is not advertised harder for you, or for anyone for that matter?


When your phone sends out a ping to search for cellular towers, real estate brokers collect all that information to track everywhere you go and which stores you visit.

Owning a phone is a privacy failure by default in the United States.


> When your phone sends out a ping to search for cellular towers, real estate brokers collect all that

Care to provide a pointer to what device they are using? I would absolutely get my real estate license for this.


You are being downvoted because you're so painfully correct. It's not an issue exclusive to the United States, but American intelligence leads the field far-and-away on both legal and extralegal surveillance. The compliance forced by US Government agencies certainly helps make data tracking inescapable for the average American.

Unfortunately, the knee-jerk reaction of many defense industry pundits (and VCs, for that matter) is that US intelligence is an unparalleled moral good, and the virtues of privacy aren't worth hamstringing our government's work. Many of these people will try to suppress comments like yours because it embarrasses Americans and American business by association. And I sympathize completely - I'm dumbfounded by the response from my government now that we know China is hacking our telecom records.


FWIW, SS7 had known flaws very long ago.

It's apparent it has been kept in place because of all of the value it provides to the Five Eyes.



