My opinion:
1. Every pedophile knows about the existence of this system, so I don't think it will be useful for fighting those monsters, except maybe marginally;
2. Is it even legal? If some lunatic stores material on his Apple hardware, isn't that an illegal search that's unusable in a court of law?
3. Child abuse is often used as a Trojan horse to introduce questionable practices. What if:
- the system is used to look for dissidents: for example, searching for anyone who has a photo of the Tiananmen Square protests on their PC;
- it's used for espionage: I have the hash of some documents of interest, so every PC with that kind of document becomes a valuable target;
- it's used for profiling people: you have computer virus samples on your PC -> security researcher/hacker.
I think the system is prone to all kinds of privacy abuse.
4. This could fall under the previous point, but because I think it's the final and real reason for this system's existence, I'll give it its own section: fighting piracy. I think one of the real reasons is to discourage the exchange of illegal multimedia material in order to enforce copyrights.
For these reasons, I think it's a bad idea. Let me know what you think.
5. The system can easily be abused by governments or malicious actors to frame innocent people. People merely suspected of keeping such images are de-facto punished and stripped of rights even without standing before a judge or getting a conviction.
This is my primary concern. It will become a weapon to destroy the lives of anyone who is targeted by someone with middling or better hacking skills. A sort of digital “swatting” that makes using apple products a no go for anyone with cyber-enemies (one can’t opt out of Apple ID and apply security updates).
Google has been scanning your entire account for kiddie porn for the past decade.
>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect's Gmail account
Once again, this scan takes place on _their_ servers on data that is stored on _their_ servers. It does not take place on the device itself, which is the case with this new Apple thing.
Once again, the notion that everything on the device is scanned under Apple's system was never true.
Only photos you attempt to upload to Apple's iCloud are scanned. If you turn off iCloud photos, NOTHING is scanned.
>Q: So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?
A: If users are not using iCloud Photos, NeuralHash will not run
> Once again, the notion that everything on the device is scanned under Apple's system was never true.
That isn't what I said.
Also, that's not why most people are so upset. The main reason is that Apple has now proven the capability exists, so they can now be more easily compelled by governments to scan for "extra things".
Prior to this, if a government asked Apple to scan someone's phone, Apple could respond with "we don't have that capability", and it would presumably be a tough legal battle to force a company to add a capability that doesn't exist.
This hurdle is now much lower. The effort has gone from "force Apple to design a new system for scanning phones" to "add these couple of hashes to the pre-existing database".
Also, expanding this from just iCloud upload candidates to the entire device is a very small leap now. I mean, the bad guys could just turn off iCloud, and we must think of the children...
Then you have Apple's "reassurance" that they won't comply with government requests to scan for additional things, which is completely moot considering Apple relies on a third party database and has absolutely no control or idea of what the hashes really are.
The notion that scanning cloud data on device is somehow worse than doing the same thing on server is deeply flawed.
If you have a false positive on device, nothing is sent to Apple's servers. It takes several (possibly false) positives at once to trigger a human review.
If you have a single false positive on server, that data is sitting there where it can be subpoenaed and abused.
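To make the threshold point concrete, here is a minimal sketch of the counting logic (hypothetical names and an assumed threshold value; the real design reportedly uses threshold secret sharing so that below-threshold vouchers stay unreadable, which this toy version ignores):

    // Toy model of "several positives before review", not Apple's actual code.
    struct SafetyVoucher {
        let matchesKnownImage: Bool   // in the real design this stays cryptographically hidden below threshold
    }

    let reviewThreshold = 30          // assumed value, for illustration only

    func shouldTriggerHumanReview(_ vouchers: [SafetyVoucher]) -> Bool {
        let matchCount = vouchers.filter { $0.matchesKnownImage }.count
        return matchCount >= reviewThreshold
    }

    // A single false positive never reaches human review:
    print(shouldTriggerHumanReview([SafetyVoucher(matchesKnownImage: true)]))   // prints "false"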
Also, recent history shows that Apple is willing to fight government demands to invade user privacy in court.
> Also, recent history shows that Apple is willing to fight government demands to invade user privacy in court.
I can only think of one instance where they did that (the San Bernardino shooter case), and the request was hugely overreaching (the FBI wanted them to compromise their software update signing services), and also they actually DID comply with giving the FBI access to their iCloud data -- just not the software update service.
This is a big part of the reason people are surprised and concerned about the scanning program, because it seems like a departure from what Apple has said and done about privacy of iPhone data for the last decade.
This fact is well known and changes nothing. The problem is that the system exists at all. The fine print WILL change - it always does, and that's also a well known fact.
Yes, that's a very reasonable assumption. I assume all information I give out will be shared beyond my control, unless the recipient promises in writing to protect it and would suffer proportionally if they broke that promise. In practice, this only happens when federal regulations apply (e.g., health care or banking).
If you want to rely on other people behaving a certain way in the future, either form a personal relationship or write up a contract.
It's completely within the realm of possibility that this could happen. A few bad quarters down the road and a leadership change might be all it takes.
Many of us are taking the perspective of decades long changes given our current trajectory.
If not in our time, it could be in our children's time. This is an extremely dangerous system.
It’s a difference in policy vs technical capability. Currently the policy is only scan when iCloud Photos is enabled, but the capability to scan at any/all times is just a policy change away.
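As a rough illustration of how thin that line is, here is a sketch (hypothetical names, not Apple's actual code): the matching capability ships on the device either way, and whether and where it runs comes down to a configuration check.

    // Sketch only: the hashing/matching capability exists regardless;
    // a single policy flag decides whether it runs, and over which photos.
    struct Photo { let pixels: [UInt8] }

    func neuralHashMatchesDatabase(_ photo: Photo) -> Bool {
        // stand-in for the on-device perceptual-hash lookup against the shipped database
        return false
    }

    func scanOutgoingPhotos(_ photos: [Photo], iCloudPhotosEnabled: Bool) {
        guard iCloudPhotosEnabled else { return }   // today's stated policy lives in this one check
        for photo in photos where neuralHashMatchesDatabase(photo) {
            // attach a safety voucher to the upload, etc.
        }
    }

    // Dropping the guard, or changing what feeds `photos`, would be a policy change,
    // not a new technical capability.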
No, it's a difference between scanning the files that users store in their respective clouds on-server or on-device.
Scanning on-device (where a single false positive cannot be subpoenaed and misused to incriminate their customers) is simply more private.
>Innocent man, 23, sues Arizona police for $1.5million after being arrested for murder and jailed for six days when Google's GPS tracker wrongly placed him at the scene of the 2018 crime
Apple's new system is scanning personal property that doesn't belong to them and isn't yet in their cloud.
Gmail files that get scanned are contained on Google's property, in their cloud, on their machines.
Entirely different context.
It's the difference between the USPS coming into my home without permission and going through my documents, records, mail - versus if I send mail through their system and they track it, scan the envelope, etc.
The iPhone used to be pretty obviously personal property, now Apple is saying that's clearly no longer going to be the case going forward.
Oh, I like this USPS analogy but I'll clean it up. Google photos and chat are like a USPS that only stores and transmits post cards. It's understood by the creator/sender that anyone who has access to them can read them. Apple here is a USPS that sends sealed envelopes. They (say they) can't read what's inside as it's sent or stored. With this change they will create the 'capability' to show up whenever you decide to send an envelope and read it before you seal it up for sending.
> Apple's new system is scanning personal property that doesn't belong to them and isn't yet in their cloud.
Apple's new system only scans photos you attempt to upload to their cloud.
Nothing else is scanned.
Scanning the files on server, the way Google and Microsoft do it, means that false positive data is lying around where it can be subpoenaed and used to incriminate innocent people.
>Innocent man, 23, sues Arizona police for $1.5million after being arrested for murder and jailed for six days when Google's GPS tracker wrongly placed him at the scene of the 2018 crime
>Apple's new system only scans photos you attempt to upload to their cloud.
And what if in the future they decide they need to scan more than images going to the cloud? What if there is some huge epidemic of child abuse or some other terrible thing and Apple decides they need to do more?
The capability has always been there. It is only now worded in a way that most people understand.
Speculation about them "doing something in secret" is as valid as before.
In reality, we can only be mad when they publicly make things worse in black-box systems, not about something that is a "policy change" away. Let's be mad when they actually change that policy.
I mean, we can split hairs over the words to use, but ultimately "immoral and unethical things are being done by big companies that hold all your stuff". The sentiment is the same.
What I'm getting at is that the things Google and Microsoft are doing are entirely irrelevant to the conversation at hand.
Apple is going to compromise your device's privacy in the name of child safety, and will - invariably - eventually cave to pressure to extend that capability well beyond it's originally well-meaning use case.
Stop bringing up what other companies are doing - it is, as I said, entirely irrelevant.
So what is your point then?
Is it that Apple's punch in the gut here, while bad/wrong, is beyond criticism or outrage because if you use Google, you'll get a slap to the face?
Or is it that Apple actually isn't doing anything wrong simply because there is some roughly analogous behavior in your view by other companies?
My point is that Google and Microsoft have been scanning everything in your account (including data like emails and the files you mirror to their cloud drive) and have been doing so for the past decade.
Apple has announced a plan to scan only those photos you upload to iCloud Photos, and nothing else.
Further, Apple's scans will occur on device where a single false positive cannot be misused to incriminate you by anyone who can get a subpoena, because Apple's servers won't hold any data showing something happened.
Google and Microsoft's systems are much more invasive and much less private.
I can't reply to or upvote ~stalkersyndrome's response to add my applause, but I'll do it here instead. He's right. It's inconvenient, and people don't like reading that, which is why it's downvoted to dead, but alas, this is the same reason people avoid getting counseling, because they haven't worked up the courage to be honest with themselves yet. We'll get there, I hope.
Perhaps we should also raise again the question of who controls one's own computer, because any abuse starts there.
Only full control over your own device can prevent abuses, especially when the device comes anywhere close to the definition of being personal. You should be able to install your own software on a personal device, including the OS and BIOS/firmware.
Not only would this be marginal, it also wouldn't necessarily catch the real “monsters”. Finding someone with old, already-known images doesn't necessarily mean they actually abuse children. I think about this in a similar way (not exactly) as I do with drugs: just because a person gets busted with drugs doesn't mean they are a drug dealer or a maker of drugs.
This is not to say there isn't more active, real-time material in these databases that, with enough searching, could make its way back to the perpetrator and maybe even to a victim. It just seems that that would be far more marginal, and that is generally what I'm concerned about when it comes to these issues. For me it's more important to protect children than it is to bust some weirdos for looking at the wrong porn (these can both be related, and I do understand that; I just think it's not as cut and dried as we believe it is), and further, if it keeps said weirdo from actually harming a child, then let them have it. We allow these databases to exist, presumably, for the same reason: the idea that we can prevent future victims.
> 2. Anyway, is that legal ? Even if some crazy store material on his Apple hardware isn't that illegal search non usable in law courts ?
Yes, it's considered legal. Apple reviews the content first. Courts say this means it is not an illegal government search. It's a search by a private party, who then manually decides to notify the government.
No, it's not. At least not here in Germany. By law, even police officers are not allowed to look at child porn. The only institution explicitly allowed to do so is the BSI.
The rest of the population implicitly incriminates themselves when they merely look at (not even own) child porn, including Apple's legal entity or employees.
See [1] for 184b Strafgesetzbuch
I'm trying to point out that with this action Apple bluntly decided to ignore a whole lot of countries and their federal laws, which is not something I would embrace - even when they had good intentions.
I think this is a bad move by Apple even if the point is to set up E2EE later. However, one thing that everyone seems to forget is that all these pictures were already being sent unencrypted to iCloud. ALL of the same issues already completely exist today, the photos were already being scanned, and we have heard no outcry. ALL of the same loopholes and unreasonable warrants can be used against you today with all of the unencrypted data they have on their servers right now.
The one thing that occurred to me is that this almost seems like a CYA move, Section 230 protection in disguise. There have been more discussions about Big Tech and 230, and this is one way to say "Look, we are compliant on our platform. Don't remove our protections or break us up, we are your friend!" It also shouldn't be too surprising given how Apple has behaved in China. They will only push back against the government up until the point it starts to affect profits.
> this is a black pattern of going down step-by-step.
This is very hard to argue. Functionality like spying on all your files is trivial to add, and technically we haven't really moved anywhere. "Now the technology is there" is not a valid argument, since it has always been there.
Scanning your files and sending some metadata is the feature that requires the least effort to build of everything Apple has released.
It might feel bad that your device now scans your to-be-uploaded cloud images, but iOS has never been yours. It is a very closed system, part of the Apple ecosystem. Only someone with access to the whole iOS source code knows what is actually happening in there. In the Apple ecosystem, all that matters in reality is the final result and what they say.
Since your device is not really yours, you should think of it as just using the Apple ecosystem, being part of it. If you don't want that, you should have switched to some Linux phone already.
You can speculate all day about what else it might do in secret in the future. Speculation about hidden features is as valid now as it was yesterday or will be tomorrow.
In reality, we can only really be worried when they publicly say something that actually makes the end result worse. That has not happened yet. Actually the opposite happened, but here we go.
We have been trusting Apple for quite some time, and they really haven't been caught doing anything other than what they've said, so what has changed?
I agree we should do so! That said, we should discourage behavior that is absolutely guaranteed to be abusive in the future.
We can call it speculation - that doesn't mean it's wrong. Power is the play, and companies will always be leveraged for the benefit of the powerful. This seems pretty indisputable to me.
I agree, and my first statement was that this is bad. My point was: why has no one been complaining about it already being very bad, only reacting with "I'm selling all my iDevices" when it gets worse?
Of course it would be possible to implement content search, profiling and reporting mechanisms for such content, but this seems to be a singularly bad platform for that sort of search.
The image profiles are part of the OS so there's no mechanism to deliver image profiles separately for different countries. Also when the threshold number of matching images is reached, the matches are reported to a manual reviewer at Apple not a government. It only checks images on upload to iCloud photo storage.
So of course each of these limitations of the system could be changed, but you'd really need to change all of them and at that point you've created a completely different system. There's no simple change to this system that would suddenly turn it into a snitch for e.g. China or Saudi Arabia.
I've seen exactly the same objections raised every time any kind of device content search has become mainstream. Back in the 90s it was virus checking (Do you trust the AV company? What if they were bribed by the content companies?), full device indexing and search (Do you trust the OS vendor? What if they're in league with the government?). I'm very surprised this didn't blow up when Apple implemented ubiquitous image text recognition. Maybe it did. AV and device indexing mechanisms, which are ubiquitous, seem like a far more vulnerable target for such requirements.
So I don't really buy the slippery slope argument. In theory any government could pass a law requiring any company operating in its jurisdiction to do anything, with an implementation suitable to that actual purpose. Of course this mechanism is motivated by laws in the US, so it's a perfect example of exactly that, and it's a completely new system, not a slippery slope subversion of an existing one. The real slippery slope here is legislative, not technical, and I think that should be far, far more concerning.
I do think the legal and moral questions about this mechanism are legitimate. I think it would make more sense for Apple to scan photos in their cloud storage on the cloud storage rather than on upload. I understand there are theoretical privacy benefits to users from this implementation but the optics of having user's devices snitch on them are all wrong.
>Back in the 90s it was virus checking (Do you trust the AV company? What if they were bribed by the content companies?), full device indexing and search (Do you trust the OS vendor? What if they're in league with the government?)
These are examples of companies choosing to do something as a selling point of their software as a benefit to the end user, and people worrying that it could aid the government down the line if they change their mind.
Apple's content review change is explicitly FOR reporting people to police in a way that can be expanded beyond its currently set purpose (child porn) later.
>I'm very surprised this didn't blow up when Apple implemented ubiquitous image text recognition.
I'm personally not a fan of that stuff anyway, but personally if it's only my local device I don't tend to care about image recognition, it's only when it involves communicating information from MY hardware to THEIR servers that I get antsy.
>Apple's content review change is explicitly FOR reporting people to police in a way that can be expanded beyond its currently set purpose (child porn) later.
I think it would be very hard to expand this beyond its currently intended purpose, for the reasons I've given. It's terrible for identifying dissidents because it only catches them if they upload to iCloud servers. Dissidents are much more likely to be tech savvy than random child molesters. The reports have to go through Apple, and don't go directly to the cops. Also it's a global image profile list, so it's not possible to keep country-specific updates secret.
An effective surveillance mechanism would need to change all of these.
>It's terrible for identifying dissidents because it only catches them if they upload to iCloud servers.
This is a configuration change. Without knowing the implementation, I'd bet a lunch that, for the time-being, the reason this thing is executed only upon upload to iCloud is because there's some simple business logic buried in there telling it to do so.
>Dissidents are much more likely to be tech savvy than random child molesters.
This is a curious argument. You didn't explain why you think this might be. What is it about a dissident that makes him or her more savvy than some random child molester?
>An effctive surveillance mechanism would need to change all of these.
If true, the obstacles you outlined are trivial to overcome.
No it isn't: the check is built into the upload client, so they'd have to implement an on-device storage scanning mechanism. That's a different type of system implemented in a different kind of service.
Not that doing that is hard at all, it's not rocket science and they already have full-system indexing and search, but that's also why this isn't a significant step down any kind of technical slippery slope. The problem here is legislative, not technical.
Apple should just scan the pictures that are in iCloud (their servers). They just assumed that if you have the iCloud option enabled on your device that it gave them the right to do the scan on your phone/computer.
I want to also point out that A/V companies never said they were going to scan for child abuse images on your computer and report you if they found any.
> Apple should just scan the pictures that are in iCloud (their servers). They just assumed that if you have the iCloud option enabled on your device that it gave them the right to do the scan on your phone/computer.
The end result is the same. The difference is that now Apple has very limited access to your images. In closed systems, trust is all you have. When you step into the Apple ecosystem, you are extending a lot of trust.
> I want to also point out that A/V companies never said they were going to scan for child abuse images on your computer and report you if they found any.
Why would they say so, if it's perfectly legal to do anyway? They literally scan every file, so there's no need to mention anything specific that could only lead to negative PR.
>The image profiles are part of the OS so there's no mechanism to deliver image profiles separately for different countries
Haven't Apple already said it WILL be country specific?
>Apple’s new feature for detection of Child Sexual Abuse Material (CSAM) content in iCloud Photos will launch first in the United States, as 9to5Mac reported yesterday. Apple confirmed today, however, that any expansion outside of the United States will occur on a country-by-country basis depending on local laws and regulations.
I think they'd need to be country-aware at least, otherwise the FBI or whoever will get reports for all people on earth when they presumably don't need them for anyone outside the US?
Reporting is country-specific and US-only, yes, but the profiles are delivered baked into the OS. I suspect this is so that pedophiles can't mail-order a phone from Canada and bypass the system.
I think the profiles will need to be country-specific too. What counts as CSAM in some places doesn't in others (here in the UK we have a ban on cartoons, but bath pics are allowed, for instance).
This is something Apple have been pressed on a lot. So far (I'd be happy to be corrected) they've only said "whatever local law permits". That sounds ok, till you realise Saudi will want gays reported and China won't like any Winnie the Pooh pics...
China already operates their own iCloud storage so this is irrelevant to them.
Apple doesn't have any iCloud data centres in Saudi, so Saudi can't pass laws about what is or isn't stored in them.
Look, the way this works and how it's implemented matters. It's stunning to me how many people are thoroughly confused and jump to unwarranted conclusions about how this actually works and what that means.
I don't think your Saudi or China points grasp the nature of this tech. This is about checking what users have on their devices BEFORE it is uploaded to iCloud.
So both China and the Saudis (and plenty of other governments) will be very interested, as right now it takes a lot more effort for them to access phone contents (there certainly aren't mass surveillance programs like this for handsets).
I weirdly agree with your last paragraph, but I think we disagree about the details. I can't find any evidence for your assessment that this can only be used against one (US) set of image hashes, or that shitty regimes won't be allowed to abuse it.
If Apple came out and proved that, I might not be happy but my worst fears would be gone. Their silence is sort of deafening at this point...
An Apple recruiter recently reached out to me. I am in a fortunate position to turn down opportunities, so I made sure to explain that I am not interested in working for a company that is at the forefront of enabling further infringement on people's privacy. If you are able to push back, do it in any small way that you can.
In many ways Apple is also the world leader on consumer privacy, pushing for changes when the rest of the industry is walking in the opposite direction. Paying with Apple Pay makes you safer because it gives out minimal payment information; the Target fiasco would've been avoided.
Sign in with Apple allows users to provide minimal information in signing up for accounts; the idea that casual users should know how to setup email aliases is a joke. Apple private relay is the closest to getting grandma to use TOR. Apple is working on stopping pixel tracking in email.
Apple is also leading on the story of user permissions, which is a broken model where you blame users for accepting all the snooping in their lives, for not reading the TOS, and for their failure to negotiate against Walmart.
As always when talking about security and privacy, you need to understand the threat model.
Apple protects users from some threats while also becoming itself the biggest threat to users. And this is exactly what Apple wants. This is how you use Stockholm syndrome to entrench a feudal system.
The relationship is not 3-way as Apple wants users to believe (Apple the defender, users the victim, third-parties the aggressor). The map of the territory is a lot more complex.
Framing relationships in this "triangle" is not new. As parent says, reality is a lot more complex, and this kind of frame of mind can be very detrimental.
In what case is the software not the biggest threat, though? Software is completely malleable. How many people update their systems by downloading the source diffs of all the libraries and apps, reviewing them line by line, and compiling locally? For the vast majority, the underlying software is considered a trusted system out of necessity.
The surprising thing to me is that there have been so few critical reviews on the system as it exists today - they are 99.9% "what if" scenarios.
To put it a different way, the possibility has always existed that your trust could turn out to suddenly be misplaced in a single, near-instantaneous policy change. So most of the discussions are actually about reevaluating whether they should consider Apple devices to be a trusted system or not based on this policy change, and trying to predict future policy changes based on it.
The reality is that Apple's spat with the FBI was possible because the US legal system allows it. Other countries can demand anything they want, and Apple has to negotiate with them or decide whether they have to leave that market. The scanning is a US-only feature to comply with US regulations.
If say China adopts a policy Apple does not want to abide by, their choice is exclusively to leave the Chinese market and to potentially adapt to no Chinese manufacturing or even Chinese suppliers. But this is no more or less true than last week.
Apple also dropped the idea of E2EE iCloud backups at the FBI's behest. They've also handed over iCloud in China to GCBD creating an incredible apparatus for the surveillance state.
And now we are supposed to believe they can resist adding a hash to a list if the government asks? Are they serious?
They never said they will resist adding a hash. They just implied it with weasel words. They also said they will accept hashes from “other child protection organizations.” Which clearly means each country can define, with its own horrendous laws, what they think children need to be protected from… content showing children being put in unislamic situations (Saudi Arabia), content showing children living happily in a family with gay parents (Russia), content showing a child being abused by being forced to hold a Taiwanese flag (China), etc.
Apple isn't the Dutch East India Company. It is better to ask the citizens of the American democracy to keep their government accountable than it is to ask Apple to be so powerful that it can shrug off national governments.
You don't want CSAM scanning in the USA? Then ask your national government to change its laws requiring companies to be so vigilant against CSAM. Or we can ask that Apple become so powerful that they can shrug off the government.
That Apple cannot resist the government does not negate the observation that Apple is also a leader in consumer privacy.
The US government is NOT forcing Apple to scan customer data, in fact the entire legal framework of this scanning in the US critically depends on Apple performing the scanning without Government coercion. If the Government forced the scanning or even substantially incentivized it, then it would require a warrant per the fourth amendment.
> The US government is NOT forcing Apple to scan customer data
Unfortunately we do not know that for certain. Companies were compelled to sign up to PRISM against their will. Yahoo in particular tried to resist and were threatened with financial destruction [1]. The Feds can essentially levy any sum of daily fines on a company like Yahoo or Apple any time they see fit, with a rubber stamp from the FISA court; they effectively demonstrated that with their confrontation with Yahoo. And it clearly terrified Yahoo.
We have no idea if what's being put into place by Apple is the start of a new program by the powers that be and Apple has little choice but to comply realistically, or be hammered financially (or worse, the Feds might just get dirty and target executives personally). If they were working on PRISM 3.0 and attempting to implement it, we would never know it at this juncture.
It's worth being suspicious of what's going on given the one certainty is we know very clearly what the authorities want, what they'd like to see happen, and that they never stop trying to prod things in that direction. They're always up to something shady, always looking for ways to advance the surveillance state. The Biden Admin years will see that effort turbo charged once again, as with the Bush & Obama years. Whatever they're up to right now (again, they're always working on new programs like PRISM, always), you can be sure it's big, likely illegal and a gross violation of human rights.
If they were forced the searches would be an unlawful violation of the constitution. Various tech companies have repeatedly testified under oath that they are performing the searches of customer data of their own free will and for their own benefit.
I wouldn't argue that it's an impossibility-- but if true it would be a shocking revelation that would result in hundreds or even thousands of cases being overturned once it was exposed. And if it were true it shouldn't improve our opinion of Apple's actions in the slightest: instead of being just an unethical invasion of privacy for commercial gain, they'd instead be complicit in a secret conspiracy to illegally violate the rights of hundreds of millions of Americans.
As a long-time Apple user and developer (since the black-and-white Apple II), I have witnessed the rise/fall/rise again/and now fall again of Apple. My future computers WILL be Linux/FreeBSD for all important stuff.
Well done, I'm with you. I've been doing similar in the games industry for years. If we, the actual makers, stand against immoral action, it will build both pressure and incentive for alternative ways of doing business.
Also, after you're hired, a group of Apple employees can sign a petition saying something you wrote 10 years ago--from which they saw an excerpt--bothers them....and Apple will fire you.
> I am not interested in working for a company that is at the forefront of enabling further infringement on people's privacy
Conducting scans on device instead of on server is your idea of infringement of privacy?
Apple's system keeps everything off their servers until there is an instance where many images on device match known examples of child porn and a human review is triggered.
Google's system scans everything on server, so a single false positive is open to misuse by anyone who can get a subpoena.
We've seen Google data misused to persecute the innocent before.
>Innocent man, 23, sues Arizona police for $1.5million after being arrested for murder and jailed for six days when Google's GPS tracker wrongly placed him at the scene of the 2018 crime
>Conducting scans on device instead of on server is your idea of infringement of privacy?
Why are you asking if the poster still beats their wife?
(More specifically, you're pre-supposing scanning must happen, which by itself is a highly debatable assertion)
Your point with Google is absolutely sound, but you seem to stop short of actually accepting that actual privacy (no peeking damnit) is dead on arrival. This is a case of rhetorical stealth goalpost moving whether you intended that or not.
No, I'm relaying the fact that scanning does happen, and has been happening for the past decade.
>The system that scans cloud drives for illegal images was created by Microsoft and Dartmouth College and donated to NCMEC. The organization creates signatures of the worst known images of child pornography, approximately 16,000 files at present. These file signatures are given to service providers who then try to match them to user files in order to prevent further distribution of the images themselves, a Microsoft spokesperson told NBC News. (Microsoft implemented image-matching technology in its own services, such as Bing and SkyDrive.)
>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect's Gmail account
Apple refused to implement this until they found a more private method to handle things.
Only photos you upload to iCloud are scanned and nothing happens unless multiple images match known examples of kiddie porn. In that case, a human review is triggered to make sure you didn't just have several false positives at once.
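For contrast, the server-side approach described in the Microsoft/NCMEC quote above is conceptually just a per-file signature lookup against a database of known images. A minimal sketch (hypothetical names; real systems like PhotoDNA use a perceptual hash rather than the exact-match digest shown here):

    import Foundation
    import CryptoKit

    // Placeholder: the real signature list comes from NCMEC and is not public.
    func loadSignatureDatabase() -> Set<String> {
        return []
    }

    let knownSignatures = loadSignatureDatabase()

    // Server-side check, run against every file already sitting in the provider's cloud.
    func fileMatchesKnownImage(_ fileData: Data) -> Bool {
        let digest = SHA256.hash(data: fileData)
        let signature = digest.map { String(format: "%02x", $0) }.joined()
        return knownSignatures.contains(signature)
    }

With this model, every match (including a single false positive) exists as data on the provider's servers, which is the subpoena concern raised above.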
>Conducting scans on device instead of on server is your idea of infringement of privacy?
It's an infringement on my right to freedom of speech. Client-side scanning merely opens the door for my device to censor me from sending any message of my choosing and impacts my ability to freely communicate. What is today child abuse, tomorrow is health information and further descends to political and religious memes, or whatever other content is deemed problematic.
> Client-side scanning merely opens the door for my device to censor me from sending any message of my choosing and impacts my ability to freely communicate.
Nonsense.
Only photos you attempt to upload to Apple's iCloud are scanned. If you don't like it, turn off iCloud photos.
>Q: So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?
A: If users are not using iCloud Photos, NeuralHash will not run
I stand behind my comments for any type of client-side scanning.
There are two upcoming changes from Apple that are often conflated. First, indiscriminate server-side scanning of photos in iCloud against a non-public source database. Second, client-side scanning of messages for child accounts looking for nudity.
Again, client-side scanning is part of the changes that Apple is implementing and I'm projecting on how the terms and conditions of this client-side behavior can and almost certainly will change over the coming years and iOS versions. It's a slow yet accelerating descent to hell.
I don't want to be that guy, but there were 300 more people lining up for this job.
Nobody except a tiny group of nerdy guys (including myself, ofc) is against this Apple CSAM move.
Just ask your parents or your non-tech friends if it's "ok" to scan people's phones to find those "bad pedophiles" in order to jail them up for the rest of their lives. You will be surprised how much support Apple's initiative has in the broad public.
And that's why Apple made this move. They don't really care for the 3% of people who we belong to. They do it because they know they will have the public and political support.
> Just ask your parents or your non-tech friends if it's "ok" to scan people's phones to find those "bad pedophiles" in order to jail them up for the rest of their lives. You will be surprised how much support Apple's initiative has in the broad public.
My Dad is an old teacher today and was formerly a farmer.
My view is he clearly understands these issues and has done since I was a teenager sometime in the last millennium, when I followed him around the farm and we talked about stuff.
Maybe your parents are like what you describe, but don't underestimate other people's parents. They might not agree immediately, but if one is careful many actually aren't unreasonable.
Also everyone: stop this defeatist attitude. Instead of asking leading questions, talk about it calmly and politely.
Just explain that once this system is in place it will be used for anything, not just photos (or otherwise bad guys could just zip the files). And when everything is scanned some people will add terrorist material (i.e. history and chemistry books), other will add extremist material (religious writings), blasphemous material (Christian or Atheist teachings in Saudi Arabia), and other illegal content (Winnie the Pooh, man against tank etc in China).
In the paragraph above there should be something to make everyone from Atheists through Christians, Muslims, nerds, art lovers and Winnie the Pooh fans see why this is a bad idea.
> some people will add terrorist material (i.e. history and chemistry books), other will add extremist material (religious writings), blasphemous material (Christian or Atheist teachings in Saudi Arabia), and other illegal content
Apple doesn't seem to be doing this covertly, so what is wrong with blaming them when they actually do the above things? This is a genuine question, because right now I see no issue with them automatically searching for CP through my iPhone.
In theory, Apple could silently push the above updates without any such prior practice. So why did this become an issue only now?
You know how in history, occasionally the good old king would die and his crazy nephew would take over and burn all the peasants? This was possible because the king had absolute power.
Now, you might argue that this absolute power in this case is being used for good - but given the brush the US just had with accidental Nero, it’s worth being wary of how tools and powers might not only be used by current powers, but by future ones too.
Exactly. If you don't want weapons of mass destruction to exist, don't create them and definitely don't tell anyone (too late).
If you don't want tools of mass oppression to exist, don't create them. (We are here now.)
The fight against Japan in 1945 was important, as is the fight against child abuse today.
But we will have to live with the choices we make, in the short run as I wrote above, and also in the long run when a crazy president is elected, as you write.
I actually trust local police and courts. But I don't blindly trust future police, future courts and future politicians.
And when it comes to multi national companies I trust them to maximize shareholder value, even if that means doing what China or Saudi Arabia wishes.
As a sibling commenter noted, we are long past that. I see the points all of you are making in this subthread and I agree, but this doesn't answer my last question. Tools of mass everything have already been here for more than a decade, ready to deploy and use. And when these are used to do an actually good thing (stopping dickpics to minors), everyone wakes up and blames them for a possibility that could always have been deployed overnight without any prior notice.
> Tools of mass everything are already here for more than a decade, ready to deploy and use.
You are forgiven if you have missed it but in the wake of Snowden Google and others have hardened their systems massively.
Signal, Matrix and others are actually making it hard to do dragnet surveillance.
> And when these are used to do an actually good thing (stopping dickpics to minors), everyone wakes up and blames them for the possibility that could always be deployed overnight without any prior notice.
Because boundaries have been overstepped again. This is a constant battle that we software people have with authorities :-)
There has been an informal truce that they leave our devices alone and we accept that they scan the cloud.
Now things are about to change and we'll respond. We've won before and I think we can do it again.
PS: There are always good reasons.
PPS: We won the last big one: Cryptography software was "munitions" and couldn't be exported, until someone took it upon themselves to print it as a book, ship it to Europe and let cryptography people here scan it.
So according to the argument up front, the terrorists won, and I guess we should have a lot of problems now, but we don't.
If you've never seen a slippery slope in action, ask someone else if they have. It always starts with something everyone can agree on. It's the inevitable slippage over time as the population replaces itself with people who aren't intrinsically cognizant of the "before" state, and the implicit normalization of deviance that represents. We may only live for about 100 years, but I challenge you to look at the size of the United States Code and what has been specifically carved out as illegal, or what aberrancies have been normalized, in just 200 years.
If you don't want the atmosphere full of toxic corrosive oxygen, don't breathe it out during photosynthesis. (we were here 2.5 billion years ago, worked great for methanogens)
If you don't want to live in an oppressive, stratified, sedentary society, don't invent agriculture. (we were here ~a dozen millennia ago, worked great for hunter-gatherers)
I used to read the terms when I was younger and even more stupid than today.
I fully expect someone else does it now and I even think there exist GitHub repos or some SaaS or something (some tldr for eulas?)
(Today I more or less consistently don't read them because 1. as a European they aren't valid if they go beyond what European law allows 2. nobody can be expected to read those anyway, and if I admit to reading them I just make my life harder.)
> ... what is wrong with blaming them when they actually do above things?
It is unfortunate because although the consequences are seismic the actual problem is subtle and abstract; and difficult to get people excited over. Basically, civilisations that protect the weak against the strong (and in this case, the strongest party is by far the government and the police) are more prosperous. The more the strong are empowered to act against the weak the worse the actual outcome gets. Although not on any one easy axis to measure.
I suspect part of it is that every political movement hinges on a small network of people organising it. These systems are fantastic plausible-deniability screens for powerful people to disrupt and destroy those networks to preserve the status quo. Like, for example, how China tries to operate.
You can see signs of similar systems developing in the US. Note that Trump and then Biden were both the targets of official investigations (Trump-Russia, Biden's son & Ukraine). That isn't going to go away; it'll probably be a long time before we see a president who isn't being investigated for something. The tools that people like Apple are building will be drawn into the struggle, and not to promote truth or fairness but to destroy their support networks if they aren't friends of the Tim Cooks and Susan Wojcickis of the world. And make no mistake, powerful people aren't looking after your interests because you like the companies they run.
Plus on the way through they are going to be used to target minorities. That part is just sort of traditional, though incidental. Like when they decide to search people's phones for marijuana use but not cocaine and it turns out different racial groups use different drugs.
The only defence is blanket bans on activity that could be used to target people.
> once this system is in place it will be used for anything, not just photos
The system seems designed to make it hard to use it for anything else. How will a hash driven by a visual perception based neural net be used on ZIP files? How can you add ZIP files to your iCloud Photo Library?
Sure, Apple could possibly do any number of bad and worse things. It’s a matter of trust that every time we update our iPhones that the update doesn’t include a ZIP file scanner or a blasphemy-scanner. This has always been the case, even before the introduction of the CSAM voucher mechanism.
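To spell out why the hash type matters: a NeuralHash-style function is defined over decoded image pixels, so it has nothing meaningful to say about an arbitrary byte blob like a ZIP archive. A sketch with hypothetical types:

    // Sketch only: a perceptual hash operates on decoded pixels, not raw bytes,
    // so repurposing it for non-image files means building a different system
    // (different hash, different database, different matching pipeline).
    struct DecodedImage {
        let width: Int
        let height: Int
        let pixels: [UInt8]
    }

    func perceptualHash(of image: DecodedImage) -> UInt64 {
        // stand-in for a neural-net-derived hash that stays stable under
        // resizing, recompression, small crops, etc.
        return UInt64(image.pixels.count)
    }

    let archiveBytes: [UInt8] = [0x50, 0x4B, 0x03, 0x04]   // "PK" ZIP header
    // perceptualHash(of: archiveBytes)   // doesn't type-check: a ZIP isn't an image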
I would probably take a 10 year version of that bet.
I would probably also take a more broad version of that bet, if we agreed upon a good definition of "abuse".
10 years is tricky though, because this topic has a political angle about it.
I look at this stuff as a political move by Apple as much as anything else. There's a lot of political pressure around encryption, and the "think of the children" angle is very compelling for a lot of people. This CSAM voucher system is cleverly designed to handle that concern without compromising privacy or security for anyone who isn't uploading multiple previously-known CSAM images to their iCloud Photo Library.
How this political situation will unfold over the next 10 years is hard to say. I hope for the best. But it's important for threats to privacy and security to be challenged.
I have wished for more legitimate and valid criticism of this system. Almost every criticism that I've seen is based on plain misunderstandings of how the system works, which isn't helpful.
> once this system is in place it will be used for anything, not just photos
There is a system to make a 'backup' of the entire device to a remote server, and it has been in place on every iPhone since October 12, 2011. The entire device is covered: logs, calls, messages, files, photos etc. Pandora's box has been open for 10 years.
It is called iCloud backup. If they want to repurpose an existing function, against the express permission of the user, to exfiltrate their data and use it against them, why not just use that instead?
I think you oversimplify this by a lot. No, Apple reputation won't be severely damaged by this move immediately. But I do believe that those "nerdy guys" did a lot to push the Apple brand, and a big part of that push was due to security and privacy. Until recently Apple was always the "privacy brand" and it was hard to argue against it without going the full FSF route of argumentation.
This is no longer the case and I'm sure this will deal some damage over time, even if it only starts with the "screeching voices" of the (nerdy) minority. Maybe not directly to their revenue, but certainly to their reputation. Nothing wrong with shaving off a bit of the prestige of working at Apple ;)
> I don't want to be that guy, but there were 300 more people lining up for this job.
This is the case for many jobs that don't come close to the holy "working at Apple".
> But I do believe that those "nerdy guys" did a lot to push the Apple brand
We sure did. We're also ignoring all the celebrities that helped push the Apple brand (remember the iPod and its terrible but iconic white headphones?).
Popular culture has a much stronger influence on Apple's standing with non-nerds than nerds do. We like to think we're important, and that people value our opinions, but increasingly that's not true. As the differences between the options diminish to 'probably bad for the world long term' (big tech) and 'probably bad for you today' (open source / pinephone / etc, they're just not usable enough yet), the value of our opinions drops, because there's no real choice between bad and badder.
I was the reason my family switched from Windows and Linux to OS X back when it was popular among hackers. They had seen the ads with celebrities for years and it didn't move them the way I did. I thought we finally had a Unix-like OS that we could all standardize on without my mom freaking out at GNOME, and for a time that was true.
They remind me every time I bring up stuff like this that I was the one who pushed them to use Apple. In retrospect I wish I had taken FOSS more seriously.
Not to mention, the "nerdy guys" play a big role in deciding what hardware your company buys, what OS and applications are installed, etc. Friends and family reach out to the nerdy guys for recommendations, etc. The impact is slow but massive.
You can't seriously be recommending a Librem 5 for mom and grandpa? That's absurd. The thing barely functions.
As the parent commenter noted, Apple was really one of the only reasonable recommendations from privacy-oriented nerdy guys to their friends and family, if they didn't want to go the out-of-touch-with-reality route by recommending the phones you listed.
As I mentioned elsewhere in this discussion, when talking about privacy and security you need to talk about threat models. Privacy-oriented is not a single direction.
Those recommending iPhones to parents have a different threat model than those choosing FLOSS devices.
If you think that, you're completely wrong. I have been asked about it by four non-technical people so far, after it made national news in the UK. There is a lot of anti-surveillance sentiment here and it's appearing in the general public regularly.
I regularly go out with groups of random people on Meetup with no shared technical interest as well, and I'm surprised at how much anti-tracking and anti-surveillance sentiment there is. It got to the point that, out of 25 people on a trip out, no one used NHS Track and Trace, because they don't trust it or don't own a smartphone. This is across the 20-50 age group.
> but there were 300 more people lining up for this job
let them take it then. I try to minimize the blood on my hands.
>Just ask your parents or your non-tech friends
My parents were unhappy with it - they're non-technical and not particularly concerned with privacy. I don't think they'll switch, but they did ask how to mitigate it. I'm currently scrambling for a (friendly) alternative to iCloud Photos.
> They don't really care for the 3% of people who we belong to
Welcome to cyberpunk dystopia! Grab a devterm by clockwork (no affiliation), and log in, cowboy.
I agree with you, the general public doesn't give a shit. There will be headlines for a while, some people will change their phones, and that'll be it. The biggest of these movements, I think, is the "de-googling" one. There's a myriad of articles, subreddits, guides, even websites, listing alternatives. And look what happened to Google. Nothing.
When the alternative to Apple's surveillance is to smash the phone against a wall and buy something that's much less convenient, suddenly surveillance is not that big of a problem. And this is very important to note, because many of the world's powerful entities are moving in this direction.
But it doesn't happen over night. The cracks are forming though.
Duckduckgo.com is now at 93,533,476 searches daily.
My non-technical brother just purchased a 3 year Fastmail account and switched from Chrome to Firefox. For added effect, he bought a subscription to Bitwarden. I didn't push him to do any of this, I just told him what I'm using.
His wife refused to put an internet-enabled webcam in their new baby's room, citing security concerns.
> You will be surprised how much support Apple's initiative has in the broad public.
I wouldn't be, but that's not the issue: the broad public is gullible; the overwhelming majority probably still believes that Iraq had WMDs before the invasion.
> I don't want to be that guy, but there were 300 more people lining up for this job.
All of whom could decide to stand up for individual rights, but won't, with excuses similar to the one you formulated.
I know in US culture some see it as a strength to be selfish, yet they complain about the society and the politics this kind of mentality necessarily leads to. If all the others are selfish, why should I be the sucker who pays for having principles?
Because suckers with principles shape a society until they don't.
They were against Hitler, the USSR (inside the USSR), unlimited kings' powers, religious fanaticism, witch hunting, etc. ...
in the beginning.
Today it's surveillance, and attempts by Apple to legalize such abuses using some BS cover story intended to create an emotional response and in this way fog the real issue: spyware engine installation/legalization.
I am beginning to wonder if this was the plan all along. Back in 2013 via the Snowden leaks, it was revealed Apple was associated with the NSA domestic surveillance program, PRISM. It appears they (NSA and Apple, et al) pulled out due to the level of negative PR.
After 8 years, the intelligence community and tech companies figured out they could sell their surveillance through a thinly veiled effort to “protect X group” (in this case it was children).
Sorry, but I do not believe that is what the leak revealed.
There was a slide that indicated that data from Apple and other companies was now part of the PRISM program.
I am not trying to deny or refute Snowden's whistleblowing. I think it is highly likely that PRISM exists. What I dispute are the speculations that the companies listed are complicit.
The 2012 date is quite suspicious - it is precisely the same year that a new Apple datacenter in Prineville came online. Facebook also has a datacenter. Literally next door. Facebook also appears on those slides. I am not sure who else is also now in the area.
I wonder where all of the network cables go?
I personally think that PRISM works by externally intercepting data communication lines running to these facilities. Similar to the rumors that international comms links have been tapped. The companies themselves have not participated, but the data path has been compromised.
The NSA has previously tapped lines (AT&T), but they made the mistake of doing it inside the AT&T building. Google "Room 641A at 611 Folsom Street, SF". That is where "beam splitting" was done. This eventually leaked out. The NSA isn't stupid, I doubt they wanted to repeat that sort of discovery. The best way to keep something from being discovered is to not let people know. This is why I think it is believable and likely that the companies listed on the slides have no idea what has been done.
I will also note that PRISM and "beam splitting" are a rather cosy coincidence.
I think it is most likely that PRISM is implemented without the knowledge of anyone except the NSA and in Prineville there is some "diversion" of network cabling to a private facility that is tapping the lines.
> I personally think that PRISM works by externally intercepting data communication lines running to these facilities. Similar to the rumors that international comms links have been tapped. The companies themselves have not participated, but the data path has been compromised.
That wouldn't work without the company being at least passively complicit. Links between datacenters are encrypted. If you want even basic PCI-DSS compliance then links between racks must be encrypted (and a rack that uses unencrypted links must be physically secured). And properly implemented TLS or equivalent (which is table stakes for a company that takes this stuff at all seriously) can't be broken by the NSA directly (and if it could be then everything would be hopeless). Thus the MUSCULAR programme where the NSA put their own equipment in Google's datacenters - that's really the only way you can do it.
Remember how the legal regime in the US works with National Security Letters. Companies can be, and are, required to install these backdoors and required to keep their existence, and the existence of the letter itself, secret. Of course Google, Apple, Facebook, every other company with a significant US presence is in receipt of one of those letters and has installed backdoors - the NSA aren't stupid, what else would those laws and their funding be for?
PCI-DSS does not mandate encryption between racks or datacenters; maybe your own PCI-compliant policy does. I’ve worked in PCI-DSS environments (one of which was tier 1 with on-site cardholder data) and we didn’t need to have encryption between racks.
Site-to-site VPNs are common for smaller companies too, and those are encrypted, but the thing with encryption is that there are physical limits to throughput.
For a standard CPU I think it was about 3.5 Gbps in 2018; if you want to go much higher (like 9 Gbps) then you need special hardware offloading, which is expensive.
What is comparatively cheap is laying your own fibre cables.
Then it’s “basically” secure and you can have a single cable carrying 100 Gbps over a mile.
This is what google used to do, I suspect this is what Apple used to do- this is what many people do.
Google’s solution does not involve site-to-site VPNs; Google’s solution was to encrypt all internal network traffic at the endpoints, so the lines do not get implicitly encrypted just because traffic goes over that path, the way a VPN tunnel would.
This thinking is based on trusting "encrypted" links. Did you build the hardware that drives these links? Did you audit the Verilog or code that operates this hardware?
I know of at least one way to implement a "secure" TLS product that you could purchase and deploy in your datacenter that would leak all of the keying material needed to compromise every data connection to the NSA. You would be 100% in compliance with all technical requirements, but your data would be utterly transparent. You would not be able to detect this using an internal or external audit.
Did you purchase your rack-to-rack equipment from the equivalently Trojaned "Solar Winds" vendor? The "Solar Winds" event was a "commercially" botched exploit.
Sorry, NSL(s) do not scale. It is an ever expanding "circle of trust".
Containing secrets is only effective if they are only shared within "your shared culture" and your culture is very stable -- nobody leaves because of a difference of opinion.
>That wouldn't work without the company being at least passively complicit. Links between datacenters are encrypted.
They aren't always. In fact the Snowden leaks were the actual event that got many of these companies to do just that.
You mentioned MUSCULAR, but that was the revelation that the DC-to-DC connections were not in fact encrypted. I believe that program consisted of taps on the DC connections, since SSL was added and then removed at the front end, leaving the replication traffic in the clear. Google seemed to be relying on the physical security of those links and on them not being on some shared infra. [1]
WARNING: the link below has classified info from the Snowden leaks. If you have a security clearance, don't click it.
This can be entirely explained if the NSA had already performed a "solar winds" supply chain attack on the vendor that supplied the TLS encrypt / decrypt endpoints. Is the vendor of that hardware known or discoverable?
Google would have no idea the traffic could be intercepted. The NSA could use the smiley face, perhaps with a nudge and a wink, to mark them as a "supplier of data" on the slides.
They didn't pull out. Apple discloses over 30,000 customers' data each year without a warrant under PRISM (aka FISA 702) as disclosed in their own transparency report (listed under "FISA orders").
PRISM is just the internal NSA name for it. It continues unabated.
FISA orders are written by a judge. Only judges can write these; this is the literal definition of a warrant. Warrants require specifics - person X, person Y. These are enumerable. There is paperwork.
PRISM, based on the data available, is all about consuming data WITHOUT a warrant -- vacuuming up data associated with identities that are not subject to ANY court order. That violates laws and possibly (USA) constitutional rights in quite a few ways. PRISM likely exists.
I ask "sneak" to confirm their assertion that "PRISM == FISA orders" is true. Please present this "evidence" and the evidence of the connection. If you cannot, you are, by default, distributing misinformation or bad logic, or at worst trying to mislead.
(my naive searching suggests that "sneak" is definitely not in a position to make these claims)
Judges can write lots of orders but that doesn't make them search warrants which are defined by the US constitution as requiring probable cause. FISA court orders are not search warrants.
FISA Amendments Act (FAA) section 702 is the legal basis claimed by the NSA in a secret interpretation by the FISA court as the basis for PRISM targeted collection without search warrants, including US persons/citizens.
I am neither. A similar exchange with sneak has happened previously.
It is a frustrating exchange.
The words that have been used attempt to tie two controversial topics together: PRISM and FISA. The logic then seems to be that because companies can now report on FISA orders, this means they also willingly participated in PRISM.
What has been said seems to ignore that the FISA reporting by companies shows the number of identities that data has been provided for. PRISM on the other hand looks like a program to collect as much data as possible, regardless of identity.
At this point it is going to just be agree to disagree.
Let me tell you my perspective as a "screeching minority" long-time Apple user. You may be right. We, as professionals who evangelized a lot of people to Apple, have no power or influence over the core target market of today's Apple. Yep.
But I can assure you that I personally, as will my colleagues, will do everything in our power to hurt Apple's public image and brand, to give real information to our clients, friends and families. To educate people why smartphone convenience is slavery to the Tech Lords, and how in the future all this data will shape a Digital ID in a Social Credit System which will render people's freedom obsolete.
To all apologists, Apple employees and shareholders who will hold their stock after this, I have a simple message: F*ck You. No. Seriously. Go to hell.
You created and supported the monster which will eat you in the end.
Seriously? Breathtaking? Talking to someone?
It is astonishing to me how "smart" and "educated" people get lost in meaningless details of technological implementation when we are witnessing an assault on privacy and personal space on a global level. Let me help you understand something important: sitting in comfort and relative security cannot give you a real-life perspective on the long-term impact of this system. The long-term effect will be the normalization of the surveillance state. You don't need people to "See something, say something"; you just trust a corporation working with a database provided by a third party (governments) without any form of public oversight. Without open source software, with a proven record of abuse (see the China servers case) and a proven record of exploits (see NSO/Pegasus). And I have to talk to someone? :)
Please, grow up.
I have lived part of my life under a communist regime. The political language of personal invasion in the name of the "common good" was the public mantra, and abuse of the system was an everyday reality.
When the secret services take away your mother on a "signal" received through a "trusted procedure" created under the wisdom of the Party elites, you cannot do anything. Trust me.
Didn't iMusic or whatever upload users' personal high-quality files to the cloud, stream them back in lower quality, and then delete the originals from the users' devices? I remember something like that making the news.
Imagine being a musician and Apple deletes your originals to stream your own music back to you in low quality.
Yes, it did something like that. If and only if the user signed up for, paid for, and enabled the iTunes Match service, the whole point of which is to replace your local files with cloud music. (I don’t find this desirable myself, but I can see how some people might have.)
Apple screwed up big time in the functionality and messaging around it and some people found their original files deleted when they weren’t expecting it. Big problem.
But it was hardly some plot to scan users’ hard drives for copyrighted content and delete it. On the contrary, iTunes Match would happily launder a whole library full of pirated low-quality MP3s into legal, high quality, DRM-free AAC files.
For many years, anti-virus vendors have been able to do that. Why haven't those vendors already been co-opted by governments (Kaspersky on the Russian side, Microsoft on the US side) into scanning for illegal, copyrighted or secret material and reporting on it?
Even open source products like ClamAV rely on an opaque database of virus signatures.
Kaspersky is blacklisted as a government security vendor for anything remotely resembling classified or sensitive material. Also, anti-virus definition databases are open to being perused by the user: you can actually dissect what is being scanned for. Apple's system is not, and goes to great pains to be as opaque as possible. Understandable as that may be, from the point of view of a rational free agent it is still a threat at scale.
On top of that, there are many other tools that could do all the same things. People are acting as though the hard work has only just been done, and that future exploitation now comes easier. The hard part was actually building the system that locks Apple out of your pictures. Scanning your files and sending some metadata is literally a few lines of code and could have been pushed out in a week at any time in the past.
> Why haven't those vendors been already co-opted by governments (Kaspersky on the Russian side, Microsoft in the USA side) into scanning for illegal, copyrighted or secret material and reporting on it
My reply about the only-recent prevalence of E2EE and HTTPS was an implication that the governments mentioned didn't need to get those companies (such as anti-virus companies, etc) to scan for [insert scary material here] as they would have just been able to hoover it up on the wire (as was shown happens in the US by Snowden)
Thus the question of "Why haven't those vendors been already co-opted by governments" is answered IMO - it wasn't necessary.
Edit: to be fair - I now see what you mean - "never leaving the device and still getting scanned" vs "scanned in transit"
Pushing a Spyware Engine is literally abuse of all people, including children, and it's much worse than any problem they claim to be fighting. Even if you believe them.
By this move, people are indoctrinated with the idea that being watched by someone big and powerful is OK. They learn to accept such abuse, and what could be worse for anyone's safety than learning that? If one is serious about any kind of safety, one should learn to walk away from such abuse first, just like with any other abuse.
It is an attempt to legalize such Spyware Engine installation. Nothing more. The story is just to sell this move using the emotional response of naive people. Because high emotion is when people think poorly about long-term consequences. Think about vendettas and their consequences.
Those people should be educated about what the real abuse is, and they should teach their children to recognize it, because the abuse by Apple is already here and it is much worse than the problem they claim to be trying to solve. People need to understand that it will only get worse with time.
So this one time I was tasked to verify a complaint of child pornography and image the infrastructure for evidentiary purposes, if necessary. It was the first time I’d ever been exposed to it as a naïve operations kid at a hosting provider.
Imagine my surprise and horror to find that not only was the complaint accurate, it led to a completely polished thumbnail site on par with PornHub. Boom, right there, no login. No nothing. Five high, seven wide thumbnails. No two of the same child. A complete search engine based on Solr that could filter the thousands of images by age of the victim. By the number of adults participating in the rape. A threaded comment section on each image where people discussed children in their neighborhood and their fantasies of abducting them. An erotic literature section where parents wrote about how they’ve been sexually attracted to their children since changing their first diaper.
I’ll never forget a photo of two men brutally raping a girl of about 9 or 10, because it was one of the highest voted on the site. One of the comments, which I still remember when I close my eyes at night, simply said “its better when they cry”. It’s been eleven years and I’ve seen and dealt with much more of it since then, and I still weep to this day thinking about the pain inflicted on those children, the pure evil of those who enjoy it, and even the design and engineering team who bafflingly put their skills toward building that nadir of human achievement.
Tell me again what “the real abuse” is and educate me, please, because you sound pretty confident that the frighteningly common story I just told isn’t that big of a deal. I can’t believe anyone sane would compare going through your photo collection, even egregiously, to the rape and exploitation of children and think, yeah, you know, based on my value system door number one is the “much worse” injustice. Your opinion is fucking sickening and the exact type of detached inhumanity that is poisoning this industry top to bottom.
Interesting that you chose to so thoroughly explore such a heinous site when all you had to do was image it and provide a copy to the authorities. More interesting that you then depict its graphic content in such detail here.
Perhaps a thorough search of your hard drives and NAS is in order, citizen. No need to report to your local precinct, we've already pushed the updated scan list to your devices for analysis.
It’s quite telling that my story described graphic sexual assault of children and you immediately forgot about the victims and made it about the perpetrators. Probably because if you had instead asked:
“So, just because children are being raped, we should give away our freedom and privacy?”
...your ground wouldn’t be as perceptibly firm, even though it’s exactly the question you’re asking. I’m also not going to respond because of the obvious incongruity and false dichotomy of the question, that aside. To be honest, I’d rather you have kept that question to yourself, and I’d go further and speculate that sentiment would apply to most opinions you hold.
>> Pushing a Spyware Engine is literally abuse of all people, including children, and it's much worse than any problem they claim to be fighting. Even if you believe them.
>> It is an attempt to legalize such Spyware Engine installation. Nothing more. The story is just to sell this move using the emotional response of naive people.
>> Those people should be educated about what the real abuse is, and they should teach their children to recognize it, because the abuse by Apple is already here and it is much worse than the problem they claim to be trying to solve.
> Not a single person arguing against this idea is being obtuse or insensitive to the very real problem of CSAM.
Not a single one, huh? Are you certain? I made sure to emphasize the six times lovelyviking dismissed or minimized child sexual abuse as a concern for you in case you somehow missed all six of them the first time through. Which is weird, too, because they repeated the same point a couple times to make sure we heard them loud and clear.
> will restate that regardless of the poor taste, it doesn't change the points validity.
Just wanted to point out an oft-overlooked tidbit for the non-philosophers in the room.
Validity only conveys that an argument has proper form. There are many valid arguments that are nevertheless bull, because while they have valid structure, they do not follow from true premises. You should not be pursuing mere validity, but soundness. The state of having valid structure, and following from true premises. I also try to go for complete as well, meaning one has admitted all relevant evidence to the topic at hand, but that tends to be more of a rhetorical drawing of the line.
No one questions that there are awful people out there. God only knows, I've had my share of awful things found on computers I've been the steward for. At the end of the day, though, I have to weigh my utility as a means of control and oppression against my normative moral compass and my axiomatic understanding of how the world works. Mine tell me that there will never be a shortage of people willing to keep those people in check without handing governments the foundations of population-scale control mechanisms. The difficulty of mitigating a government in the process of abusing one of those is far higher than merely being proactive when the situation warrants.
Solve problems at the level they are best solved. Centralization is almost never the answer, except in questions of enforcing control or applying leverage against someone else's will.
It might make me odd, but I can still look at something vile like CSAM scanning and recognize it for what it is: a violent, non-consensual violation of what is expected to be private, for the furthering of a small group's political aspirations.
I condemn this no less than I would dragnetting abolitionists, whistleblowers, revolutionaries, or other agents of change.
I assure you, there has been much sleep lost in contemplating whether my moral compass has gotten screwed over time. I don't take these issues lightly. I care about it so much that any doubt on my part is grounds for immediate high intensity scrutiny. Yet I keep coming to the same outcome. This. Is. Wrong. On so many levels, and in so many ways.
> It’s quite telling that my story described graphic sexual assault of children and you immediately forgot about the victims and made it about the perpetrators. Probably because if you had instead asked:
“So, just because children are being raped, we should give away our freedom and privacy?”
I think the frame being used to discuss the issue is the problem.
CSAM is the product of abuse/exploitation of children and that in turn is a symptom of a more serious problem: the growing prevalence of people with depraved minds who only get a slap on the wrist when they are caught, instead of a being punished with a strong deterrent.
Once the punishment for child abuse or exploitation is commensurate with the crime, demand for CSAM will plummet.
Blanket scanning of people's devices is a technological solution to a problem that is inherently social—it is trying to treat the symptom instead of the actual ailment.
It is worse, though. The surveillance apparatus doesn't care much about your actual words and images, but about your associations and relations. This makes finding needles much more efficient, and pre-encryption exfiltration circumvents user-added measures like third-party iCloud encryption. And I am pretty sure this will be baked into the OS deeper than your VPN/DNS settings can reach. Opening up this side channel isn't undone by trusting some "iCloud deactivation". Much less in your mind.
With this argument you can dissolve anything into "why even bother?". In the real world it very much makes a difference whether the breach of trust is secretly implementing a side channel or secretly using a documented one. The latter, for example, can maybe be plausibly activated "by accident", but having a backdoor at all is hard to justify. And for a company like Apple, the size of a breach of trust matters for financial incentives.
This was the last straw for me. I’ve started looking around at alternatives (for iCloud Photos there are literally no decent alternatives so far to sync 400 GB of photos) and have swapped iCloud files for just a simple NAS at home with Tailscale.
Reasonably, there are other criminals using iDevices, why stop at this? Even pre-crime police work may be feasible with AI and advanced pattern recognition.
There is another information leak that I have not seen mentioned in any news report. If an image hash matches the CSAM database, then it is sent to Apple, encrypted and with the “safety voucher”. Apple can decrypt the image only if they receive enough vouchers, and so they claim that they do not have any information about the user in case the number of vouchers is lower than the threshold.
But actually they do have information: they know that a user has a specific number of images which are perceptually similar to known CSAM material. This information is not conclusive, but it’s also not nothing. For example, could a court order Apple to release the unencrypted iCloud backups of all users who had at least one match?
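To make the concern concrete, here is a toy sketch in Python of what a server-side party could learn if it can simply count which uploads carried a match voucher, even before any decryption threshold is reached. This is only a model of the worry described above, not Apple's actual protocol (which is said to use synthetic vouchers precisely to obscure the count); the threshold value and data structures are made up.

    # Toy model of the concern described above, NOT Apple's actual protocol.
    # Assumption: the server can tell which uploaded vouchers correspond to a
    # perceptual-hash match.

    THRESHOLD = 30  # hypothetical match count before any voucher can be decrypted

    def suspicion_report(vouchers_by_user):
        """vouchers_by_user maps a user id to a list of booleans (True = matched a known hash)."""
        report = {}
        for user, vouchers in vouchers_by_user.items():
            matches = sum(vouchers)
            # Below the threshold nothing can be decrypted, but the count itself is
            # already a signal that could, in principle, be demanded by a court.
            report[user] = {"matches": matches, "decryptable": matches >= THRESHOLD}
        return report

    if __name__ == "__main__":
        demo = {
            "alice": [False] * 100,              # no matches at all
            "bob":   [True] * 3 + [False] * 97,  # 3 matches, well below the threshold
        }
        print(suspicion_report(demo))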
If they don't keep track of which matches are false positives, wouldn't it be possible to be extremely unlucky and pass the threshold with nothing but false positives generated by Apple?
Not just that: random folks can send you media in multiple ways and you could get embroiled unnecessarily too; an innocuous QR code might automatically download an image if your apps are configured that way.
>innocuous QR code might automatically download an image
Just a few days ago at DEF CON, a presenter had been going around with the EICAR test string in a QR code and having fun with all the forced AV hits it can cause.
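For the curious, generating such a QR code is trivial; here's a minimal sketch using the third-party Python `qrcode` package (the EICAR string is the standard, harmless anti-virus test string, not malware, and the filename is arbitrary):

    # Minimal sketch: encode the standard EICAR anti-virus test string into a QR code.
    # Requires the third-party packages "qrcode" and "pillow" (pip install qrcode pillow).
    import qrcode

    EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

    img = qrcode.make(EICAR)   # build the QR code image
    img.save("eicar_qr.png")   # any scanner app that writes the decoded payload
                               # to disk may trigger a (harmless) AV detection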
The argument is: could this new system result in some court issuing a blanket "send us the backups for anyone who gets a match, for any reason" order?
This whole Child Safety feature was not planned to be announced yet. They tried to fix some misleading leaks. My guess is that they are now preparing to announce everything properly in September.
I don’t want to be mean, but sometimes leaks do more harm than good, especially when they don’t match what is actually coming. Many people are angry here, and most of them have not read the actual technical details.
Respected people should be more responsible about what they spread.
Technical docs often live long term with typos. This is a perfectly timed leak to take pressure off regulation pushes by waving one of the biggest honking carrots in front of Western political establishments.
This has political quid pro quo written all over it, and anyone who thinks this type of backhanded signalling isn't common isn't paying enough attention.
This is one point of view. However, there was a chance to win over both parties (customers and governments) by announcing everything together.
Why do it now, when it is such a short time until September, and that time does not really matter at the regulation level? Or are the scheduled votes coming very, very soon?
This provides strong arguments against new regulations whether it was leaked or not; the difference is the specific attention it draws.
You can't forecast vote outcomes. Remember, politics is squishy. The totality of a policymaker's decision-making is more than just the facts, like it or not. How they are feeling toward you, the interests they perceive you advocating, and how dedicated you appear to "the public interest" all factor into that Yea/Nay decision, and could be the difference between a relatively draconian statute NOW and something so wishy-washy and limp-wristed you barely have to bother the accountant at all to adjust fiscal plans.
Perception management is a full time job, and at the core of marketing, lobbying, PR, and corporate strategy. If information does get out, it's because someone either blew the whistle, or because someone is fishing/doing clandestine signaling.
I'll be honest, it strikes me more as whistleblowing in this case; but there has been enough concerted effort at syndication I'm not necessarily closed to a strategic leak.
A question to you legal experts out there: if a potential CSAM match is found during client-side scanning, but such a match has not yet been confirmed by an Apple employee to actually be CSAM, does Apple have the option, legally speaking, to SIMPLY DELETE the "gray-area" content in-place (just like a regular virus scanner), instead of sending it to Apple for further analysis?
Someone performs "an implication by malicious actors attack" on your iPhone/iPad and the injected content simply gets deleted. You take a (false positive) photo with your iPhone/iPad - and it simply disappears (making you retake). No private content is ever sent anywhere, no horrible accusation is ever made, no CSAM ever gets uploaded to iCloud. Simple.
It seems like this system was designed specifically so that this would be impossible, and such a feature would go against what seem to be design goals of this system.
They went through the trouble of making this whole “private set” matching so that the client does the matching but doesn’t know the result of the matching. Only the server can (once enough matches are made that the key is available).
But this strongly suggests that the entire Apple/NCMEC initiative is a "surveillance-and-arrest" system first and foremost (preserving hash secrecy at the cost of user privacy), with the goal of "stop known CSAM distribution in iCloud" (developing an in-house CSAM database at the cost of scanning effectiveness) being secondary.
This seems to come from the NCMEC, not from Apple. I remember another thread (can’t find the link) from someone explaining how difficult it was for them to get access to PhotoDNA and the associated hashes.
What they did say was that privacy is a human right. So they are, by their own confession, abusing human rights, albeit to protect another human right. But the irony is striking.
Not being an Apple fan, I'm enjoying the schadenfreude of watching them destroy arguably the most favored, though one of the last remaining, quality of Apple products -- people used to see them as the great protectors of their privacy.
It fits, though. They also used to be considered the masters of UI design, and the best at hardware innovation, and in manufacturing. They've mostly destroyed those reputations, as well.
It's actually good when old institutions/companies fail, because of the opportunities created for others. The problem is that their failure often takes many years (look at IBM, for instance). If only we could accelerate the process across FAANG, the world would truly be a better place (and their mission statements fulfilled).
EFF conflates the CSAM detection and the iMessage safety features in the first paragraph. Disappointing that they can’t make their case with the facts.
About 2/3rds of their original letter was spent characterizing the system to warn kids about dick pics with on-device inference as an iMessage backdoor, which literally nobody serious believes.
More astounding was the privacy issue they raised with it, which I've not seen raised by anyone else.
Let's review how this feature works. It is only on if parents explicitly enable it for their child's phone when they set up the child's phone for parental controls.
If it is on, images sent to the child's phone are scanned using a ML system to recognize sex images. When such an image is found, the child is given a screen that warns that the image contains content that may be harmful to the child, and may be an image of someone who did not consent to having it sent.
The child is asked if they want to reject the image or view it.
If the child elects to view it and is 13-17 they are shown a blurred version of the image and that is the end of it.
If the child elects to view it and is under 13, they get another screen that says that if they view it their parents will be notified because the parents want to be able to check to make sure the child is safe. They are again asked if they want to reject it or view it.
If they reject it, that's the end of it. If they view it they get a blurred version and the parents are notified.
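Roughly, the flow described above boils down to something like this (a toy Python sketch of the published behavior; all function names and prompts are hypothetical stand-ins, not Apple APIs):

    # Toy model of the iMessage communication-safety flow described above.

    def classify_sexually_explicit(image) -> bool:
        """Stand-in for the on-device ML classifier."""
        return image.get("explicit", False)

    def child_confirms(prompt: str) -> bool:
        """Stand-in for the warning sheet; here we just pretend the child taps through."""
        print(prompt)
        return True

    def handle_incoming_image(image, child_age, parental_controls_enabled, notify_parent):
        if not parental_controls_enabled or not classify_sexually_explicit(image):
            return "shown"                                # feature off or image not flagged

        if not child_confirms("This may be sensitive. View it anyway?"):
            return "rejected"                             # child declines: that is the end of it

        if child_age < 13:
            # Under-13s get a second warning that viewing will notify their parents.
            if not child_confirms("Your parents will be notified if you view this. Still view it?"):
                return "rejected"
            notify_parent()                               # only a notification; the image stays on the child's device

        return "shown"

    if __name__ == "__main__":
        result = handle_incoming_image(
            {"explicit": True}, child_age=12,
            parental_controls_enabled=True,
            notify_parent=lambda: print("(parent notified)"),
        )
        print(result)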
The privacy issue the EFF has with this? If I send your 12 or under child a dick pic and they elect to view it knowing that their parents will be notified and see a copy of the image, my privacy might be violated because I did not consent to the child's parents being told I'm sending their child dick pics or to the parents seeing my dick pic.
I wonder what the EFF's opinion would be if I sent a dick pic to a 12 year old whose device does not have parental controls, but the kid decided to show it to the parents. Has the child violated my privacy? If we are in a state that has a civil law against nonconsensual image sharing would the EFF help me sue the child?
I have nothing to add except that once a child accepts the risk at each prompt, I believe they get to see the original image and not a blurred version.
When I first read their objection, I thought that the system would transmit the image from the child’s device to the parent’s device. I could see how that could be problematic. Except it doesn’t: the record of the image stays with the child’s device, and the parent is simply told the record has been created. At this point, the most charitable interpretation I could give is that they’re worried the model will have many false positives and ping parents about every photo a child receives. iMessage back door, this is not.
Their “concern” is literally as absurd as you describe.
This is effectively a virus scanner. Files are hashed (in a fancy way), compared against known hashes, and matches are reported. Your Windows desktop has Windows Defender.
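For what it's worth, "hashed and compared against known hashes" looks like this in the simplest case, using exact SHA-256 digests. This is a minimal sketch of the analogy only; NeuralHash is a perceptual hash designed to survive resizing and re-encoding, which plain SHA-256 is not, and the placeholder hash below is just the digest of an empty file.

    # Minimal sketch of signature-style matching: exact SHA-256 hashes compared
    # against a set of known-bad hashes. Analogy only; not Apple's perceptual hashing.
    import hashlib
    from pathlib import Path

    KNOWN_BAD_HASHES = {
        # SHA-256 of an empty file, used here only as a harmless placeholder.
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def scan(directory: str):
        """Yield files whose hash matches the known-bad set."""
        for path in Path(directory).rglob("*"):
            if path.is_file() and sha256_of(path) in KNOWN_BAD_HASHES:
                yield path

    if __name__ == "__main__":
        for hit in scan("."):
            print("match:", hit)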
EFF lost a lot of credence with me after the Best Buy case. They made this big fuss about how Geek Squad employees were agents of the state for reporting CSAM on a customer's hard drive while doing a requested file recovery. When searched, the defendant had CSAM on 5 different devices. The case was dismissed on a technicality. Never did the EFF mention this. Never did they say they were defending a gynecological doctor who had CSAM on 5 devices. Nope, it was spin city.
Now here we are. Apple has made a privacy preserving anti-virus scanner. It does not upload unknown files as Windows Defender does, it does not scan everything. It scans your photos, for known CSAM images, when you are using iCloud backups, in order to comply with the law that they must scan their hosting services for CSAM. It has a more narrow scope than an anti-virus scanner, and a bigger societal benefit.
We seem to have taken the idea that sometimes bad things are promoted through "think of the children" to mean we must oppose anything involving the protection of children. Our greatest fear in this is the government using a national security letter to search for banned ISIS memes? Let's address that slippery slope when we come to it, and let's note that we do not see Windows Defender or similar doing the same. This is great, I hope it puts a bunch of pedos behind bars.
>This is effectively a virus scanner. Files are hashed (in a fancy way), compared against known hashes, and matches are reported
yeah with the small difference that the virus scanner reports to you, whereas this scanner reports to Apple or authorities.
The virus scanner's purpose is to alert you of viruses on your machine, the purpose of apple's scanner is to engage in blanket surveillance and treat ordinary users like potential consumers of CSAM by default.
Nothing about this is privacy preserving. Privacy would be preserved if Apple refrained from touching any of the information that belongs to me and didn't treat their customers like potential criminals. Imagine you rent a parking space for your car and at random intervals, for no reason at all, with nothing suspicious ever having happened, the owner comes up, opens your trunk, and rummages through it to check for child porn. That's what Apple is doing.
Since when has renting storage space ever entitled anyone to check what the customer puts in the storage? Do you expect the bank clerk to crawl through your personal safe deposit box as well to prevent crime?
>yeah with the small difference that the virus scanner reports to you
Important to note that in case of the Win 10 Defender and its default settings, executables, and hashes of other files are uploaded to Microsoft automatically.
Much less bad than a CSAM false positive reporting someone to the authorities, but not really "reporting to you" either.
If Microsoft were explicit that they were trying to look at your personal files and were then going to send the police to your house based on what the hashes were, that would be a big issue too.
This analogy only works at a superficial, technical level. When a virus scanner finds a virus, it alerts me and I can either quarantine or delete it. It’s up to me to decide if what it’s found is actually a virus, or a false positive.
When Apple's tool thinks it’s found the material it’s looking for, the assumption is that I am a pedophile who collects CSAM.
> When a virus scanner finds a virus, it alerts me and I can either quarantine or delete it. It’s up to me to decide if what it’s found is actually a virus, or a false positive.
This is also a bit superficial. If you are breaking the law, you can't decide by yourself whether you are breaking the law or not. That is up to the judge.
While you can quarantine or delete the virus, the AV vendor is still getting all the stats. Those may not include PhotoDNA matches, but cryptographic hashes are included for exact matches. It is still perfectly legal to report CSAM content based on those matches, and we can't be sure whether that has been done or not.
In the case of Windows Defender, what if automatic sample submission is enabled? Uploading and storing a file makes Microsoft a cloud provider for that specific scenario, and it is then required by law to report CSAM content.
Who knows whether PhotoDNA is also applied to this content without that having been disclosed? It would be legal, and there is no need to disclose it.
Rather than oppose a measure to protect children because of the fear of a few false positives, why can't you accept the fact that the societal benefit outweighs the potential risk?
A few innocent men might be condemned to rot -- or be murdered -- in prison, but Apple has developed a system that mostly protects your privacy and could save the lives of potentially millions of children around the world. The rights of a few harmed innocents must be balanced against the greater societal good.
Sarcasm doesn't land well here, especially when it's a long comment.
Thinking about false positives is the wrong thing to focus on. The point is that the technology could be applied to any image: Tiananmen Square, Hong Kong, depictions of God, Muhammad, or Jesus, etc.
> why can't you accept the fact that the societal benefit outweighs the potential risk?
Because a shit technical solution (which opens the doors to other abuses) isn’t the fix. Because Apple are not the government.
A meaningful scheme to protect the children would need:
- better sex ed in schools
- better education for parents and people who would like to become parents as to the risks and signs
- publicly announced meaningful support for people who self-identify with dangerous thoughts and seek help before their thoughts become behaviours
- better support for people from abusive homes as they mature
- (probably) two or three generations to pass through before you could measure statistically sound improvement
Funding of child care would identify the kids that are abused. Identifying any bad actor becomes much easier and targeted without compromising the privacy of everyone.
More eyes on the problem and you will get better results. Easy solution that cannot be abused by bad political actors.
> Rather than oppose a measure to protect children because of the fear of a few false positives, why can't you accept the fact that the societal benefit outweighs the potential risk?
But Apple apologists have been telling me all day that I could stop the system from scanning if I just disable iCloud.
If a person can disable it that easily, then the system is effectively useless.
No, it scans photos for arbitrary (fancy) hashes, and Apple chooses to limit it to CSAM images. Nothing about the tech prevents it from being expanded to other kinds of images. And from what I understand, nothing prevents it from being expanded to other file types either, does it?
The only thing that ties this tech to CSAM images is Apples promise (and claim) to keep the scope limited.
The only thing that prevents Apple, Microsoft, or Ubuntu from executing arbitrary code on your system is their promise to keep the scope of updates limited. You already operate under this trust model.
Do I? Can you elaborate? Do you notice a difference between a company (potentially) doing this behind closed doors and as quietly as possible and doing this in a public official way?
What if I tell you I don't use any of the operating systems you listed? Does your answer change?
I have a question: Would you welcome such a move? Do you believe that taking things a bit further would be nice because "we might catch a few criminals"?
If you are not compiling your operating system, every library and every application from scratch then you are blindly trusting third parties.
And if you are assuming Apple can't be trusted when they say they won't expand this to non-CSAM use cases then not sure why you would then trust Microsoft, Ubuntu etc.
> If you are not compiling your operating system, every library and every application from scratch then you are blindly trusting third parties.
This implies that trust is always the same, and that if you trust one entity (you did not even limit your answer to corporations) you are supposed to trust everyone, and that if you don't then you have some kind of logical error in your thinking. It also implies that losing trust in one entity, but not in another, somehow doesn't make sense.
But once a system is in place, it becomes easier to do things that are a variation on what that system already does, as opposed to doing it from scratch.
Also, I think the outcry would be larger if they did it from scratch compared to if they did it as an extension to some existing, known capability. If that's the case, they'd have less to lose in doing such a thing if the base system is already in place.
No, the GP was correct. As soon as any closed-source software implements automatic software updates, you're always one malicious update away from the system betraying you. Having "a system in place" for doing potentially evil things is unnecessary. Interim steps of any kind are unnecessary.
What Apple has done this week doesn't bring the iPhone closer or further to your hypothetical dystopia than it already was. Or Chrome, or Windows, or Android, etc. They update themselves. Every update your devices have done in the past decade could have betrayed you.
Anything that automatically updates is always one step away.
Yes, it's always "one malicious update away", but what I'm saying is different to that.
You're talking about "installing" a change, and more about their capability to change what happens with your data.
I'm talking about 1) the effort required to _write_ the change and -- more importantly -- 2) the potential backlash being different as to whether it's a modification of an existing functionality vs an entirely new type of functionality. This second point is a major one, because it would be seen as much worse if it looks to the public like they've gone out of their way to do something wrong, and would be much more damaging to their reputation. IMO anyway.
> For the conspiracy to work, it'd need Apple, NCMEC and DOJ working together to pull it off voluntarily and it to never leak. If that's your threat model, OK, but that's a huge conspiracy with enormous risk to all participants
I'm not sure how that's relevant to my comment. I'm not saying any particular thing will happen. I was just disagreeing with the person who said "The only thing that prevents ... from executing arbitrary code on your system is their promise to keep the scope of updates limited. You already operate under this trust model." My disagreement is that it's more complex than "the only thing".
If you think Apple is on a slippery slope and will just expand this feature without any consideration, then why have an issue now?
Apple already has your unencrypted photos. They could scan it server-side. Or they scan it on your device and simply not tell you. And they can push OS updates without you knowing to enable all of this.
Apple could even push CSAM to your phone and frame you if they wanted to. They control ALL of the keys to your device whilst you are using iCloud and allowing software updates.
… yes. And it will take less than 2 years for the world to push new entries, as has happened for the last 10 years.
They will be forced to detect rafts of totally unrelated content, ranging from photos of the king in Thailand to Winnie the Pooh in China. This is going to happen.
You need to put away your “protect the children” pearls and realize wtf game they are playing here. It’s always about protecting the children, and folks fall for it every time.
It's really bizarre that the risk isn't more obvious to people. Apple is partnering with organizations in foreign countries that will supply CSAM hashes, except that there is no way Apple can actually verify them. To do so, Apple would have to ask these foreign organizations to send them CSAM, creating a massive liability. Instead Apple will just be flagging arbitrary file hashes from countries that will absolutely poison the well.
What are your thoughts on state actors asking Apple to use this technology to imprison dissidents?
This effectively criminalizes anything the state deems unacceptable, which in some countries includes criticizing the ruling party. Is it right for an American company to open their gates to that?
Why has that state actor not already forced Apple to deploy a special firmware image to all citizens via software update functionality? Yes, this can be used to search for banned memes, but if a country has that flex, they would already be doing so.
The answer is Apple risks a whistleblower on their hands if they do it secretly, regardless of the targeted country. Remember Google's project Dragonfly?
Plus, software needs to be maintained. I don't see how they could do that in perpetuity without a major risk to their valuation. So all moves do need to be made public while the software is created in countries that have a free press.
So why wouldn't that same whistleblower complain if Apple expands their CSAM detection system to other use cases?
And iOS is a modular operating system. They could easily swap out the Photos.framework for different state actors and support that in perpetuity. They were already doing this when cross-building for ARM/x86.
> So why wouldn't that same whistleblower complain if Apple expands their CSAM detection system to other use cases?
I assume Apple could make it very difficult, if not impossible, to detect what they're searching for when they are using hashes created and transmitted by all their own hardware and software.
But, even if they did publish the hashes and those were somehow verified in free-press countries by a trusted 3rd party, that does nothing for countries with no free press. Such places would have no knowledge of what's being searched for, and that's the whole point. I won't support an American company that helps oppressive countries stymie what little freedom their people have left to connect via the internet. To the extent they are successful, the results of those tools will eventually be aimed at us, either via uninformed people or by using the tools themselves on us.
> And iOS is a modular operating system. They could easily swap out the Photos.framework for different state actors and support that in perpetuity. They were already doing this when cross-building for ARM/x86.
Sure. And I expect if there were something nefarious there working on behalf of foreign governments then we would eventually hear about it, one way or another. It's a terrible idea that would be abused, and humans are natural pattern recognizers.
> I assume Apple could make it very difficult, if not impossible, to detect what they're searching for when they are using hashes created and transmitted by all their own hardware and software.
Correct, it would be easy to slip in additional hashes without the team knowing what those hashes represented.
HOWEVER, as soon as these additional hashes match something, the first person to see them will be an Apple employee performing manual review. When they see a picture of Winnie The Pooh or a photograph of some classified spy plane, they're going to know that the CSAM system is being used for purposes other than CSAM.
Very naive to assume those hashes won't be treated differently on the backend. The most logical thing would be to send those directly to the CPC/NSA, since Apple's human review is clearly a smokescreen at the point where non-CP hashes are added.
But someone has to write code to hold multiple sets of hashes. And someone has to write the code which treats reports differently. It all has to be written and maintained. Thus developers at Apple will still know that the system is being used for something other than CSAM.
> developers at Apple will still know that the system is being used for something other than CSAM.
Will the next generation's developers call them out for that? Or will they be given justification to accept it?
We're inching towards 1984 with these big tech monopolies. It was one thing for Snowden to reveal the secret agreements the government imposes upon tech companies. It's entirely another for privately run businesses to capitulate, and thus excuse politicians from needing to make intelligence-gathering a public issue.
Whatever backroom discussions are occurring about this topic need to come into public view. This just doesn't make sense on the surface. The government can't have access to secretly monitor everything on the internet. It's too much power for too few, ripe for abuse by bad actors, etc. There must be another way that involves an informed citizenry. I don't care how uninformed we've shown ourselves to be in the last decade. We should press forward on informing regardless.
Hashing is done on device, matching is also done on device. In the event of a match, a "safety voucher" is generated and uploaded to iCloud. Multiple safety vouchers are required for your account to be flagged, at which point the contents of these vouchers (which contains metadata and a grayscale thumbnail of the photo) can be viewed by Apple.
> Multiple safety vouchers are required for your account to be flagged
I don't see how that makes any difference. What if someone plants bad data on your device? That would of course be a concern for cloud-scanning too.
I don't care how secure Apple says their devices are. There are companies that can crack them, and you can bet some unscrupulous people will use that against their opponents. Politicians and other influential people should be as concerned about this as everyone else. Didn't Saudis crack Bezos' phone to reveal his affair? With this tech they could make up worse stories. I believe our justice department could tell the difference between a hack and someone who actually harbors bad data most of the time, but I don't like relying on that.
Given that a functionally identical system has been implemented by Google for years, we should already know what will happen. So let me ask. Is this already happening to people with Android devices? In terms of opportunities for framing someone, how is what Google does any different?
Google's system doesn't do on-device scanning, and I gave an example above of something like this happening. Security is a constant race between good and bad actors. If you weaken your system you're scoring for the other team.
>This is effectively a virus scanner. Files are hashed (in a fancy way), compared against known hashes, and matches are reported. Your Windows desktop has Windows Defender.
The analogy isn't great. Anti-virus/malware software provides a benefit to the owner of the device; Apple's software does not.
Apple is using this as a proxy for scanning their iCloud infrastructure. If you are using their cloud service, it does provide a benefit to the owner of those machines.
I think most people are fine with the measures taken in Messages, much less so with the measures taken with scanning of images and reporting to authorities, and the possible scope creep that is only prevented by policy, rather than by capability.
As someone who cares about a free society, I am appalled that we push through de facto censorship regimes to oppress the world using weak child-protection arguments.
"Virus Scanner" is optional, many ways to disable it, you won't be arrested if you are infected, and government doesn't get to decide what virus is.
Apple scanning is NOT optional, if they have false match or mess up you will be charged/investigated for one of THE most hated crimes in human history, even if you win the case the damage will be irreparable.
You lost your mind if you think governments are not going to setup their own CSAM database hashes with their own "manual review" centers. Apple will not be able to jack shit on what each country considers CSAM.
It is optional to use iCloud Photos. There's no indication you get arrested if they find something; it isn't reported directly to the FBI. Apple will just disable your account, and there is an appeals process to restore it if they made a mistake.
There's more nuance here beyond the clickbait headlines.
A scanner of any kind is a tool that the user chooses among several options in the App Store based on community reviews, is preferably open source, with parameters that the user sets, and with specific content and directories that the user would like to scan. In many cases, people don’t want to install any scanner at all. The job of a virus scanner in particular is to protect the user.
Apple’s scanner is not installed by user, can scan for arbitrary information, is closed source, uses an unknown database and harms most users.
It’s more like a virus or Trojan than a virus removal program.
Because feature-based detection has such a great specificity record? That's where the problems start. At the very least, I suspect their employees are going to see a lot of teenagers' nudes.
No idea how that will translate into Apple One family account holders' lives being torn apart. We'll see.
Windows Defender a) tells you exactly what file/which virus triggered it and b) does not report you to the police. And AV that uploads files by itself is not OK either.
Then, either the DB or the algorithms will result in false positives, and you have to trust the reviewers 100% – with your life – to sort them out correctly. From the article at https://news.ycombinator.com/item?id=28110159:
(The false-positive was a fully clothed man holding a monkey -- I think it's a rhesus macaque. No children, no nudity.) Based just on the 5 matches, I am able to theorize that 20% of the cryptographic hashes were likely incorrectly classified.
> This is effectively a virus scanner. Files are hashed (in a fancy way), compared against known hashes, and matches are reported. Your Windows desktop has Windows Defender.
1) I don't run Windows.
2) The principle of antivirus software is different: the software scans your files but does everything locally and with the end user in full control over what happens next, and of their data. Windows Defender does not report you to the fuzz when it finds a match -- yet. Given that it is apparently now enforcing copyright laws in addition to protecting the end user against viruses, that may change.
> The case was dismissed on a technicality.
If the cops want to catch chomos and bring them to justice, they can assiduously avoid bringing the fruit of the poisoned tree into the courtroom. The societal risks of allowing them to bring ill-gotten evidence to trial are too great, no matter how evil we think the defendant is.
> It scans your photos, for known CSAM images, when you are using iCloud backups, in order to comply with the law that they must scan their hosting services for CSAM.
The USA has no such law (yet). Service providers have a duty to report if they find CSAM, not a duty to scan for it. Even if they had such a duty, they could scan the copy that lives on their servers, rather than pushing spyware to users' devices and blatantly breaking the trust that a user's device implicitly serves the user's needs.
> We seem to have taken the idea that sometimes bad things are promoted through "think of the children" to mean we must oppose anything involving the protection of children.
That's a disingenuous strawman. No one is objecting to laws that punish child abusers, or to legitimate forensic techniques to catch them. We're objecting to companies -- and now end-user devices -- being deputized to participate in law enforcement dragnets likely in violation of the U.S. Constitution, other national constitutions, and the principles of a free society (the applicable one being: LE doesn't get to search you without a damned good reason signed off by a judge on a warrant, and by extension they don't get to twist OEMs' arms to build devices to search you on their behalf).
Thanks for stating a well-reasoned comment on this issue. I think far more people are in this camp than are willing to admit on social media. The voices of hyperbole have drowned out the conversation rendering all nuance meaningless.