Hacker News

>client-side scanning before _anything_ is uploaded is objectively less invasive than every single photo being scanned on iCloud

That's like saying cyanide tastes better than strychnine. It might be true, but I'd rather just not have either one.



I mean, iOS uses the same client-side machine learning to "scan" your photo library for tons of things. You can search "Dog" and get results, with nothing ever touching Apple's servers. We're happy with this but not happy with the other?


>We're happy with this but not happy with the other?

Yes?! What's hard to understand about the difference between:

1) An application using AI to scan photographs to provide categorization benefits to the owner/operator/user

2) An application using AI to scan photographs to provide accusation and punishment to the owner/operator/user

...especially when feature #1 can be turned off, but feature #2 cannot be turned off?

iCloud mischaracterizing a baby picture as a "dog" might cause some dinner table chuckles, but it's never going to cause meaningful harm. iCloud mischaracterizing a baby picture as a child abuse image can VERY plausibly cause extremely severe harm.

As a matter of principle, my devices shouldn't be designed to act against my will as an active informant to the authorities. The moment they do is the moment I join the flannel-and-wooly-beard set out in the mountains, eschewing technology and living "off the grid".


> but feature #2 cannot be turned off?

The CSAM scan would not be enabled if you had iCloud Photos turned off. All it did was move the scan on-device; it still only ran on photos destined for the cloud.
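To make the gating concrete, here is a toy sketch of hash-based matching that only runs on photos destined for the cloud. This is an illustration of the general approach, not Apple's implementation: Apple used a proprietary perceptual hash (NeuralHash) plus threshold secret sharing, whereas the "average hash," function names, and threshold below are made up for the example.

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale image (list of 64 ints, 0-255).

    Each bit records whether that pixel is brighter than the image's mean,
    so near-duplicate images produce identical or nearly identical hashes.
    """
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def scan_before_upload(photo_pixels, known_hashes, icloud_photos_enabled,
                       max_distance=4):
    """Return hashes from the known database that this photo matches.

    The key point from the thread: the scan is skipped entirely when
    the photo is not going to be uploaded.
    """
    if not icloud_photos_enabled:
        return []  # iCloud Photos off -> no scan at all
    h = average_hash(photo_pixels)
    return [k for k in known_hashes if hamming(h, k) <= max_distance]
```

The sketch also shows where false positives come from in this family of techniques: two unrelated images whose hashes happen to land within `max_distance` bits of each other will match, which is the "mischaracterization" risk discussed above.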


> We're happy with this but not happy with the other?

One carries the risk of not finding a photo you're looking for.

The other carries the risk of a possibly life-ruining criminal investigation being opened against you.

This is a new surveillance vector: it explicitly gives your phone the functionality required to report flagged content on your device directly to the authorities, which is a slippery slope at worst and of questionable effectiveness at best. I'm not sure how you can compare it to strictly local image-recognition AI.



