Humans do get a bit of training data. If a baby is left to itself during the formative years, they won't develop speech, social skills, reasoning skills, and so on, and they will be handicapped for the rest of their life, unable to recover from the neglect.
And the rest of our training data we make as we go, from interacting with the real world.
You could have said the same about every catastrophe that got out of control. Chances are they will eventually gain unauthorized access to something, and then either we get lucky or we get the real-life Terminator series (minus the time travel, so we're f**ed).
> You could have said the same about every catastrophe that got out of control.
Such as what? War?
Climate change still hasn't delivered on all the fear, and it's totally unclear whether it will drive the human race extinct (clathrate gun, etc.) or make Russia an agricultural and maritime superpower.
We still haven't nuked ourselves, and look at what all the fear around nuclear power has bought us: more coal plants.
The fear of an AI Terminator will not save us from a fictional robot Armageddon. It will result in a hyper-regulated, captured industry that's hard to break into.
Can you please recommend a book on ZK proofs for someone with a basic CS-level understanding of algorithms and data structures? I would like to understand them better and use them in dapps. I feel like they completely change the relation between what's data and what's computation, a bit like how matter and energy were linked in physics.
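For a taste of the idea, here is a minimal Python sketch of Schnorr's protocol, one of the simplest zero-knowledge proofs: the prover convinces a verifier that it knows the discrete log x of a public value y without revealing x. The group parameters are toy-sized and purely illustrative:

    import secrets

    # Toy group parameters (real systems use ~256-bit groups):
    # g generates the subgroup of prime order q inside Z_p*.
    p, q, g = 23, 11, 4

    def prove_and_verify(x):
        """One round of Schnorr's proof of knowledge of x, where y = g^x mod p."""
        y = pow(g, x, p)              # the public statement
        r = secrets.randbelow(q)      # prover's random nonce
        t = pow(g, r, p)              # prover's commitment
        c = secrets.randbelow(q)      # verifier's random challenge
        s = (r + c * x) % q           # prover's response
        # Verifier accepts iff g^s == t * y^c (mod p). This holds exactly
        # when the prover knew x, yet the transcript leaks nothing about x.
        return pow(g, s, p) == (t * pow(y, c, p)) % p

    assert prove_and_verify(x=7)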
I think 'collision' or 'interaction' are better words. 'Touch' is meant for bigger objects, like hands, animals, furniture, and fabrics, not molecules and electromagnetic radiation.
False positives would constitute a huge invasion of privacy. Even actual positives would be: how can you report a mom taking a private picture of her naked baby? They did well to drop this insane plan. The slippery-slope argument is also a solid one.
NYT article about exactly this situation[0]. Despite the general technical competency of the HN readership, I imagine a lot of people would find themselves completely fucked if this situation happened to them.
The tl;dr: this man ultimately had his name cleared by the police after his entire Google account history (not just his cloud storage) was searched, along with logs obtained through a warrant served to his ISP. Despite that, Google closed his account when the alleged CSAM was detected and never reinstated it. He lost his emails, cloud pictures, phone number (the loss of which prevented the police from contacting him by phone), and more, all while suffering a gross, massive invasion of his privacy, because he was trying to do right by his child at a time when face-to-face doctor appointments were hard to come by.
This should be a particularly salient reminder to people to self-host, at the very least, the domain for their primary and professional e-mail.
The Apple one was only matching against known images, not trying to detect new ones.
The Google one actually does try to detect new ones, and there are reported instances of Google sending the police after normal parents over photos they took for the doctor.
I feel this neatly captures the overarching corporate philosophies and attitudes of Apple and Google in a single example.
Pick your favourite other example of Apple and Google facing roughly the same problem, and hold their respective solutions up next to the CSAM-scanning example above. I bet they'll look similar.
And it would only notify someone for human review if a certain threshold was reached; just having one or two violating images wouldn't have tripped the system.
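For the curious, a hypothetical Python sketch of that threshold design; the names and the hash function are stand-ins (Apple's actual system used a perceptual hash, NeuralHash, combined with threshold secret sharing, not plain SHA-256):

    import hashlib

    KNOWN_BAD_HASHES = set()   # would be populated from the known-image database
    REVIEW_THRESHOLD = 30      # roughly the figure Apple cited

    def scan_library(images):
        """Count matches against known images (each a bytes object); flag
        the account for human review only once the threshold is crossed."""
        matches = sum(
            hashlib.sha256(img).hexdigest() in KNOWN_BAD_HASHES
            for img in images
        )
        # One or two matches remain invisible to everyone; only crossing
        # the threshold surfaces the account for review.
        return "flag_for_review" if matches >= REVIEW_THRESHOLD else "no_action"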