
That seems unlikely, given that making up hurtful stories about people and spreading them by text or voice is still common. Everyone knows that anyone can make up any story they want without any technology whatsoever, and yet rumors spread all the same.


Not really. I can't think of a recent "leaked texts" case that the participants couldn't easily and plausibly deny (e.g. Elon's supposed messages to Gates), and the same goes for voice messages. Even most images can already be dismissed as Photoshop if all the witnesses agree. The only medium that is somewhat hard to deny is video, like sex tapes, but even that's not too hard. I think there will soon be a race to make deep-learning images look completely indistinguishable from phone pics.


Perhaps that's somewhat true for famous people, although there are plenty of examples of false stories (without any forged evidence, literally just stories) causing real embarrassment and damage to reputation.

But it's even more true for non-famous people getting bullied in their social groups, both online and offline, and that's more what I was responding to (the "asshole friends" in the original comment).


It'll be hard to deny crypto-signed photos https://petapixel.com/2022/08/08/sonys-forgery-proof-tech-ad..., especially if they include metadata that distinguishes photos of AI-generated images from normal photos.


Any camera can be hacked to plant an image in its framebuffer.


There are a few different kinds of 'secure enclaves' implemented on chips, where you can have some degree of trust that their output "cannot" be faked.

E.g. crypto wallets, hardware signing tokens, etc.

We could imagine an imaging sensor chip made by a big-name company whose reputation matters, where the imaging sensor chip does the signing itself.

So, Sony or Texas Instruments or Canon starts manufacturing a CCD chip that crypto-signs its output. And this chip "can't" be messed with in the same way that other crypto-signing hardware "can't" be messed with.

That doesn't seem too far-fetched to me.
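To make that concrete, here's a minimal sketch of the signing step using Ed25519 from Python's cryptography package. All the names here are made up for illustration; on a real chip the private key would be fused into silicon and never exposed to general-purpose software:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-in for a key fused into the sensor at the factory.
    # (Assumption: real hardware never exposes this key to software.)
    sensor_key = Ed25519PrivateKey.generate()

    def capture_and_sign(raw_image: bytes) -> tuple[bytes, bytes]:
        # The signature is produced "inside" the sensor, over the raw pixels.
        return raw_image, sensor_key.sign(raw_image)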

* edit: As I think about it, what's more likely is that e.g. Apple starts promising that any "iPhoneReality(tm)" image, which is digitally signed in a certain way, cannot have been faked and was certainly taken by the hardware it 'promises' to be (e.g. the iPhone 25).

Regardless of how they implement it at the hardware level to maintain this guarantee, creating fake images that carry a valid signature is going to be a major target for security researchers.

So, we will have some level of trust that the signature "works", because it is always being attacked by security researchers. Just like our crypto methods work today. There will be a cat-and-mouse game between manufacturers and researchers/hackers, and we'll probably know years in advance when a particular implementation is becoming "shaky".
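Verification is the easy half: anyone can check an image against a public key the manufacturer publishes. A sketch, continuing the same made-up names:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def is_authentic(image: bytes, signature: bytes, vendor_key: Ed25519PublicKey) -> bool:
        # Proves which key signed the bytes, not that the scene in
        # front of the lens was real (see the screen-photo attack below).
        try:
            vendor_key.verify(signature, image)
            return True
        except InvalidSignature:
            return False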


How would such a camera ensure you're not taking a picture of a picture?


Just get a really, really nice screen to display deepfakes on before photographing them.


Maybe have the signed content include a timestamp and location? But... GPS can be spoofed, I think.
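A sketch of what that signed payload might look like (the format is entirely made up, and spoofed GPS would still poison the location field):

    import hashlib, json, time

    def build_payload(image: bytes, lat: float, lon: float) -> bytes:
        # Canonical JSON so signer and verifier hash identical bytes.
        return json.dumps({
            "image_sha256": hashlib.sha256(image).hexdigest(),
            "timestamp": int(time.time()),
            "gps": [lat, lon],  # only as trustworthy as the GPS fix itself
        }, sort_keys=True).encode()

    # The sensor would sign build_payload(...) instead of the raw image,
    # binding time and place to the pixels.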


In which case, if it's found out, that person's or publisher's credibility goes down the drain.

We will learn to trust sources (cryptographically signed) rather than just what we see.


Maybe at some point, for some cameras. But not soon after release, if the manufacturer took steps to protect the pipeline with hardware.


Are journalists savvy or ethical enough to give a shit? What about the people reading/viewing/listening to the news?



