
Well, I also like the facts that:

* Everyone can access it. Not just sophisticated actors anymore

* It might make us rethink the entire "truth" chain, on how we source our information

Of course, the two go hand-in-hand, but the latter point is overdue: while video is perhaps more glaring, it's something that's needed in a lot of other areas as well (text -- news articles, messages, mail; sound -- phone calls, etc; image -- photoshop, though we start to get used to it).



What's good about letting every script kiddie use it? I'd think the less usage, the better?

I agree that provenance could become more important, but I don't see it changing how memes spread. For a lot of people it's just entertainment and they don't care whether it's true.


> What's good about letting every script kiddie use it?

I think parent was making the point that once this tech is in the hands of the common man, its value for sophisticated players might diminish sharply. The same way an undisclosed 0day in a high-value target (say, iOS) is extremely valuable: once it gets disclosed and everyone knows about it, people can come up with workarounds and eventual fixes, rendering the threat basically defanged.

I'm leaning towards the same school of thought. I'd rather see this tech widely accessible and hence all video/picture material henceforth considered completely untrustworthy, than living in the misguided notion that video evidence is trustworthy.


> I'm leaning towards the same school of thought. I'd rather see this tech widely accessible and hence all video/picture material henceforth considered completely untrustworthy, than living in the misguided notion that video evidence is trustworthy.

This is why I am all for the current hype and the apps. Everyone with access to electronic media should now know that videos are easier to manipulate than ever. This undermines the attack vector of supporting fake news with fake videos. (At least it should; I do not know about the psychological side of it. Maybe even knowledge of a video's falsehood does not diminish its impact that much. Still, I think it is best to make video fakeability common knowledge.)


I used to work for a company that made verifiable audio and video for law enforcement. Even a constantly running timecode could be circumvented in the early '90s. (They used a dedicated audio channel to encode a hash of the last few seconds - on analogue.) It actually takes a lot of effort to show that a video hasn't been, or could not have been, doctored.
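The idea behind that side-channel scheme - each moment of the recording carries a hash of the last few seconds, chaining the stream together so a splice breaks verification - can be sketched roughly like this (the chunk model, window size, and function names are illustrative, not the company's actual design):

```python
import hashlib

WINDOW = 3  # hypothetical: tag each chunk with a hash of the previous 3 chunks


def authenticate_stream(chunks):
    """Pair each audio chunk with a SHA-256 tag over the preceding
    WINDOW chunks, analogous to encoding the tag on a side channel."""
    tagged = []
    for i, chunk in enumerate(chunks):
        history = b"".join(chunks[max(0, i - WINDOW):i])
        tag = hashlib.sha256(history).hexdigest()
        tagged.append((chunk, tag))
    return tagged


def verify_stream(tagged):
    """Recompute every tag from the received chunks; any edited or
    spliced chunk causes a mismatch in a later tag."""
    chunks = [chunk for chunk, _ in tagged]
    for i, (_, tag) in enumerate(tagged):
        history = b"".join(chunks[max(0, i - WINDOW):i])
        if hashlib.sha256(history).hexdigest() != tag:
            return False
    return True
```

Replacing any chunk invalidates the tags of the chunks that follow it, which is why an attacker can't simply cut-and-paste a segment without re-forging the whole downstream chain.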


> I'm leaning towards the same school of thought. I'd rather see this tech widely accessible and hence all video/picture material henceforth considered completely untrustworthy, than living in the misguided notion that video evidence is trustworthy.

On the flip side, this enables revenge-porn deepfakes and person-to-person harm on a much, much more common level. If you are someone who becomes briefly famous on the internet, expect that a TON of harmful or disturbing videos of "you" will show up in short order. I'm not sure we have good regulations and laws in place yet to protect people against the misuse of this tech.


I understand, but I believe it is obvious now that technology like this cannot be legislated away or controlled. The moment that people discovered that ML can be used to generate fakes was the point of no return.

There's not a single thing in the chain of tools and knowhow required to produce this that can be kept out of the hands of malicious actors.

The best course of action now is to rapidly educate people of the implications and hope that the initial wave of abuse will be without too many casualties.


Think about Photoshop - back before it existed, people assumed magazine ads showed real people (not images carefully, manually edited by artists). Then Photoshop and tools like it became commonly available. Over time, the tools being more widely available caused more people to scrutinize and be suspicious of what they saw. Ads featuring heavily manipulated photos had the opposite of their original intent. People picked up the term 'shopped to mean an image was fake or deceptive. Kids in middle school learn how to do it.

Now, when you see a before and after photo, or an advertisement, the default is to assume it has been manipulated. That change came about with broad access to once-rare tools.


> Everyone can access it. Not just sophisticated actors anymore

Why do you like this?


“When everyone’s special, no one is.”

Basically, a (good) deepfake was out of reach for all but a few until recently, so if some bad actor with money and resources wanted to fake a video they could do it and few people would even think it could be faked.

Nowadays, the bar is way higher for this, as every video can be suspect.


> every video can be suspect.

But how many will suspect them? If 10 million people watch a video but only (generously) 1 million people think or are at least willing to consider it may be fake, that is very effective disinformation. We have to remember that the average person (especially in older generations) is likely unaware that this sort of technology is possible and easily accessible.


More people will suspect them than before. Just as photoshopping is now well enough known that terms for it have entered the common lexicon.


As people with the knowledge that video is no longer trustworthy, it is incumbent on us to share that message with other people, so it does become common knowledge.




