In the long term, deepfakes won't make people believe fake videos; they will just make all confidence in video evidence disappear.
People will trust some fake videos until they see stunningly realistic videos of themselves hanging out with Hitler, or whatever.
Eventually we'll have video devices that use cryptographic signing and network connectivity to authenticate frames of video faster than they could be post-processed, producing trustworthy footage. However, conspiracy-minded people will just say the organizations running the validation are illegitimate.
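To make that concrete, here's a minimal sketch of what per-frame authentication could look like: each frame's hash is chained to the previous frame and tagged with a device key. This uses only the Python standard library; `DEVICE_KEY`, `sign_frame`, and `verify_chain` are hypothetical names, and a real device would presumably use asymmetric signatures in a secure element plus trusted timestamping rather than a shared HMAC secret.

```python
import hashlib
import hmac
import time

# Hypothetical device-held secret. A real scheme would use an asymmetric
# key pair in the camera's secure element so third parties could verify
# signatures without ever holding the secret.
DEVICE_KEY = b"example-device-secret"

def sign_frame(frame_bytes: bytes, prev_digest: bytes) -> dict:
    """Hash-chain a frame: each record commits to the previous frame's
    digest and a capture timestamp, so frames can't be reordered,
    dropped, or inserted later without breaking the chain."""
    digest = hashlib.sha256(prev_digest + frame_bytes).digest()
    tag = hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()
    return {"ts": time.time(), "digest": digest.hex(), "tag": tag}

def verify_chain(frames: list[bytes], records: list[dict]) -> bool:
    """Recompute the hash chain and check every authentication tag."""
    prev = b"\x00" * 32  # genesis value for the first frame
    for frame, rec in zip(frames, records):
        digest = hashlib.sha256(prev + frame).digest()
        expected = hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()
        if digest.hex() != rec["digest"] or not hmac.compare_digest(expected, rec["tag"]):
            return False
        prev = digest
    return True

if __name__ == "__main__":
    frames = [b"frame-0", b"frame-1", b"frame-2"]
    records, prev = [], b"\x00" * 32
    for f in frames:
        rec = sign_frame(f, prev)
        records.append(rec)
        prev = bytes.fromhex(rec["digest"])
    print(verify_chain(frames, records))  # True
    frames[1] = b"tampered"               # altering any frame breaks the chain
    print(verify_chain(frames, records))  # False
```

The chaining is what would make "faster than they could be post-processed" meaningful: tampering with one frame forces an attacker to re-sign every subsequent frame in real time before the next timestamped record lands.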
Video footage is going to lose evidentiary value, just as many other kinds of evidence have before it. Think about how you judge photos today, and you can probably apply the same skepticism to video in 10-20 years. Neither will lose all of its value as evidence, but both will be discounted as the technology advances.
In the long run I think you're correct, but looking at how poorly Boomers and even Gen Xers are adjusting to modern forms of media, it's safe to assume it will take a generation or even two before the general public's media literacy catches up with video. By that point, who knows what kind of tech will be available?
I'd like to think younger people would be able to verify sources or discern misinformation, but I don't have much hope. A scroll through /r/all on reddit will show you millennial and gen Z audiences getting whipped up into a frenzy with fake or out-of-context tweets or posts all the time.