That seems unlikely, given that making up hurtful stories about people and spreading them via text or voice is still a thing. Everyone knows that anyone can make up any story they want without any technology whatsoever, and yet rumors spread anyway.
Not really. I can't think of a recent "leaked texts" incident that the participants couldn't easily and plausibly deny (e.g. Elon's supposed messages to Gates), and the same goes for voice messages. Even most images can already be dismissed as Photoshop if all the witnesses agree. The only medium that is somewhat hard to deny is video, like sex tapes, but even that's not too hard. I think there will soon be a race to make deep-learning-generated pics look completely indistinguishable from phone pics.
Perhaps that's somewhat true for famous people, although there are plenty of examples of false stories (without any forged evidence, literally just stories) causing real embarrassment and damage to reputation.
But it's even more true for non-famous people getting bullied in their social groups, both online and offline, and that's more what I was responding to (the "asshole friends" in the original comment).
There are a few different kinds of 'secure enclaves' implemented on chips, where you can have some degree of trust that their output "cannot" be faked.
E.g. crypto wallets, hardware signing tokens, etc.
We could imagine an imaging sensor chip, made by a big-name company whose reputation matters, that does the signing itself.
So, Sony or Texas Instruments or Canon start manufacturing a CCD chip that cryptographically signs its output. And this chip "can't" be messed with in the same way that other crypto-signing hardware "can't" be messed with.
That doesn't seem too far-fetched to me.
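To make that concrete, here's a rough sketch in Python (using the `cryptography` library) of what the sign/verify flow could look like. The key generation is a stand-in: on real hardware the private key would be fused into the sensor at manufacture and never leave it, and the manufacturer would publish the corresponding public key (or a certificate chain) for verifiers. Names are mine, not any vendor's API.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in: on real hardware this key would live inside the sensor
# and never leave it. Generated here only so the sketch runs.
sensor_private_key = Ed25519PrivateKey.generate()
# The manufacturer would publish this (or a certificate chain leading to it).
manufacturer_public_key = sensor_private_key.public_key()

def sensor_capture_and_sign(raw_pixels: bytes) -> bytes:
    """What the sensor itself would do: hash the raw frame and sign the hash."""
    digest = hashlib.sha256(raw_pixels).digest()
    return sensor_private_key.sign(digest)

def verify_image(raw_pixels: bytes, signature: bytes) -> bool:
    """What anyone holding the manufacturer's public key can do later."""
    digest = hashlib.sha256(raw_pixels).digest()
    try:
        manufacturer_public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

frame = b"\x00\x01\x02\x03"  # pretend raw sensor output
sig = sensor_capture_and_sign(frame)
print(verify_image(frame, sig))            # True
print(verify_image(frame + b"\x00", sig))  # False: any change to the bytes breaks it
```

Note that any edit to the bytes invalidates the signature, which is the whole point, but it also means even benign processing (cropping, compression) would have to happen before signing or be attested separately.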
* edit:
As I think about it, what's more likely is that e.g. Apple starts promising that any "iPhoneReality(tm)" image, digitally signed in a certain way, cannot have been faked and was certainly taken by the hardware it claims to have been taken by (e.g. an iPhone 25).
However they implement that guarantee at the hardware level, it is going to be a major target for security researchers trying to create fake images that carry a valid signature.
So, we will have some level of trust that the signature "works", because it is constantly being attacked by security researchers, just like our crypto methods today. There will be a cat-and-mouse game between manufacturers and researchers/hackers, and we'll probably know years in advance when a particular implementation is becoming "shaky".
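Hand-waving how Apple would actually protect the key in hardware, the signed claim itself might look something like this sketch: the device signs a statement binding the image hash to the hardware model, and a verifier checks both the signature and the hash. All field names are invented for illustration, and in reality the per-device key would be certified by the vendor's PKI rather than having its public half trusted directly.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical: a per-device key, trusted here directly only to keep
# the sketch short; a real scheme would use a certificate chain.
device_key = Ed25519PrivateKey.generate()
vendor_public_key = device_key.public_key()

def attest(image: bytes, model: str) -> tuple[bytes, bytes]:
    # The signed claim binds the image hash to a statement about the hardware.
    # Field names are made up for illustration.
    claim = json.dumps(
        {"image_sha256": hashlib.sha256(image).hexdigest(), "model": model},
        sort_keys=True,
    ).encode()
    return claim, device_key.sign(claim)

def check_attestation(image: bytes, claim: bytes, sig: bytes) -> bool:
    try:
        vendor_public_key.verify(sig, claim)  # the claim really came from the device
    except InvalidSignature:
        return False
    # ...and the claim really describes *this* image.
    return json.loads(claim)["image_sha256"] == hashlib.sha256(image).hexdigest()

claim, sig = attest(b"raw image bytes", "iPhone 25")
print(check_attestation(b"raw image bytes", claim, sig))  # True
print(check_attestation(b"tampered bytes", claim, sig))   # False
```

The researchers' target in the cat-and-mouse game would be extracting the device key or getting the hardware to sign pixels it never captured; the math itself is the easy part.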