Fully agree. In recent weeks I've also started to consider LLMs in a wider context: the destruction of all trust in the web.
The enshittification of search engines, making social media verification meaningless, locking down APIs that used to be public, destroying public datasets, spreading lies about legacy media, the ease of deploying bots that can sound human in short bursts of text... it's all leading towards making it impossible to verify anything you read online.
The fearmongering around deepfakes from a few years back is coming true, but the scale is even bigger. Turns out, there won't be Web 3.0.
You could never believe everything you read online, but with enough time and effort you could chase any claim back to its original source. For example, you could read something on Statista.com, check the credits of that dataset, and visit the source to verify it. Or you'd randomly encounter some quote and then visit your favourite Snopes-like website to confirm the person actually said it.
That's what's under attack. The "middleware" will still be there, but the source is going to be out of your reach. Hallucinations are not a bug, but a feature.
If you can't trace something back to its source, it's suspect. That was true then too. I suppose you're just concerned there's a firehose of disinformation now.
So perhaps we have to just slough off the internet completely, the way we always have for things like weekly rags about "Bat Boy" or whatever.
I hate to see the internet go, but we'll always have Paris.
Genuine question - how so? If I want to find stuff out I go to Wikipedia, the NYT, the Guardian, HN, linked sites and so on, and I'm not aware of that lot being noticeably less trustworthy than in the past. If anything, I find information more trustworthy than before: there are a lot of long-form interviews on YouTube from all sorts of people, where you can get their thoughts directly rather than editorialised and distorted.
I mean, the web was never a place where things were vetted - you've always been able to put any sort of rubbish on it, and so you've always had to be selective if you want accuracy.