
It's not the same. This is something I've observed many times but have never quite been able to put a name to it.

When you lower the friction of an action sufficiently, it causes a qualitative change in the emergent behavior of the whole system. It's like how a little damping makes the difference between a bridge you can safely drive over and a Galloping Gertie that resonates until it collapses.

When a human has to choose to repeat a piece of information and put some effort into regurgitating it, there is a natural decay factor in the system: people will sometimes not bother to repeat something if it doesn't seem valuable enough to them. Sure, things like urban legends and old wives' tales exploit bugs in our information prioritization. But, overall, it has the effect of slowly winnowing out nonsense, misinformation, and other low value stuff. Meanwhile, information that continues to be useful continues to be worth the effort of repeating.
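A toy way to see why this is a qualitative change rather than just a slowdown (a minimal sketch of my own, with made-up numbers, not anything from the thread): treat each retelling as a branching process. What matters is the expected number of new retellings spawned by each retelling; below 1, an item eventually dies out, at or above 1, it keeps circulating.

    # Branching-process sketch (illustrative only; the `reach`, `value`, and
    # `friction` numbers are assumptions). Each copy of an item reaches `reach`
    # people, each of whom repeats it with probability value * (1 - friction).
    # Expected retellings per retelling: R = reach * value * (1 - friction).
    # R < 1 -> the item dies out; R >= 1 -> it keeps circulating.

    def reproduction_number(value, friction, reach=10):
        """Expected new retellings spawned by one retelling."""
        return reach * value * (1 - friction)

    for friction in (0.7, 0.0):  # effortful repetition vs. zero-friction rebroadcast
        print(f"friction = {friction}")
        for value, label in ((0.2, "low-value rumor"), (0.9, "useful fact")):
            r = reproduction_number(value, friction)
            fate = "dies out" if r < 1 else "keeps circulating"
            print(f"  {label:15s} R = {r:.2f} -> {fate}")

With friction at 0.7, the rumor lands at R = 0.6 and dies out while the useful fact survives at R = 2.7; drop friction to zero and both stay above 1, nonsense included. Friction changes which side of the threshold things land on, not just how fast they spread.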

Compared to the print and in-person worlds before, things got much worse just with social media, where a human was still in the loop but the effort to rebroadcast was nil. This is exactly why we saw a massive rise in misinformation over the past couple of decades.

With ChatGPT in the loop and humans completely out of it, we will turn our information systems into Galloping Gerties, and they will resonate with nonsense and lies until the whole system falls apart.

We are witnessing the first cracks now. Look at George Santos, a candidate who absolutely should have never won a single election but managed to because information pipelines about candidates are so polluted with junk and nonsense that voters didn't even realize he was a con man. Not even a sophisticated one, just a huckster able to hide within the sea of information noise.



Humans still have to choose to transcribe ChatGPT noise into Wikipedia, because automated attempts to do so will be too easy to identify and squash.

Wikipedia already does organization- and source-level IP blocking for input sources that have proven sufficiently malicious.


The question, then, is whether human-borne friction is enough to slow the diffusion of GPT-derived "knowledge" back onto Wikipedia through human inputs. It is very easy to imagine that GPT-likes could push misinformation into a population and change social/cultural/economic understandings of how reality works. That would then slowly seep back into "knowledge bases" as the new modes of reasoning become "common sense".


Wikipedia content requires citation.

I think the worst-case scenario is that some citable sources get fooled by ChatGPT and Wikipedians will have to update their priors on what a "reliable source" looks like.


Sure, we need damping in our information systems and our social trust systems. It's clearly not there now. If the problem gets out of hand to the point that we're forced to address it, I think that's a good thing overall.


> But, overall, it has the effect of slowly winnowing out nonsense, misinformation, and other low value stuff. Meanwhile, information that continues to be useful continues to be worth the effort of repeating.

Unfortunately, in some (many?) cases the very fact that some "information" exists is the "usefulness", independent of the usefulness/accuracy of the information itself. The unsubstantiated "claim" that crime is up can result in more funding for police, even if the claim is false. The people profiting from the increase in police spending don't care whether the means used to obtain it are true or not.

Over the long term, the least-expended-energy state (accepting the truth) will win out, but people have some incentive/motivation to avoid that in the shorter term.



