
Humans still have to choose to transcribe ChatGPT noise into Wikipedia, because automated attempts to do so would be too easy to identify and squash.

Wikipedia already does organization- and source-level IP blocking for input sources that have proven sufficiently malicious.



The question, then, is whether human-borne friction is enough to slow the diffusion of GPT-derived "knowledge" back onto Wikipedia through human inputs. It is easy to imagine GPT-likes feeding misinformation to a population and shifting social/cultural/economic understandings of how reality works. That would then slowly seep back into "knowledge bases" as the new modes of reasoning become "common sense".


Wikipedia content requires citation.

I think the worst-case scenario is that some citable sources get fooled by ChatGPT, and Wikipedians will have to update their priors on what a "reliable source" looks like.



