The question, then, is whether human-borne friction is enough to slow the diffusion of GPT-derived "knowledge" back onto Wikipedia through human inputs. It is easy to imagine GPT-likes feeding misinformation to a population and shifting social/cultural/economic understandings of how reality works. That would then slowly seep back into "knowledge bases" as the new modes of reasoning become "common sense".
I think the worst-case scenario is that some citable sources get fooled by ChatGPT, and Wikipedians will have to update their priors on what a "reliable source" looks like.
Wikipedia already blocks IPs at the organization and source level for contributors that have proven sufficiently malicious.