Okay, I can't defend that, because in reality I'm speaking purely from my own subjective experience. I will say that in my experience it isn't close at all. I'll also grant that a lot of the things ChatGPT gets wrong for me are things Wikipedia simply won't cover at all; but then, ChatGPT sounding confident when it's wrong is basically the whole problem.
One way of looking at it is that Wikipedia has a transparent and auditable way of correcting and updating false information, which makes it inherently more reliable and trustworthy than tensor weights fine-tuned by unreproducible human feedback.
> Wikipedia has well documented and explored issues related to vandalism, bias, and misinformation.
Last I read about this, the error rate in Wikipedia was actually lower than in the Encyclopedia Britannica, by a measurable amount.
This was a while back, and admittedly the comparison only counted articles that existed in both (which probably flatters Wikipedia, since those kinds of “boring” articles attract less vandalism…), but it’s not immediately a given that Wikipedia is just objectively bad at being accurate.