When it said that the vast majority of the population of what's now modern-day France spoke modern French as their native language, that was categorically false and shouldn't be treated with leniency or left open to interpretation.
My point is that cherry-picking flaws won't help improve anything for ChatGPT, Wikipedia, etc., but systematic approaches to discovering and modeling the information space and the related facts, queries, etc. would. Wikipedia is not free of issues either, and to my knowledge it does not let the end user see Wikipedia's confidence in a given fact, if it has one at all.
Statements like these are by nature speculative, not factual, so speakers are advised to express doubt and hedge against the uncertainty inherent in their views (and probably even against vicious rebuttals) by using appropriate language and terminology. But when a know-it-all bot, trained by its handlers to always pass as an authority figure, makes a mistake out of hubris or overconfidence, don't expect us to sit idle rather than call it out and refute its claims accordingly.
Beyond that, in my opinion, while human dialogue might hedge confidence, disclose conflicts of interest, etc., there are (assuming the exchange happens via text-based chat) much more efficient and effective ways to convey that information than padding the reply with non-actionable text like that.
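To make that concrete, here is a minimal, purely hypothetical sketch (in Python; the Claim type and its fields are made up for illustration, not any real ChatGPT or Wikipedia API) of what attaching machine-readable confidence metadata to a claim could look like, so a chat client can render or suppress it per the user's preference instead of burying it in hedging prose:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A factual claim plus machine-readable uncertainty metadata (hypothetical schema)."""
    text: str                                             # the claim as shown to the reader
    confidence: float                                      # stated confidence in [0, 1]
    sources: list[str] = field(default_factory=list)       # supporting citations, if any
    disclosures: list[str] = field(default_factory=list)   # e.g. conflicts of interest

# The disputed claim from this thread, tagged with low confidence instead of
# being asserted flatly or wrapped in extra hedging sentences.
claim = Claim(
    text="The vast majority of the population of what's now France spoke "
         "modern French as their native language.",
    confidence=0.2,
)

# The UI decides how (or whether) to surface the metadata.
print(f"{claim.text} (confidence: {claim.confidence:.0%})")
```

The point of the sketch is only that structured fields like these are actionable (filterable, sortable, hideable) in a way that an extra sentence of boilerplate hedging is not.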