Yes, this is a growing stage. In a year or two, LLMs will produce Wikipedia-quality or even research-paper-quality text. The spam they generate might be better written than most human-written content.
I agree, that's the problem, but I think it's still somewhat complicated.
Imagine someone posting an extremely well-written and insightful postmortem of an outage. It would show advanced and accurate usage of all kinds of tools to get to the bottom of the problem. It would be extremely useful reading for anyone investigating a similar outage, except that the outage never actually occurred.
Now you have ground-truth accuracy and misleading fiction at the same time. Whether that makes the post useful depends entirely on the conclusions you draw from it.