In the words of Richard Stallman, it's "pretend intelligence".
The language may be structurally correct, and *some* of the facts as well, but any real grasp of what is being said is largely missing.
This is a direct result of a design built around probability: the overriding objective is plausibility, which is not to be confused with either facts or intelligence.
An LLM doesn't understand the difference between fact and fiction.
It just uses probability to choose the next word. Hopefully, there are enough facts in its training data to serve as a guide. But if not, it will just as readily use fiction to produce something that sounds plausible.
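
To make that mechanism concrete, here is a minimal sketch of next-word sampling: the model assigns a score to each candidate word, the scores are turned into probabilities, and the next word is drawn at random according to those probabilities. The vocabulary and numbers below are invented for illustration; a real LLM derives them from billions of learned parameters.

```python
import math
import random

# Hypothetical scores a model might assign to candidate next words after a
# prompt like "The capital of Australia is". The values are made up.
logits = {"Sydney": 2.1, "Canberra": 1.8, "Melbourne": 0.4, "purple": -3.0}

# Softmax converts raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

# The next word is sampled by plausibility, not by truth. In this made-up
# example the wrong answer "Sydney" is the single most likely pick.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_word)
```

Nothing in that loop checks whether the chosen word is true; it only checks whether it is likely.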
Anything an LLM produces simply cannot be trusted and is a poor example of "intelligence".