These are simply implementation failures. You should be using LLMs to gather information and references that you can then verify. There are even hallucination detectors that do some of this checking for you automatically.
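As a minimal sketch of what "verify the references" can look like in practice, the snippet below checks whether URLs cited by a model actually resolve. The URLs and the `check_references` helper are hypothetical placeholders, not any particular hallucination-detection tool; it only uses the Python standard library.

```python
import urllib.request
import urllib.error

def check_references(urls, timeout=10):
    """Return a dict mapping each cited URL to True if it resolves, False otherwise."""
    results = {}
    for url in urls:
        # HEAD request: we only care whether the resource exists, not its body.
        req = urllib.request.Request(
            url, method="HEAD",
            headers={"User-Agent": "reference-checker/0.1"},
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                results[url] = resp.status < 400
        except (urllib.error.URLError, ValueError):
            # Unreachable, malformed, or nonexistent: treat as unverified.
            results[url] = False
    return results

if __name__ == "__main__":
    # Placeholder citations an LLM might have returned.
    cited = [
        "https://example.com/paper-that-exists",
        "https://example.com/paper-that-might-not",
    ]
    for url, ok in check_references(cited).items():
        print(("OK       " if ok else "MISSING  ") + url)
```

A check like this only confirms that a link exists, not that it says what the model claims, so it is a first filter before you read the source yourself.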
If you are treating LLMs like all-knowing crystal balls, you are using them wrong.