So you're telling me that hallucinations (which by definition happen at the model layer) are an engineering problem? So if we just spin up the right architecture, hallucinations won't be a problem anymore?
I have doubts.
>So you're telling me that hallucinations (which by definition happen at the model layer) are an engineering problem?
Yes.
Hallucinations were a big problem with single-shot prompting, but no one is seriously doing that anymore. Instead you run an agentic refinement process with an evaluator in the loop: it takes the initial output, quality-checks it, and returns a pass/fail that either closes the loop or triggers another attempt, with tool calls along the way injecting verified, real-time data into the context for decision making. That lets you start building genuinely reliable systems on top of LLMs, with outputs that are effectively deterministic.
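Here's a minimal sketch of that loop, just to make the shape concrete. `generate`, `evaluate`, and `fetch_reference_data` are hypothetical stand-ins for the model call, the quality check, and the tool calls; they're not any real library's API, you'd wire in your own.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    passed: bool      # did the draft survive the quality check?
    feedback: str     # critique to feed into the next attempt

def generate(task: str, context: str, feedback: str = "") -> str:
    """Hypothetical model call: produce a draft answer for the task."""
    raise NotImplementedError  # replace with your provider/model call

def fetch_reference_data(task: str) -> str:
    """Hypothetical tool call: pull verified / real-time data into context."""
    raise NotImplementedError  # e.g. database lookup, search, internal API

def evaluate(task: str, draft: str, reference: str) -> EvalResult:
    """Hypothetical evaluator: check the draft against the reference data."""
    raise NotImplementedError  # could be hard validation or a second model call

def refine(task: str, max_attempts: int = 3) -> str:
    """Generate -> evaluate -> retry loop; fail loudly instead of guessing."""
    reference = fetch_reference_data(task)
    feedback = ""
    for _ in range(max_attempts):
        draft = generate(task, context=reference, feedback=feedback)
        result = evaluate(task, draft, reference)
        if result.passed:
            return draft          # evaluator closed the loop
        feedback = result.feedback  # otherwise, try again with the critique
    raise RuntimeError("evaluator never passed the output; escalate to a human")
```

The point of the structure is that the evaluator's pass/fail is the only exit, and the hard failure at the end means a draft that never passes gets escalated instead of silently shipped.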