
so you are telling me that hallucinations (which by definition happen at the model layer) are an engineering problem? so if we just spin up the right architecture, hallucinations won't be a problem anymore? I have doubts


>so you are telling me that hallucinations (which by definition happen at the model layer) are an engineering problem?

Yes.

Hallucinations were a big problem with single-shot prompting, but no one is seriously doing that anymore. You run an agentic refinement process with an evaluator in the loop: it takes the initial output, quality-checks it, and returns a pass/fail to either close the loop or try again, using tool calls the whole time to inject verified, real-time data into the context for decision making. That lets you start actually building reliable/reasonable systems with deterministic outputs on top of LLMs.
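
In rough Python, the loop looks something like this. generate, evaluate, and fetch_verified_data are placeholders standing in for whatever model calls and tool integrations you actually use, not any particular API:

    MAX_RETRIES = 3

    def fetch_verified_data(task):
        # Tool call: pull verified / real-time data into the context.
        # Placeholder -- in practice this hits your DB, search index, APIs, etc.
        return {"facts": ["..."]}

    def generate(task, context, feedback=None):
        # LLM call that drafts an answer grounded in the injected context,
        # optionally revising based on the evaluator's feedback.
        return f"draft answer for {task!r} using {len(context['facts'])} facts"

    def evaluate(task, draft, context):
        # Evaluator pass (a second LLM call or rule-based checks): verify the
        # draft against the injected data and return pass/fail plus feedback.
        ok = "draft answer" in draft  # stand-in for a real quality check
        return ok, None if ok else "cite the injected facts explicitly"

    def answer(task):
        context = fetch_verified_data(task)
        feedback = None
        for _ in range(MAX_RETRIES):
            draft = generate(task, context, feedback)
            ok, feedback = evaluate(task, draft, context)
            if ok:
                return draft
        raise RuntimeError("evaluator never passed a draft; escalate to a human")

    print(answer("summarize Q3 revenue"))

The point is the structure: the generator is never the final word on its own output, and nothing leaves the loop until the evaluator passes it against the injected data.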


LLMs can’t really evaluate things. They’re far too suggestible and can always be broken with the right prompt no matter how many layers you apply.


okay, give me a link to an LLM-based system that does not hallucinate then



