
Dictionary substitution is so much closer to symbolic AI than to what LLMs are that I don't think it can serve as an example for the "hallucination" wording. In an expert system, if a rule is wrong or inadequate, it's clearly a bug, just not a bug in the rule engine but in the rules.
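
To make the contrast concrete, here's a minimal sketch of dictionary substitution (the RULES table and translate function are made up for illustration): when the output is wrong, you can point at the exact rule that caused it.

    # Minimal rule-based substitution: every output is traceable to a rule.
    RULES = {
        "chat": "cat",    # correct rule
        "chien": "cat",   # wrong rule -- should map to "dog"
    }

    def translate(word: str) -> str:
        # Deterministic lookup: no guessing involved.
        return RULES.get(word, "<unknown>")

    print(translate("chien"))  # prints "cat": a bug, located in the rules, not the engine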

But in an LLM there are no explicit rules (or only in some obscure background layers); it's all statistics. Statistics stacked on top of more statistics in a fascinating self-stabilizing way, where each guess lends some support to its peers, like the flimsy cards propping each other up in a house of cards. But it's statistical guesswork all the way down. Even the most correct answers are merely statistics playing out favorably, in a case that may or may not have been easy to get right.
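
As a toy illustration of that last point (the probabilities below are invented, not taken from any real model), next-token generation is just repeated sampling from a distribution; a "correct" continuation is simply one that happened to be likely:

    import random

    # Invented next-token distribution for the prompt "The capital of France is".
    # A real LLM produces a distribution like this over its whole vocabulary at every step.
    next_token_probs = {
        "Paris": 0.92,   # the "correct" answer is just the most probable guess
        "Lyon": 0.05,
        "Berlin": 0.03,  # a low-probability path that would read as a "hallucination"
    }

    tokens, weights = zip(*next_token_probs.items())
    sample = random.choices(tokens, weights=weights, k=1)[0]
    print(sample)  # usually "Paris", occasionally not -- same mechanism either way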


