
"Hallucinate" is a term of art, and does not imply a philosophical commitment to whether LLMs have minds. "Confabulation" might be a more appropriate term.

What is indisputable is that LLMs, even though they are 'just' word generators, are remarkably good at generating factual statements and accurate answers to problems, yet also regrettably prone to generating apparently equally confident counterfactual statements and bogus answers. That's all that 'hallucination' means in this context.

If this work can be replicated, it may offer a way to greatly improve the signal-to-bullshit ratio of LLMs, which would be both impressive and very useful.


