
"Hallucination" is just to term we use to say "this result is not what it should be". The model always uses the very same process, it does not do one thing for "hallucinations" and something else for "correct" results.

In a nutshell, it is always predicting the next token from a probability distribution conditioned on the preceding tokens. That's it.

All other interpretations are speculative.
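To make the point concrete, here is a minimal sketch (my own toy code, not any specific model's API) of the decoding step: logits go through a softmax, and one token is sampled. The exact same code path produces every token, whether the output ends up labeled "correct" or a "hallucination".

```python
import math
import random


def sample_next_token(logits, temperature=1.0):
    """Sample one token id from the softmax distribution over logits.

    `logits` is one score per vocabulary item from the model's forward
    pass. There is no separate branch for good or bad outputs; this
    single softmax-and-sample step emits every token.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

Greedy decoding is just the temperature-to-zero limit of the same step: take the argmax instead of sampling.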


