> LLMs work in only one way: they try to predict what's said next.

Yes, obviously, but it's still trained to say certain things and not others. And it does consult an internal state, one derived from the gigabytes of parameters in its attention layers and from all the previous tokens; what it doesn't have is persistent internal state beyond the previously emitted tokens.
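
To make that concrete, here's a minimal sketch of greedy next-token decoding (assuming the Hugging Face transformers library and the small gpt2 checkpoint; the prompt is just for illustration). Every internal activation is recomputed on each call from the weights plus the token prefix; the only thing carried from step to step is the tokens appended to the prompt.

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tok("The capital of France is", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(5):
            # The "internal state" is just activations rebuilt from the
            # weights and the full prefix -- nothing persists between calls.
            logits = model(ids).logits
            next_id = logits[0, -1].argmax()          # most likely next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))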

So it's not completely pointless to ask "why did it use this specific word or turn of phrase?"


