
> For example, if Newton's laws are in conflict with another fact, the LLM will defer to the fact that it finds more probable in context.

Which is the correct thing to do. If the context is, for example, an explanation of an FTL drive in a science-fiction story, both LLMs and humans would be correct to set Newton aside.

LLMs aren't Markov chains; they don't output naive word-frequency predictions. They build a high-dimensional statistical representation of the entirety of their training data, from which completions are then sampled. We already know this representation can identify and encode concepts as diverse as "odd/even", "fun", "formal language", and "Golden Gate Bridge". "Fictional context" vs. "real-life physics" is a distinction they can encode too, just like people can.
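
To make the Markov-chain contrast concrete, here's a minimal sketch (toy corpus and all names are made up, purely illustrative). A bigram model conditions on only the single preceding word, so it can't use a "story" vs. "lab" cue from earlier in the text, whereas an LLM's next-token distribution is conditioned on the whole context:

    import random
    from collections import Counter, defaultdict

    # Toy corpus: a fictional sentence and a real-life sentence mixed.
    corpus = (
        "in the story the ship ignores newton and jumps to warp "
        "in the lab the ship obeys newton and stays put"
    ).split()

    # Bigram table: next-word counts conditioned on one preceding word.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def markov_next(word):
        # Naive frequency prediction: P(next | previous word only).
        counts = bigrams[word]
        return random.choices(list(counts), weights=counts.values())[0]

    # After "ship", the chain samples "ignores" or "obeys" by raw corpus
    # frequency; the fiction/real-life cue earlier in the text is lost.
    print(markov_next("ship"))

    # An LLM instead computes P(next token | the ENTIRE context), so
    # "story" vs. "lab" shifts the distribution. Schematically
    # (`model` is a stand-in for a trained transformer, not real code):
    #   logits = model(full_context_tokens)
    #   next_token = sample(softmax(logits))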


