
That is funny, because of all the problems with LLMs, the biggest one is that they will lie/hallucinate/confabulate to your face before saying "I don't know", much like those leaders.



Is this inherent to LLMs by the way, or is it a training choice? I would love for an LLM to talk more slowly when it is unsure.

This topic needs careful consideration and I should use more brain cycles on it. Please insert another coin.


It's fairly inherent. Talking more slowly wouldn't make it more accurate, since it's a next-token predictor: you'd have to somehow make it produce more tokens before "making up its mind" (i.e., outputting something that's sufficiently correlated with a particular answer that it's a point of no return), and even that is only useful to the extent it has memorised a productive algorithm.

You could make the user interface display the output more slowly "when it is unsure", but that'd show you the wrong thing: a tie between "brilliant" and "excellent" is just as uncertain as a tie between "yes" and "no".
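A toy illustration of that last point: the entropy of the next-token distribution looks identical for a harmless stylistic tie and a substantive factual one, so it alone can't tell you when the model is unsure about something that matters. The probabilities below are made up for the sake of the example.

  import math

  def entropy(probs):
      """Shannon entropy (in bits) of a next-token probability distribution."""
      return -sum(p * math.log2(p) for p in probs.values() if p > 0)

  # Hypothetical next-token distributions after two different prompts.
  stylistic_tie = {"brilliant": 0.49, "excellent": 0.49, "good": 0.02}
  factual_tie   = {"yes": 0.49, "no": 0.49, "maybe": 0.02}

  print(entropy(stylistic_tie))  # ~1.12 bits
  print(entropy(factual_tie))    # ~1.12 bits -- same uncertainty signal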


Telling the LLM to walk through the response step by step is a prompt-engineering thing, though.


It is. Studied in the literature under the name "chain of thought" (CoT), I believe. It's still subject to the limitations I mentioned. (Though the output is more persuasive to a human even when the answer is the same, so you should be careful.)
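For concreteness, a minimal sketch of what that prompt-level trick looks like. The `complete` function and the example question are placeholders, not any particular library's API; the point is just that the CoT variant asks for intermediate steps so more tokens are produced before the answer-committing ones.

  def complete(prompt: str) -> str:
      """Placeholder for whatever LLM completion call you use."""
      raise NotImplementedError

  question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
              "more than the ball. How much does the ball cost?")

  # Direct prompt: the model must commit to an answer with its first tokens.
  direct_prompt = f"Q: {question}\nA:"

  # Chain-of-thought prompt: ask for reasoning steps before the final answer,
  # pushing the "point of no return" later in the output.
  cot_prompt = f"Q: {question}\nA: Let's think step by step."

  # answer_direct = complete(direct_prompt)
  # answer_cot = complete(cot_prompt)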



