
Making an assertion while being wrong does not mean you were guessing. You were simply wrong. Yet the vast majority of the time, when we are not guessing, we are correct. And when we are guessing, we can convey the ambiguity we feel. Guessing is not defined by the guarantee of accuracy.

LLMs struggle to convey uncertainty. Some fine-tuning has taught them to point out gaps more aggressively, but a model doesn't really know what it knows, even if the probabilities under the hood vary. Further, ask one whether it's sure about something and it will frequently assume it was wrong, even as it proceeds to spit out the same answer.
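For concreteness, a rough sketch of what those "under the hood" probabilities look like for an open-weights model; the model name and prompt are just placeholders, and the top next-token probabilities are only a crude, uncalibrated proxy for confidence:

    # Sketch: read next-token probabilities out of a small open-weights LM.
    # The sampled text carries no explicit confidence, but the distribution does.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"  # placeholder; any causal LM on the Hub works
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    model.eval()

    prompt = "The capital of Australia is"
    inputs = tok(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next token only
    probs = torch.softmax(logits, dim=-1)

    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        # Prints the five most likely continuations and their probabilities;
        # the spread between them is one (rough) notion of the model's uncertainty.
        print(f"{tok.decode(int(idx))!r}: {p.item():.3f}")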



>Making an assertion while being wrong does not mean you were guessing. You were simply wrong.

This distinction is made up. It doesn't really exist in cognitive science. What does "simply wrong" even mean, really? Why is it different?

>Yet the vast majority of the time, when we are not guessing, we are correct.

We're not good at knowing when we're not guessing in the first place. Just because it doesn't feel that way to you doesn't mean it isn't so.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3196841/

If you asked most of the participants in this paper, they'd tell you, straight-faced and fully believing it, that decision X was the better choice, and they'd give you elaborate reasons why.

The clincher in this paper (and similar ones) isn't just that a person makes a decision without knowing why. It's that they have no idea why they made it, don't realize they don't know, and fully believe their own rationalization.

What you feel holds no water.

>But it doesn’t really know what it knows

Yeah and neither do people.


I'm not the person you're arguing with, but going back to the original meta-point of this thread, I too think you're vastly overestimating people's introspective access to their internal states, including states of knowing.

The distinction you're drawing between "guessing" and "being sure of something but being wrong about it" is hazy at best, from a cognitive science point of view, and the fact that it doesn't _feel_ hazy to a person's conscious experience is exactly why this is interesting and maybe even philosophically important.

More briefly, people are just horseshit at knowing themselves, their motivations, their state of knowledge, the origins of their knowledge. We see some of these 'failures' in LLMs, but we (as a general rule, the 'royal we') are abysmal at seeing it in ourselves.


>But it doesn’t really know what it knows

To be fair, we don't know what we know, either. Epistemology is the bedrock that all of philosophy ultimately rests on. If it were a solved problem, nobody would still be talking about it or studying it. It's not.

One of the most interesting things about current ML research is that thousands of years of philosophical navel-gazing is suddenly relevant. These tools are going to teach us a lot about ourselves.



