
This!

One could argue that humans will say "I don't know" when they don't know, but that ultimately depends on training. We currently don't have LLMs trained to answer "I don't know." I suspect that's partly due to the incentive to focus on what looks impressive: an AI system that too often says "I don't know" seems less impressive, even though it may be more useful, since you can rely on it more when it claims to know.

There are plenty of cases where humans are incentivized to sound convincing even when they make up answers. I've seen my fair share of this from students.
