
Offtopic, but there's an interesting parallel between making a mistake like that and AI hallucination. In a human, that behavior is read as a positive signal, but for many people, the same behavior is proof that LLMs are just a toy that will never rival human intelligence.


The difference is that an LLM isn't very good at saying "I'm a fucking idiot" and correcting its answer when asked to double-check (unless you handhold it in the direction of the exact error it's meant to be looking for). Humans recognize their own hallucinations. There's not really any promising work towards getting AI to do the same.
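
For what it's worth, the "double-check" loop I have in mind is roughly the sketch below, using the OpenAI Python SDK; the model name and the prompts are just placeholders I picked, not anyone's recommended setup. The point is that the reviewer pass isn't told where the suspected error is:

    # Minimal sketch of a self-check pass (hypothetical prompts/model name).
    from openai import OpenAI

    client = OpenAI()

    def double_check(question: str, answer: str) -> str:
        # Second pass: ask the model to verify an earlier answer
        # without pointing it at the specific suspected mistake.
        review = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Check this answer for errors. If you find one, "
                            "correct it; otherwise say it looks correct."},
                {"role": "user",
                 "content": f"Question: {question}\nAnswer: {answer}"},
            ],
        )
        return review.choices[0].message.content

In my experience that kind of blind second pass mostly rubber-stamps the original answer; it only reliably fixes things when you also hint at what's wrong, which is the handholding I mean.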


Have you tried it? They're actually pretty good at it in a lot of scenarios. It's not flawless, but they're only getting better.


Leela.ai was founded to work on problems like that.




