Source?



LLMs string together words using probability and randomness. This makes their output sound extremely confident and believable, but it may often be bullshit. This is not comparable to thought as seen in humans and other animals.
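For what it's worth, the "probability and randomness" part is literal: at each step the model turns scores over candidate words into a probability distribution and samples the next word from it. A toy sketch of that sampling step (made-up vocabulary and scores, not any real model):

    import numpy as np

    # Toy illustration only: a fake "next word" sampler, not an actual LLM.
    # The model assigns each candidate word a score (logit); softmax turns
    # those scores into probabilities, and one word is drawn at random.
    rng = np.random.default_rng(0)

    vocabulary = ["the", "cat", "sat", "confidently", "wrong"]
    logits = np.array([2.0, 1.5, 0.5, 1.0, 0.8])  # made-up scores

    def sample_next_word(logits, temperature=1.0):
        # Higher temperature -> flatter distribution -> more randomness.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(vocabulary, p=probs)

    print(sample_next_word(logits))                   # usually a high-scoring word
    print(sample_next_word(logits, temperature=2.0))  # more likely to surprise

Note the sampler has no notion of whether the chosen word is true, only of how probable it is; that is the gap the comment is pointing at.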


Unfortunately, that is exactly what humans are doing an alarming fraction of the time.


One of the differences is that humans are very good at not asserting word associations we believe don't exist, which lets us outperform LLMs even without a hundred billion dollars' worth of hardware strapped into our skulls.


That's called epistemic humility: knowing what you don't know, or at least keeping your mouth shut. In my experience, humans actually suck at it, in all of those forms.


Ask an LLM.



