
If the post-truth era has taught us anything, it's that humans aren't all that good at this either. It's probably a consequence of how logic works: to be reliable you need a narrow domain, and the more open-ended the application, the more likely you are to have to guess.


ChatGPT spits out the first thing it generates, with no self-awareness that it could be wrong, let alone the self-reflection to correct itself. It may seem like people can be that way too, but that's generally not how humans function.





