If the post-truth era has taught us anything, it's that humans aren't all that good at this either. It's probably a consequence of how logic works - to be reliable, you need a narrow domain; the more open-ended the application, the more likely it is to require guessing.
ChatGPT spits out the first thing it generates, with no awareness that it could be wrong, let alone the self-reflection to correct itself. People may sometimes seem to behave that way, but this is generally not how humans function.