LLMs string together words using probability and randomness. This makes their output sound extremely confident and believable, but it is often bullshit. It is not comparable to thought as seen in humans and other animals.
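For what it's worth, the "probability and randomness" part just means the model assigns a score to every candidate next token, softmaxes the scores into a distribution, and samples from it. Here's a minimal sketch with made-up token scores (numpy only, no real model); cranking the temperature flattens the distribution, so the output drifts from the most likely continuation toward the less sensible ones:

```python
import numpy as np

# Toy next-token scores for some prompt like "The sky is ..."
# These numbers are invented for illustration, not from any real model.
tokens = ["blue", "falling", "green", "a lie"]
logits = np.array([3.0, 1.0, 0.5, 0.2])

def sample_next_token(logits, temperature=1.0, rng=None):
    """Softmax the logits and draw one token index at random.

    Higher temperature flattens the distribution, so unlikely
    continuations get picked more often.
    """
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(0)
for t in (0.2, 1.0, 2.0):
    picks = [tokens[sample_next_token(logits, t, rng)] for _ in range(10)]
    print(f"temperature={t}: {picks}")
```

Whatever gets sampled comes out in the same fluent, confident register, which is why the wrong continuations read just as believably as the right ones.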
One of the differences is that humans are very good at not making word associations we think don't actually exist, which lets us outperform LLMs even without a hundred billion dollars' worth of hardware strapped into our skulls.
that's called epistemic humility, or knowing what you don't know, or at least keeping your mouth shut, and in my experience humans actually suck at it, in all of those forms