
And if you don't see how the example of the human brain throws a wrench in your analogy, that would explain why you'd think it nonsensical, because it's exactly what's relevant here.

> We also know how they work: they are token predictors. That is all they are and all they can do

Ah, there it is: you've betrayed a deep lack of understanding of the relevant disciplines (neuroscience, cognition, information theory) needed to even appreciate the many errors you've made here.

You sure understand the subject matter and couldn't possibly have anything left to learn. Enjoy.

https://en.wikipedia.org/wiki/Predictive_coding



I'm aware of the theories that LLM maximalists love to point to over and over that try to make it seem like LLMs are more like human brains than they are. These theories are interesting in the context of actual minds, but you far overextend their usefulness and application by trying to apply them to LLMs.

We know as a hard fact that LLMs do not understand anything. They have no capacity to "understand". The constant, intractable failure modes they continually exhibit are clear byproducts of this fact. By continuing to cling to the absurd idea that there is more going on than token prediction, you make yourself look like the people who kept insisting there was more going on with past-generation chat bots even after being shown the source code.
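(To be concrete about what "token prediction" means here, the generation loop is roughly the sketch below. model_logits is a hypothetical stand-in for a real model's forward pass; the rest is just the generic score-sample-append-repeat loop, not any particular vendor's implementation.)

    # Rough sketch of autoregressive "token prediction" (illustrative only).
    # model_logits is a hypothetical stand-in for a real LLM forward pass.
    import numpy as np

    def model_logits(token_ids, vocab_size=50_000):
        # Hypothetical: a real model scores every vocabulary token given the
        # tokens so far. Here we fake it with deterministic random scores.
        rng = np.random.default_rng(seed=len(token_ids))
        return rng.normal(size=vocab_size)

    def generate(prompt_ids, steps=10, temperature=1.0):
        ids = list(prompt_ids)
        rng = np.random.default_rng(seed=0)
        for _ in range(steps):
            logits = model_logits(ids)      # score every candidate next token
            probs = np.exp(logits / temperature)
            probs /= probs.sum()            # softmax -> probability distribution
            next_id = int(rng.choice(len(probs), p=probs))
            ids.append(next_id)             # append the sampled token and repeat
        return ids

    print(generate([101, 2009, 2003], steps=5))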

I have understood all along why you attempt to extend my colored-shape example to the brain, but your basis for doing so is complete nonsense, because a) we do not have the understanding of the brain needed to do this, and b) it's completely beside the point, because we know that minds do arise from brains. My whole point is that an LLM is an illusion of a mind, an illusion that works because it outputs words, which we are hard-wired to associate with other minds, especially when they seem to "make sense" to us. If instead of words you use something with no underlying meaning, like colored shapes, the illusion of a mind goes away and you can see an LLM for what it is.


> We know as a hard fact that LLMs do not understand anything.

A bit late to the party, but we most certainly do not even know what "understanding" means.



