
I am not a neuroscientist, but I think it's likely that LLMs (with 10s/100s of billions of parameters) and the human brain (with 1-2 orders of magnitude more neural connections[1]) process language in analogous ways. This process is predictive, stochastic, sensitive to constantly-shifting context, etc. IMO this accounts for the "unreasonable effectiveness" of LLMs in many language-related tasks. It's reasonable to call this a form of intelligence (you can measure it, solve problems with it, etc).
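
To make "predictive, stochastic, sensitive to constantly-shifting context" concrete, here is a toy sketch of the sampling loop an LLM runs at inference time. Everything in it is a placeholder I made up for illustration (the tiny vocab, the fake_logits stand-in for a real trained network); it shows the shape of the process, not anyone's actual implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "on", "a", "mat", "."]

    def fake_logits(context):
        # Stand-in for a trained model: a real LLM computes these scores from
        # billions of parameters; here we just derive them from the context so
        # they shift whenever the context shifts.
        seed = abs(hash(tuple(context))) % (2**32)
        return np.random.default_rng(seed).normal(size=len(vocab))

    def sample_next(context, temperature=0.8):
        logits = fake_logits(context) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                  # softmax: a distribution, not one "right" answer
        return rng.choice(vocab, p=probs)     # stochastic: sample rather than take the argmax

    context = ["the", "cat"]
    for _ in range(5):
        context.append(sample_next(context))  # each new token reshapes the context for the next
    print(" ".join(context))

The point is just that each sampled token gets appended to the context, which shifts the distribution for the next token, over and over.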

But language processing is just one subset of human cognition. There are other layers of human experience like sense-perception, emotion, instinct, etc. – maybe these things could be modeled by additional parameters, maybe not. Additionally, there is consciousness itself, which we still have a poor understanding of (but it's clearly different from intelligence).

So anyway, I think it's reasonable to say that LLMs implement one subset of human cognition (the part that has to do with how we think in language), but there are many additional "layers" of human experience that they don't currently account for.

Maybe you could say that LLMs are a "model distillation" of human intelligence, at 1-2 orders of magnitude less complexity. Like a smaller model distilled from a larger one, they are good at a lot of things, but they cover edge cases less well, and the accuracy/quality of their thinking suffers the more aggressively you distill.
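
In case "model distillation" is unfamiliar: the generic technique trains a small student network to match a larger teacher's output distribution rather than just the raw labels. Here's a minimal sketch of one such step; the layer sizes, temperature, and random toy batch are all invented for illustration and aren't from any particular system:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab_size, big, small, T = 1000, 512, 64, 2.0   # illustrative sizes, not from any paper

    # "Teacher" and "student" are deliberately tiny next-token scorers over a toy vocab.
    teacher = nn.Sequential(nn.Embedding(vocab_size, big), nn.Flatten(), nn.Linear(big, vocab_size))
    student = nn.Sequential(nn.Embedding(vocab_size, small), nn.Flatten(), nn.Linear(small, vocab_size))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    tokens = torch.randint(0, vocab_size, (32, 1))    # a toy batch of single-token contexts
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(tokens) / T, dim=-1)

    # One distillation step: push the student's distribution toward the teacher's.
    student_logprobs = F.log_softmax(student(tokens) / T, dim=-1)
    loss = F.kl_div(student_logprobs, teacher_probs, reduction="batchmean") * T * T
    loss.backward()
    opt.step()
    print(float(loss))

The student keeps the broad shape of the teacher's predictions with far fewer parameters, but detail in the low-probability tail is what gets lost first, which is roughly the "edge cases" point above.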

We tend to equate "thinking" with intelligence/language/reason thanks to 2500 years of Western philosophy, and I believe that's where a lot of confusion originates in discussions of AI/AGI/etc.

[1]: https://medicine.yale.edu/lab/colon-ramos/overview/#:~:text=...



>I am not a neuroscientist, but I think it's likely that LLMs (with 10s of billions of parameters) and the human brain (with 1-2 orders of magnitude more neural connections[1]) process language in analogous ways

Related is the Platonic Representation Hypothesis, the observation that different models apparently converge to similar representations of the relationships between data points.

https://phillipi.github.io/prh/
https://arxiv.org/abs/2405.07987
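
For a concrete sense of what "converging representations" means, you can embed the same inputs with two different models and ask how similar the resulting geometry is. The paper defines its own alignment metric, so treat the linear CKA below as just one common similarity measure, run here on synthetic matrices rather than real model embeddings:

    import numpy as np

    def linear_cka(X, Y):
        # Linear Centered Kernel Alignment between two representation matrices of
        # shape (n_samples, dim); 1.0 means identical geometry up to rotation/scale.
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
        return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

    rng = np.random.default_rng(0)
    n = 1000
    shared = rng.normal(size=(n, 16))                 # structure both toy "models" capture
    model_a = shared @ rng.normal(size=(16, 64)) + 0.1 * rng.normal(size=(n, 64))
    model_b = shared @ rng.normal(size=(16, 96)) + 0.1 * rng.normal(size=(n, 96))
    unrelated = rng.normal(size=(n, 96))              # baseline with no shared structure

    print("two models of the same data:", round(linear_cka(model_a, model_b), 3))   # high
    print("unrelated representations: ", round(linear_cka(model_a, unrelated), 3))  # much lower

In this synthetic setup the two "models" score high only because they were built from the same underlying structure, which is the flavor of convergence the hypothesis is about.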


Interesting. I'm not sure I'd use the term "Platonic" here, because that tends to have implications of mathematical perfection / timelessness / etc. But I do think that the corpuses of human language that we've been feeding to these models contain within them a lot of real information about the objective world (in a statistical, context-dependent way as opposed to a mathematically precise one), and the AIs are surfacing this information.

To put this another way, I think that you can say that much of our own intelligence as humans is embedded in the sum total of the language that we have produced. So the intelligence of LLMs is really our own intelligence reflected back at us (with all the potential for mistakes and biases that we ourselves contain).

Edit: I fed Claude this paper, and "he" pointed out to me that there are several examples of humans developing accurate conceptions of things they could never directly experience, based on language alone. Most readers here are likely familiar with Helen Keller, who became an accomplished thinker and writer despite being blind and deaf from infancy (Anne Sullivan taught her language with great difficulty, and this became Keller's main window to the world). You could also look at Eşref Armağan, a Turkish painter blind from birth, who creates recognizable depictions of a world he learned about through language and his non-visual senses.



