
> LLMs are token predictors. That's all they do.

You certainly understand that if they can successfully predict the tokens that a great poet, a great scientist, or a great philosopher would write, then everything changes, starting with our status as the sole, and rare, generators of intelligent thoughts and clever artifacts.




Congratulations, you've successfully solved the Chinese Room problem by paving over and ignoring it.


I think the Chinese room is actually correct. The CUDA cores running a model don't understand anything, and the neuron cells in our brain don't understand anything either.

Where intelligence actually lies is in the process itself: the interactions of the entire chaotic system brought together to create something more than the sum of its parts. Humans get continuous consciousness thanks to our analog hardware; digital systems only get momentary bursts of it each time a feedforward pass is run.


It isn't even physically continuous. There are multiple mechanisms to rebuild, resync, and reinterpret reality, because our vision is blurry, sound travels slowly through air, and nerves aren't that fast either. Even the internal sense of clarity and continuity is likely a feeling, the opposite being the sense that "something is wrong with me": space/time jumps, delays, loops, and other well-known effects people may experience under the influence of certain substances. You might jump back and forth in time, perception-wise, by default all your life and never notice, because the internal tableau said "all normal" all the way.


I don't get what the Chinese Room argument has to do with this (even assuming it makes any sense at all). You said that LLMs are just token predictors, and I fully agree. You didn't add any further qualifier, for example a limit to their ability to predict tokens. Is your previous definition not enough, then? If you want to add something like "just token predictors that nevertheless will never be able to successfully predict tokens such as...", please go ahead.


See the System Reply: the Chinese Room is a pseudo-problem that begs the question, rooted in nothing more than human exceptionalism. If you start with the assumption that humans are the only things in the universe able to "understand" (whatever that means), then of course the room can't understand (even though, by every reasonable definition of "understanding", it does).


It isn't a pseudo problem. In this case, it's a succinct statement of exactly the issue you're ignoring, namely the fact that great poets have minds and intentions that we understand. LLMs are language calculators. As I said elsewhere in this thread, if you don't already see the difference, nothing I say here is going to convince you otherwise.


Define "intentions" and "understand" in a way that is testable. All you are doing here is employing intuition pumps without actually saying anything.

> LLMs are language calculators.

And humans are just chemical reactions. That's completely irrelevant to the topic, as both can still act as a universal Turing machine just the same.


And the System Reply, too, ignores the central problem: the man in the room does not understand Chinese.


That's only a "problem" if you assume human exceptionalism and beg the question. It's completely irrelevant to the actual problem. The human is just a cog in the machine; there is no reason to assume they would ever gain any understanding, as they are not the entity that is generating the Chinese.

To make it a little easier to understand:

* go read about the x86 instruction set

* take an .exe file

* manually execute it with pen and paper

Do you think you understand what the .exe does? Do you think understanding the .exe is required to execute it?
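A minimal sketch of that point in Python (using a hypothetical toy instruction set, not real x86): the executor applies one mechanical rule per opcode and never needs to know what the program as a whole is for, yet the correct result still falls out.

    # Toy "CPU" that executes opcodes purely by rule, with no notion
    # of the program's purpose -- the position of the man in the room.
    def execute(program, registers):
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "LOAD":      # LOAD reg, value
                registers[args[0]] = args[1]
            elif op == "SUB":     # SUB dst, src  ->  dst = dst - src
                registers[args[0]] -= registers[args[1]]
            elif op == "JNZ":     # JNZ reg, target -> jump if reg != 0
                if registers[args[0]] != 0:
                    pc = args[1]
                    continue
            elif op == "HALT":
                break
            pc += 1
        return registers

    # The executor never "knows" this program counts a register down to zero;
    # it just follows the rules, and the right answer comes out anyway.
    program = [
        ("LOAD", "a", 5),
        ("LOAD", "b", 1),
        ("SUB", "a", "b"),
        ("JNZ", "a", 2),
        ("HALT",),
    ]
    print(execute(program, {}))   # {'a': 0, 'b': 1}

Whoever plays the CPU here, by hand or in silicon, faithfully follows the rules while understanding nothing about what the program is doing, which is the whole point of the analogy.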


Just an aside while I think about what you wrote: Peter Watts's Blindsight and Echopraxia are phenomenal sci-fi novels that deal with these issues.



