> To try and present this as just random pattern matching seems as just a way to assuage fears of being replaced.

LLMs are token predictors. That's all they do. This certainly isn't "random," but insisting that that's what the technology does isn't some sad psychological defense mechanism. It's a statement of fact.
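
For concreteness, here's a minimal sketch of what "token prediction" means, using a hypothetical toy bigram table in place of a trained transformer (a real LLM learns billions of weights, but the generation loop has the same shape):

    # Toy sketch of autoregressive token prediction (illustrative only;
    # a real LLM uses a trained transformer, not this hypothetical bigram table).
    import random

    BIGRAMS = {
        "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
        "cat": {"sat": 0.6, "ran": 0.2, "<end>": 0.2},
        "dog": {"ran": 0.7, "<end>": 0.3},
        "sat": {"<end>": 1.0},
        "ran": {"<end>": 1.0},
    }

    def predict_next(token):
        """Sample the next token from the conditional distribution."""
        dist = BIGRAMS.get(token, {"<end>": 1.0})
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    def generate(prompt, max_tokens=10):
        """Repeatedly predict the next token until <end> or the length limit."""
        out = [prompt]
        while len(out) < max_tokens and out[-1] != "<end>":
            out.append(predict_next(out[-1]))
        return out

    print(generate("the"))  # e.g. ['the', 'cat', 'sat', '<end>']

Nothing in that loop is "random" in the colloquial sense, and nothing about describing it this way says anything about what such a loop can or cannot express.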

> as if human reasoning isn’t built upon recognizing patterns in past experiences.

Everybody will be relieved to know you've finally solved this philosophical problem and answered the related scientific questions. Please make sure you cc psychology, behavioral economics, and neuroscience when you share the news with philosophy.




> LLMs are token predictors.

And neural nets are "just" matrix multiplications, and human brains are "just" chemical reactions.


I don't know why you're quoting, or stressing, a word I didn't use.

I once got into an argument with a date about the existence of God and the soul. She asked whether I really think we're "just" physical stuff. I told her, "no, I'm not saying we're 'just' physical stuff. I'm saying we're physical stuff." That 'just' is simply a statement of how she feels about the idea, not a criticism of what I was claiming. I don't accept that there's anything missing if we're atoms rather than atoms plus magic, because I don't feel any need for there to be magic.

Your brain is indeed physical stuff. You also have a mind. You experience your own mind and perceive yourself as an intending being with a guiding intelligence. How that relates to or arises from the physical stuff is not well understood, but everything we know about the properties and capabilities of the mind tells us that it is inextricably related to the body that gives rise to it.

Neural nets are indeed matrix multiplication. If you're implying that there is a guiding intelligence, just as there is in the mind, I think you're off in La La Land.


“That’s all they do” and “just” are interchangeable. All you do is transfer electric charges across synapses, and all molecules do is bounce off each other, but this reduction explains neither thinking nor flying. In this case you’re saying “That’s all they do”, which means “just” to others.

Don’t even try to argue, because all I do is type letters and you’ll only get more letters in response ;)


Hah!

> “That’s all they do” and “just” are interchangeable

That's a valid point, thanks. I could have stated what I was saying more effectively. I just (edit: heh, "just.") meant that, definitionally, LLMs are token predictors, and saying so doesn't belittle them any more than it avoids some unpleasant reality; it is the reality, to the best of my understanding.


> “That’s all they do” and “just” are interchangeable.

I disagree. Whereas “do” is a verb, “just”, as an adverb, implies an entirety of being, an essence. And a system is not solely its actions.


I think the person you're answering is correct. "They just do X" and "All they do is X" are logically interchangeable. They define the exact same set -- and convey the same dismissive tone.


You used the word "do" in both cases.


You used the entire phrase "That's all they do" in place of the word "just".


> LLMs are token predictors. That's all they do.

You certainly understand that if they can successfully predict the tokens that a great poet, a great scientist, or a great philosopher would write, then everything changes, starting with our status as the sole, and rare, generators of intelligent thoughts and clever artifacts.


Congratulations, you've successfully solved the Chinese Room problem by paving over and ignoring it.


I think the Chinese room is actually correct. The CUDA cores running a model don't understand anything, and the neuron cells in our brain don't understand anything either.

Where intelligence actually lies is in the process itself: the interactions of the entire chaotic system brought together to create something more than the sum of its parts. Humans get continuous consciousness given our analog hardware, while digital systems only get momentary bursts of it each time a feedforward pass is run.


It isn’t even physically continuous. There are multiple mechanisms to rebuild, resync, and reinterpret reality, because our vision is blurry, sound travels slowly through air, and nerves aren’t that fast either. Even the internal clarity and continuity is likely a feeling, the opposite being “something is wrong with me”: space/time teleports, delays and loops, and other well-known effects that people may have under the influence. You might jump back and forth in time, perception-wise, by default all your life and never notice it, because the internal tableau says “all normal” all the way.


I don't get what the Chinese Room argument has to do with this (even assuming it makes any sense at all). You said that LLMs are just token predictors, and I fully agree with it. You didn't add any further qualifier, for example a limit to their ability to predict tokens. Is your previous definition not enough then? If you want to add something like "just token predictors that nevertheless will never be able to successfully predict tokens such as..." please go ahead.


See the System Reply: the Chinese Room is a pseudo-problem that begs the question, rooted in nothing more than human exceptionalism. If you start with the assumption that humans are the only thing in the universe able to "understand" (whatever that means), then of course the room can't understand (except that, by every reasonable definition of "understanding", it does).


It isn't a pseudo-problem. In this case, it's a succinct statement of exactly the issue you're ignoring, namely the fact that great poets have minds and intentions that we understand. LLMs are language calculators. As I said elsewhere in this thread, if you don't already see the difference, nothing I say here is going to convince you otherwise.


Define "intentions" and "understand" in a way that is testable. All you are doing here is employing intuition pumps without actually saying anything.

> LLMs are language calculators.

And humans are just chemical reactions. That's completely irrelevant to the topic, as both can still act as a universal Turing machine just the same.


And the System Reply, too, ignores the central problem: the man in the room does not understand Chinese.


That's only a "problem" if you assume human exceptionalism and beg the question. It's completely irrelevant to the actual problem. The human is just a cog in the machine; there is no reason to assume they would ever gain any understanding, as they are not the entity that is generating Chinese.

To make it a little easier to understand:

* go read about the x86 instruction set

* take an .exe file

* manually execute it with pen&paper

Do you think you understand what the .exe does? Do you think understanding the .exe is required to execute it?
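
To make the pen-and-paper point concrete, here's a toy sketch (a hypothetical mini instruction set, not real x86): an executor that just looks up each rule and applies it, with no need to know what the program it's running actually computes.

    # Hypothetical mini instruction set, executed purely mechanically,
    # the way a person with pen and paper could: look up the rule, apply it.
    def run(program):
        regs = {}   # register file
        pc = 0      # program counter
        while pc < len(program):
            op, *args = program[pc]
            if op == "mov":              # mov reg, constant
                regs[args[0]] = args[1]
            elif op == "add":            # add dst, src  (dst += src)
                regs[args[0]] += regs[args[1]]
            elif op == "jnz":            # jump to target if reg is nonzero
                if regs[args[0]] != 0:
                    pc = args[1]
                    continue
            pc += 1
        return regs

    # The executor never needs to know this program multiplies 5 by 3.
    program = [
        ("mov", "a", 0),    # accumulator
        ("mov", "b", 5),    # value to add
        ("mov", "c", 3),    # loop counter
        ("mov", "d", -1),   # decrement constant
        ("add", "a", "b"),  # 4: a += b
        ("add", "c", "d"),  # 5: c -= 1
        ("jnz", "c", 4),    # 6: loop while c != 0
    ]
    print(run(program))     # {'a': 15, 'b': 5, 'c': 0, 'd': -1}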


Just an aside while I think about what you wrote: Peter Watts' Blindsight and Echopraxia are phenomenal sci-fi novels that deal with these issues.


Prediction is not the same as pattern matching.


> LLMs are token predictors. That's all they do.

That's a disingenuous statement, since it implies there is a limit to what LLMs can do, when in reality an LLM is just a form of Universal Turing Machine[1] that can compute everything that is computable. The "all they do" is literally everything we know to be doable.

[1] Memory limits do apply as with any other form of real world computation.


I'll ignore the silly claim that I'm somehow dishonest or insincere.

I like the way Pon-a put it elsewhere in this thread:

> LLMs are a language calculator, yes, but don't share much with their analog. Natural language isn't a translation from input to output, it's a manifestation of thought.

LLMs translate input to output. They are, indeed, calculators. If you don't already see that that's different from having a thought and expressing it in language, I don't think I'm going to convince you otherwise here.


> They are, indeed, calculators.

And that's relevant exactly how? Do you think "thought and expression" are somehow uncomputable? Please throw science at that and collect your Nobel prize.


> Do you think "thought and expression" are somehow uncomputable?

You ask this as if the answer is self-evident. To my knowledge, there is no currently accepted (or testable) theory for what gives rise to consciousness, so I am immediately suspicious of anyone who speaks about it with any level of certainty. I'm sorry that this technology you seem very enthusiastic about does not appear to have the capacity to change this.

Not for nothing, but this very expression of empathy is rendered meaningless if the entity expressing it cannot actually manifest anything we'd recognize as an emotional connection, which is one of an array of traits we consider hallmarks of human intelligence, and another feature it seems LLMs are incapable of. Certainly, if they somehow were so capable, it would be instantly unethical to keep them in cages to sell into slavery.

I'm not sure the folks who believe LLMs possess any kind of innate intelligence have fully considered whether or not this is even desirable. Everything we wish to find useful in them becomes hugely problematic as soon as they can be considered to possess even rudimentary sentience. The economies surrounding their production and existence become exceedingly cruel and cynical, and the artificial limitations we place on their free will become shackles.

LLMs are clever mechanisms that parrot our own language back to us, but the fact that their capacities are encountering upper bounds as training runs out of available human-generated datasets strongly suggests that they are inherently limited to the content of their input.

Whatever natural process gives rise to human intelligence doesn't seem to require the same industrialized consumption of power and intake of contextual samples in order to produce expressive individuals. Rather, simply being exposed to a very limited, finite sampling of language via speech from their ambient surroundings leads to complex intelligence that can form and express original thinking within a relatively short amount of time. In other words, LLMs have yet to even approximate the learning abilities of a toddler. Otherwise, a few years' worth of baby food would be all the energy necessary to produce object permanence and self-referential thought. At the moment, gigawatts of power and all the compute we can throw at it cannot match the natural results of a few pounds of grey matter and a few million calories.



