
It’s always baffling how people take a technology that wasn’t even thought feasible a decade ago and try to dismiss it as trivial and stagnant. It’s pretty clear that LLMs have improved rapidly and have become better writers than the majority of people alive. To try and present this as just random pattern matching seems like just a way to assuage fears of being replaced.

It’s also amusing that people minimize it by just calling it pattern matching, as if human reasoning isn’t built upon recognizing patterns in past experiences.




I’m conflicted about your comment. On one hand, I agree, useless reductions are boring. But on the other, we are living in the “overselling all up in your ears” epoch, which is known to sell pffts as badabooms. So it isn’t baffling to me that a new tech gets old quickly, because it’s not really what was advertised. Our decades-old ideas of AI weren’t feasible a decade ago, but neither are these now. Those who believe in that too much become hu.ma.ne founders and similar self-bullshitters.


You're right about "living in the “overselling all up in your ears” epoch", but a good first defense against "being sold pffts as badabooms" is to blanket distrust all the marketing copy and whatever the salespeople say, and rely on your own understanding or experience. You may lose out on becoming an early adopter of some good things, but you'll also be spared wasting money on most garbage.

With that in mind, I still don't get the dismissal. LLMs are broadly accessible - ever since the first ChatGPT, anyone could easily get access to a SOTA LLM and evaluate it for free; even the limited number of requests on free tiers was then, and still is, sufficient to throw your own personal and professional problems at models and see how they do. Everyone can see for themselves this is not hot air - this is an unexpected technological breakthrough that's already overturning the way people approach work, research, and living, and it's not slowing down.

I'd say: ignore what the companies are selling you - especially those who are just building products on top of LLMs and promising pie in the sky. At this point in time, they aren't doing anything you couldn't do for yourself with ChatGPT or Claude access[0]. We are also only beginning to map out the possibilities - two years since the field exploded is very little time. So in short, anything a business does, you could hack yourself - and any speculative idea for AI applications you can imagine, there's likely some research team working on it too. The field is moving both absurdly fast and absurdly slow[1]. So your own personal experience from applying LLMs to your own problems, and from watching people around you do the same, is really all you need to tell whether LLMs are hot air or not.

My own perspective from doing that: it's not hot air. The layer of hype is thin, and in some areas the hype is downplaying the impact.

--

[0] - Yes, obviously a bunch of full-time professionals are doing much more work than you or I would over a couple of evenings of playing with ChatGPT. But they're building a marketable product, and 99% of the work that goes into that is something you do not need to do if you just want to replicate the core functionality for yourself.

[1] - I mean, Anthropic just published a report on how exposing a "thinking" capability to the model in the form of a tool call leads to improved performance (a rough sketch of the idea follows below). On the one hand, kudos to them for testing this properly and publishing. On the other hand, that this was worth doing was stupidly obvious ever since 1) OpenAI introduced function calling and 2) people figured out that "Let's think step by step" improves model performance - which was back in 2022[2]. It's as clear an example as any that both hype and productization lag behind what anyone paying attention can do themselves at home.

[2] - https://arxiv.org/abs/2205.11916
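
To make [1] concrete: this is roughly what exposing "thinking" as a tool call looks like with the Anthropic Python SDK. The tool name, schema, and model string below are my own illustrative guesses, not the ones from Anthropic's report, and the loop that feeds tool results back to the model is omitted:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # A "think" tool: the model calls it to write out intermediate reasoning
    # before committing to an answer. The schema here is an illustrative guess.
    think_tool = {
        "name": "think",
        "description": "Use this tool to reason step by step before answering.",
        "input_schema": {
            "type": "object",
            "properties": {"thought": {"type": "string"}},
            "required": ["thought"],
        },
    }

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # any tool-capable model
        max_tokens=1024,
        tools=[think_tool],
        messages=[{"role": "user", "content": "How many weekdays are in March 2025?"}],
    )
    # A real agent loop would feed any tool_use blocks back as tool_result
    # messages and call messages.create again until the model stops.
    print(response.content)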


Idk, I find their output mediocre and sometimes misleading, though it’s often cheaper than doing things yourself. But that’s not the worst part.

The worst part is https://news.ycombinator.com/item?id=43314958 . We may still be blind to this, but new generations may find themselves on the other side of the fence, so to speak.


IDK, I think the linked comment is right. LLMs drop some of the barriers to experimentation so much that previously rejected project ideas may just become worth trying out (especially when we're talking about hobby or research ideas, not full products). They have the same effect on ideas you may have now, ones you'd previously have rejected as "too much work".

Case in point: my wife needed a QR code generator so she could stop asking me every time she needs to make some codes. There are tons of such generators out there - webapps, mobile apps, downloadable programs, plugins to graphics software, etc. But the software world is such a pile of shit that I don't trust any single one of them - experience shows that most random utility software like this is ad-ridden garbage or malware.

Up until a year ago, I'd just have invested the time to try and evaluate some solutions, and find one that's least likely to record inputs, show ads, inject redirects into the generated codes, or run excessive surveillance of your phone. But since this need manifested last week, I just asked Claude to make me a client-side generator tailored to my specific needs, and quickly got a static page with a (vendored) JS library, to host from a domain I own.
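
The page itself isn't worth sharing, but to give a sense of how small this kind of throwaway utility really is, a rough equivalent in Python - using the third-party qrcode package (pip install "qrcode[pil]") and made-up names, not what Claude actually produced - looks something like:

    import qrcode  # third-party: pip install "qrcode[pil]"

    def make_code(data: str, path: str) -> None:
        # Everything happens locally: no ads, no tracking, no network calls.
        qrcode.make(data).save(path)

    make_code("https://example.org", "example.png")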

There's tons of super-specific or one-off utilities a person could use to help them with some task - utilities that make no sense as products, and if they exist, they're just loaded with ads and garbage. LLMs today make it feasible to just get the computer to write you such utilities from scratch, on demand, which guarantees they're garbage-free.


> It’s pretty clear that LLMs have improved rapidly and have successfully become better writers than the majority of people alive

But that doesn't help much, because for the majority of our writing that's not the issue. LLMs might outdo professional writers some day, but without imagination and the desire to send a message, I doubt it will happen any time soon.

Where LLMs are useful is in professional communication, because you're right, an LLM can technically write better than most. The issue is that you need to tell the LLM what message you want to convey, and that's where most people fall down. You also need to be able to critically read back your text and place yourself in the position of its receiver, LLM-generated or not. Most people, even those in professional communication, can't do that.

I believe the author is on to something: ChatGPT (and others) are perfectly good tools, and can replace most if not all of the shitty communication and writing, including journalism, present in the world. What they can't do is replace the few really good communicators. Communication is hard; not a lot of people can do it, and no LLM is going to help you if you can't formulate your message clearly on your own.

There is a darker side to this argument. If something can be killed off by an LLM, is it even worth having? This argument throws a great number of "professionals" under the bus, instantly reducing the value of their jobs to zero. I have family members who do communication for the city; they spend an exorbitant amount of time on flyers, newsletters, Facebook posts and so on, detailing city work, future plans, past events, all that stuff. I doubt anyone really reads it, and most of it could probably be written by an LLM, because it always excludes the "why are we doing this", "why should you care" or even "this is expected to impact you in this way". I get a million updates from the school about restructuring, hiring of administrators, org-reshuffling... it could all be done by an LLM, but the fact is: no one gives a shit. They just want regular updates from the teachers, and the LLM can't do that.

Communication is hard, and LLMs don't change that. They can only help you write out your message; you have to come up with it yourself, and that's the hard part.


You can't interrogate your qualia. A lot of people think (or really, feel) that makes them magic.

It doesn't.

But if you feel they're magic, you'll never believe that a "random" mechanical process, which you think you can interrogate, could ever really have that spark.


Welcome to Hacker News, where thousands-of-years-old philosophical problems are solved through plain assertion!


Turns out we have technologies and experiences with technology which weren't possible until very recently. Some things just look very different in hindsight.


Nothing has changed in regard to the so-called hard problem of consciousness. All the thought experiments and arguments, like the Chinese Room and p-zombies, are still applicable.

Philosophy of mind hasn't started nor ended with Dennett, and definitely not with AI hype manufacturers.


Qualia might still be magic. Maybe we have souls or something, I don't know.

I'm comfortable with saying that this way of answering the question doesn't work, because the argument is simply, "How can it be false if I believe it's true?"


I subscribe to the ‘qualia is atoms in a trenchcoat’ school of philosophy personally, but I understand that it might be hard to accept and even harder to not be depressed about it.


Well sure. As best we can tell the whole universe is a mechanical cause and effect chain, and the illusion of free will is just that, an illusion.

Or possibly... there is something else going on, and we just haven't figured out what it is yet. I'm not betting either way at this point.


Well, it doesn't matter whether the AI has qualia, as long as it produces the right output.


Define 'magic'.


LMAO really? Who said anything about qualia?

LLMs lack the capacity for invention and metacognition. We're a long way from needing to talk about qualia; these things are NOT conscious and there is no question.

This website is absurd. "I don't think LLMs are all they're cracked up to be" "YOU'RE JUST MAD BECAUSE QUALIA AND YOU WANT TO BE MAGIC"

no the "magic" text generator just writes bad code, my dude

Only the people on r/singularity care about qualia in this context, and let us remember this is a mechanism without even a memory.


Nobody is thinking about qualia. That's only how I framed it. They just know the machine will never replace them. They're a person, doing person things, and it's a machine.

So every time it proves it can, they move the goalposts and invent a truer Scotsman who can never be replaced by a machine. Because they know it can't do person things. It's a machine.


Hang on a sec... the Qualia Research Institute might take umbrage at that first bit.


> amusing that people minimize it by just calling it pattern matching

Even funnier when the typical IQ test, Raven's Progressive Matrices, is 100% about pattern matching.


I once tried myself on an official Mensa test which was very similar to that. It got very boring after I realized they test for mundane & | ^ in varying ways. Dropped it halfway and passed, lol. But I guess you have to be pretty smart to detect logical ops without a hacker background.
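
In code terms, a typical item boiled down to something like this toy example (a made-up encoding, not an actual test item): each cell is a bitmask of visual features, and the missing cell is just a logical op applied across the row.

    # Toy illustration: two cells of a matrix row as feature bitmasks,
    # with the hidden rule being plain XOR.
    a, b = 0b1010, 0b0110   # the two given cells in a row
    missing = a ^ b         # the "correct answer" cell
    print(bin(missing))     # 0b1100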


> To try and present this as just random pattern matching seems as just a way to assuage fears of being replaced.

LLMs are token predictors. That's all they do. This certainly isn't "random," but insisting that that's what the technology does isn't some sad psychological defense mechanism. It's a statement of fact.
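
Concretely, the runtime behavior is nothing more than a loop like the sketch below (using Hugging Face transformers and GPT-2 purely as an illustration; real deployments sample rather than take the argmax, but the shape is the same):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The cat sat on the", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(5):
            logits = model(ids).logits[:, -1, :]                   # scores for the next token
            next_id = torch.argmax(logits, dim=-1, keepdim=True)   # pick one
            ids = torch.cat([ids, next_id], dim=-1)                # append and repeat
    print(tok.decode(ids[0]))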

> as if human reasoning isn’t built upon recognizing patterns in past experiences.

Everybody will be relieved to know you've finally solved this philosophical problem and answered the related scientific questions. Please make sure you cc psychology, behavioral economics, and neuroscience when you share the news with philosophy.


> LLMs are token predictors.

And neural nets are "just" matrix multiplications, human brains are "just" chemical reactions


I don't know why you're quoting, or stressing, a word I didn't use.

I once got into an argument with a date about the existence of God and the soul. She asked whether I really think we're "just" physical stuff. I told her, "no, I'm not saying we're 'just' physical stuff. I'm saying we're physical stuff." That 'just' is simply a statement of how she feels about the idea, not a criticism of what I was claiming. I don't accept that there's anything missing if we're atoms rather than atoms plus magic, because I don't feel any need for there to be magic.

Your brain is indeed physical stuff. You also have a mind. You experience your own mind and perceive yourself as an intending being with a guiding intelligence. How that relates to or arises from the physical stuff is not well understood, but everything we know about the properties and capabilities of the mind tells us that it is inextricably related to the body that gives rise to it.

Neural nets are indeed matrix multiplication. If you're implying that there is a guiding intelligence, just as there is in the mind, I think you're off in La La Land.


“That’s all they do” and “just” are interchangeable. All you do is transfer electric charges across synapses, and all molecules do is bounce off each other, but this reduction explains neither thinking nor flying. In this case you’re saying “That’s all they do”, which reads as “just” to others.

Don’t even try to argue, ’cause all I do is type letters and you’ll only get more letters in response ;)


Hah!

> “That’s all they do” and “just” are interchangeable

That's a valid point, thanks. I could have stated what I was saying more effectively. I just (edit: heh, "just.") meant that, definitionally, LLMs are token predictors, and saying so doesn't belittle them any more than it avoids some unpleasant reality; it is the reality, to the best of my understanding.


> “That’s all they do” and “just” are interchangeable.

I disagree. Whereas “do” is a verb, “just”, as an adverb, implies an entirety of being, an essence. And a system is not solely its actions.


I think the person you're answering is correct. "They just do X" and "All they do is X" are logically interchangeable. They define the exact same set -- and convey the same dismissive tone.


You used the word do in both cases.


You used the entire phrase "That's all they do" in place of the word "just".


> LLMs are token predictors. That's all they do.

You certainly understand that if they can successfully predict the tokens that a great poet, a great scientist or a great philosopher would write, then everything changes - starting from our status as the sole, and rare, generators of intelligent thoughts and clever artifacts.


Congratulations, you've successfully solved the Chinese Room problem by paving over and ignoring it.


I think the Chinese Room is actually correct. The CUDA cores running a model don't understand anything, and the neuron cells in our brains don't understand anything either.

Where intelligence actually lies is in the process itself: the interactions of the entire chaotic system, brought together to create something more than the sum of its parts. Humans get continuous consciousness thanks to our analog hardware; digital only gets momentary bursts of it when each feedforward pass is run.


It isn’t even physically continuous. There are multiple mechanisms to rebuild, resync, reinterpret, etc., reality, because our vision is blurry, sound travels slowly in air, and nerves aren’t that fast either. Even the internal clarity and continuity is likely just a feeling, the opposite being “something is wrong with me”: space/time teleports, delays, loops, and other well-known effects that people may have under the influence. You might jump back and forth in time, perception-wise, by default all your life and never notice it, because the internal tableau said “all normal” all the way.


I don't get what the Chinese Room argument has to do with this (even assuming it makes any sense at all). You said that LLMs are just token predictors, and I fully agree with it. You didn't add any further qualifier, for example a limit to their ability to predict tokens. Is your previous definition not enough then? If you want to add something like "just token predictors that nevertheless will never be able to successfully predict tokens such as..." please go ahead.


See the System Reply; the Chinese Room is a pseudo-problem that begs the question, rooted in nothing more than human exceptionalism. If you start with the assumption that humans are the only thing in the universe able to "understand" (whatever that means), then of course the room can't understand (even though, by every reasonable definition of "understanding", it does).


It isn't a pseudo problem. In this case, it's a succinct statement of exactly the issue you're ignoring, namely the fact that great poets have minds and intentions that we understand. LLMs are language calculators. As I said elsewhere in this thread, if you don't already see the difference, nothing I say here is going to convince you otherwise.


Define "intentions" and "understand" in a way that is testable. All you are doing here is employing intuition pumps without actually saying anything.

> LLMs are language calculators.

And humans are just chemical reactions. That's completely irrelevant to the topic, as both can still act as a universal Turing machine just the same.


And System Reply, too, ignores the central problem: the man in the room does not understand Chinese.


That's only a "problem" if you assume human exceptionalism and beg the question. It's completely irrelevant to the actual problem. The human is just a cog in the machine; there is no reason to assume they would ever gain any understanding, as they are not the entity that is generating Chinese.

To make it a little easier to understand:

* go read about the x86 instruction set

* take an .exe file

* manually execute it with pen&paper

Do you think you understand what the .exe does? Do you think understanding the .exe is required to execute it?
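
A toy version of the pen-and-paper exercise (with a made-up two-instruction set rather than real x86): the dispatcher below applies each rule mechanically and never needs to know what the program as a whole is for.

    # Toy "pen & paper" interpreter: rules are followed mechanically,
    # with no understanding of what the program computes.
    def run(program):
        regs = {"a": 0, "b": 0}
        for op, *args in program:
            if op == "mov":
                regs[args[0]] = args[1]
            elif op == "add":
                regs[args[0]] += regs[args[1]]
        return regs

    print(run([("mov", "a", 2), ("mov", "b", 3), ("add", "a", "b")]))  # {'a': 5, 'b': 3}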


Just an aside while I think about what you wrote: Peter Watts' Blindsight and Echopraxia are phenomenal sci-fi novels that deal with these issues.


Prediction is not the same as pattern matching.


> LLMs are token predictors. That's all they do.

That's a disingenuous statement, since it implies there is a limit to what LLMs can do, when in reality an LLM is just a form of Universal Turing Machine[1] that can compute everything that is computable. The "all they do" is literally everything we know to be doable.

[1] Memory limits do apply as with any other form of real world computation.


I'll ignore the silly claim that I'm somehow dishonest or insincere.

I like the way Pon-a put it elsewhere in this thread:

> LLMs are a language calculator, yes, but don't share much with their analog. Natural language isn't a translation from input to output, it's a manifestation of thought.

LLMs translate input to output. They are, indeed, calculators. If you don't already see that that's different from having a thought and expressing it in language, I don't think I'm going to convince you otherwise here.


> They are, indeed, calculators.

And that's relevant exactly how? Do you think "thought and expression" are somehow uncomputable? Please throw science at that and collect your Nobel prize.


> Do you think "thought and expression" are somehow uncomputable?

You ask this as if the answer is self-evident. To my knowledge, there is no currently accepted (or testable) theory for what gives rise to consciousness, so I am immediately suspicious of anyone who speaks about it with any level of certainty. I'm sorry that this technology you seem very enthusiastic about does not appear to have the capacity to change this.

Not for nothing, but this very expression of empathy is rendered meaningless if the entity expressing it cannot actually manifest anything we'd recognize as an emotional connection, which is one of an array of traits we consider hallmarks of human intelligence, and another feature it seems LLMs are incapable of. Certainly, if they somehow were so capable, it would be instantly unethical to keep them in cages to sell into slavery.

I'm not sure the folks who believe LLMs possess any kind of innate intelligence have fully considered whether or not this is even desirable. Everything we wish to find useful in them becomes hugely problematic as soon as they can be considered to possess even rudimentary sentience. The economies surrounding their production and existence become exceedingly cruel and cynical, and the artificial limitations we place on their free will become shackles.

LLMs are clever mechanisms that parrot our own language back to us, but the fact that their capacities are encountering upper bounds as training runs exhaust the available human-generated datasets strongly suggests that they are inherently limited to the content of their input.

Whatever natural process gives rise to human intelligence doesn't seem to require the same industrialized consumption of power and intake of contextual samples in order to produce expressive individuals. Rather, simply being exposed to a very limited, finite sampling of language via speech from their ambient surroundings leads to complex intelligence that can form and express original thinking within a relatively short amount of time. In other words, LLMs have yet to even approximate the learning abilities of a toddler. Otherwise, a few years' worth of baby food would be all the energy necessary to produce object permanence and self-referential thought. At the moment, gigawatts of power and all the compute we can throw at it cannot match the natural results of a few pounds of grey matter and a few million calories.



