
Nearly every argument like this has the same fatal flaw, and it's generally not the critique of the AI, but the critique reflected back onto humans.

Humans also don't understand and are frequently faking understanding, which for many tasks is good enough. There are fundamental limits to what humans can do.

The AI of a few months ago, before OpenAI's sycophancy, was quite impressive; it is less so now, which suggests it is being artificially stunted so more can be charged later. That implies the private versions are much better than what is public. I can't say it "understands," but I can say it outclasses many, many humans. There are already a number of tasks based around understanding where I would choose an LLM over a human.

It's worth looking at Bloom's taxonomy (https://en.wikipedia.org/wiki/Bloom%27s_taxonomy). In the 2001 revised edition, the levels were renamed and reordered: Remember, Understand, Apply, Analyze, Evaluate, and Create. In my opinion it is at least human-competitive for everything but Create.

I used to be very bearish on AI, but if you haven't had a "wow" moment when using one, then I don't think you've tried to explore what it can do or tested its limits with your own special expertise/domain knowledge; or if you have, then I'm not sure we're using the same LLMs. Then compare that experience to normal people, not your peer group. Compare an LLM to people into astrology, crystal healing, or homeopathy and ask which has more "understanding."




I do agree with you - but the big difference is that humans-who-are-faking-it tend to learn as they go, so they might, with a bit of effort, be expected to understand eventually.

Does that actually matter? Probably not for many everyday tasks...


Um, moving the goal post?

The claim was LLMs understand things.

The counter was, nope, they don't. They can fake it well though.

Your argument now is, well humans also often fake it. Kinda implying that it means it's ok to claim that LLMs have understanding?

They may outclass people in a bunch of things. That's great! My pocket calculator 20 years ago also did, and that's also great. Neither understands what it is doing, though.


It's fun to talk about, but personally I think the whole "understanding" debate is a red herring. What we actually care about when we talk about intelligence is the capacity and depth of second-order thinking, regardless of the underlying mechanism. The key question isn't "do LLMs understand?" but "can LLMs engage in second-order thinking?" The answer seems to be yes: they can reason about reasoning, plan their approaches, critique their own outputs, and adapt their strategies. o1 has shown us that with RL and reasoning tokens you can include this in a single system, but our brains have multiple systems we can control and combine in multiple ways at any given moment: emotions, feelings, and thoughts combined into user space, on top of three core systems of input, memory, and output. The nuance is that, for various reasons (nature + nurture), different humans appear to have varying levels of meta-control over these multiple reasoning systems.


Why are you pretending to be participating in a debate? You mention things like "moving the goalpost", "counter[arguments]", and "arguments", as if you did anything more than just assert your opinion in the first place.

This is what you wrote:

> LLMs don't understand.

That's it. An assertion of opinion with nothing else included. I understand it sucks when people feel otherwise, but that's just kinda how this goes. And before you bring up how there were more sentences in your comment, I'd say they are squarely irrelevant, but sure, let's review those too:

> It's mind-boggling to me that large parts of the tech industry think that.

This is just a personal report of your own feelings. Zero argumentative value.

> Don't ascribe to them what they don't have.

A call for action, combined with the same assertion of opinion as before, just rehashed. Again, zero argumentative value.

> They are fantastic at faking understanding.

Opinion, loaded with the previous assertion of opinion. No value add.

> Don't get me wrong, for many tasks, that's good enough.

More opinion. Still no arguments or verifiable facts presented or referenced. Also a call for action.

> But there is a fundamental limit to what all this can do.

Opinion, and a vague one at that. Still nothing.

> Don't get fooled into believing there isn't.

Call for action + assertion of opinion again. Nope, still nothing.

It's pretty much the type of comment I wish would just get magically filtered out before it ever reached me. Zero substance, maximum emotion, and plenty of opportunities for people to misread your opinions as anything more than that.

Even within your own system of opinions, you provide zero additional clarification of why you think what you think. There's literally nothing to counter, as strictly speaking you never actually ended up claiming anything. You just asserted your opinion, on its lonesome.

This is no way to discuss anything, let alone something you or others likely feel strongly about. I've had more engaging, higher quality, and generally more fruitful debates with the models you say don't understand, than anyone here so far could have possibly had with you. Please reconsider.


> higher quality, and generally more fruitful debates with the models you say don't understand

My favorite thing about LLMs is that they can convincingly tell me why I'm wrong or how I could think about things differently, not for ideas on the order of sentences and paragraphs, but on the order of pages.

My second favorite thing is that it is amazingly good at deconstructing manipulative language and power tactics. It is scary good at developing manipulation strategies and inferring believable processes to achieve complex goals.


Had some success with that myself as well. Also found out about Claimify [0] recently; I should really get myself together and get a browser extension going one of these days. I think the quantized gemma3 models should be good enough for this, so it could remain all local too. A sketch of what that setup could look like is below.

[0] https://youtu.be/WTs-Ipt0k-M
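
(A minimal sketch of what that local setup could look like, assuming Ollama serving a quantized gemma3. The model tag and the prompt are placeholder assumptions of mine, not anything Claimify actually does.)

    import requests

    OLLAMA = "http://localhost:11434/api/generate"  # Ollama's default endpoint

    def extract_claims(text):
        # Ask a local quantized model to list the verifiable claims in `text`.
        prompt = ("List the factual, verifiable claims in the following text, "
                  "one per line. Ignore opinions.\n\n" + text)
        resp = requests.post(OLLAMA, json={
            "model": "gemma3:4b-it-qat",  # assumed quantized gemma3 tag
            "prompt": prompt,
            "stream": False,              # return one JSON blob, not a stream
        }, timeout=120)
        resp.raise_for_status()
        return [line.lstrip("- ").strip()
                for line in resp.json()["response"].splitlines()
                if line.strip()]

    print(extract_claims("The Earth is flat and I love mornings."))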


So, it is your opinion that the mere expression of opinion "without anything else" is not allowed in a discussion?

And if that is so, didn't you also "just" express an opinion? Would your own contribution to the discussion pass your own test?

You might have overlooked that I provided extensive arguments all around in this thread. Please reconsider.


> So, it is your opinion that the mere expression of opinion "without anything else" is not allowed in a discussion?

This is not what I said, no: I said that asserting your opinion over others' and then suddenly pretending to be in a debate is "not allowed" (read: is no way to have a proper discussion).

A mere expression of opinion would have been like this:

> [I believe] LLMs don't understand.

And sure, having to stick an explicit "I think / I believe" everywhere is annoying. But it became necessary when all the other things you had to say continued to omit this magic phrase, and its absence became clearly intentional when you started talking as if you had made any arguments of your own. Merely expressing your opinion is not what you did, even when reading it charitably. That's my problem.

> Would your own contribution to the discussion pass your own test?

And so yes, I believe it does.

> You might have overlooked that I provided extensive arguments all around in this thread. Please reconsider.

I did consider this. It cannot be established that the person whose comment you took a whole lot of issue with also considered those though, so why would I do so? And so, I didn't, and will not either. Should I change my mind, you'll see me in those subthreads later.


I challenge you to go through the history of your own posts and count how often you salt your statements of opinion with the magic dust of "I believe".

I did. You are not living up to the standard you are demanding of others (and which hardly anybody around here satisfies anyway).

Seems we are not getting anywhere. We can agree to disagree, which I'm fine with. Please refrain from personal attacks going forward, thank you.


> I challenge you to go through the history of your own posts and count how often you salt your statements of opinion with the magic dust of "I believe".

Challenge semi-accepted [0]. Looking through my entire comment history here so far on this wonderful forum (628 comments), there seem to be 179 hits for the word "think" and 21 for the word "believe". If we're being nice and assume these are all in separate comments, that would mean up to ~32% of my comments feature these words, and then only some portion of these will actually pertain to me guarding my own opinions with them. Still, feeling pretty chuffed about it if I'm honest, I think I'm doing pretty good.

For good measure, I also checked against your comment history of 100 comments. 2 counts of "believe", 9 counts of "think". Being nice here only yields us up to 11%, and focusing on expressions of opinion would only bring this down further.
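
(For anyone who wants to reproduce this kind of count: a minimal sketch against the public Algolia HN search API. The username is a placeholder, Algolia's matching is fuzzy - "think" will also match "thinking" - so treat the totals as rough upper bounds, and this is not necessarily how the numbers above were produced.)

    import requests

    API = "https://hn.algolia.com/api/v1/search"

    def count_hits(author, word=""):
        # hitsPerPage=0 skips the comment bodies; nbHits still reports
        # the total number of matching comments, which is all we need.
        resp = requests.get(API, params={
            "tags": f"comment,author_{author}",  # comma means AND
            "query": word,
            "hitsPerPage": 0,
        }, timeout=10)
        resp.raise_for_status()
        return resp.json()["nbHits"]

    author = "some_user"        # placeholder, not a real account
    total = count_hits(author)  # an empty query matches every comment
    for word in ("think", "believe"):
        n = count_hits(author, word)
        print(f"{word}: {n}/{total} (~{100 * n / max(total, 1):.0f}%)")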

That said, I think this is pretty dumb. [1]

> I did. You are not living up to the standard you are demanding of others (and which rarely anybody around here satisfies anyway).

Please do show me the numbers you got and your methodology. (And not from the research you're going to do after reading this comment - although if it's actually done properly, I'm interested in that too.)

> Seems we are not getting anywhere.

If only you put as much effort into actually considering what I wrote as you did into stalking my comment history or coming up with new fallacies and manipulation tactics, I think we would have.

Seriously:

- not being able to put into words why you don't think LLMs understand is perfectly normal. You could have embraced this, but instead we're on like level 4 of you doubling down.

- sharing your opinion continues to be perfectly okay. Asserting your opinion over others continues to be super not okay.

- I (or others) don't need to be free of the faults that I described in order for these things to be faults. It's normal to make mistakes. It'd also be normal to just own them, but here I am, exporting my own comment history using the HN API, because you just can't acknowledge having been wrong and leave it at that, even though reading between the lines you do seem to agree with basically everything I said, and are just trying to give me a rhetorical checkmate at this point.

> Please refrain from personal attacks going forward, thank you.

Tried my best. For real; I rewrote this like 6 times.

[0] You continue to heavily engage in manipulative language and fallacies, so I feel 100% uncompelled to honor your "challenge request" properly. I explicitly brought up several other criteria, such as a sentence presenting as an opinion when read in good faith, not being utilized as an accepted shared characterization when used in other sentences, and not being referred to as an argument elsewhere. What you describe as "statements of opinion with the magic dust of 'I believe'" seems to intentionally gloss over these criteria, in what I can best describe as a plain old strawman. So naturally, I accepted the challenge as woefully weakly as I possibly could.

[1] Obviously these statistics are completely bogus, since maybe you just don't offer your opinions much. Considering your performance here so far, that is pretty hard for me to believe, but it is entirely possible, and I don't care to manually pore over 100 of your comments, sorry. If they are anything like the ones in this subthread so far, I've already had more than enough. And if I went to the trouble of automating it (ironically, involving an LLM), I'd be doing a whole proper job of it at that point anyway, which would go against [0].


Excellently put.



