
If you gave this puzzle to a human, I bet that a not-insignificant proportion would respond to it as if it were the traditional puzzle as soon as they hear the words "cabbage", "lion", and "goat". It's not exactly surprising that a model trained on human outputs would make the same assumption. But that doesn't mean it can't reason about it properly once you point out that the assumption was incorrect.

With Bing, you don't even need to tell it what it assumed wrong - I just told it that it's not quite the same as the classic puzzle, and it responded by correctly identifying the difference and asking me if that's what I meant, but forgot that the lion still eats the goat. When I pointed that out, it solved the puzzle correctly.

Generally speaking, I think your point that "when solving the puzzle you might visualize" is correct, but that is orthogonal to the ability of an LLM to reason in general. Rather, it has a hard time reasoning about things it doesn't understand well enough (i.e. the ones for which the internal model it built up during training is way off). This seems to be generally the case for anything having to do with spatial orientation - even fairly simple multi-step tasks involving concepts like "left" vs "right" or "on this side" vs "on that side" can go hilariously wrong.

But if you give it a different task, you can see reasoning in action. For example, have it play a guess-the-animal game with you while telling it to "think out loud".
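(If you want to try this outside the chat UI, a rough sketch with the OpenAI Python client looks something like the code below. The model name, prompt wording, and game framing are just illustrative placeholders, not a recipe.)

    # Rough sketch: guess-the-animal with a "think out loud" instruction.
    # Assumes the openai>=1.0 Python client and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "system",
         "content": ("We're playing guess-the-animal. I'm thinking of an animal; "
                     "ask me yes/no questions until you can name it. Think out loud: "
                     "before each question, list your current hypotheses and explain "
                     "why your next question narrows them down.")},
        {"role": "user", "content": "I'm ready - start guessing."},
    ]

    while True:
        reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        text = reply.choices[0].message.content
        print(text)
        messages.append({"role": "assistant", "content": text})
        answer = input("Your answer (or 'quit'): ")
        if answer.strip().lower() == "quit":
            break
        messages.append({"role": "user", "content": answer})

The interesting part is the hypothesis/elimination text it produces between questions - that's the "thinking out loud".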




> But if you give it a different task, you can see reasoning in action. For example, have it play a guess-the-animal game with you while telling it to "think out loud".

I'm not sure if you put "think out loud" in quotes to show literally what you told it to do, or because telling the LLM to do that is a figure of speech (because it can't actually think). Your talk about 'reasoning in action' suggests it was probably not the latter, but that is how I would use quotes in this context. The LLM cannot 'think out loud' because it cannot actually think. It can only generate text that mimics the process of humans 'thinking out loud'.


It's in quotes because you can literally use that exact phrase and get results.

As far as the "it mimics" angle goes... let me put it this way: I believe the whole Chinese room argument is unscientific nonsense. I can literally see GPT take inputs, draw conclusions based on them, and ask me questions to test its hypotheses, right before my eyes in real time. And it does lead it to produce better results than it otherwise would. I don't know what constitutes "the real thing" in your book, but this qualifies in mine.

And yeah, it's not that good at logical reasoning, mind you. But its model of the world is built solely from text (much of which doesn't even describe the real world!), and then it all has to fit into a measly 175B parameters. And on top of that, its entire short-term memory consists of its 4K token window. What's amazing is that it is still, somehow, better than some people. What's important is that it's good enough for many tasks that do require the capacity to reason.


> I can literally see GPT take inputs, draw conclusions based on them, and ask me questions to test its hypotheses, right before my eyes in real time.

It takes inputs and produces new outputs (in the textual form of questions, in this case). That's all. It's not 'drawing conclusions', it's not coming up with hypotheses in order to 'test them'. It's not reasoning. It doesn't have a 'model of the world'. This is all a projection on your part onto a machine that inputs and outputs text, and whose surprising 'ability' in this context is that the text it generates plays so well on humans' capacity to fool themselves into believing that its outputs are the product of 'reasoning'.


It does indeed take inputs and produce new outputs, but so does your brain. Both are equally a black box. We constructed it, yes, and we know how it operates on the "hardware" level (neural nets, transformers, etc.), but we don't know what the function computed by this entire arrangement actually does. Given the kinds of outputs it produces, I've yet to see a meaningful explanation of how it does that without some kind of world model. I'm not claiming that it's a correct or a complicated model, but that's a different story.

Then there was this experiment: https://thegradient.pub/othello/. TL;DR: they took a relatively simple GPT model and trained it on tokens corresponding to Othello moves until it started to play well. Then they probed the model and found stuff inside the neural net that seems to correspond to the state of the board; they tested it by "flipping a bit" during activation, and observed the model make a corresponding move. So it did build an inner model of the game as part of its training by inferring it from the moves it was trained on. And it uses that model to make moves according to the current state of the board - that sure sounds like reasoning to me. Given this, can you explain why you are so certain that there isn't some equivalent inside ChatGPT?
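(To make the probing-and-intervention part concrete, here's a heavily simplified sketch of the idea in PyTorch. The sizes, the single linear probe, and the gradient-based "bit flip" are my own illustration of the technique, not the paper's actual setup.)

    import torch
    import torch.nn as nn

    # Illustrative sizes only - not the actual Othello-GPT configuration.
    HIDDEN = 512    # width of one layer's hidden states
    SQUARES = 64    # Othello board cells
    STATES = 3      # empty / black / white

    # 1. Probe: a classifier trained (separately) to read the board state
    #    out of a frozen layer's activations at the current position.
    probe = nn.Linear(HIDDEN, SQUARES * STATES)
    for p in probe.parameters():
        p.requires_grad_(False)

    def decode_board(hidden_state):
        """Predict a per-square board state from one position's hidden state."""
        logits = probe(hidden_state).view(SQUARES, STATES)
        return logits.argmax(dim=-1)

    # 2. Intervention ("flipping a bit"): nudge the hidden state until the
    #    probe reports a different state for one square, then feed the edited
    #    activations to the remaining layers and see which move comes out.
    def flip_square(hidden_state, square, new_state, lr=0.1, steps=10):
        h = hidden_state.clone().detach().requires_grad_(True)
        target = torch.tensor([new_state])
        for _ in range(steps):
            logits = probe(h).view(SQUARES, STATES)
            loss = nn.functional.cross_entropy(logits[square:square + 1], target)
            loss.backward()
            with torch.no_grad():
                h -= lr * h.grad
            h.grad.zero_()
        return h.detach()

If the model then plays a move that only makes sense on the edited board, that's the evidence that the representation isn't just decorative - the predictions causally depend on it.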


Regarding the Othello paper, I would point you to the replies from thomastjeffery (starting at two points, [1] & [2]) where someone else raised that paper in this thread [3]. I agree with their position.

[1] https://news.ycombinator.com/item?id=35162445

[2] https://news.ycombinator.com/item?id=35162371

[3] https://news.ycombinator.com/item?id=35159340


I didn't see any new convincing arguments there. In fact, they seem to be based mainly on the claim that the thing inside the network that literally looks like a 2D Othello board is somehow not a model of the game, or that the fact that the outputs depend on it doesn't actually mean "use".

In general, I find that a lot of these arguments boil down to sophistry: the obvious meaning of a word that equally obviously describes what people see in front of them gets replaced by some convoluted "actually" that serves no purpose other than to exclude the dreaded possibility that logical reasoning and world-modelling aren't actually all that special.


Describe your process of reasoning, and how it differs from taking inputs and producing outputs.


Sorry, we're discussing GPT and LLMs here, not human consciousness and intelligence.

GPT has been constructed. We know how it was set up and how it operates. (And people commenting here should be at least basically familiar with both.) No part of it does any reasoning. Taking in inputs and generating outputs is completely standard for computer programs and in no way qualifies as reasoning. People are only bringing in the idea of 'reasoning' because they either don't understand how an LLM works and have been fooled by the semblance of reasoning this LLM produces, or, more culpably, they do understand but still falsely continue to talk about the LLM 'reasoning' - either because they are delusional (fantasists) or because they are working to mislead people about the machine's actual capabilities (fraudsters).



