
Here is GPT-4o in "reasoning" mode answering the question "Are LLMs capable of reasoning?"

> It depends on how you define "reasoning."

> LLMs like me can perform certain types of reasoning, such as:

> Pattern recognition & analogy – Recognizing relationships between concepts and applying similar patterns in new contexts.

> Deductive reasoning – Applying rules to specific cases (e.g., "If all humans are mortal and Socrates is a human, then Socrates is mortal").

> Inductive reasoning – Generalizing from specific examples (though often with statistical biases).

> Common-sense reasoning (to an extent) – Inferring likely outcomes based on broad training data.

> However, there are limitations:

> No true understanding – LLMs don’t "know" things like humans do; they predict based on probabilities.

> Struggles with multi-step logic – Complex reasoning that requires deep memory or long-term planning is difficult.

> Lack of real-world interaction – Without direct experience, reasoning is limited to text-based patterns.

> Confidence: 90%

> (Strong evidence suggests LLMs can perform certain types of reasoning, but they are not yet comparable to human-level reasoning.)

Would you agree with that analysis? If so, then LLMs are indeed capable of reasoning, in some ways.



It fails at deductive reasoning, though. Pick a celebrity with non-famous children who don't obviously share their last name. If you ask it "who is the child of <celebrity>?", it will get it right, because this is in its training data, probably Wikipedia.

If you ask "who is the parent of <celebrity-child-name>", it will often claim to have no knowledge about this person.

Yes, sometimes it gets it right, but sometimes it doesn't. Try a few celebrities; a rough sketch of this check follows below.
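
For concreteness, here is roughly how you could run that check yourself. This is only a sketch: it assumes the openai Python package and an OPENAI_API_KEY in the environment, and the model name and the (celebrity, child) names are placeholders to fill in.

    # Sketch of the forward/backward check described above.
    # Assumes the openai Python package and an OPENAI_API_KEY;
    # the model name and the name pairs are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def ask(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed model name
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    # Pairs of (celebrity, child) where the child is not famous and
    # doesn't obviously share the celebrity's last name.
    pairs = [
        ("<celebrity>", "<celebrity-child-name>"),
    ]

    for celebrity, child in pairs:
        print(ask(f"Who is the child of {celebrity}?"))  # usually correct: it's in the training data
        print(ask(f"Who is the parent of {child}?"))     # often claims to have no knowledge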

Maybe the disagreement is about this?

Like if it gets it right a good amount of the time, you would say that means it's (in principle) capable of reasoning.

But I say that if it gets it wrong a lot of the time, that means 1) it's not reasoning in situations where it gets it wrong, and 2) it's most likely also not reasoning in situations where it gets it right.

And maybe you disagree with that, but then we don't agree on what "reasoning" means. Because I think that consistency is an important property of reasoning.

I think that if it gets "A is parent of B implies B is child of A" wrong for some celebrity parents but not for others, then it's not reasoning. Reasoning would mean applying this logical construct as a rule, and if it's not consistent at that, it's hard to argue that it is in fact applying the rule rather than doing who-knows-what that happens to give the right answer some of the time. The toy sketch below illustrates the contrast.
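
To put the consistency point another way: if the model were actually applying the rule to a stored fact, both directions would be answered from the same fact and could not come apart. A toy sketch of what that looks like (the names are placeholders, not real data):

    # Toy sketch: one stored fact answers both directions, so the two
    # questions cannot disagree. Names are placeholders, not real data.
    parent_of = {"<celebrity-child-name>": "<celebrity>"}

    def parent(person: str):
        return parent_of.get(person)

    def children(person: str):
        # B is a child of A exactly when A is the parent of B.
        return [c for c, p in parent_of.items() if p == person]

    assert parent("<celebrity-child-name>") == "<celebrity>"
    assert "<celebrity-child-name>" in children("<celebrity>")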



