
> What is reasoning if not the ability to assess "if this" and conclude "then that"?

A lot of things. There are entire fields of study which seek to define reasoning, breaking it down into areas that include logic and inference, problem solving, creative thinking, etc.

> If you can do it with logic gates, who's to say you can't do it with transformers or one of the newer SSMs? And who's to say it can't be learned from data?

I'm not saying you can't do it with transformers. But what's the basis of the belief that it can be done with a single transformer model, and one trained on language specifically?

More specifically, the papers I've read so far that investigate the reasoning capabilities of neural network models (not just LLMs) seem to indicate that they're capable of emergent reasoning about the rules governing their input data — for example, reverse-engineering exact equations (not just approximations of them) from input/output pairs. Extrapolating from these studies would suggest that large language models can emergently learn the rules governing language, but not necessarily much beyond that.
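To make that concrete, here's a toy sketch (my own illustration, not taken from any of those papers) of what "recovering the exact rule rather than an approximation" can look like: fit to input/output pairs, then check whether snapping the fit to exact coefficients reproduces the data perfectly. The hidden function, the degree bound, and the integer-snapping step are all assumptions chosen just for this example.

    # Toy illustration: recover an exact rule from input/output pairs.
    # The hidden equation, degree bound, and integer rounding are
    # illustrative assumptions, not anyone's actual method.
    import numpy as np

    def hidden_equation(x):
        return 3 * x**2 - 2 * x + 5   # the "rule" we only see through samples

    x = np.linspace(-5, 5, 50)
    y = hidden_equation(x)

    coeffs = np.polyfit(x, y, deg=3)           # deliberately over-parameterized fit
    recovered = np.round(coeffs).astype(int)   # snap to exact integer coefficients

    if np.allclose(np.polyval(recovered, x), y):
        print("recovered exact rule:", recovered)   # [0, 3, -2, 5]
    else:
        print("only an approximation:", coeffs)

The papers use far more sophisticated setups, but the distinction is the same: an approximate fit versus actually landing on the generating rule.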

It makes me think of two anecdotes:

1. How many times have you heard someone say, "I'm a visual learner"? They've figured out for themselves that language isn't necessarily the best way for them to learn concepts to inform their reasoning. Indeed, there are many concepts that language is an inefficient, if not insufficient, way to convey. The world's shortest published research paper is a case in point: https://paperpile.com/blog/shortest-papers/.

2. When I was in school, I noticed that for many subjects and tests, sufficient rote memorization became indistinguishable from actual understanding. Conversely, better understanding of underlying principles often reduced the need for rote memorization. Taken to the extreme, there are many domains in which sufficient memorization makes actual understanding and reasoning unnecessary.

Perhaps the debate over whether LLMs can reason is a red herring, given that their capacity to memorize surpasses any human's by many orders of magnitude. Perhaps this is why they seem able to reason, especially since our only indication so far is the language they output. The most useful applications are typically those that trigger our own reasoning more efficiently, rather than relying on theirs (which may not exist).

I think the impressiveness of their capabilities is precisely what makes exaggeration unnecessary.

Saying LLMs develop emergent logic and reasoning, I think, is a stretch. Saying it's "within the realm of possibility that some limited form of reasoning might actually be part of the emergent behavior" sounds more realistic to me, though rightly less sensational.

EDIT:

I also think it's fair to say that the ELIZA program had exactly the limited amount of reasoning that was programmed into it. However, the point of the ELIZA study was that it showed people's tendency to overestimate how much reasoning is happening, based on their own inferences. This matters because it leads us to overestimate the program's generalizability, which can have unintended consequences as reliance on it grows.



