
We do have systems that reason. Prolog comes to mind. It's a niche tool, used in isolated cases by relatively few people. I think that the other candidates are similar: proof assistants, physics simulators, computational chemistry and biology workflows, CAD, etc.

When we get to the point where LLMs are able to invoke these tools for a user, even if that user has no knowledge of them, and are able to translate the results of that reasoning back into the user's context... That'll start to smell like AGI.
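
Roughly, as a toy sketch of that invoke-and-translate loop (every name here is a made-up stub standing in for a real model or logic engine, not an actual API):

    def llm(prompt: str) -> str:
        # stand-in for a real model: pretend it chose a tool and a formal query
        return "prolog: mortal(socrates)?"

    def run_prolog(query: str) -> bool:
        # stand-in for a real logic engine with a one-fact knowledge base
        facts = {"mortal(socrates)"}
        return query.rstrip("?") in facts

    def answer(question: str) -> str:
        plan = llm(f"Pick a tool and a query for: {question}")
        tool, query = plan.split(": ", 1)
        result = run_prolog(query) if tool == "prolog" else None
        # translate the tool's verdict back into the user's context
        return f"{question} -> {tool} says {result}"

    print(answer("Is Socrates mortal?"))  # Is Socrates mortal? -> prolog says True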

The other piece, I think, is going to be improved cataloging of human reasoning. If you can ask a question and get the answer that a specialist who died fifty years ago would've given you because that specialist was a heavy AI user and so their specialty was available for query... That'll also start to smell like AGI.

The foundations have been there for 30 years; LLMs are the paint job, the door handles, and the windows.




> We do have systems that reason. Prolog comes to mind. It's a niche tool, used in isolated cases by relatively few people. I think that the other candidates are similar: proof assistants, physics simulators, computational chemistry and biology workflows, CAD, etc.

I think OP meant a different definition of reason, because by your definition a calculator can also reason. These are tools created by humans that help them reason about things by offloading calculation for some tasks. They do not reason on their own and they can't extrapolate. They are expert systems.

http://www.incompleteideas.net/IncIdeas/BitterLesson.html


If an expert system is not reasoning, and a statistical apparatus like an LLM is not reasoning, then I think the only definition that remains is the rather antiquated one which defines reason as that capability which makes humans unique and separates us from animals.

I don't think it's likely to be a helpful one in this case.


I think he wants "reasoning" to include coming up with rules and not just following rules. Humans can reason by trying to figure out rules for systems and then seeing whether those rules work; at a large scale that is called the scientific method, but all humans do that on a small scale, especially as kids.

For a system to be able to solve the same classes of problems humans can solve, it would need to be able to invent its own rules, just like humans can.


I think that is what I mean by reason. I set the bar for reasoning and AGI pretty high.

Though, I will admit, a system that acts in a way that’s indistinguishable from a human will be awful hard to classify as anything but AGI.

Maybe I’m conflating AGI and consciousness, though given that we don’t understand consciousness and there’s no clear definition of AGI, maybe they ought to be inclusive of each other until we can figure out how to differentiate them.

Still, one interesting outcome, I think, should consciousness be included in the definition of AGI, is that LLMs are deterministic, which, if they were conscious, would (maybe) eliminate the notion of free will.

I feel like this whole exercise may end up representing a tiny, microscopic scratch on the surface of what it will actually take to build AGI. It feels like we’re extrapolating the capabilities of LLMs far too easily, from capable chatbots to full-on artificial beings.

We humans are great at imagining the future, but not so good at estimating how long it will take to get there.


Reasoning, in the context of artificial intelligence and cognitive sciences, can be seen as the process of drawing inferences or making decisions based on available information. This doesn't put machines like calculators or LLMs on par with human reasoning, but it does suggest they engage in some form of reasoning.

Expert systems, for instance, use a set of if-then rules derived from human expertise to make decisions in specific domains. This is a form of deductive reasoning, albeit limited and highly structured. They don't 'understand' in a human sense but operate within a framework of logic provided by humans.
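
A toy forward-chaining loop makes that structure concrete; the rules here are invented purely for illustration, not taken from any real system:

    # apply if-then rules to known facts until nothing new can be concluded
    facts = {"has_fever", "has_cough"}
    rules = [
        ({"has_fever", "has_cough"}, "possible_flu"),
        ({"possible_flu"}, "recommend_rest"),
    ]

    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # {'has_fever', 'has_cough', 'possible_flu', 'recommend_rest'}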

LLMs, on the other hand, use statistical methods to generate responses based on patterns learned from vast amounts of data. This isn't reasoning in the traditional philosophical sense, but it's a kind of probabilistic reasoning. They can infer, locally generalize, and even 'extrapolate' to some extent within the bounds of their training data. However, this is not the same as human extrapolation, which often involves creativity and a deep understanding of context.
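
Mechanically, that probabilistic generation is just repeated sampling of the next token from a learned distribution. A stripped-down sketch, with a hand-written probability table standing in for a trained model:

    import random

    def next_token_probs(context):
        # stand-in for a trained model: maps recent context to next-token probabilities
        table = {
            ("the", "cat"): {"sat": 0.7, "ran": 0.2, "flew": 0.1},
            ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
        }
        return table.get(tuple(context[-2:]), {"<end>": 1.0})

    def generate(context, max_tokens=5):
        out = list(context)
        for _ in range(max_tokens):
            probs = next_token_probs(out)
            token = random.choices(list(probs), weights=list(probs.values()))[0]
            if token == "<end>":
                break
            out.append(token)
        return " ".join(out)

    print(generate(["the", "cat"]))  # e.g. "the cat sat on"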


Ya, I feel like the issue is that people think an LLM will someday "wake up." No: LLMs will just be multimodal and developed to use tools, and a software ecosystem around them will end up using the LLM to reason about how to execute; basically the LLM will be the internal monologue of whatever the AGI looks like.


Agreed. I think it's more likely that we'll reach a point where their complexity is so great that no single person can usefully reason about their outputs in relation to their structure.

Not so much a them waking up as an us falling asleep.



