I think he wants "reasoning" to include coming up with rules, not just following them. Humans reason by trying to figure out the rules of a system and then checking whether those rules hold up; at large scale we call that the scientific method, but all humans do it on a small scale, especially as kids.
For a system to solve the same classes of problems humans can solve, it would need to be able to invent its own rules, just as humans do.
I think that is what I mean by reasoning. I set the bar for reasoning and AGI pretty high.
Though, I will admit, a system that acts in a way that's indistinguishable from a human will be awfully hard to classify as anything but AGI.
Maybe I'm conflating AGI and consciousness, but given that we don't understand consciousness and there's no clear definition of AGI, maybe the two ought to be treated as inclusive of each other until we figure out how to differentiate them.
Still, one interesting consequence of including consciousness in the definition of AGI is that an LLM's output is a deterministic function of its input (any randomness comes from the sampling step), which, if such a system were conscious, would (maybe) eliminate the notion of free will.
I feel like this whole exercise may end up representing a tiny, microscopic scratch on the surface of what it will actually take to build AGI. It feels like we're extrapolating the capabilities of LLMs far too easily, from capable chatbots to full-on artificial beings.
We humans are great at imagining the future, but not so good at estimating how long it will take to get there.