I think that is what I mean by reason. I set the bar for reasoning and AGI pretty high.
Though, I will admit, a system that acts in a way that's indistinguishable from a human will be awfully hard to classify as anything but AGI.
Maybe I'm conflating AGI and consciousness, though given that we don't understand consciousness and there's no clear definition of AGI, maybe we ought to treat the two as overlapping until we can figure out how to differentiate them.
Still, one interesting consequence, I think, if consciousness is included in the definition of AGI, is that LLMs are deterministic (given fixed weights, a fixed seed, and fixed sampling settings, the same input produces the same output), which, if they were conscious, would (maybe) undermine the notion of free will.
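To make that determinism point concrete, here's a toy sketch (a made-up vocabulary and scoring function, not a real LLM): the model itself is just a fixed mapping from context to a next-token distribution, so greedy decoding repeats exactly, and any run-to-run variation comes from the sampling step layered on top, not from the model.

    import hashlib
    import random

    VOCAB = ["the", "cat", "sat", "on", "mat"]

    def next_token_probs(context):
        # Stand-in for an LLM forward pass: same context -> same distribution.
        digest = hashlib.sha256(" ".join(context).encode()).digest()
        weights = [b + 1 for b in digest[:len(VOCAB)]]
        total = sum(weights)
        return [w / total for w in weights]

    def decode(prompt, steps=5, temperature=0.0, rng=None):
        tokens = list(prompt)
        for _ in range(steps):
            probs = next_token_probs(tokens)
            if temperature == 0.0:
                # Greedy decoding: always take the most likely token -> deterministic.
                idx = max(range(len(VOCAB)), key=lambda i: probs[i])
            else:
                # Sampling: output varies run to run unless the RNG is seeded.
                rng = rng or random.Random()
                scaled = [p ** (1.0 / temperature) for p in probs]
                idx = rng.choices(range(len(VOCAB)), weights=scaled)[0]
            tokens.append(VOCAB[idx])
        return tokens

    print(decode(["the"]))                   # identical on every run
    print(decode(["the"], temperature=1.0))  # varies unless you pass a seeded rng

So "deterministic" here really means the weights define a fixed function; whether the output is reproducible depends on how you decode from it.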
I feel like this whole exercise may end up representing a tiny, microscopic scratch on the surface of what it will actually take to build AGI. It feels like we're extrapolating far too easily from capable chatbots to full-on artificial beings.
We humans are great at imagining the future, but not so good at estimating how long it will take to get there.