Given the existence of super-human narrow AI, the interesting property is generality, not intelligence. But I don’t think it’s useful to call a sub-human cat-level general AI an AGI.
If we have AI as general as an animal, ASI (superintelligence) is probably imminent, because the architecture of human intelligence probably isn't very different from a cat's; the scale is just bigger.
I think that very well could be true; it depends on how that generality was obtained.
I would not be surprised if a multi-modal LLM (basically the current architecture) could be wired up to be as general as a cat at current parameter counts, while the spark of human creativity (AGI/ASI) still ends up being far away.
But if you made a new architecture that solved the generalization problem (i.e., baking in a world model, a self-symbol, etc.) and it only reached cat intelligence, then it would seem very likely that human-level intelligence was soon to follow.