AGI needs to be able to generalize to real-world tasks like self-driving without task-specific help from its creators.
But the current LLM paradigm separates learning from interacting: the learning phase is pretraining on huge volumes of text, and interaction only happens afterward. It's possible to bolt on a specific capability, like, say, a chess engine, but at that point you're building something different, not an LLM.
What's your evidence for this?