I think it's at least in part FOMO from the VCs - maybe a bit like the dot-com mania - as well as their not understanding the technology and therefore having to take on faith the breathless hype and cherry-picked demos from the likes of Sam Altman.
If a company can make ASI/AGI I think it will be worth a trillion dollars extremely easily. Probably more. I think OpenAI is probably the most likely to make it happen right now.
Truly human-level AGI, able - amongst other things - to replace many jobs (where a physical presence is not required or helpful), would obviously be very valuable, but LLMs do not appear to be on the path to it, certainly not anytime soon (and not without extending the architecture with new capabilities).
However, OpenAI don't even seem to have the goal of achieving this type of truly human level AGI - their own definition of AGI is along the lines of "able to perform most economically valuable tasks in a mostly autonomous fashion".
Karpathy himself believes that neural networks are perfectly plausible as a key component of AGI. He has said they don't need to be superseded by something better; rather, everything else around them (especially infrastructure) needs to improve. His is one of the most informed opinions in the world on the subject, so I tend to trust what he says.
I think the consensus of people with the most valid opinions is that AGI is actually not that far off, definitely in our lifetimes, and likely in the next decade.
I'm personally not concerned if we recreate a human in a computer, in fact we might be better off not doing that. If we can replace nearly all manual labor, all software development, and all driving, then I would be really happy with the state of automation of "AGI" (societal ramifications aside).
If Karpathy truly believes that LLMs could be extended into trillion-dollar AGI, and has a workable plan for how this might be achieved, then why is he not raising money to do it, or working at a company pursuing AGI? His own background is more in vision (both his PhD and his time at Tesla) than SOTA transformers...
It's interesting to see what Noam Shazeer (largely responsible for the transformer design) did with it after leaving Google: not pursuing AGI but instead entertainment chatbots (character.ai).
AGI may well be achieved in our lifetimes (next 50 years), but 10 seems short considering that it's already been 8 years since "attention is all you need", and so far that's all we've got! Maybe o1 is a baby step beyond transformers (or is it really just an agent?), but I think we need multiple very large steps (transformer-level innovation) to get closer to AGI.
If one wants to appeal to authority, then how about LeCun: privy to SOTA LLM research at Meta, yet confident this is entirely the wrong approach to achieving animal/human intelligence.
There is some reality to them - they are extrapolations from the latest investment round. Say OpenAI had been valued at $100B, but some investors can be convinced to buy a 10% stake for $15B; the company is then "valued" at $150B (10% = $15B => 100% = $150B).
It's a bit like the tail wagging the pig though, since reality is probably more like a 1% (tail) share being bought for $1.5B, and that determining the "value" of the remaining 99% (the whole pig).
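The extrapolation described above is simple enough to sketch in a few lines. This is a minimal illustration using the hypothetical numbers from the comment (a 10% stake for $15B, or a 1% stake for $1.5B), not any real deal terms:

```python
def implied_valuation(stake_fraction: float, price_paid: float) -> float:
    """Whole-company 'valuation' implied by extrapolating the price
    paid for a fractional stake to 100% of the company."""
    return price_paid / stake_fraction

# 10% stake bought for $15B => company "valued" at $150B
print(implied_valuation(0.10, 15e9))   # 150000000000.0

# The tail-wagging-the-pig case: a 1% stake for $1.5B implies
# the exact same $150B figure for the other 99%
print(implied_valuation(0.01, 1.5e9))  # 150000000000.0
```

The point being that the smaller the stake actually traded, the more the headline number rests on extrapolation rather than on money that changed hands.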