Externally there's no rigorous definition of what constitutes AGI, so I'd guess internally it's not one monolithic thing they're targeting either. Just to get started, you'd need everyone to take a class on the nature of intelligence and all its different kinds. There's undoubtedly internal dissent about the best way to achieve the chosen milestones along the way, as well as disagreement over whether those are even the right milestones. Think tactical disagreement, not strategic. If you didn't think AGI were ever possible with LLMs, would you be there in the first place?
Well, Sam Altman has a clear definition of ASI, and AGI is something they've been thinking about for a long time, so presumably they have some accepted definition of it.
My question was whether everyone there believes in this vision that ASI is "close", and more broadly whether this path leads to AGI.
> If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?
People can have all sorts of reasons for working at a company. They might want to work on cutting-edge tech with smart people and near-infinite resources, or be there for the money or prestige, without necessarily buying into the overarching vision. I'm just wondering whether such a profile exists within OpenAI, and if so, how it is handled.