This really doesn't follow. True AGI would be general, but that doesn't necessarily mean it would be smarter than people, especially the kind of people who work as top researchers at OpenAI.
I don’t see why it wouldn’t be superhuman if there’s any intelligence at all. It’s already superhuman at memory, paying attention, image recognition, languages, etc. Add cognition to that and humans basically become pets. Trouble is, nobody has the foggiest clue how to add cognition to any of this.
It is definitely not superhuman, or even above average, when it comes to creative problem solving, which is the relevant thing here. That seems to be something that scales with model size, but if so, any gains here are going to be gradual, not sudden.
I’m actually not so sure they will be gradual. It’ll be like with LLMs themselves, where we went from shit to gold in the span of a month when GPT-3.5 came out.