I agree GPT isn't grounded and that it's a problem, but that's a strange point to raise against AlphaCode. AlphaCode is grounded by actual code execution: its coding experience is no less real than a person's.
AlphaGo is grounded because it has experienced Go, and it has a very good conception of what Go is. I similarly expect OpenAI's formal math effort to succeed. Doing math (e.g. choosing a problem and posing a conjecture) benefits from real-world experience, but proving a theorem really doesn't. Writing up a proof for human readers does, but that's a separate problem.
I think software engineering requires real-world experience, but competitive programming probably doesn't.