> Based on Sam's statement, they seem to be making a bet that accelerating AI progress now will help solve the control problem faster in the future. This strikes me as an extremely dangerous bet, because if they are wrong, they are substantially reducing the time the rest of the world has to solve the problem, potentially closing that window enough that it won't be solved in time at all, and then foom.
Moreover, now that they've started the arms race, they can't stop. There are too many other companies joining in, and I don't think it's plausible they'll all hold to a truce even if OpenAI wants to.
Yeah, I mean, a significant problem is that many (most?) people, including those in the field, until very recently thought AIs with these capabilities were many decades if not centuries away. Now that people can see the light at the end of the tunnel, there is a massive geopolitical and economic incentive to be the first to create one. We think OpenAI vs. DeepMind vs. Anthropic vs. etc. is bad, but wait until it's the US vs. China and we stop talking about billion-dollar investments in AI research and get into the trillions.
Scott's Exxon analogy is almost too bleak to really believe. I hope OpenAI is just ignorant and not intentionally evil.
I assume you've read Scott Alexander's take on this? https://astralcodexten.substack.com/p/openais-planning-for-a...