Here's the thing: you don't need AGI to have an alignment problem. The model is just predicting text, and that turns out to be surprisingly well aligned with what we want most of the time, but it can fail spectacularly. As the model becomes more capable and people hand it more real-world responsibility, the blast radius of its mistakes grows. That might not be an AGI-level problem, but it is still a big problem.