Given that the link contains dozens of articles with read times over 10 minutes, there is no way you engaged with the problem enough to dismiss it so casually with your own hand-waving. Ignoring that fact, however, we can just look at what Bing and ChatGPT have been up to since release.
Basically immediately after release, both models were "jailbroken" in ways that allowed them to do undesirable things OpenAI never intended, whether that's giving recipes for cooking meth or going on unhinged rants and threatening to kill the humans they are chatting with. In AI safety circles you would call these models "unaligned": they are not aligned to human values and do things we don't want them to do.

Here is the problem: as impressive as these models may be, I don't think anyone believes they are really at human levels of intelligence or capability; maybe they're barely at something like mouse-level intelligence. Even at that low level of intelligence, these models are unpredictably uncontrollable. So we haven't even figured out how to make these "simple" models behave in ways we care about. Now project forward to GPT-10, which may be at human level or higher, and think about what it may be able to do. We already know we can't control far simpler models, so this one will likely be even harder to control, and since it is much more powerful, it is much more dangerous.

Another problem is that we don't know how long we have before we get to a GPT-N that is actually dangerous, so we don't know how long we have to make it safe. Most serious people in the field think building human-level AI is a very hard problem, but that building a human-level AI that is safe is another step up in difficulty.
These models are uncontrollable because they're simple black box models. It's not clear to me that the kind of approach that would lead to human-level intelligence would necessarily be as opaque, because -- given the limited amount of training data -- the models involved would require more predefined structure.
I'm not concerned, absent significant advances in computing power far beyond the current trajectory.