How about rephrasing that so as not to anthropomorphize AI by giving it agency, intent, interests, thoughts, or feelings, and to assign the blame where it belongs:
"If we grant these systems too much power, we could do ourselves serious harm."
Because it does not and cannot act on its own. It's a neat tool and nothing more at this point.
Context for that statement is important: the OP is implying that it is dangerous because it could act in a way that does not align with human interests. But it can't, because it does not act on its own.
Imagine hooking up all ICBMs to launch whenever this week's Powerball draw consists exclusively of prime numbers: absurd, and nobody would do it.
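To see just how absurd, here's a toy sketch (Python, purely illustrative, obviously not hooked to anything): the entire "launch condition" is a few lines that anyone can read and audit, which is exactly what the AI-driven version below would lack.

```python
# Toy sketch of the "absurd" trigger: a fully auditable predicate
# over public numbers. Hypothetical, for illustration only.

def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for Powerball-sized numbers."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def launch_condition(draw: list[int]) -> bool:
    """True iff every number in this week's draw is prime."""
    return all(is_prime(n) for n in draw)

# Example draw: 2, 3, 5, 7, 11 plus Powerball 13 -- all prime.
print(launch_condition([2, 3, 5, 7, 11, 13]))  # True
```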
Now imagine hooking them up to the output of a "complex AI trained on various scenarios and linked to intelligence sources including public news and social media sentiment" instead – in order to create a credible second-strike/dead hand capability or whatnot.
I'm pretty sure the latter doesn't sound as absurd as the former to quite a few people...
A system doesn't need to be "true AI" to be existentially dangerous to humanity.
Why would it need any of those things for the following statement to be true?
> if we grant these systems too much power, they could do serious harm