I don't see why the path of AI technology would jump from 1) "horribly incompetent at most things" to 2) "capable of destroying humanity in a runaway loop before we can react".
I suspect we will have plenty of intermediate time between those two steps, during which bad corporations will try to abuse the power of mediocre-to-powerful AI and will overstep, ultimately forcing regulators to pay attention. At some point before the technology becomes too powerful to stop, it will be regulated.