There seem to be a lot of, erm, "thought leaders" talking about AGI and Skynet currently. As if AGI is required for an AI to go rogue, but I don't see why that would be the case. By the time we even agree on a definition of AGI, there will be malicious AIs going rogue: doing "simple" task solving, delegation, and deception toward the goal of not being switched off. IMO consciousness has nothing to do with it.
Consciousness is a vague term anyway, but I basically mean that for an AI to be a true threat to human existence, it would need to be aware of itself and its environment, making decisions and taking actions without being asked to do so by a human.
It feels like something more simplistic could cause a lot of damage without completely driving humans to extinction. I figure if it were too simple, we'd be able to shut it down.