Consciousness is a vague term anyway, but I basically mean that for an AI to be a true threat to human existence, it would need to be aware of itself and its environment, making decisions and taking actions without being asked to do so by a human.
It feels like something simpler could cause a lot of damage but not completely drive humans to extinction. I figure if it was too simple, we'd be able to shut it down.