may not be a bad thing if development of AI is pursued by a 'red team' vs. a 'blue team' - two diverging approaches that might independently generate different solutions to universal problems. Then again, it could be the ultimate bad thing if one were to see the other as an existential threat that must be eliminated, making it a rational decision to strike first and finish it off.