So I propose the following Musk supremacy criterion.
Suppose that a wealthy and powerful human (such as Elon Musk) were to suddenly obtain the exact same sinister goals as the hypothetical superintelligent AI in question. Suppose further that this human were able to convince/coerce/bribe another N (say 1000) humans to do his bidding.
A BadOutcome is said to be MuskSupreme if it could be accomplished by the superintelligent AI, but not by the suddenly-evil Musk and his accomplices.
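In symbols, this is just a restatement of the above (where $\mathrm{Achievable}$ is an informal predicate I'm introducing here, not something defined earlier):

$$\mathrm{MuskSupreme}(o) \;:\Longleftrightarrow\; \mathrm{Achievable}(\mathrm{AI},\, o) \;\wedge\; \neg\,\mathrm{Achievable}(\mathrm{Musk} + N \text{ accomplices},\, o)$$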
Obviously[citation needed] it is only the MuskSupreme BadOutcomes we care about. Do there exist any?
For example, 1000 people (but only if you get to choose which 1000) are enough to take absolute control of both the US Congress and the Russian State Duma (or a supermajority of those two plus the Russian Federation Council), which gives them the freedom to pass arbitrary constitutional amendments… so your scenario classifies "gets crowned King of the USA and Russia, 90% of the global nuclear arsenal is now their personal property" as something we don't care about.