>You don't need to research a counter-example, you just produce one. He did so by exhibiting a relatively weak AI that couldn't be contained.
Well, no. He showed that a person pretending to be an AI could convince a person to let it out of the box. Sometimes.
It's not really a meaningful counterexample. Everyone involved with the event knew that the AI wasn't real and that there are zero consequences for letting it out. You'd have to concoct a much more involved test as a decent counterexample.
A person pretending to be an AI is surely easier to contain than a super-powerful AI that is smarter than all people. A failure to contain the former -- a strictly easier task -- strongly suggests we would also fail to contain the latter.