I think you're confused about the point of the test. The point is that an AI will be clever. Like, unimaginably clever and manipulative. Under the limited circumstances of interested people who know they are talking to Eliezer, maybe you're right that whatever he says would only work on those people. But when you're dealing with an actual superintelligence, all bets are off. It will lie, trick, threaten, and manipulate, thinking millions of steps ahead with a branching tree of alternatives as ploys either work or don't.
I'm at a bit of a loss to convey the scope of the problem to you. I get that you think it would just stay in the box if we don't let it out, and it's as simple as being security conscious. I don't know what to say to that right now, except I think you're drastically misjudging the scope of the problem, and drastically underestimating the size of the yawning gulf between our intelligence level and this potential AI's.
As for not letting scientists guard the room, you might enjoy this: https://vimeo.com/82527075