Strong AI is not necessarily a bad thing. Instead of worrying about questions 1 & 2, we could be thinking less about constraining and competing with AI and more about cooperation and goal-orientation: see, e.g., the work of Yudkowsky (https://intelligence.org/files/CFAI.pdf) or some of the writing by Nick Bostrom (http://nickbostrom.com/). Goal-orientation is preferable to capability constraint because the potential benefits are far larger.
Tl;dr: I, for one, welcome our robot overlords (so long as they don't behave like our robot overlords).
I agree that superintelligence could bring enormous benefits to humanity, but the risks are very high as well. They are in fact existential risks, as detailed in Bostrom's book Superintelligence.
That is why we need to invest much more research effort in Friendly AI and trustworthy intelligent systems. People should consider contributing to MIRI (https://intelligence.org/), where Yudkowsky, who helped pioneer this line of research, works as a senior fellow.