If some bright young person (or more realistically some sequence of bright young people building on each other's work) were to devise a plausible method to control an AI such that it stays under human control even if it becomes much more cognitively capable than the most capable people, then I would be OK with going ahead with AI research.
I don't think most readers here realize just how little control the AI labs have over their creations and how reliant they are on trial and error for implementing what control they do have. Of course, as soon as it becomes critical to keep an AI under control (namely, when its capabilities start to exceed human capabilities) is exactly when a lab will stop being able to rely on trial and error: specifically, the next time the lab makes an unsuccessful try, the AI will tend to arrange things so that the lab doesn't get any more tries.