The Singularitarians were already breathlessly worrying 20+ years ago, when AI was absolute dogshit - Eliezer once claimed Doug Lenat had been incautious in launching Eurisko because it could have gone through a hard takeoff. I don't think it's just an act to launder their evil plans, none of which worked at the time anyway.
Fair. OpenAI totally use those arguments to launder their plans, but that saga has been more Silicon Valley exploiting longstanding rationalist beliefs for PR purposes than rationalists getting rich...
Eliezer did once state his intention to build "friendly AI", but seems to have been thwarted by caring more about first-order reasoning on how AI decision theory should work than about building something that actually worked, even once others figured out the latter bit.