Fair. OpenAI totally use those arguments to launder their plans, but that saga has been more Silicon Valley exploiting longstanding rationalist beliefs for PR purposes than rationalists getting rich...
Eliezer did once state his intentions to build "friendly AI", but seems to have been thwarted by caring more about his first-order reasoning about how AI decision theory should work than about building something that actually did work, even after others figured out the latter bit.