I read the essay a few times, and it is not clear to me what Sam Altman is talking about.
I think answers to these questions will help me understand Sam's essay. Can you please help me?
1. What is superhuman machine intelligence (SMI) in the context of this essay? [edited to add the qualifier]
2. What is the danger from SMI to humans and other current life forms that we are concerned about?
It seems to me that concerns about SMI can be classified into two categories: (a) dangers to "our way of life (work, earn, spend)" and (b) dangers to the existence of the human race. Are we talking about both of these categories? Perhaps other categories?
3. What anecdotes (or evidence) is leading to this concern?
1. Machine intelligence, traditionally called artificial intelligence, which surpasses human intelligence.
2. Your category (b) is generally the primary concern in these types of discussions.
3. The anecdote of the progress of humanity. Compare the impact of human life/intelligence vs. evolutionary relatives like chimpanzees. I do not know that chimps have hunted species out of existence, for instance, but people have. We have also incidentally wiped out populations in efforts to make our lives better (via things like leveling forests, etc.)
To be fair, I don't think the reason chimps haven't hunted something to extinction stems from a built-in morality or sense of balance with nature.
I'm not trying to put words into your mouth. I was just thinking of some of the new research showing that primates of all kinds actually commit organized violence that mirrors human violence in many, many ways, including war and capital punishment. (It's not a one-for-one thing, but similar.)
Yeah, I'm not talking about morality at all here. Our technological prowess, resulting from the application of our intelligence, has enabled us to wipe out entire species.
Thanks. Seems to me these anecdotes have to do with humans.
So is the implicit assumption that machines will do what humans are doing ('bad' things) but at several orders of magnitude faster and without the ability to comprehend longer-term consequences of their actions any more than humans do at the present time?
Sort of. 'Bad' here is of course an extremely subjective term. And it may not be the case that the machines do not understand the longer-term consequences of their actions; they could understand full well, but they could know that the preservation of humanity is not important (for whatever reason). So, we might not matter to them. We matter to us though, so that would be a problem for us as things stand now.
SMI is advanced decision making software that literally eats the world. Think of HAL killing its crew so they don't jeopardize the mission, but on a global scale. Like a stock trading algorithm that determines the best way to maximize profits is to wipe out all human life on the planet or something. --EDIT-- See http://wiki.lesswrong.com/wiki/Paperclip_maximizer
> What anecdotes (or evidence) is leading to this concern?
Books, movies, pop culture... plus when all you have are first-world problems, you gotta find something to worry about.