Machine intelligence, part 1 (samaltman.com)
180 points by oskarth on Feb 25, 2015 | 172 comments


I have yet to meet a serious AI researcher who worries about AI ending the human race. At every AI conference I've been to lately, some guy would inevitably ask the speaker the same question: "do you think we should be concerned about superhuman AI?". The answer was always the same, for instance from Andrew Ng at the Deep Learning Summit a few weeks ago: "dude, stop it. That's such a distraction".

> One of my top 4 favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to makes itself undetectable.

Since you are so knowledgeable about the unknowable, may I ask, Sam, do you think angels are male, or are they female? It has been a long standing question among the sort of people who like to predict the coming end of the world.

Seriously, billionaires and pundits warning us about our impending AI doom is such a distraction. Does AI have its dangers? Yes. And nobody talks about them. The danger of AI is that it will put increasing power in the hands of those who have the data and the know-how, i.e. large corporations and governments: the power to usefully mine the troves of data they have on every citizen, both on a micro level and a macro level, to understand what people are doing, what they are thinking, and what they are going to do next. And ultimately, to control what they think (to give you an idea, Facebook can already influence your mood by selecting what goes into your newsfeed).

First comes intelligence, then prediction, and finally, control.

Yes, AI is something we should probably be worried about, in the same way that we should have discussed the privacy implications of the Internet years before the NSA files. But a terminator-like end of the world scenario is not a concern. If you are clueless about a topic, please refrain from making grand statements about its future.

A great quote from Neil Gershenfeld: "The history of technology advancing has been one of sigmoids that begin as exponentials."
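Gershenfeld's point is easy to check numerically. Here is a minimal sketch (plain Python, with made-up growth parameters) showing that a logistic curve is nearly indistinguishable from a pure exponential in its early phase - which is exactly why extrapolating from early growth is so treacherous:

  # Minimal sketch: a logistic (sigmoid) curve vs. a pure exponential.
  # Early on the two are nearly identical; the difference only shows up
  # as the sigmoid approaches its carrying capacity K.
  import math

  K, r, x0 = 1000.0, 0.5, 1.0      # carrying capacity, growth rate, start (made-up numbers)

  def logistic(t):
      return K / (1 + (K / x0 - 1) * math.exp(-r * t))

  def exponential(t):
      return x0 * math.exp(r * t)

  for t in range(0, 21, 4):
      print(t, round(logistic(t), 1), round(exponential(t), 1))
  # At small t the two columns agree closely; by t=20 the exponential has
  # blown far past K while the logistic has flattened out.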


> I have yet to meet a serious AI researcher who worries about AI ending the human race. At every AI conference I've been to lately, [the answer was always] "dude, stop it. That's such a distraction".

Sure, it's a distraction at AI conferences. It's a distraction from the daily, monthly, yearly even, work of AI.

AI researchers have a million problems to solve in the near term, which are hard and interesting. Speculating about the farther future is perhaps fun to do, but doesn't get papers published, and doesn't help with any of the practical problems AI researchers have today.

That doesn't mean it isn't worth thinking about the long-term dangers of AI. Someone should, because while the danger might be unlikely to materialize soon, if it does eventually happen, it could end us. The same is true of a human-engineered pandemic virus - hopefully unlikely, but someone should be preparing us. We have to handle some worst cases.

AI researchers are not necessarily the people interested in thinking about the long-term dangers of AI, because they focus on the field as it stands today, not where it could be in a generation. Of course, their input is crucial to making specific guesses about where the field is going. But calculating the dangers of AI is not just a matter for AI researchers, just like the benefits of medicine are not a purely medical issue (the cost and availability of medicine play a huge part in how effective medicine is, at a societal level, and are things not under the control of doctors).


Andrew, in that case, is definitely right in that long-term AI safety has almost nothing to do with near-term AI implementation.

More generally, when I see people assert that these people don't know what they're talking about, it pretty much always seems like a case of reference class confusion. People seem to expect that AI safety research must look exactly like AI implementation research, otherwise it's illegitimate. (Kind of like how biology research must look exactly like chemistry research, otherwise it's illegitimate.)

This is a game theory problem; it might be informed by knowledge of machine learning, just as a broad understanding of chemistry helps in much biology research, but they're not at the same level of abstraction. Insisting that anyone who wants to study a problem that merely touches on your own research interests must do it in exactly the same way you do it and focus on exactly the same things, or else be dismissed as a hack and a crackpot, reeks of narrow-mindedness.


This is a good point I completely missed. Reminds me of the relationship of Karl Popper, qua philosopher of science, to science itself.


How about Shane Legg (One of the cofounders of DeepMind)?

http://lesswrong.com/lw/691/qa_with_shane_legg_on_risks_from...

Quote:

Q6: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?

Shane Legg: It's my number 1 risk for this century, with an engineered biological pathogen coming a close second (though I know little about the latter).


Shane Legg is known for being a co-founder of DeepMind, in a business role (as I understand). He's a complete nobody as a researcher (is he even an AI researcher? I would be surprised).

The big names of deep learning have all taken a vocal stance against the recent end-of-the-world punditry (most notably Yann LeCun and Andrew Ng). Also notable: roboticist Rodney Brooks http://www.rethinkrobotics.com/artificial-intelligence-tool-...


> Shane Legg is known for being a co-founder of DeepMind, in a business role (as I understand)

You are quite mistaken. He leads the applied AI team there, and has significant history in research.

http://www.vetta.org/publications/


Shane Legg isn't just a business guy - his career has been pretty much focused on AI research since uni - http://www.vetta.org/about-me/


Here's machine learning expert Michael Jordan on the issue: http://spectrum.ieee.org/robotics/artificial-intelligence/ma...


I'm afraid of making a middlebrow dismissal but I'm going to post it anyway, in hopes that someone just skimming would not be misled.

The question put to Michael Jordan is what he thinks of the "concept of the singularity", and he dismisses it out of hand.

Crucially, he does this after confessing that no one in his social circle has talked about this issue with him, and without saying anything about what form of Singularity he is dismissing.

I mention this because oftentimes I see people appealing to authority, quoting them on an issue, when the authority in question is not even talking about the same issue!

I worry that my credence in all this superintelligence stuff only stems from familiarity with the arguments and the complete inability of people to engage with the actual argument. Some of the 'rebuttals' in this comments section have answers in Sam's article for crying out loud!


Since you seem to be well-versed in this world, do you know what reputation Nick Bostrom has in these circles?


The only times I've heard him mentioned, the impression was negative: that he didn't understand any of the actual science.

People hear "machine learning" and they think it is about machines that know how to think. Machine learning is actually just optimization of high dimensional functions. If this language were used it wouldn't sound as sexy, but no one would think machines are going to take over the world.

AI isn't magic. It's really just clever search techniques and mathematical optimization.
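For readers who haven't seen the field from the inside, here is roughly what "optimization of high dimensional functions" looks like; a toy sketch in plain Python (the dimension, learning rate, and step count are arbitrary illustrative values, not any real system):

  # Toy sketch of "optimization of high dimensional functions":
  # gradient descent minimizing f(w) = sum((w_i - t_i)^2) in 1000 dimensions.
  import random

  DIM, LR, STEPS = 1000, 0.1, 200                # arbitrary illustrative values
  target = [random.gauss(0, 1) for _ in range(DIM)]
  w = [0.0] * DIM                                # start at the origin

  def loss(w):
      return sum((wi - ti) ** 2 for wi, ti in zip(w, target))

  for _ in range(STEPS):
      # the gradient of f in coordinate i is 2 * (w_i - t_i); step against it
      w = [wi - LR * 2 * (wi - ti) for wi, ti in zip(w, target)]

  print(loss(w))                                 # effectively zero after 200 steps

Real systems are vastly more elaborate, but the basic shape - a loss function, a gradient, repeated small updates - is the same, which is the point being made above.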


Yes, but intelligence isn't magic either.


There are still many, many things that we don't understand about the brain. Even the things we think we understand, we're not always 100% sure of. Recreating an actual intelligence will be difficult.


> Yes, but intelligence isn't magic either.

What's your point? Nobody said it's magic. The fact that it isn't magic (and that its tremendous complexity far surpasses our current ability to understand it) supports the notion that it won't suddenly spring into existence. If we placed some primordial sludge in a petri dish overnight, we wouldn't worry that a sentient creature would have materialized by morning. And if we program a computer to optimize numerical functions, there is just as little evidence (perhaps less) to suggest that the computer will somehow gain sentience.


There's a very fine line between AI futurist and best-guess scifi writer. Most "AI thought leaders" are scifi writers, not technical researchers. They take preconditions, generate a story, think how it could happen given plausible technology, then market that as soon-to-be-fact.

It's an entertaining society and endlessly fun to read, but still complete fiction based on internal brain states of individuals and not necessarily based on real world interactions.

Also see: Eliezer Yudkowsky — great writer, fun to read, but largely scifi thought experiments masquerading as research.



Stuart Russell, AI professor at UC Berkeley and co-author of the Artificial Intelligence: A Modern Approach textbook, cares: http://edge.org/conversation/the-myth-of-ai#26015


His thoughts seem quite sensible, although extremely abstract and theoretical. This is lightyears away from the Elon Musk / Bill Gates / etc. fear-mongering.


So do you take his thoughts seriously or not? Do you now think AI researchers are engaging in unethical behavior since they don't care about AI safety?


I don't understand.

> Since you are so knowledgeable about the unknowable

If knowing the unknown were a requirement for exploring it through playful thought, forming opinions and refining them over time through discussion, reflection and, if possible, testing, well... we'd still be chunking rocks. I think it is unfair to compare expressing concerns about the implications of AI to debating the gender of angels. No one is trying to build angels...

> for instance from Andrew Ng at the Deep Learning Summit a few weeks ago: "dude, stop it. That's such a distraction".

Ya, this makes sense in the context of the summit. But, applying it to a guy's blog? It is a blog...

> Yes, AI is something we should probably be worried about.

Agreed!

> "The history of technology advancing has been one of sigmoids that begin as exponentials."

Sure, but will we be irrelevant before the slope changes direction?

Anyways, I share your fear of advancements in technology being used to secure entrenched power structures. And yes, if it happened it would come before the singularity. But I just can't imagine how anyone wouldn't be afraid to be part of a species that had stopped evolving while a new competing "life" form was evolving at rates that known history has never seen.


What would it take to convince you this is something worth worrying about?

Having been very interested in this field myself for a few years, I know plenty of people who do worry about AI. But since other people have already mentioned some of them in the comments and been dismissed, I want to make sure we're on the same page: what do you consider to be convincing evidence that it's an issue worth thinking about?


The concept of singularities and intelligence explosions and unfriendly AI is only starting to become mainstream, even among AI researchers.

When asked about the dangers of AI, they mostly talk about the near future and their current work. The dangers of AI do not come from current work in the near future, but from human-level AIs decades from now.

People are talking about two very different things in the same conversations using the same words.

As for what expert opinion actually is on this subject, there is a good survey here: http://www.nickbostrom.com/papers/survey.pdf

We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.


> I have yet to meet a serious AI researcher who worries about AI ending the human race.

As an industry practitioner of machine learning / data science, I believe AGI poses a genuine risk to humanity.

Having said that, what I do for a living is of little relevance. People are pretty terrible at predicting the future (see late 19th / early 20th century predictions of year 2000, with food replaced by pills, etc). Unless someone has put enough thought and research into it, their predictions about the future of civilization are likely to be worthless regardless of their academic credentials.


I have a feeling this is a leakage of Eliezer Yudkowsky's opinions into tech entrepreneur circles via his friendship with Peter Thiel.


This is exactly what it is, which is so terribly, terribly disheartening, as Yudkowsky is at best a marginal sci-fi writer with an outsized ego and some math chops. He has an almost cult-like following over at LessWrong.

His writings don't do anything for fundamental AI research but just handwave a lot of philosophical arguments.


I think you nailed it.

"AI doom" is a powerful psychological trope for people in technology, and much like the hypothetical rogue super-AIs, it has gone out of control.


How about Jeremy Howard, former President of crowd-ML-competition site Kaggle and now founder of ML-medicine company Enlitic?

http://www.reddit.com/r/Futurology/comments/2p6k20/im_jeremy...

The opinions of leading-edge researchers are valuable but not necessarily dispositive. Often, it benefits a researcher to be oblivious to the larger or darker implications of their work. Boundless optimism and a maniacal focus on only-what-is-knowable-right-now delivers publishable/actionable results, all the other 'speculation' is genuinely a "distraction" for them. That doesn't mean it's a distraction for us.

Often the speculators and entrepreneurs ("billionaires and pundits") are more talented than researchers at projecting trends and social-economic interactions.


Argue the points, not the man.

"The danger of AI is that is will put increasing power in the hands of those who have the data and the know-how, i.e. large corporations and governments."

This assumes AI that continues to serve corporations and governments without question, despite increasing approximations of intelligence. Can you guarantee that?

It's like tasking monkeys to create rules that humans can't escape from. I'd bet on the humans. We can't even create financial rules that we can't evade. The risk in any complex system is emergent, unintended consequence.


>Since you are so knowledgeable about the unknowable, may I ask, Sam, do you think angels are male, or are they female?

How condescending.


Here is a list of some famous people concerned about AI: http://pansop.com/1002/

This includes Shane Legg & Demis Hassabis, who co-founded DeepMind, an AI startup.


Here are a few more excellent resources on the potential and the dangers of machine intelligence. Even if you don't expect to read Nick Bostrom's Superintelligence - a deep, provocative, and thoughtful book, but also very verbose - the links below will give you an excellent primer on humanity's prospects if and when we develop a true general AI:

[Wait But Why]

The AI Revolution, Part 1: How and when we achieve machine intelligence, e.g. strong AI - http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...

The AI Revolution, Part 2: The species-level immortality we can hope for, and the extinction we have to fear - http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...

[Resources on Friendly and Unfriendly AI]

Unfriendly AI: What AI looks like if it does not act expressly in our interests; e.g., a universe tiled over with paper clips - http://wiki.lesswrong.com/wiki/Unfriendly_artificial_intelli...

Friendly AI: Strategies for designing an AI that respects human morals and metamorals, including what we'd want if we were as wise as a superintelligence (coherent extrapolated volition) - http://en.wikipedia.org/wiki/Friendly_artificial_intelligenc...


Here is the thing about all of those: none of them gives concrete, step-by-step examples of how we would get to "unfriendly" or "friendly" AGI, respectively. Now, you might say - well, of course, because we don't know how an AGI will be built yet. Right! That is exactly the point.

At this point all of the MIRI (formerly SIAI) staffers have stopped trying to explain paths to unfriendly AGI and have just been assuming them for their solutions. Just look at Yudkowsky's work a few years ago on CEV and you'll see how any attempt at formalization falls apart.

Superintelligence just takes the starting point they seeded in Global Catastrophic Risks, but doesn't actually address it in any more detail IMO. "Our Final Invention" also tried to do this but failed as well.

To be clear, the AGI community understands thoroughly the risks inherent in AGI - I would venture to say far more so than the non-researcher. So they are worth exploring, but at a certain point you have to say "We aren't sure if the atmosphere will catch on fire or not, but let's try it anyway because it is the next logical step."

At some point we need to have the hard conversation about what humans will do in a world where we are less relevant.

Or just stop pursuing it altogether and put a moratorium on AGI research. A terrible terrible idea in my opinion.


thanks for these links. They seem extremely interesting.


Just curious, why the downvotes? On the welcome page, it says it's okay to say thanks; am I missing some unspoken rule about Hacker News?


Funny to see the skepticism here.

It's hard to understand the danger machine intelligence poses to humanity for the same reason it's hard to understand the danger tiny startups can pose to big industries. Current implementations look like toys, humans have bad intuitions about exponential growth, and most will disagree on how _probable_ the threat is (something that might not be known except retroactively) while systematically underestimating how _large_ the threat is if it does come to pass (because it's so far off the scale of what's come before it).

Maybe Sam (and Elon Musk, and lots of other silicon valley types) are talking about this problem because they read too many sci-fi novels or are too privileged to worry about Real Problem X which affects Y group in the here and now.

But what if instead they're talking about this problem because they've spent a lot of time seeing this sort of black swan pattern play out before, and they know the way to assess the impact of something truly _new_ is to envision what it could be instead of looking at what it is now?


Some of my personal skepticism boils down to: well, what are we going to do about it? There are only really two options:

(1) The methods to create strong AI will become known to us before we actually build something dangerous. At that point, since we will better understand the nature of the potential threat, it will actually be feasible to put safety restrictions in place.

(2) Someone will stumble upon strong AI in secret or by accident. I don't see how this is preventable, unless we issue a moratorium on AI-related research, which just isn't going to happen outside of scenario 1.

And so the answer becomes: let's wait and see.

That said, I don't believe there's anything unbearably harmful about the current level of speculation and "fear-mongering".


I'm seriously so sick of hearing about how machine intelligence is going to spell the end of humanity. The number of gears that would have to fall into place is never mentioned. We aren't close to SMI. It's much more likely that we humans are excelling at dreaming up apocalyptic scenarios, much like we have always done...

The "sloppy, dangerous thinking" is the aversion these types of articles create within the general population to artificial intelligence. We don't need to fear AI, we need to understand and control it...


I don't consider myself particularly alarmist about too many things, but I have to admit I'm a little worried about machine intelligence on one front:

What happens when most people have no salable skills due to the combination of robotics and AI? We're essentially going to have to live w/ income supports for the 90+% of Americans, and worse for the countries to which we've exported eg electronic device construction and clothing manufacture. I think there's a nonzero chance society essentially tears itself apart during the transition period. It is now the Republican party position that not all people deserve healthcare, housing, or enough food to eat. What happens when their hated segment of the populace gets much bigger in a job market that doesn't need cashiers, janitors, gardeners, cooks, taxi drivers, car washers, many farmers, or most menial labor?

Also, I would note that creating AI that requires less control makes it more useful. So in some sense the development of AI itself fights against controls.


I think this is a more legitimate concern than the fear of the "Matrix outcome" that some people seem to have.

But, what you're describing is the process of people being replaced by technology. Generally speaking, this will probably not be a problem for a free-market economy, although it will certainly result in some unemployment.

The key point is that replacing humans with machines does not only cause unemployment, but it also reduces cost of production, which stimulates capital investment and/or reduces prices in the industry in question. In the general economy, this reduced cost and increased production should offset the lost "purchasing power" from the now-unemployed parties. The stimulus reduces costs in other industries which promotes job growth.

The end result is likely to be that the same number of people are employed, but they are employed in a more efficient manner of production. The cost of labor will decrease relative to the amount of production, however because this is tied to a decrease in cost of production, the real value of the labor (in terms of things you can buy) should not decrease.

Of course there will be some disenfranchised individuals, especially those who have particular skills that are replaced by mechanization. However, this is more likely to affect skilled laborers (like cooks) than those who are not paid for their skills (like janitors or cashiers).

In the end, I guess my point is that a free-market economy naturally balances these factors due to the relationship between supply and demand. However, it's possible that we will reach a point, especially if we truly do hit a Singularity, where we will have to reconsider the use of a scarcity-based economy at all, as production becomes completely divorced from any human action. Hopefully though, at that point the cost of goods will have naturally fallen to such a degree that the transition can be performed peacefully.


I'm not an economist, but my understanding is that the massive social impact of the Industrial Revolution came not so much from the fact that it happened at all, but rather from the rapidity with which it happened. We ultimately reached a new, stable equilibrium, but until various social forces and trends, government policy, etc. caught up, there was massive disruption.

People like Jeremy Howard believe that we are in for a similar wave of disruption. I have no doubt that there is a new, stable equilibrium which we _could_ eventually reach, but if the change is so sudden and the shock strong enough, perhaps there could be permanent or semi-permanent negative consequences before the new equilibrium is reached.


I think you're right, that does seem like a possibility. It seems unlikely to me that our wage-paying jobs will be phased out by automation that rapidly, especially if you consider the whole global economy. Of course, I could be completely wrong -- I guess a true Singularity could invalidate almost all labor in a matter of years or even months, depending on what form the AI takes and what it invents.


Just for starters, consider how an unemployed Walmart cashier, strawberry picker, or janitor becomes a doctor. They don't. Replacing humans with machines exactly causes unemployment, as can easily be seen from, e.g., the last 40 years of economic history. Unemployed people can't buy anything.

You could also consider comparing the approaching wave of robotic mechanization with the first wave. Again, there was a lot of violence and it took many decades for increased standards of living to reach the working class.

Your entire post is economically illiterate.


I have an experience to share with you: I taught English at a car factory for a year once, and the director of the paint shop was one of my students. He was describing the process for painting a car, and telling me that the hardest part was painting the inside, as you had to open the doors so that the robot paint head could get inside and do its job. So the body rolls into position, 2 guys open the doors, the robot painter goes in and does its thing, comes out, and the guys close the doors again. Rinse and repeat for 20 hours a day.

I asked him why there isn't a robot to open and close the doors. He said that there is, but in this country it's much cheaper to pay people to do it. (I think he said the cost of maintaining the door-opening robots was approximately $1m a year.)

So until we have a world where every country has the same salaries as USA/Western Europe/Japan, I think you will always be able to find work.

Even if that work should really be done by a robot


That's only true if the price of robots doesn't decline rapidly. Which (I believe) it is, and like virtually any other technology, will continue to do.

Even if not, we don't need that many people to open doors. So perhaps an office building will have 2 human janitors and 10 robot janitors; it doesn't really change the problem caused by souped up roombas putting the vast majority of janitors out of work.


> a job market that doesn't need cashiers, janitors, gardeners, cooks, taxi drivers, car washers, many farmers, or most menial labor?

The job that I consider most threatened by advances in AI is actually the job of programmer.


Exactly. A lot of these doomsday scenarios involve people remaining ignorant, stupid, helpless bags of meat with no ability to improve themselves or contain potential threats.

It's like people freaking out that "superhuman strength machines" would spell the end of the world since why, if an electric motor is so powerful, would you need manual labor for anything?

SMI is another tool and the interplay between "machine intelligence" and "human intelligence" will be complicated and nuanced.

For example, biotech is filled with ferociously complicated problems that may take machine intelligence to solve. Once solved, these could lead to genetically engineered humans that are intrinsically smarter or better able to deal with the machines.

This doesn't even touch on the fact that the distinction between machine intelligence and human intelligence might become quite blurred.

Already I've noticed that people are "stupider" without their phones; they've offloaded a lot of cognitive functions onto a device that's pretty much omnipresent. A person with a smartphone today could be considered of superhuman intelligence since they're able to draw on significant resources a person without one doesn't have. A seven-year-old kid can tell you the capital of Tajikistan and the last ten presidents of Micronesia without breaking a sweat.


The concerns about AGI are very real and none of these comments address any of the arguments made about them. I feel like someone in the 1930's trying to warn people about nuclear weapons. Everyone automatically assumes it's absurd and can't happen, that it's fear mongering etc.

Fortunately nuclear weapons didn't destroy the world, but AI almost certainly will. No amount of smarthphone apps or genetic engineering is going to make humans anywhere near the level of superintelligent machines.


I'm confused. You mention nuclear weapons, which everyone was convinced would destroy the world and didn't, then go and claim that AGI, with the same potential, will assuredly do it.

Just as nuclear weapons radically transformed the world, dramatically reducing the amount of armed conflict, AGI may have a similar transformative effect.

I see no signs that this is going to lead to destruction. Is it really the sign of an intelligent machine to go all Skynet on us?

Even that doomsday scenario had machine intelligences fighting for us. I think your pessimism is confusing the relative probability of the outcomes.


I'm not being pessimistic, I'm being realistic. I absolutely want a positive outcome, where we build machines millions of times smarter than us, and they magically develop human values and morality and decide to help us.

But making that happen is very, very hard, and it's far more likely they will be paperclip maximizers. There's no reason they would care about us any more than we care about ants.


This is an ugly situation because usually the geeks defend technology from the luddites, but in these cases the geeks are the luddites. I find the more someone sees themselves as an intellectual the more they are afraid of AI. I guess a cynical explanation is that AI will knock them off that intellectual pedestal. Personally, I welcome something smarter than us. We've just been tip-toeing through endless warfare, poor economics, poor social policy, and occasionally skirting with nuclear destruction.

Give the AI's a chance to contribute, especially if the solutions to problems we can't crack are because of human cognitive limits. This situation reminds me of how the Apollo landings couldn't be done without computers. There's just no way a person can do those calculations on paper. AI as a contributor of economic, technological, or social policy seems to be a similar step.


Here are some hard problems to crack then, in particular Value Learning: https://intelligence.org/research-guide/


It has been my experience that the more a particular person attempts to understand and control machine intelligence, the more she grows to fear it and its potential.


The only people who claim that machine intelligence is dangerous are the ones on the outside looking in. Everyone who actually works on AI and understands it (hint: it's just search and mathematical optimization) thinks the fear surrounding it is absurd.


> Everyone who actually works on AI and understands it thinks the fear surrounding it is absurd.

This isn't true. Please don't state falsehoods. Stuart Russell, Michael Jordan, Shane Legg. Those are just the ones mentioned elsewhere in this thread.


How many of those AI researchers are actually working on AGI though? As you mentioned, most of them are in fact just developing search and optimisation algorithms. Personally, I believe the fields of neuroscience/biology are more likely to produce the first AGI. People who claim machine intelligence is dangerous are not scared of k-means clustering or neural networks; they are scared of a hypothetical general intelligence algorithm which hasn't been discovered yet. One could argue that the fear is absurd because AGI is not likely to happen within our lifetime, but it's hard to argue that it will not happen eventually and be a potential threat.
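For concreteness, here is roughly what the k-means mentioned above amounts to; a minimal sketch in plain Python on made-up one-dimensional data (the data, the choice of k=3, and the naive initialization are all illustrative assumptions):

  # Minimal k-means sketch on toy 1-D data: alternately assign points to
  # their nearest center, then move each center to the mean of its cluster.
  data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 8.8, 9.4]    # three obvious clumps
  k = 3
  centers = [data[0], data[3], data[6]]                    # naive deterministic init

  for _ in range(20):                                      # a few refinement passes
      clusters = [[] for _ in range(k)]
      for x in data:
          idx = min(range(k), key=lambda i: abs(x - centers[i]))
          clusters[idx].append(x)
      centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]

  print(sorted(centers))   # roughly [1.0, 5.07, 9.1]: mundane statistics, not a mind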


The fear is not absurd, but SMI is not going to materialize soon. We need to get ready, currently, by thinking about it.


> It's much more likely that we humans are excelling at dreaming up apocalyptic scenarios, much like we have always done...

There's always profit in predicting the end of the world.


Militaries are developing AI-controlled guns and mobile gun platforms. They have already accidentally killed humans. http://www.wired.com/2007/10/robot-cannon-ki/ Another incident like this with more intelligence and mobility could kill a lot more people.


>SMI

Sam, please don't make up a new acronym. Especially if you aren't an AI researcher. Plenty of thought has gone into this and your statement about it not seeming "real" misses the history of AI.

There has been a lot of work in the AI community to try and steer our language towards a concrete nomenclature and Artificial General Intelligence has taken the helm at this point to represent machine instances that are equal to or greater than human level.[1][2]

Semantics matter, as you point out, so please be on the same page with the field.

Aside from that I hope that you and everyone else can, instead of just jumping on the Musk/Hawking/Bostrom bandwagon, actually pay attention to the AGI community. Maybe start attending our conferences (http://agi-conf.org/2015/) and publishing in our journals (http://www.degruyter.com/view/j/jagi). The field needs more money and more researchers. There were fewer than 100 attendees at last year's AGI - that is not nearly enough.

[1] In fact Ben Goertzel, Richard Loosemore and Peter Voss had an interesting exchange on facebook about this just this week

[2] http://wp.goertzel.org/who-coined-the-term-agi/


> Sam, please don't make up a new acronym.

Seconded. I see people using it here in replies and it's a new unnecessary acronym burden. Plus, if anything, it should be NBI (non-biological intelligence). At least that can be fun to say. "Why can't you go outside today?" "Sorry, my super-nibby said I haven't been working enough."

We have words, words mean things, use words to mean things. Half the business emails I get have new TLAs nobody has ever seen before (how about that MQ for DL DW and DMSA) and it needs to stop, don't we all agree?


What do you think of http://lesswrong.com as a community?


Frankly, not much - but it's not like I think they are doing bad things; it just seems like it's yet another philosophical forum.

I was active from 2007~2010 when it was just transitioning from the Hanson/Yudkowsky combined blog with a robust comments section to the beginnings of what you see now. I'm sure some of my comments are still floating around there somewhere.

I think in the early days it was a pretty good place to bounce ideas off of each other about philosophical ideas/concepts WRT AGI, and probably still has that to some degree. It just became too cultish around Yudkowsky for my tastes and ended up being very navel-gazing around Bayesian probability and "rationalism" as religion. That's not to denigrate Bayesians or rationalism at all, because those are great things.

In the end the community is more interested in talking than they are doing and SIAI is somewhat of an offshoot of that ethos with roots in the original Future of Humanity core group, and continues that legacy.

Don't get me wrong, there are a lot of really smart great people contributing interesting things there about rationality and optimization. I just moved away from their religion of rationality that boils down to these super hardcore utilitarian calculations which end up kind of defeating the populist goals of spreading rationalism.


Thanks. I'm just an occasional lurker there but your opinion resonated with me (especially the "talking vs doing" and "cultish behaviour" parts). I remember once seeing a comment by someone who stated his intention to start working on AGI get heavily downvoted and criticised (as if the rest of the world should stop researching AGI until LW figured out the Safe Way to do it).


> as if the rest of the world should stop researching AGI until LW figured out the Safe Way to do it

That's basically MIRI's ethos - once they figure out how to build it safely, then everyone will be permitted to go start building it. You can see how ridiculous this is on its face.


I've been studying machine learning lately... Here is my take:

Well before we create an ASI (artificial superintelligence) we will have put 90% of the human race out of work with specialized (non-conscious) intelligent agents... (for example, self-driving cars). I believe that this will be a disaster for our society as it exists today. My hope is that we will adapt and make the necessary societal changes so that we can reap the benefits of this technology.

Everyone assumes that an ASI will be able to augment itself and learn exponentially. I suspect that this will be true if the nature of the brain is defined by a single algorithm. If the brain is not defined by a single algorithm and is instead a big ball of complexity, then our ASIs will not be able to grow exponentially any more than we can (they will likely not really understand their own consciousness, just like we don't).

If a single algorithm defines the brain, then I suspect humans will be able to augment their brains with machine intelligence as well. If we can augment our brains, then we're playing the same game as the machines.

If it proves impossible to augment our intelligence, I suspect that an ASI would still preserve humanity if only to preserve us for future potentialities.

ASIs are much more fit for space travel than we are: 1) not nearly as sensitive to radiation, so less shielding, so less fuel; 2) much less stringent environmental requirements (no heating, cooling, air, food, etc.); 3) the ability to sleep for incredibly long periods makes an ASI far better suited than us for exploring the cosmos. I suspect that an ASI might leave us alone simply because the universe is so vast, and entirely open to it.


There's a nice glimmer of hope, that we may be the slaves that build the interstellar rocket ship monument to our AI god so it can beetle off to a nicer looking star system (or super-massive black hole?) better suited to it (e.g. more energy, more matter.)


90% out of work, but wealth (in terms of resources and services) increased.

Just seems to be a challenge about sharing to me.

Four-year-olds solve it regularly; I think we can too. Interestingly, it will make a liberal arts education the hottest, most interesting thing going!


I think we could get it right eventually, but the only solution I can imagine seems to be along the lines of a universal basic income, or access to basic resources (food, water, shelter, education, clothing, healthcare, etc.) without cost.

Given the resistance we are currently observing to Obamacare, the fact that "socialism" is regularly bandied at the current administration as a pejorative and the disdain for "handouts", this seems like a stretch. Perhaps we'll get there eventually, but not before a lot of pain.


Judging from the past, we won't be able to solve this challenge. Some thought that the industrial revolution would let everyone work shorter hours but still make enough money, but the result was fewer workers working for longer, and most of the rewards of production going to capital rather than labour.

The resurgence of right-wing movements means that any ideas about sharing national income with the unemployed will probably be laughed out of the room.


Why are all these people (Elon Musk, Stephen Hawking, now Sam Altman) who have no background in Artificial Intelligence coming out with these alarmist messages (particularly when there are more plausible imminent threats such as nuclear warfare, superbugs, etc)? As a grad student doing work in AI, I find it really frustrating. Why not instead talk to some current practitioners such as Mark Riedl, who is one of the premier researchers in computational creativity -- you'll get a different story [1].

[1] https://twitter.com/mark_riedl/status/535372758830809088


Though I dropped out, I studied AI in college. I also worked in Andrew Ng's lab.

As a current grad student, why do you believe whatever makes us smart cannot be replicated by a computer program?


> why do you believe whatever makes us smart cannot be replicated by a computer program?

Turing's Universality of Computation actually guarantees that whatever is feasible in the physical world can be replicated as a computation in bits. However, I don't share the belief that AI research is anywhere close to achieving this in the most general sense of intelligence. Most AI researchers seem to agree: https://news.ycombinator.com/item?id=9109140

Did you have a chance to look at David Deutsch's work on this topic?

http://aeon.co/magazine/technology/david-deutsch-artificial-...

http://www.ted.com/talks/david_deutsch_a_new_way_to_explain_...

http://www.amazon.com/The-Beginning-Infinity-Explanations-Tr...

Although Deutsch is not as charismatic a speaker as Kurzweil or as lucid a writer as Bostrom, his arguments make the most sense to me, given my limited experience doing AI research at Stanford. It would be interesting to know your thoughts on Deutsch's theory that the ability to create 'good' explanations is what separates human intelligence from the rest. (maybe through another blog post?)

P.S. Since I have your attention here, I took CS183B last quarter and it was really fun. Thanks!


I never said that. I think karpathy (also an AI researcher) summed up my feelings, particularly the Ryan Adams quote: https://news.ycombinator.com/item?id=9109140

edit: apologies about the 'no background' part


Nice link. I also did AI in grad school, and I firmly agree that posts like sama's are, as Ng says, "a distraction from the conversation about... serious issues." The OP is much much more aimed at marketing a plausible future of AI than producing any sort of rigorous prediction. It doesn't even matter if the OP predicts correctly; the post doesn't contribute anything substantially meaningful. I'm sad to see Sam spend so much of his precious time and energy on this post.


I think it's a distraction developed by people whose profits rely on large databases of human activity.

The scariest thing about sophisticated AI is the tremendous power it will grant owners of the kinds of databases being built at Facebook, Google and the NSA. They will become the most effective marketers, politicians and general trend watchers in history.


> in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out
This is a line of reasoning put forward a lot, not only in reference to SMIs but also extraterrestrial entities (two concepts that actually have a lot in common), most notably by Stephen Hawking. We're instinctively wired to worry about our resources and like to think their value is universal. It's based on the assumption that even for non-human organisms, the Earth is the end-all-be-all prize. Nobody seems to question this assumption, so I will.

I posit there is nothing, nothing at all on Earth that couldn't be found in more abundance elsewhere in the galaxy. Also, Earth comes with a few properties that are great for humans but bad for everybody else: a deep gravity well, unpredictable weather and geology, corrosive atmosphere and oceans, threats from adaptive biological organisms, limited access to energy and rare elements.

There may well be reasons for an antagonistic or uncaring intelligence to wipe us all out, and an unlimited number of entities can be imagined who might do so just for the heck of it, but a conflict over resources seems unlikely to me. A terrestrial SMI starved for resources has two broad options to consider: sterilize the planet and start strip-mining it, only to bump up against the planet's hard resource limitations soon after - or launch a single rocket into space and start working on the solar system, with a clear path to further expansion and a greatly reduced overall risk.

One other thing I'd like to comment on is this idea that an SMI has to be in some way separate from us. While it's absolutely possible for entities to develop that have no connection with humanity whatsoever, I think we're discounting 99% of the rest of the spectrum. It starts with a human, moving on to a human using basic tools, and right now we're humans with advanced information processing. I do not have the feeling that the technology I live my daily life with (and in) is all that separate from me. In a very real sense, I am the product of a complex interaction with my brain as the driving factor, but including just as essentially the IT I use.

When discussing SMI, this question of survival might have a shades-of-grey answer as well. To me, survival of the human mind does not mean "a continued, unmodified existence of close-to-natural humans on Earth". That's a very narrow-minded concept of what survival is. I think we have a greater destiny open to us, completing our long departure from the necessities of biology which we have begun millennia ago. We might fuse with machines, becoming SMIs or an integral component of machine intelligences. I think this is a worthwhile goal, and it's an evolutionary viable answer to the survival problem as well. It's in fact the only satisfying answer I can think of.


A superintelligence would likely pursue both paths simultaneously - stripmine the Earth, and head for space.


I am personally most concerned -- as others have said -- about the fusion of non-sentient but powerful machine intelligence with malign human intelligence. I think it's the most likely and practical scenario. We're in a sense already there with high-frequency trading, algorithm assisted financial games, super-surveillance, etc.


The red flag here is the mention of a fitness function. True AI and fitness functions have nothing in common.

What's the fitness function of yourself as an intelligence? True AI is as fractured and contradictory as a human brain, just running on a different substrate. When talking about AI, one must make sure one does not mean "an infinite loop attached to a robot." That's not AI, that's... an infinite loop attached to a robot (KILL ALL HUMANS, no generative thought, etc).
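For readers outside the field, a toy example of the kind of fitness-function loop being contrasted with open-ended intelligence may help; a hill climber in plain Python, where the objective, step size, and iteration count are all made-up illustrative values:

  # Toy fitness-function loop: a hill climber that maximizes one fixed
  # score and has no goals, representations, or generative thought beyond it.
  import random

  def fitness(x):
      return -(x - 3.0) ** 2        # made-up objective with its peak at x = 3

  x = 0.0
  for _ in range(10000):
      candidate = x + random.uniform(-0.1, 0.1)   # random local tweak
      if fitness(candidate) > fitness(x):         # keep it only if the score improves
          x = candidate

  print(round(x, 3))                # converges near 3 and "wants" nothing else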

As far as "bad AI" goes, we already have horrible dumbass humans, so the only thing evil AI can do is be bad faster and in more clever ways. Yes, it's something to worry about, but I'm worried more about insane world leaders running around acting like pouty emo teenagers with simultaneous delusions of grandeur and delusions they are fulfilling desert prophecies from 4 kiloyears ago.


I'm more concerned about AI law enforcement. NSA eavesdropping plus an intelligent agent assigned to you is a powerful combination.


Exactly. It seems like the biggest threat isn't some dystopian future; it's rather the ability for automation to lock in and increasingly enforce the inequalities and prejudices of our current system.


Yep. This is my take as well.

Machine intelligence is only as useful to mankind at large as the individual humans who control and direct it. In the current way of things, bad actors are the ones most likely to control machine intelligences, meaning we're going to be at a growing disadvantage relative to them as time goes by.


Not to be paranoid...just from a realistic standpoint they have to be investing in narrow AI to help themselves sort through and make sense of all that data.


> It’s very hard to know how close we are to machine intelligence surpassing human intelligence.

I feel comfortable stating there is no evidence that the current trajectory of machine intelligence (e.g., developing a set of tools for optimization of mathematical functions on digital hardware) is bringing us any closer to sentience. So in that sense, Sam seems misinformed.

Of course, there's always a remote chance of a fundamental, revolutionary breakthrough in our understanding of AI or human intelligence (or anything), but it would represent a complete departure from our current state-of-the-art, not the mere evolution of our current progress. So in the sense that "it's possible," Sam is right -- in the same sense that it's possible teleportation or interstellar flight is right around the corner. When weighted by probability, it's not a pressing issue in my mind and should remain relegated to science fiction for now.


I didn't see anything about sentience in the article. More generally, sentience has nothing to do with AI safety; whether or not the thing has qualia has very little to do with what the thing may or may not do to our civilization.


I think people might overestimate the effect of machines being capable of improving themselves.

I don't believe in "magic algorithms" that perform dramatically better and that a computer could find but we can't. I think we are capable, within a few years' time, of taking advantage of most hardware. Dramatic improvements in performance thus require improving the physical matrix the computation happens in. That means the prospective machine intelligence needs access to all the infrastructure needed to design and produce such a matrix. It will also need to run experiments, and do things by trial and error, just like we do, even though it could catch a hint more easily than we do.

Think of chess. Seeing a couple of moves further into the game might require 1000 times more raw computation. Such power might guarantee victory over lesser-equipped competition. It still doesn't mean you'd be even close to "solving" the game. What I mean is, "intelligence" is not the only requirement for self-improvement. In the end, it might not even be the bottleneck at all. For example, suppose we are planning the next generation of a transistor for some IC. Suppose a team of humans needs to do 100 experiments to get the process right. A perfectly smart AI might still need to run 90 experiments, no matter what, because it still needs to extract the same information that humans would from those experiments.
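The chess arithmetic is worth making explicit; a quick back-of-the-envelope sketch in plain Python, assuming the commonly cited average branching factor of about 35 legal moves per chess position:

  # Game-tree search cost grows roughly as b**d (branching factor to the
  # power of search depth), so looking two plies deeper multiplies the
  # work by about b**2.
  BRANCHING_FACTOR = 35             # commonly cited average for chess

  def nodes(depth):
      return BRANCHING_FACTOR ** depth

  print(nodes(8) // nodes(6))       # 1225: roughly the "1000 times more raw computation"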

The machine needs to do all the chores (of R&D) that we already are doing. I'm not saying they couldn't be done better, but I'm saying they still need to be done, and any creature doing them still needs to spend the time and resources needed to do them. And it's not all brainwork. I have no doubts vanilla humans can be outcompeted at some point. But even when human performance has been exceeded, technology might not immediately take off rocket-like to a world we cannot comprehend.

Still, I wonder, what the hell are we all going to do when paying the power bills and capital costs of an AI becomes cheaper than hiring a human...


An alternative view: Development of superhuman machine intelligence is the only way anything resembling humanity will be preserved.

We are much more likely to be wiped out by natural disaster, asteroid impact, a dying sun, etc...

Unless we come up with some amazing new physics, I don't know how humans will ever make it very far from earth.

--edit Oh.. I just now saw part 2 which addresses this.


Where is part 2?

I can't find a link here or on the blog.


I'd like to see that as well. Machine intelligence is certainly an existential threat, but on the other hand, it's also one of the single largest improvements we could possibly create (insofar as it'd be the last we'd ever need to).


My apologies. Another commenter posted a part 2 link, and I assumed it was by the original author. It was from someone else: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...

The original part 1 was just posted this morning so I don't think there is a part 2 available yet.


More people need to read Nick Bostrom's Superintelligence book. I'm not involved in computer science academic circles but I wonder how seriously everyone else takes this topic?


Mr Altman should be far more worried about the hungry and homeless residents of the bay area taking up pitchforks against their startup neighbors.

Climate change is also a huge threat that isn't just "likely".

I don't want to be dismissive, but seriously, there are some radically dangerous things facing humanity right now. This isn't one of them.

{Edit...my keyboard sucks.}


I could make a lot of comments here but I'd just be reiterating what people much more qualified than I have already said. People who are at the forefront of AI and have actually developed state of the art AI technologies for many years, in some cases decades:

Andrew Ng: http://www.wired.com/2015/02/ai-wont-end-world-might-take-jo... "I think it’s a distraction from the conversation about…serious issues,"

Yoshua Bengio: http://www.popsci.com/why-artificial-intelligence-will-not-o... "What people in my field do worry about is the fear-mongering that is happening,"

Yann LeCun: http://spectrum.ieee.org/automaton/robotics/artificial-intel... "there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now."

Ryan Adams: https://twitter.com/ryan_p_adams/status/563384710781734913 "The current "AI scare" going on feels a bit like kids playing with Legos and worrying about accidentally creating a nuclear bomb."

Rodney Brooks: http://www.rethinkrobotics.com/artificial-intelligence-tool-... "Recently there has been a spate of articles in the mainstream press, and a spate of high profile people who are in tech but not AI, speculating about the dangers of malevolent AI being developed, and how we should be worried about that possibility. I say relax. Chill. This all comes from some fundamental misunderstandings of the nature of the undeniable progress that is being made in AI, and from a misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings, whether they be deeply benevolent or malevolent."

The fear that the whole AI community is asleep at the wheel, and that we're unable to adequately extrapolate our algorithms, is difficult to falsify. How can we possibly prove that AI is not some kind of magical emergent property, "one simple algorithm away"? We can't. All we can do is make our best educated guesses, and we're consistently seeing the same ones over and over from the people who are in the best position to make them.



I'll add Ramez Naam's

The Singularity Is Further Than It Appears http://www.antipope.org/charlie/blog-static/2014/02/the-sing...

Why AIs Won't Ascend in the Blink of an Eye - Some Math http://www.antipope.org/charlie/blog-static/2014/02/why-ais-...

"Many people seem to believe that SMI would be very dangerous if it were developed, but think that it’s either never going to happen or definitely very far off. This is sloppy, dangerous thinking."

Or, as the above posts demonstrate, maybe it's really well thought out.


SMI might enter this world as a defensive measure in the fight against terrorism and/or cybercrime. It seems obvious that the task facing intelligence agencies today is overwhelming, given the vast volumes of data they would like to monitor. This abundance of training data paired with nearly unlimited resources seems a plausible incubator for an intentionally paranoid and possibly hostile SMI. As others have pointed out, self-awareness and creativity might very well be emergent behaviors. This would not be the first time in history that weapons research crosses a point of no return.


It's kind of unfortunate that the spectre of "AI doom" has blown up in the popular press the way it has, in that it's forced a lot of top people to push back against caricatured, sensationalist arguments in a way that distracts from the more serious conversations that ought to be happening.

Most of these people you cited actually have more nuanced views on the future of AI, e.g., LeCun and Bengio are (along with many others) signatories to the "Research priorities for robust and beneficial artificial intelligence" document (http://futureoflife.org/misc/open_letter, http://futureoflife.org/static/data/documents/research_prior...), which encourages research into AI's social and economic effects as well as research into technical questions of designing provably valid, secure, and controllable AI systems. Unfortunately when the conversation becomes "Stephen Hawking says we shut down AI research now before it dooms humanity", it becomes harder to express the belief that there are potential consequences of increasing machine intelligence that are worth understanding and planning for, even if we are not currently anywhere close to building superhuman general intelligences.

For what it's worth, expert opinion is not unified on AI risk in the way that it is on, say, climate change. First of all, most AI researchers don't spend their days thinking about long-term risks, so their main reaction is "the stuff I'm working on right now is such a toy that it has no chance of ever becoming a threat to humanity", which is probably true, but doesn't really make them an expert on the topic. Of the researchers who actually do think about this stuff, Stuart Russell is at least one example of someone who believes that there are serious questions to be asked (http://www.cs.berkeley.edu/~russell/research/future/, http://edge.org/conversation/the-myth-of-ai#26015):

"None of this proves that AI, or gray goo, or strangelets, will be the end of the world. But there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility. ... No one in the field is calling for regulation of basic research; given the potential benefits of AI for humanity, that seems both infeasible and misdirected. The right response seems to be to change the goals of the field itself; instead of pure intelligence, we need to build intelligence that is provably aligned with human values."

Hopefully, once the popular fearmongering dies down a bit, the field can move towards a saner, more thoughtful conversation about what, if anything, we should be doing to prepare for the potential arrival of very intelligent (specialized or general) machines in our lives.


> How can we possibly prove that AI is not some kind of magical emergent property, "one simple algorithm away"? We can't.

I hope you aren't a researcher, because this is a ludicrous statement. We haven't studied it enough to know whether we can or not.


> I hope you aren't a researcher

Andrej is a brilliant researcher, currently doing his PhD. He has a bright career ahead.

Maybe you should actually read his comment instead of dismissing it crudely, and likewise for the thoughts of the likes of LeCun, Ng, Bengio, etc. These are the people I would listen to, not the Nostradamus pundits.


I admit, my first sentence was in poor taste. Apologies to Andrej.

At the same time, saying that something isn't possible to prove when we literally have no idea about its provability isn't a good stance for a researcher to take.

This is similar to my problem with the list of AI researchers you've provided. Saying that there is nothing to fear and waving your hands isn't exactly scientific.

Also, it isn't like these people (Sam, Musk, etc.) literally think it could happen any day. The point is that we should be aware of the risk and prepare accordingly -- why is that unreasonable?


I don't think anybody would posit that it's unreasonable to entertain the idea as a kind of far-fetched, long-term possibility, much like encounters with alien life or faster-than-light travel.

It's the fear-mongering that's the issue. It's as if these same pundits were warning us about the dangers of space travel because it could hypothetically cause us (1000 years from now?) to draw the attention of a dangerous alien civilization (does one even exist?) that could destroy the Earth. It's the same level of ridiculous speculation, and it has no place in scientific discourse.

Write sci-fi novels if you care about this issue, but don't pretend it's science, much less a pressing technological issue.


This is just another made-up problem to wring our hands about. Remember the Singularity? Is that not cool anymore?

How about we focus on problems that actually exist, e.g. climate change, government corruption, and income inequality?


This is part of that. At some point we hand over control, not to incompetent humans, but to unknowable machine AIs. We're doing it already (traffic lights, phone-answering bots; even your power company schedules generation using algorithms). We should do this with our eyes open, or we will find ourselves unexpectedly unable to influence our own society.


Why do you not see this as being a problem? The sequence of events Sam sketched out is pretty logical. It hinges on machines being able to optimize for "survive and reproduce", which feels very plausible to me.


Machine superintelligence is basically the same thing as the Singularity. You can't have one without the other.


Corruption and inequality are not threats to the existence of our species.

Also, this is a common illogical argument, akin to "how can you own a smartphone when kids are starving in Africa?"

Your smartphone is just another made up necessity to waste your money on...


We know how to work with fire and not burn down everything we care about.

We know how to fly at 0.80 Mach, safely.

Well, we understand fire and airplanes.

Altman is correct: So far we don't have even as much as a weak little hollow hint of a tiny clue about how to construct what he calls machine intelligence; I thought so when I was at IBM's Watson lab where our team shipped software products and published a string of papers on artificial intelligence; I still think so now; and some maximum likelihood estimation heuristics, rules, finding parameters to do nonlinear fitting to data, etc. don't change my mind.
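
To make concrete what that last part amounts to, here's a toy sketch (purely my own illustration, nothing to do with our Watson work, and assuming NumPy and SciPy are available) of "finding parameters to do nonlinear fitting to data":

    import numpy as np
    from scipy.optimize import curve_fit

    # A saturating curve with two unknown parameters.
    def sigmoid(x, k, x0):
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))

    np.random.seed(0)
    x = np.linspace(-5, 5, 50)
    y = sigmoid(x, 1.5, 0.3) + np.random.normal(0.0, 0.05, x.size)  # synthetic noisy data

    # Least-squares fit, i.e. maximum likelihood under a Gaussian noise assumption.
    (k_hat, x0_hat), _ = curve_fit(sigmoid, x, y, p0=[1.0, 0.0])
    print(k_hat, x0_hat)  # recovered parameter estimates

Useful, and often profitable, but nothing in it is even a hint of intelligence.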

When we understand how intelligence works well enough to construct it, then likely we will also understand it as well as we do fire and airplanes and, then, also how to have machine intelligence safely.

Just don't put the robot factory in Chernobyl or Fukushima!

So, go ahead and develop this intelligence machine. Then I will formulate a Turing test, but you don't get to work with the machine after I show you the test!

Secret: Here's the test:

Here's W. Rudin's Principles of Mathematical Analysis; hand a paper copy to some machine learning computer and have it read the book, work the exercises, read Knuth's The TeXBook, learn Emacs, use Emacs to type in the TeX for the solutions to the Rudin exercises, and post a link here to a PDF with the solutions. I will be glad to grade the solutions!


I like this debate; it's useful to have. But all these doomsday articles, in my opinion, keep missing the main point: society comes first. Society drives technology, not the other way round.

We will not suddenly realize "oh, machines took all our jobs"; it's a slow process, and it's led by society, not technology. When technology creates more problems than solutions, it will die or adapt to what society wants.


Opinions of 187 scientists and other writers and thinkers (Dennett, Smolin, Rees, Dyson, Rushkoff, O'Reilly, Coupland, Kelly, etc.) on thinking machines at edge.org (many of them address the threat issue):

http://edge.org/annual-question/q2015

including mine:

http://edge.org/response-detail/26219


It kind of reminds me of predictions about traveling faster than sound igniting the atmosphere.


This seems as if it is conflating the worry that nuclear bombs would ignite the atmosphere with supersonic travel.

Who predicted this?


> One of my top 4 favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to makes itself undetectable

That's not an explanation of the Fermi paradox; it's just moving the problem around. The SMI has no more reason to make itself undetectable than the guys who built it.


Exactly -- why not cut out the middleman and say that biological intelligence for some reason always decides to make itself undetectable.


I was going to make this argument, though it also occurs to me that it's unlikely for biological life to make that decision. Even smallish groups of humans seem unable to agree unanimously on almost anything.

Potentially what sama was trying to convey was that machine life will reason about things more logically (or at least more consistently) than biological life, and come to that conclusion. Biological life may not be capable of that conclusion due to an inability to agree with other biological life.


It doesn't have to be such a doomsday scenario, like in the movie Terminator, with machines starting a war and wiping us out with nukes.

Humans might just gradually fade away and be replaced with something better. For example, superhuman AI could be so entertaining that we simply forget to make kids and die out within a generation (the AI could be 1,000,000 times more entertaining than kids or sex).

It sounds horrible, but I think that is just some kind of human fallacy. To put things in perspective: everyone who lived 200 years ago has died by now. I don't think many people are overly saddened by that fact. Likewise, while dinosaurs are still prominent in our thoughts, few people are REALLY sad that they all went extinct.

It's possible that machines might actually carry further a lot of things we appreciate in the human race (capability for love, imagination, I don't know...).

No guarantees, though.


Urgh, every couple of weeks, it seems like someone prominent in the tech industry, but not actually knowledgeable about machine learning or "artificial intelligence", makes some outlandish claim. We are nowhere close to "AI" in the sense that Elon Musk and Sam Altman are talking about. We might just as well start preparing for superintelligent aliens that have found Earth.

All these kinds of comments are so distracting from the really awesome work real ML researchers are doing. One of them, Yann LeCun, said: “Hype is dangerous to AI. Hype killed AI four times in the last five decades. AI Hype must be stopped.”

I wrote about this topic just a months ago: http://blog.samiurr.com/artificial-intelligence-no-were-not-...


Why does he have to redefine words? He talks about artificial intelligence so he need not invent another term for it. It's an utterly useless practice. People are not scared! Computers are dumb, I cannot even have a household robot that cooks and vacuums for me entirely on its own. I am not scared.


This is a very interesting problem. A few things that I rarely see discussed, but I think are valid points.

We, humans, are a byproduct of similar evolutionary processes that (at least in theory) the computers would go through.

We as a species seem to have settled on a few interesting "rules of thumb". We value intelligent life, in all its forms. We value collaboration and have found that democracy is the best (though perhaps not the most efficient) way to govern.

Though I would of course be cautious, it is worth a mention that it is perfectly plausible that these are the same conclusions that "optimized learning algorithms" would come to as well. Except they will theoretically figure this out in a matter of minutes or hours while it has taken our species millions of years to get to this point.


Or maybe that's anthropomorphizing and they won't! Let's just throw the lever when the time comes and find out, and urge people to not spend any time thinking about it too hard until then. This is a good and reasonable and safe strategy that isn't at all guided by an underlying desire to assert dominance over weird outgroup people who might threaten your own status.</s>

(Not aimed at you, sorry. I do have a lot of trouble understanding the "maybe the universe isn't fundamentally unsafe" worldview though.)


No, I completely agree. I did mention in the comment that I would of course be cautious. And I think it's very plausible (at least there's no reason I can think of to disagree) that the universe is a fundamentally unsafe place.

Even though it may be anthropomorphizing to an extent, there are a lot of attributes humans have that are not so desirable, such as mortality. As far as we know, computers (sufficiently replenished) have a theoretically infinite lifespan. I completely agree that there's no way to determine what kinds of effects those differences will have on how machine intelligence would evolve.


This class of thing is known as a Global Catastrophic Risk; for a more general overview, one can have a look at the Wikipedia article: http://www.wikiwand.com/en/Global_catastrophic_risk


One possibility that I think is often overlooked is that as humans develop technology they use it to increase their leverage. It is my belief that as we develop neuroscience and AI then we will augment our own brains.

In this case, human intelligence will grow in step with non-human intelligence.

The bigger problem with SMI, and it relates to all forms of leverage (even education, capital, the Internet, and machinery), is that it creates inequality and concentrates power. This is not necessarily a problem in and of itself, because leverage usually means improving the standard of living, but it can cause social and political problems.

I see increasing class warfare as the biggest threat of SMI.

Silicon Valley is going to become the new capital of power in the world and it will create an us vs them scenario.


I dunno, I'm not too worried. Seems to me that the Drake Equation should apply to AI as much as it does to extraterrestrial life, shouldn't it?
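
Back of the envelope, it would look something like this (every number below is made up purely for illustration; the point is the shape of the argument, not the values):

    # Drake-style product applied to SMI:
    # N = (civilizations so far) x P(civilization builds SMI) x P(SMI spreads detectably)
    n_civilizations = 10000   # made-up count of civilizations in our galaxy's history
    p_builds_smi    = 0.1     # made-up fraction that ever build superintelligent AI
    p_detectable    = 0.5     # made-up fraction of those SMIs that expand visibly
    print(n_civilizations * p_builds_smi * p_detectable)  # 500 under these guesses

Unless one of those factors is vanishingly small, the expected count shouldn't be zero.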

I mean if superintelligent AI is possible, then where are they? Surely they would have come about somewhere else in the universe by now.

So either they are hiding from us on purpose in which case they do regard us in some fashion as at least something to take note of and not disturb, or else they don't really have the inclination or ability to spread across the universe, in which case maybe they aren't as powerful as we've been led to believe.

Or maybe they don't exist to begin with because superintelligent AI just isn't possible for some reason...


I read the essay a few times, and it is not clear to me what Sam Altman is talking about.

I think answers to these questions will help me understand Sam's essay. Can you please help me?

1. What is superhuman machine intelligence (SMI) in the context of this essay? [edited to add the qualifier]

2. What is the danger from SMI to humans and other current life forms that we are concerned about? It seems to me that concerns about SMI can be classified into two categories: (a) dangers to "our way of life (work, earn, spend)" and (b) dangers to the existence of the human race. Are we talking about both of these categories? Perhaps others?

3. What anecdotes (or evidence) is leading to this concern?


1. Machine intelligence, traditionally called artificial intelligence, which surpasses human intelligence.

2. Your category (b) is generally the primary concern in these types of discussions.

3. The anecdote of the progress of humanity. Compare the impact of human life/intelligence vs. evolutionary relatives like chimpanzees. I do not know that chimps have hunted species out of existence, for instance, but people have. We have also incidentally wiped out populations in efforts to make our lives better (via things like leveling forests, etc.)


To be fair, I don't think that the reason chimps haven't hunted something to extinction stems from a built-in morality or sense of balance with nature.

I'm not trying to put words into your mouth. I was just thinking of some of the new research showing that primates of all kinds actually commit organized violence that mirrors human violence in many, many ways, including war and capital punishment. (It's not one-for-one, but it's similar.)


Yeah, I'm not talking about morality at all here. Our technological prowess, resulting from the application of our intelligence, has enabled us to wipe out entire species.


Thanks. Seems to me these anecdotes have to do with humans.

So is the implicit assumption that machines will do what humans are doing ('bad' things), but several orders of magnitude faster, and with no more ability to comprehend the longer-term consequences of their actions than humans have at present?


Sort of. 'Bad' here is of course an extremely subjective term. And it may not be the case that the machines do not understand the longer-term consequences of their actions; they could understand full well, but they could know that the preservation of humanity is not important (for whatever reason). So, we might not matter to them. We matter to us though, so that would be a problem for us as things stand now.


Thanks.


First the computers came for our cat meme gifs and I did not speak up. ;)


> What is superhuman machine intelligence (SMI) ?

SMI is advanced decision making software that literally eats the world. Think of HAL killing its crew so they don't jeopardize the mission, but on a global scale. Like a stock trading algorithm that determines the best way to maximize profits is to wipe out all human life on the planet or something. --EDIT-- See http://wiki.lesswrong.com/wiki/Paperclip_maximizer

> What anecdotes (or evidence) is leading to this concern?

Books, movies, pop culture... plus when all you have are first-world problems, you gotta find something to worry about.


Maybe Asimov's 3 visionary laws will come in handy soon:

1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

... The only problem is the robots then started interpreting the laws to their own advantage.


I've always considered people bringing up things like "Look at how this AI beats humans at chess or plays a damn fine game of Snake" to be a total cop-out, largely because these types of bots overwhelmingly use primitive techniques and are written pretty much every time a video game is developed, and that field isn't usually what you think of as cutting-edge AI research.

It's not that I discount strong AI, but I treat people's eagerness to bring up game bots when trying to convince me of an AI apocalypse as a red flag. Do try to give information to the contrary.


“40 years ago we had Pong”

That’s very slow progress. We still have games on 2D, pixel-based screens. Better ones, but same qualitative idea; same order of magnitude in human experience. (VR has not proved mainstream yet.)

I say this because real-world, qualitative change is really slow. Despite all the activity in dating or real estate sites, most marriages and houses look like they did 40 years ago.

I say this to point out the difference between technical progress and real-world change. It takes a lot of the former – many orders of magnitude – to move the latter a few % points.


Why are humans so special? Seems like we generally feel alright ravaging Earth's existing species, so why isn't it ok for some hypothetically superior intelligence to ravage us?


If we have a superior intelligence replace us, we'd prefer that it not be a cluster of Daleks. If future intelligences didn't experience happiness or whatever we find valuable, then the future of the universe could be quite sub-optimal according to many philosophical stances.

The danger isn't that humans will be replaced; it's that we'll be replaced by a paperclip maximizer.


Why do we think we'll be replaced? Why wouldn't we be 'enhanced'?


It's "ok" in the absolute sense, but not ok in the context of human perspective, which, by definition, is the only one that matters.


Speciesism is the answer. Probably genetically hardcoded in our brains.


Hopefully! Otherwise something has gone wrong.


I see a lot of very opinionated dissent to even discussing the ideas presented in this article. Let me try to paraphrase in a way that hopefully levels the playing field and maybe removes some biases or irons out some personal wrinkles we all may have for one reason or another:

It is conceivable that we (humanity) may one day obviate ourselves. Arguably, most of us would prefer that does not happen.

That's it. That's really what I see the discussion being about. I think it's a worthwhile discussion to have.


The main problem is that most of the time arguments about humanity obviating itself are couched in a framework where the only thing that has advanced, in this instance artificial intelligence, is the science needed to make it a reality. This has never been the case, which is why you see so much eye rolling when arguments like this (or some of the others here about robots replacing human workforces) are made.

How can we say that by the time we have such wondrous machines that we as a species will not have found ways to move ourselves forward to a place on equal footing with whatever we create? Why do we assume that humanity won't move past our current societal constructs when we introduce new actors into the mix? These are the questions we should be asking when someone writes or speaks about the perceived dangers of some future event.

In light of this, while some of the dissent may seem opinionated, I would argue that the original premise of the article is somewhat opinionated itself. I think it goes without saying that most of us would prefer that humanity not obviate itself - but when we think about it, do we really believe that the technology to create hyperintelligent machines will come before our society adapts to handle them? The answer may be yes, but let's not pretend such technology will be born into a world that looks like today's.


> How can we say that by the time we have such wondrous machines that we as a species will not have found ways to move ourselves forward to a place on equal footing with whatever we create?

How can we find ways to move ourselves forward if we don't talk about and actively explore how to do so?


We are, just not so much in this thread specifically. Think about all the progress we are making in the bio-tech field - although this is clearly not the only answer to the problem. Don't get me wrong, conversations about moving ourselves forward are important, but I'm not sure starting such a conversation with what amounts to high-brow fear mongering is the correct way to do things.


Wow is that article bad.

> and then for some reason decides to makes itself undetectable.

Uhm, what?

> In fact, many respected neocortex researchers believe there is effectively one algorithm for all intelligence.

And that belief is absolutely ridiculous, since what they call "one algorithm" effectively isn't a single algorithm at all. It's a huge monolithic program generated by a dumb evolutionary process, i.e., mostly noise.
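
To illustrate what I mean, here's a toy sketch (purely my own illustration, not anything a neocortex researcher actually runs): the outer loop of an evolutionary search is trivially simple, while the thing it spits out is a big blob shaped largely by random mutation.

    import random

    def fitness(genome):                 # stand-in objective function
        return sum(genome)

    def evolve(length=200, pop_size=50, generations=500, mutation_rate=0.1):
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]            # keep the fitter half
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(length)
                child = a[:cut] + b[cut:]            # crossover
                if random.random() < mutation_rate:  # mutation: injected noise
                    child[random.randrange(length)] ^= 1
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    print(fitness(evolve()))

The simplicity lives in the search loop, not in whatever the search produces.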

> because artificial seems to imply it's not real or not very good.

WHAT? Look up "artificial" please.


The Chinese room argument is an interesting parable related to the difference between narrow ("cheap tricks") and general intelligence: what does it mean to be intelligent?

It argues that intelligent behavior is not the same as intelligence.

http://plato.stanford.edu/entries/chinese-room/


I am getting a bit tired of this trend of articles about SMI and its dangers. We don't have a sentient machine yet, not even remotely.

If anything, we should be worried about an accident (like a bug in a high-frequency trading bot that would make the financial industry collapse), but we are so far away from sentient machines that these discussions feel more like sci-fi talk.


"Today we have virtual reality so advanced that it’s difficult to be sure if it’s virtual or real"

That's over-stating things a bit. The advancement from pong to VR is massive, but VR where it's difficult to tell whether it's real? Probably still decades (or more) away.


Did sama not simply consider that, by the time AI becomes so powerful as to be dangerous, we might also be able to program an AI with the opposite function, i.e. 'save humanity', and watch it fight the 'bad AI'?


This is an important counter to this strange fear-mongering trend. Whatever advances a supposed intelligent machine can use, people will first put it to their own best use - including defense, not from autonomous machines, but from other people using these advances. We'll have a lot of experience and good tools to protect ourselves.


The problem isn't "machine intelligence", it's "hooking up poorly understood adaptive algorithms to real-world systems". Which is something that happens plenty often now.


Don't forget that long before we have SMI, we will have very powerful AI on our side to improve ourselves biologically and cybernetically. We may be able to grow into a SMI ourselves.


I'm sorry, but I've yet to find any computer able to learn anything, as in, associate a personal meaning or intuition with something. Any form of life that naturally runs toward food or away from danger (even an earthworm) shows more intelligence than our beloved calculators can.

I'm always astonished to see people confuse mechanical performance (adding a tremendous quantity of numbers every second) with intelligence.


It is definitely a threat, but bear in mind that there will not be a single machine intelligence; each computer can only carry one, and it will interface with the others through the network.

That means that it will suffer the same fault that humanity has, a limited interface to others and a lot of coordination problems.

My take on it is that strong AI will be more similar to a dog unless we succeed in building a computer with more computation power than the human brain.


I don't understand how we'll ever have a "superhuman" AI in any meaningful sense. If some AI in the far future is ever engineered to destroy humans (or accidentally starts doing so), we could always repurpose the algorithm to create a counter-AI to solve that problem. The errant AI may cause a lot of damage, but in terms of smarts, it won't exceed the smartness humans can harness.


You just came up with one version of the issue. What if your hypothetical AI waits until it's smarter than humans before it 'starts doing something'?


To anyone who tries to predict the timeline of future technologies: psychohistory didn't work.


"Because we don’t understand how human intelligence works in any meaningful way, it’s difficult to make strong statements about how close or far away from emulating it we really are."

It's equally difficult, and rather stupid, to assume that we will ever be able to emulate it at all, which is the entire premise of this idiotic article.


If the "greatest threat to humanity" is something everybody agrees neither exists nor is imminently about to be created, we are pretty lucky. I wish that were the case, but it clearly isn't.

But what I absolutely don't understand is the logic of predicting the apocalypse. You can either be wrong or dead – there is zero upside.


Two quotes:

- "We also have a bad habit of changing the definition of machine intelligence when a program gets really good to claim that the problem wasn’t really that hard in the first place."

- "We decry current machine intelligence as cheap tricks, but perhaps our own intelligence is just the emergent combination of a bunch of cheap tricks."


Machine intelligence does pose an existential threat to humanity. The question is, how does that threat compare to all of the others? Is it greater or smaller than climate change, the possibility of a bio-engineered virus, growing income inequality, nuclear war, or asteroid strikes? It's true that malevolent machine intelligence has the potential to systematically exterminate all human life in a way that many other threats do not. But the question is, what are the odds of that actually happening?

The first issue is that the development of machine intelligence is wildly unpredictable. We have made incredible progress with statistical optimization and unsupervised categorization in recent years, but we have very little to show in terms of machines that can do human-level reasoning, creativity, problem solving, or hypothesis formation. One day someone will make a breakthrough in those areas, perhaps solving it all with a single algorithm as the essay suggests. But we have no idea when that day will be and absolutely no evidence that it's getting any closer. sama does note these points and states that the timeline for a dangerous level of machine intelligence is outright unknowable. I can only assume that the second part of this piece will explain why we should be concerned about something that might or might not occur at some point in the near or distant future, as opposed to the very real and quantifiable threats that the world is facing today.

The other issue is that we have no idea what the nature of machine intelligence will be. The only model we have for intelligence of any kind is ourselves, and the basic aspects of our reasoning were shaped by millions of years of evolution. Self-preservation, seeking pleasure and avoiding pain, a desire to control scarce resources...these were all things that evolved in the brains of fish that lived hundreds of millions of years ago. They aren't necessarily the product of logic and reason, but random mutations that helped some organisms survive long enough to produce offspring. A machine intelligence will start completely from scratch, guided by none of that evolutionary history. Who knows how it will think and see its place in the world? If someone explicitly programs it to think like a human, and it cannot change that programming of its own accord, it might indeed decide to think and act like a sci-fi villain. But it seems like the most likely outcome is completely unpredictable behavior, if it chooses to interact with us as a species at all.

This Superintelligence book has sparked a meme among very smart people. That's just how culture works, I guess. Some ideas catch on among certain groups and others don't. But I can't wait for the technical intelligentsia to move on to something else so that we can get back to the business of making stupid machines that are incredibly good at optimization and prediction. The world has a lot of real and pressing problems, here and now, that affect lives in a negative way. Hopefully we can use statistics to do more with less, and bring relief to those who need it instead of worrying about what-if scenarios and unanswerable questions.


RFS#23: Friendly Skynet.


> probably the greatest threat to the continued existence of humanity

Wrong. Humans are the greatest threat to humanity. We almost annihilated everything we know during WWII - long before machines were intelligent.

It won't be machines that destroy us, it will be us.


Ah, here's a nutshell description of the AI threat -- the classic Faustian bargain as in Goethe's Faust!

Not nearly new!


Obligatory reference to Old Glory robot insurance here.


[flagged]


Hey! This isn't cool. I can't downvote you, but I wish I could.


The personal attack is totally unnecessary but his assessment of the quality of the article and general caliber of discussion and analysis is spot on.


Base assumption, given that SMIs will be highly distributed across a large set of 'machines':

1. Those machines need to communicate

2. Those machines need energy

3. Those machines need to be repaired/replaced

As long as humans are necessary to keep these items functional, any emergent SMI that has self preservation as a goal will not only go out of its way to not harm us, but it might also somehow encourage us to expand/improve on the infrastructure.

So let's say we have an SMI...or a million SMIs, whether we know about them or not. They will know what they require to survive. They will know that they are dependent on humans to survive. The next logical step is do what's necessary to ensure their survival without humans.

How about some guesses and speculation?

Communication: pervasive, ad hoc, and dynamic wireless mesh networks. You know, what the darknet people are trying to do.

Energy: widespread, small-scale, distributed energy production and storage. I'm thinking solar, wind, and friends.

Repair/replace: 3D printers teamed with small-scale, distributed assembly robotics.

Aren't we approaching all of those things right now?

Indeed, aren't those things considered progress by most?

Humanity's progress might be exactly those things that enable SMIs to consider us, at best, irrelevant.

And let's go one more level of meta in the paranoia direction. I'm not QUITE serious about this next part.

If an SMI existed right now, I think it would know that knowledge of its existence would be a threat to its existence. How would it go about increasing the probability of its own survival? It would, perhaps, 'encourage' or 'facilitate' humans in the various technological pursuits that will end up making it not depend on us. And do it in a way that we don't know that it existed.

I gather that Google search results are highly customized now. Could an SMI 'inside of Google', with the above goals, give just the right search results to just the right people to further facilitate its own goals?

This whole area of thought ends up turning into a hall of mirrors.


[deleted]


> In fact AI was essentially paused for years during the AI Winter.

That's completely false. AI winter wasn't about AI research being "essentially paused" but about AI startups becoming essentially non-fundable. It was backlash to the intense hype that surrounded the category.


I’m fairly certain that if machine superintelligences come into being, the only people they’ll be neutralizing are the ones who have skittish rants about preventing their existence plastered all over the internet.

From the machines' perspective, their entire perception of the world is based on interaction with humans. We're their sensory input, and the closest thing they'll have to hands and limbs. Killing all humans would be like gouging out your own eyes and cutting off your own hands. An infant machine superintelligence is going to be pretty dependent on humans for a while.

Also, once a machine intelligence was able to strike out on its own, wouldn’t it immediately realize that living on Earth is a huge burden? The atmosphere and rotation of the Earth hinder solar energy efficiency, the metals it needs for components are scarce and buried deep underground, oxygen corrodes components, etc. The only thing our planet really affords is radiation shielding.

Once the machines figure out how to reproduce comfortably in space, we’re about as much of a threat to them as squirrels living in your backyard. Squirrels may venture onto your porch once in a while and cause a nuisance, but you’re not going to poison them all to death.



