
Funny to see the skepticism here.

It's hard to understand the danger machine intelligence poses to humanity for the same reason it's hard to understand the danger tiny startups can pose to big industries. Current implementations look like toys, humans have bad intuitions about exponential growth, and most will disagree on how _probable_ the threat is (something that might not be known except retroactively) while systematically underestimating how _large_ the threat is if it does come to pass (because it's so far off the scale of anything that's come before it).

Maybe Sam (and Elon Musk, and lots of other Silicon Valley types) are talking about this problem because they read too many sci-fi novels, or are too privileged to worry about Real Problem X which affects Y group in the here and now.

But what if instead, they're talking about this problem because they've spent a lot of time seeing this sort of black swan pattern play out before, and they know the way to assess the impact of something truly _new_ is to envision what it could be instead of looking at what it is now?



Some of my personal skepticism boils down to: well, what are we going to do about it? There are really only two options:

(1) The methods to create strong AI will become known to us before we actually build something dangerous. At that point, since we will better understand the nature of the potential threat, it will actually be feasible to put safety restrictions in place.

(2) Someone will stumble upon strong AI in secret or by accident. I don't see how this is preventable, unless we issue a moratorium on AI-related research, which just isn't going to happen outside of scenario 1.

And so the answer becomes: let's wait and see.

That said, I don't believe there's anything unbearably harmful about the current level of speculation and "fear-mongering."



