
Why are all these people (Elon Musk, Stephen Hawking, now Sam Altman) who have no background in Artificial Intelligence coming out with these alarmist messages (particularly when there are more plausible imminent threats such as nuclear warfare, superbugs, etc.)? As a grad student doing work in AI, I find it really frustrating. Why not instead talk to some current practitioners such as Mark Riedl, who is one of the premier researchers in computational creativity -- you'll get a different story [1].

[1] https://twitter.com/mark_riedl/status/535372758830809088



Though I dropped out, I studied AI in college. I also worked in Andrew Ng's lab.

As a current grad student, why do you believe whatever makes us smart cannot be replicated by a computer program?


> why do you believe whatever makes us smart cannot be replicated by a computer program?

The universality of computation (Turing's result, which Deutsch extends to physical processes) implies that whatever is feasible in the physical world can, in principle, be replicated as a computation in bits. However, I don't share the belief that AI research is anywhere close to achieving this in the most general sense of intelligence. Most AI researchers seem to agree: https://news.ycombinator.com/item?id=9109140

Have you had a chance to look at David Deutsch's work on this topic?

http://aeon.co/magazine/technology/david-deutsch-artificial-...

http://www.ted.com/talks/david_deutsch_a_new_way_to_explain_...

http://www.amazon.com/The-Beginning-Infinity-Explanations-Tr...

Although Deutsch is not as charismatic a speaker as Kurzweil or as lucid a writer as Bostrom, his arguments make the most sense to me, given my limited experience doing AI research at Stanford. It would be interesting to know your thoughts on Deutsch's theory that the ability to create 'good' explanations is what separates human intelligence from other kinds of intelligence (maybe in another blog post?).

P.S. Since I have your attention here, I took CS183B last quarter and it was really fun. Thanks!


I never said that. I think karpathy (also an AI researcher) summed up my feelings, particularly the Ryan Adams quote: https://news.ycombinator.com/item?id=9109140

edit: apologies about the 'no background' part


Nice link. I also did AI in grad school, and I firmly agree that posts like sama's are, as Ng says, "a distraction from the conversation about... serious issues." The OP is aimed much more at marketing a plausible future of AI than at producing any sort of rigorous prediction. It doesn't even matter whether the OP's predictions turn out to be correct; the post doesn't contribute anything substantially meaningful. I'm sad to see Sam spend so much of his precious time and energy on this post.


I think it's a distraction developed by people whose profits rely on large databases of human activity.

The scariest thing about sophisticated AI is the tremendous power it will grant the owners of the kinds of databases being built at Facebook, Google and the NSA. They will become the most effective marketers, politicians and general trend watchers in history.



