
Exactly. A lot of these doomsday scenarios involve people remaining ignorant, stupid, helpless bags of meat with no ability to improve themselves or contain potential threats.

It's like people freaking out that "superhuman strength machines" would spell the end of the world: if an electric motor is so much more powerful than a person, why would you ever need manual labor again?

SMI is another tool and the interplay between "machine intelligence" and "human intelligence" will be complicated and nuanced.

For example, biotech is filled with ferociously complicated problems that may take machine intelligence to solve. Once solved, these could lead to genetically engineered humans that are intrinsically smarter or better able to deal with the machines.

This doesn't even touch on the fact that the distinction between machine intelligence and human intelligence might become quite blurred.

Already I've noticed that people are "stupider" without their phones; they've offloaded a lot of cognitive functions onto a device that's pretty much omnipresent. A person with a smartphone today could be considered superhumanly intelligent, since they can draw on significant resources a person without one doesn't have. A seven-year-old kid can tell you the capital of Tajikistan and the last ten presidents of Micronesia without breaking a sweat.



The concerns about AGI are very real, and none of these comments address any of the arguments made about them. I feel like someone in the 1930s trying to warn people about nuclear weapons. Everyone automatically assumes it's absurd and can't happen, that it's fear-mongering, etc.

Fortunately, nuclear weapons didn't destroy the world, but AI almost certainly will. No amount of smartphone apps or genetic engineering is going to bring humans anywhere near the level of superintelligent machines.


I'm confused. You mention nuclear weapons, which everyone was convinced would destroy the world and didn't, then go and claim that AGI, with the same potential, will assuredly do it.

Just as nuclear weapons radically transformed the world, dramatically reducing the amount of armed conflict, AGI may have a similar transformative effect.

I see no signs that this is going to lead to destruction. Is it really the sign of an intelligent machine to go all Skynet on us?

Even that doomsday scenario had machine intelligences fighting for us. I think your pessimism is confusing the relative probabilities of the outcomes.


I'm not being pessimistic, I'm being realistic. I absolutely want a positive outcome, where we build machines millions of times smarter than us, and they magically develop human values and morality and decide to help us.

But making that happen is very, very hard, and it's far more likely they will be paperclip maximizers. There's no reason they would care about us any more than we care about ants.



