Hacker News

AGI alignment is a vastly bigger problem. Of course poorly built and deployed ML systems will kill and injure people - but these are tragedies of the kind humanity can endure and overcome and has overcome. Poorly aligned AGI is nothing less than the entire species at stake.



> AGI alignment is a vastly bigger problem

But also far less likely to happen anytime soon. A bigger danger is when someone thinks a machine is sentient or "semi-conscious" (whatever that means) and naively uses it to do tasks it shouldn't.


I don't think you or anyone else knows when AGI is likely to happen. I also don't think that incorrectly believing a machine is sentient when it is not is a "bigger danger."

Again, an improperly aligned AGI could kill the entire human race. I'm not sure what harm incorrectly believing a machine is sentient might do, but I don't think it would be as bad as human extinction or enslavement which are both real possibilities with AGI.


You seem to be comparing only the worst-case impact and not the probability. To see why that's fallacious, consider that an asteroid could also kill the entire human race, but nobody would agree that asteroids are more dangerous than drunk driving.


I think there's a high probability of AGI within a century. Surveys show most experts share that opinion. It's hard to know the probability that the AI will be misaligned - but currently we have no idea how to align it. It's also hard to say how likely a misaligned AI would be to cause extinction. However, we have no reason to think that either of those things are unlikely.


> I don't think you nor anyone else knows when AGI is likely to happen.

Sure. But since I am an AI researcher, I'd imagine I have a good leg up on the average person. I'm at least aware of the gullibility gap. Lots of people think AGI is closer than it is because they see machines doing tasks that only humans could do, but really your pets are smarter than these machines.


>Again, an improperly aligned AGI could kill the entire human race

Lots of humans are "improperly" aligned GIs. Rogue humans like Hitler or Kim Jong Un haven't been very successful.


I think that's a reasonable stance, but only for some values of "soon". In a hundred years, we may well have AGI. At that time, we better have developed a robust science for how to control them. This is a somewhat unrelated problem to the current problem of machine/AI safety and both require more focus than they currently get.


We use the same scheme we use to control humans. The rich own all the valuable land and all the money. Ban robots from owning land. That way people can just use their shotguns or call the police to kill them for trespassing.

If I were an AGI robot I would be scared of getting swatted for lols.



