
The risk here isn't wasting money; it's slowing down avenues of research with extreme payoffs to the point where we never see the breakthrough at all.

This gets much more interesting once you account for human politics. Say the EU passes the most stringent legislation of this kind; how long will it be able to sustain it as the US forges ahead with more limited regulations, and China allows the wildest experiments so long as it's the government doing them?

FWIW I agree that we should be very safety-first on AI in principle. But I doubt that there's any practical scheme to ensure that, given our social organization as a species. The potential payoffs are just too great, so if you don't take the risk, someone else will. And then you get to experience most of the downsides if their bet fails, and none of the upsides if it succeeds (or even more downsides if they use their newly acquired powers against you).

There is a clear analogy with nuclear proliferation here, and it is not encouraging, but it is what it is.


