Hacker News

I don't think the common claim is that it would be "evil" or malicious, but that it could be a threat, being both powerful and unpredictable.

One idea I've often seen mentioned is the "paperclip optimizer": an AI superintelligence that primarily optimizes for there being more paperclips, without taking into account side effects that other people might find harmful. If I understand the idea correctly, which I very well might not, it is meant to suggest that if an AI superintelligence is created with a particular goal in mind, we would do well to be very careful when choosing that goal.
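The failure mode being described is objective misspecification: the side effects never appear in the objective, so the optimizer is indifferent to them. A minimal sketch (with hypothetical action names and made-up numbers) of how a greedy optimizer picks the most harmful option simply because harm isn't part of its score:

```python
# Toy illustration of a misspecified objective.
# Each action: (name, paperclips produced, harmful side effect).
# The numbers and actions are invented for illustration.
actions = [
    ("run one factory",       100,  1),
    ("strip-mine the region", 900, 50),
    ("convert all steel",    5000, 99),
]

def objective(action):
    _name, paperclips, _side_effect = action
    return paperclips  # side effects simply never enter the score

best = max(actions, key=objective)
print(best[0])  # picks the most harmful action, since harm is invisible to it
```

Nothing here is "evil": the optimizer is doing exactly what it was told, which is the point of the thought experiment.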

I suppose it would essentially be a version of the "literal genie" problem, except harsher: the AI would go through concrete steps to accomplish the task, instead of just magically making it so, and those steps would have side effects of their own.


