
Naive question: Isn't the easiest way to prevent a general-purpose AI from taking over the world simply not to make a general-purpose AI?


Unfortunately, that just means you don't create one. To prevent one from being created at all, you have to either somehow get everyone in the world to agree not to create one, or obtain enough global power that you can forcibly stop anyone who tries. Not exactly easy!


A powerful AI could help with that! >.>


Reminds me of a very frightening quote from security specialist Gavin de Becker (https://en.wikipedia.org/wiki/Gavin_de_Becker), paraphrased: "every evil that you can think of, someone will have done it"


The assumption many in the field make is that _someone_ is going to create general-purpose AI, and we'd rather it be people who want it to be 'good' (aka 'aligned').

Best case, that AI can prevent the creation of harmful AI, though that's glossing over a lot of details that I'm not qualified to describe.


If you want to stop the world from containing general intelligence, you'd have to stop everyone from having children, who are just as generally intelligent as AGIs (though possibly less specifically intelligent) and are even more dangerous, since they actually exist.

The reason people don't accuse every random child of possibly ending the world is that things that actually exist are just less exciting.



