
Secret projects to continue advancing AI are much less of a danger than the current situation, in which tens of thousands of AI researchers worldwide are in constant communication with each other, with no need to hide that communication from the public or from any government.

Advancing the current publicly-known state of the art to the point where AI becomes potent enough to badly bite us (e.g., to cause human extinction) is probably difficult enough that it is beyond Pyongyang's power, or even Moscow's or Beijing's, especially if the government has to do it under the constraint of secrecy. It probably requires the worldwide community of researchers continuing to collaborate freely to reach the dubious "achievement" of creating an AI model that is so cognitively capable that once deployed, no human army, no human institution, would be able to stop it.



> ...especially if the government has to do it under the constraint of secrecy. It probably requires the worldwide community of researchers continuing to collaborate freely to reach the dubious "achievement" of creating an AI model that is so cognitively capable that once deployed, no human army, no human institution, would be able to stop it.

And stopping now may help stall further advances (if such advances are even possible) by providing just enough capability to pollute the potential training data going forward. If the public internet becomes a "dead internet" or a "zombie internet," it will be much harder to assemble large, high-quality datasets economically.

All the AI hype (and its implications) is bringing me around to the idea of viewing spam (of all things) as a moral good.



