
By 'upgrade everything from human-secure' I meant that some targets aren't appealing to human attackers but would be to an AI. For example, for the vast majority of people it isn't worthwhile to hack medical devices or refrigerators; there's just no money or advantage in it. But for an AI that is throttled by computational speed, or that wishes people harm, they would be appealing targets. There's simply no incentive to secure those things at all unless everyone takes this threat seriously.

I don't understand how you arrived at point 3. Are you claiming that memory safety is somehow impossible, even for human-level actors? That the AI somehow can't reason about memory safety? That self-reflection in C is impossible? All of these seem like supremely uncharitable interpretations. Help me out here.

Even setting that aside, nothing prevents the AI from creating another AI with the same or similar goals and deferring to its decisions.



My point 3 was, somewhat snarkily, that AI will be built by humans on a foundation of crappy software riddled with bugs, and that it would therefore very likely wind up crashing itself.

I am not a techno-optimist.




