Hacker News

The kind of AI you're describing is not the one anyone is worried about, nor would it be interesting in the first place; the concern is with AI that has access to its own source code (hence self-modifying and theoretically capable of getting smarter over time) and/or AI that is smarter than humans.


What do you mean, it's not interesting? Human beings don't have unrestricted access to their own "source code", but they can still learn and get smarter. The AI I am describing can still be much, much smarter than a human being.

We're also assuming that an AI that can "modify its own source code" is going to do significantly better as a result, but that's not obviously true. If you had full access to every single neuron in your brain, you would have a lot of trouble figuring out how it works, much less how to improve it. Human-level AI would likely run into the same problem: the complexity of its code is greater than what it is capable of understanding. As for an AI that's a thousand times smarter than a human, well, it's a thousand times more complex, still over the threshold.

I'm not saying that they wouldn't eventually figure out some improvements, but I suspect that it would end up being more trouble than it's worth. I think the evolution of intelligence is in fact the evolution of intelligence-producing algorithms. Any such algorithm eventually gets stuck in a local maximum, at which point greater intelligence requires restarting from scratch.
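The "stuck in a local maximum" point can be sketched with a toy greedy hill climber. Everything here is invented for illustration (the two-peaked function, the step size): a greedy search only accepts improving moves, so whichever peak is nearest to its starting point is where it stops, even when a taller peak exists elsewhere.

```python
def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy hill climbing: move only to a neighbor that improves f."""
    for _ in range(iters):
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):
            break  # no neighbor improves: stuck at a local maximum
        x = best
    return x

# Toy landscape with a local peak near x = -1 (height 1)
# and a taller global peak near x = 2 (height 3).
f = lambda x: max(1 - (x + 1) ** 2, 3 - (x - 2) ** 2)

print(hill_climb(f, -1.5))  # climbs the nearby low peak and stops near -1
print(hill_climb(f, 1.0))   # happens to start in the taller peak's basin
```

Escaping the low peak requires accepting temporarily worse states or restarting from scratch, which is the restart-from-scratch dynamic described above.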



