Go is literally infinitely easier to solve than general intelligence. "Literally" in the sense that Go has a finite number of board states, while a general intelligence must be able to deal with an infinite number of novel situations, presumably by generalising from previously experienced ones.
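To put a rough number on "finite": each of the 361 points on a 19x19 board is empty, black, or white, which gives a crude upper bound of 3^361 board configurations (the number of legal positions is smaller, but still finite). A back-of-the-envelope sketch, just for illustration:

```python
# Naive upper bound on 19x19 Go board configurations:
# each of the 361 points is empty, black, or white.
upper_bound = 3 ** 361
print(f"3^361 ~ 10^{len(str(upper_bound)) - 1}")  # roughly 10^172
```

Astronomically large, but still a fixed, countable target, which is a very different thing from "every situation a general agent could ever face".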
Infinity is a real problem. When you try to learn from examples, you first need to see "enough" examples of whatever you're trying to learn. If there are infinitely many such examples, then no matter how cleverly you tackle your search space, there will always be infinitely many examples, of infinitely many situations, that you've never come across and that you won't be able to learn.
The typical example of this is language. You could give a learner every phrase of a given language ever produced and it would still be missing an infinite number of necessary examples. Somehow (and it's freaky when you stop to think about it) humans get around this and we can produce and understand parts of infinity, without sweating it.
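The usual way to see this is that even a toy recursive grammar generates unboundedly many distinct sentences, so no finite corpus can contain them all. A minimal sketch (the grammar is made up purely for illustration):

```python
# A toy recursive grammar: "the cat saw the dog",
# "the cat saw the dog that saw the rat", and so on.
# Each extra relative clause yields a new, distinct sentence,
# so the set of sentences is infinite even though the grammar is tiny.
nouns = ["the cat", "the dog", "the rat"]

def sentence(depth):
    """Build a sentence with `depth` nested relative clauses."""
    s = f"{nouns[0]} saw {nouns[1]}"
    for i in range(depth):
        s += f" that saw {nouns[(i + 2) % len(nouns)]}"
    return s

for d in range(3):
    print(sentence(d))
# No finite training corpus can contain sentence(d) for every d.
```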
Machine learning is simply incapable of generalising like that, and anyone who thinks AGI is just around the corner has failed to consider what "general" really, really means.
Though to be fair, now that I've had my little rant, I have to admit that you don't need to reach "general intelligence" before you can be really, really dangerous. Even if AI doesn't "take over" it can do a lot of damage, for example if we start using autonomous weapon systems or hand over critical infrastructure maintenance to limited and inflexible mechanical intelligence.
Yes, but the point is that you're not going to brute-force this. If it's finite you can hope to approximate it. If it's not finite, you can't even approximate it.
Look: take Monte Carlo methods. You can sample a very large number of events and hope to get some useful information from that. If you sample infinity, though, what do you get? Infinity.
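As a concrete instance of what sampling buys you on a bounded problem, here's the standard textbook Monte Carlo estimate of pi (a minimal sketch, not anyone's production code):

```python
import random

# Minimal Monte Carlo: estimate pi by sampling points in the unit square
# and counting how many land inside the quarter circle.
# More samples -> a better estimate, because the target is one bounded quantity.
def estimate_pi(n_samples=1_000_000):
    inside = sum(
        1 for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / n_samples

print(estimate_pi())  # roughly 3.14
```

The estimate converges because the thing being approximated is a single finite quantity; sampling buys you nothing comparable when the space of distinct cases you need to cover is itself infinite.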