> What they've proved is that the same kind of intelligence required to play Go can be implemented with computer hardware. Before now, software couldn't beat a ranked human player at Go no matter how much computing power we threw at it.
I don't think that's quite true as a description of what we knew about computer Go previously, though it depends on what precisely you mean. Recent systems (meaning the past 10 years, since the rise of MCTS) appear to scale to essentially arbitrarily good play as you throw more computing power at them. Play strength scales roughly with the log of computing power, at least as far as anyone has tested them (maybe it plateaus at some point, but if so, that hasn't been demonstrated).
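A quick way to see what "scales with the log of computing power" means in practice: each doubling of compute buys a roughly constant Elo increment. The numbers below are made up for illustration, not measured data from any actual engine:

```python
import math

# Illustrative only: BASE_ELO and ELO_PER_DOUBLING are invented numbers,
# not measurements. The point is the shape: linear in log2(compute).
BASE_ELO = 2000          # assumed strength at 1x compute
ELO_PER_DOUBLING = 120   # assumed gain per doubling of playouts

def elo_at(compute_multiplier):
    # Strength grows by a fixed increment each time compute doubles.
    return BASE_ELO + ELO_PER_DOUBLING * math.log2(compute_multiplier)

for mult in (1, 2, 4, 8, 16):
    print(f"{mult:>2}x compute -> ~{elo_at(mult):.0f} Elo")
```

Under this model you never hit a hard ceiling, but each fixed gain in strength costs exponentially more hardware, which is why "arbitrary strength in principle" and "practical anytime soon" were separate questions.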
So we've had systems that can in principle play to any arbitrary strength, if you can throw enough computing power at them. Though you might legitimately argue: by "in principle" do you mean some truly absurd amount, like more computing power than could conceivably fit in the universe? The answer to that is also no; scaling trends have been such that people expected computer Go to beat humans anywhere from, well, around now [1], to 5 to 10 years from now [2].
The two achievements of the team here, at least as I see them, are: 1) they managed to actually throw orders of magnitude more computing power at it than other recent systems have used, in part by making use of GPUs, which the other strong computer-Go systems don't use (the AlphaGo cluster as reported in the Nature paper uses 1202 CPUs and 176 GPUs), and 2) improved the scaling curve by algorithmic improvements over vanilla MCTS (the main subject of their Nature paper). Those are important achievements, but I think not philosophical ones, in the sense of figuring out how to solve something that we previously didn't know how to solve even given arbitrary computing power.
[1] A 2007 survey article suggested that mastering Go within 10 years was probably feasible; not certain, but something that the author wouldn't bet against. I think that was at least a somewhat widely held view as of 2007. http://spectrum.ieee.org/computing/software/cracking-go
[2] A 2012 interview suggested that mastering Go would need a mixture of inevitable scaling improvements plus probably one significant new algorithmic idea; that was also a reasonably widely held view as of 2012. https://gogameguru.com/computer-go-demystified-interview-mar...
> Recent systems (meaning the past 10 years, post the resurgence of MCTS) appear to scale to essentially arbitrarily good play as you throw more computing power at them. Play strength scales roughly with the log of computing power, at least as far as anyone tested them (maybe it plateaus at some point, but if so, that hasn't been demonstrated).
This is exactly the opposite of my sense based on following the computer-go mailing list (which featured almost all the top program designers prior to Google and Facebook entering the race). They said that scaling was quite bad past a certain point. The programs had serious blind spots when dealing with capturing races and kos [0] that you couldn't overcome with more power.
Also, DNNs were novel for Go--Google wasn't the first one to use them, but no one was talking about them until sometime in 2014-2015.
[0] Not the kind of weaknesses that can be mechanically exploited by a weak player, but the kind of weaknesses that prevented them from reaching professional level.
> Play strength scales roughly with the log of computing power
To be fair, a lot of the progress in recent years has been due to taking a different approach to the problem, not just pure computing power. Because of the way Go works, you can't do what we do in chess and try all combinations, no matter how powerful a computer you have. Using deep learning, we have recently helped computers develop what you might call intuition: they're now much better at figuring out when they should stop going deeper into the tree of all possible combinations.
There have definitely been algorithmic improvements, but from what I've read so far, the change in search algorithms, from traditional minimax to MCTS, has been the bigger gain, more so than deep learning.
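The core loop that MCTS replaced minimax with fits in a few dozen lines. This toy version plays a Nim-like pile game rather than Go; the game, node structure, and exploration constant are illustrative stand-ins, not anything from the AlphaGo paper:

```python
import math, random

# Toy game standing in for Go: a pile of stones, players alternately
# remove 1-3, and whoever takes the last stone wins.
def moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

class Node:
    def __init__(self, pile, parent=None):
        self.pile, self.parent = pile, parent
        self.children = {}            # move -> child Node
        self.visits, self.wins = 0, 0 # wins for the player who moved INTO this node

def uct_select(node, c=1.4):
    # UCB1: trade off win rate (exploitation) vs. uncertainty (exploration).
    return max(node.children.items(),
               key=lambda kv: kv[1].wins / kv[1].visits
               + c * math.sqrt(math.log(node.visits) / kv[1].visits))

def rollout(pile):
    # Random playout; returns 1 if the player to move at `pile` wins.
    turn = 0
    while pile > 0:
        pile -= random.choice(moves(pile))
        if pile == 0:
            return 1 if turn == 0 else 0
        turn ^= 1
    return 0  # no stones left to take: player to move already lost

def mcts(root_pile, iters=4000):
    root = Node(root_pile)
    for _ in range(iters):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while node.pile > 0 and len(node.children) == len(moves(node.pile)):
            _, node = uct_select(node)
        # 2. Expansion: add one untried child (unless terminal).
        if node.pile > 0:
            m = random.choice([m for m in moves(node.pile)
                               if m not in node.children])
            node.children[m] = Node(node.pile - m, parent=node)
            node = node.children[m]
        # 3. Simulation: random playout from the new node.
        result = rollout(node.pile)
        # 4. Backpropagation: flip the winner's perspective at each ply.
        while node is not None:
            node.visits += 1
            node.wins += 1 - result
            result = 1 - result
            node = node.parent
    # Play the most-visited move.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

No position evaluation function is needed, only playouts to the end of the game, which is exactly why MCTS worked for Go where handcrafted minimax evaluations did not; the deep-learning contribution was then to replace the random playout and move-selection priors with learned networks.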
The game itself, however, scales exponentially, and there's nothing to be done about that: if you enlarge the board enough, no computer, and no human, may be able to play it well.
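To put a number on "scales exponentially": each board point is empty, black, or white, so 3^(n*n) is a crude upper bound on configurations (it ignores legality), and it already explodes at standard board sizes:

```python
# Crude upper bound on board configurations: each of the n*n points is
# empty, black, or white, so at most 3**(n*n) states (legality ignored).
def state_bound(n):
    return 3 ** (n * n)

for n in (9, 13, 19, 25):
    # Report the order of magnitude rather than the full number.
    print(f"{n}x{n}: ~10^{len(str(state_bound(n))) - 1} positions")
```

A 19x19 board already gives roughly 10^172 configurations by this bound, which is why "try all combinations" was never on the table for Go.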
The achievement was a leap towards the human level of play (and quite possibly beyond it). There might be additional leaps, which will take AIs WAY beyond humans, but none of those will scale linearly in the end. (And yeah, I guess you didn't want to say that either.)
While I don't agree with everything in it, I also found this recent blog post / paper on the subject interesting: http://www.milesbrundage.com/blog-posts/alphago-and-ai-progr...