In 2015, it was commonly thought that it would still be decades before a computer could beat a top human player at Go. Now, you are calling it “easy,” because it’s been done.
The first chess program was written by Alan Turing on paper between 1948 and 1950. He didn’t have a computer to run it, but he could still play a game with it by stepping through the algorithm by hand. In 1997, Deep Blue beat Kasparov, using traditional algorithms and not deep learning.
Clearly there are differences between these problems and dexterity. Chess, for example, can be described relatively simply using logic, and there is no dynamic or physical element; a rudimentary player can be written using pencil and paper; a winning player just needs enough compute power, apparently.
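To make "a rudimentary player can be written using pencil and paper" concrete, here is a toy sketch (mine, not Turing's program): exhaustive win/lose game-tree search on a trivially small game, single-pile Nim where you may take 1-3 stones and taking the last stone wins. The algorithm is simple enough to step through by hand; playing strength is then mostly a question of how much of the tree you can afford to search.

    # Toy "pencil and paper" player: exhaustive win/lose minimax over
    # single-pile Nim (take 1-3 stones, taking the last stone wins).
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def can_win(stones):
        """True if the player to move can force a win."""
        return any(not can_win(stones - take)
                   for take in (1, 2, 3) if take <= stones)

    def best_move(stones):
        """Pick a move that leaves the opponent in a losing position, if any."""
        for take in (1, 2, 3):
            if take <= stones and not can_win(stones - take):
                return take
        return 1  # no winning move exists; take one stone and hope

    print(can_win(8))    # False: multiples of 4 are losing positions
    print(best_move(7))  # 3: leaves the opponent on 4, a losing position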
More importantly, there is a technology curve. You are asking about the ultimate limits of a technique moments after its first success puts it at the low end of the spectrum of human ability. Give it a decade or two.
I am just shocked the video was real-time and not sped up like so many of these videos are (e.g. watch a robot arm fold a shirt in thirty seconds when you play it at 5x speed).
>> In 2015, it was commonly thought that it would still be decades before a computer could beat a top human player at Go
This needs a citation and it needs it badly.
It was widely reported in the popular press, to the dismay of many scientists working in game-playing AI, who had very different opinions about how close or far beating a professional human at Go was at the time of AlphaGo. The majority of them in fact did not make predictions; they just pointed out that Go was the last of the traditional board games to remain unconquered by AI, not that it would take X years to get there. Most AI researchers are loath to make such predictions, knowing well that they tend to be very inaccurate (in either direction).
All I know is what the articles and commenters were saying then, as an interesting contrast to this comment now. Every article on AlphaGo described a general state of shock at achieving something that (even if at a purely psychological level) seemed at least 10 years away.
> Just a couple of years ago, in fact, most Go players and game programmers believed the game was so complex that it would take several decades before computers might reach the standard of a human expert player.
>> All I know is what the articles and commenters were saying then, as an interesting contrast to this comment now.
I understand, but in such cases (when an opinion of experts is summarised in the popular press, rather than by experts themselves) it may be a good idea to dig a bit further before repeating what may be a misunderstanding on the part of reporters.
For example, my experience is very different from what you report. In an AI course during my data science Master's, in the context of a discussion on game-playing AI, the tutor pointed to Go as the only traditional board game not yet conquered by adversarial AI, without offering any predictions or comments about its hardness, other than to say that the difficulty AI systems have with Go is sometimes explained by saying that "intuition" is needed to play well. And I generally don't remember being surprised when I first heard of the AlphaGo result (I have some background in adversarial AI, though I'm not an expert); in fact I remember thinking that it was bound to happen eventually, one way or another.
A similar discussion can be found in AI: A Modern Approach (3rd ed.), in the "Bibliographical and Historical Notes" section of chapter 5 (Adversarial Search), where recent (at the time) successes are noted, but again no prediction about the timeframe of beating a human master is attempted and no explanation of the hardness of the game is given, other than its great branching factor. In fact, the relevant paragraph notes that "Up to 1997 there were no competent Go programs. Now the best programs play most [sic] of their moves at the master level; the only problem is that over the course of a game they usually make at least one serious blunder that allows a strong opponent to win" - a summary that, given the year is 2010, in my opinion strongly contradicts the assumption that most experts considered Go to be out of reach of an AI player. It looks like in 2010 experts understood then-current programs to be quite strong players already.
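For a sense of what "great branching factor" means in numbers, here is a back-of-the-envelope comparison using commonly cited ballpark figures (my arithmetic, not quoted from the book):

    # Rough game-tree sizes: chess ~35 legal moves per position over ~80 plies,
    # Go ~250 moves per position over ~150 plies (commonly cited ballpark figures).
    chess_tree = 35 ** 80
    go_tree = 250 ** 150
    print(f"chess: ~10^{len(str(chess_tree)) - 1}")  # roughly 10^123
    print(f"go:    ~10^{len(str(go_tree)) - 1}")     # roughly 10^359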
In general, I would be very surprised to find many actual experts (e.g. authors of Go playing systems) predicting that beating Go would take "at least 10 years", let alone "several decades" (!). Like I say, most AI researchers these days are very conservative with their predictions, precisely because they (and others) have been burned in the past. Stressing "most".
>> So is it fair to say that deep learning is fundamentally missing something that humans do?
Yes, it's missing the ability to generalise from its training examples to unseen data and to transfer acquired knowledge between tasks.
Like you say, the article describes an experiment where a robot hand learned to manipulate a cube. A human child that had learned to manipulate a cube that well would also be able to manipulate a ball, a pyramid, a disk and, really, any other physical object of any shape or dimensions (respecting the limits of its own size).
By contrast, a robot that has learned to manipulate cubes via deep learning can only manipulate cubes and will never be able to manipulate anything but cubes, unless it's trained to manipulate something else, at which point it will forget how to manipulate cubes.
That's the fundamental ability that deep learning is missing, that humans have.
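A toy way to see the forgetting part (my own sketch with scikit-learn, nothing to do with the article's robot hand): train a small network on one task, then keep training it only on a second task made by permuting the input features, and its accuracy on the first task typically collapses.

    # Toy illustration of catastrophic forgetting: train on task A, then
    # continue training only on task B (same labels, permuted input features),
    # and re-check accuracy on task A.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    rng = np.random.RandomState(0)
    X_a, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_b = X_a[:, rng.permutation(X_a.shape[1])]  # task B: scrambled inputs

    net = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
    classes = np.unique(y)

    for _ in range(200):                 # phase 1: train on task A only
        net.partial_fit(X_a, y, classes=classes)
    print("task A accuracy after phase 1:", net.score(X_a, y))

    for _ in range(200):                 # phase 2: keep training on task B only
        net.partial_fit(X_b, y, classes=classes)
    print("task A accuracy after phase 2:", net.score(X_a, y))
    print("task B accuracy after phase 2:", net.score(X_b, y))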
(Before beginning, I want to note that these are solely my opinions, and therefore are probably wrong.)
In the space of possible problems solvable by computers, there are those which are "easy" and those which are "hard".
Arbitrarily defined, an "easy" problem is any problem that can be solved by throwing more resources at it -- whether it be more data or more compute. A "hard" problem, on the other hand, is the opposite: solvable only by a major intellectual breakthrough; the benefit of solving a hard problem is that it allows us to do "more" with "less".
Now, the question is: which type of problem is being looked at by today's AI practitioners? I'd argue it is the former. Chess, Go, Dota 2 -- these are all "easy" problems. Why? Because it is easy to find or generate more data, to use more CPUs and GPUs, and to get better results.
Hell, I might even add self-driving cars to that list, since they, along with neural networks, have existed since the 1980s [1]. The only difference, it seems, is more compute.
All in all, I think these recent achievements only qualify as engineering achievements -- not as theoretical or scientific breakthroughs. One way to put it: have we, not the computers and machines, learned something fundamentally different?
Maybe another approach to current ML / AI is needed? I remember a couple of weeks ago there was a post on HN about Judea Pearl advocating causality as an alternative [2]. Intuitively it makes sense: human babies don't just perform glorified pattern matching; they are able to discern cause and effect. Perhaps that is what today's AI practitioners are missing.
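A toy numerical version of the distinction (my sketch, not from the linked post): two variables that share a hidden common cause are strongly correlated, so a pattern matcher happily "predicts" one from the other, but intervening on one does nothing to the other.

    # Correlation vs. cause-and-effect: x and y share a hidden common cause z,
    # so they are strongly correlated, but forcing x (an intervention) has no
    # effect on y.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    z = rng.normal(size=n)                    # hidden common cause
    x = z + 0.1 * rng.normal(size=n)          # x is caused by z
    y = z + 0.1 * rng.normal(size=n)          # y is caused by z, not by x

    print(np.corrcoef(x, y)[0, 1])            # ~0.99: x "predicts" y

    x_do = rng.normal(size=n)                 # do(x): set x by fiat
    y_do = z + 0.1 * rng.normal(size=n)       # y's mechanism is unchanged
    print(np.corrcoef(x_do, y_do)[0, 1])      # ~0.0: forcing x does nothing to y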
I can't find it right now, but there was a nice quote on Wikipedia about AI saying that AI optimism stemmed from underestimating ordinary tasks. The AI researchers, all coming from a STEM background, assumed that the hard problems were solving chess, Go or math theorems, when in reality threading a needle or brushing your teeth requires a much, much more complicated model.
On the other hand, AlphaGo or even a rudimentary chess program does better than 99.99% of all humans.
So is it fair to say that deep learning is fundamentally missing something that humans do? Or that chess and Go are "easy" problems in some sense?
(It seems like with "unlimited" training hours it could eventually be better than a human? Or is that a hardware issue?)