Because progress is hard to measure when you have no goal to compare it against, and therefore no metric. Most disciplines don't try to measure it and largely avoid making grandiose claims [1]. But some in machine learning, for some reason, choose to measure progress in terms of AI, and by that measure I don't think anyone can point to any substantial progress, let alone to major breakthroughs. But even by any other measure, I don't think there has been a major theoretical breakthrough in machine learning in decades, unlike, say, breakthroughs in complexity theory, distributed systems, and other CS disciplines. In any event, marketers can use the term AI to refer to Quicksort for all I care, but I would suggest that machine learning people avoid that term, which is loaded, ill-defined and with a lot of embarrassing historical baggage of failed promises. Instead, they should be pleased that the discipline has finally produced workable practices that are proving useful in some important domains.
[1]: Not all, sadly, but machine learning is certainly among the worst offenders when it comes to claims vs. reality, although programming language theory is occasionally a close contender.
How about this metric: how many times has the bar for "what counts as AI" moved?
> by that measure I don't think anyone can point to any substantial progress, let alone to major breakthroughs.
Because by the above metric, all those things that required intelligence before (image recognition, image captioning, good Go playing, etc.) and now clearly aren't AI count as moved goalposts.
As for "I would suggest that machine learning people avoid that term, which is loaded, ill-defined and with a lot of embarrassing historical baggage of failed promises": I think it is interesting to note that Karpathy's article only mentions AI once (as AGI), in the closing sentence, as a future-work thing.
Personally, I think this is a crappy argument. I'm not at all sure "intelligence" is anything more than good pattern recognition, evolved heuristics and logical reasoning. I think good progress can be shown in all those areas.
> How about this metric: how many times has the bar for "what counts as AI" moved?
There are only two things that "count" as AI: human (or perhaps animal) "intelligence" (this requires a definition of intelligence, which we don't have, but I'll take "we'll know it when we see it" for now), and the field of research working towards that goal. Anything else that some people call AI is nothing but empty marketing speech or the name given to whatever it is that the people researching AI are now doing. The second use seems more reasonable, and what counts as AI by that definition has never changed.
That's not to say that "AI" algorithms don't have some common features. They tend to be less discrete and more continuous, choosing a "best" answer rather than the definitely correct one. But, for example, back in the '40s and '50s, what we would now call control systems were also packaged under the same umbrella of Cybernetics. And, if you think about it, control systems use learning without memory (and some even have memory; a Kalman filter is basically a single-layer NN that employs backpropagation). Still, control systems have long been studied and produced by people who are not AI researchers, so we no longer consider them AI (although, do you remember the fuzzy logic craze of the '90s? It was considered a hybrid of AI and control).
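The Kalman-filter/NN analogy can be made concrete in a rough way: if you hold the Kalman gain fixed, the scalar Kalman update has exactly the same form as an LMS/gradient step on a single linear unit with constant input. (A real Kalman filter adapts its gain from the error covariances, so this sketch only illustrates the fixed-gain correspondence; the function names below are illustrative, not from any library.)

```python
def kalman_step(estimate, measurement, gain):
    # Scalar Kalman-style update: move toward the measurement
    # by gain * innovation (measurement - estimate).
    return estimate + gain * (measurement - estimate)

def lms_step(weight, target, lr):
    # Gradient step on squared error 0.5 * (target - weight)**2
    # for a single linear unit with constant input 1.
    return weight + lr * (target - weight)

measurements = [1.0, 0.8, 1.2, 0.95, 1.05]
k = w = 0.0
for z in measurements:
    k = kalman_step(k, z, gain=0.5)
    w = lms_step(w, z, lr=0.5)

print(abs(k - w) < 1e-12)  # the two updates coincide for a fixed gain
```

The difference, of course, is that the filter's "learning rate" is derived from a noise model rather than tuned by hand, which is what makes the fixed-gain case the only clean overlap.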
> all those things that required intelligence before (image recognition, image captioning, good Go playing, etc.) and now clearly aren't AI count as moved goalposts.
Doing arithmetic and recalling information based on queries had also once been considered to require intelligence, but they have never been considered "AI" because those were not the problems people in AI research were working on. The goalposts have not moved an inch: AI is still human/animal intelligence, or whatever product AI researchers (working toward that goal) produce. These days, what AI researchers produce amounts to statistical clustering algorithms, so any statistical clustering algorithm is called AI. I don't see anything harder or more special about image recognition than DB technology, distributed systems, etc.
> I think it is interesting to note that Karpathy's article only mentions AI once (as AGI) in the closing sentence as a future work thing.
That's what I was referring to (I don't see what difference mentioning it only once makes). We simply don't know whether deep learning, i.e. deep neural networks trained through a variant of backpropagation, is the approach that would one day lead us to AI.
My other point was about the special status he assigns to machine learning as Software 2.0, something that is wrong both historically (machine learning predates almost every other CS field) and in practice (machine learning is not taking over DBs, OSes, etc.; it's doing what it can do well, namely statistical learning).
> I'm not at all sure "intelligence" is anything more than good pattern recognition, evolved heuristics and logical reasoning. I think good progress can be shown in all those areas.
I don't know what I think about your definition of intelligence, but "good progress" is relative. I think current machine learning systems are quite disappointing: they're nowhere near as impressive as what, say, even insects can do (and I don't think we'd call insects intelligent), they're prone to very "unintelligent" mistakes, and their learning process does not seem to resemble anything done by humans or animals. I think that in terms of theory, progress could be said to be slow at best, but in any event, we are certainly not in any position to say with any reasonable confidence that AI is less than 50 years in the future.
In terms of practice, a breakthrough would be a program that displays the learning and reasoning abilities of some advanced invertebrates (say, wasps or spiders). But we don't know how far from AI that would put us. Once we achieve that, are we 5 years away from human-level intelligence or 50? We simply do not know.
In terms of theory, a breakthrough would be a better understanding of what intelligence is on the one hand, and of how "unorganized machines", to use Turing's terminology, evolve sophisticated algorithms on the other. At some stage, Turing believed that, as a precursor to intelligence, we should study simpler biological phenomena, and turned to so-called "artificial life". There hasn't been much progress on that front either, but the work done by Stuart Kauffman [1] since the late sixties seems like a move in the right direction, albeit a very slow one.
Don't get me wrong: I'm not an AI skeptic. I believe that we will achieve it one day. I just think it is very irresponsible for machine learning researchers to hint that we're getting close when, in fact, they have no idea whether we are or we aren't. To be more specific, we don't know whether deep learning, i.e. deep neural networks trained through a variant of backpropagation, is the approach that would one day lead us to AI.
I find it very hard to judge papers by Numenta, because they make it very hard to separate science from marketing; their entire marketing motto is "we're better because we're more like the actual brain" (their HTM networks), rather than "we're better because we perform drastically better". Claims to greater biological accuracy (a controversial direction since the birth of AI) are pretty much their raison d'être. They say they're writing about theory in this paper, but I see mostly observations, so it's theory more in the sense of a hypothesis than what is meant by "theory" in math or physics. But I am really not qualified to judge this paper's importance.
> they make it very hard to separate science from marketing
Isn't this true for pretty much any science done ever? It only becomes a problem when marketing is good, and science is not.
> so it's theory more in the sense of a hypothesis than what is meant by "theory" in math or physics
Again, isn't it true for pretty much any neuroscience research? How would you judge a neuroscience paper's importance?
Also, going back to your earlier answer: what would convince you that a program displays the learning and reasoning abilities of some advanced invertebrates?
> Isn't this true for pretty much any science done ever?
Maybe, but when it comes to a commercial entity I'm more suspicious.
> How would you judge a neuroscience paper's importance?
I wouldn't; I'd let a neuroscientist judge. It's just that I believe most of us would hear of a major breakthrough in neuroscience.
> what would convince you that a program displays the learning and reasoning abilities of some advanced invertebrates?
It's hard to say precisely (largely because we don't know what intelligence is, let alone have a good quantitative measure for it), but if you read about insect behavior it's very clear that we're nowhere near that (just as it's clear people are more intelligent than spiders even though there are probably mental tasks that spiders can perform better/faster than humans). So ask me again when the question becomes harder to answer :)