As usual, the monks know what the laity doesn't, and aren't particularly afraid to talk about it. Also as usual, there's still a yawning gap between what domain experts are up to and what non-domain experts think they're up to.
That this is true in AI is not surprising; humility comes from knowing that my domain expertise in some fields (and thus a clearer picture of 'what's really going on') is guaranteed to be crippled in other fields. Knowing that being in some knowledge in-groups requires me to also be in some knowledge out-groups is the beginning of a sane approach to the world.
The author is just correcting a misnomer. It is not really accurate to say that machine learning is intelligent at all, so why label it as such? It's confusing for everyone and leads to great misunderstandings.
Machine learning is a particular, narrow result of studying the wider field of artificial intelligence. Just like expert systems, RDF knowledge representation, first-order logic reasoners, or planning systems - none of them are 'intelligent', but all of them are research results coming from (and being studied in) the discipline of studying how intelligence works and how something like it can be approached artificially.
There's lots in the field of AI that is not 'cognitive automation' - many currently popular things and use cases are, but that's not correcting a misnomer, that's a separate term for a separate (and more narrow) thing - even if that narrower thing constitutes the most relevant and most useful part of current AI research.
A classic definition of intelligence (Legg & Hutter) is "Intelligence measures an agent's ability to achieve goals in a wide range of environments". That's a worthwhile goal to study even if (obviously) our artificial systems are not yet even close to human level according to that criterion; and while it is roughly in the same direction as 'cognitive automation', it's less limited and not entirely the same.
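For reference, Legg & Hutter also give a formal version of that definition ("universal intelligence"), which scores an agent $\pi$ by its expected performance across all computable environments $\mu$, weighted toward simpler ones; roughly (notation from memory, hedged):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ and $V^{\pi}_{\mu}$ is the expected reward the agent achieves in it. The simplicity weighting is exactly what makes this broader than automating one fixed task.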
For example, 'cognitive automation' pretty much assumes a fixed task to execute/automate, and excludes all the nuances of agentive behavior and motivation, but these are important subtopics in the field of AI.
But I am willing to concede that very many people are explicitly working only on the subfield of 'cognitive automation' and that it would be clearer if these people (but not all AI researchers) explicitly said so.
> Machine learning is a particular narrow result of studying the wider field of artificial intelligence.
I beg to differ, at least as far as terms go now. Neural networks lived in the "field" of machine learning along with kernel machines and miscellaneous prediction systems circa the early 2000s. Neural networks today are known as AI because ... why? Basically, the histories I've read and remember say that the only difference is that neural networks are now successful enough that they don't have to hide behind a more "narrow" term - or alternately, the hype train now prefers a more ambitious term. I mean, the Machine Learning reddit is one go-to place for actual researchers to discuss neural nets. Everyone now talks about these as AI because the terms have essentially merged.
> A classic definition of intelligence (Legg&Hutter) is "Intelligence measures an agent’s ability to achieve goals in a wide range of environments".
Machine learning mostly became AI through neural nets looking really good - but none of that involved them becoming more oriented to goals; if anything, less so. It was far more that high-dimensional curve fitting can actually get you a whole lot, and when you do it well, you can call it AI.
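The "curve fitting" framing is meant literally: the simplest instance is ordinary least squares, and deep nets are (loosely) the same idea scaled up to millions of dimensions. A minimal pure-Python sketch of the one-dimensional case, using the closed-form OLS solution:

```python
def fit_line(xs, ys):
    """Closed-form ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of (x, y) divided by variance of x.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Data generated from y = 1 + 2x, so the fit recovers it exactly.
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # -> (1.0, 2.0)
```

Nothing in that procedure knows anything about goals; a neural net trained by gradient descent is doing the same kind of error minimization, just over a far more flexible family of curves.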
What do you mean by "today are known as AI" and "became AI" ?
Neural networks have always been part of AI, and machine learning has always been a subfield of AI. All of these have been terms within the field of AI since the day they were invented; there was never a single day in history when they were not part of the AI field.
Neural networks were part of the AI field back when neural nets were not looking really good, either - e.g. Minsky and Papert's 1969 book "Perceptrons", which was a description of the neural networks of the time and a big critique of their limitations - that was an AI publication by AI researchers on AI topics.
Your implication that an algorithm needs to do well so that "you can call it AI" is ridiculous and false. First, no algorithm should be called AI, AI is a term that refers to a scientific field of study, not particular instances of software or particular classes of algorithms. Second, the field of AI describes (and has invented) lots and lots of trivial algorithms that approximate some particular aspect of intelligent-like behavior.
Lots of things that have now branched into separate fields were developed during AI research in e.g. the 1950s - all decision-making studies (including things that are now ubiquitous, such as the minimax algorithm in game theory), planning and scheduling algorithms, etc. - all are subfields of AI. Study of knowledge representation is a subfield of AI; probabilistic reasoning such as Kalman filters is part of AI; automated logic reasoning algorithms are one more narrow subfield of AI, etc.
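The minimax algorithm mentioned above is a good illustration of how simple many of these classic AI results are. A minimal sketch over a hard-coded game tree (the tree shape and leaf payoffs are made up purely for illustration):

```python
def minimax(node, maximizing):
    """Return the game value of a node.

    Leaves are plain numbers (payoffs for the maximizing player);
    internal nodes are lists of child nodes.
    """
    if not isinstance(node, list):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two-ply toy tree: we pick a branch, then the opponent picks the
# leaf that is worst for us within it.
print(minimax([[3, 5], [2, 9]], True))  # -> 3
```

Nobody would call this function "intelligent" on its own, yet it is unambiguously a result of AI research - which is the point being made about AI as a field of study rather than a label for particular software.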
I think what the parent poster means is that for people who don't know better, "neural networks === AI". For people who know a bit more, there is a bunch of other stuff than just neural networks, and neural networks are not some god-sent solution for AI.
The thing with differentiating machine learning and AI is that nothing in the AI world works except machine learning. It's just a bunch of old theories and ideas, none of which have panned out.
With every new discovery ever, the people wanting to exploit it have done whatever was necessary to use people's honest interest in new technology, and their good feelings about human progress, to get money or power.
I don't think there are many definitions of machine learning that claim the models to be intelligent. Most of them limit the term to models that can be built from data.
Learning is a skill that does not necessarily come with an "intelligent" label attached to it.
Have we even defined what 'intelligent' might mean? As in, we had the Turing test as a bar, and we are close to that already. What is intelligence, then? Last I checked, there wasn't a definitive answer to it. We'll need one so that we can label AI as I properly - or maybe we don't care so much... if it's close enough...
Once we start seeing cheaply made, imported yes/no engines (masquerading as AI or knowledge) flooding the market, the definition of intelligence will be lost to marketing anyway (unlimited data, superfood, etc.)
A predictive model, whether created by ML (regression, SVM, NN, whatever) or something rules-based born out of data analysis, is reliant on quality data, which can be expensive to get. There is also a catch-22 where most of the models that are easy to make aren't practically usable because they're not needed in the first place, like a model that tells you if it's a nice day outside; most people would just take a look at the weather and decide for themselves. On the other hand, a model predicting optimal stocks to buy or a self-driving car model is worth a massive amount, but is also really hard to make. Companies will obviously try to sell bad or cheaply made models and may be successful on a small or niche level, but I think most people will recognize the utility and efficacy of a model based on the difficulty of the task it accomplishes relative to their own ability in that task, regardless of the buzzwords associated with it. However, a lot of powerful modeling libraries made by really smart people are open source, so maybe what I'm saying is moot apart from sourcing the data.
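The "nice day" model really is that trivial - a hypothetical rules-based version (all thresholds invented for illustration) is a handful of lines, which is exactly why nobody needs it:

```python
def nice_day(temp_c, precip_mm, wind_kmh):
    """Toy rules-based 'model': is it a nice day outside?

    Thresholds are arbitrary illustrative choices, not derived
    from any data.
    """
    return 15 <= temp_c <= 28 and precip_mm == 0 and wind_kmh < 25

print(nice_day(22, 0, 10))  # -> True  (mild, dry, calm)
print(nice_day(2, 5, 40))   # -> False (cold, wet, windy)
```

The hard and valuable models differ not in kind but in how many such decision boundaries they encode and how expensive the data is that justifies them.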
> Also as usual, there's still a yawning gap between what domain experts are up to
The best homophone for AI is "beyond be yawned".
Comparative analysis against refined/biased datasets with Kiptronics (knowledge is power electronics/devices) is going to change the world, but spectacular fodder is to be expected.