As usual, the monks know what the laity doesn't, and aren't particularly afraid to talk about it. Also as usual, there's still a yawning gap between what domain experts are up to and what non-domain experts think they're up to.
That this is true in AI is not surprising; humility comes from knowing that my domain expertise in some fields (and thus a clearer picture of 'what's really going on') is guaranteed to be crippled in other fields. Knowing that being in some knowledge in-groups requires me to also be in some knowledge out-groups is the beginning of a sane approach to the world.
The author is just correcting a misnomer. It is not really accurate to say that machine learning is intelligent at all, so why label it as such? It's confusing for everyone and leads to great misunderstandings.
Machine learning is a particular narrow result of studying the wider field of artificial intelligence. Just like expert systems, or RDF knowledge representation, or first-order logic reasoners, or planning systems - none of them are 'intelligent', but all of them are research results coming from (and being studied in) the discipline of studying how intelligence works and how something like it can be approached artificially.
There's lots in the field of AI that is not 'cognitive automation' - many currently popular things and use cases are, but that's not correcting a misnomer, that's a separate term for a separate (and more narrow) thing - even if that narrower thing constitutes the most relevant and most useful part of current AI research.
A classic definition of intelligence (Legg & Hutter) is "Intelligence measures an agent's ability to achieve goals in a wide range of environments". That's a worthwhile goal to study even if (obviously) our artificial systems are not yet even close to human level by that criterion; and while it is roughly in the same direction as 'cognitive automation', it's less limited and not entirely the same.
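For completeness, the same Legg & Hutter paper also formalises this as a 'universal intelligence' measure. Transcribed from memory, so treat the notation as approximate:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the space of computable environments, K(mu) is the Kolmogorov complexity of environment mu, and V_mu^pi is the expected cumulative reward agent pi achieves in mu. The 2^{-K(mu)} weighting is what makes 'a wide range of environments' precise: simpler environments carry more weight, but none are excluded.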
For example, 'cognitive automation' pretty much assumes a fixed task to execute/automate, and excludes all the nuances of agentive behavior and motivation, but these are important subtopics in the field of AI.
But I am willing to concede that very many people are explicitly working only on the subfield of 'cognitive automation' and that it would be clearer if these people (but not all AI researchers) explicitly said so.
> Machine learning is a particular narrow result of studying the wider field of artificial intelligence.
I beg to differ, at least as far as terms go now. Neural networks lived in the "field" of machine learning along with kernel machines and miscellaneous prediction systems circa the early 2000s. Neural networks today are known as AI because ... why? Basically, the histories I've read and remember say that the only difference is that neural networks are now successful enough that they don't have to hide behind a more "narrow" term - or alternately, the hype train now prefers a more ambitious term. I mean, the Machine Learning subreddit is one go-to place for actual researchers to discuss neural nets. Everyone now talks about these as AI because the terms have essentially merged.
> A classic definition of intelligence (Legg&Hutter) is "Intelligence measures an agent’s ability to achieve goals in a wide range of environments".
Machine learning mostly became AI through neural nets looking really good - but none of that involved them becoming more oriented to goals; if anything, less so. It was far more a case of: high-dimensional curve fitting can actually get you a whole lot, and when you do it well, you can call it AI.
What do you mean by "today are known as AI" and "became AI" ?
Neural networks have always been part of AI, and machine learning has always been a subfield of AI; all of these things have been terms within the field of AI since the day they were invented. There was never a single day in history when they were not part of the AI field.
Neural networks were part of the AI field also back when neural nets were not looking really good - e.g. Minsky's 1969 book "Perceptrons", which was a description of the neural networks of the time and a big critique of their limitations - that was an AI publication by an AI researcher on AI topics.
Your implication that an algorithm needs to do well so that "you can call it AI" is ridiculous and false. First, no algorithm should be called AI, AI is a term that refers to a scientific field of study, not particular instances of software or particular classes of algorithms. Second, the field of AI describes (and has invented) lots and lots of trivial algorithms that approximate some particular aspect of intelligent-like behavior.
Lots of things that have now branched into separate fields were developed during AI research in e.g. the 1950s - all decision-making studies (including things that are now ubiquitous, such as the minimax algorithm in game theory), planning and scheduling algorithms, etc. are subfields of AI. The study of knowledge representation is a subfield of AI; probabilistic reasoning such as Kalman filters is part of AI; automated logic reasoning algorithms are one more narrow subfield of AI; and so on.
I think what the parent poster means is that for people who don't know better, "neural networks === AI". For people who know a bit more, there is a bunch of other stuff besides neural networks, and neural networks are not some god-sent solution for AI.
The thing with differentiating machine learning and AI is that nothing in the AI world works except machine learning. It's just a bunch of old theories and ideas, none of which have panned out.
With every new discovery ever, people wanting to exploit it have done whatever was necessary to use people's honest interest in new technology and good feelings about human progress to get money or power.
I don't think there are many definitions of machine learning that claim the models to be intelligent. Most of them limit the term to models that can be built from data.
Learning is a skill that doesn't necessarily come with an "intelligent" label attached to it.
Have we even defined what 'intelligent' might mean? As in, we had the Turing test as a bar and we are close to that already. What is intelligence then? Last I checked, there wasn't a definitive answer to it. We'll need one so that we can properly label AI as 'I' - or maybe we don't care so much... if it's close enough...
Once we start seeing cheaply made, imported yes/no engines (masquerading as AI or knowledge) flooding the market, the definition of intelligence will be lost on marketing anyways (unlimited data, superfood, etc)
A predictive model, whether created by ML (regression, SVM, NN, whatever) or something rules-based born out of data analysis is reliant on quality data, which can be expensive to get. There is also a catch 22 where most of the models that are easy to make aren't practically usable because they're not needed in the first place, like a model that tells you if it's a nice day outside; most people would probably take a look at the weather and decide for themselves. On the other hand, a model predicting optimal stocks to buy or self-driving car models are worth a massive amount, but are also really hard to make. Companies will obviously try to sell bad or cheaply made models and may be successful on a small or niche level, but I think most people will recognize the utility and efficacy of a model based on the difficulty of the task it accomplishes relative to their own ability in that task, regardless of buzzwords associated with it. However, a lot of powerful modeling libraries made by really smart people are open source, so maybe what I'm saying is moot apart from sourcing the data.
> Also as usual, there's still a yawning gap between what domain experts are up to
The best homophone for AI is "beyond be yawned".
Comparative analysis against refined/biased datasets with Kiptronics (knowledge is power electronics/devices) is going to change the world, but spectacular fodder is to be expected.
I think it is also important to remember that intelligence isn't clearly defined. It seems a lot of people interpret it in different ways and the definition is closer to pornography (I know it when I see it).
I often see two camps: one that defines intelligence to be more human-like, limiting it really to just cetaceans and hominids, maybe including ravens. The other group gives too vague a definition.
Personally, I do not see a problem with having lots of bins. I don't think many disagree that intelligence is a continuum. So why restrict it to very high-level bins? Because that's the vernacular usage? I for one vote for the many-bin, continuum approach. On that view you could say that ML has some extremely low-level form of intelligence, though I would generally put it lower than that of an ant. In that respect, a multi-agent system with intelligence surpassing that of ants would, I believe, be extremely impressive.
I don't think it's a quantitative issue (the level of the bin). It's a qualitative issue. When people say that ML lacks intelligence what they're saying is that it lacks robustness, common sense, agent like behaviour, the ability to reason and so on.
Intelligence (in humans or animals) does not appear to be just data driven pattern matching. I think we can say this with some confidence given that even the fanciest ML algorithm still hopelessly sucks at performing tasks that are trivial even for barely intelligent animals.
> Intelligence (in humans or animals) does not appear to be just data driven pattern matching. I think we can say this with some confidence given that even the fanciest ML algorithm still hopelessly sucks at performing tasks that are trivial even for barely intelligent animals.
Do you mind expanding on this? A lot of the tasks primitive animals perform can be replicated by Deep RL algorithms given the same environment.
Humans adapt to sensor information, memory and a reward/penalty system just like RL. We're just much more advanced and have sophisticated systems to sense and act.
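To make the "sense, remember, act on reward/penalty" loop concrete, here is a minimal tabular Q-learning sketch on a made-up 5-state corridor (every name and number here is illustrative, not taken from any particular system):

    import random

    # Hypothetical toy setup: states 0..4 in a corridor, reward only at state 4.
    N_STATES, GOAL = 5, 4
    ACTIONS = [-1, +1]                      # step left or right
    alpha, gamma, eps = 0.1, 0.9, 0.1       # learning rate, discount, exploration

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    for episode in range(500):
        s = 0
        while s != GOAL:
            # "sense and act": epsilon-greedy choice over the value table
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s_next == GOAL else 0.0      # the reward/penalty signal
            # "memory": temporal-difference update of the value table
            best_next = max(Q[(s_next, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next

    # After training, the greedy policy heads right from every state.
    print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})

Crude as it is, this is the same shape of loop that the fancier Deep RL systems scale up.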
>A lot of the tasks primitive animals perform can be replicated by Deep RL algorithms *given the same environment*.
Highlighted the last part because this is the key difference. Humans and animals can navigate unknown and unstructured and open-ended environments. You can identify a dangerous predator without having to see ten thousand images of mauled bodies.
Humans and animals can generalise in a way that is robust and independent of 'data' because they understand what they see. RL algorithms have no understanding of the world. If you let an RL system play Breakout but you resize the paddle by five pixels and tilt it by 2 degrees, it cannot play the game.
Daniel Dennett helped popularise the notion of Umwelt, which loosely translates as an organism's self-centered world as it subjectively perceives it, filled with meaning and with a notion of how the agent relates to it. It's distinct from an objective 'environment' that everyone shares.
Machines lack this notion, they have no real concept of anything, even the fanciest algorithms. Which is also why conversational agents have only really made advances on one front, which is understanding sound and turning text into nice soundwaves. They have made virtually no progress in understanding irony, or ambiguity or anything that requires having an understanding of the human Umwelt. We don't even have any idea how the mind constructs this sort of interior representation of the world at all, and my prediction is we're not going to if we continue to talk about layers of neurons instead of talking about what the possible structure of a human or animal mind is.
I think our Umwelt is formed by our senses, especially the touch: our skin allows us to form a model, in our brain, of our body, and its movement abilities.
I think that the brain doesn't simply do pattern matching, it also forms models of things based on external stimuli, and then does pattern matching on properties of those models.
And this model constitutes a small 'universe' inside our brains, with us at the center.
And that's how consciousness emerges: we know of an entity that is us because our brain has an object in it that represents us.
In your Breakout example, there is really no formation of an object 'paddle' inside the 'AI' that plays the game, and hence if the paddle is changed even a little, the algorithm doesn't know how to handle it. Whereas, in our brain, we see the pixels on the screen as an actual paddle representation, and hence we can easily play any version of Breakout no matter how the paddle looks.
EDIT:
Same thing with the Vision sense: seeing things allows us to build 3d models of things in our brain. And then we can look at a photograph and recognize objects, because we have these models in our brain.
Well, what if you trained on the unbelievable amount of data that enters the human brain?
Like, say inputting years of high definition video, audio, proprioception, introspection, debug and error logs, data from a bunch of other sensors, etc. Then put that in a really tight loop with high precision motion and audio output devices, and keep learning. Also do it on a really fast computer with lots of memory. Also make the code itself an output.
If that's not enough, you could always try self-replicating to try to create more successful versions over million-year timescales.
That's trivially true in the sense that this is how we can only assume humans and animals came to be, but I don't think there's any guarantee that this can be replicated in a silicon-based software architecture, which is very different from analog and chemical biological organisms. Already today, energy and computational costs are high, with model computation costs going into five or six figures even in just one domain.
But more importantly, I think the problem with this approach is that it's essentially a hail mary of sorts with potentially zero scientific insight into how the brain and the mind works. It's a little bit like behaviourism before the cognitive revolution with AI models being the equivalent of a Skinner box.
I don't know the history of it and am not an expert in the field, but it seems to me that it's valid to call the things ML can already do "intelligence" in a generalized sense, and that there is nothing categorically different between that and human intelligence, it's just a matter of how complicated humans are.
That seems like a problem you can sort of throw hardware at for a while until it gets good enough to help you figure out how to make something smarter.
I think there is little evidence to suggest that neural networks and human minds behave in the same way modulo complexity.
In science there is a tendency to come up with models of the world - simplifications which we can observe and quantify - and then fall into the trap of thinking that these models explain the world.
While neural networks are inspired by biological neural connections between synapses and neurons, the converse - that neural networks are therefore intelligent - does not hold.
I'm not suggesting NNs should be considered intelligent because of their inspiration from biology. Just that they should be considered intelligent because of what they are currently capable of (though that is way less intelligent than humans or animals of course).
But it seems like there is a plausible path to increasing their "intelligence" by dumping more data and hardware at them.
Like GPT-2 shows quite a surprising amount of structural complexity even though it's very dumb in other ways – it feels like if you could pump a million times more data into it you'd get something that seemed really quite intelligent.
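If you want to poke at that structural complexity yourself, a quick sampling sketch with the Hugging Face transformers library (assuming it's installed; the model name and sampling parameters are just the stock defaults I'd reach for, nothing canonical) looks something like:

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    inputs = tokenizer("The field of artificial intelligence", return_tensors="pt")
    # Sampled continuations are usually locally fluent and well-structured,
    # even when the content is factually shaky.
    output = model.generate(**inputs, max_length=60, do_sample=True, top_k=50)
    print(tokenizer.decode(output[0], skip_special_tokens=True))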
I see a lot of AI engineers who seem concerned with this particular issue, which I never really understand.
Is it because of a perception that most regular people are likely overestimating the speed of which AI is going to overtake human intelligence? Or more about corp management wanting miracles that aren't possible?
Why does this matter and always seem to be talked about?
It's because this kind of hype inevitably leads to a trough of disillusionment -- the methods we collectively call "AI" today are never going to lead to a general-purpose artificial intelligence. People are disappointed we don't have self-driving cars yet, but it's not clear whether that problem domain is constrained enough for deep neural networks to solve.
What we have developed are ways to automate complex tasks within a constrained input domain that can be easily quantified. It seems like magic, which leads people to say that it's "AI" but in reality it's just a complex automation built through reinforcement techniques that leverage some clever math tricks. Throw an unexpected input or new set of circumstances at the model and you get interesting results.
It's not a sense that people are overestimating the speed with which AI is going to overtake human intelligence -- it's that the techniques we're using today that we call "AI" are not capable of doing anything of the sort.
This. Working in big corp and government I have seen how far this disillusionment can take an organization down the wrong road. How to articulate complex technical/scientific topics to a bureaucracy, I am learning, is a very valuable and much-needed skill amongst engineers.
It's a fine line you have to walk. They usually are looking for a person who will tell them what they want to hear, so it's usually a matter of starting with "the art of the possible" (aka a bunch of bullshit they heard on NPR) and working them over to something more realistic.
I've found it helps if you can frame it in the context of the other options (i.e. agree with where they want to go and present multiple ways to get there) they're more receptive. Leaders know about these hype cycles too, but they often have to play along for political reasons and they'll be thankful if you work with them rather than against them.
Because there's a history of overhyping ML/AI (whatever you want to call it) leading to AI winters. Winter in this case being kind of like a recession in economic terms - most research funding dries up, etc. We essentially had one of those winters from the late 80s until about a dozen years ago. A lot of laymen now think of AI as being "magic" that can do anything and that's not a good thing when the reality turns out to be different.
At this point I don't think we'll see an AI winter as deep as some of the previous ones. But we could certainly see an AI Fall.
>> Because there's a history of overhyping ML/AI (whatever you want to call it) leading to AI winters.
Note that past AI winters have not occurred because of overhyping machine learning. They occurred because of overhyping of symbolic AI that had nothing to do with machine learning. For example, the last AI winter at the end of the '80s happened because of the overhyping of expert systems- which of course are not machine learning systems.
Machine learning is not all, not even most, of AI, historically. It's the dominant trend right now, but it was not the dominant trend in the past. The dominant trend until the 1980's was symbolic reasoning.
But symbolic reasoning mostly worked, did it not? However, its Achilles heel was that for it to be useful, it was necessary to distill a lot of domain knowledge into a format that can be processed by an expert system. That means writing tens of thousands of rows of "if this then that".
Machine learning is different in that it is more amenable to distilling those rules from the data automatically. It is successful where symbolic reasoning failed because it can go from the raw data. A good portion of machine learning research is in new ways to preprocess and format data into a structure that can be further consumed by linear algebra, which turns out to be a lot easier and more practical than figuring out a huge database of sensible first-order predicate logic statements.
If ML techniques can be used to feed symbolic systems, the latter would show promise again, which is already happening in recent trends in causal inference and graph networks. The marriage of these two fields is inevitable, and has already started.
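As a toy illustration of the contrast described above (entirely made-up rule, data and thresholds, just to show the shape of the two approaches):

    # Symbolic route vs. ML route, sketched with scikit-learn.
    from sklearn.tree import DecisionTreeClassifier

    # Hand-coded domain knowledge, expert-system style.
    def nice_day_rule(temp_c, humidity_pct):
        return temp_c > 25 and humidity_pct < 50

    # Learned equivalent: distill a similar boundary from labelled examples.
    X = [[30, 30], [27, 45], [26, 40], [20, 70], [24, 55], [18, 80]]
    y = [1, 1, 1, 0, 0, 0]
    clf = DecisionTreeClassifier().fit(X, y)

    print(nice_day_rule(28, 35), clf.predict([[28, 35]])[0])

The point being that the second route scales to messy, high-dimensional raw data where writing the rules by hand does not.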
Partly because machine learning is inherently probabilistic. Lots of room for errors and hype with systems that never claim to give the right answer all of the time!
There was a neural net popularity surge in the late 80s, early 90s. Of course, the hardware wasn't there yet to be able to deliver on the promises. I was in a Goodwill book section about a year ago and there were a couple of NN books from that era on sale for $3, one titled "Apprentices of Wonder: Inside the Neural Network Revolution" from 1990 and the other was for programmers and included C code for a NN to predict the stock market from 1989. Anyway, that all had died out by about '92 or '93 and NNs were a pretty dead academic topic until about 2005 or so when they figured out that GPUs could be used to accelerate them.
The name is overhyped and pretentious by itself, and history bears this out. Who cares if it's an AI fall or winter if it's an AI stupid, because of all the credulous students.
Personally, when a lay person asks what I do, I like telling people I work on "Artificial Intelligence software" because it's the most accurate term that doesn't (a) get an immediate request to implement their app idea for them and (b) require explaining what machine learning / deep learning is.
But beyond that I hate the term within the industry because I think artificial intelligence gets equated with a Jarvis-like general AI that will talk to you like a superhuman servant. I get the desire to better define the current state of the art. But for most people, I agree it's going to seem like pedantry.
> artificial intelligence gets equated with a Jarvis-like general AI that will talk to you like a superhuman servant
To be fair, for a lot of researchers, that is the ultimate end goal, even for those who admit we are not even close to it. I for one first got interested in AI from an 80s movie (can't remember which) with a character who talked to his computer, which talked back. Since those early years I haven't spent even one second working on actual AGI, seeing the plethora of subgoals needed to get there, but thinking about it... plenty. That dream is a driving force behind more ML/AI researchers than you might think, particularly in the RL community, I would guess.
There’s already a term you can use. “Statistical learning”. There’s even a well known important book with that title: Elements of Statistical Learning.
It seems analogous to Searle's "Chinese Room" argument: automated responses to predefined stimuli isn't the same as "intelligence" or "understanding".
The OP suggests modern AI is a fancy way of teaching systems to effectively hardcode or automate their behavior themselves.
I'm not sure why that matters, as long as the results are what we aim for. It's not like most AI researchers are trying to create sentient artificial life-forms.
It matters because the hard-coded behavior is brittle and often doesn't do exactly what we want (or think).
For example, GPT-2 has been ascribed nearly magical powers: it's a knowledge base, it can play chess, it does calculus, it's a dessert topping AND a floor wax!
When you look closer, however, it doesn't do any of those things particularly well. It can regurgitate something that looks like a true fact--or its negation with equal probability. It doesn't quite know the rules of chess. It needs a solver to check that the solution to an integral is, in fact, a solution.
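For what it's worth, "needs a solver to check" can be as simple as differentiating the proposed answer and comparing it to the integrand, e.g. with sympy (assuming it's available; the integrand and candidate here are my own toy example):

    import sympy as sp

    x = sp.symbols('x')
    integrand = x * sp.exp(x)
    candidate = (x - 1) * sp.exp(x)     # e.g. an answer produced by a language model

    # The check: d/dx(candidate) - integrand should simplify to zero.
    print(sp.simplify(sp.diff(candidate, x) - integrand) == 0)   # True for this candidate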
All of those caveats apply to human intelligence, but to a lesser degree. Kids can play chess without exactly knowing the rules, and come on, everybody needs to check their integrals.
But kids can eventually learn proper algorithms for chess and integrals. DNNs cannot.
In a sense, deep learning is like slide rules - you can squeeze some problem domains into giant, funny looking lookup tables, but generalize it does not.
Andrew Yang is a serious contender for the US presidency whose entire platform rests on assumptions about AI. He wants to fundamentally reshape welfare in the country and implement an entirely new tax. Thinking clearly about AI is therefore very important, as it is having real and substantial political implications.
I remember him a year ago saying stuff that wasn't particularly mainstream, like warning about fast food cashiers being replaced by kiosks, malls closing due to competition with Amazon, call center workers being automated, etc.
These are all things that are coming, I have peers working on some of them, but they aren't particularly mainstream, even though the most accessible jobs in the economy fall under those categories.
The thing is, technology to replace fast food cashiers or call center employees doesn't even have to be good, it just has to be really cheap. The companies making the decision to put those systems in place are not the end users who will be forced to interface with them. Human customers will be forced to modify their speech to be understood by mediocre telephone agent programs.
So yes, Yang is right that automation is coming for millions of American jobs, and sooner than most people might think.
He doesn't think it will solve them, he thinks it is going to cause them, and that we need to be ready with solutions.
Take Universal Basic Income. He is predicting far more jobs are going to be automated in the near future than most people expect, and something like UBI will be needed to keep the people out of work from starving or rioting.
The problem is not that we have a problem. The problem is that we have problems. So the solution is not finding a solution to a problem. The solution is finding a metasolution that is valid across time and tribes. Bam! That's the challenge of being an intelligent being in this universe. There's no way we can automate that. Mimicking a portion of it and calling it intelligence doesn't make it really intelligent.
It's a very very interesting question. Personally I believe that "automaton" is almost the opposite of "being". But that's just me, not science or other authority. Certainly, somewhere between virus and human something comes into being (no pun intended.) I don't know of any non-metaphysical argument that we couldn't find some other way to create non-biological general AI.
I think we could genetically engineer human DNA to create wetware G"A"I but I put the "artificial" in quotes to indicate that I'm not saying whether that would count as AI or not. I know of a few efforts to create "Daleks" out of human brain organoids, but I don't think anyone has gone beyond the speculative/hype stage with it so far.
Definition of cognitive
1 : of, relating to, being, or involving conscious intellectual activity (such as thinking, reasoning, or remembering)
2 : based on or capable of being reduced to empirical factual knowledge
Using "cognitive" instead of "intelligence" puts the emphasis on data processing rather that adaptability, which may be a bit more in line with how things are done today. However, it doesn't addresses the core of the debate. The usual "[technology] isn't [AI/cognitive automation] because it can't do [thing humans do], it is just [thing computers do]". Both terms relate to consciousness, and are generally considered fundamentally human qualities.
I think there is simply no way out of that debate. Maybe use a term that sounds completely unrelated to human activity, maybe something like "Big Data Statistical Matching".
Intelligence doesn't have a lot of scientific ground either. It's pretty hard to define what intelligence is, or at least have a scientific definition that is precise enough. The Turing Test is only a measure, it doesn't help to reach a definition.
Practical research will always hit a ceiling if scientists cannot try to define what they're looking for.
Even machine learning is not a good definition. There are other attempts, like "sophisticated statistics" or "statistical prediction".
There has been a lot of debate about the state of AI in recent weeks on Twitter (see #AIDebate). A lot of it was about naming. It occurred to me that very opinionated people had no idea what people in the community are actually doing, and so the whole exercise seemed like a learning experience for them, which seems to be a good thing.
The goals of AI (which Google Trends classifies as a "field of study" - I think that captures it quite well) haven't changed in decades - to reverse engineer the miracle of human cognition. A certain number of people (like the teams of Yoshua Bengio or Demis Hassabis) have the clear mission to work on just that. The progress in this area is much slower than the perception of the last 5-10 years would suggest. It was just that work from the 90s and 2000s was put to the test and quickly outperformed other approaches - symbolic or what we now call "classic machine learning" (e.g. in speech recognition, image classification/detection, machine translation, information retrieval). All these areas had important and valuable applications in industry and have sucked up a lot of money.
But this is only a tiny part of what human cognition entails. Areas around memory, reasoning, consciousness etc. are completely unsolved. Where are we on a scale of 0 to 1000 of solving the problem? Perhaps somewhere between 20 and 50, nobody knows. AI is a north star, and it is a weird development that people have started to call it “AI” again (it felt totally weird about 3-4 years ago when this happened).
So, I think the field is still rightly called "AI". Call the current state of it "system 1", "differentiable programming", "deep learning" or whatever.
Well said. The definition of intelligence is bastardized for virtually all current AI applications. They are glorified statistical heuristics / stochastic descent as has been mentioned before. The key to approaching actual intelligence as we know it, will be a system that can dynamically model its environment and actors in it, since even insects are able to do this to some extent.
When did Machine Learning become Artificial General Intelligence?
When did SVMs become AI?
I'm going to take the dissenting viewpoint here. I think AI as it is being sold today (e.g. the Deep Learning).. is bullshit. It's the new snake oil.
Everyone is pouring in all this money because of FOMO (Fear of Missing Out).
Yes, it's producing some fancy new toys. Beating the best human player at Go. Or doing some facial recognition. Or winning at StarCraft.
But I don't even see the point at mastering Chess past a certain level, and I'm certainly not going to bother with mastering a RTS game like StarCraft.
The scary thing is if some military planner thinks the StarCraft AI is smart enough to put on military weapons systems and use to hunt down other humans.
Now, if we keep AI to be for these constrained things, then yes, it can produce more toys and products, that can be sold. It's the next evolution in smart products. Corporate America can keep cranking out new and evolved products to sell, and slap an AI sticker on it.
And have you noticed? Everyone is slapping an AI sticker on everything. It's like Microsoft and the .Net branding, or Sun and the Java branding all over again. But this time, everyone is calling their little algorithm, AI.
But beyond that, it's just another gimmick.
Deep Learning is an advanced form of OCR. Facial Recognition is an advanced form of OCR of the face. Do we consider OCR to be AI these days? No, we don't. We just think of it as a wonky pattern recognition engine, that half the time doesn't work, and the other half is frustrating enough, that we just type it out ourselves. In fact, OCR uses a neural network algorithm.
It's not an evolution that we need, in order to achieve AI. It's a revolution. And nothing I've seen so far, has convinced me that it can be achieved. In the meanwhile, if you're a newly certified 'AI Expert', then cash in as much as you can. But.. Beware. Winter is coming.
It’s quite clear to me what ML is - solving classification and regression problems. There are some fuzzy edges, but that’s true of any discipline. Maybe you invoke Mitchell’s definition and say “well, it’s improving on a task” (as many introductions to ML do) but that’s completely out of step with what people actually treat as ML.
There are lots of interesting “learning automation” areas that we’re neglecting - symbolic reasoning being the glaring one for me.
AI just seems like a nonsense term to me. Maybe it’d be better if we stopped using it.
Following the recent "AI Debate" between Yoshua Bengio and Gary Marcus [0], there was a lot of discussion about the exact definition (or redefinition even, as some argued) of some labels like "deep learning" and "symbol" (what do we mean exactly by these?), I find that it is quite relevant to this discussion.
I don't really agree, and think the misnomer should be applied in the opposite direction: AI should be called 'adaptive algorithms' and it should be just another tool in the toolbox of CS people.
We're not doing anything that we were not before.
There is no new paradigm shift. There is no AI. There's just a slightly new approach to solving problems. That's it. There are some really nice improvements in computer vision ... and a few other things ...
... but all this talk of 'intelligence' etc. should be brushed aside, it's misleading to everyone.
There will be no 'general AI' with our current approaches for a whole variety of reasons.
I'm embarrassed at how so many intelligent colleagues drink the kool-aid on this.
Take classical ML: it was hyped for a while, now it's not as exciting as 'Deep Learning'. Well, in a few years, I think DL will be there as well: just a tool in the toolbox.
>> Our field isn't quite "artificial intelligence"
True, but so what? We call it AI and that's that, really. We've been calling it that for 70 years now and it's never been a problem.
And let's be absolutely clear that it's not the _name_ that's confusing the public but the way that industry luminaries promise autonomous cars and robotic maids in the next few years, or the way that the technology press -the technology press- can't get its shit together to figure out the difference between "machine learning", "deep learning" and "AI" as fields of research and as category labels. Of course the lay public is going to be confused if people who are paid to elucidate complex concepts make a mess of it.
> "True, but so what? We call it AI and that's that, really. We've been calling it that for 70 years now and it's never been a problem."
That isn't even ... true. AI became "machine learning" in the late 90s/early 2000s and that change happened because the chorus of criticism of "artificial intelligence" had become extremely loud and a less ambitious term served as a refuge.
AI was renamed into many things in the '80s and '90s, for example "Intelligent Systems" or "Adaptive Systems" etc, and that indeed was done to dissociate research from the bad rep that had accrued for AI. But "machine learning" has been the name of a sub-field of AI since the 1950's and it's never stood for the whole, at least not in conferences, papers or any kind of activity of the field.
For example, two of the (still) major conferences in the field are AAAI and IJCAI: the conference of the "Association for the Advancement of Artificial Intelligence" and the "International Joint Conference on Artificial Intelligence". Neither of those is in any way, shape or form a conference for machine learning only, and neither uses machine learning as a byname for AI. By contrast, machine learning has its own journal(s, actually) and there are specific conferences dedicated to machine learning and deep learning (NeurIPS and ICLR).
Additionally, there are many sub-fields of AI that are not machine learning, in name or function: intelligent agents, classical planning, reasoning, knowledge engineering etc etc.
The only confusion between "AI" and "machine learning" exists in the minds of tech journalists and the people who get their AI news exclusively from the tech press.
P.S. As a side note, the name for what the tech press is doing, referring to the field of AI as "machine learning", is "synecdoche": naming the whole by the name of the part.
Some people started saying things like that, more so around 2013, but all along many people have been working on topics like MAS, answer sets, causal logic and other stuff.
At that time the big trend was actually rebranding maimed logical inference as The Semantic Web.
One thing I find strange is how much we emphasize the artificial nature of the intelligence. AI and automation always occurs in the context of human processes. Nothing is truly autonomous, so why design it as if human involvement is a failure? We can easily design artifacts to enhance human intelligence or team intelligence. Why the focus on the machine part and not the overall system that functionally accomplishes the desired work?
> One thing I find strange is how much we emphasize the artificial nature of the intelligence.
We really don't know what intelligence (sans qualifications) is. AI has been a term for the effort to emulate what we roughly think of as "intelligent" behavior. It's far from successful so far, and the lack of a "theory of intelligence" is probably part of that. But it's pretty clear that what "AI" researchers and systems are doing now is far from intelligence.
> AI and automation always occurs in the context of human processes. Nothing is truly autonomous, so why design it as if human involvement is a failure?
This argument makes as much sense as "we'll never exceed the speed of light, why act like faster transportation matters". An automated factory still requires some maintenance, but its creation certainly is significant.
> We can easily design artifacts to enhance human intelligence or team intelligence. Why the focus on the machine part and not the overall system that functionally accomplishes the desired work?
Both approaches matter and since there's really nothing keeping people from doing both of these, people pursue each separately. Moreover, I'd say AI research could do well to cross-pollinate with human-computer interaction theory.
But overall, you seem to just not understand why automation matters - automation has brought vast productivity gains in a variety of fields. It may or may not be possible in further fields, but if it is, it will transform the world equivalently.
FWIW, I think that AI offers to offload thinking (whether it delivers or not is another thing) while IA appeals to people who want to improve their own intelligence. Maybe I'm too cynical, but the former seems more popular than the latter.
"artificial" and "synthetic" aren't exactly synonyms in my mind. If I synthesize glucose, there is nothing artificial about it. It just didn't come from a process developed by evolution. Conversely, artificial leather is nothing like real leather.
But again, it's the "intelligence" part that's the misnomer. Except for John Carmack, no one's trying to invent general intelligence. Every single bit of work is merely automating tasks that, when performed by humans, require intelligence... except that too is a misnomer, because as humans we literally can't do anything, no matter how mundane, without it "requiring intelligence".
What is? I assume you mean machine learning? OK... What about one-shot learning like Lake and Tenenbaum's BPL? What about optimal resource allocation in auctions? Is this recognising patterns in event spaces larger than the number of atoms in the universe?
nobody likes "artificial intelligence". laypeople are scared of it. the media blames it for all evils. it brings connotations of frankenstein, arrogant atheism, etc. it's reasonable to want to hide behind a different term.
Why would a non-human intelligence necessarily have a drive to “shape their environment?” Maybe a non-human intelligence would discover the inevitable end of the habitable universe and opt to just do nothing?
Nobody's going to pay for AWS hours for a lazy robot. They'll keep changing it until it does something. The human drive, which is not essentially rational, will give birth to the machine drive, which won't be rational either.
I hear this a lot: "Wellll, Machine Learning isn't TRUE artificial intelligence..."
Seriously people: are you THAT insecure? I realize it is a cutthroat hiring market and companies are stuffing extra zeroes onto signing bonuses to get anyone with an AI background, so maybe there is lots of jealousy and FOMO. I dunno, but it seems awfully gatekeep-y all of a sudden.