This got me thinking about the whole “What is AI” issue again. I decided to post my highly opinionated thoughts…
For most of its existence as a “field of knowledge” AI has been shepherded by the Academic community. This same community likes to throw around inside jokes like “If it works, it isn’t AI” and such. If you boil down all the discussions that take place over “what is AI” online and at conventions, a picture emerges along the lines that, to the Academic community, AI is whatever makes an interesting research question. This has the unfortunate side effect that the definition not only changes constantly but that it can also be quite myopic. Expert systems, discussed in the 80’s as a massively successful branch of AI, were kicked out of the field in the 90’s -- at least until recently, when the Academic community figured it hadn’t really exhausted all the research possibilities and the area was reincarnated as Answer Set Programming. Geoffrey Hinton (one of the most influential figures in AI of the last 50 years) stated in a talk he gave at the 2015 AAAI conference, “Unless it has 100,000 attributes it’s not AI, it’s applied statistics”, essentially demoting most of the AI ever researched or developed. I’ve even been involved in discussions over whether Machine Learning is really AI any longer (though everyone seems to agree that Deep Learning is AI). I find it odd and fascinating that a community responsible for developing an area of knowledge is so aggressive at denying everything it achieves. I can’t think of any other field that approaches governance of its domain this way.
Things get more complicated by the fact that AI has different definitions depending on where you are working on it. AI in manufacturing tends to break down into satisfiability, constraint programming, and optimization – none of which are considered ‘fields’ of AI for the most part. In these cases, the pipeline (the process that results from combining all the steps in an AI solution) is what gets referred to as AI, not the individual techniques applied in that process. In the Games industry, AI is the part of the game engine that makes decisions, regardless of which technique you use to achieve that. I actually think those definitions are probably more consistent and more practical.
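To make the “pipeline” framing concrete, here is a minimal sketch of what I mean (the jobs, machines, costs, and constraint are all made-up assumptions, and a real system would use an actual constraint or optimization solver rather than brute force): the whole formulate-filter-optimize process is what gets called AI, not any one technique inside it.

```python
from itertools import product

# All job names, machines, and costs below are made-up assumptions for illustration.
JOBS = ["cut", "weld", "paint"]
MACHINES = ["M1", "M2"]
COST = {
    ("cut", "M1"): 2, ("cut", "M2"): 3,
    ("weld", "M1"): 4, ("weld", "M2"): 2,
    ("paint", "M1"): 3, ("paint", "M2"): 3,
}

def satisfies_constraints(assignment):
    # Constraint step (assumed rule): welding and painting can't share a machine.
    return assignment["weld"] != assignment["paint"]

def total_cost(assignment):
    # Optimization objective: total cost of the job-to-machine assignment.
    return sum(COST[(job, machine)] for job, machine in assignment.items())

def pipeline():
    # The combined process -- enumerate candidates, filter by constraints,
    # pick the cheapest -- is the "AI" here, not the brute-force search inside it.
    candidates = (dict(zip(JOBS, combo)) for combo in product(MACHINES, repeat=len(JOBS)))
    feasible = [a for a in candidates if satisfies_constraints(a)]
    return min(feasible, key=total_cost)

if __name__ == "__main__":
    print(pipeline())  # prints the cheapest feasible job-to-machine assignment
```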
For lack of a better term, I’ve always referred to terms like “AI” with my own made-up phrase: “problem words”. These are words coined to refer to a challenge that practitioners in a field commonly run into, not words created in an effort to make a scientifically precise definition. I lump “Deep Learning”, “Big Data”, and “Cloud Computing” into this category. Because the scope of what we find to be a problem always changes, so does the meaning of the term. Big Data 10 years ago is not what Big Data is today, as an example. These kinds of terms NEVER have a final definition. Lastly, I find it anecdotally interesting that the registration forms for some AI conventions ask “How do you define AI?”