I use & depend upon plenty of products that are built upon AI - GMail spam filtering & categorized inbox, Google image search, YouTube & Netflix recommendations, cheque OCR at my ATM, predictive keyboards on my phone, Amazon's "people also buy with this product" feature, Google translate, computer opponents in games that I play, and all of the signals that feed into Google Search.
The irony is that not one of these bills itself as AI. It's just "a product that works", and the company that produces it is happy to keep the details secret and let users enjoy the product. So you may be right that the term "AI" itself is pure salesmanship. When it starts to work it ceases to be AI.
Also - humans only look like we're fast at picking up new domains because we apply a helluva lot of transfer learning, and most "new" domains aren't actually that different from our previous experiences. Drop a human in an environment where their sensory input is truly novel - say, a sensory deprivation tank where all visual & auditory stimulation is random noise - and they will literally go insane. I've got a 5-month-old and a project where I'm attempting to use AI to parse webpages, and I will bet you that I can teach my computer to read the web before I can teach my kid to do so.
>The irony is that not one of these bills itself as AI. It's just "a product that works"
I think you are on to something. Put differently:
If you need to use the term "AI" to enhance the marketability of the product it is probably because the product sucks.
And employees. Google's embrace of the term "AI" isn't because they need help developing or selling AI-powered products; it's to encourage all the kids to go into computer science and all the existing developers to learn TensorFlow. They can then pick off the best of them as potential employees without having to train them up themselves.
None of the things you mentioned are even close to AI. They’re applied statistics, and they mostly use techniques we’ve known about for decades but have only now found a use case because computing and storage are cheap enough to make them viable.
The recommendation, translation, & image classification algorithms are all done with deep learning; that's considered AI now.
There was a time, not all that long ago, when SVMs, Bayesian networks, and perceptrons were considered AI. That's behind the spam filters, predictive keyboards, and most of the search signals.
There was a time, a bit longer ago, when beam search and A* were considered AI. That's behind the game opponents.
As the linked Wikipedia article says, "AI is whatever we don't know how to do yet." There will be a time (rapidly approaching) when deep learning and robotics are common knowledge among skilled software engineers, and we won't consider them AI either. We'll find something else to call AI then, maybe consciousness or creativity or something.
This is my point: the term AI has always been BS. It was BS when beam search was AI, it was BS when expert systems were AI, and it is equally BS when applied to neural networks. It all comes to the same thing: the 'AI' tools we use are increasingly good function approximators. That's it. It's still reaching the moon by building successively taller ladders.
As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.
And
I left the arena to pursue a more challenging task: reasoning with cause and effect. Many of my AI colleagues are still occupied with uncertainty. There are circles of research that continue to work on diagnosis without worrying about the causal aspects of the problem.
>the 'AI' tools we use are increasingly good function approximators
Nothing in the definition of AI says that AI has to work the same way the human brain does... and as far as that goes, we're probably not 100% sure that, in the end, the brain is anything more than a really good function approximator and some applied statistics.
I would say the canonical definition of AI, to the extent that there is one, is roughly something like "making computers do things that previously only humans could do". If people think "AI is bullshit" I'd say it's because they're applying their own definition to the term, where their definition imposes much more stringent requirements.
This is an interesting comment - where would you draw the line between AI and applied statistics? A lot of AI, which happens to be ML (not saying there is no non-ML AI, just that a significant chunk of the AI being practiced today is ML), also happens to be applied statistics, or to have statistical interpretations.
Also, the fact that something has been around for decades does not make it not AI. For example, the cheque OCR mentioned probably runs off of (or could feasibly run off of) a neural network. I think the parent's comment holds well - not sure about the last line though ...
The line is clear: everything today branded "AI" is just applied statistics. AI is a buzzword. I don't know what the definition of intelligence is, but I have a feeling it doesn't rest anywhere near concepts like function approximation, and that's all even the most sophisticated "AIs" at Google or Facebook or Apple boil down to.
What was not clear from your earlier comment, and is now, is that when you say AI you don't mean AI as practiced by most of academia and industry, but the vision of Artificial General Intelligence (AGI). If so, yes, that's a good point to make. However, it is debatable whether the path of statistical learning won't lead to AGI, whether it is how our brains function, or whether the truth is partly statistical learning and partly something else. The Norvig-Chomsky debate is an example of the arguments on both sides.
I didn't make an earlier comment. You're replying to my one and only comment.
> when you say AI you don't mean AI as is practiced by most of academia and the industry but the vision of Artificial General Intelligence (AGI).
What I actually mean is people practicing what they call "AI" in academia and the industry have co-opted the name to make what they do sound more interesting. First it was called "statistics". Then it was called "pattern matching". Then it was called "machine learning". Now it's called "AI". But it hasn't changed meaningfully through any iteration of these labels.
If you can define a problem rigorously, you've essentially defined a function. So "function approximation" is basically "general problem solving approximation".
I don't really think that characterization is fair. Take GANs, for example: there is no data set of correct input-output pairs for the function that is learned.
Something that actually learns on its own and is not completely stumped when it encounters something new. When it recognizes failure it should go and start learning by itself, i.e. try to get more data, analyze it, and do its own trial and error, so that it actually grows in capabilities (on its own).
I would argue that all of your examples have failed to be anything even remotely resembling AI, just data crunching to fit most use cases. I don't use GMail but I do regularly use Google image search, Translate, YouTube, Netflix and predictive typing via SwiftKey. And IMHO they all suck horribly (SwiftKey still sucks pretty bad after 8 years of learning from me). Google Translate is getting better, and I have recently started using it for a first pass before correcting the mistakes, instead of translating everything by hand. YT/Netflix recommendations are always bullshit. I wish there was a way to say "never show me anything like this ever again", because I often feel like 90% of the recommendations make absolutely no sense. Sometimes I think that someone else must be logged into my account clicking on things just to mess with my recommendations. I usually spend a minimum of 30 minutes searching, often giving up out of frustration (and I always have an IMDB tab open to check details, because all of the IMDB rating plugins for Firefox stopped working). Maybe I'm an edge case living outside the U.S.? Are their algorithms only tuned for English-speaking countries?
The most creative, intelligent and least frustrating "AI" I've ever encountered was in some games, such as Dota 2 or, many years ago, F.E.A.R. Any frustration they caused came only from their unpredictability, even after hundreds of hours of playtime. YouTube and Netflix AI, after hundreds or thousands of hours invested, are also very unpredictable and frustrating, but that's the opposite of the experience I am looking for in those situations.
I completely have to agree. YouTube has so much content, far more than Spotify, Vimeo and everybody else in the space, which is why I use it. But the recommendations are offensively bad. YT is only good at "recommending" stuff I've already watched or listened to. What's the point?
Translate can be useful at times... like once a year, when I want to comprehend a Japanese website; usually I close the tab after two minutes.
I used GMail for many years and still do to some degree, but I'm moving to a different mail provider. GMail's spam filter is great!
Not sure - for the past two years it has been acceptable to make no distinction between ML and AI. ML appears smart because of bazillions of training samples, and I'm very impressed when I hear about that. But yeah, at the end of the day it doesn't exactly have the biggest impact on me... ;)
>So you may be right that the term "AI" itself is pure salesmanship. When it starts to work it ceases to be AI.
https://en.wikipedia.org/wiki/AI_effect