So my perspective is that 2020:AI is as 2000:Internet. In 2000, the idea that all commerce is going to be on the Internet was obviously true, just as obviously true as the fact that all of the companies that IPOed at the time had no way of getting there (except for Amazon). AI one day is going to be the foundation of successful money-making products and services, but AlexNet and GPT-3 are not even stepping stones in that direction. Self-driving cars are a distraction like virtual worlds.
It's entirely possible, but also AI is older than the internet, and Google was the first big ML company, gaining their success as early as '99. It seems like ML has a longer adoption curve to it, moving slower.
Also, AI is anything cutting edge. The fill tool in Paint was once considered AI, so AI will continue on as long as we have new tech.
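To make that concrete: the Paint fill tool is just a flood fill, i.e. breadth-first search over pixels. A minimal sketch (toy 3x3 "image" of ints, not Paint's actual code):

```python
from collections import deque

# Flood fill -- the Paint "bucket" tool -- is breadth-first search
# over pixels; once branded intelligent, now a textbook routine.
def flood_fill(img, r, c, new):
    old = img[r][c]
    if old == new:
        return img
    q = deque([(r, c)])
    while q:
        r, c = q.popleft()
        if 0 <= r < len(img) and 0 <= c < len(img[0]) and img[r][c] == old:
            img[r][c] = new  # recolor, then expand to the 4 neighbors
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return img

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 1]]
flood_fill(img, 0, 0, 2)
print(img[1][0], img[2][2])  # prints "2 1": connected 0s recolored, 1s untouched
```

No data, no training, no parameters: what counts as "AI" is mostly a function of when you ask.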
It would be AI by the standards of A*, maze solvers, and similar. It isn't ML and isn't related to that branch of AI.
Should graph search be considered AI? This gets at the root of the issue: how do we decide whether something is AI, and to what extent has that definition shifted as some problems became trivial or commonplace? Machine learning is a bit easier, since you can draw a much stronger boundary around the need to teach it — to feed it data with expected results before it can give a useful response (or otherwise give it a way to evaluate data it generates and feeds back to itself). However, a number of techniques predate ML, or at least predate computers fast enough to make ML worthwhile, and so we are left to find some objective way to classify those as AI or not.
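For reference, this is the kind of search in question — a minimal A* maze solver with a Manhattan-distance heuristic (toy maze, illustrative only). It's textbook "AI" with no learning anywhere:

```python
import heapq

# Minimal A* on a grid maze ('#' = wall), Manhattan-distance heuristic.
# Classic symbolic "AI" search: no training data, no learned parameters.
def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, pos, path)
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                nxt = (nr, nc)
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # goal unreachable

maze = ["..#.",
        ".##.",
        "...."]
path = astar(maze, (0, 0), (0, 3))
print(len(path) - 1)  # prints 7: shortest route around the walls
```

Whether you call that "AI" depends entirely on the era's goalposts, which is the parent's point.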
They weren't saying the fill tool was an early form of what we think of as an "AI algorithm" today, but that the leading edge of machine intelligence / intuitive capability has changed, and it will continue to change.
Yep, people forget that PageRank was Google's secret sauce, and it was literally ML: it looked at data to compute the parameters of a model (edge probabilities) and rank a set of candidate results.
I'm having a hard time seeing PageRank circa '99 as ML just because it uses a lot of data and is linear algebra at its core. It's one large eigenvalue problem.
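For anyone unfamiliar: PageRank is the dominant eigenvector of the link graph's (damped) transition matrix, which power iteration finds by repeatedly redistributing rank along links. A toy sketch on a hypothetical 4-page graph, with the damping factor 0.85 from the original paper (this is an illustration, not Google's production algorithm):

```python
# Toy PageRank via power iteration on a hypothetical 4-page link graph.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = sorted(links)
n = len(pages)
d = 0.85  # damping factor, as in the original PageRank paper

rank = {p: 1.0 / n for p in pages}  # start from the uniform distribution
for _ in range(50):  # iterate toward the dominant eigenvector
    new = {p: (1 - d) / n for p in pages}  # "random jump" baseline mass
    for src, outs in links.items():
        share = rank[src] / len(outs)  # split src's rank over its outlinks
        for dst in outs:
            new[dst] += d * share
    rank = new

# "c" is linked from a, b, and d, so it accumulates the most rank
print(max(rank, key=rank.get))  # prints "c"
```

Whether you call the iteration "learning" or just "solving an eigenvalue problem" is exactly the disagreement in this subthread.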
I mean, it's true that ML uses linear algebra. But the difference between ML and PageRank is that ML is Machine Learning. The learning part (storing and updating based on new data) is different from statically calculating PageRank using Map/Reduce and applying it as a ranking function.
Isn't updating parameters (in ML) the same as updating some ranking-function parameter? In my opinion, any algorithm that updates its model parameters based on data is ML.
Most machine learning in use today is “train once (and maybe finetune) and then deploy“, not online, continuous learning. I don’t think the frequency of model updates is generally a good indicator of whether or not something is considered machine learning.
More to the original point: nowadays when most people think of machine learning they are thinking of deep neural networks, whereas Google's original PageRank was very simple and shallow by comparison. But they built an algorithm that allowed machines to learn which pages were high value and which were low value. If that seems simple by today's standards, it's evidence of the AI goalposts moving more than anything else.