The approach they show in the end - it's still machine learning though? Exploring a space and finding parameters to optimize for a loss function (speed around the track), just not deep learning with neural nets.
I think Arthur Samuel would agree. This approach has a loss function, parameters, and inputs that feed into a model to optimise those parameters.
The big difference between this and the other approach mentioned in the article is that the model is a simple one that's easy to understand, instead of a many-layered neural network, which is rather opaque.
I think the article may be better titled "You might not need neural networks."
> I think the article may be better titled "You might not need neural networks."
Since a linear model is essentially a single-layer neural network with linear activation, we can't even say that. The author was using a neural network without realising it :)
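To spell out the equivalence (a hypothetical sketch, not code from the article): a linear model fitted by gradient descent is exactly a one-neuron network with identity activation. Same parameters, same update rule, just nobody draws the circle around it.

```python
# Hypothetical sketch: a linear model y = w*x + b trained by gradient
# descent on mean squared error *is* a single-layer neural network
# with an identity activation.

def train_linear(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Fit noiseless data from y = 3x + 1; the "network" recovers w ~ 3, b ~ 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 4.0, 7.0, 10.0]
w, b = train_linear(xs, ys)
print(w, b)
```

Rename the update loop "backpropagation" and it reads like a deep learning tutorial; call it least squares and it reads like a statistics textbook.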
Machine learning is a set of techniques developed to attack modeling problems that traditional algorithms couldn't solve. So if the algorithm was developed and used before computers, it definitely isn't machine learning. Everything done in this article was known and used before computers existed, hence not machine learning.
Doing automatically controlled systems was still possible before computers just that it required a bit more creative use of analog components.
> Machine learning is a set of techniques developed to attack modeling problems that traditional algorithms couldn't solve. So if the algorithm was developed and used before computers it definitely isn't machine learning
I think that it's important to note here that this was the set of problems that computer science researchers didn't know how to solve.
In the early days, they mostly ended up re-inventing statistical methods.
And to be fair, a neural net is just a bunch of linear models joined by a non-linearity. In that case, it's essentially stacked logistic regression, which was invented before computers.
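To make the "stacked logistic regression" view concrete, here's a minimal sketch (weights picked by hand, nothing learned): every unit below is literally a logistic regression, sigmoid(w . x + b), and stacking two layers of them computes XOR, which no single logistic regression can.

```python
import math

# Each unit is a logistic regression: sigmoid of a linear combination.
# Stacked, they form a tiny neural network. Weights are hand-chosen
# for illustration, not learned.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def unit(weights, bias, inputs):
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor_net(x1, x2):
    h1 = unit([20, 20], -10, [x1, x2])    # acts like OR
    h2 = unit([-20, -20], 30, [x1, x2])   # acts like NAND
    return unit([20, 20], -30, [h1, h2])  # acts like AND of the two

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_net(a, b)))
```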
> And to be fair, a neural net is just a bunch of linear models joined by a non-linearity.
Nobody did this before computers though.
> In that case, it's essentially stacked logistic regression, which was invented before computers.
It isn't "essentially stacked logistic regression", it is a technique which uses logistic regressions. The full technique is ML. If you remove the ML parts, what's left is basically just logistic regression, though.
> Machine learning is a set of techniques developed to attack modeling problems that traditional algorithms couldn't solve. So if the algorithm was developed and used before computers it definitely isn't machine learning.
It might not be a hard boundary, but I think the perception of ML vs. optimization is how much of a model you have. If all you have is a black box, then it's ML; if you know how the system you are studying works, it's (parameter) optimization.
That’s a very unfair distinction, almost like a No True Scotsman fallacy: saying machine learning is always opaque and other techniques are always transparent.
But machine learning has predated neural networks by hundreds of years. The core mathematical basis of all machine learning coursework is linear regression and decision trees. Other models like SVMs, Bayesian models, nearest neighbor indexes, TF-IDF text search, the naive Bayes classifier, etc., are basically machine learning 101, and they have many different properties regarding interpretability depending on the problem to solve.
Saying that linear regression is machine learning is like saying that Newton's laws are chemistry. There was no machine learning before computers, just regular old optimization algorithms.
This is very false. Least squares regression fitting, Chebyshev polynomial approximation, and maximum likelihood estimators all existed at the time, and those are all classic examples of standard machine learning. The term “machine learning” essentially encompasses any type of algorithm that expresses inductive statistical reasoning. Even just elementary school descriptive statistics is machine learning. “Machine learning” is a super old subfield of applied mathematics. The fact that the terminology “machine learning” didn’t exist until things like the perceptron and SVMs came along is utterly irrelevant semantic hairsplitting.
But ML is essentially just function approximation, and that definitely existed back then.
I know this is super pedantic, but it's important to remember the roots of things, and that even things which appear new have precursors that are much older than a lot of people realise.
Isn't that just searching a space? Machine learning generally refers to a fancy way of searching a space. Here the guy searched randomly so I'd say not machine learning.
But perhaps using a cost function or a loss function is enough to call it machine learning. A machine just used an algorithm to learn another algorithm after all.
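The kind of loop being described might look like this (a hypothetical sketch, not the article's actual code): pure random search over the coefficients of a polynomial, keeping whichever candidate scores lowest on a loss function. Here the "loss" is just squared error against a target curve; in the article it would be lap time around the track.

```python
import random

# Hypothetical sketch of random search: sample candidate polynomial
# coefficients uniformly and keep the best one seen so far, as judged
# by a loss function. No gradients, no learning rate, just a loop.

def loss(coeffs, samples):
    # Mean squared error of a + b*x + c*x^2 over (x, y) samples
    a, b, c = coeffs
    return sum((a + b * x + c * x * x - y) ** 2 for x, y in samples) / len(samples)

def random_search(samples, iters=10000, seed=0):
    rng = random.Random(seed)
    best = (0.0, 0.0, 0.0)
    best_loss = loss(best, samples)
    for _ in range(iters):
        cand = tuple(rng.uniform(-5, 5) for _ in range(3))
        cand_loss = loss(cand, samples)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss

# Target curve: y = 1 + 2x + 3x^2 on [-1, 1]
samples = [(x / 10, 1 + 2 * (x / 10) + 3 * (x / 10) ** 2) for x in range(-10, 11)]
coeffs, final_loss = random_search(samples)
print(coeffs, final_loss)
```

Whether you call that loop "machine learning" or just "search" is exactly the definitional question this thread is arguing about.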
Stochastic gradient descent and other things like genetic algorithms and simulated annealing are random search techniques specifically created and taught in the context of machine learning.
Simulated annealing goes back to the seventies and was definitely not "specifically created in the context of machine learning". Many (most?) optimization techniques have their origin in Operations Research.
Simulated annealing was developed for purposes of parameter fitting in physics modeling, based on Metropolis-Hastings, which was likewise developed in the context of parameter inference for model fitting. Simulated annealing for e.g. the traveling salesman problem came later.
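For readers unfamiliar with it, the acceptance rule under discussion is simple to sketch (illustrative toy with a made-up objective, not from any real application): a worse candidate is still accepted with probability exp(-delta / T), the Metropolis rule, and the temperature T is gradually cooled so the search settles down.

```python
import math
import random

# Toy simulated annealing on a 1-D function with several local minima.
# The objective f is made up for illustration; its global minimum is
# near x = -0.314.

def f(x):
    # Wavy bowl: x^2 pulls toward zero, the sine term adds local minima
    return x * x + 2 * math.sin(5 * x) + 2

def anneal(steps=20000, seed=1):
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    best, best_val = x, f(x)
    t = 3.0
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)
        delta = f(cand) - f(x)
        # Metropolis rule: always accept improvements, accept worse
        # moves with probability exp(-delta / t)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < best_val:
            best, best_val = x, f(x)
        t *= 0.9997  # geometric cooling schedule
    return best, best_val

best_x, best_val = anneal()
print(best_x, best_val)
```

At high temperature this behaves like random search; as t shrinks it behaves like greedy hill descent, which is the whole trick.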
I do agree some optimization algorithms are rooted in other fields. I wasn’t trying to say that machine learning is the only historic field from which optimization methods were developed. I just wanted to point out it is a major historic field where some highly respected search and optimization procedures were first created, since people often overlook how old machine learning is and the vast set of modeling procedures apart from neural networks that make up the core of machine learning.
Your comment is not coherent. You ask, “Are all optimization problems specifically created and taught in the context of machine learning?”, but this has no logical or semantic connection to my comment in any way. It fails to be a valid response or question.
Instead it seems you falsely believe you are writing with some sarcasm that endows rhetorical flair to undercut my comment. It’s very rude and juvenile in addition to being wholly ineffective.
Correct. I was waiting for someone to point this out. He optimized the parameters of a polynomial equation, and through this optimization the machine learned to effectively race around the track. The machine learned.