That is a gigantic oversimplification. All machines are application-specific, even machine-learning-based ones. They all require human supervision, whether through goal setting or error correction.
There are some areas where machines are better than humans, and playing Go is now one of them, but that doesn't mean machines will replace humans in every domain at once. We grow, our tools grow, and the cycle repeats.
Skill related: it'd be interesting to see how long driving AIs take to beat the best human drivers in a weight-equal vehicle. An algorithmic competitor in Formula One would be interesting.
Racing cars aren't limited by the driver's G tolerance, as far as I know. They generate around 4 G, from what I hear on the F1 coverage, which is well within driver capabilities. Their cornering G is limited by the tyres.
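For what it's worth, the ~4 G figure is easy to sanity-check with the centripetal-acceleration formula a = v²/r. A quick sketch (the speed and corner radius below are illustrative guesses, not data from any real circuit):

```python
# Back-of-envelope check of the ~4 G cornering figure, via a = v^2 / r.
# The speed and radius are hypothetical example values, not real F1 data.
G = 9.81  # standard gravity, m/s^2

def lateral_g(speed_mps, radius_m):
    """Lateral acceleration, in units of g, for a car cornering at constant speed."""
    return speed_mps ** 2 / radius_m / G

# e.g. ~180 km/h (50 m/s) through a 65 m radius corner
print(round(lateral_g(50.0, 65.0), 1))  # about 3.9 g
```

So plausible corner speeds do land in the 4 G range, and raising that number means more grip from the tyres, not a tougher driver.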
The recent video of Boston Dynamics' Atlas robot made it look like it can walk about as well as a human, and maybe even recover its balance better (see it heroically walking through snow).
I see "AI" doing well in games that have simple inputs, a limited range of legal outputs, and relatively easy "goodness" measures.
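Those three properties are exactly what makes a game tractable for standard learning methods. A toy sketch of the idea, using tabular Q-learning on a made-up one-dimensional grid game (everything here is a hypothetical illustration, not any real system): the state is a cell index (simple input), the actions are left/right (a limited range of legal outputs), and the reward is 1 only on reaching the goal (an easy "goodness" measure).

```python
import random

# Toy game: cells 0..4, start at 0, goal at 4.
# state = position (simple input), actions = {-1, +1} (limited legal
# outputs), reward = 1 only on reaching the goal (easy "goodness").
random.seed(0)
N = 5
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}

def step(s, a):
    """Move within the grid; reward 1.0 only for entering the goal cell."""
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == N - 1 else 0.0)

# Tabular Q-learning with epsilon-greedy exploration.
for _ in range(500):
    s = 0
    while s != N - 1:
        if random.random() < 0.1:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, -1)], Q[(s2, +1)]) - Q[(s, a)])
        s = s2

policy = [max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)  # the learned policy should move right in every cell
```

When inputs, legal moves, and rewards are this clean, a table and a few hundred episodes suffice; the hard open problems are precisely the ones where none of the three is available.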
I see nothing that might be able to tell us why gravitational mass is the same as inertial mass, for example, or any moves in that direction. This "AI" is good at simple games.