Scott tends to be a bit of a Debbie Downer on quantum timelines, and you need to separate that from actual criticisms of the underlying technology. If you look past the way he phrased it, his criticisms of quantum machine learning basically boil down to: there are still some things to be worked out. Not that we have no idea how to work those things out, just that there are still unsolved challenges to be tackled.
That’s not really a takedown of the idea.
The more critical challenge is that there is a massive, massive constant-factor difference between classical and quantum computing using foreseeable technology. Even in the best cases (like factoring), where a quantum computer gives you a polynomial-time algorithm for a problem that is classically superpolynomial, it happens to throw in a slowdown factor on the order of a trillion. Oops.
But still, even with large constant factors, algorithmic improvements eventually win out asymptotically. It’s just a matter of building a quantum computer big enough and reliable enough to handle problems of that size.
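To make that concrete, here is a rough back-of-the-envelope sketch. The cost functions and constants below are illustrative assumptions (a generic exponential for the classical algorithm, a cubic polynomial with a trillion-fold overhead for the quantum one), not measured figures for any real machine, but they show how even a 10^12 slowdown factor only pushes the crossover point out, rather than eliminating it:

```python
# Illustrative crossover: classical exponential vs. quantum polynomial
# with a huge constant-factor overhead. All scaling exponents and
# constants here are made up for illustration, not measurements.

def classical_cost(n):
    return 2 ** (n / 10)   # hypothetical exponential classical scaling

def quantum_cost(n):
    return 1e12 * n ** 3   # hypothetical cubic scaling with a trillion-x overhead

# Find the smallest problem size where the quantum algorithm wins
# despite its enormous constant factor.
n = 1
while classical_cost(n) <= quantum_cost(n):
    n += 1
print(f"quantum wins for problem sizes n >= {n}")
```

The overhead decides *where* the crossover sits, not *whether* it exists; with these particular made-up constants it lands in the high hundreds, which is why planning horizons matter so much in this debate.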
We are already hitting the limits of what we can train on GPUs at reasonable cost. I expect that there will be many advances in the years to come from improved training algorithms. But at some point that will run dry, and further advances will come from quantum computing. Scott is right to point out that this is far, far beyond any existing company’s planning horizon. But that doesn’t mean quantum technologies won’t work, eventually.
See https://scottaaronson.blog/?p=8329 for something more recent.