At what point have our neural networks crossed over into demonstrating algorithmic behavior, such that we no longer consider them fancy interpolating functions? Is there a way to quantify this?
See Moravec's paradox [1], which was formulated decades ago. The more we build these models, the more it seems like the paradox is basically true: the hard part of intelligence isn't higher reasoning but the basic perception and sensorimotor control that animals had been evolving for hundreds of millions of years before Homo sapiens came on the scene.
People in the 17th century viewed themselves as fancy clockwork machines, because of course mechanical clocks were the peak of high-tech hype at the time.
(Really what we should be thinking about is information complexity, not facile analogies.)
And in theological arguments back then, god was called "the great watchmaker". Now the question "are we living in a computer simulation?" keeps popping up here and there. I wonder what the analogy will be in 400 years.
There's no way a neural network could ever learn a hash function directly (short of memorizing every possible input and output in its table). And if there were an indirect way to train it, you'd find it was still interpolating, just between possible hash functions in a larger space, for example if it were trained to generate and test C programs that compute hashes.
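If you want to watch the direct case fail, here's a minimal sketch: train a small MLP to predict one output bit of SHA-256 from the input bits. The choice of hash, dataset size, network width, and scikit-learn's MLPClassifier are all arbitrary picks for illustration, not anyone's canonical experiment.

    # Sketch: an MLP asked to learn one bit of SHA-256 from the input bits.
    import hashlib
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def bits(n, width=32):
        # Little-endian bit vector of the integer n.
        return [(n >> i) & 1 for i in range(width)]

    def hash_bit(n):
        # Lowest bit of the first byte of SHA-256 of n's 4-byte encoding.
        digest = hashlib.sha256(n.to_bytes(4, "little")).digest()
        return digest[0] & 1

    rng = np.random.default_rng(0)
    xs = rng.integers(0, 2**32, size=20000, dtype=np.uint32)
    X = np.array([bits(int(x)) for x in xs])
    y = np.array([hash_bit(int(x)) for x in xs])

    X_train, X_test = X[:15000], X[15000:]
    y_train, y_test = y[:15000], y[15000:]

    clf = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=300)
    clf.fit(X_train, y_train)

    # Training accuracy can creep above chance (memorization);
    # test accuracy on unseen inputs stays around 0.5.
    print("train acc:", clf.score(X_train, y_train))
    print("test  acc:", clf.score(X_test, y_test))

The expected outcome is exactly the "lookup table or nothing" behavior described above: whatever the network gains on the training set comes from memorizing those specific inputs, while held-out accuracy sits at roughly coin-flip level.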