> Maybe computation isn’t the right principal metaphor to be thinking about in explaining this. It’s some sort of adaptation, and our computation is not locally adaptive; rather, our computation is only globally adaptive.
So perhaps much of the problem is that we're too often using a metaphor of non-adaptive computation. If instead we analogize to forms like simulated annealing, genetic algorithms, neural networks and so forth, the computational metaphor becomes less cul-de-sac and more throughway.
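To make "adaptive computation" a bit more concrete, here is a minimal sketch (not from the discussion, and the cost function and neighbor step are invented for illustration) of simulated annealing in Python. The acceptance rule adjusts to the local cost landscape as the temperature falls, which is roughly the kind of locally adaptive process the comment has in mind:

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.995, steps=5000):
    """Minimal simulated annealing loop: the computation adapts its
    acceptance behavior to the local cost landscape as temperature falls."""
    x, t = x0, t0
    best, best_cost = x, cost(x)
    for _ in range(steps):
        candidate = neighbor(x)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept uphill moves with a
        # temperature-dependent probability (the "adaptive" part).
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling
    return best, best_cost

# Toy usage: minimize a bumpy 1-D function.
f = lambda x: x * x + 3 * math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, x0=4.0, neighbor=step))
```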
> One part of your talk is saying there is this range of metaphorical domains—dynamic systems, control systems, biological adaptation, resonance models—different kinds of pictures, and of that panoply, we’ve chosen the computational almost uniquely to pursue.
> If instead we analogize to forms like simulated annealing, genetic algorithms, neural networks and so forth
Perhaps I'm missing something, but I see all of these things as well within the boundaries of computation!
I would say computation provides models for these systems, but, in general, these systems are not themselves computations - some of them are not literally the manipulation of abstract symbols by formal rules.
But, from a broader perspective, I don't think the "is this computation" question is a very useful one to pursue. Any true understanding of the conscious mind will have to answer the question of why it is different from the things we understand now: if it is, in fact, computation, then it is different in some way from the computations that we do understand now, and it is the differences that will be key to understanding it. When we understand it, we will have an informed basis for fitting it into our ontology.
Yes, computer scientists tend to! I'm always a little surprised when supposed robotics researchers and philosophers of mind insist that, for instance, control theory poses an alternative to talking about the nervous system performing computations. What do they think optimal control algorithms do?
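As a hedged illustration of that point (not something quoted above; the double-integrator plant is just an example choice), here is a discrete-time LQR gain computed in Python by iterating the Riccati recursion. Whatever else optimal control is, this is unambiguously a computation:

```python
import numpy as np

def lqr_gain(A, B, Q, R, iterations=200):
    """Discrete-time LQR: iterate the Riccati recursion toward a fixed point,
    then return the feedback gain K for the control law u = -K @ x."""
    P = Q.copy()
    for _ in range(iterations):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Double integrator with time step dt: state = [position, velocity].
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.eye(2)          # penalize state deviation
R = np.array([[0.1]])  # penalize control effort
print("feedback gain:", lqr_gain(A, B, Q, R))
```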
Since the artificial is a subset of the natural, ML applications are natural processes, often explicitly built to mimic other natural processes. So it's hard to know whether they're more a metaphor for natural processes or an example of them.
Scratching the surface of some VERY interesting points of view.
Look at this list of participants: Daniel Hillis, Neil Gershenfeld, Frank Wilczek, David Chalmers, Robert Axelrod, Tom Griffiths, Caroline Jones, Peter Galison, Alison Gopnik, George Dyson, Freeman Dyson, Seth Lloyd, Rod Brooks, Stephen Wolfram, Ian McEwan.
Interesting to see McEwan mentioned as present (though he doesn't speak in this transcript) -- a great writer, with a nice recent novel about AI, but not really a technical guy!