"The usual corollary (that ML should "therefore" be able to learn with a few examples) may only apply, as I see it, if we somehow encode previous "learning" about the problem in very the structure (architecture, hardware, design) of the model itself."
Yes, and they do. They aren't choosing completely arbitrary algorithms when they attempt to solve an ML problem; they typically use approaches that have already been proven to work well on related problems, or at least variants of proven approaches.
The question is how much information is encoded in those algorithms (to me, low-order logical truths about a few elementary variables, and a low degree of freedom for the system overall) compared to how much is encoded in the "algos of the human brain" (and really the whole body, if we admit that intelligence has little motivation to emerge when there is no signal to process and no action ever to be taken).
I was merely pointing out this outstanding asymmetry, as I see it, and the unfairness of judging our AI progress (or setting goals for it) relative to anything even remotely close to evolved species, in terms of end-result behavior and emergent high-level observations.
Think of it this way: a tiny neural net (equivalent to the brain of what, not even an insect?) "generationally evolved" by us to the point of recognizing cats and license plate numbers, processing human speech, suggesting songs and whatnot is really not too shabby. I'd call it a monumental success to be able to focus a NN so well on a vertical skill. But that's also low-order, low-freedom in the grander scheme of things, and "focus" (verticality) is just one aspect of intelligence (e.g. the raging battle right now is for "context": horizontality and sequentiality of knowledge; and you can see how the concept of "awareness", even just mechanical, lies behind that). So, many more steps to go. So vastly much more to encode in our models before they can take a lesson in one sitting and from a few examples.
It really took big, big, big data for evolution to do it anyway, and we're speeding that up thanks to focused design and to electronics that hasten information processing, but we don't seem to be fundamentally changing the law of neural evolution.
If you ask me, the next step is to encode structural information in the neuron itself, treating it as a machine or even a network in its own right, because that's how biology does it (the "dumb" logic-gate/transistor model is definitely wrong on all counts, too simplistic). It seems like the next obvious move, architecturally.
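To make that last idea concrete, here is a rough sketch (entirely my own illustration; the NetworkNeuron class and its parameters are invented for the example) of the difference between the usual point neuron, a weighted sum pushed through a fixed nonlinearity, and a unit that carries a small learnable network inside itself:

    # Contrast: classic point neuron vs. a neuron that is itself a tiny network.
    # Hypothetical sketch, not anyone's actual architecture.
    import numpy as np

    def simple_neuron(x, w, b):
        # Classic point neuron: weighted sum followed by a fixed nonlinearity.
        return np.tanh(np.dot(w, x) + b)

    class NetworkNeuron:
        # A neuron whose internal transfer function is a small network of its own,
        # i.e. structural information is encoded inside the unit itself.
        def __init__(self, n_inputs, hidden=4, rng=np.random.default_rng(0)):
            self.W1 = rng.normal(scale=0.5, size=(hidden, n_inputs))  # inner stage 1
            self.b1 = np.zeros(hidden)
            self.W2 = rng.normal(scale=0.5, size=(1, hidden))         # inner stage 2
            self.b2 = np.zeros(1)

        def __call__(self, x):
            h = np.tanh(self.W1 @ x + self.b1)         # internal processing
            return np.tanh(self.W2 @ h + self.b2)[0]   # scalar output, like any neuron

    x = np.array([0.2, -1.0, 0.5])
    print(simple_neuron(x, np.array([0.1, 0.3, -0.2]), 0.0))
    print(NetworkNeuron(n_inputs=3)(x))

Whether packing structure inside the unit (rather than just stacking more plain units) actually buys anything is exactly the open question, of course; the point of the sketch is only the shape of the idea.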