>You have to realize that our theoretical tools are very weak. Sometimes, we have good mathematical intuitions for why a particular technique should work. Sometimes our intuition ends up being wrong [...] The questions become: how well does my method work on this particular problem, and how large is the set of problems on which it works well.
I'm not very familiar with this field. Has anyone made any progress on formalizing ways to measure the capabilities of intelligent systems? If the theory is weak, there must be someone working on improving it, right?
But since that's a $55M black hole with no published results other than a mostly meaningless claim to have solved CAPTCHA (which wasn't all that tough a task to begin with), there's no way to tell; it doesn't seem like practitioners of the art are the ones evaluating his prospects for further funding. But don't take it from some random dude on HN; here's Yann LeCun saying pretty much the same thing:
Hey Michael, I loved your book on quantum computing, but don't get me started on D-Wave, or as I see it: $15M for a huge magic box that might be faster than a $15,000 GPU cluster for some problems.
But seriously, the book rocked, and this one's coming along nicely.
This paper demonstrates something called "zero-shot learning," where you can infer the correct label of an unseen image based on similarity among representations learned in a separate NLP task.
For instance, it can label an image "tiger" even if it has never seen a tiger, but has only learned about the word (and inferred its relation to "cat," an image class it has seen) from reading text.
It's not intelligent, not even close. But it's an awfully strange emergent phenomenon these models are demonstrating. Exciting stuff, I think.
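Since the trick is easy to miss in prose, here's a toy sketch of the idea in Python (every name and vector below is made up for illustration; the actual paper learns word vectors from text and trains an image model to map pictures into that same vector space). Labeling is just nearest-neighbor search among word vectors, so a word the image model has never seen a picture of is still a perfectly valid answer:

    import numpy as np

    # Hypothetical word vectors learned from text alone. "tiger" has no
    # training images, but text puts it near "cat" in embedding space.
    word_vec = {
        "cat":   np.array([0.9, 0.1, 0.0]),
        "dog":   np.array([0.1, 0.9, 0.0]),
        "tiger": np.array([0.8, 0.1, 0.3]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def zero_shot_label(image_embedding, candidates):
        # Pick the word whose text-derived vector lies closest to where
        # the image model placed this picture.
        return max(candidates, key=lambda w: cosine(image_embedding, word_vec[w]))

    # Pretend an image encoder mapped a tiger photo to this point: close
    # to "cat", but nudged along dimensions text associates with "tiger".
    tiger_photo = np.array([0.85, 0.12, 0.25])
    print(zero_shot_label(tiger_photo, ["cat", "dog", "tiger"]))  # -> tiger

The nearest match is "tiger" even though the classifier never trained on a tiger image; all of its "knowledge" of tigers came from the text side.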