Hacker News

Watson also generates confidence estimates and minimum confidence bars for questions. It may sometimes have batty "answers" but usually it knows they are batty. The rate of incorrect answers that Watson has a high confidence in is fairly low.
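The thread doesn't describe Watson's internals, but the behavior described here — only playing an answer when its estimated confidence clears a minimum bar, and abstaining otherwise — can be sketched in a few lines. Everything below (the candidate lists, the scores, the threshold value) is invented for illustration, not IBM's actual pipeline.

```python
# Hypothetical sketch: answer only when the top candidate's estimated
# confidence clears a minimum bar; otherwise abstain (don't buzz in).
# Candidate scores and the threshold are made-up illustration values.

def best_answer(candidates, min_confidence=0.5):
    """candidates: list of (answer, confidence) pairs, confidence in [0, 1]."""
    if not candidates:
        return None
    answer, confidence = max(candidates, key=lambda pair: pair[1])
    return answer if confidence >= min_confidence else None

# A batty candidate can rank first, but a low score keeps it from being played.
print(best_answer([("Toronto", 0.14), ("Chicago", 0.11)]))   # -> None (abstains)
print(best_answer([("Chicago", 0.88), ("Toronto", 0.05)]))   # -> Chicago
```

The point is that "knowing an answer is batty" doesn't require understanding the answer; it only requires that the scoring function assigns it a low number.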

What's remarkable, and important not to take lightly, is the result that it's possible to generate answers to often vague and indirect clues without understanding. That likely means it will be possible to build useful systems for automating research and the synthesis of large amounts of data without needing to build artificial human-level intelligence.




...presuming that human-level intelligence entails any sort of understanding that is fundamentally deeper than what Watson is doing.


I think that's a fair bet. The new wave of "probabilistic everywhere" NLP models, though even the very simplest strictly dominate older grammatical methods, are often unable to take advantage of the structure of language and topic that humans exploit. It's a cutting-edge accomplishment when an NLP algorithm learns to predict long-range word pairs, such as that you will almost certainly see "law" or "marriage" somewhere in a sentence containing the word "annulled", even if the local area of the sentence doesn't seem to call for it. Humans, on the other hand, are more likely to forget that it's possible to annul much of anything else.
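The long-range pairing described here ("annulled" predicting "marriage" or "law" anywhere in the sentence, regardless of adjacency) is the kind of association a simple co-occurrence statistic such as pointwise mutual information (PMI) captures. A toy sketch, with an invented five-sentence corpus standing in for the millions of documents a real system would use:

```python
import math
from collections import Counter
from itertools import combinations

# Toy corpus; a real system would mine millions of sentences.
sentences = [
    "the court annulled the marriage last year",
    "the marriage was annulled by the court",
    "the new law was annulled on appeal",
    "the court upheld the law",
    "the weather was fine last year",
]

word_counts = Counter()
pair_counts = Counter()
for s in sentences:
    words = set(s.split())          # sentence-level co-occurrence, any distance
    word_counts.update(words)
    pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))

n = len(sentences)

def pmi(a, b):
    """Pointwise mutual information of a and b co-occurring in a sentence."""
    p_ab = pair_counts[frozenset((a, b))] / n
    if p_ab == 0:
        return float("-inf")
    return math.log2(p_ab / ((word_counts[a] / n) * (word_counts[b] / n)))

# "annulled" associates with "marriage" even with no adjacency at all.
print(pmi("annulled", "marriage") > pmi("annulled", "weather"))   # -> True
```

Nothing here understands annulment; the association falls out of counting, which is the commenter's point about what these models can and cannot exploit.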

I don't own a TV and plan on watching the Jeopardy match later online, so I'm just going to guess about Watson's performance. I think that humans abuse discovered patterns and structure in language and meaning to search through possible interpretations very quickly. Watson, on the other hand, uses far less structure and a room full of 200 cores to search through everything it knows much less efficiently. I feel like Watson's "strange" answers probably aren't nearly so strange when you realize it's simply being more fair to every possible answer than a human would be.

What's scary is that this sort of thing, a willingness to consider out-of-context answers, sounds pretty similar to the kind of behavior we humans praise as creative!


I think that humans abuse discovered patterns and structure in language and meaning to search through possible interpretations very quickly.

Right, but does that structure really represent a "deeper" understanding or just vast and meticulous optimizations of statistical algorithms similar to Watson's? Or is there a difference?

We feel like we know how we think, but we can't actually explain it in enough detail to reproduce it. Humans have a bad history of rationalization and tunnel vision. And now we discover that all the "wrong" ways to think deeply are actually the right ways to make a working AI.

If the AI can fool us into believing that it "understands" then maybe we can fool ourselves in the same way.


I don't honestly feel like we know how we think at all. I do think that statistics is a pretty good bet for the "math of learning", in that it's a sensible way to track how information flows through a model. Furthermore, the combinatorial problems involved have to be tackled by humans just the same, so we can perhaps say that we're studying phenomena similar to the workings of the brain.

Of course, the implementations we build will always be vastly different from their appearance in the brain since the architectures are so extraordinarily different!


The confidence estimations are most impressive to me, and of course absolutely crucial in a game where you have to risk points to make points.

But the fact that bulk correlation mining can answer even some 'vague and indirect' questions isn't that remarkable. Jeopardy clues are a very constrained domain: short clues in English with some distinctive idioms, and short answers that are drawn from some well-defined and constantly recurring classes.

With true natural language understanding, offline searchable copies of Wikipedia and Wiktionary – 64GB, tops? – could be used to answer almost every question. Instead Watson uses 15TB of RAM and 2880 cores.



