
>> No one tells the computer what to do.

Sure they do. Say you have a machine learning algorithm that can learn a task from examples, and let's notate it like so:

y = f(x)

Where y is the trained system, f the learning function and x the training examples.

The "x", the training examples, is what tells the computer what to learn, and therefore what to do once it's trained. If you change the x, you get a different y. Therefore, you're telling the computer what to do.
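To make the y = f(x) notation concrete, here's a toy sketch: a hand-rolled 1-nearest-neighbour learner standing in for f (the learner itself is illustrative, not any particular algorithm from the discussion). The same f, fed different x, produces a trained system y with different behaviour.

```python
# Toy learner "f": 1-nearest-neighbour over numeric inputs.
# x is a list of (input, label) training examples; f returns the trained system y.
def f(x):
    def y(query):
        # Predict the label of the closest training input.
        nearest = min(x, key=lambda ex: abs(ex[0] - query))
        return nearest[1]
    return y

# Two different x's give two different trained systems y:
y_parity = f([(0, "even"), (1, "odd"), (2, "even"), (3, "odd")])
y_sign   = f([(-5, "neg"), (-1, "neg"), (1, "pos"), (5, "pos")])

print(y_parity(2))   # "even"
print(y_sign(-3))    # "neg"
```

Changing x is the only thing that changed between y_parity and y_sign; the learning function f stayed fixed.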

In fact, once you train a computer for a different y, it may or may not be really good at it, but it certainly can't do the old y anymore. Which is what I mean by "machine learning can't lead to AGI". Because machine learning algorithms are really bad at generalising from one domain to another, and the ability to do so is necessary for general intelligence.
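The "can't do the old y anymore" point is easy to demonstrate with a minimal sketch: a single linear unit trained by gradient descent on task A, then trained further on task B only. All the names and targets below are illustrative, not from any real system.

```python
# Minimal sketch of forgetting under sequential training:
# one weight, plain gradient descent on squared error.
def train(w, examples, steps=1000, lr=0.1):
    for _ in range(steps):
        for inp, target in examples:
            pred = w * inp
            w -= lr * (pred - target) * inp   # gradient step toward the current task
    return w

task_a = [(1.0, 2.0)]    # task A is solved by w = 2
task_b = [(1.0, -3.0)]   # task B is solved by w = -3

w = train(0.0, task_a)
print(round(w, 3))       # 2.0: good at task A

w = train(w, task_b)     # keep training, but only on task B
print(round(w, 3))       # -3.0: the task-A solution is gone
```

The second round of training overwrites the weight that encoded task A; nothing in the update rule tries to preserve it.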

Edit: note that the above has nothing to do with supervised vs unsupervised etc. The point is that you train the algorithm on examples, and that necessarily removes any possibility of autonomy.

>> Fine, all general AI. Like game playing etc.

I'm still not clear what you're saying; game-playing AI is not an instance of general AI. Do you mean "general game-playing AI"? Even that doesn't necessarily have a reward function. If I remember correctly, for instance, Deep Blue did not use reinforcement learning, and Watson certainly does not (I have access to the Watson papers, so I could double-check if you doubt this).

Btw, every game-playing AI requires a precise evaluation function. The difference with machine-learned game-playing AI is that this evaluation function is sometimes learned from data, rather than hard-coded by the programmer.
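A quick sketch of that distinction, using a made-up material-count evaluation for a chess-like game (the piece values and state representation are purely illustrative): both evaluators have the same functional form; only where the weights come from differs.

```python
# Hand-coded evaluation: the programmer fixes the piece values.
def eval_handcoded(material):   # material: dict of piece -> net count (ours minus theirs)
    values = {"pawn": 1, "rook": 5, "queen": 9}
    return sum(values[p] * n for p, n in material.items())

# "Learned" evaluation: same form, but the weights are a stand-in for
# values a learner would fit from game data.
learned_values = {"pawn": 1.1, "rook": 4.8, "queen": 9.3}
def eval_learned(material):
    return sum(learned_values[p] * n for p, n in material.items())

state = {"pawn": 2, "rook": 1, "queen": 0}
print(eval_handcoded(state))   # 7
print(eval_learned(state))
```

In both cases the search (minimax, MCTS, whatever) consumes the evaluation function the same way; the learner only changes how the numbers inside it were obtained.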



The thing about neural networks is that they can generalize from one domain to another. We don't have a million different algorithms, one for recognizing cars and another for recognizing dogs, etc. They learn features that both have in common.

>The "x", the training examples, is what tells the computer what to learn, therefore, what to do once it's trained. If you change the x, the learner can do a different y. Therefore, you're telling the computer what to do.

But with RL, a computer can discover its own training examples from experience. They don't need to be given to it.
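That point can be sketched with a tiny two-armed bandit (the environment, rewards, and schedule below are all made up for illustration): the agent is handed no labelled dataset; its (action, reward) training examples come from its own pulls.

```python
# Two-armed bandit: the agent generates its own experience by acting.
def pull(arm):
    # Hypothetical environment: arm 1 pays better (deterministic for clarity).
    return 1.0 if arm == 1 else 0.2

q = [0.0, 0.0]                     # value estimate per arm
for step in range(200):
    # Try both arms early, then act greedily on the agent's own estimates.
    arm = step % 2 if step < 20 else max((0, 1), key=lambda a: q[a])
    reward = pull(arm)             # a training example the agent discovered itself
    q[arm] += 0.1 * (reward - q[arm])   # incremental value update

print(max((0, 1), key=lambda a: q[a]))  # 1: it found the better arm on its own
```

No one supplied an x of labelled examples here; the "dataset" is the stream of rewards the agent's own actions produced.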

>I'm still not clear what you're saying; game-playing AI is not an instance of general AI.

But it is! The distinction between the real world and a game is arbitrary. If an algorithm can learn to play a random video game, you can just as easily plug it into a robot and let it play "real life". The world is more complicated, of course, but not qualitatively different.



