
Zero-shot learning is essentially a way of building classifiers. There's no reasoning, there's no planning, there's no commonsense knowledge (not in the comprehensive, deep sense we would require before calling it that), and there's no integration of these skills to pursue common goals. You can't take GPT and say, ok, turn that into a robot that can clean my house, take care of my kids, cook dinner, and then be a great dinner-guest companion.
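
For concreteness, here's a minimal sketch of what "building a classifier" via zero-shot looks like in practice, using the Hugging Face transformers zero-shot pipeline. The library, model name, input text, and labels are my own illustrative assumptions, not anything specific to GPT:

    # Minimal zero-shot classification sketch (assumed setup: Hugging Face
    # transformers with an NLI model; text and labels are made up).
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    result = classifier(
        "The robot vacuum got stuck under the couch again.",
        candidate_labels=["household chores", "cooking", "childcare"],
    )
    print(result["labels"][0], result["scores"][0])  # top label and its score

No task-specific training happens here; the "classifier" is just a pretrained model scoring how well each candidate label fits the text.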

If you really probe GPT, you'll see that anything going beyond an initial sentence or two starts to show how superficial its understanding and intelligence are; it's basically a really impressive version of Searle's Chinese room.



I think this is generally a good answer, but keep in mind I said AGI "in text". My forecast is that within 3 years you will be able to give arbitrary text commands and get textual output for the equivalents of "clean my house, take care of my kids, ..." style problems.

I would also contend that there is reasoning happening, and that zero-shot performance demonstrates this: specifically, reasoning about the intent of the prompt. The fact that you get this simply by building a general-purpose text model is a surprise to me.

Something I haven't seen yet is a model simulating the mind of the questioner, the way humans do, over time (minutes, days, years).

In 3 years, I'll ping you :) Already made a calendar reminder


Pattern recognition and matching aren't the same thing as reasoning. Zero-shot demonstrates reasoning about as much as solving a quadratic equation with a new set of coefficients does; it's simply the ability to draw new decision boundaries using the same classification machinery and methodology. True AGI isn't bound to a medium; no one would say Helen Keller wasn't intelligent, for example.
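
To make the analogy concrete, here's a toy sketch (my own example): applying the quadratic formula to new coefficients is pure procedure, and that's the sense in which zero-shot generalization reuses a fixed capability rather than reasoning.

    # Toy illustration of the analogy: same formula, new coefficients.
    # Nothing here "reasons"; it just reapplies a fixed procedure.
    import math

    def solve_quadratic(a, b, c):
        disc = b * b - 4 * a * c
        if disc < 0:
            return None  # no real roots
        root = math.sqrt(disc)
        return ((-b + root) / (2 * a), (-b - root) / (2 * a))

    print(solve_quadratic(1, -3, 2))   # (2.0, 1.0)
    print(solve_quadratic(2, 5, -3))   # (0.5, -3.0)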

I look forward to this ping :)


What exactly is the difference between pattern matching and reasoning?


I think pattern matching can be interpreted as a form of reasoning, but it is distinct from logical reasoning, where you draw implications from assumptions. GPT seems really bad at this kind of thing: it often outputs text with inconsistencies, and in the GPT-3 paper it performed poorly on tasks like Recognizing Textual Entailment, which mainly involve this kind of reasoning.
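
For reference, a Recognizing Textual Entailment item is just a premise/hypothesis pair labeled entailment, neutral, or contradiction. Here's a rough sketch of scoring one with an off-the-shelf NLI model; the specific model, library, and label ordering are assumptions on my part, and the GPT-3 paper evaluated RTE with few-shot prompts rather than a model like this:

    # Sketch of an RTE-style item scored with a generic NLI model
    # (roberta-large-mnli via Hugging Face transformers; assumed tooling).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    name = "roberta-large-mnli"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)

    premise = "GPT-3 was trained on hundreds of billions of tokens of text."
    hypothesis = "GPT-3 was trained on a large amount of text."

    inputs = tok(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    # For this model, index 0 = contradiction, 1 = neutral, 2 = entailment.
    print({"contradiction": probs[0].item(),
           "neutral": probs[1].item(),
           "entailment": probs[2].item()})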



