Hacker News

I think, ironically, there has been an "AI anti-hype hype", with people like Gary Marcus trying to blow every single possible issue up into a deal breaker. Most of the claims in this article are based on tests performed only on GPT-3, and researchers often seem to design tests in a way that proves their point - see an earlier comment of mine with an example: https://news.ycombinator.com/item?id=37503944

I agree there have been many attention-grabbing headlines driven by simple issues like contamination. However, I think AI has already proved its business value far beyond those issues, as anyone using ChatGPT with a code base not present in its training data can attest.




I think some amount of that is necessary, though, no? We have people claiming that this generation of AI will replace jobs - and plenty of companies have taken the bait and tried to get started with LLM-based bots. We even had a pretty high-profile case of a Google AI engineer going public with claims that their LaMDA AI was sentient. Regardless of what you think of that individual or Google's AI efforts, this resonates with the public. Additionally, a pretty common sentiment I've seen is non-tech people suggesting AI should handle content moderation - the idea being that since these systems aren't human and don't have "feelings", they won't have biases and won't attempt to "silence" any one political group (without realising that bias can be built in via the training data).

It seems pretty important to counter that, to debunk wild claims such as these, and to provide context and educate the world on these systems' shortcomings.


I think skepticism is always welcome, and we should continue to explore what LLMs can and cannot do. However, what I'm referring to is trying to get a quick win by defeating some inferior version of GPT, or applying a test that you wouldn't even expect most humans to pass.

The article itself is actually fine and pretty balanced, but it is a bit unfortunate that 80% of its examples are not illustrative of current capabilities. At least for me, most of my optimism about the utility of LLMs comes from GPT-4 specifically.



