Hacker News
Yann LeCun warns AI tech is marching into dead end (nytimes.com)
12 points by tietjens 20 days ago | 4 comments



In the words of Richard Stallman, it's "pretend intelligence".

The language may be structurally correct, and *some* of the facts as well, but any real grasp of what is being said is largely missing.

This is a direct result of a design built around probability. The overriding objective is plausibility, which should not be confused with either facts or intelligence.


I agree on the whole, but aren't all thoughts some sort of guesswork? Don't facts just have a higher probability?


> Facts just have a higher probability?

You and I understand this, but an LLM doesn't.

An LLM doesn't understand the difference between fact and fiction.

It just uses probability to choose the next word. Hopefully, there are enough facts in its training data to serve as a guide. But if not, it will just as readily use fiction to produce something that sounds plausible.

Anything an LLM produces simply cannot be trusted and is a poor example of "intelligence".
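The next-word choice described above can be sketched as weighted sampling over candidate continuations. This is a minimal toy illustration, not any real model's decoding loop; the candidate words and their probabilities are invented for the example:

```python
import random

# Toy next-word distribution for a prompt like "The capital of France is ...".
# The candidates and weights are made up for illustration.
next_word_probs = {
    "Paris": 0.70,     # plausible and factual
    "Lyon": 0.20,      # plausible but wrong
    "Atlantis": 0.10,  # fiction, yet still sampleable
}

def sample_next_word(probs, seed=None):
    """Pick one continuation at random, weighted by its probability."""
    rng = random.Random(seed)
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

word = sample_next_word(next_word_probs, seed=0)
# Most draws yield "Paris", but nothing in the sampler rules out
# "Atlantis": it only sees numbers, not truth values.
```

The point of the sketch is that fact and fiction sit in the same distribution; the sampler has no notion of which candidates are true, only of which are likely.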



