
Nope.

I would appreciate it if you and the GP did not personally insult me when you have a question, though. You may feel that you know Marcus to be into one particular thing, but some of us have been familiar with his work long before he pivoted to AI.



I'm sorry, I didn't mean to insult you. To explain my reasoning: you use some particular wordings that just seem strange to me, such as first saying that Marcus' position is that "LLMs are impossible", which is either false or an incredibly imprecise shortcut for "AGI using LLMs is impossible", and then claiming it was beautiful.

I didn't mean to attack you personally, and I'm really sorry if it sounded that way. I appreciate the generally positive atmosphere on HN and believe it is more important than the actual argument, whatever it may be.


There are two problems with your post.

The first is that your phrasing "that LLMs are not possible or at least that they're some kind of illusion" collapses the claim being made to the point where it looks as if you're saying Marcus believes people are just deluded that something called an "LLM" exists in the world. Even allowing for some inference as to what you actually meant, it remains ambiguous whether you are talking about language acquisition (which you are in your second paragraph) or the genuine understanding and reasoning / robust world model induction necessary for AGI, which is the focus of Marcus' recent discussion of LLMs, and why we're even talking about Marcus here in the first place.

You seem more familiar with Marcus' thinking on language acquisition than I am, so I can only assume that his thinking on language acquisition and LLMs is somewhat related to his thinking on understanding and reasoning / world model induction and LLMs. But it doesn't appear to me, based on what I've read of Marcus, that his claims about the latter really depend on Chomsky. Which brings me to the second problem with your post, where you make the uncharitable claim that "he appears to me and others to be having a sort of internal crisis that's playing out publicly", as if it were simply impossible to otherwise believe that LLMs are not capable of genuine understanding / robust world model induction.



