
Yeah, I was expecting the article to back up this claim by comparing the mechanisms behind LLMs with the mechanisms behind human thought and demonstrating a lack of overlap.

But I don't see any discussion of multilayer perceptrons or multi-head attention.

Instead, the rest of the article just says "it's a con" with a lot of words.


