
It isn't a pseudo problem. In this case, it's a succinct statement of exactly the issue you're ignoring, namely the fact that great poets have minds and intentions that we understand. LLMs are language calculators. As I said elsewhere in this thread, if you don't already see the difference, nothing I say here is going to convince you otherwise.


Define "intentions" and "understand" in a way that is testable. All you are doing here is employing intuition pumps without actually saying anything.

> LLMs are language calculators.

And humans are just chemical reactions. That's completely irrelevant to the topic, as both can still act as universal Turing machines just the same.



