
I agree that LLMs are useful in many ways, but I think people are in fact often making the stronger claim I referred to in my original point, which you quoted. If the argument were put forward simply to highlight that LLMs, while fallible, are still useful, I would see no issue.

Yes, humans and LLMs are both fallible, and both useful.

I'm not saying the comment I responded to was an egregious case of the "fallacy" I'm wondering about, but I am saying that I feel like it's brewing. I imagine you've seen the argument that goes:

Anne: LLMs are human-like in some real, serious, scientific sense (they do some subset of reasoning, thinking, creating, and it's not just similar, it is intelligence)

Billy: No they aren't — look at XYZ (examples of "non-intelligence", according to the commenter).

Anne: Aha! Now we have you! I know humans who do XYZ too! QED.

I don't like Billy's argument and don't make it myself, but the rejoinder which I feel we're seeing often from Anne here seems absurd, no?
