
Hey, I'm definitely on your side of the Great AI Wars--and definitely share your thoughts on the overall framing--but I think you're missing the serious nature of this contribution:

1. Small correction: it's actually a whole book AFAIK, and potentially someday soon, a class! So there's a lot more thought put in than the typical hot-take blog post. I also pop into one of these guys' replies on Bluesky to disagree on stuff fairly regularly, and can vouch for his good-faith, humble effort to get it right (not something to be taken for granted!)

2. RE:“the AI has no ground truth”, I'd say this is true, no matter how often they're empirically correct. Epistemological discussions (aka "how do humans think") invariably end up at an idea called Foundationalism, which is exactly what it sounds like: that all of our beliefs can be traced back to one or more "foundational" beliefs that we either do not question at all (axioms) or very rarely do (premises on steroids?). In that sense, this phrase is simply recalling the hallucination debates we're all familiar with in slightly more specific, long-standing terms; LLMs do not have a systematic/efficient way of segmenting off such fundamental beliefs and dealing with them deliberately. Which brings me to...

3. RE:“can’t reason logically”, again this is a common debate that I think is being specified more than usual here. A lot of philosophy draws a distinction between automatic and deliberate cognition. I give credit to Kant for the best version, but it's really a common insight, found in ideas like "Fast vs. Slow thinking"[1], "first order vs. recursive" thought[2], "ego vs. superego"[3], and--most relevantly--intuition vs. reason.[4] At the very least, it's not a criticism to be dismissed out of hand based on empirical success rates!

4. Finally, RE:“can’t explain how they arrived at conclusions”, that's really just another discussion of point 2 in more explicitly epistemic terms. You can certainly ask o3 to reason (hehe) about the cognitive processing likely to be behind a given transcript, but it's not actually accessing any internal state, which is a very important distinction! o3 would do just as well explaining the reasoning behind a Claude output as it would with one of its own.

Sorry for the rant! I just leave a lot of comments that sound exactly like yours on "LLMs are useless" blog posts, and I wanted to do my best to share my begrudging appreciation for this work.

The title is absurdly provocative, but they're not dismissing LLMs; they're characterizing their weaknesses using a colloquial term -- namely "bullshit" in the sense of "lying without knowing that you're lying".

[1] https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow

[2] https://www.mit.edu/~dxh/marvin/web.media.mit.edu/~minsky/pa...

[3] https://en.wikipedia.org/wiki/Id,_ego_and_superego

[4] https://plato.stanford.edu/entries/intuition/ , and a flawed but interesting one from Gary Marcus: https://garymarcus.substack.com/p/llms-dont-do-formal-reason...



