
> AI can't think. It doesn't create an internal model of the problem given, it just guesses.

These "AI can't think" comments pop up on every single thread about AI and they're incredibly tiresome. They never bring anything to the discussion except reminding us how inherently limited these AIs are or whatever.

Someone else already replied with the OthelloGPT counter-example showing that, yes, they do build an internal model. To which you reply that the internal model doesn't count as thinking or abstract reasoning or something, and... like, what even is the point of bringing that up in every discussion? These assertions never come with empirical predictions anyway.
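For what it's worth, the OthelloGPT result is empirical and checkable: you train a small probe to decode the current board state from the model's hidden activations, then compare against the same probe trained on a randomly initialized model. Here's a minimal sketch of that probing setup, with synthetic tensors standing in for real captured activations (all shapes, names, and data here are illustrative assumptions, not the paper's code):

    import torch
    import torch.nn as nn

    D_MODEL, N_SQUARES, N_STATES = 512, 64, 3  # hidden size; 8x8 board; empty/mine/theirs

    # Stand-ins for activations captured at some layer of a move-sequence model,
    # plus the ground-truth board state at each position.
    hidden = torch.randn(10_000, D_MODEL)
    board = torch.randint(0, N_STATES, (10_000, N_SQUARES))

    # A linear probe: if this simple map decodes the board from activations
    # well above chance, the activations encode a board representation.
    probe = nn.Linear(D_MODEL, N_SQUARES * N_STATES)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(1_000):
        logits = probe(hidden).view(-1, N_SQUARES, N_STATES)
        loss = loss_fn(logits.flatten(0, 1), board.flatten())
        opt.zero_grad(); loss.backward(); opt.step()

That gives you a falsifiable prediction: held-out probe accuracy should be high for the trained model and near chance for the random control. "It has an internal model but that's not real thinking" predicts nothing.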

GP's comment was interesting because it pointed at a specific thing LLMs are bad at. The thousandth comment saying "LLMs can't think or do abstract things (except in all the cases where they can, but those don't really count)" brings nothing new.


