Hacker News

> Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.

Large language models are not "smart". They do not think, and they have no intelligence, despite the "AI" moniker.

They vomit words based on very fancy statistics.

There is no path from that to "thought" and "intelligence."
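To make the "fancy statistics" point concrete: at bottom, a language model repeatedly samples the next token from a probability distribution conditioned on the text so far. The toy bigram table below is a hypothetical stand-in for a real model, just to illustrate that mechanism:

```python
import random

# Hypothetical bigram probabilities: P(next word | current word).
# A real LLM conditions on a long context with a neural network,
# but the sampling loop is conceptually the same.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {},
    "ran": {},
}

def generate(start, max_tokens=5, rng=None):
    """Greedy sampling loop: draw each next word from the distribution
    conditioned on the previous word, stopping at a dead end."""
    rng = rng or random.Random(0)
    tokens = [start]
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1], {})
        if not dist:
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))
```

Whether stacking this mechanism up to billions of parameters produces anything deserving the word "thought" is exactly what the thread is arguing about.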




Not that I disagree, but what is intelligence? How does our own intelligence work? If we don't know that, how can we be so sure what does and does not lead to intelligence? A little more humility is in order before whipping out the tired "LLMs are just stochastic parrots" argument.


Humility has to go both ways, then: we can't claim that LLMs actually are (or are not) AI without first qualifying that term.



