
> All an LLM does is produce output. There's no conceptual understanding behind it, and so there is no agreement, or disagreement.

I think I agree. However, even on HN, what percentage of human comments are just basic inference, reflexive output of the kind you'd expect on Reddit? And those are humans.

I am not trying to elevate LLMs to some form of higher intelligence; my only point is that, most of the time, we are not all that much better. Even the best 0.000001% of us fall into these habits sometimes. [0]

I currently believe that modern LLM architectures will likely not lead to AGI/ASI. Even without that, though, they could do a lot.

I could also be very wrong.

[0] https://en.wikipedia.org/wiki/Nobel_disease