I may be wrong, but it seems to me this is also a case of improper use of words.

Those LLMs neither agree nor disagree. They do not understand. They produce output, and we read that output and we ourselves interpret it as one thing or another.

All an LLM does is produce output. There's no conceptual understanding behind it, and so there is no agreement, or disagreement.


> All an LLM does is produce output. There's no conceptual understanding behind it, and so there is no agreement, or disagreement.

I think I agree. But even on HN, what percentage of human comments are just basic inference, reflexive "reddit"-style output, etc.? And those come from humans.

I am not trying to elevate LLMs to some form of higher intelligence; my only point is that most of the time, we are not all that much better. Even the best 0.000001% of us fall into these habits sometimes. [0]

I currently believe that modern LLM architecture will likely not lead to AGI/ASI. However, even without that, they could do a lot.

I could also be very wrong.

[0] https://en.wikipedia.org/wiki/Nobel_disease


LLMs learn high-dimensional representations that capture conceptual relationships in their training data. They manipulate those representations in ways that approximate human reasoning.
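
FWIW, here's a minimal sketch of what that looks like in practice. It uses the sentence-transformers library; the model name and example sentences are my own illustrative choices. Semantically related text lands close together in the embedding space, which is one concrete sense in which the representations "capture conceptual relationships":

  # Minimal sketch: text mapped into a high-dimensional vector space,
  # where semantic relatedness shows up as geometric closeness.
  # Model choice and example sentences are illustrative assumptions.
  import numpy as np
  from sentence_transformers import SentenceTransformer

  model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

  sentences = [
      "The cat sat on the mat.",
      "A kitten rested on the rug.",
      "Quarterly revenue exceeded expectations.",
  ]
  emb = model.encode(sentences)  # shape: (3, 384)

  def cosine(a, b):
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  # The two pet sentences score far more similar to each other
  # than either does to the finance sentence.
  print(cosine(emb[0], emb[1]))  # high
  print(cosine(emb[0], emb[2]))  # low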

> They manipulate those representations in ways that approximate human reasoning.

Fwiw, this is the story of my life. Seriously.


LOL everyone is like that most of the time.

System 1 vs System 2 thinking.

System 1 is rapid; it uses heuristics to make quick judgements. It's not rigorous, but it's the default mode.

System 2 is slow, deliberate reasoning; it's energy intensive, and even humans often get it wrong.

LLMs often use something like System 1 pattern matching, get the answer wrong initially, and can then be prodded into trying again with a System 2 approach (chain of thought); a rough sketch of that prompting pattern is below.

https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
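
As a rough sketch of that prodding (the model name and exact prompts are my assumptions; the client usage follows the OpenAI Python SDK), using Kahneman's own bat-and-ball question:

  # Sketch of "System 1" vs "System 2" style prompting of an LLM.
  # Model name and prompt wording are assumptions, not a fixed recipe.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  question = ("A bat and a ball cost $1.10 in total. The bat costs "
              "$1.00 more than the ball. How much does the ball cost?")

  def ask(prompt):
      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content

  # "System 1": demand an immediate answer; the intuitive (but wrong)
  # response is $0.10.
  print(ask(question + " Answer with just the number."))

  # "System 2": nudge the model to reason step by step first, which
  # makes the correct answer ($0.05) more likely.
  print(ask(question + " Think through it step by step, then answer."))

The bat-and-ball problem is the book's canonical System 1 trap, so it doubles as a fitting test case.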
