
The human mind is an estimator too.

The fact that the human mind can think in concepts, images, and words, and then compresses that into words for transmission, whereas LLMs think directly in words, is no obstacle.

If you watch someone reach a ledge, your mind will generate, based on past experience, a probabilistic image of that person falling. Then it will tie that to the concept of problem (self-attention) and start generating solutions, such as warning them or pulling them back etc.

LLMs can do all this too, but only in words.


Do you think language is sufficient to model reality (not just physical, but abstract) here?

I think not. We can get close, but there exist problems and situations beyond that, especially in mathematics and philosophy. And I don't think a visual medium, or a combination of the two, is sufficient either; there's a more fundamental, underlying abstract structure that we use to model reality.


> Do you think language is sufficient to model reality (not just physical, but abstract) here?

It's sufficient to the level needed for human intelligence. We're a product of evolution, and we only need as much abstraction as is required for operational reasons. Modeling reality in a deep, abstract way is something we want to do, but not something that was required for our minds to evolve, nor for us to create civilization as it is today.


> Do you think language is sufficient to model reality (not just physical, but abstract) here?

After much time spent trying to accomplish this during the 20th century, the answer was a resounding "no" [1].

[1] https://en.wikipedia.org/wiki/Logical_positivism#Decline_and...


>LLMs think

Quick aside here: They do not think. They estimate generative probability distributions over the token space. If there's one thing I do agree with Dijkstra on, it's that it's important not to anthropomorphize mathematical or computing concepts.
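
To make "estimate generative probability distributions over the token space" concrete, here's a toy sketch (made-up vocabulary and logits, not any real model's weights): the network produces one score per token, softmax turns those scores into a probability distribution, and "generation" is just sampling from it.

    import numpy as np

    # Toy vocabulary and made-up logits standing in for a model's output layer.
    vocab = ["the", "person", "falls", "off", "ledge"]
    logits = np.array([2.0, 0.5, 1.8, 0.1, 1.2])

    # Softmax: turn scores into a probability distribution over the token space.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # "Generation" is sampling the next token from that estimated distribution.
    next_token = np.random.choice(vocab, p=probs)
    print(dict(zip(vocab, probs.round(3))), "->", next_token)

That's the entire mechanism: a distribution is estimated, a token is drawn. Whether that deserves the word "thinking" is exactly the question.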

As far as the rest of your comment, I generally agree. It sort of fits a Kantian view of epistemology, in which we have sensibility giving way to semiotics (we'll say words and images for simplicity) and we have concepts that we understand by a process of reasoning about a manifold of things we have sensed.

That's not probabilistic though. If we see someone reach a ledge and take a step over it, then we are making a synthetic a priori assumption that they will fall. It's synthetic because there's nothing about a ledge that means the person must fall. It's possible that there's another ledge right under it that we can't see. Or that they're in zero gravity (in a sci-fi movie, maybe). Etc. It's a priori because we're making this statement not based on what already happened but rather on what we know will happen.

We accomplish this by forming concepts such as "ledge", "step", "person", "gravity", etc., as we experience them, until they exist in our mind as purely rational concepts we can use to reason about new experiences. We might end up being wrong, we might be right, we might even be right despite having made the wrong claims (maybe we knew he'd fall because of gravity, but there was no gravity and he ended up being pushed by someone and "falling" because of that; this is called a "Gettier problem"). But our correctness is not a matter of probability but rather one of how much of the situation we understand and how well we reason about it.

Either way, there is nothing to suggest that we are working from a probability model. If that were the case, you wind up in what's called philosophical skepticism [1], in which, if all we are is estimation machines based on our observations, how can we justify any statement? If every statement must have been trained by a corresponding observation, then how do we probabilistically model things like causality, which we would turn to to justify claims?

Kant's not the only person to address this skepticism, but he's probably the most notable to do so, and so I would challenge you to justify whether the "thinking" done by LLMs has any analogue to the "thinking" done using the process described in my second paragraph.

[1] https://en.wikipedia.org/wiki/Philosophical_skepticism#David...


> We accomplish this by forming concepts such as "ledge", "step", "person", "gravity", etc., as we experience them until they exist in our mind as purely rational concepts we can use to reason about new experiences.

So we receive inputs from the environment and cluster them into observations about concepts, and form a collection of truth statements about them. Some of them may be wrong, or apply only conditionally. These are probabilistic beliefs learned a posteriori from our experiences. Then we can do some a priori thinking about them with our eyes and ears closed, with minimal further input from the environment. We may generate some new truth statements that we have not thought about before (e.g. "stepping over the ledge might not cause us to fall because gravity might stop at the ledge") and assign subjective probabilities to them.

This makes the a priori seem to always depend on previous a posterioris, and simply mark the cutoff at which you stop taking environmental input into account for your reasoning within a "thinking session". Actually, you might even change your mind mid-reasoning based on the outcome of a thought experiment you perform, which you use to update your internal collection of facts. This would give the a priori reasoning you're currently doing an even stronger a posteriori character. To me, these observations basically dissolve the concept of a priori thinking.

And this makes it seem like we are very much working from probabilistic models, all the time. To answer how we can know anything: If a statement's subjective probability becomes high enough, we qualify it as a fact (and may be wrong about it sometimes). But this allows us to justify other statements (validly, in ~ 1-sometimes of cases). Hopefully our world model map converges towards a useful part of the territory!
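
As a toy illustration of "if a statement's subjective probability becomes high enough, we qualify it as a fact" (made-up numbers, a sketch of Bayesian belief updating rather than a claim about how minds actually work):

    # Belief: "stepping over a ledge causes a fall", updated with Bayes' rule.
    prior = 0.5                 # initial subjective probability
    p_fall_if_true = 0.95       # chance of observing a fall if the belief is true
    p_fall_if_false = 0.05      # chance of observing a fall anyway if it's false

    belief = prior
    for _ in range(5):          # five observed falls
        numerator = p_fall_if_true * belief
        belief = numerator / (numerator + p_fall_if_false * (1 - belief))

    FACT_THRESHOLD = 0.99       # arbitrary cutoff for "treat it as a fact"
    print(round(belief, 6), "fact" if belief > FACT_THRESHOLD else "still uncertain")

After a handful of confirming observations the belief crosses the threshold, which is roughly the "qualify it as a fact (and may be wrong about it sometimes)" move.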


But I do not think humans think like that by default.

When I spill a drink, I don't think "gravity". That's too slow.

And I don't think humans are particularly good at that kind of rational thinking.


>When I spill a drink, I don't think "gravity". That's too slow.

I think you do, you just don't need to notice it. If you spilled it in the International Space Station, you'd probably respond differently even if you didn't have to stop and contemplate the physics of the situation.


I think they may have been referring to the fact that in the case of a spilled drink there's a shortcut from the sensory input to a motor output. Maybe you reach for the falling cup, maybe you back away to not get spilled on. These don't really require the conscious mind at all.

I don't think that we need to be aware of the reasoning our minds are doing for it to constitute reasoning.


