I think it's fair to argue that part of human intelligence is really just statistical matching. The best example that comes to mind is grammar. Grammar has a very complex set of rules, but most people cannot name or describe them accurately; instead, they just know whether or not a sentence _sounds_ correct. That feels a lot like the statistical matching performed by LLMs: an individual's mind iterates over which words follow one another and which phrases are likely.
Outside of grammar, you can hear a lot of this when people talk; their sentences wander, and they don't always seem to know ahead of time where their sentences will end up. They start anchored to a thought, and seem to hope that the correct words end up falling into place.
Now, does all thought work like this? Definitely not, and more importantly, there are many facets of thought that are not present in LLMs. When someone wanders badly while trying to get a sentence out, they are often able to introspect and see that they failed to articulate their thought. They can also slow down their speaking, or pause, and plan ahead; in effect, using that same introspection to keep themselves from speaking poorly in the first place. And of course there's also memory, consciousness, and all sorts of other facets of intelligence.
What I'm on the fence about is whether this point, or yours, actually detracts from the author's argument.
Recognizing a pattern and reproducing it is a huge part of the human experience, and that's the main thing that's intelligent-like about LLMs. A lot of the time they lack context and cohesion, and they're at a disadvantage for not being able to follow normal human social cues to course-correct, as you point out.