
> 1) The tech industry has accidentally invented the initial stages of a completely new kind of mind, based on completely unknown principles...

> 2) The intelligence illusion is in the mind of the user and not in the LLM itself.

I've felt as though there is something in between. Maybe:

3) The tech industry invented the initial stages of a kind of mind that, though it misses the mark, is approaching something not too dissimilar to how an aspect of human intelligence works.

> By using validation statements, … the chatbot and the psychic both give the impression of being able to make extremely specific answers, but those answers are in fact statistically generic.

"Mr. Geller, can you write some Python code for me to convert a 1-bit .bmp file to a hexadecimal string?"

Sorry, even if you think the underlying mechanisms have some sort of analog, there's real value in LLMs; not so for psychics doing "cold readings".
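
For reference, here's a rough sketch of the kind of thing an LLM will actually hand back for that request (assuming "hexadecimal string" just means the file's raw bytes as hex; the filename is made up):

    from pathlib import Path

    def bmp_to_hex(path: str) -> str:
        """Read a (1-bit) .bmp file and return its contents as a hexadecimal string."""
        return Path(path).read_bytes().hex()

    if __name__ == "__main__":
        print(bmp_to_hex("image.bmp"))

No psychic's cold reading produces that, which is the point.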



I think it's fair to argue that part of human intelligence is actually just statistical matching. The best example that comes to mind is grammar. Grammar has a very complex set of rules, yet most people cannot really name them or describe them accurately; instead, they just know whether or not a sentence _sounds_ correct. This feels a lot like the same statistical matching performed by LLMs: a person's mind iterates over which words follow each other and which phrases are likely.
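
To make that concrete, here's a toy sketch of "sounds right" as statistical matching over word transitions (the corpus and sentences are made up; a real model is vastly larger, but the flavor is the same):

    from collections import Counter

    # Tiny "corpus" of word transitions the model has seen before
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))

    def familiarity(sentence: str) -> int:
        """Score a sentence by how many of its word transitions are statistically familiar."""
        words = sentence.lower().split()
        return sum(bigrams[pair] for pair in zip(words, words[1:]))

    print(familiarity("the cat sat on the rug"))   # higher: transitions seen before
    print(familiarity("rug the on sat cat the"))   # lower: unfamiliar transitions
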

Outside of grammar, you can hear a lot of this when people talk; their sentences wander, and they don't always seem to know ahead of time where their sentences will end up. They start anchored to a thought, and seem to hope that the correct words end up falling into place.

Now, does all thought work like this? Definitely not, and more importantly, there are many other facets of thought which are not present in LLMs. When someone has wandered badly while trying to get a sentence out, they are often also able to introspect and see that they failed to articulate their thought. They can also slow down their speaking, or pause, and plan ahead; in effect, using this same introspection to prevent themselves from speaking poorly in the first place. Of course there's also memory, consciousness, and all sorts of other facets of intelligence.

What I'm on the fence about is whether this point, or yours, actually detracts from the author's argument.


Recognizing a pattern and reproducing it is a huge part of the human experience, and that's the main thing that's intelligent-like about LLMs. A lot of the time they lack context / cohesion, and they're at a disadvantage for not being able to follow normal human social cues to course correct, as you point out.


Yeah, the basic premise is off because LLM responses are regularly tested against ground truth (like running the code they produce), and LLMs don't get to carefully select which requests they fulfill. On the contrary, they fulfill requests even when they are objectively incapable of answering correctly, such as incomplete or impossible questions.
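
That "tested against ground truth" step is trivially mechanical, which is what makes it different from a psychic's vague hits. A minimal sketch of the idea (the generated snippet and expected value are stand-ins, not real model output):

    import subprocess, sys

    generated_code = "print(sum(range(10)))"   # pretend this came from the model
    expected_output = "45"                     # ground truth we can verify independently

    # Run the model-produced code and check its output against the known answer
    result = subprocess.run([sys.executable, "-c", generated_code],
                            capture_output=True, text=True, timeout=10)
    print("passes ground truth:", result.stdout.strip() == expected_output)

A cold reading can't be scored like that; code either runs and gives the right answer or it doesn't.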

I do think there is a degree of mentalist-like behavior that happens, maybe especially because of the RLHF step, where the LLM is encouraged to respond in ways that seem more truthful or compelling than is justified by its ability. We appreciate the LLM bestowing confidence on us, and rank an answer more highly if it gives us that confidence... not unlike the person who goes to a spiritualist wanting to receive comforting news of a loved one who has passed. It's an important attribute of LLMs to be aware of, but not the complete explanation the author is looking for.


Which aspect of how a human leg works is a truck tire similar to?



