
The responses to the Chinese Room experiment always seem to involve far more tortuous definition-shifting than the original thought experiment does.

The human in the room understands how to find a list of possible responses to the token 你好吗, and how to select a response like 很好 from the list and display that as a response.
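
(To make the setup concrete, here's a toy sketch of the rulebook as a Python lookup table - the entries are invented for illustration, since Searle never specifies the rules:)

    import random

    # Toy rulebook: a prompt the operator can't read maps to replies he also
    # can't read. (Entries invented for illustration.)
    RULEBOOK = {
        "你好吗": ["很好", "好得很", "不好"],  # "how are you" -> canned replies
    }

    def room_reply(prompt: str) -> str:
        # Match shapes, copy a symbol out; nothing here knows that 很好
        # asserts a feeling.
        candidates = RULEBOOK.get(prompt, ["……"])
        return random.choice(candidates)

    print(room_reply("你好吗"))  # e.g. 很好, produced without feeling anything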

But the human does not understand that 很好 represents an assertion that he is feeling good[1], even though the human has an acute sense of when he feels good or not. He may, in fact, not be feeling particularly good (because, for example, he's stuck in a windowless room all day moving strange foreign symbols around!) and would have answered completely differently had the question been asked in a language he understood. The books also have no concept of well-being, because they're ink on paper. We're really torturing the concept of "understanding" to death to argue that the understanding of a Chinese person who is experiencing 很好 feelings, or does not want to admit they actually feel 不好, is indistinguishable from the "understanding" of "the union" of a person who is not feeling 很好 and does not know what 很好 means, and some books which do not feel anything but contain references to the possibility of replying with 很好 (or, for variation, 好得很, or 不好, which leads to a whole different set of continuations). And the idea that understanding of how you're feeling - the sentiment conveyed to the interlocutor in Chinese - is synonymous with knowing on which bookshelf to find continuations where 很好 has been invoked is far too ludicrous to need addressing.

The only other relevant entity is the Chinese speaker who designed the room, who would likely have a deep appreciation of feeling 很好, 好得很 and 不好 as well as the appropriate use of those words he designed into the system, but Searle's argument wasn't that programmers weren't sentient.

[1] And ironically, I also don't speak Chinese, and have relatively little idea in what senses 很好 means "good" and how that overlaps with the English concept, beyond understanding that it's an appropriate response to a common greeting which maps to "how are you".



It's sleight of hand because the sentience of the human in the system is irrelevant. The human is following a trivial set of rules, and you could just as easily digitize the books and replace the human with a microcontroller. Voila, now you have a Chinese-speaking computer program and we're back to where we started. "The books" don't feel anything, true - but neither do the atoms in your brain. By asserting that the human in the room and the human who wrote the books are the only "relevant entities" - that consciousness can only emerge from collections of atoms in the shape of a human brain, and not from books of symbols - you are begging the question.

The Chinese room is in a class of flawed intuition pumps I call "argument from implausible substrate", the structure of which is essentially tautological - posit a functioning brain running "on top" of something implausible, note how implausible it is, draw conclusion of your choice[0]. A room with a human and a bunch of books that can pass a Turing test is a very implausible construction - in reality you would need millions of books, thousands of miles of scratch paper to track the enormous quantity of state (a detail curiously elided in most descriptions), and lifetimes of tedious book-keeping. The purpose of the human in the room is simply to distract from the fabulous amounts of information processing that must occur to realize this feat.
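
(A toy sketch of that elided state - all entries invented - showing why the rulebook has to branch on the whole transcript, which is what the scratch paper would be tracking:)

    # Toy illustration: the same rulebook must answer differently depending
    # on everything said before. (Entries and glosses invented.)
    RULES = {
        ("你好吗",): "很好",                       # "how are you?" -> "fine"
        ("你好吗", "很好", "真的吗"): "其实不好",  # "really?" -> "actually not fine"
    }

    history: list[str] = []  # the "miles of scratch paper"

    def reply(prompt: str) -> str:
        history.append(prompt)
        response = RULES.get(tuple(history), "……")
        history.append(response)
        return response  # the table must branch on every prior exchange

    print(reply("你好吗"))  # 很好
    print(reply("真的吗"))  # 其实不好 -- same rulebook, different state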

Here's a thought experiment - preserve the Chinese Room setup in every detail, except the books are an atomic scan of a real Chinese-speaker's entire head - plus one small physics textbook. The human simply updates the position, spin, momentum, charge etc of every fundamental particle - sorry, paper representation of every fundamental particle - and feeds the vibrations of a particular set of particles into an audio transducer. Now the room not only speaks Chinese, but also complains that it can't see or feel anything and wants to know where its family is. Implausible? Sure. So is the original setup, so never mind that. Are the thoughts and feelings of the beleaguered paper pusher at all relevant here?
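
(Schematically, and waving away every practical detail, the paper-pusher's job reduces to a brute update loop - the one-line dynamics below are a stand-in for the physics textbook:)

    from dataclasses import dataclass

    @dataclass
    class Particle:
        position: float
        momentum: float
        spin: float
        charge: float

    def step(particles: list[Particle], dt: float) -> None:
        # Stand-in for the physics textbook: the real update rule is all of
        # fundamental physics, applied particle by particle, on paper.
        for p in particles:
            p.position += p.momentum * dt

    def transduce(vocal_particles: list[Particle]) -> list[float]:
        # Read off the vibrations of the designated particles as audio.
        return [p.position for p in vocal_particles]

    head = [Particle(0.0, 1.0, 0.5, -1.0)]  # really ~10^27 of these, on paper
    step(head, 1e-3)
    audio = transduce(head)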

[0] Another example of this class is the "China brain", where everyone in China passes messages to each other and consciousness emerges from that. What is it with China anyway?


The sentience of the human is not irrelevant, because it helps us put ourselves in the place of a computer, whose workings we know precisely: executing precise calculations in a fixed time series.


> It's sleight of hand because the sentience of the human in the system is irrelevant. The human is following a trivial set of rules, and you could just as easily digitize the books and replace the human with a microcontroller. Voila, now you have a Chinese-speaking computer program and we're back to where we started.

Substituting the microcontroller back is... literally the point of the thought experiment. If it's logically possible for an entity which we all agree can think to perform flawless pattern matching in Chinese without understanding Chinese, why should we suppose that flawless pattern matching in Chinese is particularly strong evidence of thought on the part of a microcontroller that probably can't?

Discussions about the plausibility of building the actual model are largely irrelevant too, especially in a class of thought experiments where people on the other side offer hypotheticals like "imagine if someone built a silicon chip which perfectly simulates and updates the state of every relevant molecule in someone's brain..." as evidence in favour of their belief that consciousness is a soul-like abstraction that can be losslessly translated to x86 hardware. The difficulty of devising a means of adequate state tracking is a theoretical argument against computers ever achieving full mastery of Chinese as well as against rooms, and the number of books is irrelevant. (If we reduce the conversational scope to a manageable size, the paper-pusher and the books still aren't conveying actual thoughts, and the Chinese observer still believes he's having a conversation with a Chinese speaker.)

As for your alternative example, assuming for the sake of argument that the head scan is a functioning sentient brain (though I think Searle would disagree), the beleaguered paper pusher still gives the impression of perfect understanding of Chinese without being able to speak a word of it, so he's still a P-zombie. If we replace that with a living Stephen Hawking whose microphone is rigged to silently dictate answers via my email address when I press a switch, I would still know nothing about physics, and it still wouldn't make sense to try to rescue my ignorance of advanced physics by referring to Hawking and me as a union with collective understanding. The same goes for the union of understanding of me, a Xerox machine and a printed copy of A Brief History of Time.


> But the human does not understand that 很好 represents an assertion that he is feeling good[1], even though the human has an acute sense of when he feels good or not.

The question being asked about the Chinese room is not whether or not the human or the system 'feels good'; it's whether or not the system as a whole 'understands Chinese'. Which is not very relevant to the human's internal emotional state.

There's no philosophical trick to the experiment, other than an observation that while the parts of a system may not 'understand' something, the whole system 'might'. No particular neuron in my head understands English, but the system that is my entire body does.


It seems unreasonable to conclude that understanding of the phrase "how are you?" (or if you prefer "how do you feel?") in Chinese or any other language can be achieved without actually feeling or having felt something, and being able to convey that information (or consciously avoid conveying that information). Similarly, to an observer of a Thai room, me emitting สวัสดีค่ะ because I'd seen plenty of examples of that greeting being repeated in prose would apparently be a perfectly normal continuation, but when I tried that response in person, a Thai lady felt obliged - after she'd finished laughing - to explain that I obviously hadn't understood that selecting the ค่ะ suffix implies that I am a girl!

The question Searle actually asks is whether the actor understands, and as the actor is incapable of conveying how he feels or understanding that he is conveying a sentiment about how he supposedly feels, clearly he does not understand the relevant Chinese vocabulary even though his actions output flawless Chinese (ergo P-zombies are possible). We can change that question to "the system" if you like, but I see no reason whatsoever to insist that a system involving a person and some books possesses subjective experience of feeling whatever sentiment the person chooses from a list, or that if I picked สวัสดีค่ะ in a Thai Room that would be because the system understood that "man with some books" was best identified as being of the female gender. The system is as unwitting as it is incorrect about the untruths it conveys.

The other problem with treating actors in the form of conscious organisms and inert books the actor blindly copies from as a single "system" capable of "understanding" independent from the actor is that it would appear to imply that also applies to everything else humans interact with. A caveman chucking rocks "understands" Newton's laws of gravitation perfectly because the rocks always abide by them!


"But he human does not understand that 很好 represents an assertion that he is feeling good"

This is an argument about depth and nuance. A speaker can know:

a) The response fits (observe people say it)

b) Why the response fits, superficially (很 means "very" and 好 means "good")

c) The subtext of the response, both superficially and academically (Chinese people don't actually talk like this in most contexts; it's like saying "how do you do?". The response "very good" is a direct translation of English social norms and is also inappropriate for native Chinese culture. The subtext strongly indicates a non-native speaker with a poor colloquial grasp of the language. Understanding the radicals, etymology and cultural history of each character, and related nuance: should the response be a play on 好's component radicals of woman/child? etc etc)

The depth of c is nigh unlimited. People with an exceptionally strong ability in this area are called poets.

It is possible to simulate all of these things. LLMs are surprisingly good at tone and subtext, and are ever improving in these predictive areas.

Importantly, the translating human may not agree with or embody the meaning or subtext of the translation. I say "I'm fine" when I'm not fine literally all the time. It's extremely common for humans alone to say things they don't agree with, and for humans alone to express things that they don't fully understand. For a great example of this, consider psychoanalysis: an entire field of practice in large part dedicated to helping people understand what they really mean when they say things (Why did you say you're fine when you're not fine? Let's talk about your choices ...). It is extremely common for human beings to go through the motions of communication without being truly aware of what exactly they're communicating, and why. In fact, no one has a complete grasp of category (c).

Particular disabilities can throw these kinds of limited awareness and mimicry in humans into extremely sharp relief.

"And the idea that understanding of how you're feeling - the sentiment conveyed to the interlocutor in Chinese - is synonymous with knowing which bookshelf to find continuations where 很好 has been invoked is far too ludicrous to need addressing."

I don't agree. It's not ludicrous, and as LLMs show it's merely an issue of having a bookshelf of sufficient size and complexity. That's the entire point!
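
(A miniature of "a bookshelf of sufficient size and complexity": instead of shelving every continuation, compress observed continuations into next-token counts. A toy sketch, nothing like a real LLM's internals:)

    import random
    from collections import Counter, defaultdict

    counts: defaultdict[str, Counter] = defaultdict(Counter)

    def train(corpus: list[list[str]]) -> None:
        # Count which token follows which -- the "bookshelf", compressed.
        for tokens in corpus:
            for prev, nxt in zip(tokens, tokens[1:]):
                counts[prev][nxt] += 1

    def continue_from(token: str) -> str:
        options = counts[token]
        return random.choices(list(options), weights=list(options.values()))[0]

    train([["你好吗", "很好"], ["你好吗", "好得很"], ["你好吗", "不好"]])
    print(continue_from("你好吗"))  # pattern-matched, not felt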

Furthermore, this kind of pattern matching is probably how the majority of uneducated people actually communicate. The majority of human beings are reactive. It's our natural state. Mindful, thoughtful communications are a product of intensive training and education and even then a significant portion of human communications are relatively thoughtless.

It is a fallacy to assume otherwise.

It is also a fallacy to assume that human brains are a single reasoning entity, when it's well established that this is not how brains operate. Freud introduced the rider and horse model for cognition a century ago, and more recent discoveries underscore that the brain cannot be reasonably viewed as a single cohesive thought producing entity. Humans act and react for all sorts of reasons.

Finally, it is a fallacy to assume that humans aren't often parroting language that they've seen others use without understanding what it means. This is extremely common, for example people who learn phrases or definitions incorrectly because humans learn language largely by inference. Sometimes we infer incorrectly and for all "intensive purposes" this is the same dynamic -- if you'll pardon the exemplary pun.

In a discussion around the nature of cognition and understanding as it applies to tools, it makes no sense whatsoever to introduce a hybrid human/tool scenario and then fail to address that the combined system of a human and their tool might be considered to have an understanding, even if the small part of the brain dealing with what we call consciousness doesn't incorporate all of that information directly.

"[1]and ironically, I also don't speak Chinese " Ironically I do speak Chinese, although at a fairly basic level (HSK2-3 or so). I've studied fairly casually for about three years. Almost no one says 你好 in real life, though appropriate greetings can be region specific. You might instead to a friend say 你吃了吗?


There's no doubt that people pattern match and sometimes say they're fine reflexively.

But the point is that the human in the Room can never do anything else or convey his true feelings, because he doesn't know the correspondence between 好 and a sensation or a sequence of events or a desire to appear polite, merely the correspondence between 好 and the probability of using or not using other tokens later in the conversation (and he has to look that bit up). He is able to discern nothing in your conversation typology below (a); he doesn't actually know (a), he's simply capable of following non-Chinese instructions to look up a continuation that matches (a). The appearance to an external observer of having some grasp of (b) and (c) is essentially irrelevant to his thought processes, even though he actually has thought processes and the cards with the embedded knowledge of Chinese don't have thought processes.

And, no it is still abso-fucking-lutely ludicrous to conclude that just because humans sometimes parrot, they aren't capable of doing anything else[1]. If humans don't always blindly pattern match conversation without any interaction with their actual thought processes, then clearly their ability to understand "how are you" and "good" is not synonymous with the "understanding" of a person holding up 好 because a book suggested he hold that symbol up. Combining the person and the book as a "union" changes nothing, because the actor still has no ability to communicate his actual thoughts in Chinese, and the book's suggested outputs to pattern match Chinese conversation still remain invariant with respect to the actor's thoughts.

An actual Chinese speaker could choose to pick the exact same words in conversation as the person in the room, though they would tend to know (b) and some of (c) when making those word choices. But they could communicate other things, intentionally.

[1] That's the basic fallacy the "synonymous" argument rests on, though I'd also disagree with your assertions about education level. Frankly it's the opposite: ask a young child how they are and they'll think about whether their emotional state is happy or sad or angry or waaaaaaahh and use whatever facility with language they have to convey it, often spontaneously emitting their thoughts. A salesperson who's well versed in small talk and positivity will reflexively, for the 33rd time today, give an assertive "fantastic, and how are yyyyou?" without regard to his actual mood, and ask questions structured around previous interactions (though a tad more strategically than an LLM...).


"But the point is that the human in the Room can never do anything else"

I disagree. I think the point is that the union of the human and the library can in fact do all of those things.

The fact that the human in isolation can't is as irrelevant as pointing out that a book in isolation (without the human) can't either. It's a fundamental mistake in the problem's reasoning.

"And, no it is still abso-fucking-lutely ludicrous to conclude that just because humans sometimes parrot, they aren't capable of doing anything else"

Why?

What evidence do you have that humans aren't the sum of their inputs?

What evidence do you have that "understanding" isn't synonymous with "being able to produce a sufficient response?"

I think this is a much deeper point than you realize. It is possible that the very nature of consciousness centers around this dynamic; that evolution has produced systems which are able to determine the next appropriate response to their environment.

Seriously, think about it.


> I disagree. I think the point is that the union of the human and the library can in fact do all of those things.

No, the "union of the human and the library" can communicate only the set of responses a programmer, who is not part of the room, made a prior decision to make available. (The human can also choose to refuse to participate, or hold up random symbols but this fails to communicate anything). If the person following instructions on which mystery symbols to select ends up convincing an external observer they are conversing with an excitable 23 year old lady from Shanghai, that's because the programmer provided continuations including those personal characteristics, not because the union of a bored middle aged non-Chinese bloke and lots and lots of paper understands itself to be an excitable 23 year old lady from Shanghai.

Seriously, this is madness. If I follow instructions to open a URL which points to a Hitler speech, it means I understood how to open links, not that the union of me and YouTube understands the imperative of invading Poland!

> The fact that the human in isolation can't is as irrelevant as pointing out that a book in isolation (without the human) can't either. It's a fundamental mistake in the problem's reasoning.

Do you take this approach to other questions of understanding? If somebody passes a non-Turing test by diligently copying the answer sheet, do you insist that the exam result accurately represents the understanding of the union of the copyist and the answer sheet, and people questioning whether the copyist understood what they were writing are quibbling over irrelevances?

The reasoning is very simple: if a human can convincingly simulate understanding simply by retrieving answers from storage media, it stands to reason a running program can do so too, perhaps with even less basis for guessing what real-world phenomena the symbols refer to. It's an illustrative example of how patterns can be matched without cognisance of the implications of the patterns.

Inventing a new kind of theoretical abstraction of "union of person and storage media", and insisting that understanding can be shared between a piece of paper and a person who can't read the words on it, seems like a pretty unconvincing way to reject that claim. But hey, maybe the union of me and the words you wrote thinks differently?!

> I think this is a much deeper point than you realize. It is possible that the very nature of consciousness centers around this dynamic; that evolution has produced systems which are able to determine the next appropriate response to their environment.

It's entirely possible, probable even, that the very nature of consciousness centres around the ability to respond to an environment. But a biological organism's environment consists of interacting with the physical world via multiple senses, a whole bunch of chemical impulses called emotions, and millions of years of evolving to survive in that environment, as well as an extremely lossy tokenised abstract representation of some of those inputs used for communication purposes. Irrespective of whether a machine can "understand" in some meaningful sense, it stretches credulity to assert that the "understanding" of a computer program whose inputs consist solely of lossy tokens is similar or "synonymous" to the understanding of the more complex organism that navigates lots of other stuff.



