Daniel Dennett stuck the boot into the Chinese Room argument in Consciousness Explained (and probably in several other places as well). As he points out, it's based on an intellectual sleight-of-hand (what he calls an "intuition pump").
From the linked article on the Stanford site:
> Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room.
[Emphasis added.]
Unless those outside are satisfied with a very rudimentary back-and-forth, such a program would have to be amazingly complicated (and those outside very patient, or the operator superhumanly fast). It's not just a question of "look up input Chinese character and return corresponding output character" -- which is never explicitly stated, but which we're subconsciously led to by the image of a human "following the program for manipulating symbols and numerals".
Dennett gives an example where the outside person tells a joke, and asks the room's operator to explain it. You could also imagine a reading comprehension exercise, with questions ranging from purely structural ("What is the third word on the fifth line?") through semantic ("What colour was the girl's dress?") to analytical ("Why do you think the boy was sad?"). Questions can be self-referential, or depend on previous questions (and their answers), or on a vast mass of knowledge we take for granted (a dress the colour of snow is white; a small boy might cry if his kite gets stuck in a tree).
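To make the point concrete, here's a toy sketch (my own illustration, not Searle's or Dennett's construction, and all the names in it are hypothetical) of why a pure input-to-output lookup table can't sustain this kind of conversation: a question like "What did you just say?" has no fixed answer, so the program has to carry conversational state.

```python
# A pure input -> output table: the same question always gets the same
# answer, with no memory of the conversation so far.
lookup_table = {
    "What colour was the girl's dress?": "White.",
    "Why do you think the boy was sad?": "His kite was stuck in a tree.",
}

def stateless_room(question):
    """Answer purely by table lookup; context-dependent questions fail."""
    return lookup_table.get(question, "???")

def stateful_room(question, history):
    """Answer using the table plus a record of previous answers."""
    if question == "What did you just say?":
        answer = history[-1] if history else "Nothing yet."
    else:
        answer = lookup_table.get(question, "???")
    history.append(answer)
    return answer

history = []
stateful_room("What colour was the girl's dress?", history)  # "White."
print(stateful_room("What did you just say?", history))      # prints "White."
print(stateless_room("What did you just say?"))               # prints "???"
```

And this only handles one narrow kind of context-dependence; cover self-reference, running topics, and background knowledge as well, and the "program" the operator follows stops looking anything like a simple symbol-matching table.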
It doesn't seem so intuitive that a system that complex lacks anything we might legitimately describe as "understanding".