Thursday, July 23, 2020

The Chinese Room

The Chinese Room is a philosophical argument about the nature of mind that takes the form of a thought experiment. Imagine you're in a room with two slots. Tiles with Chinese markings on them are slid through one slot. You have a guidebook that tells you that if the tiles have such-and-such figures on them in such-and-such order, you are to take another tile with other markings on it and slide it through the other slot. You eventually get very good at it; maybe you even memorize the guidebook. The question is: does your competence in operating the Chinese Room mean that you understand Chinese? If you answer no, ask yourself what else it means to understand a language.

The answer most people would give is meaning. You know which symbols to put through the output slot based on which symbols come in through the input slot, but that doesn't amount to knowing what the symbols actually mean.

John Searle, who first proposed the Chinese Room Argument, said it's the difference between syntax and semantics. Knowing which symbols to produce in response to other symbols is a matter of syntax; semantics involves meaning, and that is left out of the equation.
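To make that distinction concrete, here is a minimal sketch of the room as a program. Everything in it is invented for illustration: the rulebook simply pairs input strings with output strings, and the program follows those pairings with no representation anywhere of what the symbols mean. That is syntax without semantics.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The rules and phrases below are invented for illustration; the point
# is that the program pairs input symbols with output symbols while
# having no representation of what any symbol means.

RULEBOOK = {
    "你好吗?": "我很好。",          # "How are you?" -> "I am well." (to us, not to the room)
    "你叫什么名字?": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def operate_room(tiles: str) -> str:
    """Return the output tile the guidebook pairs with the input tile.

    The lookup matches only the shape of the symbols, never their
    meaning: replacing every character with an arbitrary squiggle
    would change nothing about how the room operates.
    """
    return RULEBOOK.get(tiles, "请再说一遍。")  # default tile, also just a shape

print(operate_room("你好吗?"))  # slides back the paired tile
```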

Here's a great Kids in the Hall skit that accidentally makes this point.

[Embedded video]

Part of the reason this is absurd is that we could only take what he's saying as actual claims if he were actually asserting them. If he's just repeating sounds that, for him, have no meaning, we have no basis for accepting the meaning the words would have if spoken by someone who did understand them.

If the Chinese Room Argument is sound, there are some interesting consequences. One is that, if we believe that we do in fact understand meaning -- that we are able to operate on a semantic level, not just a syntactic one -- then our minds cannot be completely explained in mechanistic terms. Mechanism would only explain things on the syntactic level, and if we operate on a semantic level, then our minds transcend mere cause-and-effect mechanical processes.

Another is that attempts to recreate minds on a mechanistic basis, i.e. artificial intelligence, will only ever operate on a syntactic level. Such a system could be set up to respond in exactly the same way a mind operating on a semantic level does, but it would be a sham. It would be an attempt to trigger our intuition that there is a mind behind the symbols that intends to communicate meaning (remember the movie Screamers?), but insofar as it is only functioning on a syntactic level, it is in the same situation as the non-Chinese speaker in the Chinese Room.

Obviously, there's a lot more to be said about this: it's a live issue in philosophy with a lot of ink being spilled on both sides. It may be possible to generate an artificial intelligence that does operate on a semantic level. But how could we tell? Its output would be identical to that of one that only operates on a syntactic level. But then how do we know that other people -- friends, family, strangers -- are operating on a semantic level? This is the problem of other minds, which has also caused a lot of ink to be spilled.
