
Wednesday, May 11, 2022

Chinese Room thought experiments

Thought experiments like Searle’s Chinese Room are supposed to show that understanding and consciousness are not reducible to computation. For if they are, then a bored monolingual English-speaking clerk who moves around pieces of paper with Chinese characters—or photographic memories of them in his head—according to a fixed set of rules counts as understanding Chinese and having the consciousness that goes with that.

I used to find this an extremely convincing argument. But I am finding it less so over time. Anybody who thinks that computers could have understanding and consciousness will think that a computer can simultaneously run two different processes of understanding and consciousness sandboxed apart from one another. Neither process will have the understanding and consciousness of what is going on in the other process. And that’s very much what the functionalist should say about the Chinese Room. We have two processes running in the clerk’s head. One is English-based; the other is Chinese-based and runs in an emulation layer. There is limited communication between the two, and hence understanding and consciousness do not leak between them.
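To make the sandboxing analogy concrete, here is a minimal sketch in Python of two processes on one machine, each with private state and its own input and output channel. The names and the toy message handling are hypothetical illustrations of isolation, not a model of cognition.

```python
# Two isolated processes on one piece of hardware: each sees only its own
# message stream, and neither can read the other's state.
from multiprocessing import Process, Queue

def interpreter(label, inbox, outbox):
    """Run one sandboxed loop over a private message stream."""
    state = []  # private state; invisible to the other process
    while True:
        message = inbox.get()
        if message is None:  # shutdown signal
            break
        state.append(message)
        outbox.put(f"[{label}] processed {len(state)} message(s)")

if __name__ == "__main__":
    # Each process gets its own queues: the only channels in or out.
    english_in, english_out = Queue(), Queue()
    chinese_in, chinese_out = Queue(), Queue()
    english = Process(target=interpreter, args=("English", english_in, english_out))
    chinese = Process(target=interpreter, args=("Chinese", chinese_in, chinese_out))
    english.start()
    chinese.start()

    english_in.put("hello")
    chinese_in.put("你好")
    print(english_out.get())  # only the English process handled this
    print(chinese_out.get())  # only the Chinese process handled this

    for q in (english_in, chinese_in):
        q.put(None)
    english.join()
    chinese.join()
```

The point is purely architectural: the two loops share hardware but no state, so each is blind to what happens in the other, which is what the functionalist says about the clerk’s two processes.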

If we accept the possibility of strong Artificial Intelligence, we have two choices of what to say about sandboxed intelligent processes running on the same hardware. We can say that there is one person with two centers of consciousness/understanding, or that there are two persons, each with one center. On the one-person-two-centers view, we can say that the clerk does understand Chinese and does have the corresponding consciousness, but that the understanding is sandboxed away from the English-based processing, and in particular the clerk will not talk about it (much as in the computer case, we could imagine the two processes communicating with a user through different on-screen windows). On the two-person view, we would say that the clerk does not understand Chinese, but that a new person comes into existence who does understand Chinese.

I am not saying that the proponent of strong AI is home free. I think both the one-person-two-centers and two-person views have problems. But these are problems that arise purely in the computer case, without anything like the Chinese Room going on.

The one-person-two-centers view of multiple intelligent processes running on one piece of hardware gives rise to insoluble questions about the unity of a piece of hardware. (If each process runs on a different processor core, do we count as having one piece of hardware or not? If not, what if the processes are constantly switching between cores? If yes, what if we separate the cores onto distinct pieces of silicon that are glued along an edge?) The two-person view, on the other hand, is incompatible with animalism in our own case. Moreover, it ends up identifying persons with software processes, which leads to the unfortunate conclusion that when the processes are put to sleep, the persons temporarily cease to exist, and hence that we do not exist when sufficiently deeply asleep.

These are real problems, but no additional difficulty that I can see comes from the Chinese Room case.