Showing posts with label Chinese room.

Wednesday, May 11, 2022

Chinese Room thought experiments

Thought experiments like Searle’s Chinese Room are supposed to show that understanding and consciousness are not reducible to computation. For if they are, then a bored monolingual English-speaking clerk who moves around pieces of paper bearing Chinese characters—or photographic memories of them in his head—according to a fixed set of rules counts as understanding Chinese and having the consciousness that goes with that.

I used to find this an extremely convincing argument. But I am finding it less so over time. Anybody who thinks that computers could have understanding and consciousness will think that a computer can simultaneously run two distinct processes of understanding and consciousness sandboxed apart from one another. Neither process will have the understanding and consciousness of what is going on in the other process. And that’s very much what the functionalist should say about the Chinese Room. We have two processes running in the clerk’s head. One is English-based; the other is Chinese-based and runs in an emulation layer. There is limited communication between the two, and hence understanding and consciousness do not leak between them.
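To make the computing side of the analogy concrete, here is a minimal toy sketch in Python (my own illustration, not anything from Searle or from the functionalist literature) of two processes running simultaneously on one machine: each gets its own address space, so nothing either one computes is visible to the other.

# Toy sketch: two sandboxed "mental" processes on one piece of hardware.
# Separate OS processes get separate address spaces, so neither can
# inspect the other's private state.
from multiprocessing import Process

def english_process():
    # Private state of the English-based process.
    state = {"language": "English", "mood": "bored"}
    print("English process, private state:", state)

def chinese_process():
    # Private state of the Chinese-based process (the "emulation layer").
    state = {"language": "Chinese", "mood": "engrossed"}
    print("Chinese process, private state:", state)

if __name__ == "__main__":
    p1 = Process(target=english_process)
    p2 = Process(target=chinese_process)
    p1.start(); p2.start()  # both run at once on the same hardware
    p1.join(); p2.join()    # but neither has access to the other's memory

The point of the sketch is only that co-location on one machine does not by itself give one process access to another’s state; the functionalist’s claim is that the same holds for the two processes in the clerk’s head.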

If we accept the possibility of strong Artificial Intelligence, we have two choices of what to say about sandboxed intelligent processes running on the same hardware. We can say that there is one person with two centers of consciousness/understanding, or that there are two persons each with one center. On the one-person-two-centers view, we can say that the clerk does understand Chinese and does have the corresponding consciousness, but that understanding is sandboxed away from the English-based processing, and in particular the clerk will not talk about it (much as in the computer case, we could imagine the two processes communicating with a user through different on-screen windows). On the two-person view, we would say that the clerk does not understand Chinese, but that a new person comes into existence who does understand Chinese.

I am not saying that the proponent of strong AI is home free. I think both the one-person-two-centers and two-person views have problems. But these are problems that arise purely in the computer case, without any Chinese room kind of stuff going on.

The one-person-two-centers view of multiple intelligent processes running on one piece of hardware gives rise to insoluble questions about the unity of a piece of hardware. (If each process runs on a different processor core, do we count that as one piece of hardware or not? If not, what if the processes are constantly switching between cores? If yes, what if we separate the cores onto separate pieces of silicon that are glued along an edge?) The two-persons view, on the other hand, is incompatible with animalism in our own case. Moreover, it ends up identifying persons with software processes, which leads to the unfortunate conclusion that when the processes are put to sleep, the persons temporarily cease to exist—and hence that we do not exist when sufficiently deeply asleep.

These are real problems, but no additional difficulty comes from the Chinese room case that I can see.

Wednesday, May 22, 2019

Functionalism and maximalism

It is widely held that consciousness is a maximal property—a property F such that, “roughly, … large parts of an F are not themselves F.” Naturalists have used maximality, for instance, to respond to Merricks’ worry that on naturalism, if Alice is conscious, so is Alice minus a finger, as they both have a brain sufficient for consciousness (see previous link). There are also the sceptical consequences, noted by Merricks, arising from thinking our temporal parts to be conscious.
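One hedged way to regiment the rough gloss just quoted (my own rendering, not an official definition from the maximality literature) is:

\[
\mathrm{Maximal}(F) \iff \forall x\,\bigl(Fx \rightarrow \neg \exists y\,(y \text{ is a large proper part of } x \wedge Fy)\bigr)
\]

That is, nothing that is F has a large proper part that is also F.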

But functionalists cannot hold to maximalism. For imagine a variant on the Chinese room experiment where the bored clerk processes Chinese characters with the essential help of exactly one stylus and one wax tablet. The functionalist is committed to the clerk plus the stylus and tablet—call that clerk-plus—being conscious, as long as the stylus and tablet are essential to the functioning of the system. But if clerk-plus is conscious, then by maximality the clerk is not, since the clerk is a large part of clerk-plus. But it is absurd to think that the clerk turns into a zombie as soon as he starts to process Chinese characters.

Perhaps, though, instead of consciousness being maximal, the functionalist maximalist can say that maximally specific phenomenal types of consciousness—say, feeling such and such a sort of boredom B—are maximal. The clerk feels B, but clerk-plus is, say, riveted by reading the Romance of the Three Kingdoms. There is no violation of maximality with respect to the clerk’s feeling bored, because clerk-plus isn’t bored.

That could be the case. But it could also happen that at some moment clerk-plus feels B as well. After all, the same feeling of boredom can be induced by different things. The Romance has slow bits. It could happen that clerk-plus is stuck in a slow bit, and for a moment clerk and clerk-plus lose sight of the details and are aware of nothing but their boredom—qualitatively the same boredom. And that violates maximality for specific types of consciousness.

If maximalism is needed for a naturalist theory of mind and if functionalism is our best naturalist theory of mind, then the best naturalist theory fails.