Wednesday, May 11, 2022

Chinese Room thought experiments

Thought experiments like Searle’s Chinese Room are supposed to show that understanding and consciousness are not reducible to computation. For if they are, then a bored monolingual English-speaking clerk who moves around pieces of paper with Chinese characters on them (or photographic memories of them in his head) according to a fixed set of rules counts as understanding Chinese and having the consciousness that goes with that.

I used to find this an extremely convincing argument. But I am finding it less so over time. Anybody who thinks that computers could have understanding and consciousness will think that a computer can run two different simultaneous processes of understanding and consciousness sandboxed apart from one another. Neither process will have the understanding and consciousness of what is going on in the other process. And that’s very much what the functionalist should say about the Chinese Room. We have two processes running in the clerk’s head. One is English-based; the other is Chinese-based and runs in an emulation layer. There is limited communication between the two, and hence understanding and consciousness do not leak between them.
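
To make the functionalist picture concrete, here is a toy sketch (the names and the tiny message protocol are invented purely for illustration): two isolated processes on one machine, each with its own private state, where nothing passes between them except what is explicitly sent over a narrow channel.

```python
# Purely illustrative sketch: two "sandboxed" processes on one machine, each
# with its own private state. Nothing crosses the boundary except what is
# explicitly sent over a narrow message channel.
from multiprocessing import Process, Queue

def english_process(inbox: Queue, outbox: Queue) -> None:
    memory = {"greeting": "Hello"}  # private state, invisible to the other process
    outbox.put(f"{memory['greeting']} from the English-based process")
    print("English-based process received:", inbox.get())

def chinese_process(inbox: Queue, outbox: Queue) -> None:
    memory = {"greeting": "你好"}  # private state, invisible to the other process
    outbox.put(f"{memory['greeting']} from the Chinese-based process")
    print("Chinese-based process received:", inbox.get())

if __name__ == "__main__":
    to_english, to_chinese = Queue(), Queue()
    p1 = Process(target=english_process, args=(to_english, to_chinese))
    p2 = Process(target=chinese_process, args=(to_chinese, to_english))
    p1.start(); p2.start()
    p1.join(); p2.join()
```

Neither process can inspect the other's memory; all that "leaks" is whatever is deliberately put on the queues, which is the analogue of the limited communication between the English-based and Chinese-based processing in the clerk.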

If we accept the possibility of strong Artificial Intelligence, we have two choices about what to say about sandboxed intelligent processes running on the same hardware. We can say that there is one person with two centers of consciousness/understanding, or that there are two persons each with one center. On the one-person-two-centers view, we can say that the clerk does understand Chinese and does have the corresponding consciousness, but that understanding is sandboxed away from the English-based processing, and in particular the clerk will not talk about it (much as in the computer case, we could imagine the two processes communicating with a user through different on-screen windows). On the two-person view, we would say that the clerk does not understand Chinese, but that a new person comes into existence who does understand Chinese.

I am not saying that the proponent of strong AI is home free. I think both the one-person-two-centers and two-person views have problems. But these are problems that arise purely in the computer case, without any Chinese room kind of stuff going on.

The one-person-two-centers view of multiple intelligent processes running on one piece of hardware gives rise to insoluble questions about the unity of a piece of hardware. (If each process runs on a different processor core, do we count as having one piece of hardware or not? If not, what if the processes are constantly switching between cores? If yes, what if we separate the cores onto separate pieces of silicon that are glued along an edge?) The two-person view, on the other hand, is incompatible with animalism in our own case. Moreover, it ends up identifying persons with software processes, which leads to the unfortunate conclusion that when the processes are put to sleep, the persons temporarily cease to exist, and hence that we do not exist when sufficiently deeply asleep.

These are real problems, but no additional difficulty comes from the Chinese room case that I can see.

11 comments:

Colin Causey said...

I think that this objection is similar to the systems and subsystems replies. The systems reply holds that from the fact that the clerk doesn't understand Chinese, it does not follow that the room doesn't understand Chinese. To fix this, we can suppose that the clerk "absorbs" the room by memorizing the rulebook and running through the program mentally without using paper and pencil. At this point, the subsystems reply kicks in: from the fact that the clerk (who is now the entire system) does not understand Chinese, it does not follow that no subsystem in the clerk's head understands Chinese. Perhaps, in virtue of the clerk running the program, there is a homunculus inside the clerk that understands Chinese, a second stream of consciousness that the clerk is not aware of. B. Jack Copeland, for instance, defends this argument.

In one of my papers, I attempt to defend the Chinese room from this objection by applying the Chinese room scenario to the homunculus. The basic strategy is to replace the homunculus inside the clerk with another clerk. Either the homunculus understands Chinese in virtue of running a computer program or it does not. If not, then the homunculus understands for some other reason, and therefore Strong AI is ruled out as the explanation: the homunculus does not understand simply in virtue of running a program. If so, then have another clerk run whatever program the homunculus is running (this program would be a sub-program of the overall program that the first clerk is running). For the same reason that the original clerk does not understand Chinese, the second clerk also will not understand Chinese. Suppose, however, that we posit a homunculus inside this second clerk that understands Chinese. In this case, we can apply the Chinese room to this second homunculus just as we did with the first.

The overall end result is either an infinite regress or we get to a point where some non-computational process explains (or at least contributes to an explanation of) a consciousness that understands Chinese. Thus, the thesis that computation, by itself, is sufficient for conscious understanding is false. (I apologize for any sloppiness or typos, as I am typing this on the phone.)

Thoughts?

Oktavian Zamoyski said...

I never understood the notion of the "room understanding Chinese". The whole point is that computation is like syntax without semantics. Where in the "room" can you find the signified? Not in the symbols, not in the rules, and therefore not in the book or the clerk. Without intentionality, we cannot speak of language, and therefore cannot speak of intelligence.

Zsolt Nagy said...
This comment has been removed by the author.
Zsolt Nagy said...

I'm not convinced that the Chinese Room thought experiment shows that consciousness is not reducible to computation, just as I'm not convinced that the conceivability of philosophical zombies without any consciousness implies that physicalism is false.
Physicalism either obtains or does not obtain, and consciousness might or might not be reducible to the physical.
It might even be the case that there are some physical minds as well as some non-physical minds. At this point we really cannot disregard either possibility, given our still incomplete understanding and knowledge of Nature, Science and Consciousness.
Just because we can conceive of a program giving perfect Chinese responses while simultaneously not being conscious of those responses, it doesn't follow that it is metaphysically impossible for there to be a program giving perfect Chinese responses AND simultaneously being conscious of those responses.
It might also just be the case that we have really bad methods for testing and differentiating between two such similar yet very distinct cases.

Alexander R Pruss said...

Colin:

This is very interesting, but I think the homunculus solution may be different from mine. The homunculus solution is a two-person solution. But the case of the computer running two parallel conscious processes sandboxed from each other shows that it is not right to identify the person with something that performs the computation or runs some software. For it is the hardware that performs the computation and runs the software, and there is only one relevant piece of hardware, and yet two persons. Thus, in the Chinese room case, the two-person functionalist should not identify the clerk with a physical object, but with a process running in a physical object's brain. The process is constituted by a piece of running software, and it is a category mistake to think of the process as performing the computation or running the software.

There is still a homunculus. For the computational process P1 that is the clerk has a subprocess P2 that is a Chinese speaker. But I think your objection no longer applies. For P2 does not understand in virtue of running a program. Rather, P2 understands in virtue of being constituted by the running of a program C (for Chinese), just as P1 understands in virtue of being constituted by the running of a program E (for English). It is true that P2 is some sort of emulation subprocess of P1, of course. Your suggestion was to have another clerk run whatever kind of software the homunculus is running. But now the homunculus is constituted by the running of C, rather than the homunculus itself being the subject running C. If we extract C and let it run directly on another clerk, without the emulation subsystem, then what we have is just a clerk who is a perfectly ordinary Chinese speaker.
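
To make the constitution point vivid, here is a toy sketch (the miniature instruction set and all names are invented purely for illustration): an outer program E, whose running constitutes P1, contains an emulation layer that steps an inner program C, whose running constitutes P2. Note that C never runs anything itself; it is just data stepped along by E.

```python
# Minimal sketch (invented for illustration): an outer program E whose run
# constitutes P1, containing an emulation layer that steps an inner program C,
# whose run constitutes P2. C never "runs" anything itself; it is just data
# being stepped by E's interpreter loop.

# The inner program C, written for a toy instruction set.
C = [("say", "你好"), ("say", "我听得懂中文"), ("halt", None)]

def step_inner(program, pc):
    """One step of the emulation layer: execute a single instruction of C."""
    op, arg = program[pc]
    if op == "say":
        print("[inner/C]", arg)
        return pc + 1
    return None  # halt

def run_E():
    """The outer program E: does its own 'English' work and, interleaved
    with it, steps the inner program C via the emulation layer."""
    pc = 0
    while pc is not None:
        print("[outer/E] doing English-based work")
        pc = step_inner(C, pc)

run_E()
```

Extracting C and running it directly, without the emulation layer, would just give an ordinary run of C, which corresponds to the perfectly ordinary Chinese speaker above.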

On the other hand, if we take the one-person two-personality view, then there isn't any homunculus. There is just one person with two consciousness processes somewhat sandboxed from each other. The clerk understands English and the clerk understands Chinese, but the two understandings are largely sandboxed from each other.

For Christian readers, it is worth noting that if the Incarnation makes sense, so does the one-person reading of the Chinese room. For Christ has two minds, a human mind and a divine mind, and presumably there is a separate stream of consciousness associated with each mind, so that Christ by his human mind need not be aware of what he is aware of by the divine mind (though of course by omniscience, he is aware of absolutely everything by the divine mind).

Zsolt Nagy said...
This comment has been removed by the author.
Zsolt Nagy said...

I knew it! You all have and hear multiple distinct voices in your heads.
So then what's wrong with me, if I only have and hear my one and only voice in my head?
Also, I can't hear other voices from other heads directly, only indirectly by sound waves. So I cannot directly confirm that you all have multiple distinct voices in your heads.
Is there a method to directly confirm such a hypothesis?

Zsolt Nagy said...

"Can a computer fool you into thinking it is human?" by Tim Harford
Really, it is not especially difficult to fool and trick a conscious human. It is also all too easy to be fooled and tricked by one's own intellect.
Instead of, or besides, thinking about the Chinese Room thought experiment as supposedly showing that consciousness is not reducible to computation, one is also entitled by that thought experiment to think very carefully about how consciousness can and should be identified in the first place.
If you cannot tell a conscious being apart from a non-conscious being, then by what means, really, is consciousness not reducible to supposedly non-conscious computations?

Colin Causey said...

Dr. Pruss,

Thanks for the reply! I’m understanding more clearly now. Yes, I agree that in your proposed scenario, my objection no longer applies. The homunculus doesn’t understand in virtue of running the program itself, but is rather constituted by a running program that is executed by something else. On that theory, I concur that it seems like the Chinese room is simply inconclusive.

I tend to think that the most fundamental argument against computationalism/Strong AI is the idea that computation is itself a mind-dependent phenomenon. Something counts as performing a computation only if an observer interprets the physical states of the system as having computational significance. Sometimes this argument is taken in the direction of pancomputationalism, the idea that every physical system computes everything. And if pancomputationalism is true and Strong AI is true, then this would entail panpsychism. Thus, if we reject panpsychism, then we should reject Strong AI. I think, however, that there are good arguments against pancomputationalism. But the idea of the mind-dependence of computation doesn’t entail pancomputationalism, and so even if pancomputationalism is false, the mind-dependency argument could still go through.

There may, in the functionalist spirit, be non-trivial causal constraints that limit what kind of physical system could function as a computer. Similarly, there are non-trivial causal constraints on what sort of a thing could function as a knife. For example, steel will do the job of cutting, but shaving cream will not. So, not just any old thing could function as a knife. Nevertheless, that something counts as a knife at all is dependent on our attitudes towards it. A knife is an artifact. I think that a similar thing applies to computation in physical systems. Perhaps not any old thing could function as a computer. Nevertheless, that a given thing is a computer is dependent on our attitudes towards it. If this is right, then computation is to be explained in terms of mind rather than mind in terms of computation. Obviously, this is just a very basic sketch.

Alexander R Pruss said...

That's basically one of the main reasons I have always been suspicious of Strong AI.

There is a way out of this, but it is expensive: require teleology for computation. This undercuts pancomputationalism, since not all of the abstract ways that a physical system could be said to compute realize the system's teleology. To save Strong AI, we can say that you can get teleology from a programmer. To save physicalist functionalism about human beings, we can then say that you can get teleology from evolution (Millikan, etc.). The cost is that consciousness now becomes an extremely extrinsic property: a computer's being conscious is partly constituted by the intentions of the programmer and a human's being conscious is partly constituted by what evolutionary events happened millions of years ago (and besides that, there are serious technical problems with the evolutionary reduction of teleology).

--

Note that it's not just panpsychism that results from Strong AI and pancomputationalism. Panpsychism doesn't seem so terrible to me. But what we get is what one might call panomnipsychism--every physical system is in every possible phenomenal state at every time. That undercuts much of ethics, since then there is no point in relieving anyone's suffering, because every system is always suffering in every possible way anyway.

Alexander R Pruss said...

By the way, causal constraints on what can function as a computer are not enough. For we still get omnicomputationalism about computers--anything that IS a computer satisfies the causal constraints, and hence computes everything all the time--and hence, given functionalism, my laptop right now has every possible conscious state. Rather, we need causal constraints on what process can count as a computation of a particular sort (e.g., what process can count as an addition).
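
As a toy illustration of the worry (the code and names below are made up purely for the example), any sequence of distinct physical states can be paired with the state trace of any chosen computation simply by stipulating an interpretation map; nothing about the physics selects one map over another:

```python
# Toy illustration only (all names invented): pairing an arbitrary sequence of
# physical states with the state trace of a chosen computation by stipulation.

def adder_trace(a: int, b: int) -> list[str]:
    """The computational trace we would like to 'find' in the physical system."""
    return [f"load {a}", f"load {b}", f"add -> {a + b}", "halt"]

def physical_state_sequence(n: int) -> list[str]:
    """Stand-in for n successive, distinct, otherwise meaningless states of
    some physical system (a wall, a rock, a pail of water)."""
    return [f"s{i}" for i in range(n)]

trace = adder_trace(2, 3)
states = physical_state_sequence(len(trace))

# The 'interpretation': an arbitrary stipulated map from physical states to
# computational states. Nothing in the physics privileges this map over any
# other map of the same shape.
interpretation = dict(zip(states, trace))

for s in states:
    print(f"physical state {s} is interpreted as: {interpretation[s]}")
```

Under one stipulated map the states "compute" an addition; under another map of the same shape, anything else of the same length. That is why the constraints have to attach to what counts as a computation of a particular sort, not merely to what counts as a computer.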

This causal constraint view seems to me to be problematic for multiple reasons. The first is that you have no epistemic access to what the causal constraint is besides the fact that your brain fulfills the constraint. It's reasonable for you to generalize from that that all human brains fulfill the constraints, and maybe even all brains on earth. But it is not reasonable to generalize that alien life forms fulfill the constraints. This leads to an implausible scepticism about alien consciousness.

Second, now it seems lucky that the evolved beings on earth meet the constraint. (Remember that we don't know what the constraint is, besides the fact that human brains meet it.) Why did evolution come up with a solution that meets the constraint rather than a solution that produces the same behavior but does not meet the constraint? This isn't a problem for theists, but is a problem for physicalists.

Third, if this is actually going to support Strong AI, then we have the problem that we have no idea of what hardware architectures actually meet the constraints. What if carbon is required? What if only analog systems are allowed? How many layers of emulation and of what sort are allowed?