Monday, July 1, 2024

Duplicating electronic consciousnesses

Assume naturalism and suppose that digital electronic systems can be significantly conscious. Suppose Alice is a deterministic significantly conscious digital electronic system. Imagine we duplicated Alice to make another such system, Bob, and fed them both the same inputs. Then there are two conscious beings with qualitatively the same stream of consciousness.

But now let’s add a twist. Suppose we create a monitoring system that continually checks all of Alice’s and Bob’s components, and as soon as any corresponding components disagree (are in different states), it pulls the plug on both, resetting all components to state zero. In fact, however, everything works well, and the inputs are always the same, so there is never any deviation between Alice and Bob, and the monitoring system never does anything.
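
To fix ideas, here is a minimal toy sketch of the setup in Python. Everything in it is an arbitrary stand-in: the component count, the transition rule, and the input stream are hypothetical, and of course nothing in the code models consciousness itself. It just exhibits the structure of the case: two identical deterministic systems, the same inputs, and a monitor that would reset both on any disagreement but in fact never fires.

```python
# Toy sketch of the Alice/Bob/monitoring setup. The component count, transition
# rule, and input stream are arbitrary stand-ins; nothing here models consciousness.

N = 8  # number of components (hypothetical; the real Alice would have vastly more)

def step(state, inp):
    """Deterministic transition: each component XORs with its left neighbour and the input bit."""
    return tuple(state[i] ^ state[(i - 1) % N] ^ inp for i in range(N))

def monitor(alice, bob):
    """Pull the plug: if any corresponding components disagree, reset both to all zeros."""
    if alice != bob:
        return (0,) * N, (0,) * N
    return alice, bob

alice = bob = (1, 0, 1, 1, 0, 0, 1, 0)   # Bob starts as an exact duplicate of Alice
inputs = [1, 0, 0, 1, 1, 0, 1, 0]        # both are fed the very same input stream

for inp in inputs:
    alice, bob = step(alice, inp), step(bob, inp)
    alice, bob = monitor(alice, bob)      # never actually does anything here
    assert alice == bob                   # no deviation ever arises
```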

What happens to the consciousnesses? Intuitively, neither Alice nor Bob should be affected by a monitoring system that never actually does anything. But it is not clear that this is the conclusion that specific naturalist theories will yield.

First, consider functionalism. Once the monitoring system is in place, both Alice and Bob change with respect to their dispositional features. All the subsystems of Alice are now incapable of producing any result other than one synchronized to Bob’s subsystems, and vice versa. I think a strong case can be made that on functionalism, Alice and Bob’s subsystems lose their defining functions when the monitoring system is in place, and hence lose consciousness. Therefore, on functionalism, consciousness has an implausible extrinsicness to it. The duplication-plus-monitoring case is some evidence against functionalism.

Second, consider Integrated Information Theory. It is easy to see that the whole system, consisting of Alice, Bob and the monitoring system, has a very low Φ value. Its components can be thought of as just those of Alice and Bob, but with a transition function that sets everything to zero if there is a deviation. Now split the system into two subsystems: Alice and Bob. Each subsystem’s behavior can be fully predicted from that subsystem’s own state plus one additional bit of information representing whether the other subsystem currently agrees with it. The partition into Alice and Bob therefore severs at most one bit of dependence in each direction, so the Φ value of the system is at most 2 bits, and hence the system as a whole has very, very little consciousness.
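
The one-extra-bit point can be checked concretely in the toy model above. This is not a Φ calculation, just an illustration of the dependence structure: under the monitored dynamics, the next state of the Alice half is a function of Alice’s own state, the input, and the single bit recording whether Bob currently agrees with her. The sketch below verifies this by brute force over all joint states (the transition rule remains an arbitrary stand-in).

```python
# Brute-force check (in the same toy model) that the Alice half of the monitored
# system depends on the Bob half only through one bit: "does Bob agree with Alice?"
from itertools import product

N = 3  # small enough to enumerate every joint state

def step(state, inp):
    return tuple(state[i] ^ state[(i - 1) % N] ^ inp for i in range(N))

def joint_step(alice, bob, inp):
    if alice != bob:                            # monitor fires: everything resets to zero
        return (0,) * N, (0,) * N
    return step(alice, inp), step(bob, inp)

def alice_half_step(alice, agreement_bit, inp):
    # candidate local rule: Alice's own state plus the single agreement bit
    return step(alice, inp) if agreement_bit else (0,) * N

states = list(product((0, 1), repeat=N))
for alice, bob, inp in product(states, states, (0, 1)):
    next_alice, _ = joint_step(alice, bob, inp)
    assert next_alice == alice_half_step(alice, alice == bob, inp)
```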

Moreover, Alice remains significantly conscious: we can think of Alice as having just as much integrated information after the monitoring system is attached as before, but with one new bit of environmental dependency, so adding the monitoring does not significantly change Alice’s Φ measure. And because the joint system is not significantly conscious, Integrated Information Theory’s proviso that a system loses consciousness when it comes to stand in a part-to-whole relationship with a more conscious system does not apply.

Likewise, Bob remains conscious. So far everything seems perfectly intuitive. Adding a monitoring system doesn’t create a new significantly conscious system, and doesn’t destroy the two existing conscious systems. However, here is the kicker. Let X be any subset of Alice’s components. Let S_X be the system consisting of the components in X together with all of Bob’s components that don’t correspond to the components in X. In other words, S_X is a mix of Alice’s and Bob’s components. It is easy to see that the information-theoretic behavior of S_X is exactly the same as the information-theoretic behavior of Alice (or of Bob, for that matter). Thus, the Φ value of S_X will be the same for all X.

Hence, on Integrated Information Theory, each of the S_X systems will be equally conscious. The number of these systems is 2^n, where n is the number of components in Alice. Of course, one of these 2^n systems is Alice herself (that’s S_A, where A is the set of all of Alice’s components) and another is Bob himself (that’s S_∅, where ∅ is the empty set). Conclusion: by adding a monitoring system to our Alice and Bob pair, we have created a vast number of new, equally conscious systems: 2^n − 2 of them!
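
Here is a sketch of the counting point in the same toy setting. Because Alice’s and Bob’s corresponding components are always in the same state, every mixed system S_X traces out exactly the same component-by-component history as Alice, whichever subset X we choose; the code simply enumerates all 2^n choices for a small n (the rule and numbers are, again, arbitrary illustrations).

```python
# In the toy model, every mixed system S_X (components in X from Alice, the rest
# from Bob) has exactly the same history as Alice, so there are 2**N behaviourally
# identical systems, two of which are just Alice and Bob themselves.
from itertools import product

N = 4

def step(state, inp):
    return tuple(state[i] ^ state[(i - 1) % N] ^ inp for i in range(N))

initial = (1, 0, 1, 1)
inputs = [1, 0, 1, 1, 0, 0]

# Alice and Bob run in lockstep: same rule, same inputs, the monitor never fires.
alice_history = [initial]
for inp in inputs:
    alice_history.append(step(alice_history[-1], inp))
bob_history = list(alice_history)                 # Bob is an exact duplicate

count = 0
for X in product((False, True), repeat=N):        # each X: which components come from Alice
    mixed_history = [tuple(a[i] if X[i] else b[i] for i in range(N))
                     for a, b in zip(alice_history, bob_history)]
    assert mixed_history == alice_history         # behaviourally indistinguishable from Alice
    count += 1

print(count)   # 2**N = 16 mixed systems; Alice is X = all-True, Bob is X = all-False
```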

The ethical consequences are very weird. Suppose that Alice has some large number of components, say 10^11 (that’s roughly how many neurons we have). We duplicate Alice to create Bob. We’ve doubled the number of beings with whatever interests Alice had. And then we add a dumb monitoring system that pulls the plug given any deviation between them. Suddenly we have created 2^(10^11) − 2 systems with the same level of consciousness. Suddenly, the moral consideration owed to the Alice/Bob line of consciousness vastly outweighs everything else.
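
For a rough sense of scale (taking the 10^11 component count at face value), 2^(10^11) − 2 is a number with about 3 × 10^10 decimal digits; a one-line calculation:

```python
# Rough scale of 2**(10**11) - 2: the number of decimal digits of 2**k is
# floor(k * log10(2)) + 1, and subtracting 2 does not change the digit count.
import math

k = 10**11
digits = math.floor(k * math.log10(2)) + 1
print(f"{digits:,}")   # about 30,102,999,567 digits
```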

So both functionalism and Integrated Information Theory have trouble with our duplication story.

4 comments:

  1. I think functionalists would deny that introducing the monitoring system would destroy consciousness. I think they would say that ‘Alice’ is conscious because its states have the appropriate causal relations to its input stream. Similarly for ‘Bob’. Granted, the setup forces them to be the same, but that doesn’t make them lose consciousness. Each still has the right causal relation to its input stream.

    Here's a variation. Suppose that Fred, a normal human, lives on Earth and parallel Fred lives on a parallel Earth. Up to now, everything about Fred and his relevant environment has been exactly matched by parallel Fred. There is a (possibly supernatural) monitoring system that will kill them both if they ever differ.

    Would functionalists have to say that Fred was not really conscious, because he could not differ from parallel Fred? I don’t see why. I think they would say that Fred is conscious because his internal states have the right causal relations to his environment. They would say the same about parallel Fred and his environment. The monitoring system might cause them both to die in an unexpected (possibly supernatural) way. But this won’t stop them being conscious while they are alive.

  2. I think in the human case, the functionalist may need to say that the paralleling kills both of them. The reason is that their subsystems no longer have the right counterfactual properties. It is, for instance, false that if the eyes were presented with a red patch, they would emit such-and-such nerve signals. Instead, if the eyes were presented with a red patch, the human would die.

    One escape in the literature is to define the functions of subsystems in terms of the species rather than the individual.

  3. The argument assumes that a conscious being can be fully deterministic in human terms. I think that is highly questionable based on current AI research, which seems to value the happy (not predicted) surprises that can be generated by AI, even when in retrospect we can see how the surprising response was generated.

  4. Alex: Granted, paralleled humans differ from normal humans, because there is an extra thing that can kill them. So setting up the monitoring ends Fred’s life as a normal human. In that sense, normal Fred dies. But paralleled Fred doesn’t die until he either dies in a normal way or is killed by the monitoring system. Until then, his states are just what they would have been without the monitoring, and they are related to his environment just as they would have been.

    Here's a line: Presenting a red patch should cause the appropriate nerve signals, provided you are still alive. Normal people probably will be - it’s possible, but unlikely, that (for example) you might be shot dead as the red patch is presented. Linked people probably won’t be - it’s possible, but unlikely, that you will be saved by a matching change in your doppelganger. Either way, if you are alive, the appropriate nerves will be firing.
