Tuesday, July 2, 2024

Do we have normative powers?

A normative power is supposed to be a power to directly change normative reality. We can, of course, indirectly change normative reality by affecting the antecedents of conditional norms: By unfairly insulting you, I get myself to have a duty to apologize, but that is simply due to a pre-existing duty to apologize for all unfair insults.

It would be attractive to deny our possession of normative powers. Typical examples of normative powers are promises, commands, permissions, and requests. But all of these can seemingly be reduced to conditional norms, such as:

  • Do whatever you promise

  • Do whatever you are validly commanded

  • Refrain from ϕing unless permitted

  • Treat the fact that you have been requested to do something as a reason for doing it

One might think that one can still count as having a normative power even if it is reducible to prior conditional norms. Here is a reason to deny this. I could promise to send you a dollar on any day on which your dog barks. Then your dog has the power to obligate me to send you a dollar, a power reducible to the norm arising from my promise. But dogs do not have normative powers. Hence an ability to change normative reality by affecting the antecedents of a prior conditional norm is not a normative power.

If this argument succeeds, then a power to affect normative reality that is reducible to a non-normative power (such as the power to bark) plus a prior norm is not a normative power. Are there any normative powers, then, powers not reducible in this way?

I am not sure. But here is a non-conclusive reason to think so. It seems we can invent new useful ways of affecting normative reality, within certain bounds. For instance, normally a request comes along with a permission: a request creates a reason for the other party to do the requested action while removing any reasons of non-consent against its performance. But there are rare contexts where it is useful to create a reason without removing reasons of non-consent. An example is “If you are going to kill me, kill me quickly.” One can see this as creating a reason for the murderer to kill one quickly, without removing reasons of non-consent against killing (or even against killing quickly). Or, for another example, normally a general’s command in an important matter generates a serious obligation. But there could be cases where the general doesn’t want a subordinate to feel very guilty for failing to fulfill the command, and it would be useful for the general to introduce a new commanding practice, a “slight command”, which generates an obligation, but one that it is only slightly wrong to disobey.

There are approximable and non-approximable promises. When I promise to bake you seven cookies, and I am short on flour, normally I have reason to bake you four. But there are cases where there is no reason to bake you four: perhaps you are going to have seven guests, you want to serve them all the same sweet, and so four are useless to you (maybe you hate cookies yourself). Normally we leave such decisions to common sense and don’t make them explicit. However, we could also imagine making them explicit, and we could imagine promises with express approximability rules (perhaps when you can’t do cookies, cupcakes will be a second best; perhaps they won’t be). We can even imagine complex rules of preferability between different approximations to the promise: if it’s sunny, seven cupcakes is a better approximation than five cookies, while if it’s cloudy, five cookies is the better approximation. These rules might also specify the degree of moral failure that each approximation represents. It is, plausibly, within our normative authority over ourselves to issue promises with all sorts of approximability rules, and we can imagine a society inventing such practices.

Intuitively, normally, if one is capable of a greater change of normative reality, one is capable of a lesser one. Thus, if a general has the authority to create a serious obligation, they have the authority to create a slight one. And if you are capable of both creating a reason and providing a permission, you should be able to do one in isolation from the other. If you have the authority to command, you have the standing to create non-binding reasons by requesting.

We could imagine a society which starts with two normative powers, promising and commanding, and then invents the “weaker” powers of requesting and permitting, and an endless variety of normative subtlety.

It seems plausible to think that we are capable of inventing new, useful normative practices. These, of course, cannot amount to a normative power grab: there are limits. The epistemic rule of thumb for determining these limits is that the invented powers should not exceed ones we clearly already have.

It seems a little simpler to think that we can create new normative powers within predetermined limits than that all our norms are preset and we simply instantiate their antecedents. But while this is a plausible argument for normative powers, it is not conclusive.

Monday, July 1, 2024

Duplicating electronic consciousnesses

Assume naturalism and suppose that digital electronic systems can be significantly conscious. Suppose Alice is a deterministic significantly conscious digital electronic system. Imagine we duplicated Alice to make another such system, Bob, and fed them both the same inputs. Then there are two conscious beings with qualitatively the same stream of consciousness.

But now let’s add a twist. Suppose that we create a monitoring system that continually checks all of Alice’s and Bob’s components, and as soon as any pair of corresponding components disagree, i.e., are in different states, the system pulls the plug on both, thereby resetting all components to state zero. In fact, however, everything works well, and the inputs are always the same, so there is never any deviation between Alice and Bob, and the monitoring system never does anything.

What happens to the consciousnesses? Intuitively, neither Alice nor Bob should be affected by a monitoring system that never actually does anything. But it is not clear that this is the conclusion that specific naturalist theories will yield.

First, consider functionalism. Once the monitoring system is in place, both Alice and Bob change with respect to their dispositional features. All the subsystems of Alice are now incapable of producing any result other than one synchronized to Bob’s subsystems, and vice versa. I think a strong case can be made that, on functionalism, Alice’s and Bob’s subsystems lose their defining functions when the monitoring system is in place, and hence that Alice and Bob lose consciousness. Therefore, on functionalism, consciousness has an implausible extrinsicness to it. The duplication-plus-monitoring case is thus some evidence against functionalism.

Second, consider Integrated Information Theory. It is easy to see that the whole system, consisting of Alice, Bob and the monitoring system, has a very low Φ value. Its components can be thought of as just those of Alice and Bob, but with a transition function that sets everything to zero if there is a deviation. We can now split the system into two subsystems: Alice and Bob. Each subsystem’s behavior can be fully predicted from that subsystem’s state plus one additional bit of information that represents whether the other system agrees with it. Because of this, the Φ value of the system is at most 2 bits, and hence the system as a whole has very, very little consciousness.
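One rough way to see the bound (a schematic rendering, not a computation within IIT’s official Φ formalism): under the bipartition of the whole into the Alice part and the Bob part, each part’s next state is fixed by its own current state plus a single bit saying whether the other part agrees with it, so the information integrated across that partition is at most

\[
\Phi(\text{Alice}+\text{Bob}+\text{monitor}) \;\le\; \underbrace{1~\text{bit}}_{\text{needed by the Alice part}} \;+\; \underbrace{1~\text{bit}}_{\text{needed by the Bob part}} \;=\; 2~\text{bits}.
\]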

Moreover, Alice remains significantly conscious: we can think of Alice as having just as much integrated information after the monitoring system is attached as before, but now with one new bit of environmental dependency, so the Φ measure does not change significantly when the monitoring is added. And because the joint system is not significantly conscious, Integrated Information Theory’s proviso that a system loses consciousness when it comes to stand in a part-to-whole relationship with a more conscious system does not apply.

Likewise, Bob remains conscious. So far everything seems perfectly intuitive: adding a monitoring system doesn’t create a new significantly conscious system, and it doesn’t destroy the two existing conscious systems. However, here is the kicker. Let X be any subset of Alice’s components. Let S_X be the system consisting of the components in X together with all of Bob’s components that don’t correspond to the components in X. In other words, S_X is a mix of Alice’s and Bob’s components. It is easy to see that the information-theoretic behavior of S_X is exactly the same as the information-theoretic behavior of Alice (or of Bob, for that matter). Thus, the Φ value of S_X will be the same for all X.

Hence, on Integrated Information Theory, each of the S_X systems will be equally conscious. The number of these systems equals 2^n, where n is the number of components in Alice. Of course, one of these 2^n systems is Alice herself (that’s S_A, where A is the set of all of Alice’s components) and another is Bob himself (that’s S_∅, where X is the empty set). Conclusion: by adding a monitoring system to our Alice and Bob pair, we have created a vast number of new, equally conscious systems: 2^n − 2 of them!
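Here is a minimal toy sketch of the counting (the three-component dynamics is invented purely for illustration, and it tracks state trajectories rather than computing Φ): since Alice and Bob never diverge, every mixed system S_X has exactly the same trajectory, and hence the same information-theoretic behavior, as Alice, and there are 2^n such systems.

```python
from itertools import product

def step(state, inp):
    """Deterministic toy update: each component XORs itself with its right
    neighbour and the shared input bit."""
    n = len(state)
    return tuple(state[i] ^ state[(i + 1) % n] ^ inp for i in range(n))

n = 3
alice = bob = (0, 1, 1)          # Bob starts as an exact duplicate of Alice
alice_traj, bob_traj = [alice], [bob]

for inp in [1, 0, 1, 1, 0]:      # both systems receive the same inputs
    alice, bob = step(alice, inp), step(bob, inp)
    if alice != bob:             # the monitor: reset everything on any deviation
        alice = bob = (0,) * n   # (never triggered, since they cannot diverge)
    alice_traj.append(alice)
    bob_traj.append(bob)

# For each subset X of component indices, the hybrid system S_X takes the
# components in X from Alice and the remaining components from Bob.
hybrids = []
for mask in product([0, 1], repeat=n):
    traj = [tuple(a[i] if mask[i] else b[i] for i in range(n))
            for a, b in zip(alice_traj, bob_traj)]
    hybrids.append(traj)

# Since Alice and Bob never diverge, every S_X has exactly Alice's trajectory.
assert all(traj == alice_traj for traj in hybrids)
print(f"{len(hybrids)} systems S_X in total; "
      f"{len(hybrids) - 2} of them are neither Alice nor Bob")
# prints: 8 systems S_X in total; 6 of them are neither Alice nor Bob
```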

The ethical consequences are very weird. Suppose that Alice has some large number of components, say 10^11 (that’s roughly how many neurons we have). We duplicate Alice to create Bob. We’ve doubled the number of beings with whatever interests Alice had. And then we add a dumb monitoring system that pulls the plug given a deviation between them. Suddenly we have created 2^(10^11) − 2 systems with the same level of consciousness. Suddenly, the moral consideration owed to the Alice/Bob line of consciousness vastly outweighs everything else.
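Just to spell out the arithmetic of that figure (using log₁₀ 2 ≈ 0.301):

\[
2^{10^{11}} - 2 \;\approx\; 10^{\,10^{11}\log_{10} 2} \;\approx\; 10^{3 \times 10^{10}},
\]

i.e., a 1 followed by about thirty billion zeros.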

So both functionalism and Integrated Information Theory have trouble with our duplication story.