Saturday, March 30, 2019

A requirement on the intention in every action

I take intentions to specify the success conditions for an action. An action is successful precisely to the extent that it satisfies the intentions. I am drawn to the following rather strong requirement on the permissible intentions for an action:

  1. If it was metaphysically possible for one’s intentions to be satisfied by a wrong action, one acted wrongly.

For instance, suppose I eat a sausage and I don’t care at all what species of animal it comes from. Then I have done wrong, because my intention would have been satisfied even if it turned out to be from a human. This is true even if what I ate was, in fact, pork (I am assuming that it is permissible to eat pigs). For my action of eating the pork was performed under a description equivalent to “intentionally eating pork, human flesh, or any other kind of meat.” A bad description to act under.

Of course, normally, we don’t explicitly state to ourselves all the things going into the intention: there are standing background intentions, such as that the sausage be made of meats it is permissible to eat. We can probe what the intention was by asking whether our action would have been successful under hypothetical conditions. For permissibility, it needs to be the case that had the sausage turned out to be human flesh, I would have been correct to say: “I was trying to do something else.”

One way to fulfill (1) is simply to have a uniform standing intention in one’s actions that is logically incompatible with any impermissible action. For instance, one could have a standing intention in one’s actions for the actions to be properly expressive of one’s love of God—or just for them to be permissible.

Friday, March 29, 2019

Moving from world to world

If the A-theory of time is true, then it is (metaphysically) possible that the year 2010 have the objective property P of presentness and it is also possible that the year 2019 have P. For it is true that 2019 has P, and what is true is possible. But by the same token in 2010 it was true that 2010 has P, and so it was possible that 2010 have P. And what is metaphysically possible does not change. So even now it is possible that 2010 have P.

But a proposition is possible if and only if it is true at some possible world. Thus, if the A-theory of time is true, there are possible worlds where 2010 has P and possible worlds where 2019 has P, and in 2010 we lived in one of the worlds where 2010 has P, while now in 2019 we instead live in a world where 2019 has P.
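For readers who want it spelled out, here is one way to restate the argument compactly (just a restatement, with p and q as labels for the two presentness propositions, and assuming that presentness attaches to only one year at a time):

```latex
% p = the proposition that 2010 has P; q = the proposition that 2019 has P.
% (modal operator \Diamond from amssymb)
\begin{enumerate}
  \item Now $q$ is true; so $\Diamond q$.
  \item In 2010, $p$ was true; so, back then, $\Diamond p$.
  \item What is metaphysically possible does not change; so $\Diamond p$ holds now as well.
  \item $\Diamond r$ iff $r$ is true at some possible world; so there is a world $w_1$ at which $p$
        and a world $w_2$ at which $q$.
  \item No world has two present years, so $w_1 \neq w_2$: in 2010 the actual world was a world
        like $w_1$, and now it is a world like $w_2$.
\end{enumerate}
```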

Consequently, given the A-theory of time, what world we inhabit continually changes.

This seems counterintuitive. For now it looks like caring about what will happen is caring about what happens in some merely possible world.

Wednesday, March 27, 2019

Culpability for irrational action when you are culpable for the irrationality

It is widely thought that:

  1. If you act wrongly in an irrational state, but you are responsible for being in the irrational state, then the irrationality does not take away your culpability for the wrongful action.

But consider these two cases.

Case 1: Suppose that you now program an unstoppable robot to punch me in the face ten years from now. The law sentences you to a jail sentence justly suited to what you have done. Then, ten years later, the robot punches me in the face.

Comments: You clearly should not get another jail sentence for that. You’ve already been punished for all that you did, namely programming the robot.

Case 2: The same as Case 1, except now the unstoppable robot is programmed to brainwash you into punching me in the face. Ten years later, the robot brainwashes you, and you punch me in the face.

Comments: I think it is almost as clear as in Case 1 that you should not get another jail sentence. It shouldn’t make any difference to your culpability whether the robot punches me directly, as in Case 1, or brainwashes you (or someone else) to punch me.

This judgment seems to contradict (1). For in Case 2, you act out of an irrational state but are responsible for that irrational state.

I think we need to clarify things. We talk of culpability for actions and culpability for the effects of actions. Thus, if Alice freely punches Bob in the face, we can say that she is culpable both for punching Bob and for the effect, say, Bob’s broken nose. But when we say that Alice is culpable for Bob’s broken nose, I think this should be taken as shorthand for: Alice is culpable for freely punching Bob in the face in a way that resulted in a broken nose. In other words, culpability for the effects of actions is culpability for an action qua resulting in the effects.

In Case 1, you have effect-culpability for the robot punching me, and action-culpability for programming the robot. Talking of the effect-culpability for my being punched is shorthand for saying that you are action-culpable for programming the robot so that it would punch me.

In Case 2, we should say a very similar thing. You have effect-culpability for your punching me, and action-culpability for programming the robot to brainwash you to do that. You do not have action-culpability for your punching me, because your culpability for punching me is really just your culpability for programming the robot to cause you to punch me.

Principle (1) is right as regards effect-culpability but wrong as regards action-culpability.

Monday, March 25, 2019

Internalism about non-derivative responsibility

Internalism about non-derivative responsibility holds that whether one is non-derivatively responsible for a decision depends only on facts about the agent during the time of the decision.

Only an incompatibilist can be an internalist. For suppose that compatibilism is true. Then there will be possible cases of non-derivative responsibility where what the agent decides will be determined by factors just prior to the decision. But of course those factors could have been aberrantly produced in order to determine the particular decision by some super-powerful, super-smart being, and then the agent would not have been responsible for the decision. So whether there is responsibility on compatibilism depends on factors outside the time of the decision.

Speaking for myself, I have a strong direct intuition that internalism about non-derivative responsibility is true. But it would be interesting to see whether arguments can be constructed for or against such internalism. If so, that might give another way forward in the compatibilism/incompatibilism debate.

Thursday, March 21, 2019

If open futurism is true, then there are possible worlds that can't ever be actual

Assume open futurism, so that, necessarily, undetermined future tensed “will” statements are either all false or all lack truth value. Then there are possible worlds containing me such that it is impossible for it to be true that I am ever in them. What do I mean?

Consider possible worlds where I flip an indeterministic fair coin on infinitely many days, starting with day 1. Among these worlds, there is a possible world wH where the coin always comes up heads. But it is impossible for it to be true that I am in that world. For that I ever am in that world entails that infinitely many future indeterministic fair coin tosses will be heads. But a proposition reporting future indeterministic events cannot be true given an open future. So, likewise, it cannot be true that I am ever in that world.

But isn’t it absurd that there be a possible world with me such that it is impossible that it be true that I am in it?

My presence is an unnecessary part of the above argument. The point can also be put this way. If open futurism is true, there are possible worlds (such as wH) that can’t possibly ever be actual.

If classical theism rules out open theism, then classical theism rules out presentism

On presentism, and on most if not all other versions of the A-theory, propositions change in truth value. For instance, on presentism, in the time of the dinosaurs it was not true that horses exist, but now it is true; on growing block, ten years ago the year 2019 wasn't at the leading edge of reality, but now it is. The following argument seems to show that such views are incompatible with classical theism.

  1. God never comes to know anything.

  2. If at t1, x doesn’t know a proposition p but at t2 > t1, x knows p, then x comes to know p.

  3. If propositions change in truth value, then there are times t1 < t2 and a proposition p such that p is not true at t1 and p is true at t2.

  4. It is always the case that God knows every true proposition.

  5. It is never the case that anyone knows any proposition that isn’t true.

  6. So, if propositions change in truth value, then there are times t1 < t2 and a proposition p such that God doesn’t know p at t1 but God does know p at t2. (by 3-5)

  7. So, if propositions change in truth value, God comes to know something. (by 2 and 6)

  8. So, propositions do not change in truth value. (by 1 and 7)

I think the only controversial proposition is (1). Of course, some non-classical theists—say, open theists—will deny (1). But non-classical theists aren’t the target of the argument.

However, there is a way for classical theists to try to get out of (1) as well. They could say that the content of God’s knowledge changes, even though God and God’s act of knowing are unchanging. The move would be like this. We classical theists accept divine simplicity, and hence hold that God would not have been intrinsically any different had he created otherwise than he did. But had God created otherwise than he did, the content of his knowledge would have been different (since God knows what he creates). So the content of God’s knowledge needs to be partially constituted by created reality. (This could be a radical semantic externalism, say.) Thus, had God created otherwise than he did, God (and his act of knowledge which is identical to God) would have been merely extrinsically different.

But exactly the same move allows one to reconcile the denial of (1) with immutability. The content of God’s knowledge is partially constituted by created reality, and hence as created reality changes, the content of God’s knowledge changes, but the change in God is merely extrinsic, like a mother’s change from being taller than her daughter to being shorter than her daughter solely due to her daughter’s growth.

I agree that denying (1) is compatible with God’s being intrinsically unchanging. For a long time I thought that this observation destroyed the argument (1)-(8). But I now think not. For I am now thinking that even if (1) is compatible with immutability, (1) is a part of classical theism. For it is a part of classical theism that God doesn’t learn in any way, and coming to know is a kind of learning.

Here is one way to see that (1) is a part of classical theism. Classical theists want to reject any open theist views. But here is one open theist view, probably the best one. The future is open and propositions reporting what people will freely do tomorrow are now either false or neither-true-nor-false, but tomorrow they come to be true. An omniscient being knows all true propositions, but it is no shortcoming of omniscience to fail to know propositions that aren’t true. Then, our open theist says, God learns these propositions as soon as they become true. This is all that omniscience calls for.

Now, classical theists will want to reject this open theist view on the grounds of its violating immutability. But they cannot do so if they themselves reject (1). For if the presentist (say) classical theist can reject (1) without violating immutability, then so can our open theist. Indeed, our open theist can say exactly the same thing I suggested earlier: God changes extrinsically as time progresses, and the content of God’s knowledge changes, but God remains intrinsically the same.

So, what do I think the classical theist should say to our open theist? I think this: that God doesn’t come to know is not just a consequence of the doctrine of immutability, but is itself a part of the doctrine of immutability. A God who learns is mutable in an objectionable way even if this learning is not an intrinsic change in God. But if we say this, then of course we are committed to (1), and we cannot be presentists or accept any other of the theories of time on which propositions change in truth value.

I think the best response on the part of the classical theist who is an entrenched presentist would be to deny (1) and concede that classical theism does not rule out open theism. Instead, open theism is ruled out by divine revelation, and revelation here adds to classical theism. But it seems very strange to say that classical theism does not rule out open theism.

Wednesday, March 20, 2019

God and the B-theory of time

  1. All reality is such that it can be known perfectly from the point of view of God.

  2. The point of view of God is eternal and timeless.

  3. Thus, all reality is such that it can be known perfectly from an eternal and timeless point of view.

  4. If all reality is such that it can be known perfectly from an eternal and timeless point of view, then the B-theory of time is true.

  5. So, the B-theory of time is true.

I am not sure of premise (4), however.

Tuesday, March 19, 2019

Will dogs live forever?

Suppose a dog lives forever. Assuming the dog stays roughly dog-sized, there is only a finite number of possible configurations of the dog’s matter (disregarding insignificant differences on the order of magnitude of a Planck length, say). Then, eventually, all of the dog’s matter configurations will be re-runs, as we will run out of possible new configurations. Whatever the dog is feeling, remembering or doing is something the dog has already felt, remembered or done. It will be literally impossible to teach the dog a new trick (without swelling the dog beyond normal dog size).

But a dog’s life is a material life, unlike perhaps the life of a person. Plausibly, a dog’s mental states are determined by the configuration of the dog’s (brain) matter. So, eventually, every one of the dog’s mental states will be a re-run, too.

And then we will run out of states that have been re-run only once, and the dog will only have states that are on their second or later re-run. And so on. There will come a day when whatever the dog is feeling, remembering or doing is something the dog has felt, remembered or done a billion times: and there is still eternity to go.
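The step from finitely many configurations to endless repetition is just the pigeonhole principle; here is a sketch (N is whatever finite number of dog-sized configurations there is):

```latex
% Pigeonhole sketch: finitely many possible states, infinitely many days.
Let $S$ be the set of possible configurations, with $|S| = N < \infty$, and let
$s_1, s_2, s_3, \dots$ be the dog's configuration on successive days. Then:
\begin{itemize}
  \item at most $N$ days can feature a configuration never seen before, so past some day every
        day's configuration is a re-run;
  \item the configurations that occur only finitely often are all exhausted by some day $T$;
  \item for any $k$ (say $k = 10^9$), each of the remaining, infinitely recurring configurations
        has occurred at least $k$ times by some day, so past the latest of these days (and past
        $T$), whatever state the dog is in has already occurred at least $k$ times.
\end{itemize}
```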

Moreover, we’re not just talking about momentary re-runs. Eventually, every day of the dog’s life will be an identical re-run of an earlier day of the dog’s life (at least insofar as the dog is concerned: things beyond the power of the dog’s sensory apparatus might change). And then eventually every year of the dog’s life will be a re-run of an earlier year. And then there will come a year when every coming year of the dog’s life will already have been done a billion times.

This doesn’t strike me as a particularly flourishing life for a dog. Indeed, it strikes me that it would be a more flourishing life for the dog to cut out the nth re-runs, and have the dog’s life come to a peaceful end.

Granted, the dog won’t be bored by the re-runs. In fact, probably the dog won’t know that things are being re-run over and over. In any case, dogs don’t mind repetition. But there is still something grotesque about such a life of re-runs. That’s just not the temporal shape a dog’s life should have, much as a dog shouldn’t be cubical or pyramidal in spatial shape.

If this is right, then considerations of a dog’s well-being do not lead to the desirability of eternal life for the dog. As far as God’s love for dogs goes, we shouldn’t expect God to make the dogs live forever.

This is, of course, the swollen head argument from naturalist accounts of humans, transposed to dogs.

But maybe God would make dogs live forever because of his love for their human friends, not because of his love for the dogs themselves? Here, I think there is a better case for eternal life for dogs. But I am still sceptical. For the humans would presumably know that from the dog’s point of view, everything is an endless re-run. The dog has already taken a walk that looked and felt just like this one a billion times, and there is an infinite number of walks ahead that look and feel just like this one to the dog. Maybe to the human they feel different: the human can think about new things each time, because naturalism is false of humans, and so differences in human mental states don’t require differences in neural states (or so those of us who believe in an eternal afterlife for humans should say). But to the dog it’s just as before. And knowing that on the dog’s side it’s just endless repetition would, I think, be disquieting and dissatisfying to us. It seems to me that it is not fitting for a human to be tied down for an eternity of friendship with a finite being that eventually has nothing new to exhibit in its life.

So, I doubt that God would make dogs live forever because of his love for us, either. And the same goes for other brute animals. So, I don’t think brute animals live forever.

All this neglects Dougherty’s speculative suggestion that in the afterlife animals may be transformed, Narnia-like, so that they become persons. If he’s right, then the naturalistic supervenience assumption will be no more true for the animals than for us, and the repetition argument above against dogs living forever will fail. But the argument above will still show that we shouldn’t expect brute animals to live forever. And I am dubious of the transformation hypothesis, too.

At the same time, I want to note that I think it is not unlikely that there will be brute animals on the New Earth. But if so, I expect they will have finite lifespans. For while an upper temporal limit to the life of a human would be an evil, an upper temporal limit to the life of a brute animal seems perfectly fitting.

Monday, March 18, 2019

Disliking

It is a staple of sermons on love that we are required to love our neighbor, not like them. I think this is true. But it seems to me that in many cases, perhaps even most cases, _dis_liking people is a moral flaw. My argument below has holes, but I still think there is something to the line of thought. I am sharing it because it has helped me identify what seems to be a flaw in myself, and it may be a flaw that you share.

Just about everyone has some dislikable feature. After all, just about everyone has a moral flaw, and every moral flaw is dislikable. Moreover, there are many dislikable features that are not moral flaws: a voice that is too hoarse, a face that is too asymmetrical, an intellect that is too slow, etc. However, that Alice has a dislikable feature F need not justify my disliking Alice: on its face it only justifies my disliking F. For the feature to justify disliking Alice, it would have to be a feature sufficiently central to Alice as a person. And only moral flaws or faults would have the relevant centrality, I think.

If I dislike persons because they have a disability or because of their gender or their race, that is a moral flaw in me, even if I act justly towards them. This suggests that dislikes cannot have an arbitrary basis. There must be a good reason for disliking. And it is hard to see how anything other than a moral flaw could form the right kind of basis.

Moreover, not just any moral flaw is sufficient to justify dislike of the person. It has to be a flaw that goes significantly beyond the degree of flawedness that people ordinarily exhibit. Here is a quick line of thought. Few people should dislike themselves. (Maybe Hitler should. And I don’t deny that almost everyone should be dissatisfied with themselves.) Hence few people are dislikable. Granted, there is a leap here: a move from being dislikable to self and being dislikable to another. But if the basis of dislikability is moral flaws, it seems to me that there would be something objectionably arbitrary about disliking someone who isn’t dislikable simpliciter.

Yet I find myself disliking people on the basis of features that aren’t moral flaws or at least aren’t moral flaws significantly bigger than flaws I myself have. Indeed, often the basis is a flaw smaller than flaws I know myself to have, and sometimes it is a flaw I myself share. This disliking is itself a flaw.

I may love the people I unfairly dislike. But I don’t love them enough. For unfair disliking goes against the appreciative aspect of love (unless, of course, the person is so flawed as to be really dislikable—in which case the appreciative aspect may be largely limited to an appreciation of what they ought to be rather than what they now are).

I used to be rather laissez-faire about my dislikes, on the fallacious ground that love is not the same thing as liking. Enough. Time to fight the good fight against dislike of persons and hence for a more appreciative love. Pray for me.

That said, there is nothing wrong in disliking particular dislikable features in others. But when they are dislikable, one should also dislike them in oneself.

Σ₁⁰ alethic Platonism

Here is an interesting metaphysical thesis about mathematics: Σ₁⁰ alethic Platonism. According to Σ₁⁰ alethic Platonism, every sentence about arithmetic with only one unbounded existential quantifier (i.e., an existential quantifier that ranges over all natural numbers, rather than all the natural numbers up to some bound), that is, every Σ₁⁰ sentence, has an objective truth value. (And we automatically get Π₁⁰ alethic Platonism, as Π₁⁰ sentences are equivalent to negations of Σ₁⁰ sentences.)

Note that Σ₁⁰ alethic Platonism is sufficient to underwrite a weak logicism that says that mathematics is about what statements (narrowly) logically follow from what recursive axiomatizations. For Σ₁⁰ alethic Platonism is equivalent to the thesis that there is always a fact of the matter about what logically follows from what recursive axiomatization: that a statement follows from a recursive axiomatization is itself a Σ₁⁰ claim (there is a number coding a derivation), and conversely every Σ₁⁰ sentence is equivalent to a claim of that form.

Of course, every alethic Platonist is a Σ₁⁰ alethic Platonist. But I think there is something particularly compelling about Σ₁⁰ alethic Platonism. Any Σ₁⁰ sentence, after all, can be rephrased into a sentence saying that a certain abstract Turing machine will halt. And it does seem like it should be possible to embody an abstract Turing machine as a physical Turing machine in some metaphysically possible world with an infinite future and infinite physical resources, and then there should be a fact of the matter whether that machine would in fact halt.
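To illustrate the correspondence with a stock example (a sketch; the choice of Goldbach’s conjecture is mine, not the post’s): the Σ₁⁰ sentence “some even number ≥ 4 is not the sum of two primes” is true just in case the following unbounded search halts, so a fact of the matter about whether the embodied machine would ever halt is a fact of the matter about the Σ₁⁰ sentence.

```python
# A Sigma^0_1 sentence as a halting question (illustrative sketch).
# "There is an even n >= 4 that is not the sum of two primes" is true
# exactly when this unbounded search halts.

def is_prime(k: int) -> bool:
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def is_sum_of_two_primes(n: int) -> bool:
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))

def search_for_counterexample() -> int:
    n = 4
    while True:                       # the one unbounded existential quantifier
        if not is_sum_of_two_primes(n):
            return n                  # halts iff the Sigma^0_1 sentence is true
        n += 2

# (No one knows whether this search would ever halt; that is Goldbach's conjecture.)
```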

There is a hitch in this line of thought. We need to worry about worlds with “non-standard” embodiments of the Turing machine, embodiments where the “physical Turing machine” is performing an infinite task (a supertask, in fact an infinitely iterated supertask). To rule those worlds out in a non-arbitrary way requires an account of the finite and the infinite, and that account is apt to presuppose Platonism about the natural numbers (since the standard mathematical definition of the finite is that a finite set is one whose cardinality is a natural number). We causal finitists, however, do not need to worry, as we think that it is impossible for Turing machines to perform infinite tasks. This means that causal finitists—as well as anyone else who has a good account of the difference between the finite and the infinite—have good reason to accept Σ₁⁰ alethic Platonism.

I haven't done any surveys, but I suspect that most mathematicians would be correctly identified as at least being Σ₁⁰ alethic Platonists.

Logicism and Goedel

Famously, Goedel’s incompleteness theorems refuted (naive) logicism, the view that mathematical truth is just provability.

But one doesn’t need all of the technical machinery of the incompleteness theorems to refute that. All one needs is Goedel’s simple but powerful insight that proofs are themselves mathematical objects—sequences of symbols (an insight emphasized by Goedel numbering). For once we see that, then the logicist view is that what makes a mathematical proposition true is that a certain kind of mathematical object—a proof—exists. But the latter claim is itself a mathematical claim, and so we are off on a vicious regress.
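To make the insight concrete, here is a toy Goedel numbering (the prime-exponent coding is the standard textbook trick; the symbol table is an arbitrary choice of mine):

```python
# Toy Goedel numbering: a finite string of symbols becomes a single natural
# number via prime-exponent coding, so claims about formulas and proofs
# (qua symbol sequences) become arithmetical claims about numbers.

SYMBOLS = {s: i + 1 for i, s in enumerate("0S+*=()~&v>Axyz")}  # arbitrary code table

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for toy inputs)."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def goedel_number(expr: str) -> int:
    """Encode expr as the product of p_i ** code(expr[i]) over its symbols."""
    g = 1
    for p, ch in zip(primes(), expr):
        g *= p ** SYMBOLS[ch]
    return g

print(goedel_number("0=0"))  # 2**1 * 3**5 * 5**1 = 2430
```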

Friday, March 8, 2019

Obligations of friendship

We are said to have various obligations, especially of benevolence, to our friends precisely because they are our friends. Yet this seems mistaken to me if friendship is by definition mutual.

Suppose you and I think we really are friends. We do all the things good friends do together. We think we are friends. And you really exhibited with respect to me, externally and internally, all the things that good friends exhibit. But one day I realize that the behavior of my heart has not met the minimal constitutive standards for friendship. Perhaps though I had done things to benefit you, they were all done for selfish ends. And thus I was never your friend, and if friendship is mutual, it follows that we weren’t ever friends.

At the same time, I learn that you are in precisely the kind of need that triggers onerous obligations of benevolence in friends. And so I think to myself: “Whew! I thought I would have an obligation to help, but since I was always selfish in the relationship, and not a real friend, I don’t.”

This thought would surely be a further moral corruption. Granted, if I found out that you had never acted towards me as a friend does, but had always been selfish, that might undercut my obligation to you. But it would be very odd to think that finding out that I was selfish would give me permission for further selfishness!

So, I think, in the case above I still would have towards you the kinds of obligations of benevolence that one has towards one’s friends. Therefore, it seems, these obligations do not arise precisely from friendship. The two-sided appearance of friendship coupled with one-sided (on your side) reality is enough to generate these obligations.

Variant case: For years I’ve been pretending to be your friend for the sake of political gain, while you were sincerely doing what a friend does. And now you need my help. Surely I owe it to you!

I am not saying that these sorts of fake friendships give rise to all the obligations normally attributed to friendship. For instance, one of the obligations normally attributed to friendship is to be willing to admit that one is friends with the other person (Peter violated this obligation when he denied Jesus). But this obligation requires real friendship. Moreover, certain obligations to socialize with one’s friends depend on the friendship being real.

A tempting thought: Even if friendship is mutual, there is a non-mutual relation of “being a friend to”. You can be a friend to someone who isn’t a friend to you. Perhaps in the above cases, my obligation to you arises not from our friendship, which does not exist, but from your being a friend to me. But I think that’s not quite right. For then we could force people to have obligations towards us by being friends to them, and that doesn’t seem right.

Maybe what happens is this. In friendship, we invite our friends’ trust in us. This invitation of trust, rather than the friendship itself, is what gives rise to the obligations of benevolence. And in fake friendships, the invitation of trust—even if insincere—also gives rise to obligations of benevolence.

So, we can say that we have obligations of benevolence to our friends because they are our friends, but not precisely because they are our friends. Rather, the obligations arise from a part of friendship, the invitation of trust, a part that can exist apart from friendship.

Wednesday, March 6, 2019

Another dilemma?

Following up on my posts (this and this) regarding puzzles generated by moral uncertainty, here is another curious case.

Dr. Alice Kowalska believes that a steroid injection will be good for her patient, Bob. However, due to a failure of introspection, she also believes that she does not believe that a steroid injection will be beneficial to Bob. Should she administer the steroid injection?

In other words: Should Dr. Kowalska do what she thinks is good for her patient, or should she do what she thinks she thinks is good for her patient?

The earlier posts pushed me in the direction of thinking that subjective obligation takes precedence over objective obligation. That would suggest that she should do what she thinks she thinks is good for her patient.

But doesn’t this seem mistaken? After all, we don’t want Dr. Kowalska to be gazing at her own navel, trying to figure out what she thinks is good for the patient. We want her to be looking at the patient, trying to figure out what is good for the patient. So, likewise, it seems that her action should be guided by what she thinks is good for the patient, not what she thinks she thinks is good for the patient.

How, though, to reconcile this with the action-guiding precedence that the subjective seems to have in my previous posts? Maybe it’s this. What should be relevant to Dr. Kowalska is not so much what she believes, but what her evidence is. And here the case is underdescribed. Here is one story compatible with what I said above:

  1. Dr. Kowalska has lots of evidence that steroid injections are good for patients of this sort. But her psychologist has informed her that because of a traumatic experience involving a steroid injection, she has been unable to form the belief that naturally goes with this evidence. However, Dr. Kowalska’s psychologist is incompetent, and Dr. Kowalska indeed has the belief in question, but trusts her psychologist and hence thinks she does not have it.

In this case, it doesn’t matter whether Dr. Kowalska believes the injection would be good for the patient. What matters is that she has lots of evidence, and she should inject.

Here is another story compatible with the setup, however:

  2. Dr. Kowalska knows there is no evidence that steroid injections are good for patients of this sort. However, her retirement savings are invested in a pharmaceutical company that specializes in these kinds of steroids, and wishful thinking has led to her subconsciously and epistemically akratically forming the belief that these injections are beneficial. Dr. Kowalska does not, however, realize that she has formed this subconscious belief.

In this case, intuitively, again it doesn’t matter that Dr. Kowalska has this subconscious belief. What matters is that she knows there is no evidence that the injections are good for patients of this sort, and given this, she should not inject.

If I am right in my judgments about 1 and 2, the original story left out crucial details.

Maybe we can tell the original story simply in terms of evidence. Maybe Dr. Kowalska on balance has evidence that the injection is good, while at the same time on balance having evidence that she does not on balance have evidence that the injection is good. I am not sure this is possible, though. The higher order evidence seems to undercut the lower order evidence, and hence I suspect that as soon as she gained evidence that she does not on balance have evidence, it would be the case that on balance she does not have evidence.

Here is another line of thought suggesting that what matters is evidence, not belief. Imagine that Dr. Kowalska and Dr. Schmidt both have the same evidence that it is 92% likely that the injections would be beneficial. Dr. Schmidt thereupon forms the belief that the injections would be beneficial, but Dr. Kowalska is more doxastically cautious and does not form this belief. But there is no disagreement between them as to the probabilities on the evidence. Then I think there should be no disagreement between them as to what course of action should be taken. What matters is whether 92% likelihood of benefit is enough to outweigh the cost, discomfort and side-effects, and whether the doctor additionally believes in the benefit is quite irrelevant.
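The point of the last sentence can be put as a schematic decision rule (a sketch; B stands for the benefit of a successful injection and C for the cost, discomfort and side-effects):

```latex
% Only the evidential probability figures in the rule.
\[
  \text{inject} \iff 0.92 \cdot B - C > 0.
\]
```

Nothing in the rule adverts to whether the doctor has additionally formed the belief; only the evidential probability appears.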

Tuesday, March 5, 2019

More on moral risk

You are the captain of a small damaged spaceship two light years from Earth, with a crew of ten. Your hyperdrive is failing. You can activate it right now, in a last burst of energy, and then get home. If you delay activating the hyperdrive, it will become irreparable, and you will have to travel to earth at sublight speed, which will take 10 years, causing severe disruption to the personal lives of the crew.

The problem is this. When such a failing hyperdrive is activated, everything within a million kilometers of the spaceship’s position will be briefly bathed in lethal radiation, though the spaceship itself will be protected and the radiation will quickly dissipate. Your scanners, fortunately, show no planets or spaceships within a million kilometers, but they do show one large asteroid. You know there are two asteroids that pass through that area of space: one of them is inhabited, with a population of 10 million, while the other is barren. You turn your telescope to the asteroid. It looks like the uninhabited asteroid.

So, you come to believe there is no life within a million kilometers. Moreover, you believe that as the captain of the ship you have a responsibility to get the crew home in a reasonable amount of time, unless of course this causes undue harm. Thus, you believe:

  1. You are obligated to activate the hyperdrive.

You reflect, however, on the fact that ship’s captains have made mistakes in asteroid identification before. You pull up the training database, and find that at this distance, captains with your level of training make the relevant mistake only once in a million times. So you still believe that this is the lifeless asteroid, but now you get worried. You imagine a million starship captains making the same kind of decision as you. As a result, 10 million crew members get home on time to their friends and families, but in one case, 10 million people are wiped out on an asteroid. You conclude, reasonably, that this is an unacceptable level of risk. One in a million isn’t good enough. So, you conclude:

  2. You are obligated not to activate the hyperdrive.
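In expected-value terms, the frequency reasoning behind (2) comes to this (the numbers are the case’s own; treating lives and delays as commensurable here is only for illustration):

```latex
% Expected deaths per activation vs. the benefit an activation secures:
\[
  10^{-6} \times 10^{7} = 10 \ \text{expected deaths}
  \qquad\text{vs.}\qquad
  \text{10 crew members spared a 10-year delay.}
\]
```

Ten expected deaths to spare ten people a decade’s delay is an unacceptable trade, which is why one in a million isn’t good enough here.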

This reflection on the possibility of perceptual error does not remove your belief in (1), indeed your knowledge of (1). After all, a one in a million chance of error is less than the chance of error in many cases of ordinary everyday perceptual knowledge—and, indeed, asteroid identification just is a case of everyday perceptual knowledge for a captain like yourself.

Maybe this is just a case of your knowing you are in a real moral dilemma: you have two conflicting duties, one to activate the hyperdrive and the other not to. But this fails to account for the asymmetry in the case, namely that caution should prevail, and there has to be an important sense of “right” in which the right decision is not to activate the hyperdrive.

I don’t know what to say about cases like this. Here is my best start. First, make a distinction between subjective and objective obligations. This disambiguates (1) and (2) as:

  3. You are objectively obligated to activate the hyperdrive.

  4. You are subjectively obligated not to activate the hyperdrive.

Second, deny the plausible bridge principle:

  5. If you believe you are objectively obligated to ϕ, then you are subjectively obligated to ϕ.

You need to deny (5), since you believe (3), and if (5) were true, then it would follow that you are subjectively obligated to activate the hyperdrive, and we would once again have lost sight of the asymmetric “right” on which the right thing is not to activate.

This works as far as it goes, though we need some sort of a replacement for (5), some other principle bridging from the objective to the subjective. What that principle is is not clear to me. A first try is some sort of an analogue to expected utility calculations, where instead of utilities we have the moral weights of non-violated duties. But I doubt that these weights can be handled numerically.
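Here is a minimal sketch of what such a bridge principle might look like, applied to the hyperdrive case; the numerical weights are pure placeholders (and, as just said, it is doubtful that moral weights really behave numerically):

```python
# Sketch of an expected-utility-style bridge: pick the option that minimizes the
# probability-weighted moral weight of violated duties (equivalently, maximizes
# the weight of duties left unviolated). All numbers are placeholder assumptions.

P_MISIDENTIFIED = 1e-6   # chance the asteroid is the inhabited one
W_LIVES = 1e10           # weight of the duty not to kill 10 million people
W_CREW = 10              # weight of the duty to spare 10 crew a 10-year delay

def expected_violated_weight(option: str) -> float:
    """Expected moral weight of the duties the option would violate."""
    if option == "activate":
        return P_MISIDENTIFIED * W_LIVES   # small chance of violating the weighty duty
    if option == "wait":
        return W_CREW                      # certain violation of the duty to the crew
    raise ValueError(option)

options = ["activate", "wait"]
print({o: expected_violated_weight(o) for o in options})
print("subjectively required:", min(options, key=expected_violated_weight))
# With these placeholders: activate -> 1e4, wait -> 10, so caution wins.
```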

And I still don’t know how to handle the problem of ignorance of the bridge principles between the objective and the subjective.

It seems there is some complex function from one’s total mental state to one’s full-stop subjective obligation. This complex function is one which is not known to us at present. (Which is a bit weird, in that it is the function that governs subjective obligation.)

A way out of this mess would be to have some sort of infallibilism about subjective obligation. Perhaps there is some specially epistemically illuminated state that we are in when we are subjectively obligated, a state that is a deliverance of a conscience that is at least infallible with respect to subjective obligation. I see difficulties for this approach, but maybe there is some hope, too.

Objection: Because of pragmatic encroachment, the standards for knowledge go up heavily when ten million lives are at stake, and you don’t know that the asteroid is uninhabited when lives depend on this. Thus, you don’t know (1), whereas you do know (2), which restores the crucial action-guiding asymmetry.

Response: I don’t buy pragmatic encroachment. I think the only rational process by which you lose knowledge is getting counterevidence; the stakes going up does not make for counterevidence.

But this is a big discussion in epistemology. I think I can avoid it by supposing (as I expect is true) that you are no more than 99.9999% sure of the risk principles underlying the cautionary judgment in (2). Moreover, the stakes go up for that judgment just as much as they do for (1). Hence, I can suppose that you know neither (1) nor (2), but are merely very confident, and rationally so, of both. This restores the symmetry between (1) and (2).

Monday, March 4, 2019

Isomorphism of inputs

For simplicity, I’ll stick to deterministic systems in this post. Functionalists think that if A is a conscious system, and B is functionally isomorphic to A, then when B receives valid inputs that correspond under the isomorphism to A’s valid inputs, B has exactly the same conscious states as A does.

Crucial to this is the notion of a functional isomorphism. A paradigmatic example would be a computer built of electronics and a hydraulic computer, with the same software. The electronic computer has electrical buttons as inputs and the hydraulic computer uses valves. Perhaps a pressed state of a button has as its isomorph an open valve.

But I think the notion of a functional isomorphism is a dubious one. Start with two electronic systems.

  • System A: Has 16 toggle switches, in two rows of 8, a momentary button, and 9 LEDs. When the button is pressed, the LEDs indicate the sum of the binary numbers encoded in the obvious way by the two rows of toggle switches.

  • System B: Has 25 toggle switches, in three rows, of 8, 8 and 9, respectively, a momentary button, and 9 LEDs. When the momentary button is pressed, the LEDs indicate the positions of the toggle switches in the third row. The toggle switches in the first two rows are not electrically connected to anything.

These two systems seem to be clearly non-isomorphic. The first seems to be an 8-bit adder and the second is just nine directly controlled lights.

But now imagine that the systems come with these instructions:

  • A: 8-bit adder. To use, move the toggle switches in the two rows to correspond to the bits in the two input numbers (down=1, up=0), and press the momentary button. The input state is only validly defined when the momentary button is pressed.

  • B: 8-bit adder. To use, move the toggle switches in the first two rows to correspond to the bits in the two input numbers (down=1, up=0), move the toggle switches in the third row to correspond to the bits in the sum of the two input numbers, and press the momentary button. The input state is only validly defined when the momentary button is pressed and the third row of switches contains the sum of the numbers in the first two rows.

There is now an isomorphism between valid inputs of A and B. Thus, the valid input of A:

  • 00000001,00000001,momentary pressed

corresponds to the valid input of B:

  • 00000001,00000001,000000010,momentary pressed.

Moreover, the outputs given the isomorphically corresponding valid inputs match: given the above inputs, both devices show (left to right) seven LEDs off, one LED on, and one LED off.
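Here is a small sketch of the two systems and the input correspondence, with the instruction manuals encoded as validity predicates (rows are bit-strings, down=1, up=0, most significant bit first; the function and variable names are mine):

```python
# Systems A and B, with the manuals' notions of valid input made explicit.

def leds_A(row1: str, row2: str) -> str:
    """System A: the nine LEDs display the sum of the two 8-bit rows."""
    return format(int(row1, 2) + int(row2, 2), "09b")

def leds_B(row1: str, row2: str, row3: str) -> str:
    """System B: the nine LEDs just mirror the third row; rows 1-2 are unconnected."""
    return row3

def valid_A(row1: str, row2: str) -> bool:
    return len(row1) == len(row2) == 8

def valid_B(row1: str, row2: str, row3: str) -> bool:
    # The manual's restriction: the third row must already hold the sum.
    return row3 == format(int(row1, 2) + int(row2, 2), "09b")

def iso(row1: str, row2: str):
    """Map a valid A-input to the corresponding valid B-input."""
    return (row1, row2, format(int(row1, 2) + int(row2, 2), "09b"))

a_input = ("00000001", "00000001")
b_input = iso(*a_input)
assert valid_A(*a_input) and valid_B(*b_input)
assert leds_A(*a_input) == leds_B(*b_input) == "000000010"
```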

So it seems that whether A and B count as functionally isomorphic depends on what the instruction manuals specify as valid inputs. If the only valid inputs of B are ones where the third row of inputs corresponds to the sum of the first two, then B is an 8-bit adder. If that restriction is removed, then B is no longer an adder, but something much less interesting.

This point generalizes. Any computational system can be made isomorphic to a much simpler system with a more complex instruction manual.

This is all well and good if we are dealing with computers and software that come with specifications and manuals. But it is disastrous for the functionalist project. For the functionalist project is supposed to be a contemporary naturalistic account of our minds, and our brains do not come with specifications or manuals if contemporary naturalism is true. (If we have Aristotelian naturalism instead, we might get something akin to specifications or manuals embedded in our teleology.)

Objection 1: We need only allow those systems where the specification of valid inputs is relatively simple in a language whose linguistic structure corresponds to what is perfectly natural (Lewis) or structural (Sider), or only count as an isomorphism something that can be described in relatively simple ways in such a language.

Response: First, where is the line of “relatively simple” to be drawn? Precise specification of the position of a toggle switch or water valve in the language of fundamental physics will be very complicated.

Second, System A is a bona fide electronic 8-bit adder. Imagine System A* is a very similar bona fide hydraulic 8-bit adder. It is very likely that a specification of what counts as a depressed toggle switch or an open valve in the language of microphysics is quite complex (just describing electricity or the flow of water in microphysics is really hard). It is also quite likely that the specification of one of these inputs is quite a bit more complex than the specification of the other. Let’s suppose, for simplicity, that A* is the system where the microphysical specification of how valid inputs work is quite a bit more complicated. Intuitively, fluid dynamics is further from the microphysics than electricity. Then the specification of the valid input states of System B may well turn out to be closer in complexity to the specification of the valid input states of System A than that of the hydraulic A*. If so, then counting A* as isomorphic to A would force one to likewise count B as isomorphic to A.

Objection 2: The trick in the argument above was to use the notion of a valid input. But perhaps functional isomorphism needs a correspondence between all inputs, not just valid ones.

Response: This is implausible. Amongst invalid inputs to a human brain is a bullet, which produces a variety of outputs, namely death or a wide variety of forms of damage (and corresponding mutations of other behaviors), depending on the bullet trajectory. It is too stringent a requirement on an isomorph of the human brain that it should have the possibility of being damaged in precisely the ways that a bullet would damage a human brain, with exactly isomorphic mutations of behaviors.

More generally, the variety of invalid inputs is just too great to insist on isomorphism. Think of our electronic and hydraulic case. The kind of output you get when you press a toggle switch too hard, or too lightly, is unlikely to correspond to the kind of output you get when you open a valve too much, or too little, and such correspondence should not be required for isomorphism.

Conclusions: We need a manual or other source of specifications to talk of functional isomorphism. Functionalism, thus, requires a robust notion of function that is incompatible with contemporary naturalism.

Friday, March 1, 2019

Between subjective and objective obligation

I fear that a correct account of the moral life will require both objective and subjective obligations. That’s not too bad. But I’m also afraid that there may be a whole range of hybrid things that we will need to take into account.

Let’s start with clear examples of objective and subjective obligations. If Bob promised Alice to give her $10 but misremembers the promise and instead thinks he promised never to give her any money, then:

  1. Bob is objectively required to give Alice $10.

  2. Bob is subjectively required not to give Alice any money.

These cases come from a mistake about a particular fact. There are also cases arising from mistakes about general facts. Helmut is a soldier in the German army in 1944 who knows the war is unjust but mistakenly believes that because he is a soldier, he is morally required to kill enemy combatants. Then:

  3. Helmut is objectively required to refrain from shooting Allied combatants.

  4. Helmut is subjectively required to kill Allied combatants.

But there are interesting cases of mistakes elsewhere in the reasoning that generate curious cases that aren’t neatly classified in the objective/subjective schema.

Consider moral principles about what one should subjectively do in cases of moral risk. For instance, suppose that Carl and his young daughter are stuck on a desert island for the next three months. The island is full of chickens. Carl believes it is 25% likely that chickens have the same rights as humans, and he needs to feed his daughter. His daughter has a mild allergy to the only other protein source on the island: her eyes will sting and her nose run for the next three months if she doesn’t live on chicken. Carl thus thinks that if chickens have the same rights as humans, he is forbidden from feeding chicken to his daughter; but if they don’t, then he is obligated to feed chicken to her.

Carl could now accept one of these two moral risk principles (obviously, these will be derivative from more general principles):

  5. An action that has a 75% probability of being required, and a 25% chance of being forbidden, should always be done.

  6. An action that has a 25% probability of being forbidden with a moral weight on par with the prohibition on multiple homicides and a 75% probability of being required with a moral weight on par with that of preventing one’s child’s mild allergic symptoms for three months should never be done.

Suppose that in fact chickens have very little in the way of rights. Then, probably:

  7. Carl is objectively required to feed chicken to his daughter.

Suppose further that Carl’s evidence leads him to be sure that (5) is true, and hence he concludes that he is required to feed chicken to his daughter. Then:

  8. Carl is subjectively required to feed chicken to his daughter.

This is a subjective requirement: it comes from what Carl thinks about the probabilities of rights, moral principles about what to do in cases of risk, etc. It is independent of the objective obligation in (7), though in this example it agrees with it.

But suppose, as is very plausible, that (5) is false, and that (6) is the right moral principle here. (To see the point, suppose that he sees a large mammal in the woods that would suffice to feed his daughter for three months. If the chance that that mammal is a human being is 25%, that’s too high a risk to take.) Then Carl’s reasoning is mistaken. Instead, given his uncertainty:

  9. Carl is required to refrain from killing chickens.
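The comparison behind (6), with the case’s probabilities and purely illustrative weights, is lopsided:

```latex
% W_hom: weight of a prohibition on par with multiple homicides (placeholder)
% W_all: weight of sparing three months of mild allergy symptoms (placeholder)
\[
  0.25 \cdot W_{\mathrm{hom}} \;\gg\; 0.75 \cdot W_{\mathrm{all}},
\]
```

so the 75% likelihood of being required does not carry the day, which is just what the large-mammal comparison brings out.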

But what kind of an obligation is (9)? Both (8) and (9) are independent of the objective facts about the rights of chickens and depend on Carl’s beliefs, so it sounds like it’s subjective like (8). But (8) has some additional subjectivity in it: (8) is based on Carl’s mistaken belief about what his obligations are in cases of mortal risk, while (9) is based on what Carl’s obligations (but of what sort?) “really are” in those cases.

It seems that (9) is some sort of a hybrid objective-subjective obligation.

And the kinds of hybrid obligations can be multiplied. For we could ask about what we should do when we are not sure which principle of deciding in circumstances of moral risk we should adopt. And we could be right or we could be wrong about that.

We could try to deny (9), and say that all we have are (7) and (8). But consider this familiar line of reasoning: Both Bob and Helmut are mistaken about their obligations; they are not mistaken about their subjective obligations; so, there must be some other kinds of obligations they are mistaken about, namely objective ones. Similarly, Carl is mistaken about something. He isn’t mistaken about his subjective obligation to feed chicken. Moreover, his mistake does not rest in a deviation between subjective and objective obligation, as in Bob’s and Helmut’s case, because in fact objectively Carl should feed chicken to his daughter, as in fact (I assume for the sake of the argument) chickens have no rights. So just as we needed to suppose an objective obligation that Bob and Helmut got wrong, we need a hybrid objective-subjective one that Carl got wrong.

Here’s another way to see the problem. Bob thinks he is objectively obligated to give no money to Alice and Helmut thinks he is objectively obligated to kill enemy soldiers. But when Carl applies (5), what does he come to think? He doesn’t come to think that he is objectively required to feed chicken to his daughter. He already thought that this was 75% likely, and (5) does not affect that judgment at all. It seems that just as Bob and Helmut have a belief about something other than mere subjective obligation, Carl does as well, but in his case that’s not objective obligation. So it seems Carl has to be judging, and doing so incorrectly, about some sort of a hybrid obligation.

This makes me really, really want an account of obligation that doesn’t involve two different kinds. But I don’t know a really good one.