Monday, March 18, 2019

Disliking

It is a staple of sermons on love that we are required to love our neighbor, not like them. I think this is true. But it seems to me that in many cases, perhaps even most cases, _dis_liking people is a moral flaw. My argument below has holes, but I still think there is something to the line of thought. I am sharing it because it has helped me identify what seems to be a flaw in myself, and it may be a flaw that you share.

Just about everyone has some dislikable feature. After all, just about everyone has a moral flaw, and every moral flaw is dislikable. Moreover, there are many dislikable features that are not moral flaws: a voice that is too hoarse, a face that is too asymmetrical, an intellect that is too slow, etc. However, that Alice has a dislikable feature F need not justify my disliking Alice: on its face it only justifies my disliking F. For the feature to justify disliking Alice, it would have to be a feature sufficiently central to Alice as a person. And only moral flaws or faults would have the relevant centrality, I think.

If I dislike persons because they have a disability or because of their gender or their race, that is a moral flaw in me, even if I act justly towards them. This suggests that dislikes cannot have an arbitrary basis. There must be a good reason for disliking. And it is hard to see how anything other than a moral flaw could form the right kind of basis.

Moreover, not just any moral flaw is sufficient to justify dislike of the person. It has to be a flaw that goes significantly beyond the degree of flawedness that people ordinarily exhibit. Here is a quick line of thought. Few people should dislike themselves. (Maybe Hitler should. And I don’t deny that almost everyone should be dissatisfied with themselves.) Hence few people are dislikable. Granted, there is a leap here: a move from being dislikable to oneself to being dislikable to another. But if the basis of dislikability is moral flaws, it seems to me that there would be something objectionably arbitrary about disliking someone who isn’t dislikable simpliciter.

Yet I find myself disliking people on the basis of features that aren’t moral flaws or at least aren’t moral flaws significantly bigger than flaws I myself have. Indeed, often the basis is a flaw smaller than flaws I know myself to have, and sometimes it is a flaw I myself share. This disliking is itself a flaw.

I may love the people I unfairly dislike. But I don’t love them enough. For unfair disliking goes against the appreciative aspect of love (unless, of course, the person is so flawed as to be really dislikable—in which case the appreciative aspect may be largely limited to an appreciation of what they ought to be rather than what they now are).

I used to be rather laissez-faire about my dislikes, on the fallacious ground that love is not the same thing as liking. Enough. Time to fight the good fight against dislike of persons and hence for a more appreciative love. Pray for me.

That said, there is nothing wrong in disliking particular dislikable features in others. But when they are dislikable, one should also dislike them in oneself.

Σ₁⁰ alethic Platonism

Here is an interesting metaphysical thesis about mathematics: Σ₁⁰ alethic Platonism. According to Σ₁⁰ alethic Platonism, every sentence about arithmetic with only one unbounded existential quantifier (i.e., an existential quantifier that ranges over all natural numbers, rather than all the natural numbers up to some bound), i.e., every Σ₁⁰ sentence, has an objective truth value. (And we automatically get Π₁⁰ alethic Platonism, as Π₁⁰ sentences are equivalent to negations of Σ₁⁰ sentences.)

Note that Σ₁⁰ alethic Platonism is sufficient to underwrite a weak logicism that says that mathematics is about what statements (narrowly) logically follow from what recursive axiomatizations. For Σ₁⁰ alethic Platonism is equivalent to the thesis that there is always a fact of the matter about what logically follows from what recursive axiomatization.

Of course, every alethic Platonist is a Σ₁⁰ alethic Platonist. But I think there is something particularly compelling about Σ₁⁰ alethic Platonism. Any Σ₁⁰ sentence, after all, can be rephrased into a sentence saying that a certain abstract Turing machine will halt. And it does seem like it should be possible to embody an abstract Turing machine as a physical Turing machine in some metaphysically possible world with an infinite future and infinite physical resources, and then there should be a fact of the matter whether that machine would in fact halt.
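To make the rephrasing concrete, here is a minimal sketch (my own illustration, not anything from the post) of how a Σ₁⁰ sentence corresponds to the halting of an unbounded search; the particular predicate P is a stand-in assumption:

```python
# A Σ₁⁰ sentence has the form "there is an n such that P(n)", with P decidable.
# It is true exactly when the following unbounded search halts.

def P(n: int) -> bool:
    # stand-in for any decidable arithmetical predicate
    return n * n > 10**6

def search() -> int:
    n = 0
    while True:
        if P(n):          # halts iff the Σ₁⁰ sentence ∃n P(n) is true
            return n
        n += 1

print(search())  # halts for this particular P; for other choices of P it may run forever
```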

There is a hitch in this line of thought. We need to worry about worlds with “non-standard” embodiments of the Turing machine, embodiments where the “physical Turing machine” is performing an infinite task (a supertask, in fact an infinitely iterated supertask). To rule those worlds out in a non-arbitrary way requires an account of the finite and the infinite, and that account is apt to presuppose Platonism about the natural numbers (since the standard mathematical definition of the finite is that a finite set is one whose cardinality is a natural number). We causal finitists, however, do not need to worry, as we think that it is impossible for Turing machines to perform infinite tasks. This means that causal finitists—as well as anyone else who has a good account of the difference between the finite and the infinite—have good reason to accept Σ₁⁰ alethic Platonism.

I haven't done any surveys, but I suspect that most mathematicians would be correctly identified as at least being Σ₁⁰ alethic Platonists.

Logicism and Goedel

Famously, Goedel’s incompleteness theorems refuted (naive) logicism, the view that mathematical truth is just provability.

But one doesn’t need all of the technical machinery of the incompleteness theorems to refute that. All one needs is Goedel’s simple but powerful insight that proofs are themselves mathematical objects—sequences of symbols (an insight emphasized by Goedel numbering). For once we see that, then the logicist view is that what makes a mathematical proposition true is that a certain kind of mathematical object—a proof—exists. But the latter claim is itself a mathematical claim, and so we are off on a vicious regress.
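Here is a small illustration (the toy alphabet and the base coding are my own choices, not the post's) of the insight that a proof, being a finite sequence of symbols, is itself a mathematical object codable as a single natural number:

```python
# Toy Goedel numbering: code a finite symbol sequence as one natural number.

ALPHABET = ['(', ')', '→', '¬', '∀', 'x', 'A', 'B']

def goedel_number(symbols):
    """Code a symbol sequence as a natural number in base len(ALPHABET) + 1."""
    base = len(ALPHABET) + 1
    n = 0
    for s in symbols:
        n = n * base + ALPHABET.index(s) + 1  # +1 so that no symbol codes to 0
    return n

def decode(n):
    """Recover the symbol sequence from its code."""
    base = len(ALPHABET) + 1
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(ALPHABET[d - 1])
    return list(reversed(out))

seq = ['∀', 'x', '(', 'A', '(', 'x', ')', '→', 'B', '(', 'x', ')', ')']
assert decode(goedel_number(seq)) == seq
```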

Friday, March 8, 2019

Obligations of friendship

We are said to have various obligations, especially of benevolence, to our friends precisely because they are our friends. Yet this seems mistaken to me if friendship is by definition mutual.

Suppose you and I think we really are friends. We do all the things good friends do together. We think we are friends. And you really exhibited with respect to me, externally and internally, all the things that good friends exhibit. But one day I realize that the behavior of my heart has not met the minimal constitutive standards for friendship. Perhaps, though I had done things to benefit you, they were all done for selfish ends. And thus I was never your friend, and if friendship is mutual, it follows that we weren’t ever friends.

At the same time, I learn that you are in precisely the kind of need that triggers onerous obligations of benevolence in friends. And so I think to myself: “Whew! I thought I would have an obligation to help, but since I was always selfish in the relationship, and not a real friend, I don’t.”

This thought would surely be a further moral corruption. Granted, if I found out that you had never acted towards me as a friend does, but had always been selfish, that might undercut my obligation to you. But it would be very odd to think that finding out that I was selfish would give me permission for further selfishness!

So, I think, in the case above I still would have towards you the kinds of obligations of benevolence that one has towards one’s friends. Therefore, it seems, these obligations do not arise precisely from friendship. The two-sided appearance of friendship coupled with one-sided (on your side) reality is enough to generate these obligations.

Variant case: For years I’ve been pretending to be your friend for the sake of political gain, while you were sincerely doing what a friend does. And now you need my help. Surely I owe it to you!

I am not saying that these sorts of fake friendships give rise to all the obligations normally attributed to friendship. For instance, one of the obligations normally attributed to friendship is to be willing to admit that one is friends with the other person (Peter violated this obligation when he denied Jesus). But this obligation requires real friendship. Moreover, certain obligations to socialize with one’s friends depend on the friendship being real.

A tempting thought: Even if friendship is mutual, there is a non-mutual relation of “being a friend to”. You can be a friend to someone who isn’t a friend to you. Perhaps in the above cases, my obligation to you arises not from our friendship, which does not exist, but from your being a friend to me. But I think that’s not quite right. For then we could force people to have obligations towards us by being friends to them, and that doesn’t seem right.

Maybe what happens is this. In friendship, we invite our friends’ trust in us. This invitation of trust, rather than the friendship itself, is what gives rise to the obligations of benevolence. And in fake friendships, the invitation of trust—even if insincere—also gives rise to obligations of benevolence.

So, we can say that we have obligations of benevolence to our friends because they are our friends, but not precisely because they are our friends. Rather, the obligations arise from a part of friendship, the invitation of trust, a part that can exist apart from friendship.

Wednesday, March 6, 2019

Another dilemma?

Following up on my posts (this and this) regarding puzzles generated by moral uncertainty, here is another curious case.

Dr. Alice Kowalska believes that a steroid injection will be good for her patient, Bob. However, due to a failure of introspection, she also believes that she does not believe that a steroid injection will be beneficial to Bob. Should she administer the steroid injection?

In other words: Should Dr. Kowalska do what she thinks is good for her patient, or should she do what she thinks she thinks is good for her patient?

The earlier posts pushed me in the direction of thinking that subjective obligation takes precedence over objective obligation. That would suggest that she should do what she thinks she thinks is good for her patient.

But doesn’t this seem mistaken? After all, we don’t want Dr. Kowalska to be gazing at her own navel, trying to figure out what she thinks is good for the patient. We want her to be looking at the patient, trying to figure out what is good for the patient. So, likewise, it seems that her action should be guided by what she thinks is good for the patient, not what she thinks she thinks is good for the patient.

How, though, to reconcile this with the action-guiding precedence that the subjective seems to have in my previous posts? Maybe it’s this. What should be relevant to Dr. Kowalska is not so much what she believes, but what her evidence is. And here the case is underdescribed. Here is one story compatible with what I said above:

  1. Dr. Kowalska has lots of evidence that steroid injections are good for patients of this sort. But her psychologist has informed her that because of a traumatic experience involving a steroid injection, she has been unable to form the belief that naturally goes with this evidence. However, Dr. Kowalska’s psychologist is incompetent, and Dr. Kowalska indeed has the belief in question, but trusts her psychologist and hence thinks she does not have it.

In this case, it doesn’t matter whether Dr. Kowalska believes the injection would be good for the patient. What matters is that she has lots of evidence, and she should inject.

Here is another story compatible with the setup, however:

  2. Dr. Kowalska knows there is no evidence that steroid injections are good for patients of this sort. However, her retirement savings are invested in a pharmaceutical company that specializes in these kinds of steroids, and wishful thinking has led to her subconsciously and epistemically akratically forming the belief that these injections are beneficial. Dr. Kowalska does not, however, realize that she has formed this subconscious belief.

In this case, intuitively, again it doesn’t matter that Dr. Kowalska has this subconscious belief. What matters is that she knows there is no evidence that the injections are good for patients of this sort, and given this, she should not inject.

If I am right in my judgments about 1 and 2, the original story left out crucial details.

Maybe we can tell the original story simply in terms of evidence. Maybe Dr. Kowalska on balance has evidence that the injection is good, while at the same time on balance having evidence that she does not on balance have evidence that the injection is good. I am not sure this is possible, though. The higher order evidence seems to undercut the lower order evidence, and hence I suspect that as soon as she gained evidence that she does not on balance have evidence, it would be the case that on balance she does not have evidence.

Here is another line of thought suggesting that what matters is evidence, not belief. Imagine that Dr. Kowalska and Dr. Schmidt both have the same evidence that it is 92% likely that the injections would be beneficial. Dr. Schmidt thereupon forms the belief that the injections would be beneficial, but Dr. Kowalska is more doxastically cautious and does not form this belief. But there is no disagreement between them as to the probabilities on the evidence. Then I think there should be no disagreement between them as to what course of action should be taken. What matters is whether 92% likelihood of benefit is enough to outweigh the cost, discomfort and side-effects, and whether the doctor additionally believes in the benefit is quite irrelevant.
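A tiny numerical gloss may help (the benefit and cost figures below are placeholders of my own, not anything in the post): the decision depends only on the shared evidential probability and the stakes, and the doctors' differing beliefs never enter the calculation.

```python
# Both doctors share the same evidential probability; only it and the stakes matter.
p_benefit = 0.92     # agreed probability, on the evidence, that the injection helps
benefit = 100.0      # placeholder: welfare gain if the injection helps
cost = 15.0          # placeholder: discomfort, side-effects, expense

expected_net = p_benefit * benefit - cost
print(expected_net > 0)  # same verdict for Dr. Kowalska and Dr. Schmidt
```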

Tuesday, March 5, 2019

More on moral risk

You are the captain of a small damaged spaceship two light years from Earth, with a crew of ten. Your hyperdrive is failing. You can activate it right now, in a last burst of energy, and then get home. If you delay activating the hyperdrive, it will become irreparable, and you will have to travel to Earth at sublight speed, which will take 10 years, causing severe disruption to the personal lives of the crew.

The problem is this. When such a failing hyperdrive is activated, everything within a million kilometers of the spaceship’s position will be briefly bathed in lethal radiation, though the spaceship itself will be protected and the radiation will quickly dissipate. Your scanners, fortunately, show no planets or spaceships within a million kilometers, but they do show one large asteroid. You know there are two asteroids that pass through that area of space: one of them is inhabited, with a population of 10 million, while the other is barren. You turn your telescope to the asteroid. It looks like the uninhabited asteroid.

So, you come to believe there is no life within a million kilometers. Moreover, you believe that as the captain of the ship you have a responsibility to get the crew home in a reasonable amount of time, unless of course this causes undue harm. Thus, you believe:

  1. You are obligated to activate the hyperdrive.

You reflect, however, on the fact that ship’s captains have made mistakes in asteroid identification before. You pull up the training database, and find that at this distance, captains with your level of training make the relevant mistake only once in a million times. So you still believe that this is the lifeless asteroid. But now you get worried. You imagine a million starship captains making the same kind of decision as you. As a result, 10 million crew members get home on time to their friends and families, but in one case, 10 million people are wiped out on an asteroid. You conclude, reasonably, that this is an unacceptable level of risk. One in a million isn’t good enough. So, you conclude:

  2. You are obligated not to activate the hyperdrive.

This reflection on the possibility of perceptual error does not remove your belief in (1), indeed your knowledge of (1). After all, a one in a million chance of error is less than the chance of error in many cases of ordinary everyday perceptual knowledge—and, indeed, asteroid identification just is a case of everyday perceptual knowledge for a captain like yourself.

Maybe this is just a case of your knowing you are in a real moral dilemma: you have two conflicting duties, one to activate the hyperdrive and the other not to. But this fails to account for the asymmetry in the case, namely that caution should prevail, and there has to be an important sense of “right” in which the right decision is not to activate the hyperdrive.

I don’t know what to say about cases like this. Here is my best start. First, make a distinction between subjective and objective obligations. This disambiguates (1) and (2) as:

  3. You are objectively obligated to activate the hyperdrive.

  4. You are subjectively obligated not to activate the hyperdrive.

Second, deny the plausible bridge principle:

  5. If you believe you are objectively obligated to ϕ, then you are subjectively obligated to ϕ.

You need to deny (5), since you believe (3), and if (5) were true, then it would follow that you are subjectively obligated to activate the hyperdrive, and we would once again have lost sight of the asymmetric “right” on which the right thing is not to activate.

This works as far as it goes, though we need some sort of a replacement for (5), some other principle bridging from the objective to the subjective. What that principle is is not clear to me. A first try is some sort of an analogue to expected utility calculations, where instead of utilities we have the moral weights of non-violated duties. But I doubt that these weights can be handled numerically.
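For what it is worth, here is a minimal sketch of the structure such an analogue might have, stated as minimizing the expected weight of violated duties (equivalently, maximizing that of non-violated ones). The numerical weights for the spaceship case are invented for illustration; the post itself doubts such weights can be handled numerically.

```python
# Expected moral weight of violated duty for each option, with invented numbers.

def expected_violated_weight(outcomes):
    """outcomes: list of (probability, weight of the duty violated in that case)."""
    return sum(p * w for p, w in outcomes)

WEIGHT_PER_DEATH = 10_000         # placeholder weight of causing an innocent death
WEIGHT_PER_DISRUPTED_YEAR = 1     # placeholder weight of one crew member's lost year

activate = expected_violated_weight([
    (1e-6, 10_000_000 * WEIGHT_PER_DEATH),       # one-in-a-million misidentification
])
dont_activate = expected_violated_weight([
    (1.0, 10 * 10 * WEIGHT_PER_DISRUPTED_YEAR),  # ten crew members, ten years each
])

print(activate, dont_activate)  # on these numbers caution wins: 100000.0 vs 100
```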

And I still don’t know how to handle the problem of ignorance of the bridge principles between the objective and the subjective.

It seems there is some complex function from one’s total mental state to one’s full-stop subjective obligation. This complex function is one which is not known to us at present. (Which is a bit weird, in that it is the function that governs subjective obligation.)

A way out of this mess would be to have some sort of infallibilism about subjective obligation. Perhaps there is some specially epistemically illuminated state that we are in when we are subjectively obligated, a state that is a deliverance of a conscience that is at least infallible with respect to subjective obligation. I see difficulties for this approach, but maybe there is some hope, too.

Objection: Because of pragmatic encroachment, the standards for knowledge go up heavily when ten million lives are at stake, and you don’t know that the asteroid is uninhabited when lives depend on this. Thus, you don’t know (1), whereas you do know (2), which restores the crucial action-guiding asymmetry.

Response: I don’t buy pragmatic encroachment. I think the only rational process by which you lose knowledge is getting counterevidence; the stakes going up does not make for counterevidence.

But this is a big discussion in epistemology. I think I can avoid it by supposing (as I expect is true) that you are no more than 99.9999% sure of the risk principles underlying the cautionary judgment in (2). Moreover, the stakes go up for that judgment just as much as they do for (1). Hence, I can suppose that you know neither (1) nor (2), but are merely very confident, and rationally so, of both. This restores the symmetry between (1) and (2).

Monday, March 4, 2019

Isomorphism of inputs

For simplicity, I’ll stick to deterministic systems in this post. Functionalists think that if A is a conscious system, and B is functionally isomorphic to A, then when B receives valid inputs that correspond under the isomorphism to A’s valid inputs, B has exactly the same conscious states as A does.

Crucial to this is the notion of a functional isomorphism. A paradigmatic example would be an electronic computer and a hydraulic computer running the same software. The electronic computer has electrical buttons as inputs and the hydraulic computer uses valves. Perhaps a pressed state of a button has as its isomorph an open valve.

But I think the notion of a functional isomorphism is a dubious one. Start with two electronic systems.

  • System A: Has 16 toggle switches, in two rows of 8, a momentary button, and 9 LEDs. When the button is pressed, the LEDs indicate the sum of the binary numbers encoded in the obvious way by the two rows of toggle switches.

  • System B: Has 25 toggle switches, in three rows, of 8, 8 and 9, respectively, a momentary button, and 9 LEDs. When the momentary button is pressed, the LEDs indicate the positions of the toggle switches in the third row. The toggle switches in the first two rows are not electrically connected to anything.

These two systems seem to be clearly non-isomorphic. The first seems to be an 8-bit adder and the second is just nine directly controlled lights.

But now imagine that the systems come with these instructions:

  • A: 8-bit adder. To use, move the toggle switches in the two rows to correspond to the bits in the two input numbers (down=1, up=0), and press the momentary button. The input state is only validly defined when the momentary button is pressed.

  • B: 8-bit adder. To use, move the toggle switches in the two rows to correspond to the bits in the two input numbers (down=1, up=0), move the toggle switches in the third row to correspond to the bits in the sum of the two input numbers, and press the momentary button. The input state is only validly defined when the momentary button is pressed and the third row of switches contains the sum of the numbers in the first two rows.

There is now an isomorphism between valid inputs of A and B. Thus, the valid input of A:

  • 00000001,00000001,momentary pressed

corresponds to the valid input of B:

  • 00000001,00000001,000000010,momentary pressed.

Moreover, the outputs given the isomorphically corresponding valid inputs match: given the above inputs, both devices show (left to right) seven LEDs off, one LED on, and one LED off.
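A minimal sketch of the two toy systems may make the correspondence vivid; the bit-string representations and helper names here are my own illustrative choices.

```python
def system_a(row1: str, row2: str) -> str:
    """System A: the nine LEDs show the 9-bit sum of the two 8-bit switch rows."""
    return format(int(row1, 2) + int(row2, 2), '09b')

def system_b(row1: str, row2: str, row3: str) -> str:
    """System B: the nine LEDs just mirror the third row of switches."""
    return row3

def valid_b_input(row1: str, row2: str):
    """Per B's manual, an input is valid only when the third row already encodes the sum."""
    return row1, row2, format(int(row1, 2) + int(row2, 2), '09b')

a_out = system_a('00000001', '00000001')
b_out = system_b(*valid_b_input('00000001', '00000001'))
assert a_out == b_out == '000000010'  # matching outputs on corresponding valid inputs
```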

So it seems that whether A and B count as functionally isomorphic depends on what the instruction manuals specify as valid inputs. If the only valid inputs of B are ones where the third row of inputs corresponds to the sum of the first two, then B is an 8-bit adder. If that restriction is removed, then B is no longer an adder, but something much less interesting.

This point generalizes. Any computational system can be made isomorphic to a much simpler system with a more complex instruction manual.

This is all well and good if we are dealing with computers and software that come with specifications and manuals. But it is disastrous for the functionalist project. For the functionalist project is supposed to be a contemporary naturalistic account of our minds, and our brains do not come with specifications or manuals if contemporary naturalism is true. (If we have Aristotelian naturalism instead, we might get something akin to specifications or manuals embedded in our teleology.)

Objection 1: We need only allow those systems where the specification of valid inputs is relatively simple in a language whose linguistic structure corresponds to what is perfectly natural (Lewis) or structural (Sider), or only count as an isomorphism something that can be described in relatively simple ways in such a language.

Response: First, where is the line of the “relatively simple” to be drawn? Precise specification of the position of a toggle switch or water valve in the language of fundamental physics will be very complicated.

Second, System A is a bona fide electronic 8-bit adder. Imagine System A* is a very similar bona fide hydraulic 8-bit adder. It is very likely that a specification of what counts as a depressed toggle switch or an open valve in the language of microphysics is quite complex (just describing electricity or the flow of water in microphysics is really hard). It is also quite likely that the specification of one of these inputs is quite a bit more complex than the specification of the other. Let’s suppose, for simplicity, that A* is the system where the microphysical specification of how valid inputs work is quite a bit more complicated. Intuitively, fluid dynamics is further from the microphysics than electricity. Then the specification of the valid input states of System B may well turn out to be closer in complexity to the specification of the valid input states of System A than that of the hydraulic A*. If so, then counting A* as isomorphic to A would force one to likewise count B as isomorphic to A.

Objection 2: The trick in the argument above was to use the notion of a valid input. But perhaps functional isomorphism needs a correspondence between all inputs, not just valid ones.

Response: This is implausible. Amongst invalid inputs to a human brain is a bullet, which produces a variety of outputs, namely death or a wide variety of forms of damage (and corresponding mutations of other behaviors), depending on the bullet trajectory. It is too stringent a requirement on an isomorph of the human brain that it should have the possibility of being damaged in precisely the ways that a bullet would damage a human brain, with exactly isomorphic mutations of behaviors.

More generally, the variety of invalid inputs is just too great to insist on isomorphism. Think of our electronic and hydraulic case. The kind of output you get when you press a toggle switch too hard, or too lightly, is unlikely to correspond to the kind of output you get when you open a valve too much, or too little, and such correspondence should not be required for isomorphism.

Conclusions: We need a manual or other source of specifications to talk of functional isomorphism. Functionalism, thus, requires a robust notion of function that is incompatible with contemporary naturalism.

Friday, March 1, 2019

Between subjective and objective obligation

I fear that a correct account of the moral life will require both objective and subjective obligations. That’s not too bad. But I’m also afraid that there may be a whole range of hybrid things that we will need to take into account.

Let’s start with clear examples of objective and subjective obligations. If Bob promised Alice to give her $10 but misremembers the promise and instead thinks he promised never to give her any money, then:

  1. Bob is objectively required to give Alice $10.

  2. Bob is subjectively required not to give Alice any money.

These cases come from a mistake about particular fact. There are also cases arising from mistakes about general facts. Helmut is a soldier in the German army in 1944 who knows the war is unjust but mistakenly believes that because he is a soldier, he is morally required to kill enemy combatants. Then:

  3. Helmut is objectively required to refrain from shooting Allied combatants.

  4. Helmut is subjectively required to kill Allied combatants.

But there are interesting cases of mistakes elsewhere in the reasoning that generate curious cases that aren’t neatly classified in the objective/subjective schema.

Consider moral principles about what one should subjectively do in cases of moral risk. For instance, suppose that Carl and his young daughter are stuck on a desert island for the next three months. The island is full of chickens. Carl believes it is 25% likely that chickens have the same rights as humans, and he needs to feed his daughter. His daughter has a mild allergy to the only other protein source on the island: her eyes will sting and her nose run for the next three months if she doesn’t live on chicken. Carl thus thinks that if chickens have the same rights as humans, he is forbidden from feeding chicken to his daughter; but if they don’t, then he is obligated to feed chicken to her.

Carl could now accept one of these two moral risk principles (obviously, these will be derivative from more general principles):

  5. An action that has a 75% probability of being required, and a 25% chance of being forbidden, should always be done.

  6. An action that has a 25% probability of being forbidden with a moral weight on par with the prohibition on multiple homicides and a 75% probability of being required with a moral weight on par with that of preventing one’s child’s mild allergic symptoms for three months should never be done.

Suppose that in fact chickens have very little in the way of rights. Then, probably:

  7. Carl is objectively required to feed chicken to his daughter.

Suppose further that Carl’s evidence leads him to be sure that (5) is true, and hence he concludes that he is required to feed chicken to his daughter. Then:

  8. Carl is subjectively required to feed chicken to his daughter.

This is a subjective requirement: it comes from what Carl thinks about the probabilities of rights, moral principles about what to do in cases of risk, etc. It is independent of the objective obligation in (7), though in this example it agrees with it.

But suppose, as is very plausible, that (5) is false, and that (6) is the right moral principle here. (To see the point, suppose that he sees a large mammal in the woods that would suffice to feed his daughter for three months. If the chance that that mammal is a human being is 25%, that’s too high a risk to take.) Then Carl’s reasoning is mistaken. Instead, given his uncertainty:

  9. Carl is required to refrain from killing chickens.

But what kind of an obligation is (9)? Both (8) and (9) are independent of the objective facts about the rights of chickens and depend on Carl’s beliefs, so (9) sounds subjective, like (8). But (8) has some additional subjectivity in it: (8) is based on Carl’s mistaken belief about what his obligations are in cases of moral risk, while (9) is based on what Carl’s obligations (but of what sort?) “really are” in those cases.

It seems that (9) is some sort of a hybrid objective-subjective obligation.

And the kinds of hybrid obligations can be multiplied. For we could ask about what we should do when we are not sure which principle of deciding in circumstances of moral risk we should adopt. And we could be right or we could be wrong about that.

We could try to deny (9), and say that all we have are (7) and (8). But consider this familiar line of reasoning: Both Bob and Helmut are mistaken about their obligations; they are not mistaken about their subjective obligations; so, there must be some other kinds of obligations they are mistaken about, namely objective ones. Similarly, Carl is mistaken about something. He isn’t mistaken about his subjective obligation to feed chicken. Moreover, his mistake does not rest in a deviation between subjective and objective obligation, as in Bob’s and Helmut’s case, because in fact objectively Carl should feed chicken to his daughter, as in fact (I assume for the sake of the argument) chickens have no rights. So just as we needed to suppose an objective obligation that Bob and Helmut got wrong, we need a hybrid objective-subjective one that Carl got wrong.

Here’s another way to see the problem. Bob thinks he is objectively obligated to give no money to Alice and Helmut thinks he is objectively obligated to kill enemy soldiers. But when Carl applies (5), what does he come to think? He doesn’t come to think that he is objectively required to feed chicken to his daughter. He already thought that this was 75% likely, and (5) does not affect that judgment at all. It seems that just as Bob and Helmut have a belief about something other than mere subjective obligation, Carl does as well, but in his case that’s not objective obligation. So it seems Carl has to be judging, and doing so incorrectly, about some sort of a hybrid obligation.

This makes me really, really want an account of obligation that doesn’t involve two different kinds. But I don’t know a really good one.

Thursday, February 28, 2019

A reading of 1 Corinthians 14:33b-34a

1 Corinthians 14:33b-34a is one of the “hard texts” of the New Testament. The RSV translates it as:

As in all the churches of the saints, the women should keep silence in the churches.

Besides the fact that this is a hard saying, a textual difficulty is that earlier in the letter, at 11:5, Paul has no objection to women prophesying or praying (it seems very likely that praying would be out loud), though it has been suggested that this was outside of a liturgical context. Nor does later Church practice prohibit women from joining in vocal prayer during the liturgy.

I assume that the second "the churches" means "the churches of Corinth", while the first "the churches" refers to the churches more generally. And yesterday at our Department Bible study, I was struck by the fact that the “As” (Greek hōs) that begins the text can be read as “In the manner of”. On that reading, the first sentence of the hard text does not say that women should keep silent in the Corinthian churches. Rather, it says that women should keep silent in the Corinthian churches in the way and to the extent to which they keep silent in the other churches. In other words, women should only speak up in Corinthian liturgies at the points at which women speak up in non-Corinthian liturgies. This is compatible with women having various speaking roles—but only as long as they have these roles in “all the churches of the saints.”

(Note, however, that some versions punctuate differently, and make “As in all the churches of the saints” qualify what came earlier rather than what comes afterwards. My reading requires the RSV’s punctuation. Of course, the original has no punctuation.)

On this reading, the first sentence of the text is an application of a principle of liturgical uniformity between the churches, and Paul could equally well have said the same thing about the men. But the text suggests to me that there was some particular problem, which we can only speculate about, that specifically involved disorderly liturgical participation by Corinthian women, in addition to other problems of disorderly participation that Paul discusses earlier in the chapter.

The difficulty for my reading is the next sentence, however:

For they are not permitted to speak, but should be subordinate, as even the law says. (1 Cor. 14:34b, RSV)

I would want to read this with “speak” restricted to the kinds of speech not found in the other churches. Perhaps in the other churches, there was no “chatting in the pews”, or socializing during the liturgy (Mowczko in a very nice summary of interpretations notes that this is St. John Chrysostom’s interpretation).

Another interpretation is that “the law” here is Roman law or Corinthian custom (though I don’t know that in Koine Greek “nomos” can still cover custom, like it can in classical Greek), so that Paul is reprising a motif of noting that the Corinthians are behaving badly even by their own cultural standards.

I don’t know that my reading is right. I think it is a little bit more natural to read the Greek as having a complete prohibition on women speaking, but my reading seems to be grammatically permissible, and one must balance naturalness of language with consistency in a text (in this case, consistency with 11:5). And in the case of a Biblical text, I also want an interpretation compatible with divine inspiration.

Wednesday, February 27, 2019

White lies

Suppose Bob is known by Alice to be an act utilitarian. Then Alice won’t believe Bob when he asserts p in cases where Alice knows that, by Bob’s lights, even if p is false, the utility of getting Alice to believe p exceeds the utility of Alice knowing that p is false. For in such cases an act utilitarian is apt to lie, and his testimony to p is of little worth.

Such cases are not uncommon in daily life. Alice feels bad about a presentation she just made. Bob praises it. Alice dismisses the praise on the grounds that even if her presentation was bad, getting her to feel better outweighs the utility of her having a correct estimate of the presentation, at least by Bob’s lights.

Praise from an act utilitarian is of little value: instead of being direct evidence for the proposition that one did well, it is direct evidence for the proposition that it would be good for one to believe that one did well. Now, that it would be good for one to believe that one did well is some evidence that one did well, but it is fairly weak evidence given facts about human psychology.

And so in cases where praise is deserved, the known act utilitarian is not going to promote utility for friends as effectively as a known deontologist, since the deontologist’s praise is going to get a lot more credence. Such cases are not rare: it is quite common for human performances to deserve praise and for the agent to be such that they would benefit from being uplifted by praise. On the other hand, in cases where praise is undeserved, the known act utilitarian’s praise does little to uplift the spirit.

These kinds of ordinary interactions are such a large part of our lives that I think a case can be made that just on the basis of these, by the lights of act utilitarianism, an act utilitarian should either hide their act utilitarianism from others or else should convert to some other normative ethical view (say, by self-brainwashing). Since the relevant interactions are often with friends, and it is unlikely one can hide one’s character from one’s friends over a significant period of time, and since doing so is likely to be damaging to one’s character in ways that even the act utilitarian will object to, this seems to be yet another of the cases where act utilitarianism pushes one not to be an act utilitarian.

Such arguments have been made before in other contexts (e.g., worries that the demandingness of act utilitarianism would sap our energies). They are not definitive refutations of act utilitarianism. As Parfit has convincingly argued, it is logically consistent to hold that an ethical theory is true but that one morally should not believe it. But still we get the conclusion that everybody morally should be something other than an act utilitarian. For if act utilitarianism is false, you surely shouldn’t be an act utilitarian. And if it’s true, you shouldn’t, either.

The above, I think, is more generally relevant to any view on which everyday white lies are acceptable. For the only justifications available for white lies are consequentialist ones. But hiding from one’s friends that one is the sort of person who engages in white lies is costly and difficult, whereas letting it be known undercuts the benefits of the white lies, while at the same time removing the benefits of parallel white truths. Thus, we should all reject white lies in our lives, and make it clear that we do so.

Here, I use “white lie” in a sense in which it is a lie. I do not think “Fine” is a lie, white or otherwise, when answering “How are you?” even when you are not fine, because this is not a case of assertion but of a standardized greeting. (There is no inconsistency in an atheist saying “Good-bye”, even though it’s a contraction of “God be with you.”) One way to see this isn’t a lie is to note that while it is generally considered rude (but sometimes required) to suggest that one’s interlocutor lied, there is nothing rude about saying to someone who answered “Fine”: “Are you sure? You look really tired.” At that point, we do move into the assertion category. The friend who persists in the “Fine” answer but isn’t fine now is lying.

Tuesday, February 26, 2019

The reportable and the assertible

I’ve just had a long conversation with a grad student about (inter alia) reporting and asserting. My first thought was that asserting is a special case of reporting, but one can report without asserting. For instance, I might have a graduate assistant write a report on some aspect of the graduate program, and then I could sign and submit that report without reading it. I would then be reporting various things (whether responsibly so would depend on how strong my reasons to trust the student were), but it doesn’t seem right to say that I would be asserting these things.

But then I came to think that just as one can report without asserting, one can assert without reporting. For instance, there is no problem with asserting facts about the future, such as that the sun will rise tomorrow. But I can’t report such facts, even though I know them.

It’s not really a question of time. For (a) I also cannot report that the sun rose a million years ago, and (b) if I were to time-travel to the future, observe the sunrise, and come back, then I could report that the sun will rise tomorrow.

And it’s not a distinction with respect to the quantity of evidence. After all, I can legitimately report what I had for dinner yesterday, but it’s not likely that I have as good evidence about that as I do that the sun will rise tomorrow.

I suspect it’s a distinction as to the kind of evidence that is involved. I am a legally bound reporter of illegal activity on campus. But I can’t appropriately report that a violation of liquor laws occurred in the dorms over the weekend if I know it only on the basis of the general claim that such violations, surely, occur every weekend. The kind of evidence that memory provides is typically appropriate for reporting, while the kind of evidence that induction provides is at least typically not.

Interestingly, although I can’t appropriately report that tomorrow the sun will rise, I can appropriately report that I know that the sun will rise tomorrow. This means that the reportable is not closed under obvious entailment.

Lying and consequences

Suppose Alice never lies while Bob lies to save innocent lives.

Consider circumstances where Alice and Bob know that getting Carl to believe a proposition p would save an innocent life, and suppose that Alice and Bob know whether p is true.

In some cases of this sort, Bob is likely to do better with respect to innocent lives:

  1. p is false and Carl doesn’t know Alice and Bob’s character.

  2. p is false and Carl doesn’t know that Alice and Bob know that getting Carl to believe p would save an innocent life.

For in cases 1 and 2, Bob is likely to succeed in getting Carl to believe p, while Alice is not.

But in one family of cases, Alice is likely to do better:

  3. p is true and Carl knows Alice and Bob’s character and knows that they believe that getting Carl to believe p would save an innocent life.

For in these cases, Carl wouldn’t be likely to believe Bob with regard to p, as he would know that Bob would affirm p whether p was true or false, as Bob is the sort of person who lies to save innocent lives, while Carl would surely believe Alice.

Are cases of type (1) and (2) more or less common than cases of type (3)?

I suppose standard cases where an aggressor at the door is asking whether a prospective victim is in the house may fall under category (1) when the aggressor knows that they are known to be an aggressor and will fall under category (2) when the aggressor doesn’t know that they are known to be an aggressor (Korsgaard discusses this case in a paper on Kant on lying).

On the other hand, category (3) includes some death penalty cases where (a) the life of the accused depends on some true testimony being believed and (b) the testifier is someone likely to think the accused to be innocent independently of the testimony (say, because the accused is a friend). For in such a case, Bob would just give the testimony whether it’s true or false, while Alice would only give it if it were true (or at least she thought it was), and so Bob’s testimony carries no weight while Alice’s does.

Category (3) also includes some cases where an aggressor at the door knows the character of their interlocutor in the house, and knows that they are known to be an aggressor, and where the prospective victim is not in the house, but a search of the house would reveal other prospective victims. For instance, suppose a Gestapo officer is asking whether there are Jews in the house, which there aren’t, but there are Roma refugees in the house. The Gestapo officer may know that Bob would say there aren’t any Jews even if there were, and so he searches the house and finds the Roma if Bob is at the door; but he believes Alice, and doesn’t search, and the Roma survive.

Roughly, the question of whether Alice or Bob’s character is better consequentialistically comes down to the question whether it is more useful, with respect to innocent life, to be more believable and always honest (Alice) or to be less believable and able to lie (Bob).

More on grounding of universals

The standard First Order Logic translation of “All As are Bs” is:

  1. ∀x(A(x)→B(x)).

Suppose we accept this translation and we further accept the principle:

  2. Universal facts are always partially grounded in their instances.

Then we have the oddity that the fact that all ravens are black seems to be partially grounded in my garbage can being black. Let R(x) and B(x) say that x is a raven and black, respectively, and let g be my garbage can. Then an instance of ∀x(R(x)→B(x)) is R(g)→B(g), and the latter material conditional is definable as ¬R(g)∨B(g). But a disjunction is grounded in its true disjuncts, and hence this one will be grounded in B(g) (as well as in ¬R(g)).
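To make the worry concrete, here is a toy finite model (my own example, not the post's) in which the universal is true and the instance for the garbage can is true purely because its consequent is:

```python
# Three-object toy model of "All ravens are black" under the standard FOL translation.
domain = ['raven1', 'raven2', 'garbage_can']
is_raven = {'raven1': True, 'raven2': True, 'garbage_can': False}
is_black = {'raven1': True, 'raven2': True, 'garbage_can': True}

def implies(p: bool, q: bool) -> bool:
    """Material conditional, i.e. (not p) or q."""
    return (not p) or q

all_ravens_black = all(implies(is_raven[x], is_black[x]) for x in domain)
instance_for_can = implies(is_raven['garbage_can'], is_black['garbage_can'])

assert all_ravens_black and instance_for_can
# The instance is a disjunction with two true disjuncts, not-R(g) and B(g); by (2),
# the true disjunct B(g) partially grounds it, and hence the universal fact.
```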

There are three things to dispute here: the translation (1), the grounding principle (2), and the claim that a material conditional is grounded in its consequent whenever that consequent is true. Of these, I am most suspicious of the translation of the two-place universal quantifier and the grounding principle (2).

Friday, February 22, 2019

Grounding of universals and partial grounding

It is common to claim that:

  1. The fact that everything is F is partially grounded in the fact that a₁ is F and in the fact that a₂ is F and so on, for all the objects aᵢ in the world.

But this can’t be right if partial grounds are parts of full grounds. For suppose you live in a world with only two objects, a and b, which are both sapient. Then everything is sapient, and by (1) it follows that:

  2. The fact that everything is sapient is partially grounded in a being sapient and in b being sapient.

But suppose partial grounds are parts of full grounds. The facts that a is sapient and b is sapient are not a full ground of the fact that everything is sapient, because the full grounds of a fact entail that fact, and a’s being sapient and b’s being sapient do not entail that everything is sapient (since it’s possible for a to be sapient and b to be sapient and yet for there to exist a c that is not).

So we need to be able to add something to the two particular sapience facts to get full grounds. The most obvious thing to add is:

  3. Everything is a or b.

Clearly fact (3) together with the facts that a is sapient and b is sapient will entail that everything is sapient.

But applying (1) to (3), we get:

  4. Fact (3) is partially grounded in the facts that a is a or b and that b is a or b.

But, once again, if partial grounds are parts of full grounds, then we need a fact to add to the two facts on the right hand side of the grounding relation in (4) such that together these facts will entail (3). But the obvious candidate to add is:

  5. Everything is a or b.

And that yields circularity.

So it seems that either we should reject the particular-grounds-universal principle (1) or we should reject the principle that partial grounds are parts of full grounds.

Here is a reason for the latter move. Maybe we should say that God’s creating me is partially grounded in God. But that’s merely a partial grounding, since God’s existence doesn’t entail that God created me. And it seems that the only good candidate for a further fact to be added to the grounds so as to entail that God created me would be my existence. (One might try to add the fact that God willed that I exist. But by divine simplicity, that fact has to be partly constituted by my existence or the like.) But my existence is grounded in God’s creating me, so that would be viciously circular.

Are desires really different from wishes?

It is tempting to conflate what is worth desiring with what is worth pursuing. But there seem to be cases where things are worth desiring but not worth pursuing:

  1. Having a surprising good happen to you completely gratuitously—i.e., without your having done anything to invite it—seems worth desiring but the pursuit of it doesn’t seem to make sense.

  2. If I have published a paper claiming a certain mathematical result, and I have come to realize that the result is false, it seems to make perfect sense to desire that the result be true, but it makes no sense to pursue that.

The standard response to cases like 1 and 2 is to distinguish wishes from desires, and say that it makes sense to wish for things that it makes no sense to pursue, but it does not make sense to desire such things.

But consider this. Suppose in case 2, I came to be convinced that God has power over mathematics, and that if I pray that the result be true, God might make it be true. Then the affective state I have in case 2 would motivate me to pray. But the nature of the affective state need not have changed upon coming to think that God has power over mathematics. Thus, either (a) I would be motivated to pray by a mere wish or else (b) wishes and desires are the same thing. But the wish/desire distinction does not fit with (a), which leaves (b).

I suppose one could claim that a desire just is a wish plus a belief that the object is attainable. But that makes desires be too gerrymandered.