Monday, November 30, 2015

Lying, violence and dignity

I've argued that typically the person who is hiding Jews from Nazis and is asked by a Nazi if there are Jews in her house tells the truth, and does not lie, if she says: "No." That argument might be wrong, and even if it's right, it probably doesn't apply to all cases.

So, let's think about the case of the Nazi asking Helga if she is hiding Jews, when she is in fact hiding Jews, and when it would be a lie for her to say "No" (i.e., when there isn't the sort of disparity of languages that I argued is normally present). The Christian tradition has typically held lying to be always wrong, including thus in cases like this. I want to say some things to make it a bit more palatable that Helga does the right thing by refusing to lie.

The Nazi is a fellow human being. Language, and the trust that underwrites it (I was reading this morning that one of the most difficult questions in the origins of language is about the origination of the trust essential to language's usefulness), is central to our humanity. By refusing to betray the Nazi's trust in her through lying, Helga is affirming the dignity of all humans in the particular case of someone who needs it greatly--a human being who has been dehumanized by his own choices and the influence of an inhuman ideology. By attempting to dehumanize Jews, the Nazi dehumanized himself to a much greater extent. Refusing to lie, Helga gives her witness to a tattered child of God, a being created to know and live by the truth in a community of trust, and she gives him a choice whether to embrace that community of trust or persevere on the road of self-destruction through alienation from what is centrally human. She does this by treating him as a trusting human rather than a machine to be manipulated. She does this in sadness, knowing that it is very likely that her gift of community will be refused, and will result in her own death and the deaths of those she is protecting. In so doing she upholds the dignity of everyone.

When I think about this in this way, I think of the sorts of things Christian pacifists say about their eschatological witness. But while I do embrace the idea that we should never lie, I do not embrace the pacifist rejection of violence. For I think that just violence can uphold the dignity of those we do violence to, in a way in which lying cannot. Just violence--even of an intentionally lethal sort--can accept the dignity of an evildoer as someone who has chosen a path that is wrong. We have failed to sway him by persuasion, but we treat him as a fellow member of the human community by violently preventing him from destroying the community that his own wellbeing is tied to, rather than by betraying with a lie the shattered remains of the trustful connection he has to that community.

I don't think the above is sufficient as an argument that lying is always wrong. But I think it gives some plausibility to that claim.

Saturday, November 28, 2015

Education about sports

A lot of worthwhile texts, both fiction and nonfiction, make direct reference to particular sports or use sports analogies or metaphors. These are difficult to understand for readers who do not know the rudiments of these sports. Yet there is not much education about these sports in school, except in the context of actual participation in them. And I suspect that only a minority of children in English-speaking countries participates in all of the culturally important sports that figure in English-language texts, sports such as American football, baseball, cricket, golf, hockey and soccer, understanding of which is needed for basic cultural literacy among readers of English (I have to confess to lacking that understanding in the case of most of these sports--my own school education was deficient in this respect). Thus, either there should be broader participation--but that is unsafe in the case of American football and likely impractical in the case of golf--or there should be teaching about the rules of sports outside of contexts of participation, say in English or history class.

This post is inspired by my daughter's noting her difficulties in reading the cricket-related bits of a P. G. Wodehouse novel.

Tuesday, November 24, 2015

Dutch Books and infinite sequences of coin tosses

Suppose we have an infinite sequence of independent and fair coins. A betting portfolio is a finite list of subsets of the space of outcomes (heads-tails sequences) together with a payoff for each subset. Assume:

  1. Permutation: If a rational agent would be happy to pay x for a betting portfolio, and A is one of the subsets in the betting portfolio, then she would also be happy to pay x for a betting portfolio that is exactly the same but with A replaced by A*, where A* is isomorphic to A under a permutation of the coins.
  2. Equivalence: A rational agent who is happy to pay x for one betting portfolio will be willing to accept an equivalent betting portfolio--one that is certain to give the same payoff for each outcome--for the same price.
  3. Great Deal: A rational agent will be happy to pay $1.00 for a betting portfolio where she wins $1.25 as long as the outcome is not all-heads or all-tails.
Leveraging the Axiom of Choice and using the methods of the Banach-Tarski Paradox, one can then find two betting portfolios that the agent would be happy to accept for $1.00 each such that it is certain that if she accepts them both, she will win at most $1.25; hence she will accept a sure loss of at least $0.75. For details of a closely related result, see Chapter 6 in Infinity, Causation and Paradox (draft temporarily here).
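Premise 3, at least, looks classically unimpeachable. A quick Python sketch (the function name and the n-toss truncation are my own illustration, not part of the argument) computes the expected value of the Great Deal:

```python
def great_deal_ev(n: int) -> float:
    """Classical expected value of the Great Deal on an n-toss truncation:
    the bettor wins $1.25 unless the tosses are all heads or all tails."""
    p_degenerate = 2 * 0.5 ** n  # P(all heads) + P(all tails)
    return 1.25 * (1 - p_degenerate)

for n in (1, 5, 10, 30):
    print(n, great_deal_ev(n))
```

In the genuinely infinite case the all-heads and all-tails outcomes have classical probability zero, so the expected value is exactly $1.25, comfortably above the $1.00 price; the paradox is that, given the Axiom of Choice, portfolios each as apparently reasonable as this combine into a guaranteed loss.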

So what to do? I think one should accept Causal Finitism, the doctrine that causal antecedents are always finite. Given Causal Finitism, one can't have a real betting scenario based on an infinite number of coin tosses. Moreover, the only known way to operationalize the use of the Axiom of Choice in the proof in a betting scenario also involves infinite causal dependencies.

Monday, November 23, 2015

Values cannot be accurately modeled by real numbers

Consider a day in a human life that is just barely worth living. Now consider the life of Beethoven. For no finite n would having n of the barely-worth-living days be better than having all of the life of Beethoven. This suggests that values in human life cannot be modeled by real numbers. For if a and b are positive numbers, then there is always a positive integer n such that nb>a. (I am assuming additiveness between the barely-liveable days. Perhaps memory wiping is needed to ensure additiveness, to avoid tedium?)
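The Archimedean property invoked in the last step can be made concrete with a small Python sketch (the particular numerical values assigned to the two lives are stipulations for illustration only; exact rational arithmetic avoids floating-point rounding):

```python
import math
from fractions import Fraction

def archimedean_n(a: Fraction, b: Fraction) -> int:
    """Smallest positive integer n with n*b > a, for positive rationals a, b."""
    assert a > 0 and b > 0
    return math.floor(a / b) + 1

a = Fraction(10**9)      # stipulated value of the life of Beethoven
b = Fraction(1, 10**9)   # stipulated value of one barely-worth-living day
n = archimedean_n(a, b)
assert n * b > a  # finitely many such days would outweigh the whole life
```

This is exactly what the argument denies can happen with real human values: no finite stack of barely-worthwhile days surpasses Beethoven's life, so no assignment of positive reals to both can be faithful.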

Friday, November 20, 2015

The value of victory

Is winning a game always worthwhile? Consider this solitary game: I guess a number, and if my number is different from the number of particles in the universe, then my score equals the number of particles in the universe. I can play this over and over, winning each time. If it's good for me to win at a game, I continue to rack up benefits. So for reasons of self-interest, I should play this game all the time. I could even set myself up as playing it by default: I announce that each time I breathe, the length of my inspiration in milliseconds counts as my guess. I will be racking up benefits every day, every night. But that's silly.

Wednesday, November 18, 2015

Another impossibility result for finitely additive probabilities and invariance

Consider a countably infinite sequence of fair and independent coin tosses. Given the Axiom of Choice, there is no finitely additive probability measure that satisfies these conditions:

  1. It is defined for all sets of outcomes.
  2. It agrees with the classical probabilities where these are defined.
  3. It is invariant under permutations of coins.
(Sketch of proof: Index the coins with members of the free group of rank two. The members of the group then induce permutations of coins, and hence act on the space of outcomes. The set of non-trivial fixed points under that action has classical probability zero. Throwing that out, we can use a standard paradoxical decomposition of the free group of rank two to generate a paradoxical decomposition of the rest of our space of outcomes--here Choice will be used--and that rules out the possibility of a finitely additive probability measure.)
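Though decomposing the whole outcome space needs the Axiom of Choice, the underlying paradoxical decomposition of the free group is entirely concrete and can be spot-checked on words of bounded length. Here is a Python sketch (the letter encoding of the generators and the length bound are my own illustrative choices) verifying the identity F2 = W(a) ∪ a·W(a^-1), a disjoint union, where W(g) is the set of reduced words beginning with g:

```python
GENS = "aAbB"  # A stands for a^-1, B for b^-1
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduced_words(max_len):
    """All reduced words of the free group F2, as strings of length <= max_len."""
    words, frontier = {""}, [""]
    for _ in range(max_len):
        frontier = [w + g for w in frontier for g in GENS
                    if not (w and INV[w[-1]] == g)]  # skip cancelling letters
        words.update(frontier)
    return words

def left_mul(g, w):
    """Multiply reduced word w on the left by generator g, freely reducing."""
    return w[1:] if w.startswith(INV[g]) else g + w

k = 6
words = reduced_words(k)
W_a = {w for w in words if w.startswith("a")}   # words starting with a
W_A = {w for w in words if w.startswith("A")}   # words starting with a^-1
shifted = {left_mul("a", w) for w in W_A}       # a * W(a^-1)
short = {w for w in words if len(w) <= k - 1}   # truncation where the check is exact

# On the truncation, W(a) and a*W(a^-1) partition the whole group:
assert (W_a & short) | shifted == short
assert (W_a & short).isdisjoint(shifted)
print("F2 = W(a) disjoint-union a*W(a^-1) verified up to length", k - 1)
```

The parallel identity with b in place of a yields a second copy of the whole group from the remaining pieces, which is what makes the decomposition paradoxical.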

Monday, November 16, 2015

"Even if" clauses in promises

If I promise to visit you for dinner, but then it turns out that I have a nasty case of the flu, I don't need to come, and indeed shouldn't come. But I could also promise to meet you for dinner even if I have a nasty case of the flu; then, if the promise is valid, I need to come even if I have the flu. I suspect, however, that typically such a promise would be immoral: I should not spread disease. But one can imagine cases where it would be valid--say, if you really would like to get the flu for a serious medical experiment on yourself.

In my previous post, I gave a case where it would be beneficial to have a promise that binds even when fulfilling it costs multiple lives. Thus, there is some reason to think that one could have promises with pretty drastic "even if" clauses such as "even if a terrorist kills ten people as a result of this." But clearly not every "even if" clause is valid. For instance, if I say I promise to visit you for dinner even if I have to endanger many lives by driving unsafely fast, my "even if" clause is not valid under normal circumstances (if we know that my coming to dinner would save lives, though, then it might be).

One can try to handle the question of distinguishing valid from invalid "even if" clauses by saying that the invalid case is where it is impermissible to do the promised thing under the indicated conditions. The difficulty, however, is that whether doing the promised thing is or is not permissible can depend on whether one has promised it. Again, the example from my previous post could apply, but there are more humdrum cases where one would have an on balance moral reason to spend the evening with one's family had one not promised to visit a friend.

Maybe this is akin to laws. In order to be valid, a law has to have a minimal rationality considered as an ordinance for the common good. In order to be valid, maybe a promise has to have a minimal rationality considered as an ordinance for the common human good with a special focus on the promisee? To promise to come to an ordinary dinner even if it costs lives does not satisfy that condition, while to promise to bring someone out of general anesthesia even if a terrorist kills people as a result could satisfy it under some circumstances. It would be nice to be able to say more, but maybe that can't be done.

Deontology and anti-utilitarian promises

Assume deontology. Can one make a promise so strong that one shouldn't break it even if breaking it saves a number of lives? I don't know for sure, but there are cases where such promises would be useful, assuming deontology.

Fred needs emergency eye-surgery. If he doesn't have the surgery this week, he will live, but he will lose sight in one eye. The surgery will be done under general anesthesia, and if Fred is not brought out of the general anesthesia he will die. But there is a complication. A terrorist has announced that if Fred lives out this week, ten random innocent people will die.

Prima facie here are the main options:

  1. Kill Fred. Nine lives on balance are saved.
  2. Do nothing. Fred loses binocular vision, and ten people are killed.
  3. Perform surgery as usual. Fred's full vision is saved, he lives, but ten people are killed.
Deontological constraints rule out (1). Clearly, (3) is preferable to (2). But there is a problem with (3). Once Fred has received general anesthesia, positive actions are needed to bring him out of it. These positive actions will cause the terrorist to kill ten people. Thus, bringing Fred to consciousness requires an application of the Principle of Double Effect: the intended effect is bringing Fred back to consciousness and keeping him from dying; an unintended side-effect is the terrorist's killing of ten. But as it stands, the proportionality condition in Double Effect fails: one should not save one person's life at the expense of ten others. So once Fred has received general anesthesia, Double Effect seems to prohibit bringing him out of it. But this means that there is no morally licit way to do (3), even though this seems the morally best of the unhappy options.

(A legalistic deontologist might try to suggest another option: Perform surgery, but don't bring Fred out of general anesthesia. The thought is that the surgery is morally permissible, and bringing Fred out of general anesthesia will be prohibited, but Fred's death is never intended. It is simply not prevented, and the reason for not preventing it is not in order to save lives, but simply because Double Effect prohibits us from bringing Fred out of general anesthesia. The obvious reason why this sophistical solution fails is that there is no rational justification for the surgery if Fred isn't going to be brought back to consciousness. I am reminded of a perhaps mythical Jesuit in the 1960s who would suggest to a married woman with irregular cycles that worked poorly with rhythm--or maybe NFP--that she could go on the Pill to regularize her cycles in order to make rhythm/NFP work better, and that once she did that, she wouldn't need rhythm/NFP any more. That's sophistical in a similar way.)

What we need for cases like this is promises that bind even at the expense of lives. Thus, the anesthesiologist could make such a strong promise to bring Fred back. As a result, Double Effect is not violated in solution (3), because proportionality holds: granted, on balance nine lives are lost, but also a promise is kept.

In practice, I think this is the solution we actually adopt. Maybe we do this through the Hippocratic Oath, or perhaps through an implicit taking on of professional obligations by the anesthesiologist. But it is crucial in cases like the above that such promises bind even in hard cases where lives hang on it.

All that said, the fact that it would be useful to have promises like this does not entail that there are promises like that. Still, it is evidence for it. And here we get an important metametaethical constraint: metaethical theories should be such that the usefulness of moral facts is evidence for them.

Friday, November 13, 2015

Bayesian divergence

Suppose I am considering two different hypotheses, and I am sure exactly one of them is true. On H, the coin I toss is chancy, with different tosses being independent, and has a chance 1/2 of landing heads and a chance 1/2 of landing tails. On N, the way the coin falls is completely brute and unexplained--it's "fundamental chaos", in the sense of my ACPA talk. So, now, you observe n instances of the coin being tossed, about half of which are heads and half of which are tails. Intuitively, that should support H. But if N is an option--if the prior probability of N is non-zero--we actually get Bayesian divergence as n increases: we get further and further from confirmation of H.

Here's why. Let E be my total evidence--the full sequence of n observed tosses. By Bayes' Theorem we should have:

P(H|E) = P(E|H)P(H)/[P(E|H)P(H) + P(E|N)P(N)].
But there is a problem: P(E|N) is undefined. What shall we do about this? Well, it is completely undefined. Thus, we should take it to be an interval of probabilities: the full interval [0,1]. The posterior probability P(H|E), then, will also be an interval between:
P(E|H)P(H)/[P(E|H)P(H) + (0)·P(N)] = 1
and
P(E|H)P(H)/[P(E|H)P(H) + (1)·P(N)] ≤ P(E|H)/P(N) = 2^-n / P(N).
(Remember that E is a sequence of n fair and independent tosses if H is true.) Thus, as the number of observations increases, the posterior probability for the "sensible" hypothesis H gets to be an interval [a,1], where a is very small. But something whose probability is almost the whole interval [0,1] is not rationally confirmed. So the more data we have, the further we are from confirmation.
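The divergence is easy to tabulate. In the Python sketch below, `posterior_interval` is my own helper name, and equal priors for H and N are assumed for illustration:

```python
def posterior_interval(n: int, prior_H: float):
    """Endpoints of P(H|E) as P(E|N) ranges over the whole interval [0, 1].

    E is one specific sequence of n tosses, so P(E|H) = 2**-n.
    """
    p_E_H = 0.5 ** n
    prior_N = 1.0 - prior_H
    lower = p_E_H * prior_H / (p_E_H * prior_H + 1.0 * prior_N)
    upper = 1.0  # attained by taking P(E|N) = 0
    return lower, upper

for n in (1, 10, 50):
    print(n, posterior_interval(n, prior_H=0.5))
```

With equal priors the lower endpoint is 2^-n/(2^-n + 1), which collapses toward 0 as n grows: the more tosses observed, the closer the posterior interval comes to the uninformative [0,1].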

This means that no-explanation hypotheses like N are pernicious to Bayesians: if they are not ruled out as having zero or infinitesimal probability from the outset, they undercut science in a way that is worse and worse the more data we get.

Fortunately, we have the Principle of Sufficient Reason which can rule out hypotheses like N.

Thursday, November 12, 2015

An aesthetic argument for the Axiom of Choice

The mathematics that supposes the Axiom of Choice is more beautiful than the mathematics that does not. So, the Axiom of Choice is probably true.

Wednesday, November 11, 2015

Positing non-epistemic vagueness doesn't solve a puzzle

Suppose we want to explain why one tortoise doesn't fall down, and we explain this by saying that it's standing on two tortoises. And then, to explain why the two lower tortoises don't fall down, we suppose that each stands on two tortoises. And so on. That's terrible: we're constantly explaining one puzzling thing by two that are just as puzzling.

Now suppose we try to explain the puzzle of the transition from bald to non-bald in a Sorites sequence of heads of hair (no hair, one hair, two hairs, etc.). We do this by saying that there are going to be vague cases of baldness. But this is just like the case of the tortoises. For while previously we had one puzzling transition, from bald to non-bald, now we have two puzzling transitions, from definitely bald to vaguely bald and from vaguely bald to definitely non-bald. So, we repeat with higher levels of vagueness. The transition from definitely bald to vaguely bald yields a transition from definitely bald to vaguely vaguely bald and a transition from vaguely vaguely bald to definitely vaguely bald, and similarly for the transition from vaguely bald to definitely non-bald. At each stage, each transition is replaced with two. We're constantly explaining one puzzling thing by two that are just as puzzling.

That said, it is possible with care to stand a tortoise on two tortoises, and we could have evidence that a particular tortoise is doing that. In that case, the two tortoises aren't posited to solve a puzzle, but simply because we have evidence that they are there. A similar thing could be the case with baldness. We might just have direct evidence that there is vagueness in the sequence. But as we go a level deeper, I suspect the evidence peters out. After all, in ordinary discourse we don't talk of vague vagueness and the like. So perhaps we might have a view on which there is one level of vagueness--and then epistemicism, i.e., a sharp transition from definitely bald to vaguely bald, and another from vaguely bald to definitely non-bald. But the more levels we posit, the more we offend against parsimony.

Tuesday, November 10, 2015

Parameters in ethics

In physical laws, there are a number of numerical parameters. Some of these parameters are famously part of the fine-tuning problem, but all of them are puzzling. It would be really cool if we could derive the parameters from elegant laws that lack arbitrary-seeming parameters, but as far as I can tell most physicists doubt this will happen. The parameters look deeply contingent: other values for them seem very much possible. Thus people try to come up either with plenitude-based explanations where all values of parameters are exemplified in some universe or other, or with causal explanations, say in terms of universes budding off other universes or a God who causes universes.

Ethics also has parameters. To further spell out an example from Aquinas' discussion of the order of charity, fix a set of specific circumstances involving yourself, your father and a stranger, where both your father and the stranger are in average financial circumstances, but are in danger of a financial loss, and you can save one, but not both, of them from the loss. If it's a choice between saving your father from a ten dollar loss or the stranger from an eleven dollar loss, you should save your father from the loss. But if it's a choice between saving your father from a ten dollar loss or the stranger from a ten thousand dollar loss, you should save the stranger from the larger loss. As the loss to the stranger increases, at some point the wise and virtuous agent will switch from benefiting the father to benefiting the stranger. The location of the switch-over is a parameter.

Or consider questions of imposition of risk. To save one stranger's life, it is permissible to impose a small risk of death on another stranger, say a risk of one in a million. For instance, an ambulance driver can drive fast to save someone's life, even though this endangers other people along the way. But to save a stranger's life, it is not permissible to impose a 99% risk of death on another stranger. Somewhere there is a switch-over.

There are epistemic problems with such switch-overs. Aquinas says that there is no rule we can give for when we benefit our father and when we benefit a stranger, but we must judge as the prudent person would. However, I am not interested right now in the epistemic problem, but in the explanatory problem. Why do the parameters have the values they do? Now, granted, the particular switchover points in my examples are probably not fundamental parameters. The amount of money that a stranger needs to face in order that you should help the stranger rather than saving your father from a loss of $10 is surely not a fundamental parameter, especially since it depends on many of the background conditions (just how well off your father and the stranger are; what exactly your relationship with your father is; etc.). Likewise, the saving-risking switchover may well not be fundamental. But just as physicists doubt that one can derive the value of, say, the fine-structure constant (which measures the strength of electromagnetic interactions between charged particles) from laws of nature that contain no parameters other than elegant ones like 2 and π, even though it is surely a very serious possibility that the fine-structure constant isn't truly fundamental, so too it is doubtful that the switchover points in these examples can be derived from fundamental laws of ethics that contain no parameters other than elegant ones. If utilitarianism were correct, it would be an example of a parameter-free theory providing such a derivation. But utilitarianism predicts the incorrect values for the parameters. For instance, it incorrectly predicts that the risk value at which you need to stop risking a stranger's life to certainly save another stranger is 1, so that you should put one stranger in a position of 99.9999% chance of death if that has a certainty of saving another stranger.

So we have good reason to think that the fundamental laws of ethics contain parameters that suffer from the same sort of apparent contingency that the physical ones do. These parameters, thus, appear to call for an explanation, just as the physical ones do.

But let's pause for a second in regard to the contingency. For there is one prominent proposal on which the laws of physics end up being necessary: the Aristotelian account of laws as grounded in the essences of things. On such an account, for instance, the value of the fine-structure constant may be grounded in the natures of charged particles, or maybe in the nature of charge tropes. However, such an account really does not remove contingency. For on this theory, while it is not contingent that electromagnetic interactions between, say, electrons have the magnitude they do, it is contingent that the universe contains electrons rather than shmelectrons, which are just like electrons, but they engage in shmelectromagnetic interactions that are just like electromagnetic interactions but with a different quantity playing the role analogous to the fine-structure constant. In a case like this, while technically the laws of physics are necessary, there is still a contingency in the constants, in that it is contingent that we have particles which behave according to this value rather than other particles that would behave differently. Similarly, one might say that it is a necessary truth that such-and-such preferences are to be had between a father and a stranger, and that this necessary truth is grounded in the essence of humanity or in the nature of a paternity trope. But there is still a contingency that our world contains humans and fathers rather than something functionally very similar to humans and fathers but with different normative parameters.

So in any case we have a contingency. We need a meta-ethics with a serious dose of contingency, contingency not just derivable from the sorts of functional behavior the agents exhibit, but contingency at the normative level--for instance, contingency as to appropriate endangering-saving risk tradeoffs. This contingency undercuts the intuitions behind the thesis that the moral supervenes on the non-moral. Here, both Natural Law and Divine Command rise to the challenge. Just as the natures of contingently existing charged objects can ground the fine-structure constants governing their behavior, the natures of contingently existing agents can ground the saving-risking switchover values governing their behavior. And just as occasionalism can have God's causation ground the arbitrary-seeming parameters in the laws of physics, so God's commands can ground the arbitrary-seeming parameters in ethics (the illuminating analogy between occasionalism and Divine Command is due to Mark Murphy). Can other theories rise to the challenge? Maybe. But in any case, it is a genuine challenge.

It would be particularly interesting if there were an analogue to the fine-tuning argument in this case. The fine-tuning argument arises because in some sense "most" of the possible combinations of values of parameters in the laws of physics do not allow for life, or at least for robust, long-lasting and interesting life. I wonder if there isn't a similar argument on the ethics side, say that for "most" of the possible combinations of parameters, we aren't going to have the good moral communities (the good could be prior to the moral, so there may be no circularity in the evaluation)? I don't know. But this would be an interesting research project for a graduate student to think about.

Objection: The switchover points are vague.

Response: I didn't say they weren't. The puzzle is present either way. Vagueness doesn't remove arbitrariness. With a sharp switchover point, just the value of it is arbitrary. But with a vague switchover point, we have a vagueness profile: here something is definitely vaguely obligatory, here it is definitely vaguely vaguely obligatory, here it is vaguely vaguely vaguely obligatory, etc. In fact, vagueness may even multiply arbitrariness, in that there are a lot more degrees of freedom in a vagueness profile than in a single sharp value.

Monday, November 9, 2015

Four plausibilistic arguments for redirecting the trolley

Start with the standard scenario: trolley speeding towards five innocent strangers, and you can flip a lever to redirect it to a side-track with only one innocent stranger. Here are four arguments each making it plausible that redirecting the trolley is right. [Unfortunately, as you can see from the comments, the first three arguments, at least, are very weak. - ARP]

1. Back and forth: Suppose there is just enough time to flip the lever to redirect and then flip it back--but no more time than that. Assuming one shouldn't redirect, there is nothing wrong with flipping the lever if one has a firm plan to flip it back immediately. After all, nobody is harmed by such a there-and-back movement. The action may seem flippant (pun not intended--I just can't think of a better term), but we could suppose that there is good reason for it (maybe it cures a terrible pain in your arm). But now suppose that you're half-way through this action. You've flipped the lever. The trolley is now speeding towards the one innocent. At this point it is clearly wrong for you to flip it back: everyone agrees that a trolley speeding towards one innocent stranger can't be redirected towards five. This seems paradoxical: the compound action would be permissible, but you'd be obligated to stop half way through. If redirecting the trolley is the right thing to do, we can block the paradox by saying that it's wrong to flip it there and back, because it is your duty to flip it there.

2. Undoing. If you can undo a wrong action, getting everything back to the status quo ante, you probably should. So if it's wrong to flip the lever, then if you've flipped the lever, you probably should flip it back, to undo the action. But flipping it back is uncontroversially wrong. So, probably, flipping the lever isn't wrong.

3. Advice and prevention. Typically, it's permissible to dissuade people who are resolved on a wrong action. But if someone is set on flipping the lever, it's wrong to dissuade her. For once she is resolved on flipping the lever, it is the one person on the side-track who is set to die, and so dissuading the person from flipping the lever redirects death onto the five again. But it's clearly wrong to redirect death onto the five. So, probably, flipping the lever isn't wrong. Similarly, typically one should prevent wrongdoing. But to prevent the flipping of the lever is to redirect the danger onto the five, and that's wrong.

4. Advice and prevention (reversed). The trolley is speeding towards the side-track with one person, and you see someone about to redirect the trolley onto the main track with five persons. Clearly you should try to talk the person out of it. But talking her out of it redirects death from the five innocents to the one. Hence it's right to engage in such redirection. Similarly, it's clear that if you can physically prevent the person from redirecting the trolley onto the main track, you should. But that's redirection of danger from five to one.

Trolleys, breathing, killing and letting die

Start with the standard trolley scenario: trolley is heading towards five innocent people but you can redirect it towards one. Suppose you think that it is wrong to redirect. Now add to the case the following: You're restrained in the control booth, and the button that redirects the trolley is very sensitive, so if you breathe a single breath over the next 20 seconds, the trolley will be redirected towards the one person.

To breathe or not to breathe, that is the question. If you breathe, you redirect. Suppose you hold your breath, thinking that redirecting is wrong. Why are you holding your breath, then? To keep the trolley away from the one person. But by holding your breath, you're also keeping the trolley on course towards the five. If in the original case it was wrong to redirect the trolley towards the one, why isn't it wrong to hold your breath so as to keep the trolley on course towards the five? So perhaps you need to breathe. But if you breathe, your breathing redirects the trolley, and you thought that was wrong.

I suppose the intuition behind not redirecting in the original case is a killing vs. letting die intuition: By redirecting, you kill the one. By not redirecting, you let the five die, but you don't kill them. However, when the redirection is controlled by the wonky button, things perhaps change. For perhaps holding one's breath is a positive action, and not just a refraining. So in the wonky button version, holding one's breath is killing, while breathing is letting die. So perhaps the person who thinks it's wrong to redirect in the original case can consistently say that in the breath case, it's obligatory to breathe and redirect.

But things aren't so simple. It's true that normally breathing is automatic, and that it is the holding of one's breath rather than the breathing that is a positive action. But if lives hung on it, you'd no doubt become extremely conscious of your breathing. So conscious, I suspect, that every breath would be a positive decision. So to breathe would then be a positive action. And so if redirecting in the original case is wrong, it's wrong to breathe in this case. Yet holding one's breath is generally a decision, too, a positive action. So now it's looking like in the breath-activated case, whatever happens, you do a positive action, and so you kill in both cases. It's better to kill one than to kill five, so you should breathe.

But this approach makes what is right and wrong depend too much on your habits. Suppose that you have been trained for rescue operations by a utilitarian organization, so that it became second nature to you to redirect trolleys towards the smaller number of people. But now you've come to realize that utilitarianism is false, and you haven't been convinced by the Double Effect arguments for redirecting trolleys. Still, your instincts remain. You see the trolley, and you have an instinct to redirect. You would have to stop yourself from it. But stopping yourself is a positive action, just as holding your breath is. So by stopping yourself, you'd be killing the five. And by letting yourself go, you'd be killing the one. So by the above reasoning, you should let yourself go. Yet, surely, whether you should redirect or not doesn't depend on which action is more ingrained in you.

Where is this heading? Well, I think it's a roundabout reductio ad absurdum of the idea that you shouldn't redirect. The view that you should redirect is much more stable under such tweaks. If you say in the original case that you should redirect, then you can say the same thing about all the other cases.

I think the above line of thought should make one suspicious of other cases where people want to employ the distinction between killing and letting-die. (Perhaps instead one should employ Double Effect or the distinction between ordinary and extraordinary means of sustenance.)

Friday, November 6, 2015

Pacifism and trolleys

In the standard trolley case, a runaway trolley is heading towards five innocent people, but can be redirected onto a side-track where there is only one innocent person. I will suppose that the redirection is permissible. This is hard to deny. If redirection here is impermissible, it's impermissible to mass-manufacture vaccines, since mass vaccinations redirect death from a larger number of potentially sick people to a handful of people who die of vaccine-related complications. But vaccinations are good, so redirection is permissible.

I will now suggest that it is difficult to be a pacifist if one agrees with what I just said.

Imagine a variant where the one person on the side-track isn't innocent at all. Indeed, she is the person who set the trolley in motion against the five innocents, and now she's sitting on the side-track, hoping that you'll be unwilling to get your hands dirty by redirecting the trolley at her. Surely the fact that she's a malefactor doesn't make it wrong to direct the trolley at the side-track she's on. So it is permissible to protect innocents by activity that is lethal to malefactors.

This conclusion should already make a pacifist a bit uncomfortable, but perhaps a pacifist can say that it is wrong to protect innocents by violence that is lethal to malefactors. I don't think this can be sustained. For protecting innocents by non-lethal violence is surely permissible. It would be absurd to say a woman can't pepper-spray a rapist. But now modify the trolley case a little more. The malefactor is holding a remote control for the track switch, and will not give it to you unless you violently extract it from her grasp. You also realize that in the process of violently extracting the remote control from the malefactor, the button that switches the tracks will be pressed. Thus your violent extraction of the remote will redirect the trolley at the malefactor. Yet surely if it is permissible to do violence to the malefactor and it is permissible to redirect the trolley, it is permissible to redirect the trolley by violence done to the malefactor. But if you do that, you will do a violent action that is lethal to the malefactor.

So it is permissible to protect innocents by violence that is lethal to malefactors. Now, perhaps, it is contended that in the last trolley case, the death of the malefactor is incidental to the violence. But the same is true when one justifies lethal violence in self-defense by means of the Principle of Double Effect. For instance, one can hit an attacker with a club intending to stop the malefactor, with the malefactor's death being an unintended side-effect.

This means that if it is permissible to redirect the trolley, some lethal violence is permissible. What is left untouched, however, by this argument is a pacifism that says that it is always impermissible to intend a malefactor's death. I disagree with that pacifism, too, but this argument doesn't affect it.

Thursday, November 5, 2015

Cheating, throwing a match and perversion

To pervert a social practice is to engage in it while subverting a defining end. Sports and other games are social practices one of whose internal ends is score (which generalizes victory). To throw a match is, thus, a form of perversion: it subverts the defining end of score.

Interestingly, cheating is actually a case of throwing a match. To score one must follow the rules. Thus cheaters don't win: they at most appear to. Their cheating subverts score, a defining end of the game. Cheating is, thus, a form of perversion.

The difference between the cheat and the paradigm case of throwing a match is that the cheat seeks to make it look like he did well at the game while the ordinary thrower of a match doesn't.

Relational gender essentialism

It might turn out to be like this: There is no significant difference between matter and antimatter, except insofar as they are related to one another. A proton is attracted to an antiproton, while each is repelled by its own kind. Our universe, as a contingent matter of fact, has more matter than antimatter. But, perhaps, if one swapped the matter and antimatter, the resulting universe wouldn't be different in any significant way. If this is true, we might say that there is a relational matter-antimatter essentialism. It is of great importance to matter and to antimatter that they are matter and antimatter, respectively, but it is important only because of the relation between the two, not because of intrinsic differences.

I don't know if it's like that with matter and antimatter, but I do know that it's like that with the Father, the Son and the Holy Spirit. The only important non-contingent differences are those constituted by the relationships between them. (There are also contingent extrinsic differences.)

Could it be like that with men and women? The special relation between men and women--say, that man is for woman and woman for man, or that one of each is needed for procreation--is essential and important to men and women. But there are no important non-contingent intrinsic differences on this theory.

There might, however, be important contingent theological differences due to some symmetry-breaking contingent event or events. Maybe, when the Logos became one human being, the Logos had to become either a man or a woman. If the relation between men and women is important, the decision whether to become a man or to become a woman might have been a kind of symmetry-breaking, with other differences in salvation history following on it. In itself, that decision could have been unimportant. If the Logos had become a woman, we would have had a salvation history that was very much the same, except now Sarah would have been asked to sacrifice a daughter, we would have had an all-female priesthood, and so on.

Or perhaps the symmetry-breaking came from the contingent structure of our sinfulness. Perhaps the contingent fact that men tended to oppress women more than the other way around made it appropriate for the Logos to become a man, so as to provide the more sorely needed example of a man becoming the servant of all and sacrificing himself for all, and in turn followed the other differences.

I don't know if relational gender essentialism is the right picture. But it's a picture worth thinking about.

Wednesday, November 4, 2015

Why do we need consciousness?

A perfect Bayesian agent is really quite simple. It has a database of probability assignments, a utility function, an input system and an output system. Inputs change the probability assignments according to simple rules. It computes which outputs maximize expected utility (either causally or evidentially--it won't matter for this post), and it produces those outputs (in cases of ties, it can take the lexicographically first option).
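Just to underline how simple such an agent is, here is a minimal sketch in Python. The toy states, actions and utility numbers are my own invented illustrations, not anything from the literature; the only point is that the whole decision rule, tie-breaking included, fits in a few lines.

```python
def expected_utility(action, probs, utility):
    """Expected utility of an action, given a table of probability
    assignments over world-states."""
    return sum(p * utility(state, action) for state, p in probs.items())

def choose(actions, probs, utility):
    """The perfect Bayesian agent's output rule: maximize expected
    utility, breaking ties by taking the lexicographically first option."""
    best = max(expected_utility(a, probs, utility) for a in actions)
    return min(a for a in actions if expected_utility(a, probs, utility) == best)

# Invented toy example: two states, two actions.
probs = {"rain": 0.3, "sun": 0.7}

def utility(state, action):
    table = {("rain", "umbrella"): 1, ("rain", "none"): -2,
             ("sun", "umbrella"): 0, ("sun", "none"): 2}
    return table[(state, action)]

print(choose(["none", "umbrella"], probs, utility))  # prints "none"
```

Notice that nothing in this loop is conscious, free, reflective or morally constrained: it is arithmetic over a lookup table, which is the puzzle the post is pressing.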

In particular, there is no need for consciousness, freedom, reflection, moral constraints, etc. Moreover, apart perhaps from gerrymandered cases (Newcomb?), for maximizing expected utility of a fixed utility function, the perfect Bayesian agent is as good as one can hope to get.

So, if we are the product of entirely unguided evolution, why did we get consciousness and these other things that the perfect Bayesian agent doesn't need, rather than just a database of probability assignments, a utility function keyed to reproductive potential, and finely-honed input and output systems? Perhaps these came as some sort of compensation for our not being perfect Bayesian agents. There is an interesting research program available here: find out how these things we have compensate for the shortfalls, say, by allowing lossy compression of the database of probability assignments or providing heuristics in lieu of full optimizations. I think that some of the things the perfect Bayesian agent doesn't need can fit into these categories (some reflection and some moral constraints). But I doubt consciousness is on that list.

Consciousness, I think, points towards a very different utility function than one we would expect in an unguidedly evolutionarily produced system. Say, a utility function where contemplation is a highest good, and our everyday consciousness (and even that of animals) is a mirror of that contemplation.

Monday, November 2, 2015

Empathy and inappropriate suffering

Consider three cases of inappropriate pains:

  1. The deep sorrow of a morally culpable racist at social progress in racial integration.
  2. Someone's great pain at minor "first world problems" in their life.
  3. The deep sorrow of a parent who has been misinformed that their child died.
All three cases are ones where something has gone wrong in the pain. The pain is not veridical. In the first case, the pain represents as bad something that is actually good. In the second, the pain represents as very bad something that is only somewhat bad. In the third, the pain represents as bad a state of affairs that didn't take place. There is a difference, however, between the first two cases and the third. In the third case, the value judgment embodied in the pain is entirely appropriate. In the first two cases, the value judgment is wrong--badly wrong in the first case and somewhat wrong in the second.

Let's say that full empathy involves feeling something similar to the pain that the person being empathized with feels. In the parent case, full empathy is the right reaction by a third party, even a third party who knows that the child had not died (but, say, is unable to communicate this to the parent). But in the racist and first-world-problem cases, full empathy is inappropriate. We should feel sorry for those who have the sorrows, but I think we don't need to "feel their pain", except in a remote way. Instead, what should be the object of our sorrow is the value system that gives rise to the pain, something the person himself or herself takes no pain in.

I think that in appropriate empathy, one feels something analogous to what the person one empathizes with feels. But the kind of analogy that exists is going to depend on the kind of pain that is involved. In particular, I think the following three cases will all involve different analogies: morally appropriate psychological pain; morally inappropriate psychological pain; physical pain. I suspect that "full empathy", where the analogy involves significant similarity, should only occur in the first of the three cases.

An experiment in book writing and crowdsourcing comments

I am working away at Infinity, Causation and Paradox, and am about halfway to the first draft. As an experiment, I am putting all my in-progress materials on github. There is both TeX source and a periodically updated pdf file of the whole thing (click on "document.pdf" and choose "Raw").

To report bugs, i.e., philosophical errors, nonsequiturs, typos, etc., or to make improvement suggestions, click on "Issues" at the top of the github page (preferred), or just email me.

I will be committing new material to the repository after each day's work on the book. Click on the commit description to see what I've added. If you must quote from the manuscript, explicitly say that you are quoting from "an in-progress early draft".

Please bear in mind that this is super rough: a gradually growing first draft.

Please note that while you're permitted to download the material for your personal use only, you are not permitted to redistribute any material. The repository may disappear at any time (and in particular when the draft is ready for initial submission).