Wednesday, March 31, 2021

Complicity, Fetal Tissue and Vaccines

I just got permission from the National Catholic Bioethics Quarterly to post my 2006 paper on fetal tissue and vaccines. It is here.

Tuesday, March 30, 2021

Vaccines and cell-lines descended from tissue derived from abortion

A number of Catholic authorities have made a moral distinction between the Pfizer and Moderna vaccines, on the one hand, and the AstraZeneca and Johnson & Johnson vaccines, on the other, with respect to the involvement of cells that descend (after many generations) from an aborted fetus (e.g., see here and here). The difference appears to be that in the Pfizer and Moderna cases, the cells were only used in a confirmatory test of efficacy, while in the other two vaccines, they were used throughout the development and production. Consequently, the Catholic authorities said that the AstraZeneca (AZ) and Johnson & Johnson (JJ) vaccines may be used if no alternatives are available, but the Pfizer and Moderna are preferable if available.

A colleague at another institution asked me if I thought that the moral distinction here is sustainable. I do think it is, and my judgment concurs very closely with the recent statements from Catholic authorities: all four vaccines may (indeed, should) be used, but the Pfizer and Moderna ones are to be chosen when available.

In a 2004 paper, I argued that (a) while it is not categorically forbidden to engage in research using cell-lines that ultimately descend from tissue from an aborted fetus, (b) this may only be done for “sufficiently beneficial purposes”. Such research—and likewise the use of the fruits of the research—is thus a situation that involves the weighing of different factors rather than categorical prohibitions. It seems clearly right that in the case of the vaccines (AZ and JJ) where the illicitly derived cell-lines are used more heavily, we have more of the morally problematic feature, and hence we need greater benefits to outweigh it. Those benefits are available in the case of the current pandemic when alternatives that involve less of the problematic features are not available: thus the AZ and JJ vaccines may be used when the alternatives are not available. But when the alternatives—which also appear to be significantly more effective as vaccines!—are available, then they should be used.

In a 2006 paper, I argued that the Principle of Double Effect allows one to use, and even manufacture, vaccines that make use of the morally tainted cell-lines. The use of the cell-lines in itself is not innately morally evil (after all, it need not be wrong to transplant an organ from a murder victim). What is problematic is what I call “downstream cooperation” with the plans of those involved in the evil of abortion: they likely acted in part (probably in very small part) in order to procure tissue for public health benefits, and now by using the vaccine, we are furthering their plans. But one need not intend to be furthering these plans. Thus, that “downstream cooperation” is something one should weigh using the complex proportionality calculus of Double Effect. In the paper I concluded that the use of the vaccines is permissible, and in the present emergency the point is even clearer. However, it seems to me that the more heavily the cell-lines are used, the more there is of the unintended but still problematic cooperation with the plans of those involved in the evil of abortion, and so one should opt for those vaccines where the cooperation is lesser when possible.

I note that even apart from the moral considerations involving cell-lines descended from aborted fetuses, in a time of significant and unfortunate public vaccine scepticism, it was rather irresponsible from the public health standpoint for the vaccine manufacturers to have made use of such cell-lines if there was any way of avoiding this (and I do not know whether there was, given the time available).

Monday, March 29, 2021

The pace of reception of goods

Suppose I know that from now on for an infinite number of years, I will be offered an annual game. A die will be rolled, I will be asked to guess whether the die will show six, and if I guess right, I will get a slice of delicious chocolate cake (one of my favorite foods).

Intuitively, I rationally should guess “Not a six”, and thereby get a 5/6 chance of the prize instead of the 1/6 chance if I guess “Six”.
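
A minimal simulation sketch of the two guessing policies bears out the arithmetic (the number of simulated years is just an illustrative assumption, and the prize is counted simply as one slice per correct guess):

    import random

    def prize_rate(guess_six, years=100_000, seed=0):
        """Fraction of years in which a fixed guessing policy wins the slice."""
        rng = random.Random(seed)
        wins = 0
        for _ in range(years):
            roll = rng.randint(1, 6)
            if guess_six == (roll == 6):
                wins += 1
        return wins / years

    print(prize_rate(guess_six=False))  # close to 5/6, about 0.833
    print(prize_rate(guess_six=True))   # close to 1/6, about 0.167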

But suppose that instead of the prizes being slices of chocolate cake, there is an infinite supply of delightful and varied P. G. Wodehouse novels (he’s one of my favorite authors), numbered 1, 2, 3, …, and each prize is the opportunity to read the next one. Moreover, the pleasure of reading book n after book n − 1 is the same regardless of whether the interval in between is longer or shorter, there being advantages and disadvantages of each interval that cancel out (at shorter intervals, one can make more literary connections between novels and remember the recurring characters a little better; but at longer intervals, one’s hunger for Wodehouse will have grown).

Now, it is clear that there is no benefit to guessing “Not a six” rather than “Six”. For whatever I guess, I am going to read every book eventually, and the pace at which I read them doesn’t seem prudentially relevant.

At this point, I wonder if I should revise my statement that in the cake case I should guess “Not a six”. I really don’t know. I can make the cake case seem just like the book case: There is an infinite supply of slices of cake, frozen near-instantly in liquid helium and numbered 1, 2, 3, …, and each time I win, I get the next slice. So it seems that whatever I do, I will eat each slice over eternity. So what difference does it make that if I guess “Not a six”, I will eat the slices at a faster pace?

On the other hand, it feels that when the pleasures are not merely equal in magnitude but qualitatively the same as in the cake case, the higher pace does matter. Imagine a non-random version where I choose between getting the prize every year and getting it every second year. Then on the every-second-year plan, the prize days are a proper subset of the prize days on the every-year plan. In the cake case, that seems to be all that matters, and so the every-year plan is better. But in the Wodehouse case, this consideration is undercut by the fact that each pleasure is different in sort, because I said the novels are varied, and I get to collect one of each regardless of which installment plan I choose.

Here is another reason to think that in the cake case, the pace matters: It clearly matters in the case of non-varied pain. It is clearly better to have a tooth pulled every two years than every year. But what about varied torture from a highly creative KGB officer? Can’t I say that on either installment plan, I get all the tortures, so neither plan is worse than the other? That feels like the wrong thing to say: the every-second-year plan still seems better even if the tortures are varied.

I am fairly confident that in the novel case—and especially if the novels continue to be varied—the pace doesn’t matter, and so in the original game version, it doesn’t matter how I gamble. I am less confident of what to say about the cake version, but the torture case pushes me to say that in the cake version, the pace does matter.

Friday, March 26, 2021

Credences and decision-theoretic behavior

Let p be the proposition that among the last six coin tosses worldwide that preceded my typing the period at the end of this sentence, there were exactly two heads. The probability of p is 6!/(2⁶⋅2!⋅4!) = 15/64.

Now that I know that, what is my credence in p? Is it 15/64? I don’t think so. I don’t think my credences are that precise. But if I were engaging in gambling behavior with amounts small enough that risk aversion wouldn’t come into play, now that I’ve done the calculation, I would carefully and precisely gamble according to 15/64. Thus, I do not think my decision-theoretic behavior reflects my credence—and not through any irrationality in my decision-theoretic behavior.

Here’s a case that makes the point perhaps even more strongly. Suppose I didn’t bother to calculate what fraction 6!/(2⁶⋅2!⋅4!) was, but given any decision concerning p, I calculate the expected utilities by using 6!/(2⁶⋅2!⋅4!) as the probability. Thus, if you offer to sell me a gamble where I get $19 if p is true, I would value the gamble at $19 ⋅ 6!/(2⁶⋅2!⋅4!), and I would calculate that quantity as $4.45 without actually calculating 6!/(2⁶⋅2!⋅4!). (E.g., I might multiply 19 by 6! first, then divide by 2⁶⋅2!⋅4!.) I could do this kind of thing fairly mechanically, without noticing that $4.45 is about a quarter of $19, and hence without having much of an idea as to where 6!/(2⁶⋅2!⋅4!) lies in the 0 to 1 probability range. If I did that, then my decision-theoretic behavior would be quite rational, and would indicate a credence of 15/64 in p, but in fact it would be pretty clearly incorrect to say that my credence in p is 15/64. In fact, it might not even be correct to say that I assigned a credence less than a half to p.
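
For concreteness, here is a minimal sketch of the mechanical calculation just described (the $19 gamble is the one from the example above):

    from math import factorial

    # Probability of exactly two heads in six fair tosses: 6!/(2^6 * 2! * 4!) = 15/64.
    prob = factorial(6) / (2**6 * factorial(2) * factorial(4))
    print(prob)             # 0.234375, i.e., 15/64

    # Valuing the $19-if-p gamble "mechanically": multiply by 6! first, then divide.
    value = 19 * factorial(6) / (2**6 * factorial(2) * factorial(4))
    print(round(value, 2))  # 4.45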

I could even imagine a case like this. I make an initial mental estimate of what 6!/(26⋅2!⋅4!) is, and I mistakenly think it’s about three quarters. As a result, I am moderately confident in p. But whenever a gambling situation is offered to me, instead of relying on my moderate confidence, I do an explicit numerical calculation, and then go with the decision recommended to me by expected utility maximization. However, I don’t bother to figure out how the results of these calculations match up with what I think about p. If you were to ask me, I would say that p is likely true. But if you were to offer me a gamble, I would do calculations that better fit with the hypothesis of my having a credence close to a quarter. In this case, I think my real credence is about three quarters, but my rational decision-theoretic behavior is something else altogether.

Furthermore, there seems to me to be a continuum between my decision-theoretic behavior coming from mental calculation, pencil-and-paper calculation, the use of a calculator or the use of a natural language query system that can be asked “What is the expected utility of gambling on exactly two of six coin tosses being heads when the prize for being right is $19?” (a souped up Wolfram Alpha, say). Clearly, the last two need not reflect one’s credences. And by the same token, I think that neither need the first two.

All this suggests to me that decision-theoretic behavior lacks the kind of tight conceptual connection to credences that people enamored of representation theorems would like.

Wednesday, March 24, 2021

Doing and refraining, and proportionality

In my previous post, I suggested that proportionality considerations in Double Effect work differently for positive actions (doings) than for negative ones (refrainings). One thing that is now striking me is that there is an interesting asymmetry with respect to relational features that is brought out by thinking about pairs of trolley cases with different groups of people on the two tracks, but where we vary which track the trolley is initially heading for.

For an initial pair of cases, suppose on one track is someone one has a close relationship with (one’s child, spouse, parent, sibling, close friend, etc.)—“friend” is the term I will use for convenience—and on the other track a stranger. Then:

  • It’s completely clear that if the trolley is heading for the stranger, it is permissible not to redirect the trolley.

  • It’s significantly less clear but plausible that if the trolley is heading for the friend, it is permissible to redirect the trolley.

In this case, I already feel a moral difference between doing, i.e., redirecting the trolley, and refraining, i.e., leaving the trolley be, even though my permissibility judgment is the same in the two cases: redirecting the trolley towards the stranger and allowing the trolley to hit the stranger are both permissible. And yet regardless of where the trolley is initially heading, there are the same two outcomes: either a stranger dies or a friend dies. The difference between the cases seems to be solely grounded in which outcome is produced by doing (redirecting) and which by refraining (not redirecting).

Suppose we vary the ratio of strangers to friends in this case. At a 2:1 ratio of strangers to friends, my intuitions say:

  • It’s very plausible that if the trolley is heading for the strangers, it is permissible not to redirect the trolley.

  • I can’t tell whether if the trolley is heading for the friends, it is permissible to redirect the trolley.

As the ratio of strangers to friends increases, my intuition shifts in favor of saving the greater number of strangers. But, nonetheless, my intuition consistently favors saving the strangers more strongly when this is done by refraining-from-redirecting than when this is done by redirecting. Thus, even at a 10:1 ratio of strangers to friends:

  • It’s almost completely clear that if the trolley is heading for the strangers, it is morally required to redirect.

  • It’s completely clear that if the trolley is heading for the friends, it is morally forbidden to redirect.

In fact, I think there are points where the ratio of strangers to friends is both sufficiently high that:

  • If the trolley is heading for the friends, it is forbidden to redirect.

and yet still sufficiently low that:

  • If the trolley is heading for the strangers, it is not required to redirect.

I feel that 3:2 may be such a ratio, though the details will depend on the exact nature of one’s relationship with the friends.

These cases suggest to me that the proportionality requirements governing refrainings and doings are different. It is consistently easier to justify refraining from redirecting than to justify redirecting, even when the consequences are the same. Nonetheless, even though the proportionality requirements are different, in the cases above they do not look qualitatively different, but only quantitatively so.

Monday, March 22, 2021

Doing, refraining and Double Effect

The Principle of Double Effect seems to imply that either there are real dilemmas—cases where an action is both forbidden and required—even for agents who have always been virtuous and well-informed, or else there is a morally significant distinction between doing and refraining.

Here is the argument. Consider two cases. In both cases, you know that teenage and now innocent Adolf will kill tens of millions of innocents unless he dies now.

  1. Adolf is drowning. You can throw him a life-preserver.

  2. Adolf is on top of a cliff. You can give him a push.

Double Effect prohibits throwing Adolf a life-preserver. For Double Effect says that an action that has good and bad foreseen consequences is only permissible when the bad effects are proportionate to the good effects. But the deaths of tens of millions of innocents are disproportionate to the life of one innocent teenager.

Now, I take it that in case 2, it is wrong to push Adolf over the precipice. Double Effect certainly agrees: pushing him over the precipice is intentionally doing an evil as a means to a good.

If there is no morally significant distinction between doing and refraining, then it seems that refusal to throw a life-preserver in the drowning case is just like pushing in the cliff case: both are done in order that Adolf might die before he kills tens of millions. If in the cliff case we are forbidden from pushing, then in the drowning case we are forbidden from not throwing the life-preserver. But at the same time, Double Effect forbids throwing the life-preserver. So we must throw and not throw. Thus, the drowning case becomes a real dilemma—and it remains one even if the agent has always been virtuous and well-informed.

I find it very plausible that there are no moral dilemmas for agents who have always been virtuous and well-informed. (Vicious agents might face dilemmas due to accepting incompatible commitments. And agents with mistaken conscience might be in dilemmas unawares, because their duties to conscience might conflict with “objective” duties.) I also think the Principle of Double Effect is basically correct.

This seems to push me to accept a morally significant distinction between action and abstention: it is not permissible to push teenage Adolf off the cliff, but it is permissible—and required—not to throw a life-preserver to him when he is drowning.

But perhaps there is a distinction to be drawn between the two cases that is other than a simple doing/refraining distinction. In the cliff case, presumably one’s purpose in pushing Adolf is that he should die. If he survives, one has failed. But in the drowning case, it is not so clear that one’s purpose in not throwing the life-preserver is that Adolf should drown. Rather, the purpose in not throwing the life-preserver is to refrain from violating Double Effect. Suppose that Adolf survives despite the lack of a life-preserver. Then one has still been successful: one has refrained from violating Double Effect.

Nonetheless, this is still basically a doing/refraining distinction, just a more subtle one. Double Effect requires one to refrain from disproportionate actions—ones whose foreseen evil effects are disproportionate to their foreseen good effects. But Double Effect does not require one to refrain from disproportionate refrainings. For if Double Effect were to require one to refrain from disproportionate refrainings, then in the cliff case, it would require one to refrain from refraining from pushing—i.e., it would require one to push. And it would require one not to push, thereby implying a real dilemma. But in the cliff case, classical Double Effect straightforwardly says not to push. (Things are a little different in threshold deontology, but given threshold deontology we can modify the case to reduce the number of deaths of innocents resulting from Adolf’s survival and the point should still go through.)

In fact, this last point shows that embracing real dilemmas probably will not help a friend of Double Effect avoid a doing/refraining distinction. For even if there are real dilemmas, the cliff case is not one of them: pushing is straightforwardly impermissible.

It is tempting to conclude from this that Double Effect only applies to doings and not refrainings. But that might miss something of importance, too. Double Effect gives necessary conditions for the permissibility of a doing that has foreseen evil effects and an intended good effect:

  3. the evil is not a means to the intended good

  4. the action is intrinsically neutral or good

  5. the evil is not disproportionate to the intended good.

The argument above shows that (5) is not a necessary condition for the permissibility of a refraining. It seems that all refrainings are intrinsically neutral. So, (4) may be vacuous for refrainings. But it is still possible that (3) is true both for doings and refrainings. Thus, while it is permissible to refrain from throwing the life-preserver, perhaps one’s aim in refraining should not be the death of Adolf, but rather the avoidance of doing something disproportionate. And even if (5) is not a necessary condition for the permissibility of a refraining, there may be some weaker proportionality condition on refrainings. Indeed, that has to be right, since it’s wrong to refrain from pulling out a drowning child simply to save one’s clothes, as Singer has pointed out. I don’t know how to formulate the proportionality constraint correctly in the refraining case.

We thus have two Double Effect positions available on doing and refraining. One position says that Double Effect puts constraints on doings but not on refrainings. The subtler position says that Double Effect puts more constraints on doings.

Friday, March 19, 2021

More dwindling of the prospects for an accuracy-based argument for probabilism in infinite cases

Let P be the set of countably additive probabilities on a countable set Ω. A strictly proper accuracy scoring rule on some set of credences C that includes P is a function s from C to [−∞, M]^Ω for some finite M such that Eₚs(p) > Eₚs(q) for any p ∈ P and any q ∈ C distinct from p: i.e., from the point of view of each probability in P, that probability has the highest expected accuracy. (It’s a little easier for my example to work with accuracy scoring rules.)

We can identify P with the non-negative functions in ℓ¹(Ω) that sum to 1. This identification induces a topology on P (based on the norm and weak topologies on ℓ¹(Ω), which are the same). On [−∞, M]^Ω we will take the product topology.

We say that a scoring rule is uniformly bounded provided there is some finite R such that |s(p)(ω)| < R for all p ∈ C and ω ∈ Ω. We say that one function strictly dominates another provided that the former is strictly greater than the latter everywhere (these are accuracy scores, so bigger is better).

Theorem: There is a uniformly bounded strictly proper scoring rule s on C that is continuous on P and a credence c ∈ C − P such that s(c) is not strictly dominated by s(p) for any p ∈ P.

In a recent post, I showed that it’s not possible to use strictly proper scoring rules to produce a strict domination argument for probabilism (the thesis that all credences should be probabilities) in infinite cases when we take probabilities to be finitely additive, because in the finitely additive case there is no strictly proper scoring rule. The above Theorem is a complement for the countably additive case: it shows that there are nice strictly proper scoring rules for the countably additive probabilities, but they don’t support the strict domination results that arguments for probabilism seem to require.

There is a remaining open question as to what happens when one further assumes additivity of the scoring rule. But I do not think additivity of a scoring rule is a reasonable constraint, because it seems to me that epistemic utilities will depend on some global features of a forecast.

Here is a sketch of the proof of the theorem. Assume Ω is the natural numbers. Identify P with the non-negative members of ℓ¹ that sum to 1. Let s(p)(n) = p(n)/∥p∥₂ for p ∈ P. Note that the ℓ²-norm is continuous on ℓ¹, and hence s is continuous on P. Observe that s(p) ∈ ℓ¹. By the Cauchy-Schwarz inequality (together with the condition for equality in it), s is strictly proper on P. Define s(c)(n) = 1/(2(n+1)) for any credence c that is not in P. Note that Eₚs(p) = ∥p∥₂ for all p ∈ P. Observe that Eₚs(c) ≤ ∥s(c)∥₂∥p∥₂ for every p by Cauchy-Schwarz again. But ∥s(c)∥₂ < 1, so Eₚs(c) < ∥p∥₂ = Eₚs(p). Thus, s is strictly proper. But s(c) for c not a probability is not a member of ℓ¹ (its terms are on the order of 1/n), and hence it cannot be strictly dominated by the score of any probability, since a score in ℓ¹ cannot exceed it everywhere.
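
Here is a minimal numerical sketch of the key inequalities, approximating members of ℓ¹ by probability vectors on a large finite initial segment of the natural numbers (the truncation level and the random test distributions are just illustrative choices):

    import math
    import random

    N = 2000  # truncate the sample space to {0, 1, ..., N-1}

    def l2_norm(f):
        return math.sqrt(sum(x * x for x in f))

    def score(p):
        """The accuracy score s(p)(n) = p(n)/||p||_2 from the proof sketch."""
        norm = l2_norm(p)
        return [x / norm for x in p]

    def expect(p, f):
        """E_p f = sum_n p(n) f(n)."""
        return sum(pi * fi for pi, fi in zip(p, f))

    def random_prob(rng):
        w = [rng.random() for _ in range(N)]
        t = sum(w)
        return [x / t for x in w]

    # s(c)(n) = 1/(2(n+1)) for every credence c outside P.
    s_c = [1.0 / (2 * (n + 1)) for n in range(N)]

    rng = random.Random(0)
    for _ in range(5):
        p, q = random_prob(rng), random_prob(rng)
        # Strict propriety among probabilities: E_p s(p) > E_p s(q).
        print(expect(p, score(p)) > expect(p, score(q)))   # True
        # The non-probability score is beaten in p-expectation: E_p s(c) < E_p s(p).
        print(expect(p, s_c) < expect(p, score(p)))        # True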

A necessary truth that explains a contingent one

Van Inwagen’s famous argument against the Principle of Sufficient Reason rests on the principle:

  1. A necessary truth cannot explain a contingent one.

For a discussion of the argument, see here.

I just found a nice little counterexample to (1).

Consider the contingent proposition, p, that it is not the case that my next ten tosses of a fair coin will be all heads, and suppose that p is true (if it is false, replace “heads” with “tails”). The explanation of this contingent truth can be given entirely in terms of necessary truths:

  2. Either it is or is not the case that I will ever engage in ten tosses of a fair coin.

  3. If it is not the case that I will, then p is true.

  4. If I will, then by the laws of probability, the probability of my next ten tosses of a fair coin being all heads is 1/2¹⁰ = 1/1024, which is pretty small.

My explanation here used only necessary truths, namely the law of excluded middle, and the laws of probability as applied to a fair coin, and so if we conjoin the explanatory claims, we get a counterexample to (1).

It is, of course, a contingent question whether I will ever engage in ten tosses of a fair coin. I have never, after all, done so in the past (no real-life coin is literally fair). But my explanation does not require that contingent question to be decided.

This counterexample reminds me of Hawthorne’s work on a priori probabilistic knowledge of contingent truths.

Scoring rules for finitely additive probabilities on an infinite space

Michael Nielsen has drawn my attention to the interesting question of scoring rules for forecasts on an infinite sample space Ω. Suppose that the forecasts are finitely additive probabilities on Ω, and that a scoring rule assigns an inaccuracy score s(p) to every finitely additive probability p on Ω (and maybe also to some or all inconsistent credences, but that won’t matter to us), where s(p) is a function from Ω to [ − ∞, ∞]. If we think of Ω as a space of situations or worlds, then s(p)(ω) measures how far off your forecast p would be if you were actually in situation or world ω.

Say that two forecasts p and q are orthogonal provided that there is a subset A of Ω such that p(A)=1 and q(A)=0. Orthogonal forecasts disagree as badly as possible.

Here is a plausible condition on a scoring rule s:

  • A scoring rule s has orthogonal fine grain if for any two orthogonal forecasts p and q, there is an ω ∈ Ω such that s(p)(ω)≠s(q)(ω).

Requiring orthogonal fine grain is plausible, because if forecasts disagree maximally, we would expect them to score differently in at least one situation.
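
To make the definitions concrete, here is a tiny sketch with principal (point-mass) maximally opinionated forecasts; the forecasts that drive the theorem are the non-principal ones, which cannot be written down explicitly, so this only illustrates what orthogonality amounts to and how a score can separate one particular orthogonal pair:

    def point_mass(k):
        """Maximally opinionated forecast concentrated at k: p(A) = 1 if k is in A, else 0."""
        return lambda A: 1 if k in A else 0

    p = point_mass(0)
    q = point_mass(1)

    # Orthogonality witness: A = {0} has p(A) = 1 and q(A) = 0.
    A = {0}
    print(p(A) == 1 and q(A) == 0)          # True

    # A quadratic score on a finite chunk of Omega separates p and q at omega = 0.
    def score(forecast, omega, chunk=range(10)):
        return sum((forecast({w}) - (1 if w == omega else 0)) ** 2 for w in chunk)

    print(score(p, 0), score(q, 0))         # 0 and 2: the scores differ at omega = 0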

Theorem: Assume the Axiom of Choice. If Ω is infinite, then no scoring rule has orthogonal fine grain. In fact, for any scoring rule s there will be orthogonal maximally opinionated forecasts p and q such that s(p)=s(q) everywhere on Ω.

Here, a forecast p is maximally opinionated provided that for any event A, p(A) is either 0 or 1. Note that the theorem does not assume any continuity or propriety.

An immediate corollary is that there is no strictly proper scoring rule (i.e., scoring rule s such that Eₚ(s(p)) < Eₚ(s(q)) for every distinct pair of forecasts p and q) for an infinite sample space, a result Michael Nielsen communicated to me under additional assumptions on s. This in turn should either make one a little suspicious of arguments for probabilistic consistency in finite cases that are based on an insistence on strict propriety, or it should push one in the direction of requiring countable additivity in the infinite case.

Proof of Theorem: Any two distinct maximally opinionated forecasts are orthogonal. For suppose that p and q are maximally opinionated and not identical. Then for some A ⊆ Ω we have p(A)≠q(A). Thus, either p(A)=1 and q(A)=0 or p(Ω − A)=1 and q(Ω − A)=0, and so we have orthogonality.

The set of maximally opinionated forecasts is in one-to-one correspondence with the set of ultrafilters on Ω, which has cardinality 2^(2^|Ω|) (Proposition 6 here), of course assuming the Axiom of Choice (without which all the ultrafilters might be principal, and hence there might be only |Ω| of them).

On the other hand, s(p) for every p is a function from Ω to [ − ∞, ∞]. The cardinality of the set of such functions is (2^ℵ₀)^|Ω| = 2^(ℵ₀ ⋅ |Ω|) = 2^|Ω| for infinite Ω, assuming the Axiom of Choice. Hence, there are more maximally opinionated forecasts than possible scores, and hence some two distinct maximally opinionated forecasts must share the same score. But we saw that any two distinct maximally opinionated forecasts are orthogonal. QED

Thursday, March 18, 2021

Valuations and credences

One picture of credences is that they are derived from agents’ valuations of wagers (i.e., previsions) as follows: the agent’s credence in a proposition p is equal to the agent’s valuation of a gamble that pays one unit if p is true and 0 units if p is false.

While this may give the right answer for a rational agent, it does not work for an irrational agent. Here are two closely related problems. First, note that the above definition of credences is dependent on the unit system in which the gambles are denominated. A rational agent who values a gamble that pays one dollar on heads and zero dollars otherwise at half a dollar will also value a gamble that pays one yen on heads and zero yen otherwise at half a yen, and we can attribute a credence of 1/2 in heads to the agent. In general, the rational agent’s valuations will be invariant under affine transformations and so we do not have a problem. But Bob, an irrational agent, might value the first gamble at $0.60 and the second at 0.30 yen. What, then, is that agent’s credence in heads?

If there were a privileged unit system for utilities, we could use that, and equate an agent’s credence in p with their valuation of a wager that pays one privileged unit on p and zero on not-p. But there are many units of utility, none of them privileged: dollars, yen, hours of rock climbing, glazed donuts, etc.

And even if there were a privileged unit system, there is a second problem. Suppose Alice is an irrational agent. Suppose Alice has two different probability functions, P and Q. When Alice needs to calculate the value of a gamble that pays exactly one unit on some proposition and exactly zero units on the negation of that proposition, she uses classical mathematical expectation based on P. When Alice needs to calculate the value of any other gamble—i.e., a gamble that has fewer than or more than two possible payoffs or a gamble that has two payoffs but at values other than exactly one or zero—she uses classical mathematical expectation based on Q.
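
Here is a minimal sketch of Alice’s valuation procedure; the specific numbers in P and Q and the payoff amounts are purely illustrative assumptions:

    # Alice's two probability functions over a coin toss (illustrative numbers only).
    P = {"heads": 0.75, "tails": 0.25}
    Q = {"heads": 0.25, "tails": 0.75}

    def alice_value(payoffs):
        """Value a gamble (a dict from outcomes to payoffs) the way Alice does:
        expectation under P for gambles paying exactly 1 or 0, under Q otherwise."""
        probs = P if set(payoffs.values()) <= {0, 1} else Q
        return sum(probs[w] * payoffs[w] for w in payoffs)

    # The textbook unit wager on heads is valued using P.
    print(alice_value({"heads": 1, "tails": 0}))        # 0.75

    # A realistic gamble with slightly messy payoffs is valued using Q.
    print(alice_value({"heads": 1.01, "tails": 0.0}))   # 0.2525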

Then the proposed procedure attributes to Alice the credence function P. But it is in fact Q that is predictive of Alice’s behavior. For we are never in practice offered gambles that have exactly two payoffs. Coin-toss games are rare in real life, and even they have more than two payoffs. For instance, suppose I tell you that I will give you a dollar on heads and zero otherwise. Well, a dollar is worth a different amount depending on when exactly I give it to you: a dollar given earlier is typically more valuable, since you can invest it for longer. And it’s random when exactly I will pay you. So on heads, there are actually infinitely many possible payoffs, some slightly larger than others. Moreover, there is a slight chance of the coin landing on the edge. While that eventuality is extremely unlikely, it has a payoff that’s likely to be more than a dollar: if you ever see a coin landing on edge, you will get pleasure out of telling your friends about it afterwards. Moreover, even if we were offered a gamble that had exactly two payoffs, it is extremely unlikely that these payoffs would be exactly one and zero in the privileged unit system.

The above cases do not undercut a more sophisticated story about the relationship between credences and valuations, a story on which one counts as having the credence that would best fit one’s practical valuations of two-valued gambles, and where there is a tie, one’s credences are underdetermined or interval-valued. In Alice’s case, for instance, it is easy to say that Q is the best fit, while in Bob’s case, the credence for heads might be a range from 0.3 to 0.6.

But we can imagine a variant of Alice where she uses P whenever she has a gamble that has only two payoffs, and she uses Q at all other times. Since in practice two-payoff gambles don’t occur, she always uses Q. But if we use two-payoff gambles to define credences, then Alice will get P attributed to her as her credences, despite her never using P.

Can we have a more sophisticated story that allows credences to be defined in terms of valuations of gambles with more payoffs than two? I doubt it. For there are multiple ways of relating a prevision to a credence when we are dealing with an inconsistent agent, and none of them seem privileged. Even my favorite way, the Level Set Integral, comes in two versions: the Split and Shifted versions.

Tuesday, March 16, 2021

Presumption of legal permissibility

A standard principle in the interpretation of Catholic canon law is the principle that restrictive laws are understood narrowly and permissive ones broadly: i.e., in case of ambiguity, we err on the side of freedom. I’ve wondered if this presumption in favor of freedom is just a principle internal to Catholic canon law or if it is part of the concept of law in general. The background for the question is a broadly natural law conception of positive law, on which not every piece of legislation counts as a law, but only those that satisfy the moral-bindingness conditions.

Here is an argument that the presumption for freedom is more general than for Catholic canon law: the presumption for freedom follows from Aquinas’s principle that in order to be binding, a law must be promulgated.

Promulgation means that the law is made available to the reasonable agent. But insofar as a restrictive law is ambiguous, to that extent it has not been made available to the reasonable agent, and hence thus far it has not been promulgated. Thus, ambiguity in a restrictive law yields freedom. What about a permissive law? Well, since positive law is fundamentally restrictive—everything is legally permitted unless it is expressly forbidden—a “permissive law” is really an exception against the background of a specific restrictive law. Thus, one might have a general restrictive law against civilians carrying spears, and later introduce a specific permissive law allowing it when invaders are nearby. In that case, it is reasonable to think of the two laws as forming a single restrictive unit: “Civilians shall not carry spears, unless invaders are nearby.” And then the ambiguity-yields-freedom principle for restrictive laws implies that in cases of ambiguity in the permissive exception clause, we also have a presumption for freedom.

Monday, March 15, 2021

Abortion, contraception and Christian tradition

It is traditional Christian teaching, as far back as we can trace it, that:

  1. Abortion is always wrong.

Nonetheless, historically many Christian theologians, such as Thomas Aquinas, accepted the best science and philosophy of the day (Aristotle!) which held that:

  2. Human existence starts about a month and a half after conception.

Our science no longer teaches (2), of course: it is scientifically clear that we have the same organism at conception—or at very latest at implantation—as at birth.

However, for those of us who think that Christian tradition carries significant epistemic weight, it is interesting to ask why it was that historically Christian teaching stalwartly affirmed (1) despite many Christian thinkers accepting (2). I see two hypotheses each of which may explain this puzzle. Both hypotheses may be true (and indeed I think they are).

The first hypothesis is that (1) is simply a datum of the apostolic teaching of the early Church (it is after all found in the first-century Didache). The Church’s stalwart acceptance of the prohibition of abortion notwithstanding the tension between this prohibition and the best science of the day is a sign that the prohibition of abortion was grounded in divine revelation rather than philosophical speculation.

The second hypothesis is that the reasons for the traditional prohibition of abortion are logically independent of the moral status of the embryo or early fetus. We also know that the early Church forbade contraception. If the embryo or early fetus is not a human being, then an early abortion may not be morally very different from contraception. But the Church was opposed to contraception. Via this second hypothesis, the apparent tension between the blanket prohibition on abortion and the philosophical and scientific views on the beginning of human life is further evidence for an apostolic prohibition of contraception.

Monday, March 8, 2021

Strict propriety of credential scoring rules

An (inaccuracy) scoring rule measures how far a probabilistic forecast lies from the truth. Thus, it assigns to each forecast p a score s(p) which is a [0, ∞]-valued random variable varying over the probability space Ω that measures distance from truth. Let’s work with finite probability spaces and assume all the forecasts are consistent probability functions.

A rule s is proper provided that Eₚs(p) ≤ Eₚs(q) for any probability functions p and q, where Eₚf = Σ_{ω ∈ Ω} p({ω})f(ω) is the expectation of f according to p, using the convention that 0 ⋅ ∞ = 0. Propriety is the very reasonable condition that whatever your forecast, according to your forecast you don’t expect any other specific forecast to be better—if you did, you’d surely switch to it.

A rule is strictly proper provided that Eₚs(p) < Eₚs(q) whenever p and q are distinct. It says that by the lights of your forecast, your forecast is better than any other. It is rather harder to intuitively justify strict propriety.
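
As an illustration of these definitions, here is a minimal numerical check that the quadratic (Brier) inaccuracy score, which I use here only as a standard example, is strictly proper on a three-point space:

    import random

    OMEGA = [0, 1, 2]

    def brier(q, omega):
        """Inaccuracy of forecast q at world omega: sum of squared errors."""
        return sum((q[w] - (1 if w == omega else 0)) ** 2 for w in OMEGA)

    def expected_score(p, q):
        """E_p s(q): expected inaccuracy of forecast q by the lights of p."""
        return sum(p[omega] * brier(q, omega) for omega in OMEGA)

    def random_prob(rng):
        w = [rng.random() for _ in OMEGA]
        t = sum(w)
        return [x / t for x in w]

    rng = random.Random(1)
    ok = all(
        expected_score(p, p) < expected_score(p, q)
        for p, q in ((random_prob(rng), random_prob(rng)) for _ in range(1000))
        if p != q
    )
    print(ok)   # True: E_p s(p) < E_p s(q) whenever p differs from q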

A very plausible condition is continuity: your score in every possible situation ω ∈ Ω depends continuously on your probability assignment.

Last week, with a lot of time on my hands while our minivan was having an oil change, I got interested in the question of what kinds of failures of strict propriety can be exhibited by a continuous proper scoring rule. It is, of course, easy to see that one can have continuous proper scoring rules that aren’t strictly proper: for instance, one can assign the same score to every forecast. Thinking about this and other examples, I conjectured that the only way strict propriety can fail in a continuous proper scoring rule (restricted to probability functions) is by assigning the same score to multiple forecasts.

Last night I found what looks to be a very simple proof of the conjecture: Assuming the proof is right (it still looks right this morning), if s is a continuous proper scoring rule defined on the probabilities, and Eₚs(p) = Eₚs(q), then s(p) = s(q) (everywhere in Ω).

Given this, the following follows:

  • A continuous scoring rule defined on the probabilities is strictly proper if and only if it is proper and fine-grained,

where a scoring rule is fine-grained provided that it is one-to-one on the probabilities: it assigns different scores to different probabilities. (I mean: if p and q are different, then there is an ω ∈ Ω such that s(p)(ω)≠s(q)(ω).)

But fine-grainedness seems moderately plausible to me: a scoring rule is insufficiently “sensitive” if it assigns the same score to different consistent forecasts. So we have an argument for strict propriety, at least as restricted to consistent probability functions.

Friday, March 5, 2021

Harm Principle

Consider this Harm Principle:

  1. Without a relevant connection to actual, intended or risked harm, there is no wrongdoing.

Now suppose Carl tortures Bob because he has justified practical certainty that this torture will lead Bob to abandon beliefs that Carl takes to be heretical and thereby cause him to avoid the pains of hell. (How could Carl be justified in such practical certainty? Easy: we can imagine a ton of hallucinations that evidentially support the claim.) Suppose, further, that Bob’s being tortured in fact transforms Bob’s life in ways quite different from those Carl envisioned. Bob’s own wholesome beliefs are deepened. He abandons his meaningless corporate job and becomes an advocate for the vulnerable, leading a deeply meaningful life. Moreover, given full knowledge of Bob’s character at the time of the torture, this transformation would have been predictable with a very high probability.

It seems Bob is not actually harmed: his life becomes better. And Carl does not intend Bob to be actually harmed. Given Carl’s justified practical certainty that the torture will benefit Bob, Carl does not subjectively risk harm. And given that Bob’s transformation was quite predictable given full knowledge of his character, Carl does not objectively risk harm. So, it seems (1) is false.

There is, however, a natural response on behalf of (1): Carl does actually and intentionally harm Bob, just not on balance. The torture is a real harm, even if it results in an overall benefit.

This natural response seems right. Thus, in (1) we should not understand harm as on-balance or all-things-considered harm. The problem with this interpretation of (1) is that (1) becomes trivial in light of this plausible observation:

  2. Every significant human action has a relevant connection to some actual or risked harm (perhaps a very minor one).

Tuesday, March 2, 2021

Discrimination without disadvantage

The SEP’s article on discrimination talks of discrimination as involving the imposition of a relative disadvantage on a member of a group.

This seems incorrect. Suppose Bob refuses to hire Alice because Alice is a woman. But Bob’s workplace is such a toxic environment that one is better off being jobless than working for Bob, regardless of whether one is a man or a woman. Bob has paradigmatically discriminated against Alice, but he has not imposed a relative disadvantage on her.

One might object that losing an option is always a disadvantage. But that is false: some options are degrading and it is better not to have them.

Perhaps we should subjectivize the relative disadvantage and say that discrimination involves the imposition of what is believed or intended to be a relative disadvantage. Bob presumably doesn’t think that working for him is a disadvantage. But imagine that Bob has the sexist belief that women are better off as housewives, and further believes that being a housewife is as good for a woman as being an employee of Bob’s is for a man. Then Bob does not believe he is imposing a relative disadvantage and he is not intending to do so, but he is clearly discriminating.

I am not sure how to fix the account of discrimination.

Monday, March 1, 2021

Necessary and Sufficient Conditions for Domination Results for Proper Scoring Rules

The preprint is now up.

Abstract: Scoring rules measure the deviation between a probabilistic forecast and reality. Strictly proper scoring rules have the property that for any forecast, the mathematical expectation of the score of a forecast p by the lights of p is strictly better than the mathematical expectation of the score of any other forecast q by the lights of p. Probabilistic forecasts need not satisfy the axioms of the probability calculus, but Predd et al. (2009) have shown that given a finite sample space and any strictly proper additive and continuous scoring rule, the score for any forecast that does not satisfy the axioms of probability is strictly dominated by the score for some probabilistically consistent forecast. Recently, this result has been extended to non-additive continuous scoring rules. In this paper, a condition weaker than continuity is given that suffices for the result, and the condition is proved to be optimal.

Deserving the rewards of virtue

We have the intuition that when someone has worked uprightly and hard for something good and thereby gained it, they deserve their possession of it. What does that mean?

If Alice ran the 100 meters faster than her opponents at the Olympics, she deserves a gold medal. In this case, it is clear what is meant by that: the organizers of the Olympics owe her a gold medal in just recognition of her achievement. Thus, Alice’s desert appears to be appropriately analyzable partly in terms of normative properties had by persons other than Alice. In Alice’s case, these properties are obligations of justice, but they could simply be reasons of justice. Thus, if someone has done something heroic and they receive a medal, the people giving the medal typically are not obligated to give it, but they do have reasons of justice to do so.

But there are cases that fit the opening intuition where it is harder to identify the other persons with the relevant normative properties. Suppose Bob spends his life pursuing virtue, and gains the rewards of a peaceful conscience and a gentle attitude to the failings of others. Like Alice’s gold medal, Bob’s rewards are deserved. But if we understand desert as in Alice’s case, as partly analyzable in terms of normative properties had by others, now we have a problem: Who is it that has reasons of justice to bestow these rewards on Bob?

We can try to analyze Bob’s desert by saying that we all have reasons of justice not to deprive him of these rewards. But that doesn’t seem quite right, especially in the case of the gentle attitude to the failings of others. For while some people gain that attitude through hard work, others have always had it. Those who have always had it do not deserve it, but it would still be unjust to deprive them of it.

The theist has a potential answer to the question: God had reasons of justice to bestow on Bob the rewards of virtue. Thus, while Alice deserved her gold medal from the Olympic committee and Carla (whom I have not described but you can fill in the story) deserved her Medal of Honor from the Government, Bob deserved his quiet conscience and “philosophical” outlook from God.

This solution, however, may sound wrong to many Christians, especially but not only Protestants. There seems to be a deep truth to Leszek Kolakowski’s book title God Owes Us Nothing. But recall that desert can also be partly grounded in non-obligating reasons of justice. One can hold that God owes us nothing but nonetheless think that when God bestowed on Bob the rewards of virtue (say, by designing and sustaining the world in such a way that often these rewards came to those who strove for virtue), God was doing so in response to non-obligating reasons of justice.

Objection: Let’s go back to Alice. Suppose that moments after she ran the race, a terrorist assassinated everyone on the Olympic Committee. It still seems right to say that Alice deserved a gold medal for her run, but no one had the correlate reason of justice to bestow it. Not even God, since it just doesn’t seem right to say that God has reasons of justice to ensure Olympic medals.

Response: Maybe. I am not sure. But think about the “Not even God” sentence in the objection. I think the intuition behind the “Not even God” undercuts the case. The reason why not even God had reasons of justice to ensure the medal was that Alice deserved a medal not from God but from the Olympic Committee. And this shows that her desert is grounded in the Olympic Committee, if only in a hypothetical way: Were they to continue existing, they would have reasons of justice to bestow on her the medal.

This suggests a different response that an atheist could give in the case of Bob: When we say that Bob deserves the rewards of virtue, maybe we mean hypothetically that if God existed, God would have reasons of justice to grant them. This does not strike me as a plausible analysis. If God doesn’t exist, the existence of God is a far-fetched and fantastical hypothesis. It is implausible that Bob’s ordinary case of desert be partly grounded in hypothetical obligations of a non-existent fantastical being. On the other hand, it is not crazy to think that Alice’s desert, in the exceptional case of the Olympic Committee being assassinated, be partly grounded in hypothetical obligations of a committee that had its existence suddenly cut short.