Tuesday, May 21, 2024

A problem for probabilistic best systems accounts of laws

Suppose that we live in a Humean universe and the universe contains an extremely large collection of coins scattered on a flat surface. Statistical analysis of all the copper coins fits extremely well with the hypothesis that each coin was independently randomly placed with the chance of heads being 1/16 and that of tails being 15/16.

Additionally, there is a gold coin where you haven’t observed which side it’s on.

And there are no other coins.

On a Lewisian best systems account of laws of nature, if the number of coins is sufficiently large, it will be a law of nature that all coins are independently randomly placed with the chance of heads being 1/16 and that of tails being 15/16. This is true regardless of whether the gold coin is heads or tails. If you know the information I just gave, and have done the requisite statistical analysis of the copper coins, you can be fully confident that this is indeed a law of nature.

If you are fully confident that it is a law of nature that the chance of tails is 15/16, then your credence for tails for the unobserved gold coin should also be 15/16 (I guess this is a case of the Principal Principle).

But that’s wrong. The fact that the coin is of a different material from the observed coins should affect your credence in its being tails. Inductive inferences are weakened by differences between the unobserved and the observed cases.

One might object that perhaps the Lewisian will say that instead of a law saying that the chance of tails on a coin is 15/16, there would be a law that the chance of tails on a copper coin is 15/16. But that’s mistaken. The latter law is not significantly more informative than the former (given that all but one coin is copper), but is significantly less brief. And laws are generated by balancing informativeness with brevity.

Friday, May 17, 2024

Yet another argument for thirding in Sleeping Beauty?

Suppose that a fair coin has been flipped in my absence. If it’s heads, there is an independent 50% chance that I will be irresistibly brainwashed tonight after I go to bed in a way that permanently forces my credence in heads to zero. If it’s tails, there will be no brainwashing. When I wake up tomorrow, there will be a foul taste in my mouth of the brainwashing drugs if and only if I’ve been brainwashed.

So, I wake up tomorrow, find no taste of drugs in my mouth, and I wonder what I should do with my credence of heads. The obvious Bayesian approach would be to conditionalize on not being brainwashed, and lower my credence in heads to 1/3.

Next let’s evaluate epistemic policies in terms of a strictly proper accuracy scoring rule (T,F) (i.e., T(p) and F(p) are the epistemic utilities of having credence p when the hypothesis is in fact true or false, respectively). Let’s say that the policy is to assign credence p upon observing that I wasn’t brainwashed. My expected epistemic utility is then (1/4)T(p) + (1/4)T(0) + (1/2)F(p). Given any strictly proper scoring rule, this is optimized at p = 1/3. So we get the same advice as before.
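This optimization is easy to check numerically. Here is a minimal sketch using the Brier score, one strictly proper rule, on a grid of credences:

```python
# Numerical check that p = 1/3 maximizes the expected epistemic utility
# (1/4)T(p) + (1/4)T(0) + (1/2)F(p), using the Brier score as one
# strictly proper rule: T(p) = -(1-p)**2, F(p) = -p**2.

def T(p):
    return -(1 - p) ** 2

def F(p):
    return -p ** 2

def expected_utility(p):
    return 0.25 * T(p) + 0.25 * T(0) + 0.5 * F(p)

# scan a fine grid of credences for the maximizer
best = max((i / 1000 for i in range(1001)), key=expected_utility)
print(best)  # 0.333, the grid point closest to 1/3
```

Any other strictly proper rule substituted for T and F yields the same maximizer.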

So far so good. Now consider a variant where instead of a 50% chance of being brainwashed, I am put in a coma for the rest of my life. I think it shouldn’t matter whether I am brainwashed or put in a coma. Either way, I am no longer an active Bayesian agent with respect to the relevant proposition (namely, whether the coin was heads). So if I find myself awake, I should assign 1/3 to heads.

Next consider a variant where instead of a coma, I’m just kept asleep for all of tomorrow. Thus, on heads, I have a 50% chance of waking up tomorrow, and on tails I am certain to wake up tomorrow. It shouldn’t make a difference whether we’re dealing with a life-long coma or a day of sleep. Again, if I find myself awake, I should assign 1/3 to heads.

Now suppose that for the next 1000 days, each day on heads I have a 50% chance of waking up, and on tails I am certain to wake up, and after each day my memory of that day is wiped. Each day is the same as the one day in the previous experiment, so each day I am awake I should assign 1/3 to heads.

But by the Law of Large Numbers, this is basically an extended version of Sleeping Beauty: on heads I will wake up on approximately 500 days and on tails on 1000 days. So I should assign 1/3 to heads in Sleeping Beauty.
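The Law of Large Numbers point can be checked by simulation. A quick Monte Carlo sketch of the 1000-day setup (the run count and seed are illustrative) counts what fraction of all waking days are heads-days:

```python
# Monte Carlo sketch of the 1000-day experiment: on heads each day is a
# wakeup with chance 1/2; on tails every day is a wakeup. Thirding
# predicts that about 1/3 of all waking days are heads-days.
import random

random.seed(0)
heads_wakeups = 0
tails_wakeups = 0
for _ in range(2000):  # 2000 runs of the 1000-day experiment
    heads = random.random() < 0.5
    for _day in range(1000):
        if heads:
            if random.random() < 0.5:
                heads_wakeups += 1
        else:
            tails_wakeups += 1

frac = heads_wakeups / (heads_wakeups + tails_wakeups)
print(frac)  # close to 1/3
```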

Acting for the sake of rationality alone

Alice is confused about the nature of practical rationality and asks the wrong philosopher about it. She is given this advice:

  1. For each of your options consider all the potential pleasures and pains for you that could result from the option. Quantify them on a single scale, multiply them by their probabilities, and add them up. Go for the option where the resulting number is biggest.

Some time later, Alice goes to a restaurant and follows the advice to the letter. After spending several hours poring over the menu and performing back-of-the-envelope calculations she orders and eats the kale and salmon salad.

Traditional decision theory will try to explain Alice’s action in terms of ends and means. What is her end? The obvious guess is that it’s pleasure. But that need not be correct. Alice may not care at all about pleasure. She just cares about doing the action that maximizes the sum of pleasure quantities multiplied by their probabilities. She may not even know that this sum is an “expected value”. It’s just a formula, and she is simply relying on an expert’s opinion as to what formula to use. (If we want to, we could suppose the philosopher gives Alice a logically equivalent formula so complicated that she can’t tell that she is maximizing expected pleasure.)

I suppose the right end-means analysis of Alice’s action would be something like this:

  • End: Act rationally.

  • Means: Perform an action that maximizes the sum of products of pleasures and probabilities.

The means is constitutive rather than causal. In this case, there is no causal means that I can see. (Alice may have been misinformed by the same philosopher that there is no such thing as causation.)

The example thus shows that there can be cases of action where one’s aim is simply to act rationally, where one isn’t aiming at any other end. These may be defective cases, but they are nonetheless possible.

Wednesday, May 15, 2024

An interview

Every so often I get asked to do a video interview. I almost always turn down these requests. Recently, I gave in and agreed to do one, because I highly valued the work of the organization that asked me.

It was a terrible experience that has restored my judgment to avoid such things. After initially stumbling (not a big deal), I started talking at length and pretty fluently. But what I was saying was stuff that I hadn’t thought out. It sounded pretty good to me, but it just wasn’t backed up with arguments. Instead of a pattern where first I think and refine what I am about to say, and then I speak, I just spoke, and spoke in a manner that suggested more knowledge than I consciously had. Ugh!

For all I know, all that I said was true, and could be backed up by arguments. But maybe it wasn’t.

Very open-minded scoring rules

An accuracy scoring rule is open-minded provided that the expected value of the score after a Bayesian update on a prospective observation is always greater than or equal to the current expected value of the score.

Now consider a single-proposition accuracy scoring rule for a hypothesis H. This can be thought of as a pair of functions T and F where T(p) is the score for assigning credence p when H is true and F(p) is the score for assigning credence p when H is false. We say that the pair (T,F) is very open-minded provided that the conditional-on-H expected value of the T score after a Bayesian update on a prospective observation is greater than or equal to the current expected value of the T score and provided that the same is true for the F score with the expected value being conditional on not-H.

An example of a very open-minded scoring rule is the logarithmic rule where T(p) = log p and F(p) = log (1−p). The logarithmic rule has some nice philosophical properties which I discuss in this post, and it is easy to see that any very open-minded scoring rule has these properties. Basically, the idea is that if I measure epistemic utilities using a very open-minded scoring rule, then I will not be worried about Bayesian update on a prospective observation damaging other people’s epistemic utilities, as long as these other people agree with me on the likelihoods.

One might wonder if there are any other non-trivial proper and very open-minded scoring rules besides the logarithmic one. There are. Here’s a pretty easy to verify fact (see the Appendix):

  • A scoring rule (T,F) is very open-minded if and only if the functions xT(x) and (1−x)F(x) are both convex.

Here’s a cute scoring rule that is proper and very open-minded:

  • T(x) = −((1−x)/x)^(1/2) and F(x) = T(1−x).

(For propriety, use Fact 1 here. For open-mindedness, note that the graph of xT(x) is the lower half of the circle of radius 1/2 centered at (1/2,0), and hence is convex.)

What’s cute about this rule? Well, it is symmetric (F(x) = T(1−x)) and it has the additional symmetry property that xT(x) = (1−x)T(1−x) = (1−x)F(x). Alas, though, T is not concave, and I think a good scoring rule should have T concave (i.e., there should be diminishing returns from getting closer to the truth).
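The symmetry claims and the circle identity are easy to spot-check numerically; a minimal sketch (the sample points are arbitrary):

```python
# Spot-check of the "cute" rule T(x) = -((1-x)/x)**0.5, F(x) = T(1-x):
# (i) x*T(x) = -sqrt(x*(1-x)), tracing the lower half of the circle of
#     radius 1/2 centered at (1/2, 0);
# (ii) the symmetry x*T(x) == (1-x)*T(1-x) == (1-x)*F(x).
import math

def T(x):
    return -math.sqrt((1 - x) / x)

def F(x):
    return T(1 - x)

for x in [0.1, 0.25, 0.5, 0.73, 0.9]:
    g = x * T(x)
    assert abs(g ** 2 - x * (1 - x)) < 1e-12    # on the circle
    assert abs(g - (1 - x) * T(1 - x)) < 1e-12  # symmetry
    assert abs(g - (1 - x) * F(x)) < 1e-12
print("symmetry checks pass")
```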

Appendix:

Suppose that the prospective observation is as to which cell of the partition E1, ..., En we are in. The very open-mindedness property with respect to T then requires:

  1. ∑iP(Ei|H)T(P(H|Ei)) ≥ T(P(H)).

Now P(Ei|H) = P(H|Ei)P(Ei)/P(H). Thus what we need is:

  1. ∑iP(Ei)P(H|Ei)T(P(H|Ei)) ≥ P(H)T(P(H)).

Given that P(H) = ∑iP(Ei)P(H|Ei), this follows immediately from the convexity of xT(x). The converse is easy, too.
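For a concrete instance, the inequality can be verified numerically for the logarithmic rule T(x) = log x, for which xT(x) = x log x is convex (the partition probabilities and posteriors below are arbitrary illustrative values):

```python
# Concrete instance of the Appendix inequality for the logarithmic rule
# T(x) = log(x), where x*T(x) = x*log(x) is convex: for a partition
# E1, E2, E3, check sum_i P(Ei)P(H|Ei)T(P(H|Ei)) >= P(H)T(P(H)).
import math

P_E = [0.2, 0.5, 0.3]          # P(Ei): arbitrary partition probabilities
P_H_given_E = [0.9, 0.4, 0.6]  # P(H|Ei): arbitrary posteriors

P_H = sum(p * q for p, q in zip(P_E, P_H_given_E))  # = 0.56
lhs = sum(p * q * math.log(q) for p, q in zip(P_E, P_H_given_E))
rhs = P_H * math.log(P_H)
print(lhs >= rhs)  # True
```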

Tuesday, May 14, 2024

An argument for purgatory

Here is a plausible argument for purgatory.

Start with these observations:

  1. Some people end up in heaven even though at the time of death they had not yet forgiven all wrongs done to them by other people who end up in heaven nor had they been performing such an act of forgiveness while dying.

  2. An act of forgiveness takes time, and at the beginning of the act one has not yet forgiven.

  3. It is impossible to be in heaven without having forgiven all wrongs done to one by other members of the heavenly community.

Premise 3 seems clearly true: the perfection of the heavenly community requires it.

Premise 1 is pretty plausible. It does not seem that a minor bit of unforgiveness would damn one to hell.

Premise 2 is what I am actually least confident of. It is pretty plausible in our present state. But I guess there is the possibility that we can forgive in the very first instant of our presence in heaven, so that the act is already completed in that very instant. Maybe, but it doesn’t seem very human.

It follows from 1-3 that some people who end up in heaven have to initiate the necessary act of forgiveness post-death. When they initiated the act of forgiveness, they were not in heaven. Nor were they in hell, since they ended up in heaven, and one cannot transfer between heaven and hell. Hence, they must have been in some intermediate state, which we may call purgatory.

Here is a difficulty, though. Suppose a person in heaven is wronged by someone on earth who will end up in heaven. This surely happens: for instance, a parent is in heaven, and their child on earth fails to fulfill a promise they made to the parent. If an act of forgiveness takes time, isn’t there a short period of time before the person in heaven forgives?

I don’t think so. Perhaps a part of becoming the kind of person that ends up in heaven is one’s having engaged in a prospective forgiveness of all who might wrong one (or at least all who might wrong one and yet are going to be a part of the heavenly community, since the argument above only requires one to forgive such persons as a condition for heavenly beatitude). Some have engaged in it in this life, having transformed themselves into perfect forgivers who have always already forgiven, and others need purgatory.

An argument against strong universalism

  1. It is impossible to end up in heaven without forgiving all evils done to one at least by other people who end up in heaven.

  2. Some people have had evils done to them by other people who end up in heaven.

  3. No one is necessitated to forgive evil done to them by other people.

  4. So, at least one person is not necessitated to end up in heaven.

This is an argument against a strong universalism on which God necessitates everyone to go to heaven. It is not an argument against a weaker universalism on which there is a possibility of eternal damnation but no one in fact chooses it. (For the record, alas, I think the Biblical evidence is that the weaker universalism is also false.)

Why do I think the premises are true?

Premise 1: Heavenly beatitude is that of a perfect community of love. Such a community of love is impossible if one has failed to forgive evils done to one by other members of the community.

Premise 2: St Paul did evil to a number of people before his conversion.

Premise 3: This is probably the most controversial of the premises. There are two ways of arguing for it. One is by saying that necessitating someone to forgive is unfitting, and so we have good reason to think God wouldn’t do that—and presumably nobody else but God would be capable of necessitating forgiveness. The second is to note that it is impossible to be forced to forgive. It’s just not forgiveness if it’s forced. One can be forced to stop resenting, one can be forced to forget, but that’s not forgiveness. This is akin to promising: it is not possible to force someone to make a promise—the words just wouldn’t be binding.

It's worth noting that the argument also tells against Calvinism.

Monday, May 13, 2024

A feature of the logarithmic scoring rule

Accuracy scoring rules measure the epistemic utility of having some credence assignment. For simplicity, let’s assume that all credence assignments are probabilistically coherent. A strictly proper scoring rule has the property that, by one’s own lights, the expected score of one’s actual credence assignment is always higher than that of any other credence assignment.

A well-known fact is that a strictly proper scoring rule always makes it rational to update on non-trivial evidence. I.e., by one’s present lights, the expected epistemic utility after examining and updating on non-trivial evidence will be higher than the expected epistemic utility of ignoring that evidence. We might put this by saying that a strictly proper scoring rule is strictly open-minded.

The logarithmic scoring rule makes the score of assigning credence r be log r when the hypothesis is true and log (1−r) when the hypothesis is false. It is strictly proper and hence strictly open-minded.

The logarithmic scoring rule, however, satisfies a condition even stronger than strict open-mindedness. This condition is easiest to describe in a binary case where one is simply evaluating the score of one’s credence in a single hypothesis H. Assuming some non-triviality assumptions, it turns out that not only is the expected epistemic utility increased by examining evidence, but the expected epistemic utility conditional on H is increased by examining evidence. (This is a pretty easy calculation.)
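The calculation is easy to sketch numerically. With assumed prior and likelihood values (any values will do), the conditional-on-H expected log score after updating on the evidence is at least the current log score:

```python
# Sketch of the stronger property for the log rule: conditional on H,
# the expected T-score after updating on E vs. not-E is at least the
# current T-score:
#   P(E|H)*log P(H|E) + P(~E|H)*log P(H|~E) >= log P(H).
import math

p = 0.3          # assumed prior P(H)
a, b = 0.7, 0.2  # assumed likelihoods P(E|H), P(E|~H)

post_E = p * a / (p * a + (1 - p) * b)                       # P(H|E)
post_notE = p * (1 - a) / (p * (1 - a) + (1 - p) * (1 - b))  # P(H|~E)

before = math.log(p)
after = a * math.log(post_E) + (1 - a) * math.log(post_notE)
print(after >= before)  # True
```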

So what?

Well, there are several reasons this matters. First, on my recent account of what it is to have a no-hedge commitment to a hypothesis H, if your epistemic utilities are measured by some scoring rule (e.g., Brier) and you have a no-hedge commitment to H but you do not have credence 1 in H, then you will sometimes have reason to refuse to look at evidence. But the above fact about the logarithmic scoring rule shows that this is not so for the logarithmic scoring rule. With the logarithmic scoring rule, it makes sense to look at the evidence even if you have a no-hedge commitment to H—i.e., even if all your betting behavior is “as if H”.

Second, let’s imagine that I run a funding agency and you come to me with an interest in doing some experiment relevant to a hypothesis H. Let’s suppose that the relevant epistemic community agrees on the relevant likelihoods with respect to the evidence obtainable from the experiment, and is perfectly rational, but differs with regard to the priors of H. I might then have this paternalistic worry about funding the experiment. Even though updating on the results of the experiment by my lights is expected to benefit me epistemically, if a strictly proper scoring rule is the appropriate measure of benefit, it may not be true that by my lights other members of the community will benefit epistemically from updating on the results of the experiment. I may, for instance, be close to certain of H, and think that some members of the community have credences that are sufficiently high that the benefit to them of getting a boost in credence in H from the experiment is outweighed by the risk of misleading evidence. If it is my job to watch out for the epistemic good of the community, this could give me reason to refuse funding.

But not so if I think the logarithmic rule is the right way to evaluate epistemic utility. If everyone shares likelihoods, and we differ only in priors for H, and everyone is rational, then when we measure epistemic utility with the logarithmic rule, I have a positive expectation of the epistemic utility effect of examining the experiment’s results on each member of the community. This is easily shown to follow from my above observation about the logarithmic scoring rule. (By my lights the expectation of a fellow community member’s epistemic utility after updating on the experimental results is a weighted sum of an expectation given H and an expectation given not-H. Each improves given the experiment.)

Saturday, May 11, 2024

What is it like not to be hedging?

Plausibly, a Christian commitment prohibits hedging. Thus in some sense even if one’s own credence in Christianity is less than 100%, one should act “as if it is 100%”, without hedging one’s bets. One shouldn’t have a backup plan if Christianity is false.

Understanding what this exactly means is difficult. Suppose Alice has Christian commitment, but her credence in Christianity is 97%. If someone asks Alice her credence in Christianity, she should not lie and say “100%”, even though that is literally acting “as if it is 100%”.

Here is a more controversial issue. Suppose Alice has a 97% credence in Christianity, but has the opportunity to examine a piece of evidence which will settle the question one way or the other—it will make her 100% certain Christianity is true or 100% certain it’s not. (Maybe she has an opportunity for a conversation with God.) If she were literally acting as if her credence were 100%, there would be no point to looking at any more evidence. But that seems the wrong answer. It seems to be a way of being scared that the evidence will refute Christianity, but that kind of a fear is opposed to the no-hedge attitude.

Here is a suggestion about how no-hedge decision-making should work. When I think about my credences, say in the context of decision-making, I can:

  1. think about the credences as psychological facts about me, or

  2. regulate my epistemic and practical behavior by the credences (use them to compute expected values, etc.).

The distinction between these two approaches to my credences is really clear from a third-person perspective. Bob, who is Alice’s therapist, thinks about Alice’s credences as psychological facts about her, but does not regulate his own behavior by these credences: Alice’s credences have a psychologically descriptive role for Bob but not a regulative role for Bob in his actions. In fact, they probably don’t even have a regulative role for Bob when he thinks about what actions are good for Alice. If Alice has a high credence in the danger of housecats, and Bob does not, Bob will not encourage Alice to avoid housecats—on the contrary, he may well try to change Alice’s credence, in order to get Alice to act more normally around them.

So, here is my suggestion about no-hedging commitments. When you have a no-hedging commitment to a set of claims, you regulate your behavior by them as if the claims had credence 100%, but when you take the credences into account as psychological facts about you, you give them the credence they actually have.

(I am neglecting here a subtle issue. Should we regulate our behavior by our credences or by our opinion about our credences? I suspect that it is by our credences—else a regress results. If that’s right, then there might be a very nice way to clarify the distinction between taking credences into account as psychological facts and taking them into account as regulative facts. When we take them into account as psychological facts, our behavior is regulated by our credences about the credences. When we take them into account regulatively, our behavior is directly regulated by the credences. If I am right about this, the whole story becomes neater.)

Thus, when Alice is asked what her credence in Christianity is, her decision of how to answer depends on the credence qua psychological fact. Hence, she answers “97%”. But when Alice decides whether or not to engage in Christian worship in a time of persecution, her decision on how to act would normally depend on the credence qua regulative, and so she does not take into account the 3% probability of being wrong about Christianity—she just acts as if Christianity were certain.

Similarly, when Alice considers whether to look at a piece of evidence that might raise or lower her credence in Christianity, she does need to consider what her credence is as a psychological fact, because her interest is in what might happen to her actual psychological credence.

Let’s think about this in terms of epistemic utilities (or accuracy scoring rules). If Alice were proceeding “normally”, without any no-hedge commitment, when she evaluates the expected epistemic value of examining some piece of evidence—after all, it may be practically costly to examine it (it may involve digging in an archaeological site, or studying a new language)—she needs to take her credences into account in two different ways: psychologically when calculating the potential for epistemic gain from her credence getting closer to the truth and potential for epistemic loss from her credence getting further from the truth, and regulatively when calculating the expectations as well as when thinking about what is or is not true.

Now on to some fun technical stuff. Let ϕ(r,t) be the epistemic utility of having credence r in some fixed hypothesis of interest H when the truth value is t (which can be 0 or 1). Let’s suppose there is no as-if stuff going on, and I am evaluating the expected epistemic value of examining whether some piece of evidence E obtains. Then if P indicates my credences, the expected epistemic utility of examining the evidence is:

  1. VE = P(H)(P(E|H)ϕ(P(H|E),1)+P(∼E|H)ϕ(P(H|∼E),1)) + P(∼H)(P(E|∼H)ϕ(P(H|E),0)+P(∼E|∼H)ϕ(P(H|∼E),0)).

Basically, I am partitioning logical space based on whether H and E obtain.

Now, in the as-if case, basically the agent has two sets of credences: psychological credences and regulative credences, and they come apart. Let Ψ and R be the two. Then the formula above becomes:

  1. VE = R(H)(R(E|H)ϕ(Ψ(H|E),1)+R(∼E|H)ϕ(Ψ(H|∼E),1)) + R(∼H)(R(E|∼H)ϕ(Ψ(H|E),0)+R(∼E|∼H)ϕ(Ψ(H|∼E),0)).

The no-hedging case that interests us makes R(H) = 1: we regulatively ignore the possibility that the hypothesis is false. Our expected value of examining whether E obtains is then:

  1. VE = R(E|H)ϕ(Ψ(H|E),1) + R(∼E|H)ϕ(Ψ(H|∼E),1).

Let’s make a simplifying assumption that the doctrines that we are as-if committed to do not affect the likelihoods P(E|H) and P(E|∼H) (granted the latter may be a bit fishy if P(H) = 1, but let’s suppose we have Popper functions or something like that to take care of that), so that R(E|H) = Ψ(E|H) and R(E|∼H) = Ψ(E|∼H).

We then have:

  1. Ψ(H|E) = Ψ(H)R(E|H)/(R(E|H)Ψ(H)+R(E|∼H)Ψ(∼H)).

  2. Ψ(H|∼E) = Ψ(H)R(∼E|H)/(R(∼E|H)Ψ(H)+R(∼E|∼H)Ψ(∼H)).

Assuming Alice has a preferred scoring rule, we now have a formula that can guide Alice on what evidence to look at: she can just check whether VE is bigger than ϕ(Ψ(H),1), which is her current score regulatively evaluated, i.e., evaluated in the as-if H is true way. If VE is bigger, it’s worth checking whether E is true.

One might hope for something really nice, like that if the scoring rule ϕ is strictly proper, then it’s always worth looking at the evidence. Not so, alas.

It’s easy to see that VE beats the current epistemic utility when E is perfectly correlated with H, assuming ϕ(x,1) is strictly monotonic increasing in x.

Surprisingly and sadly, numerical calculations with the Brier score ϕ(x,t) = −(x−t)^2 show that if Alice’s credence is 0.97, then unless the Bayes factor of the prospective evidence is very far from 1, current epistemic utility beats VE, and so no-hedging Alice should not look at the evidence. Interestingly, though, if Alice’s current credence were 0.5, then Alice should always look at the evidence. I suppose the reason is that if Alice is at 0.97, there is not much room for her Brier score to go up assuming the hypothesis is correct, but there is a lot of room for her score to go down. If we took seriously the possibility that the hypothesis could be false, it would be worth examining the evidence just in case the hypothesis is false. But that would be a form of hedging.
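Here is a sketch of the sort of numerical calculation involved, with illustrative likelihoods a = P(E|H) = 0.8 and b = P(E|∼H) = 0.2 (a Bayes factor of 4):

```python
# Sketch of the numerical calculation with the Brier score
# phi(x) = -(1-x)**2 for the hypothesis-true case. The no-hedging value
# of examining the evidence is
#   V_E = P(E|H)*phi(Psi(H|E)) + P(~E|H)*phi(Psi(H|~E)),
# to be compared with the current regulative score phi(Psi(H)).

def phi(x):  # Brier score of credence x when the hypothesis is true
    return -(1 - x) ** 2

def V_E(prior, a, b):  # a = P(E|H), b = P(E|~H)
    post_E = prior * a / (prior * a + (1 - prior) * b)
    post_notE = prior * (1 - a) / (prior * (1 - a) + (1 - prior) * (1 - b))
    return a * phi(post_E) + (1 - a) * phi(post_notE)

a, b = 0.8, 0.2  # illustrative likelihoods, Bayes factor 4
print(V_E(0.97, a, b) > phi(0.97))  # False: at credence 0.97, don't look
print(V_E(0.5, a, b) > phi(0.5))    # True: at credence 0.5, do look
```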

Wednesday, May 8, 2024

Forgiving the forgiven

Suppose that Alice wronged Bob, repented, and God forgave Alice for it. Bob, however, withholds his forgiveness. First, it is interesting to ask the conceptual question: What is it that Bob withholds? On my account of objective guilt, when Alice wronged Bob, she gained a normative burden of guilt (minimally, she came to owe it to Bob that she think of herself as guilty), and forgiveness is the removal of that normative burden.

Now in forgiveness, God removed Alice’s normative burden not just to himself, but to Bob. For if God did not remove Alice’s normative burden owed to Bob, then it would be in principle possible that Alice is in heaven—having been forgiven by God—and yet still carries the burden of having wronged Bob. But no one in heaven has a burden.

But if Alice’s normative burden owed to Bob has also been removed by God, and forgiveness is the removal of the burden, then what is it that Bob is withholding?

I think the answer is that there are two parts of forgiveness: there is the removal of the burden of objective guilt and the acknowledgment of the removal of that burden. When God has removed the burden of objective guilt from Alice, all that’s left for Bob to do is to acknowledge this removal.

Note, too, that it would be rather bad for Bob to fail to acknowledge the removal of Alice’s burden, because we should acknowledge what is real and good, and this removal is real and good.

One might think this problem is entirely generated by the idea that God can forgive not just sins against God but also sins against other people. Not so. There seems to be a secular variant of this problem, too. For there seems to be a way in which one’s normative burden of objective guilt of wrongs against fellow humans can be removed without God’s involvement: one can repent of the wrong and suffer an adequate punishment. (Of course, any wrong against neighbor is also a sin against God, and this only removes the guilt with respect to neighbor, unless the punishment is adequate to sin against God, too.) In that case, the burden is presumably removed, but the victim should still acknowledge this removal.

This points to a view of forgiveness on which we ought to forgive those whose normative burden has been removed. If we think that God always forgives the repentant, then this implies that we should always forgive the repentant.

This is close to Aquinas’s view (in his Catechetical Instructions) that we are all required to forgive all those who seek our forgiveness, but it is even better (“perfect” is his phrase) if we forgive even those who do not.

Tuesday, May 7, 2024

Mushrooms

Some people have the intuition that there is something fishy about doing standard Bayesian update on evidence E when one couldn’t have observed the absence of E. A standard case here is where the evidence E is being alive, as in firing squad or fine-tuning cases. In such cases, the intuition goes, you should just ignore the evidence.

I had a great conversation with a student who found this line of thought compelling, and came up with this pretty convincing (and probably fairly standard) case that you shouldn’t ignore evidence E like that. You’re stranded on a desert island, and the only food is mushrooms. They come in a variety of easily distinguishable species. You know that half of the species have a 99% chance of instantly killing you, and otherwise having no effect on you other than nourishment, and the other half have a 1% chance of instantly killing you, again otherwise having no effect on you other than nourishment. You don’t know which are which.

To survive until rescue, you need to eat one mushroom a day. Consider two strategies:

  1. Eat a mushroom from a random species the first day. If you survive, conclude that this species is likely good, and keep on eating mushrooms of the same species.

  2. Eat a mushroom from a random species every day.

The second strategy makes just as much sense as the first if your survival does not count as evidence. But we all know what will happen if you follow the second strategy: you’ll be very likely dead after a few days, as your chance of surviving n mushrooms is (1/2)^n. On the other hand, if you follow the first strategy, your chance of surviving n mushrooms is slightly bigger than (1/2)(0.99)^n. And the first strategy is precisely what is favored by updating on your survival: you take your survival to be evidence that the mushroom you ate was one of the safer ones, so you keep on eating mushrooms from the same species. If you want to live until rescue, the first strategy is your best bet.
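The comparison of the two strategies is easy to simulate; a minimal Monte Carlo sketch over a seven-day stay (the trial count and seed are illustrative):

```python
# Monte Carlo comparison of the two mushroom strategies over seven days.
# Half the species kill with chance 0.99 per mushroom, half with 0.01.
import random

random.seed(1)
DEATH = [0.99, 0.01]  # per-mushroom death chance of the two species types

def survival_rate(strategy, days=7, trials=20000):
    survived = 0
    for _ in range(trials):
        sticky = random.choice(DEATH)  # species picked on day one
        alive = True
        for _day in range(days):
            # strategy 1 sticks with day one's species; strategy 2 re-picks
            p_death = sticky if strategy == 1 else random.choice(DEATH)
            if random.random() < p_death:
                alive = False
                break
        survived += alive
    return survived / trials

s1, s2 = survival_rate(1), survival_rate(2)
print(s1, s2)  # s1 near 0.5*0.99**7 ~ 0.47; s2 near 0.5**7 ~ 0.008
```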

Suppose you’re not yet convinced. Here’s a variant. You have a phone. You call your mom on the first day, and describe your predicament. She comforts you and tells you that rescue will come in a week. And then she tells you that she was once stuck for a week on this very island, and ate the pink lacy mushrooms. Then your battery dies. You rejoice: you will eat the pink lacy mushrooms and thus survive! But then suddenly you get worried. You don’t know when your mom was stuck on the island. If she was stuck on the island before you were conceived, then had she not survived the mushrooms, you wouldn’t have been around to hear it. And in that case, you think her evidence is worthless, because you wouldn’t have any evidence had she not survived. So now it becomes oddly epistemically relevant to you whether your mom was on the island before or after you were conceived. But it seems largely epistemically irrelevant when your mom’s visit to the island was.

Socrates' harm thesis

Socrates famously held that a wrongdoer harms themselves more than they harm their victim.

This is a correct rule of thumb, but I doubt that it is true in general.

First, Socrates was probably thinking of the harm to self resulting from becoming a vicious person. But one can imagine cases where a wrongdoer does not become any more vicious, because they have already maxed out on the vice. I don’t know if such cases are real, though.

But here is a more realistic kind of case. It is said that often abusers were themselves abused. Thus it seems that by abusing another one may cause them to become an abuser. Suppose Alice physically abuses Bob and thereby causes Bob to become an abuser. Then Alice has produced three primary harms:

  1. Bob’s physical suffering

  2. Bob’s being an abuser, and

  3. Alice’s being an abuser.

It seems, then, that Alice has harmed Bob worse than she has harmed herself. For she has harmed herself by turning herself into an abuser. But she has harmed Bob by both turning Bob into an abuser and making him suffer physically.

Objection 1: If Bob becomes an abuser because he was abused, then his responsibility for being an abuser is somewhat mitigated, and hence the moral harm to Bob is less than the moral harm to Alice.

Response: Maybe. But this objection fails if we further suppose that Alice herself was the victim of similar abuse, which mitigated her responsibility to exactly the same degree as Alice’s abuse of Bob mitigates Bob’s responsibility.

Objection 2: One does not cause another to become vicious: one at worst provides an occasion for them to choose to become vicious.

Response: Whether one causes another to become vicious or not is beside the point. One harms the other by putting them in circumstances where they are likely to be vicious. This is why corrupting the youth is so wicked, and why Jesus talks of millstones in connection with those who make others trip up.

From the normative burden of wrongdoing to the existence of God

In recent posts I’ve been exploring the idea that wrongdoing imposes on us a debt of a normative burden.

This yields this argument:

  1. Whenever one does wrong, one comes to have a debt of a normative burden to one who has been wronged.

  2. A debt can only be owed to a person.

  3. One cannot owe a debt to oneself.

  4. Therefore, every wrongdoing includes a wrong to a person other than the wrongdoer.

This has some interesting consequences.

First, it is possible to do wrong to future generations, but one cannot owe anything to the nonexistent. So either eternalism is true, and future generations exist simpliciter, or God exists and we owe a normative burden to God when we wrong future generations, or both. So we get the disjunction of eternalism and God’s existence.

Second, we simply get the existence of God. For it is wrong to engage in cruelty to animals even if no human is wronged, other than perhaps oneself. But one cannot be in debt to a non-person or to oneself (debts are the sort of thing one can be released from by the one to whom one owes them; this makes no sense if the creditor is oneself, and impossible if the creditor is a non-person). So the only explanation of whom one can owe the normative burden to is that it’s God, who creates and loves the animals.

If one thinks that it is possible to owe a debt to animals, or one is unconvinced that cruelty to animals is wrong, there is yet another argument for the existence of God. Suppose Alice is the only finite conscious thing in the universe. However, Alice comes across misleading evidence that there are many other finite persons, and that there is a button that, when pressed, will result in excruciating pain to these persons. She then maliciously presses the button. Alice has done wrong, but the only finite conscious thing she can be counted as wronging is herself. She doesn’t owe a normative debt to herself. So she must owe it to something other than a finite conscious being. One cannot owe a debt to anything but a conscious being. So there must be an infinite conscious being, i.e., God.

A perhaps underemphasized aspect of Christ's atonement

Usually, Christ’s sacrifice of the Cross is thought of as atonement for our sins before God. This leads to an old theological question: Why can’t God simply forgive our sins, without the need for any atoning sacrifice? Aquinas’s answer is: God could, but it’s more fitting that the debt be paid. I want to explore a different answer.

Suppose that when you do a wrong to someone, you come to owe it to them to be punished. But now instead of thinking of God as the aggrieved party, think of all the times when we have done wrong to other human beings. Some of them have released or will release us from our debt through forgiveness. But, probably, not everyone. But what, now, if we think of Christ’s sacrifice as atonement for our sins before the unforgiving? We don’t need to pay to other unforgiving humans the debt of being punished, because Christ has paid it on our behalf.

This neatly answers the question of why God can’t simply forgive us our sins: God can simply release us from our debt to God, but it is either impossible or at least significantly unfitting for God to simply release us from our debt to fellow human beings.

Here is a consequence of the story. If we fail to forgive our fellow human beings, that is yet another way in which we become shamefully co-responsible for Christ’s sufferings, since now Christ is atoning for these fellow human beings before us. We should then be ashamed of ourselves, especially given that Christ is also suffering for us.

The story isn’t complete. Christ’s atonement applies not just to my sins against my neighbor, but also to my sins against God alone and my sins against myself. But once we have seen that some atoning sacrifice is needed on our behalf, the idea of a total atoning sacrifice, capable of atoning for everyone’s debts to everyone, including to God, looks even more fitting.

Monday, May 6, 2024

Forgiveness

If I have done you a serious wrong, I bear a burden. I can be relieved of that burden by forgiveness. What is the burden and what is the relief?

The burden need not consist of anything emotional or dispositional on your side, such as your harboring resentment or being disposed not to interact with me in as amicable a way as before or pursuing my punishment. For, first, if I secretly betrayed you in such a way that you never found out you were wronged, my burden is still there. And, second, if you die without forgiving me, then the burden seems to remain intact—unless perhaps I believe in life after death or divine forgiveness.

The burden need not consist of something emotional or dispositional on my side, either. For if it had to, I could be relieved of it by therapy. But therapy might make it easier to bear the burden, or (if badly done) may make me think the burden is gone, but the burden will still be there.

People often talk about forgiveness as healing a damaged relationship. But that’s not quite right, either. Suppose I have done many grave wrongs to you over the years that have completely ruptured the relationship. You have finally, generously, brought yourself to forgive me some but not all of them. (A perhaps psychologically odd story: you are working backwards through your life, forgiving all who have wronged you, year by year. So far you’ve forgiven the wrongs from the last three years of your life. But my earlier wrongs remain.) The remaining unforgiven wrongs may be sufficient to keep our relationship completely ruptured.

The burden is fundamentally a normative feature of reality, as is hinted at by the use of “debt” language in the Lord’s Prayer (“Forgive us our debts as we forgive those indebted to us”). By wronging someone, we make a move in normative space: we burden ourselves with an objective, and not merely emotional, guilt. In forgiveness, the burden is removed, but the feeling of burden can remain—one can still feel guilty, just as one’s back can continue to hurt after a load is removed from it.

Insofar as there is a healing of a relationship, it is primarily a normative healing. There need not be any great psychological change, as can be seen from the case where you have forgiven me some but not all wrongs. Moreover, psychological change can be slow: forgiveness can be fast, but healing the effects of the wrongdoing can take a long time.

So far we have identified the type of thing that forgiveness is: it is a move in normative space that relieves something that the wrongdoer owes to a victim. But we are still not clear on what it is that the wrongdoer owes to the victim. And I don’t really know the answer here.

One possibility is that it has something to do with punishment: I owe it to you to be punished. If so, then there are two ways for the burden to be cleared: one is by being punished and the other is by being forgiven. I can think of one objection to the punishment account: even after being adequately punished, you still can choose whether to forgive me. But if punishment clears the burden, what does your forgiveness do? Maybe it is at this point that the psychological components of forgiveness can enter: it’s up to you whether you stop resenting, whether you accept the clearing of the burden. Plus, in practice, it may be that the punishment is not actually sufficient to clear the burden—a lifetime in jail is not enough for some crimes.

Another possibility is that what is owed is something both normative and emotional. I owe it to you to feel guilty, and you can clear that debt and make it no longer obligatory for me to feel that way. That, too, doesn’t seem quite right. One problem is circularity: objective guilt consists in my owing you a feeling of guilt, but a feeling of guilt is a feeling that I am objectively guilty. Maybe the owed feeling has some other description? I don’t know!

But whatever the answer is, I am convinced now that the crucial move in forgiveness is normative.

Thursday, May 2, 2024

The essentiality of dignity

Start with this:

  1. Dignity is an essential property of anything that has it.

  2. Necessarily, something has dignity if and only if it is a person.

  3. Therefore, personhood is an essential property of anything that has it.

Now, suppose the standard philosophical pro-choice view that

  4. Personhood consists in developed sophisticated cognitive faculties of the sort that fetuses and newborns lack but typical toddlers have.

Consider a newborn, Alice. By (4) Alice is not a person, but if she grows up into a typical toddler, that toddler will be a person. By (3), however, we cannot say that Alice will have become that person, since personhood is an essential property, and one cannot gain essential properties—either you necessarily have them or you necessarily lack them.

Call the toddler person “Alicia”. Then Alice is a different individual from Alicia.

So, what happens to Alice once we get to Alicia? Either Alice perishes or where Alicia is, there is Alice co-located with her.

Let’s suppose first the co-location option. We then have two conscious beings, Alice and Alicia, feeling the same things with the same brain, one (Alice) older than the other. We have standard and well-known problems with this absurd position (e.g., how does Alicia know that she is a person rather than just being an ex-fetus?).

But the option that Alice perishes when Alicia comes on the scene is also very strange. For even though Alice is not a person, it is obviously appropriate that Alice’s parents love and care for her deeply. But if they love and care for her deeply, they will have significant moral reason to prevent her from perishing. Therefore, they will have significant moral reason to give Alice drugs to arrest her intellectual development at a pre-personhood stage, to ensure that Alice does not perish. But this is a truly abhorrent conclusion!

Thus, we get absurdities from (3) and (4). This means that the pro-choice thinker who accepts (4) will have to reject (3). And they generally do so. This in turn requires them to reject (1) or (2). If they reject (2) but keep (1), then Alice the newborn must have dignity, since otherwise we have to say that Alice is a different entity from the later dignified Alicia, and both the theory that Alice perishes and the theory that Alice doesn’t perish are unacceptable. But if Alice the newborn has dignity, then the pro-choice argument from the lack of developed sophisticated cognitive abilities fails, because Alice the newborn lacks these abilities and so dignity comes apart from them. But if dignity comes apart from these abilities, then the pro-choice argument based on personhood and these cognitive abilities is irrelevant. For dignity is sufficient to ground a right to life, even absent personhood.

So, I think the pro-choice thinker who focuses on cognitive abilities will in the end need to deny that dignity is an essential property. I suspect most do deny that dignity is an essential property.

But I think the essentiality of dignity is pretty plausible. Dignity doesn’t seem to be something that can come and go. It seems no more alienable than the inalienable rights it grounds. It’s not an achievement, but is at the foundation of what we are.

From fetal pain to the impermissibility of abortion

It is widely acknowledged that at some point in pregnancy fetuses start to feel pain. Estimates of this point vary from around seven to thirty weeks of gestation.

We cannot directly conclude from the fact that some fetus can feel pain that killing that fetus is impermissible. For it seems permissible, given good reason, to humanely kill a conscious non-human animal. But perhaps there is an indirect argument. I want to try out one.

It has been argued that if the fetus is the same individual as the adult person that the fetus would grow into, then it is wrong to kill the fetus for the same reason that it is wrong to kill the adult: the victim is the same, and no more deserving of death, while the harm of death is greater (the fetus is deprived of a greater chunk of life).

But if a fetus can feel pain, then this offers significant support for the hypothesis that the fetus is the same individual as the resultant adult. Imagine the fetus has a constant minor chronic pain, is carried to term, and grows into an adult, without any relief from the pain. The adult will then feel the pain. If the fetus is not the same individual as the adult, there are two possibilities at the time of adulthood:

  1. There are two beings feeling pain: the adult and the grown-up fetus.

  2. At some point the grown-up fetus had perished and was replaced by a new individual feeling pain.

Option (1) seems crazy: if I have a headache while sitting alone on the sofa, there is only one entity in pain on the sofa, namely me, rather than me and some grown-up fetus. Option (2) is also rather implausible. On our hypothesis we have the continuous presence of a brain state correlated with pain, and yet allegedly at some point the individual with the pain perishes and a new individual inherits the brain with the pain. That doesn’t seem right.

If we reject both (1) and (2), we have to conclude that the fetus in pain is the same individual as the adult that it grows up into. And thus we conclude that at least once fetuses are capable of pain, abortion is wrong.

This argument doesn’t say anything about what happens prior to the possibility of fetal pain. I think that is still the same individual, but that requires another argument.

Tuesday, April 30, 2024

Killing and consent

I think it’s wrong for us to kill innocent people. Some fellow deontologists, however, think this prohibition should be restricted to say that it’s wrong for us to kill nonconsenting innocent people. These thinkers hold that it is both permissible to consent to being killed and to kill those who have given such consent (except in special cases, such as when the victim has overriding unfulfilled duties to others).

I want to argue for a curious consequence of this restriction of the prohibition of murder while maintaining deontology.

By “sacrificing one’s life to save lives”, I will mean actions which save lives but have one’s own death as an unintended but foreseen side-effect. For instance, jumping in front of a train to push a child out of the way. Everyone agrees it’s typically praiseworthy, and hence permissible, to sacrifice your life to save an innocent life. Most people, however, will say that it is supererogatory to do so. It is brave to do it, but not cowardly to omit it.

But now consider cases where by sacrificing your life you can save a larger number of innocent lives, say a dozen. It is pretty plausible that it would be cowardly to refrain from the sacrifice, and I suspect it would be wrong to refrain, except in special cases (such as when you have just figured out how to cure cancer). But I agree that the point is not completely clear to me. However, it is quite clear to me that it would be wrong to refuse to sacrifice your life to save a dozen people when that dozen includes one’s spouse and one’s children (again, with some very rare exceptions).

Now let’s assume the view that it is permissible to consent to being killed and permissible to kill the consenting. Consider a classic deontology case: a terrorist says that if you don’t kill Bob, a dozen other innocent people will be killed. Add that the dozen people include Bob’s spouse and children. If it’s permissible to kill the consenting, then if Bob were to consent, it would be permissible to kill him. But Bob expressly and clearly refuses consent, despite his believing that it would be permissible to consent.

Assuming that it is morally required to sacrifice your life to save a dozen innocent lives when these lives include your spouse and children, it is very difficult to deny that if it is permissible to consent to being killed, in a case like the above, Bob would be morally required to consent to being killed. Granted, the sacrifice case does not include consenting to one’s death, while the terrorist case does. But as long as we have granted that it is permissible to consent to one’s death, the difference does not seem significant. Thus Bob is morally required to consent to being killed, given our assumptions about consensual killing. Bob’s refusal of consent is thus morally wrong. And very badly so: it causes eleven more lives to be lost, including his very own spouse and children. His refusal is about as bad as mass murder!

It seems that Bob is far from innocent. On the contrary, he is guilty of refusing to save the lives of eleven people, including his spouse and children. But now it seems that the prohibition against killing the innocent does not apply to Bob, and hence it is permissible—and maybe even obligatory—to kill Bob. If so, then the deontological prohibition on killing the innocent, if restricted to the nonconsenting, has a giant loophole: when enough is at stake, a nonconsenting victim is no longer innocent! Now, maybe, it is only permissible to kill the guilty when one acts on behalf of a state (and when enough is at stake, which it is in this case). But it would still be very strange for a deontologist to think it permissible to kill Bob even should the state authorize it.

This is not a knockdown argument against the restriction of the prohibition of murder to nonconsenting victims. But it is some evidence against the restriction.

Monday, April 29, 2024

From aggregative value comparisons to hyperreal values

Suppose that we have n objects α1, ..., αn, and we want to define something like numerical values (at least hyperreal ones, if we can’t have real ones) on the basis of comparisons of value. Here is one interesting way to proceed. Consider the space of formal sums m1α1 + ... + mnαn, where the mi are natural numbers, and suppose there is a total preorder ≤ (a total transitive reflexive relation) on this space satisfying the axioms:

  1. x + z ≤ y + z iff x ≤ y

  2. mx ≤ my iff x ≤ y for all positive m.

We can think of m1α1 + ... + mnαn ≤ p1α1 + ... + pnαn as saying that the “aggregative value” of having mi copies of αi for all i is less than or equal to the “aggregative value” of having pi copies of αi for all i. The aggregative value of a number of objects is the “sum value”, where we don’t take into account things like the diversity or lack thereof or other “arrangement values”.

Now extend ≤ to formal sums m1α1 + ... + mnαn where the mi are allowed to be positive or negative by stipulating that:

  • m1α1 + ... + mnαn ≤ p1α1 + ... + pnαn iff (k+m1)α1 + ... + (k+mn)αn ≤ (k+p1)α1 + ... + (k+pn)αn for some natural k such that k + mi and k + pi are non-negative for all i.

Axiom (1) implies that the choice of k is irrelevant. It is easy to see that ≤ still satisfies both (1) and (2). Moreover, ≤ is still total, transitive and reflexive.
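The shifting trick in the bullet point above can be sketched in code. This is a toy illustration of mine, not from the post: the two-element basis, the weights in the base order, and the function names are all made up for the example. Formal sums are coefficient tuples, and a base comparison defined only on nonnegative tuples is lifted to arbitrary integer tuples by adding a common k to every coordinate; axiom (1) is what guarantees the choice of k doesn’t matter.

```python
# Toy sketch: formal sums over a basis (alpha_1, alpha_2) are coefficient
# tuples. `leq0` is a base comparison defined on nonnegative tuples only;
# `extend_to_integers` lifts it to arbitrary integer tuples by shifting
# all coordinates up by a common k, as in the bullet point above.

def extend_to_integers(leq0):
    def leq(m, p):
        # smallest shift making every coordinate of both sums nonnegative
        k = max(0, -min(min(m), min(p)))
        return leq0(tuple(mi + k for mi in m), tuple(pi + k for pi in p))
    return leq

# Made-up base order: aggregative value with weights 2 and 1 per copy.
def leq0(m, p):
    return 2 * m[0] + m[1] <= 2 * p[0] + p[1]

leq = extend_to_integers(leq0)
assert leq((-1, 3), (1, 0))       # value -2 + 3 = 1 is <= value 2
assert not leq((1, 0), (-1, 3))   # and not conversely
```

The same pattern works for the rational extension in the next step, with the common shift replaced by a common integer multiplier u, justified by axiom (2).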

Next extend ≤ to formal sums r1α1 + ... + rnαn where the ri are rational numbers by stipulating that:

  • r1α1 + ... + rnαn ≤ s1α1 + ... + snαn iff ur1α1 + ... + urnαn ≤ us1α1 + ... + usnαn for some positive integer u such that uri and usi are integers for all i.

Axiom (2) implies that the choice of u is irrelevant. Again, it is easy to see that ≤ continues to satisfy (1) and (2), and that it remains total, transitive and reflexive.

Thus, ≤ is a total vector space preorder on an n-dimensional vector space V over the rationals with basis α1, ..., αn.

Let C be the positive cone of ≤: C = {x ∈ V : 0 ≤ x}. This is closed under addition and multiplication by positive rational scalars. Let K be the kernel of the preorder, i.e., K = {x ∈ V : 0 ≤ x ≤ 0} = C ∩ −C.

Now, let W be the n-dimensional vector space over the reals with basis α1, ..., αn. Let D be the smallest subset of W containing C and closed under addition and multiplication by positive real scalars: this is the set of real-linear combinations of elements of C with positive coefficients. It is easy to check that D ∩ V = C. Let L = D ∩ −D. Then L ∩ V = K.

Let E be a maximal subset of W that contains D, is closed under addition and multiplication by positive real scalars, and is such that E ∩ −E = L. This exists by Zorn’s Lemma. I claim that for any v in W, either v or −v is in E. For suppose neither v nor −v is in E. Then let E′ = {e + tv : t ≥ 0, e ∈ E}. This contains E (take t = 0) and v (take e = 0, noting 0 ∈ C ⊆ E), and it is closed under addition and multiplication by positive reals. If we can show that E′ ∩ −E′ = L, then since E is a proper subset of E′ (as v ∈ E′ ∖ E), we will contradict the maximality of E. So suppose z ∈ E′ ∩ −E′ but z ∉ L. Since E ∩ −E = L, at least one of z and −z lies in E′ ∖ E; without loss of generality suppose z ∈ E′ ∖ E, so that z = e + tv for some e ∈ E and t > 0. Since −z ∈ E′, we also have −z = e′ + t′v for some e′ ∈ E and t′ ≥ 0. Adding, 0 = (e + e′) + (t + t′)v, so (t + t′)v = −(e + e′) ∈ −E. Since t + t′ > 0 and −E is closed under multiplication by positive scalars, v ∈ −E, contradicting our assumption that −v is not in E.

Define ≤* on W by letting v ≤* w iff w − v ∈ E. Note that ≤* agrees with ≤ on V. If v ≤ w are in V, then w − v ∈ C ⊆ E and so v ≤* w. Conversely, suppose v ≤* w, so that w − v ∈ E. Since w − v is in V, and ≤ is total on V, if we don’t have v ≤ w, we must have w ≤ v and hence v − w ∈ C, so w − v ∈ −C ⊆ −E. Since E ∩ −E = L, we have w − v ∈ L. But v, w ∈ V, so w − v ∈ L ∩ V = K. Thus, v ≤ w, a contradiction.

It’s also easy to see that ≤* is total, transitive and reflexive. It is therefore representable by lexicographically-ordered vector-valued utilities by the work of Hausner in the middle of the last century. And vector-valued utilities are representable by hyperreals (just represent (x1,...,xn) with x1 + x2ϵ + ... + xnϵ^(n−1) for a positive infinitesimal ϵ).
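To make the last step concrete, here is a minimal sketch of mine (not Hausner’s construction) of why vector-valued utilities with the lexicographic order just are hyperreal utilities: since each power of ϵ dominates all higher powers, comparing x1 + x2ϵ + ... + xnϵ^(n−1) and y1 + y2ϵ + ... + ynϵ^(n−1) comes to comparing the coefficient tuples lexicographically, and Python’s built-in tuple comparison is already lexicographic.

```python
# A vector utility (x_1, ..., x_n) encodes the hyperreal
# x_1 + x_2*eps + ... + x_n*eps**(n-1), eps a positive infinitesimal.
# Comparing such hyperreals = comparing coefficient tuples lexicographically.

def hyperreal_leq(x, y):
    return tuple(x) <= tuple(y)  # Python tuple order is lexicographic

# Tie on the standard part: the eps coefficient decides.
assert hyperreal_leq((1, 0), (1, 5))
# A standard-part difference swamps any finite eps coefficient.
assert not hyperreal_leq((2, -1000), (1, 1000))
```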

Remark 1: Here is a plausible condition on the extension ≤* that we can enforce if we like: if Q and U are neighborhoods of v and w respectively, and for all q ∈ Q ∩ V and all u ∈ U ∩ V we have q ≤ u, then v ≤* w. For this condition will hold provided we can show that if Q is a neighborhood of v such that Q ∩ V ⊆ C, then v ∈ E. Note that any positive-real-linear combination of points v satisfying this neighborhood condition also satisfies this condition, and any sum of a point v satisfying this condition and a point in D will also satisfy it. Thus we can add to D all such points v, and carry on with the rest of the proof.

Remark 2: If we start off with ≤ being a partial preorder, ≤* still becomes a total preorder. Then instead of proving that it agrees with the partial preorder on V, we use basically the same proof to show that it extends both the non-strict and strict orders: if w ≤ v, then w ≤* v, and if w < v, then w <* v.

Question 1: Can we make sure that the values are real numbers?

Response: No. Suppose you are comparing a sheep and a goat, and suppose that they are valued positively and equally, except that ties are broken in favor of the sheep. Thus, n+1 copies of the goat are better than n copies of the sheep, both are better than nothing, but n copies of the sheep are better than n copies of the goat. To represent this with hyperreals we need to take the value of the sheep to be ϵ + g where g > 0 is the value of the goat, and where ϵ/g is a positive infinitesimal.
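The sheep/goat pattern can be checked with the same lexicographic encoding of hyperreals as pairs (standard part, ϵ coefficient). This is my own illustration, with the goat’s value arbitrarily normalized to G = 1: the sheep is worth (G, 1), i.e. g + ϵ, the goat (G, 0), and aggregates add coordinatewise.

```python
# Hyperreals as (standard part, eps coefficient) pairs, compared
# lexicographically. goat = (G, 0), sheep = (G, 1) = g + eps.
G = 1  # arbitrary positive value of the goat (a made-up normalization)

def value(n_sheep, n_goats):
    """Aggregative value of n_sheep sheep plus n_goats goats."""
    return (G * (n_sheep + n_goats), n_sheep)

for n in range(1, 100):
    assert value(0, n + 1) > value(n, 0)  # n+1 goats beat n sheep
    assert value(n, 0) > value(0, n)      # n sheep beat n goats (tie-break)
    assert value(n, 0) > (0, 0)           # both beat nothing
    assert value(0, n) > (0, 0)
```

No single real number for the sheep can satisfy all three families of comparisons at once, which is the point of the “No” answer.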

Question 2: Is the representation “practically unique”, i.e., does it generate the same decisions in probabilistic situations, or at least in ones with real-valued probabilities?

Response: No. Suppose you have a sheep and a goat. Now consider two hypotheses: on the first, the sheep is worth π − ϵ goats, and on the second, the sheep is worth π + ϵ goats, for a positive infinitesimal ϵ. Both hypotheses generate the same aggregative value comparisons between aggregates consisting of n1 copies of the goat and n2 copies of the sheep for natural numbers n1 and n2, since π is irrational. But the two hypotheses generate opposite probabilistic decisions if we are choosing between a 1/π chance of the sheep and certainty of the goat.
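The two hypotheses can likewise be checked numerically (again my own illustration, using the pair encoding of hyperreals; the expected values are written out by hand, since p = 1/π exactly cancels the π in the sheep’s value). In goat units, a 1/π chance of the sheep is worth 1 − ϵ/π on the first hypothesis and 1 + ϵ/π on the second, while the certain goat is worth exactly 1.

```python
from math import pi

# Hyperreals in goat units as (standard part, eps coefficient) pairs.
goat = (1.0, 0.0)
# Expected value of a 1/pi chance of the sheep, with the exact
# cancellation (1/pi) * pi = 1 done by hand to avoid float noise:
lottery_h1 = (1.0, -1.0 / pi)  # hypothesis 1: sheep = (pi - eps) goats
lottery_h2 = (1.0, +1.0 / pi)  # hypothesis 2: sheep = (pi + eps) goats
assert lottery_h1 < goat < lottery_h2  # the hypotheses flip the decision

# Yet on pure aggregates the hypotheses agree: n1 goats vs n2 sheep is
# always settled by the standard parts n1 vs n2*pi (pi being irrational),
# so the eps coefficient never gets a say.
for n1 in range(20):
    for n2 in range(20):
        goats = (float(n1), 0.0)
        sheep_h1 = (n2 * pi, -float(n2))
        sheep_h2 = (n2 * pi, +float(n2))
        assert (goats < sheep_h1) == (goats < sheep_h2)
```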

Thursday, April 25, 2024

Brain snatching is not a model of life after death

Van Inwagen infamously suggested the possibility that at the moment of death God snatches a core chunk of our brain, transports it to a different place, replaces it with a fake chunk of brain, and rebuilds the body around the transported chunk.

I think that, were van Inwagen’s suggestion correct, the right thing to say would be that we do not die. But then it is a seriously problematic view, given the Christian commitment that people do, in fact, die. Hence van Inwagen’s model is not a model of life after death.

Argument: If in the distant future all of a person’s body was destroyed in an accident except for a surviving core chunk, and medical technology had progressed so much that it could regrow the rest of the body from that chunk, I think we would not say that the medical technology resurrected the person, but that it prevented the person’s death.

Objection: The word “death” gets its meaning ostensively from typical cases we label as cases of “death”. In these cases, the heart stops, the parts of the brain observable to us stop having electrical activity, etc. What we mean by “death” is what happens in these cases when this stuff happens. If van Inwagen’s suggestion is correct, then what happens in these cases is the snatching of a core chunk. Hence if van Inwagen’s suggestion is correct, then death is divine snatching of a core chunk of the brain, and we do in fact die.

Responses: First, if death is divine snatching of a core chunk of the brain, then jellyfish and trees don’t die, because they don’t have a brain. I suppose, though, one might say that “death” is understood analogously between jellyfish and humans, and it is human death that is a divine snatching of a core chunk of the brain.

Second, it seems obvious that if God had chosen not to snatch a core chunk of Napoleon’s brain, and allowed Napoleon’s body to rot completely, then Napoleon would be dead. Hence, not even the death of a human is identical to a divine snatching.

Third, I think an important part of the concept of death is that death is something common to humans and other organisms. People, dogs, jellyfish, and trees all die. We should have an account of death common to all of these. The best story I know is that death is the destruction of the body. And the van Inwagen story doesn’t have that. So it’s not a story about death.

Wednesday, April 24, 2024

A small disability

On the mere difference view of disability, one isn’t worse off for being disabled as such, though one is worse off due to ableist arrangements in society. A standard observation is that the mere difference view doesn’t work for really big disabilities.

In this post, I want to argue that it doesn’t work for some really tiny disabilities. For instance, about 3-5% of the population without any other brain damage exhibits “musical anhedonia”, an inability to find pleasure in music. I haven’t been diagnosed, but I seem to have something like this condition. With the occasional exception, music is something I either screen out or find a minor annoyance. Occasionally I find myself with an emotional response, but I also don’t like having my emotions pulled on by something I don’t understand. When I play a video game, one of the first things I do is turn off all music. If I could easily run TV through a filter that removed music, I would (at least if watching alone). (Maybe movies as well, though I might feel bad about disturbing the artistic integrity of the director.)

On the basis of testimony, however, I know that music can embody immense aesthetic goods which cannot be found in any other medium. I am missing out on these goods. My missing out on them is not a function of ableist assumptions. After all, if the world were structured in accordance with musical anhedonia, there would be no music in it, and I would still miss out on the aesthetic goods of music—it’s just that everybody else would miss out on them as well, which is no benefit to me. I suppose in a world like that more effort would be put into other art forms. The money spent on music in movies might be spent on better editing, say. In church, perhaps, better poetic recitations would be created in place of hymns. However, more poetry and better editing wouldn’t compensate for the loss of music, since having music in addition to other art forms makes for a much greater diversity of art.

Furthermore, presumably, parallel to music anhedonia there are other anhedonias. If to compensate for musical anhedonia we replace music with poetic recitations, then those who have poetic anhedonia (I don’t know if that is a real or a hypothetical condition; I would be surprised, though, if no one suffered from it; I myself don’t appreciate sound-based poetry much, though I do appreciate meaning-based poetry, like Biblical Hebrew poetry or Solzhenitsyn’s “prose poems”) but don’t have musical anhedonia are worse off.

In general, the lack of an ability to appreciate a major artistic modality is surely a loss in one’s life. It need not be a major loss: one can compensate by enjoying other modalities. But it is a loss.

In the case of a more major disability, there can be personal compensations from the intrinsic challenges arising from the disability. But really tiny disabilities need not generate much in the way of such meaningful compensations.

Here’s another argument that musical anhedonia isn’t a mere difference. Suppose that Alice is a normal human being who would be fully able to get pleasure from music. But Alice belongs to a group unjustly discriminated against, and a part of this discrimination is that whenever Alice is in earshot, all music is turned off. As a result, Alice has never enjoyed music. It is clear that Alice was harmed by this. And the bulk of the harm was that she did not have the aesthetic experience of enjoying music—which is precisely the harm that the person with musical anhedonia suffers.

Objection 1: Granted, musical anhedonia is not a mere difference. But it is also not a disability because it does not significantly impact life.

Response 1.1: But music is one of the great cultural accomplishments of the human species.

Response 1.2: Moreover, transpose my argument to a hypothetical society where it is difficult to get by without enjoying music, a society where, for instance, most social interactions involve explicit sharing in the pleasure of music. In that society, musical anhedonia may make one an outcast. It would be a disability. But it would still make one lose out on one of the great forms of art, and hence would still be a really bad thing, rather than a mere difference.

Objection 2: There is a philosophical and a spiritual benefit to me from my musical anhedonia, and it’s not minor. The spiritual benefit is that I look forward to being able to really enjoy music in heaven in a way in which I probably wouldn’t if I already enjoyed it significantly. The philosophical benefit is that music provides me with a nice model of an aesthetic modality that is beyond one’s grasp. Normally, “things beyond one’s grasp” are hard to talk about! But in the case of music, I can lean on the testimony of others, and thus talk about this art form that is beyond my grasp. And this, in turn, provides me with a reason to think that there are likely other goods beyond our current ken, perhaps even goods that we will enjoy in heaven (back to the spiritual). Furthermore, music provides me with a conclusive argument against emotivist theories of beauty. For I think music is beautiful, but I do not have the relevant aesthetic emotional reaction to it. My belief that music is beautiful is largely based on testimony.

Response 2: These kinds of compensating benefits do not help the mere difference view. Even if one were able to get tenure on the strength of a book on the philosophy of disease inspired by getting a bad case of Covid, the bad case of Covid would be bad and not a mere difference. The mere difference view is about something more intrinsic to the condition.

Tuesday, April 23, 2024

Value and aptness for moral concern

In two recent posts (this and this) I argued that dignity does not arise from value.

I think the general point here goes beyond value. Some entities are more apt for being morally concerned about than others. These entities are more appropriate beneficiaries of our actions, we have more reason to protect them, and so on. The degreed property these entities have more of has no name, but I will call it “apmoc”: aptness for moral concern. Dignity is then a particularly exalted version of apmoc.

Apmoc as such is agent-relative. If you and I have cats, then my cat has more apmoc relative to me than your cat, while your cat has more apmoc relative to you. Thus, I should have more moral concern for my cat and you for yours. Agent-relativity can be responsible for the bulk of the apmoc in the case of some entities—though probably not in the case of entities whose apmoc rises to the level of dignity.

However, we can distinguish an agent-independent core to an entity’s apmoc, which I will call the entity’s “core apmoc”. One can think of the core apmoc as the apmoc the entity has relative to an agent who has no special relationship to the entity. (Note: My concern in this post is the apmoc relative to human agents, so the core apmoc may still be relative to the human species.)

Now, then, here is a thesis that initially sounds good, but I think is quite mistaken:

  1. An entity’s core apmoc is proportional to its value.

For suppose I have two pet dragons, on par with respect to all properties, except one can naturally fly and the other is naturally flightless. The flying dragon has more value: it is a snazzier kind of being, having an additional causal power. Both dragons equally like being scratched under the chin (perhaps with a rake). The fact that the flying dragon has more value does not give me any additional reason to scratch it. More generally, the flying dragon does not have any more core apmoc.

One might object: if it is a matter of saving the life of one of the dragons, other things being equal, one should save the life of the flying dragon, because it is a better kind of being. However, even if this judgment is correct, it is not due to a difference in apmoc. If the flying dragon dies, more value is lost. The death of a dragon removes from the world all the goods of the dragon: its majestic beauty, its contribution to winter heating, its protection of the owner, its prevention of sheep overpopulation, and so on. The death of the flying dragon removes a good—an instance of the causal power of flight—from the world which the death of the flightless dragon does not. If the reason one should save the life of the flying dragon over the flightless one is that the flying one is a better kind of being, then the reason one is saving its life is not because the flying dragon has more apmoc, but because more is lost by its death. If I have a choice of saving Alice from losing a thumb or Bob from losing the little toe, I should save Alice from losing a thumb, not because Alice has more apmoc, but because a thumb is a bigger loss than a toe.

The above objection points out one feature. Sometimes what is in some sense “the same benefit” bestowed on an entity will in fact confer a benefit proportional to the value of the entity. Saving an entity from destruction sounds like “the same benefit”, but is a greater benefit where there is more value to be saved. Similarly, if I have a choice between fixing a tire puncture in my car or in my bike, more value is gained when I fix the car’s tire, because the car is more valuable. However, this is not due to the car having more apmoc, but simply because the benefits are different: if I fix the car’s tire, the car would become capable of transporting around my whole family, while the bike would only become capable of transporting me.

Let’s move away from fantasy. Suppose Alice and Bob are on par in all respects, except that Alice knows the 789th digit of π while Bob does not. Knowledge is valuable, and so if you have more knowledge, you have more value. But now if I have a choice of whom to give a delicious chocolate-chip muffin, the fact that Alice knows the 789th digit of π is irrelevant—it contributes (slightly) to value but not at all to core apmoc (it might contribute to the agent-relative aspects of apmoc in some special cases, since shared knowledge can be a partial constituent of a morally relevant relationship).

Granted, a piece of knowledge is a contingent contribution to value. One might think that core apmoc is determined proportionately to the essential values of an entity. But I think this is implausible. Most people have the intuition that, other things being equal, a virtuous person has more apmoc than a vicious one. But virtue is not an essential value—it is a value that fluctuates over a lifetime.

The case of virtue and vice suggests that there may be some values that contribute to core apmoc. I think this is likely. Core apmoc does not appear in a vacuum. But the connection between apmoc and value is complex, and the two are quite different.

Monday, April 22, 2024

Does culpable ignorance excuse?

It is widely held that if you do wrong in culpable ignorance (ignorance that you are blameworthy for), you are culpable for the wrong you do. I have long thought this is mistaken—instead we should frontload the guilt onto the acts and omissions that made one culpable for the ignorance.

I will argue for a claim in the vicinity by starting with some cases that are not cases of ignorance.

  1. One is no less guilty if one tries to shoot someone and misses than if one hits them.

  2. If one drinks and drives and is lucky enough to hit no one, one is no less guilty than if one does hit someone, as long as the degree of freedom and knowledge in the drinking and driving is the same.

  3. If one freely takes a drug one knows to remove free will and produce violent behavior in 25% of cases, one is no less guilty if involuntary violence does not ensue than if involuntary violence does ensue.

Now, let’s consider this case of culpable ignorance:

  4. Mad scientist Alice offers Bob a million dollars to undergo a neural treatment that over the next 48 hours will make Bob think that Elbonians—a small ethnic group—are disease-bearing mosquitoes. Bob always kills organisms that he thinks are disease-bearing mosquitoes on sight. Bob correctly estimates that there is a 25% chance that he will meet an Elbonian over the next 48 hours. If Bob accepts the deal, he is no less guilty if he is lucky enough to meet no Elbonians than if he does meet and kill one.

This is as clear a case of culpable ignorance as can be: in accepting the deal, Bob knows he will become ignorant of the human nature of Elbonians, and he knows there is a 25% chance this will result in his killing an Elbonian. I think that just as in cases (1)–(3), one is no less guilty if the bad consequences for others don’t result, so too in case (4), Bob is no less guilty if he never meets an Elbonian.

For a final case, consider:

  5. Just like (4), except that instead of coming to think Elbonians are (disease-bearing) mosquitoes, Bob will come to believe that unlike all other innocent human persons whom it is impermissible to kill, it is obligatory to kill Elbonians, and Bob’s estimate that this belief will result in his killing an Elbonian is 25%.

Again, Bob is no less guilty for taking the money and getting the treatment if he does not run into any Elbonians than if he does run into and kill an Elbonian.

Therefore, one is no less guilty for one’s culpable ignorance if wicked action does not result. Or, equivalently:

  6. One is no more guilty if wicked action does result from culpable ignorance than if it does not.

But (6) is not quite the claim I started with. I started by claiming one is not guilty for the wicked action in cases of culpable ignorance. The claim I argued for is that one is no guiltier for the wicked action than if there is no wicked action resulting from the ignorance. But now if one was guilty for the wicked action, it seems one would be guiltier, since one would have both the guilt for the ignorance and for the wicked action.

However, I am now not so sure. The argument in the previous paragraph depended on something like this principle:

  7. Being guilty of both action A and action B is guiltier than just being guilty of action A, all other things being equal. (Ditto for omissions, but I want to be briefer.)

Thus being guilty of acquiring ignorance and acting wickedly on the ignorance would be guiltier than just of acquiring ignorance, and hence by (6) the wicked action does not have guilt. But now that I have got to this point in the argument, I am not so sure of (7).

There may be counterexamples to (7). First, a politician who lies to the people an hour after a deadly natural disaster is no less guilty than one who lies in the same way an hour before the disaster. But in lying to the people after the disaster one lies to fewer people—since some people died in the disaster!—and hence there are fewer actions of lying (instead of lying to Alice, and lying to Bob, and lying to Carl, one “only” lies to Alice and lies to Bob). But I am not sure that this is right—maybe there is just one action of lying to the people rather than a separate one for each audience member.

Second, suppose Bob sets out to insult Alice in person, and consider two cases. In one case, when he has decided to insult Alice, he gets into his car, drives to see Alice, and insults her. In the other case, when he gets into the car he realizes he doesn’t have enough gas to reach Alice, and so he buys gas, then drives to see Alice, and then insults her. In the second case, Bob performed an action he didn’t perform in the first case: buying gas in order to insult Alice. But it doesn’t seem that Bob is guiltier in the second case, even though he did perform one more guilty action. I am also not sure about this case. Here I am actually inclined to think that Bob is more guilty, for two reasons. First, he was willing to undertake a greater burden in order to insult Alice—and that increases guilt. Second, he had an extra chance to repent—each time one acquiesces in a means, that’s a chance to just say no to the whole action sequence. And yet he refused this chance. (It seems to me that Bob is guiltier in the second case, just as the assassin possessing two bullets and shooting the second after missing with the first—regardless of whether the second shot hits—is guiltier than the assassin who after shooting and missing once stops.)

While I am not convinced of the cases, they point to the idea that in the context of (7), the guilt of action A might “stretch” to making B guilty without increasing the total amount of guilt. If that makes sense, then that might actually be the right way of accounting for actions done in culpable ignorance. If Bob kills an Elbonian, he is guilty. That is not an additional item of guilt, but rather the guilt of the actions and omissions that caused the ignorance stretches over and covers the killing. This seems to me to mesh better with ordinary ways of talking—we don’t want to say that Bob’s killing of the Elbonian in either case (4) or (5) is innocent. And saying that there is no additional guilt may be a way of assuaging the intuition I have had over the years when I thought that culpable ignorance excuses.

Maybe.

A final obvious question is about punishment. We do punish differentially for attempted and completed murder, and for drunk driving that does not result in death and drunk driving that does. I think there are pragmatic reasons for this. If attempted and completed murder were equally punished, there would be an incentive to “finish the job” upon initial failure. And having a lesser penalty for non-lethal drunk driving creates an incentive for the drunk driver to be more careful driving—how much that avails depends on how drunk the driver is, but it might make some difference.

Thursday, April 18, 2024

Evaluating some theses on dignity and value

I’ve been thinking a bit about the relationship between dignity and value. Here are four plausible principles:

  1. If x has dignity, then x has great non-instrumental value.

  2. If x has dignity, then x has great non-instrumental value because it has dignity.

  3. If x has dignity and y does not, then x has more non-instrumental value than y.

  4. Dignity just is great value (variant: great non-instrumental value).

Of these theses, I am pretty confident that (1) is true. I am fairly confident (3) is false, except perhaps in the special case where y is a substance. I am even more confident that (4) is false.

I am not sure about (2), but I incline against it.

Here is my reason to suspect that (2) is false. It seems that things have dignity in virtue of some further fact F about them, such as that they are rational beings, or that they are in the image and likeness of God, or that they are sacred. In such a case, it seems plausible to think that F directly gives the dignified entity both the great value and dignity, and hence the great value derives directly from F and not from the dignity. For instance, maybe what makes persons have great value is that they are rational, and the same fact—namely that they are rational—gives them dignity. But the dignity doesn’t give them additional value beyond that bestowed on them by their rationality.

My reason to deny (4) is that great value does not give rise to the kinds of deontological consequences that dignity does. One may not desecrate something with dignity no matter what consequences come of it. But it is plausible that mere great value can be destroyed for the sake of dignity.

This leaves principle (3). The argument in my recent post (which I now have some reservations about, in light of some powerful criticisms from a colleague) points to the falsity of (3). Here is another, related reason. Suppose we find out that the Andromeda Galaxy is full of life, of great diversity and wonder, including both sentient and non-sentient organisms, but has nothing close to sapient life—nothing like a person. An evil alien is about to launch a weapon that will destroy the Andromeda Galaxy. You can either stop that alien or save a drowning human. It seems to me that either option is permissible. If I am right, then the value of the human is not much greater than that of the Andromeda Galaxy.

But now imagine that the Whirlpool Galaxy has an order of magnitude more life than the Andromeda Galaxy, with much greater diversity and wonder, but still with nothing sapient. Then even if the value of the human is greater than that of the Andromeda Galaxy, because it is not much greater, while the value of the Whirlpool Galaxy is much greater than that of the Andromeda Galaxy, it follows that the human does not have greater value than the Whirlpool Galaxy.

However, the Whirlpool Galaxy, assuming it has no sapience in it, lacks dignity. A sign of this is that it would be permissible to deliberately destroy it in order to save two similar galaxies from destruction.

Thus, the human is not greater in value than the Whirlpool Galaxy (in my story), but the human has dignity while the Whirlpool Galaxy lacks it.

That said, on my ontology, galaxies are unlikely to be substances (especially if the life in the galaxy is considered a part of the galaxy, since following Aristotle I doubt that a substance can be a proper part of a substance). So it is still possible that principle (3) is true for substances.

But I am not sure even of (3) in the case of substances. Suppose elephants are not persons, and imagine an alien sentient but not sapient creature which is like an elephant in the temporal density of the richness of life (i.e., richness per unit time), except that (a) its rich elephantine life lasts millions of years, and (b) there can only be one member of the kind, because they naturally do not reproduce. On the other hand, consider an alien person who naturally only has a life that lasts ten minutes, and has the same temporal density of richness of life that we do. I doubt that the alien person is much more valuable than the elephantine alien. And if the alien person is not much more valuable, then by imagining a non-personal animal that is much more valuable than the elephantine alien, we have imagined that some person is not more valuable than some non-person. Assuming all non-persons lack dignity and all persons have dignity, we have a case where an entity with dignity is not more valuable than an entity without dignity.

That said, I am not very confident of my arguments against (3). And while I am dubious of (3), I do accept:

  5. If x has dignity and y does not, then y is not more valuable than x.

I think the case of the human and the galaxy, or the alien person and alien elephantine creature, are cases of incommensurability.

Wednesday, April 17, 2024

Desire-fulfillment theories of wellbeing

On desire-fulfillment (DF) theories of wellbeing, cases of fulfilled desire are an increment to utility. What about cases of unfulfilled desire? On DF theories, we have a choice point. We could say that unfulfilled desires don’t count at all—it’s just that one doesn’t get the increment from the desire being fulfilled—or that they are a decrement.

Saying that unfulfilled desires don’t count at all would be mistaken. It would imply, for instance, that it’s worthwhile to gain all the possible desires, since then one maximizes the amount of fulfilled desire, and there is no loss from unfulfilled desire.

So the DF theorist should count unfulfilled desire as a decrement to utility.

But now here is an interesting question. If I desire that p, and then get an increment x > 0 to my utility if p, is my decrement to utility if not p just −x or something different?

It seems that in different cases we feel differently. There seem to be cases where the increment from fulfillment is greater than the decrement from non-fulfillment. These may be cases of wanting something as a bonus or an adjunct to one’s other desires. For instance, a philosopher might want to win a pickleball tournament, and intuitively the increment to utility from winning is greater than the decrement from not winning. But there are cases where the decrement is at least as large as the increment. Cases of really important desires, like the desire to have friends, may be like that.

What should the DF theorist do about this? The observation above seems to do serious damage to the elegant “add up fulfillments and subtract non-fulfillments” picture of DF theories.

I think there is actually a neat move that can be made. We normally think of desires as coming with strengths or importances, and of course every DF theorist will want to weight the increments and decrements to utility with the importance of the desire involved. But perhaps what we should do is to attach two importances to any given desire: an importance that is a weight for the increment if the desire is fulfilled and an importance that is a weight for the decrement if the desire is not fulfilled.

So now it is just a psychological fact that each desire comes along with a pair of weights, and we can decide how much to add and how much to subtract based on the fulfillment or non-fulfillment of the desire.
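The two-weight picture can be sketched in a few lines of code. This is only an illustrative model—the desires, weight numbers, and names are my own assumptions, not anything from the post—but it shows how the asymmetric bookkeeping works: each desire carries one weight that is added if it is fulfilled and a possibly different weight that is subtracted if it is not.

```python
# A minimal sketch of the two-weight desire-fulfillment utility model.
# All particular desires and weights below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Desire:
    description: str
    fulfillment_weight: float     # increment to utility if fulfilled
    nonfulfillment_weight: float  # decrement to utility if unfulfilled
    fulfilled: bool

def utility(desires):
    """Add the fulfillment weight of each fulfilled desire and
    subtract the non-fulfillment weight of each unfulfilled one."""
    total = 0.0
    for d in desires:
        if d.fulfilled:
            total += d.fulfillment_weight
        else:
            total -= d.nonfulfillment_weight
    return total

desires = [
    # A "bonus"-style desire: winning adds a lot, losing costs little.
    Desire("win the pickleball tournament", 5.0, 1.0, False),
    # An important desire: fulfillment and non-fulfillment weigh equally.
    Desire("have friends", 10.0, 10.0, True),
]
print(utility(desires))  # 10.0 - 1.0 = 9.0
```

On this model, the philosopher who loses the pickleball tournament but has friends still comes out well ahead, which matches the intuition that the tournament desire is a mere bonus.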

If this is right, then we have an algorithm for a good life: work on your psychology to gain lots and lots of new desires with large fulfillment weights and small non-fulfillment weights, and to transform your existing desires to have large fulfillment weights and small non-fulfillment weights. Then you will have more wellbeing, since the fulfillments of desires will add significantly to your utility but the non-fulfillments will make little difference.

This algorithm results in an inhuman person, one who gains much if their friends live and are loyal, but loses nothing if their friends die or are disloyal. That’s not the best kind of friendship. The best kind of friendship requires vulnerability, and the algorithm takes that away.