Monday, April 26, 2021

If materialism is true, God exists

Causal finitism is the doctrine that backwards infinite causal histories are impossible.

  1. If the xs compose y, then y cannot have caused all of the xs.

  2. If materialism is true and causal finitism is false, then it is possible to have a human being that (a) is composed of cells and (b) caused each of its cells via a backwards infinite regress.
  3. So, if materialism is true, causal finitism is true. (1, 2)

  4. If causal finitism is true, then God exists.

  5. So, if materialism is true, God exists. (3, 4)

  6. If God exists, then materialism is false.

  7. So, materialism is false. (5, 6)

Premise (1) is a strengthening of a plausible principle banning self-causation.

Premise (2) follows from the fact that we are causes of all our present cells. If presentism is true, that completes the argument against materialism as in my previous post. But if eternalism or growing block are true, then we may also be composed of our past cells. And we didn’t cause our first cells. However if causal finitism is false, then it’s very plausible that backwards infinite causal regresses are possible, and so we could have existed from eternity, continually producing new cells, with the old ones dying.

Premise (4) is backed by a version of the kalaam argument.

Premise (6) is definitional if we understand materialism strongly enough to apply not just to us but to all reality. If we understand materialism more weakly, then the argument “only” yields the conclusion (5) that if materialism is true, God exists.

If presentism is true, materialism is false

  1. If the xs compose y, then y cannot have caused all of the xs.

  2. I caused all my present cells.

  3. If presentism is true, then all my cells are present cells.

  4. So, if presentism is true, then I caused all my cells. (2, 3)

  5. If materialism is true, then I am composed of my cells.

  6. If materialism is true, then I cannot have caused all of my cells. (1, 5)

  7. So, if presentism is true, materialism is not true. (4, 6)

At least sometimes parts aren't prior to wholes

  1. Efficient causes are explanatorily prior to their effects.

  2. Circularity of explanatory priority is impossible.

  3. I am the efficient cause of my teeth—I grew them!

  4. Therefore, my teeth are not explanatorily prior to me. (1–3)

  5. My teeth are parts of me.

  6. Therefore, at least some parts are not explanatorily prior to the wholes. (4, 5)

Friday, April 23, 2021

Why I can't believe in a God other than of classical theism

I can’t get myself to believe in a God who is an old bearded guy in the sky. That would be just a fairy tale.

What’s wrong with such a concept of God? It’s the beard! Seriously, the problem is that a guy who has a beard has parts and changes. Whether the parts are material or immaterial does not seem of very deep metaphysical significance. But having parts or changing, either one of these, is an absurd anthropomorphism.

And hence I can’t get myself to believe in a God who changes or has parts. That leaves classical theism and atheism as the options. And atheism leads to scepticism, I think.

More on doing and allowing

Let’s suppose that disease X, if medically unchecked, will kill 4.00% of the population, and there is one and only one intervention available: a costless vaccine that is 100% effective at preventing X but that kills 3.99% of those who take it. (This is, of course, a very different situation than the one we are in regarding COVID-19, where we have extremely safe vaccines.) Moreover, there is no correlation between those who would be killed by X and those who would be killed by the vaccine.

Assuming there are no other relevant consequences (e.g., people’s loss of faith in vaccines leading to lower vaccine uptake in other cases), a utilitarian calculation says that the vaccine should be used: instead of 316.0 million people dying, 315.2 million people would die, so 800,000 fewer people would die. That’s an enormous benefit.
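Here is that calculation as a minimal code sketch, assuming a world population of about 7.9 billion (the figure the numbers above presuppose):

```python
# Minimal sketch of the utilitarian tally, assuming a world population of
# about 7.9 billion (the figure the numbers above presuppose).
POPULATION = 7.9e9

deaths_without_vaccine = POPULATION * 0.0400  # disease X kills 4.00%
deaths_with_vaccine = POPULATION * 0.0399     # the vaccine kills 3.99%

print(f"{deaths_without_vaccine / 1e6:.1f}M vs {deaths_with_vaccine / 1e6:.1f}M; "
      f"about {(deaths_without_vaccine - deaths_with_vaccine) / 1e6:.1f}M fewer deaths")
```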

But it’s not completely clear that this costless vaccine should be promoted. For the 315.2 million who would die from the vaccine would be killed by us (i.e., us humans). There is at least a case to be made that allowing 316.0 million deaths is preferable to causing 315.2 million. The Principle of Double Effect may justify the vaccination because the deaths are not intentional—they are neither ends nor means—but still one might think that there is a doing/allowing distinction that favors allowing the deaths.

I am not confident what to say in the above case. But suppose the numbers are even closer. Suppose that we have extremely precise predictions and they show that the hypothetical costless vaccine would kill exactly one less person than would be killed by X. In that case, I do feel a strong pull to thinking this vaccine should not be marketed. On the other hand, if the numbers are further apart, it becomes clearer to me that the vaccine is worth it. If the vaccine kills 2% of the population while X kills 4%, the vaccine seems worthwhile (assuming no other relevant consequences). In that case, wanting to keep our hands clean by refusing to vaccinate would result in 158 million more people dying. (That said, I doubt our medical establishment would allow a vaccine that kills 2% of the population even if the vaccine would result in 158 million fewer people dying. I think our medical establishment is excessively risk averse and disvalues medically-caused deaths above deaths from disease to a degree that is morally unjustified.)

From a first-person view, though, I lose my intuition that if the vaccine only kills one fewer person than the disease, then the vaccine should not be administered. Suppose I am biking and my bike is coasting down a smooth hill. I can let the bike continue to coast to the bottom of the hill, or I can turn off into a side path that has just appeared. Suddenly I acquire the following information: by the main path there will be a tiger that has a 4% chance of eating any cyclist passing by, while by the side path there will be a different tiger that has “only” a 3.99999999% chance of eating a cyclist. Clearly, I should turn to the side path, notwithstanding the fact that if the tiger on the side path eats me, it will have eaten me because of my free choice to turn, while if the tiger on the main path eats me, that’s just due to my bike’s inertia. Similarly, then, if the vaccine is truly costless (i.e., no inconvenience, no pain, etc.), and it decreases my chance of death from 4% to 3.99999999% (that’s roughly what a one-person difference worldwide translates to), I should go for it.
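As a quick check of that last figure, with the same assumed population of about 7.9 billion:

```python
# Quick check of the "roughly one person worldwide" figure, using an assumed
# world population of about 7.9 billion.
POPULATION = 7.9e9
print(POPULATION * (0.04 - 0.0399999999))  # ≈ 0.79, i.e., roughly one person
```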

So, in the case where the vaccine kills only one fewer person than the disease would have killed, from a first-person view, I get the intuition that I should get the vaccine. From a third-person view, I get the intuition that the vaccine shouldn’t be promoted. Perhaps the two intuitions can be made to fit together: perhaps the costless vaccine that kills only one fewer person should not be promoted, but the facts should be made public and the vaccine should be made freely available (since it is costless) to anyone who asks for it.

This suggests an interesting distinction between first-person and third-person decision-making. The doing/allowing distinction, which favors evils not of our causing over evils of our causing even when the latter are non-intentional, seems more compelling in third-person cases. And perhaps one can transform third-person cases into something more like first-person ones through unencouraged informed consent.

(Of course, in practice, nothing is costless. And in a case where there is such a slight difference in danger as 4% vs. 3.99999999%, the costs are going to be the decisive factor. Even in my tiger case, if we construe it realistically, the effort and risk of making a turn on a hill will override the probabilistic benefits of facing the slightly less hungry tiger.)

Wednesday, April 21, 2021

Is it prudent to start drinking alcohol?

Here is an interesting exercise in decision theory.

Suppose (as is basically the case for me, if one doesn’t count chocolates with alcohol or the one or two spoonfuls of wine that my parents gave me as a kid to allay my curiosity) I am a man who has never drunk alcohol. Should I start?

Well, 6.8% of American males 12 and up suffer from alcoholism, and 83% of American males 12 and up report having drunk alcohol. Assuming that essentially all alcoholics have drunk, it follows that the probability of developing alcoholism given drinking is around 6.8%/83% ≈ 8%, while the probability of developing alcoholism without drinking is around 0%.
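Spelled out as a back-of-the-envelope calculation (assuming, as seems safe, that essentially all alcoholics have drunk alcohol):

```python
# Back-of-the-envelope conditional probability, assuming essentially all
# alcoholics have drunk alcohol, so that P(alcoholism & drank) ≈ P(alcoholism).
p_alcoholism = 0.068  # fraction of American males 12+ with alcoholism
p_ever_drank = 0.83   # fraction of American males 12+ who have drunk alcohol

p_alcoholism_given_drinking = p_alcoholism / p_ever_drank
print(f"P(alcoholism | drinking) ≈ {p_alcoholism_given_drinking:.1%}")  # ≈ 8.2%
```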

Family history might provide one with some reason to think that in one’s own case the statistics on developing alcoholism would be more pessimistic or more optimistic, but let’s suppose that family history does not provide significant data one way or another.

So, the question is: are the benefits of drinks containing alcohol worth an 8% chance of alcoholism?

Alcoholism is a very serious side-effect. It damages people’s moral lives in significant ways, besides having serious physical health repercussions. The moral damage is actually more worrying to me than the physical health repercussions, because of both the direct harms to self and the indirect harms to others, but the physical health considerations are easier to quantify. Alcoholism reduces life expectancy by about 30%. Thus, developing alcoholism is like getting a 30% chance of instant death at a very young age. Hence, the physical badness of facing an 8% chance of developing alcoholism is like a young child’s facing a 2.4% chance of instant death.
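As a quick sketch of that equivalence, using the rough figures above:

```python
# Rough risk-equivalence sketch: treat a 30% cut to life expectancy as roughly
# a 30% chance of instant death at a young age.
p_alcoholism = 0.08          # chance of alcoholism if one starts drinking
life_expectancy_loss = 0.30  # proportional loss of life expectancy

equivalent_death_risk = p_alcoholism * life_expectancy_loss
print(f"≈ {equivalent_death_risk:.1%} chance of instant death")  # ≈ 2.4%
```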

At this point, I think, we can get some intuitions going. Imagine that a parent is trying to decide whether their child should have an operation that has about a 2.4% chance of death. There definitely are cases where it would be reasonable to go for such an operation. But here is one that is not. Suppose that the child has a condition that makes them unable to enjoy any food that has chocolate in it—the chocolate is harmless to them, but it renders the food containing it pleasureless. There is a childhood operation that can treat this condition, but about one in forty children who have the operation die on the operating table. It seems to me clear that parents should refuse this operation and doctors should not offer it, despite the fact that chocolate has great gustatory pleasures associated with it. Indeed, I think it is unlikely that the medical profession would approve of the operation.

But I doubt that alcohol’s morally legitimate pleasures exceed those of chocolate.

I said that the moral ills of alcoholism are larger than the physical, but harder to quantify. Still, we can say something. If the physical badness of an 8% chance of alcoholism is like a young child’s facing a 2.4% chance of instant death, and there is a worse moral effect, it follows that the overall badness of an 8% chance of alcoholism is at least as bad as a young child’s facing a 5% chance of instant death. And with a one in twenty chance of death, there are very few operations we would be willing to have performed on a child or ourselves. The operation would have to correct a very dangerous or a seriously debilitating condition. And alcohol does neither.

This suggests to me that if the only information one has is that one is male, the risk of alcoholism is sufficient that the virtue of prudence favors not starting to drink. If one is female, the risk is smaller, but it still seems to me to be sufficiently large for prudence to favor not starting to drink.

There are limitations of the above argument. If one has already started drinking, one may have additional data that goes beyond the base rates of alcoholism—for instance, one may know that in twenty years of drinking, one has not had any serious problems with moderation, in which case the argument does not apply (but of course one might also have data that makes alcoholism a more likely outcome than at the base rate). Similarly, one might have data from family history showing that the danger of alcoholism is smaller than average or from one’s own personal history showing that one lacks the “addictive personality” (but in the latter case, one must beware of self-deceit).

I am a little suspicious of the above arguments because of the Church’s consistent message, clearly tied to Jesus’s own practice, that the drinking of alcohol is intrinsically permissible.

It may be that I am overly cautious in thinking which degree of risk prudence bids us to avoid. Perhaps one thing to say is that while there are serious reasons of prudence not to start drinking, I may be underestimating the weight of the benefits of drinking.

I also think the utilities were different in the past. If spices and chocolate are unavailable or prohibitively expensive, wine might be the main gustatory pleasure available to one, and so the loss of gustatory pleasure would be a more serious loss. Likewise, alcoholic drinks may have health benefits over unsafe drinking water. Finally, even now, one might live in a cultural setting where there are few venues for socialization other than over moderate alcoholic consumption.

Of course, in my own case there are also hedonic reasons not to drink alcoholic drinks: the stuff smells like a disinfectant.

Is it permissible to fix cognitive mistakes?

Suppose I observe some piece of evidence, attempt a Bayesian update of my credences, but make a mistake in my calculations and update incorrectly. Suppose that by luck, the resulting credences are consistent and satisfy the constraint that the only violations of regularity are entailed or contradicted by my evidence. Then I realize my mistake. What should I do?

The obvious answer is: go back and correct my mistake.

But notice that going back and correcting my mistake is itself a transition between credences that does not follow the Bayesian update rule, and hence is itself a violation of standard Bayesian norms.

To think a bit more about this, let’s consider how this plays out on subjective and objective Bayesianisms. On subjective Bayesianism, the only rational constraints are consistency, the Bayesian update rule, and perhaps the requirement that the only violations of regularity are entailed or contradicted by my evidence. My new “mistaken” credences would have been right had I started with other consistent and regular priors. So there is nothing about my new credences that makes them in themselves rationally worse than the ones that would have resulted had I done the calculation right. The only thing that went wrong was the non-Bayesian transition. And if I now correct the mistake, I will be committing the rational sin of non-Bayesian transition once again. I have no justification for that.
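A toy numerical illustration of that last point (the likelihoods and the slip are made up): a botched calculation can land on a credence that would have been the correct posterior from a different consistent prior.

```python
# Toy illustration (all numbers invented): a mistaken update can still yield a
# coherent credence, namely one that a different prior would have licensed.

def bayes_posterior(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Standard conditionalization: P(H | E)."""
    p_e = prior_h * likelihood_e_given_h + (1 - prior_h) * likelihood_e_given_not_h
    return prior_h * likelihood_e_given_h / p_e

prior = 0.5
correct = bayes_posterior(prior, 0.8, 0.2)  # = 0.8
mistaken = 0.6  # suppose an arithmetic slip lands here instead

# The mistaken value is still coherent: it is (to within rounding) the posterior
# one would have gotten from a prior of about 0.273 with the same likelihoods.
print(round(bayes_posterior(0.273, 0.8, 0.2), 3))  # ≈ 0.6
```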

Moreover, the standard arguments for Bayesian update apply just as much now in my new “mistaken” state: if I go back and correct my mistake, I will be subject to a diachronic Dutch Book, etc.

So, I should just stick to my guns, wherever they now point.

This seems wrongheaded. It sure seems like I should go back and fix my mistake. This, I think, shows that there is something wrong with subjective Bayesianism.

What about objective Bayesianism? Objective Bayesianism adds to the consistency, update and (perhaps) regularity restrictions in subjective Bayesianism some constraints on the original priors. These constraints may be so strict that only one set of original priors counts as permissible, or they may be permissive enough to allow a range of original priors. Now note that the standard arguments for Bayesian update still apply. It looks, thus, like correcting my mistake will be adding a new rational sin to the books. And so it seems that the objective Bayesian also has to say that the mistake should not be fixed.

But this was too quick. For it might be that my new “mistaken” posteriors are such that given my evidential history they could not have arisen from any permissible set of original priors. If so, then it’s like my being in possession of stolen property—I have posteriors that I simply should not have—and a reasonable case can be made that I should go back and fix them. This fix will violate Bayesian update. And so we need to add an exception to the Bayesian update rules: it is permissible to engage in a non-Bayesian update in order to get to a permissible credential state, i.e., a credential state that could have arisen from a permissible set of priors given one’s evidential history. This exception seems clearly right. For imagine that you are the mythical Bayesian agent prior to having received any evidence—all you have are your original priors, and no evidence has yet shown up. Suddenly you realize that your credences violate the objective rules on what the priors should be. Clearly you should fix that.

Thus, the objective Bayesian does have some room for justifying a “fix mistakes” exception to the Bayesian update rule. That exception will still violate the standard arguments for Bayesian update, and so we will have to say something about what’s wrong with those arguments—perhaps the considerations they give, while having some force, do not override the need for one’s credences to be such that they could be backtracked to permissible original priors.

Considerations of mistakes gives us reasons to prefer objective Bayesianism to subjective Bayesianism. But the objective Bayesian is not quite home free. Consider first the strict variety where there is only one permissible set of original priors. We have good empirical reason to think that there are about as many sets of original priors as there are people on earth. And on the strict version of objective Bayesianism, at most one of these sets of original priors is permissible. Thus it’s overwhelmingly unlikely that my original priors are permissible. Simply fixing my last mistake is very unlikely to move me to a set of posteriors that are correct given the unique set of permissible original priors and my evidential history. So it’s a matter of compounding one rational sin—my mistake—with another, without fixing the underlying problem. Maybe I can have some hope that fixing the mistake gets me closer to having posteriors that backtrack to the unique permissible original priors. But this is not all that clear.

What about permissive objective Bayesianism? Well, now things depend on our confidence that our original priors were in fact permissible and that no priors that generate our new “mistaken” posteriors given our evidential history would have been permissible. If we have a high enough confidence in that, then we have some reason to fix the mistake. But given the obvious fact that human beings so often reason badly, it seems unlikely that my original priors were in fact permissible—if Bayesianism is objective, we should believe in the “original cognitive sin” of bad original priors. Perhaps, just as I speculated on strict objective Bayesianism, we have some reason to hope that our actual original priors were closer to permissible than any priors that would generate our new “mistaken” posteriors. Perhaps.

So every kind of Bayesian has some difficulties with what to do given a miscalculation. Objective Bayesians have some hope of having an answer, but only if they have some optimism in our actual original priors being not too far from permissibility.

It is interesting that the intuition that we should fix our “mistaken” posteriors leads to a rather “Catholic” view of things: although doubtless there is original cognitive sin in our original priors, these priors are sufficiently close to permissibility that cognitive repairs make rational sense. We have depravity of priors, but not total depravity.

Monday, April 19, 2021

How I learned to be a bit less judgmental about social distancing

Earlier in the pandemic, I was very judgmental of students hanging around outside in groups and not respecting six-foot spacing. Fairly quickly I realized that it is inadvisable for the university to rebuke students for doing this, since such rebukes are likely to lead to their taking such interactions to private indoor venues, which would be much worse from a public health standpoint. But that practical consideration did not alleviate my strong judgmental feelings.

Eventually, however, these observations made me realize that in our species there is a natural desire to spend time in relatively close physical proximity to each other. And indeed, this is quite unsurprising in warm-blooded social animals. Realizing that social distancing—however rationally necessary—requires people to go against their natural instincts has made me quite a bit less judgmental about noncompliance.

It took observing others to realize this, because apart from practicalities, I find myself preferring something like two-meter spacing for social interaction with people outside my family. Greater physical distance from people outside my family has been quite pleasant for me. A conversation at two meters feels a little bit less stressful than at one meter. But looking at other people, it is evident that my preference here is literally unnatural, and that for other people such distancing is quite a burden.

Of course, sometimes it is morally necessary to go against one’s natural desires. It is natural to flee fires, but fire fighters need to go against that desire. And in circumstances where it is morally necessary to go against natural desires, people like me who lack the relevant natural desires are particularly fortunate and should not be judgmental of those for whom the actions are a burden.

Desires for another's action

Suppose that Alice is a morally upright officer fighting against an unjust aggressor in a bloody war. The aggressor’s murderous acts include continual slaughter of children. Alice has sent Bob for a mission behind enemy lines. Bob’s last message said that Bob has found a way to end the war. The enemy has been led to war by a regent representing a three-year-old king. If the three-year-old were to die, the crown would pass to a peaceloving older cousin who would immediately end the war. And Bob has just found a way to kill the toddler king. Moreover, he can do it in such a way that it looks like it is a death of natural causes and will not lead to vengeful enemy action.

Alice responds to the message by saying that the child-king is an innocent noncombatant and that she forbids killing him as that would be murder. It seems that Alice now has two incompatible desires:

  • that Bob will do the right thing by refraining from murdering the child, and

  • that Bob will assassinate the child king, thereby preventing much slaughter, including of children.

And there is a sense in which Alice wants the assassination more than she wants Bob to do the right thing. For what makes the assassination undesirable—the murder of a child—occurs in greater numbers in the no-assassination scenario.

But in another sense, it was the desire to have Bob do the right thing that was greater. For that was the desire that guided Alice’s action of forbidding the assassination.

What should we say?

Here is a suggestion: Alice desires that Bob do the right thing, but Alice wishes that Bob would assassinate the king. What Alice desires and what Alice wishes for are in this case in conflict.

And here is a related question. Suppose someone you care about wants you to do one thing but wishes you to do another. Which should you do?

In the above case, the answer is given by morality: assassinating the three-year-old king is wrong, no matter the consequences. And considerations of authority concur. But what if we bracket morality and authority, and simply ask what Bob should do insofar as he cares about Alice, who is his friend. Should he follow Alice’s desires or her wishes? I think this is not so clear. On the one hand, it seems more respectful to follow someone’s desires. On the other hand, it seems more beneficent to follow someone’s wishes.

Saturday, April 17, 2021

Regular Hyperreal and Qualitative Probabilities Invariant Under Symmetries

I just noticed that my talk "Regular Hyperreal and Qualitative Probabilities Invariant Under Symmetries" is up on YouTube. And the paper that this is based on (preprint here) has just been accepted by Synthese.



Friday, April 16, 2021

Obedience to God out of gratitude

Some philosophers want to ground our duty to obey God’s commands in the need to show gratitude to God for all the goods God has done for us. I think there is something to that, but I want to point out a complication.

When someone has done something good for us, that generates moral reasons to do something good for them. But that is different from generating moral reasons to obey their commands. After all, being obeyed need not be good for the person issuing the commands. Imagine that you are on a ship along with someone who has already done many good things for you and your family. The ship is sinking. There is one last space left in a lifeboat. You start to push your benefactor into that space. Your benefactor interrupts: “Don’t! Get into it yourself!” In this case, it would be bad for your benefactor for you to obey their command, and obedience to this command would not, I think, be a right expression of gratitude. You might have other moral reasons to obey the command, such as that if someone is offering to make the ultimate sacrifice, you should not deprive them of that choice. But gratitude for past benefits is not a reason to obey your benefactor when your benefactor would not in fact benefit from your obedience.

So the mere fact that you are commanded something by your benefactor does not generate a gratitude-based reason for obedience. It is only when your benefactor would benefit from your obedience that such a reason is generated.

Now things get a little complicated in the case of God. It seems we cannot benefit God, as God has perfect beatitude. But as we learn from Aquinas’ discussion of love for God, it’s more complicated than that. There are internal and external benefits and harms a person might receive. Internal benefits and harms affect the person’s intrinsic properties—think here of pleasure and pain, virtue and vice, etc. But there are external benefits and harms: when people speak badly about you behind your back, the loss of reputation is an external harm, even if you never find out about it and it never affects any intrinsic property of yours. Similarly, because friends are other selves, if x loves y, then benefits and harms to y are benefits and harms to x, albeit perhaps only external ones. Thus, we can benefit God by benefiting those that God loves, namely everyone.

So, when God commands us—as in fact he does—to love our neighbor, then our obedience to that command does benefit God, albeit externally. So it seems we do have a gratitude-based moral reason to do what God says. But notice that as far as the argument goes right now, that is not a reason to love our neighbor out of obedience to God. The reason to love our neighbor out of gratitude to God would remain even if God did not command us to love our neighbor.

But this isn’t the whole story. For, first, when we show gratitude to a benefactor by bestowing some internal or external benefits on them, gratitude seems to call on us to have a preference for bestowing those benefits that the benefactor asks us to bestow. Thus, the fact that some benefit to the benefactor is requested by the benefactor adds to the gratitude-based reasons for bestowing that benefit. And, second, it seems plausible that having one’s commands be obeyed is itself an external benefit—it is a way of being honored.

Thus, that God has commanded us to love our neighbor intensifies our reasons based on gratitude to God to love our neighbor. And even if God were to command something seemingly arbitrary and not in itself beneficial, like abstinence from pork, we externally benefit God insofar as we obey him. But in the latter case something stronger is needed than that: for the obedience to be a form of gratitude, it needs to be the case that on balance we benefit God through the obedience. That being obeyed is good as far as it goes does not show that being obeyed is on balance good. It is good as far as it goes for our benefactor to be obeyed when they say to go into the lifeboat, but it might actually on balance be better for them to be pushed into the lifeboat.

Thus there is a limit to how far this justification of divine authority goes. If we obey God out of gratitude, our reason for obedience cannot simply consist in the facts that God has commanded us and that God has bestowed great benefits on us. Our reason would also need to include the fact that on balance it bestows a benefit—an external one, to be sure—on God if we obey.

Is it the case that whenever God commands something, it always bestows a benefit on balance on God that we obey him? That initially sounds like a reasonable thesis, but we can easily imagine cases where it is not so. Consider cases where my disobedience to God would prevent massive disobedience by others. For instance, a malefactor offers me a strong temptation to disobey God, and tells me that if I refuse the temptation, then a thousand other people will be offered the same temptation, but if I give in, I will be the only one. Looking at how strong the temptation is, I conclude that if it’s offered to a thousand people, about 500 of them will succumb to it. Thus, if I obey God, there will be much less obedience of God in the world. Hence, my obedience to God actually leads to God being less honored and receiving less external benefit. But nonetheless I need to obey. Hence, the duty of obedience is not grounded in gratitude.

In summary: Normally, gratitude does give us moral reason to obey God. But if I am right, then the moral reason to obey God that comes from gratitude needs to include the assumption that it is good for God to be obeyed. And we can imagine cases where that assumption is false and yet obedience is still required.

Thursday, April 15, 2021

An exercise in vacuity

I’ve been thinking what an Aristotelian would say about the thesis that the normative supervenes on the non-normative. That thesis holds that:

  1. any two possible worlds that have the same non-normative facts have the same normative facts.

So, the first question I asked myself was: What sorts of non-normative facts are there on the Aristotelian view of the world? First, every fact involving a natural kind is normative, since natural kind concepts are normative concepts—sheep are the sorts of things that should have four legs (among other things) and electrons are the sorts of things that should repel other electrons (among other things). Second, because natural kind membership is essential, it seems that any facts about particulars will also be normative, since all particulars other than God (and by divine simplicity there are no non-normative facts about God) are essentially members of natural kinds. Could there at least be some non-normative facts about how many objects exist? I don’t think so. For to exist is to be a substance, or to be related appropriately to a substance (say, by being an accident of a substance). But a part of what it is to be a substance is to have a form which governs how one ought to behave, and that’s normative. So existential facts are normative, too.

In fact, it’s looking to me like there are no non-normative facts on an Aristotelian view. Hence, if (1) were to hold, we would have to have:

  2. any two possible worlds have the same normative facts.

But since all facts are normative, it would follow that:

  3. any two possible worlds have the same facts.

But since possible worlds are distinguished by their facts, we see that on an Aristotelian view, the supervenience thesis basically says:

  4. there is only one possible world.

And that’s false.

Wednesday, April 14, 2021

Aquinas and Descartes on substance dualism

Roughly, Aquinas thinks of a substance as something that:

  1. is existentially independent of other things, and

  2. is complete in its nature.

There is a fair amount of work needed to spell out the details of 1 and 2, and that goes beyond my exegetical capacities. But my interest is in structural points. Things that satisfy (1), Aquinas calls “subsistent beings”. Thus, all substances are subsistent beings, but the converse is not true, because Aquinas thinks the rational soul is a subsistent being and not a substance.

Descartes, on the other hand, understands substance solely in terms of (1).

Now, historically, it seems to be Descartes and not Thomas who set the agenda for discussions of the view called “substance dualism”. Thus, it seems more accurate to think of substance dualists as holding to a duality of substance in Descartes’ sense of substance than in Aquinas’.

But if we translate this to Thomistic vocabulary, then it seems we get:

  1. A “substance dualist” in the modern sense of the term is someone who thinks there are two subsistent beings in the human being.

And now it looks like Aquinas himself is a substance dualist in this sense. For Aquinas thinks that there are two subsistent beings in Socrates: one of them is Socrates (who is a substance in the Thomistic sense of the word) and the other is Socrates’ soul (which is a merely subsistent being). To make it sound even more like substance dualism, note that Thomas thinks that Socrates is an animal and animals are bodies (as I have learned from Christopher Tomaszewski, there are two senses of body: one is for the material substance as a whole and the other is for the matter; it is body in the sense of the material substance that Socrates is, not body in the sense of matter). Thus, one of these subsistent beings or substances-in-the-Cartesian-sense is a body and the other is a soul, just as on standard Cartesian substance dualism.

But of course there are glaring differences between Aquinas’ dualism and typical modern substance dualisms. First, and most glaringly, one of the two subsistent beings or Cartesian substances on Aquinas’s view is a part of the other: the soul is a part of the human substance. On all the modern substance dualisms I know of, neither substance is a part of the other. Second, of the two subsistent beings or Cartesian substances, it is the body (i.e., the material substance) that Aquinas identifies Socrates with. Aquinas is explicit that we are not souls. Third, for Aquinas the body depends for its existence on the soul—when the soul departs from the body, the body (as body, though perhaps not as matter) perishes (while on the other hand, the soul depends on the matter for its identity).

Now, let’s move to Descartes. Descartes’ substance dualism is widely criticized by Thomists. But when Thomists criticize Descartes for holding to a duality of substances, there is a danger that they are understanding substance in the Thomistic sense. For, as we saw, if we understand substance in the Cartesian sense, then Aquinas himself believes in a duality of substances (but with important structural differences). Does Descartes think there is a duality of substances in the Thomistic sense? That is not clear to me, and may depend on fine details of exactly how the completeness in nature (condition (2) above) is understood. It seems at least in principle open to Descartes to think that the soul is incomplete in its nature without the body or that the body is incomplete in its nature without the soul (the pineal gland absent the soul sure sounds incomplete) or that each is incomplete without the other.

So, here is where we are at this point: When discussing Aquinas, Descartes and substance dualism we need to be very careful whether we understand substance in the Thomistic or the Cartesian sense. If we take the Cartesian sense, both thinkers are substance dualists. If we take the Thomistic sense, Aquinas clearly is not, but it is also not clear that Descartes is. There are really important and obvious structural differences between Thomas and Descartes here, but they should not be seen as differences in the number of substances.

And here is a final exegetical remark about Aquinas. Aquinas’ account of the human soul seems carefully engineered to make the soul be the sort of thing—namely, a subsistent being—that can non-miraculously survive in the absence of the substance—the human being—that it is normally a part of. This makes it exegetically probable that Aquinas believed that the soul does in fact survive in the absence of the human being after death. And thus we have some indirect evidence that, in contemporary terminology, Aquinas is a corruptionist: that he thinks we do not survive death though our souls do (but we come back into existence at the resurrection). For if he weren’t a corruptionist, his ontology of the soul would be needlessly complex, since the soul would not need to survive without a human being if the human being survived death.

And indeed, I think Aquinas’s ontology is needlessly complex. It is simpler to have the soul not be a subsistent being. This makes the soul incapable of surviving death in the absence of the human being. And that makes for a better view of the afterlife—the human being survives the loss of the matter, and the soul survives but only as part of the human being.

Tuesday, April 13, 2021

A metaphysical argument for survivalism

Corruptionist Thomists think that after death and before the resurrection, our souls exist in a disembodied state and have mental states, but we do not exist. For we are not our souls. Survivalist Thomists think we continue to exist between death and the resurrection. They agree that we are not our souls, but tend to think that in the disembodied state we have our souls as proper parts.

Here is a metaphysical argument against corruptionism and for survivalism.

  1. An accident that has a subject is a part of that subject.

  2. There are mental state accidents in the disembodied state.

  3. All mental state accidents in the disembodied state have a subject.

  4. The soul does not have accidents as parts.

  5. Therefore, the mental state accidents in the disembodied state have something other than the soul as their subject.

  6. The only two candidates for a subject of mental state accidents are the soul and the person.

  7. Therefore, the mental state accidents in the disembodied state have the person as their subject.

  8. Therefore, the person exists in the disembodied state.

(This argument is a way of turning Jeremy Skrzypek’s accident-based defense of survivalism into a positive argument for survivalism. Maybe Skrzypek has already done this, too.)

The argument is slightly complicated by the fact that Thomists accept the possibility of subjectless accidents existing miraculously (in the Eucharist). Nonetheless, I do not know of any Thomists who think the disembodied state is such a miracle. Given that Thomists generally think that the survival of the soul after death is not itself miraculous, they are unlikely to require the miracle of subjectless accidents in that case, and hence will accept premise 3.

Premise 2 is common ground between survivalists and corruptionists, as both agree that there is suffering in hell and purgatory and joy in heaven even in the disembodied state.

I think the controversial premises are 1 and 4. I myself am inclined to deny the conjunction of the two premises (even though I think survivalism is true for other reasons).

Premise 1 is a core assumption of compositional metaphysics, and compositional metaphysics is one of the main attractions of Thomism.

One reason to accept premise 4 is that the soul is the form of the human being, and one of the main tasks for forms in Aristotelian metaphysics is to unify complex objects. But if forms are themselves complex, then they are also in need of unification, and we are off on a regress. So forms should be simple, and in particular should not have accidents as parts.

Another reason to accept 4 is that if the soul or form has mental state accidents as parts, it becomes very mysterious what else the form is made of besides these accidents. Perhaps there is the esse or act of being. But it seems wrong to think of the form as made of accidents and esse. (I myself reject the idea that objects are “made of” their parts. But the intuition is a common one.)

Monday, April 12, 2021

Pascal's wager and decision theory

From time to time I find myself musing whether Pascal’s Wager doesn’t simply completely destroy ordinary probabilistic decision theory. Consider an ordinary decision, such as whether to walk or bike to work. There are various perfectly ordinary considerations in favor of one or the other. Biking is faster and more fun, but walking is safer and provides more opportunity for thought.

But in addition to all these, there are considerations having to do with one’s eternal destiny. It is hard to deny that there is a positive probability that we will have an eternal afterlife and that our daily choices will affect whether this afterlife is happy or miserable. But even tiny differences in the probability of eternal happiness infinitely swamp all the ordinary considerations in the decision whether to walk or bike. If the opportunity for more leisurely reflection afforded by walking even slightly increases one’s chance at eternal happiness, that infinite contribution to expected utility completely overcomes all the ordinary considerations. But on the other hand, biking would allow one to arrive at work earlier, and thereby take on a larger share of work burdens, which would lead to growth in virtue, and increase chances of eternal happiness. So in the end, it seems, many of our ordinary everyday decisions end up turning into exercises in balancing tiny differences in the probability of eternal joy, as these swamp all the other ordinary considerations. And that seems wrong.
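A toy calculation makes the swamping vivid (every number is invented, and an astronomically large finite payoff stands in for the infinite one):

```python
# Toy sketch of the swamping worry: any nonzero difference in the probability
# of the huge payoff drowns out the ordinary finite considerations.
HEAVEN = 1e15  # stand-in for the utility of eternal happiness

def expected_utility(ordinary_value, p_heaven):
    return ordinary_value + p_heaven * HEAVEN

walk = expected_utility(ordinary_value=5.0, p_heaven=0.300000001)
bike = expected_utility(ordinary_value=20.0, p_heaven=0.300000000)

print(walk > bike)  # True: a one-in-a-billion shift in p_heaven swamps the finite difference
```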

One move here is to say that the question of how ordinary approximately morally neutral decisions affect the afterlife is one that we have so little information on that we should bracket the infinities, and just focus on the finite stuff we know about. But on the other hand, does that make sense? After all, perhaps we should put all our mental energies into figuring out this stuff that we have so little information on, as the infinities in the utilities swamp everything else?

Decisions in heaven

Suppose I will live forever in heaven, and I have two infinite decks of cards. Each card specifies the good things that will happen to me over the next day. Every card in the left deck provides a hundred units of goods. Every card in the right deck provides a thousand units of goods.

Each day I get to draw the top card from a deck I choose and then I get the specified goods.

Consider three of the strategies I could opt for:

  1. Always draw from the left deck.

  2. Always draw from the right deck.

  3. Alternate between decks.

Clearly, strategy 1 is not a good idea, so let’s put that aside.

There is an obvious argument for preferring 2 to 3. If I opt for strategy 2, then every other day I will be much better off than on strategy 3, and on the other days I will be at least as well off as on strategy 3.

But there is also an argument for preferring 3 to 2: on option 3, over the course of eternity, I get all the goods from both decks.

Moreover, even if one does not buy the argument that option 3 is better than option 2, it seems no worse: for while on option 3, the greater goods of the right deck get delayed more, a good is no less valuable for being pushed further off into the future.
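Here is a small simulation sketch of the comparison (the horizons are arbitrary): strategy 2 is never behind at any finite time and pulls ahead from the second day on, even though over the whole infinite future strategy 3 draws every card from both decks.

```python
# Cumulative goods over the first n days under strategies 2 and 3.
LEFT, RIGHT = 100, 1000  # goods per card in each deck

def cumulative_goods(strategy, days):
    total = 0
    for day in range(days):
        if strategy == "always_right":
            total += RIGHT
        else:  # alternate between decks, starting with the right deck
            total += RIGHT if day % 2 == 0 else LEFT
    return total

for n in (1, 10, 1000):
    print(n, cumulative_goods("always_right", n), cumulative_goods("alternate", n))
```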

Friday, April 9, 2021

Punishment, criticism and authority

It is always unjust to punish without the right kind of authority over those that one punishes.

Sometimes that authority may be given to us by them (as in the case of a University’s authority over adult students, or maybe even in the case of mutual authority in friendship) and sometimes it may come from some other relationship (as in the case of the state’s authority over us). But in any case, such authority is sparse. The number of entities and persons that have this sort of authority over us is several orders of magnitude smaller than the number of people in society.

This means that typically when we learn that someone is behaving badly, we do not have the authority to punish them. I wonder what this does or does not entail.

Clearly, it does not mean that we are not permitted to criticize them. Criticism as such is not punishment, but the offering of evaluative information. We do not need any authority to state a truth to a random person (though there may be constraints of manners, confidentiality, etc.), including an evaluative truth. But what if that truth is foreseen to hurt? If it is merely foreseen but not intended to hurt, this is still not punishment (it’s more like a Double Effect case). But what if it is also intended to hurt?

Well, not every imposition of pain is a punishment. Nor does every imposition of pain require authority. Suppose I see that you are asleep a hundred meters from me, and I see a deadly snake, for whose bite there is no cure, approaching you. I pull out an air rifle and shoot you in the leg, intending to cause you pain that wakes you up and allows you to escape the snake. Likewise, it could be permissible to offer intentionally hurtful criticism in order to change someone’s behavior without any need for authority (though it may not be often advisable).

But there is a difference between imposing a hurt and doing so punitively. In the air rifle case, the imposition of pain is not punitive. But in the case of criticism, it is psychologically very easy to veer from imposing the criticism for the sake of reformation to a retributive intention. And to impose pain retributively—even in part, and even by truthful words—without proper authority is a violation of justice.

There are two interesting corollaries of the above considerations.

First, we get an apparently new argument against purely reformatory views of punishment. For it seems that the imposition of pain through accurate criticism in order to reform someone’s behavior would count as punishment on a purely reformatory view, and hence would have to require proper authority (unless we deny the thesis I started with, that punishment without authority is unjust).

Second, we get an interesting asymmetry between punishment and reward that I never noticed before. There is nothing unjust about rewarding someone whom we have no authority over when they have done a good thing (though in particular cases it could violate manners, be paternalistic, etc.) In particular, there need be nothing wrong with what one might call retributive praise even in the absence of authority: praise intended to give a pleasure to the person praised as a reward for their good deeds. But for punishment, things are different. This is no surprise, because in general harsh treatment is harder to justify than pleasant treatment.

Wednesday, April 7, 2021

What does it mean for persons to have infinite value?

It is intuitive to say that persons have infinite value, and recently Rasmussen and Bailey have given some cool arguments for this thesis.

But what does it mean to say that humans have infinite value? If we think of values as something very much like numbers, then I guess it just means that humans have the value +∞. But we shouldn’t think of values as numbers. For instance, doing so loses sight of incommensurability.

We probably should think of value-comparison as more fundamental than “having value z”. Thus, there is a relation of being at least as valuable as on possible items of evaluation (substances, properties, pluralities, whatever). This relation is reflexive and at least arguably transitive.

We can now define:

  1. x is more valuable than y if and only if x is at least as valuable as y but y is not at least as valuable as x.

Next we can try to define a relation of being infinitely more valuable than. One approach is:

  2. x is infinitely more valuable than y if and only if x is more valuable than any finite plurality of duplicates of y.

I am not quite sure this works, given that sometimes the value of an item rests in the fact that it is the only one of its kind, and then a plurality of duplicates might lose out on an aspect of the value. If we focus on intrinsic value, perhaps we don’t need to worry about this. Or maybe we can proceed probabilistically:

  3. x is infinitely more valuable than y if and only if for every natural number n, a 1/n chance of x is more valuable than certainty of y.

Or perhaps we can take being infinitely more valuable than as a primitive transitive and irreflexive relation.

But now, if what we have are the above ingredients, what does it mean to say that something has infinite value? Here are two options, a maximal and a minimal one:

  4. Maximal: x has infinite value if and only if x is infinitely more valuable than everything else.

  5. Minimal: x has infinite value if and only if x is infinitely more valuable than something else that has positive value.

On the maximal option 4, you and I do not have infinite value, since you are not infinitely more valuable than I and I am not infinitely more valuable than you. Indeed, only a being like God is a plausible candidate for having infinite value in the maximal sense.

In the minimal option 5, “has positive value” is added to avoid the potential problem that literally everything that has positive value has infinite value, because anything with positive value is infinitely more valuable than something with no value or with negative value. What does it mean for something to have positive value? I guess it’s for it to be more valuable than nothing. (I am using a very broad sense of “item”, including such “items” as “nothing”, when I talk of value in this post.)

But option 5 probably doesn’t capture the intuition that infinite value distinguishes persons from, say, trees. For while arguably a person is infinitely more valuable than a tree, it is also quite plausible to me that a tree is infinitely more valuable than some non-living things like fundamental particles. Or if you don’t share that intuition, suppose eternalism. Then a tree that exists for a year could be infinitely more valuable than a tree that exists for an instant, since there could turn out to be infinitely many instants in a year.

In any case, whether or not these speculations about the value of trees are right, the important point is that the intuition we were trying to capture with the statement that persons have infinite value was that persons have a lot of value. But having infinitely more value than something of positive value could just mean that you have infinitely more value than something of infinitesimally positive value, which is compatible with not having much value at all.

If the above is right, then it’s false or unhelpful to talk of persons having infinite value simpliciter. What may make sense, however, are specific comparisons such as:

  6. A person has infinitely more value than a dollar

or:

  7. A person has infinitely more value than a tree.

We might try for something more daring, though:

  8. A person has infinitely more value than any non-person.

I think (8) if true would capture a fair amount of the original intuition, and do so without any arbitrary singling out of a unit of comparison like a dollar or a tree. But I do not know if (8) is true. There could be kinds of good that we have no concept of, and those kinds of good could be at least incommensurable with the goods of persons. Something with such a good need not be infinitely less valuable than a person—they might be mutually incommensurable.

So, speaking for myself, I am happy with sticking to a fairly arbitrary unit, and going for something like (6) or (7).

Non-propositional representations

I used to think that it’s quite possible that all our mental representations of the world are propositional in nature. To do that, I had to have a broad notion of proposition, much broader than what we normally consider to be linguistically expressible. Thus, I was quite happy with saying that Picasso’s Guernica expresses a proposition about war, a proposition that cannot be stated in words. Similarly, I was quite fine—my Pittsburgh philosophical pedigree comes out here—with the idea that an itch or some other quale might represent the world propositionally.

That broad view of propositions still sounds right. But I am now thinking there is a different problem for propositionalism about our representational states: the problem of estimates. A lot of my representations of the world are estimates. When I estimate my height at six feet, there is a proposition in the vicinity, namely the proposition that my height is exactly six feet. But that proposition is one that I am quite confident is false. There are even going to be times when I wouldn’t want to say that my best estimate of something is approximately right—but it’s still my best estimate.

The best propositionally-based account of what happens when I estimate my height at six feet seems to me to be that I believe a proposition about myself, namely that my evidence about my height supports a probability density whose mean is at six feet. But there are two problems with this. First, the representational state now becomes a representation of something about me—facts about what evidence I have—rather than about the world. Second, and worse, I don’t know that I would stick my neck out far enough to even make that claim about evidence unequivocally—my insight into the evidence I have is limited. Moreover, even concerning evidence, what I really have is only estimates of the force of my evidence, and the problem comes back for them.

So I think that estimating is a way of representing that is not propositional in nature. Notice, though, that estimates are often well expressible through language. So on my view, linguistic expressibility (in the ordinary sense of “linguistic”—maybe there is such a thing as the “language of painting” that Picasso used) is neither necessary nor sufficient for a representation of the world to be propositional in nature.

I now wonder whether vagueness isn’t something similar. Perhaps vague sentences represent the world but not propositionally. But just as we can often—but not always—reason as if sentences expressing estimates expressed propositions, we can often reason as if vague sentences expressed propositions. The “logic” of the non-propositional representations is close enough to the logic of propositional ones—except when it’s not, but we can usually tell when it’s not (e.g., we know what sorts of gruesome inferences not to draw from the estimate that a typical plumber has 2.2 children).

Tuesday, April 6, 2021

Quasi-divinization and love

When we deeply love someone, we are apt to raise them to a quasi-divine status in our hearts.

If naturalism is right, this is misguided, for the evolved clouds of particles that are the people we love do not in fact have any quasi-divine status.

If theism is right, then this quasi-divinization could well be appropriate: for persons participate in God in such a way that they are in God’s image and likeness. But although not necessarily misguided, the quasi-divinization is dangerous, lest it cross the line into idolatry. (See C. S. Lewis’s Four Loves.)

I think that the theistic outlook on the quasi-divinization in love better fits with the plausible observation that this kind of deep love is sometimes both laudable and yet still morally dangerous, while on the naturalistic outlook, it is merely misguided.

Monday, April 5, 2021

Best estimates and credences

Some people think that expected utilities determine credences and some think that credences determine expected utilities. I think neither is the case, and want to sketch a bit of a third view.

Let’s say that I observe people playing a slot machine. After each game, I make a tickmark on a piece of paper, and if they win, I add the amount of the win to a subtotal on a calculator. After a couple of hours—oddly not having been tossed out by the casino—I divide the subtotal by the number of tickmarks and get the average payout. If I now get an offer to play the slot machine for a certain price, I will use the average payout as an expected utility and see if that expected utility exceeds the price (in a normal casino, it won’t). So, I have an expected utility or prevision. But I don’t have enough credences to determine that expected utility: for every possible payout, I would need a credence in getting that payout, but I simply haven’t kept track of any data other than the sum total of payouts and the number of games. So, here the expected utility is not determined by the credences.
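Here is the bookkeeping as a minimal code sketch (the observed payouts are made up): only a running total and a count of games are stored, and yet together they yield an expected-utility estimate.

```python
# Minimal sketch of the tally: a running total and a tickmark count suffice to
# estimate the expected payout, without credences over individual payout values.
class PayoutEstimator:
    def __init__(self):
        self.total = 0.0  # running sum of observed payouts
        self.games = 0    # tickmark count

    def observe(self, payout):
        self.total += payout
        self.games += 1

    def average_payout(self):
        return self.total / self.games

    def worth_playing(self, price):
        return self.average_payout() > price

est = PayoutEstimator()
for payout in [0, 0, 5, 0, 0, 0, 1, 0, 0, 0]:  # made-up observations
    est.observe(payout)
print(est.average_payout(), est.worth_playing(1.0))  # 0.6 False
```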

The opposite is also not true: expected utilities do not determine credences.

Now consider another phenomenon. Suppose I step on an analog scale, and it returns a number w1 for my weight. If that’s all the data I have, then w1 is my best estimate for the weight. What does that mean? It certainly does not mean that I believe that my weight is exactly w1. It also does not mean that I believe that my weight is close to w1—for although I do believe that my weight is close to w1, I also believe it is close to w1 + 0.1 lb. If I were an ideal epistemic agent, then for every one of the infinitely many possible intervals of weight, I would have a credence that my weight lies in that interval, and my best estimate would be an integral of the weight function over the probability space with respect to my credence measure. But I am not an ideal epistemic agent. I don’t actually have much of a credence for the hypothesis that my weight lies between w1 − 0.2 lb and w1 + 0.1 lb, say. But I do have a best estimate.

This is very much what happened in the slot machine case. So expected values are not the only probabilistic entity not determined by our credences. Rather, they are a special case of best estimates. The expected utility of the slot machine game is simply my best estimate at the actual utility of the slot machine game.

We form and use lots of such best estimates.

Note that the best estimate need not even be a possible value for the thing we are estimating. My best estimate of the payoff for the slot machine given my data might be $0.94, even though I might know that in fact all actual payouts are multiples of a dollar.

With this in mind, we can take credences to be nothing else than best estimates of the truth value, where we think of truth value as either 0 (false) or 1 (true). (Here, I think of the fact that the standard Polish word for probability is “prawdopodobieństwo”—truthlikeness, verisimilitude.) Just as in the case above, when my best estimate of the truth value is 0.75, I do not think the actual truth value is 0.75: I like classical logic, and think the only two possible values are 0 and 1.

Here, then, is a picture of what one might call our probabilistic representation of the world. We have lots of best estimates. Some of these are best estimates of utilities. Some are best estimates of other quantities, such as weights, lengths, cardinalities, etc. Some are best estimates of truth values. A consistent agent is one such that there exists a probability function such that all of the agent’s best estimates are mathematical expectations of the corresponding values with respect to that probability function. In particular, this probability function would extend the agent’s credences, i.e., the agent’s best estimates of truth values.
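Here is a toy model of such a consistent agent (three possible worlds, all numbers invented): a single probability function yields both best estimates of ordinary quantities and, as a special case, credences as estimates of 0/1 truth values.

```python
# Toy consistent agent: every best estimate is an expectation with respect to
# one and the same probability function over three possible worlds.
worlds = ["w1", "w2", "w3"]
prob = {"w1": 0.5, "w2": 0.3, "w3": 0.2}          # the single probability function
weight = {"w1": 179.0, "w2": 180.0, "w3": 182.0}  # my weight (lb) in each world
rain = {"w1": 1, "w2": 0, "w3": 1}                # truth value of "it will rain"

def expectation(quantity):
    return sum(prob[w] * quantity[w] for w in worlds)

best_estimate_weight = expectation(weight)  # 179.9: estimate of a quantity
credence_rain = expectation(rain)           # 0.7: estimate of a truth value
print(best_estimate_weight, credence_rain)
```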

On this picture, there is no privileging between expected utilities, credences or other best estimates. It’s just estimates all around.

Thursday, April 1, 2021

Going against currently expected utilities

Today I am making an important decision between A and B. The expected utilities of A and B depend on a large collection of empirical propositions p1, ..., pn. Yesterday, I spent a long time investigating the truth values of these empirical propositions and I calculated the expected utility of A to be much higher than that of B. However, today I have forgotten the results of my investigations into p1, ..., pn, though I still remember that A had a higher expected utility given these investigations.

Having forgotten the results of my investigations into p1, ..., pn, my credences for them have gone back to some sort of default priors. Relative to these defaults, I know that B has higher expected utility than A.

Clearly, I should still choose A over B: I should go with the results of my careful investigations rather than the default priors. Yet it seems that I also know that relative to my current credences, the expected utility of B is higher than that of A.

This seems very strange: it seems I should go for the option with the smaller expected utility here.

Here is one possible move: deny that expected utilities are grounded in our credences. Thus, it could be that I still hold a higher expected utility for A even though a calculation based on my current credences would make B have the higher expected utility. I like this move, but it has a bit of a problem: I may well have forgotten what the expected utilities of A and B were, and only remember that A’s was higher than B’s.

Here is a second move: this is a case where I now have inconsistent credences. For if I keep my credences in p1, ..., pn at their default levels, I have a piece of evidence I have not updated my credences on, namely this: the expected utility of A is higher than that of B relative to the posterior credences obtained by gathering the now-forgotten evidence. What I should do is update my credences in p1, ..., pn on this piece of evidence, and calculate the expected utilities. If all goes well—but right now I don’t know if there is any mathematical guarantee that it will—then I will get a new set of credences relative to which A has a higher expected utility than B.
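Here is a toy model of this second move (everything in it is invented, including the simplifying assumption that yesterday’s posterior was equally likely to have been one of five values): updating on the remembered fact that the informed expected utility favored A can restore A’s edge.

```python
# Toy model: I forget yesterday's posterior for p, but remember it made A beat B.
utility_A = {True: 10.0, False: 0.0}  # A pays off only if p is true
utility_B = 4.0                       # B pays off either way

def eu_A(credence_p):
    return credence_p * utility_A[True] + (1 - credence_p) * utility_A[False]

default_credence = 0.3
print(eu_A(default_credence), utility_B)  # 3.0 vs 4.0: B looks better by default

# Invented prior over what yesterday's posterior might have been. Learning that
# it made EU(A) > EU(B), i.e., that it exceeded 0.4, rules out the low values;
# the new credence in p is the mean of the surviving values.
possible_posteriors = [0.1, 0.3, 0.5, 0.7, 0.9]
survivors = [q for q in possible_posteriors if eu_A(q) > utility_B]
updated_credence = sum(survivors) / len(survivors)  # 0.7
print(eu_A(updated_credence), utility_B)  # 7.0 vs 4.0: A comes out ahead again
```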