Monday, November 29, 2021

Simultaneous causation and occasionalism

In an earlier post, I said that an account that insists that all fundamental causation is simultaneous but secures the diachronic aspects of causal series by means of divine conservation is “a close cousin to occasionalism”. For a diachronic causal series on this theory has two kinds of links: creaturely causal links that function instantaneously and divine conservation links that preserve objects “in between” the instants at which creaturely causation acts. This sounds like occasionalism, in that the temporal extension of the series is entirely due to God working alone, without any contribution from creatures.

I now think there is an interesting way to blunt the force of this objection by giving another role to creatures using a probabilistic trick that I used in my previous post. This trick allows created reality to control how long diachronic causal series take, even though all creaturely causation is simultaneous. And if created reality were to control how long diachronic causal series take, a significant aspect of the diachronicity of diachronic causal series would involve creatures, and hence the whole thing would look rather less occasionalist.

Let me explain the trick again. Suppose time is discrete, being divided into lots of equally-spaced moments. Now imagine an event A1 that has a probability 1/2 of producing an event A2 during any instant that A1 exists in, as long as A1 hasn’t already produced A2. Suppose A1 is conserved for as long as it takes to produce A2. Then the probability that it will take n units of time for A2 to be produced is (1/2)^(n+1). Consequently, the expected wait time for A2 to happen is:

  • (1/2)⋅0 + (1/4)⋅1 + (1/8)⋅2 + (1/16)⋅3 + ... = 1.

We can then similarly set things up so that A2 causes A3 on average in one unit of time, and A3 causes A4 on average in one unit of time, and so on. If n is large enough, then by the Central Limit Theorem, it is likely that the lag time between A1 and An will be approximately n units of time (plus or minus an error on the order of n^(1/2) units), and if the units of time are short enough, we can get arbitrarily good precision in the lag time with arbitrarily high probability.
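Since the trick is elementary probability, it is easy to check numerically. Here is a minimal Python sketch; the chain length, the trigger probability of 1/2, and the number of trials are all arbitrary illustrative choices on my part:

```python
import random

def chain_lag(n_links, p):
    """Total wait, in fundamental units, for a chain of n_links events,
    each with probability p of triggering the next at every instant."""
    total = 0
    for _ in range(n_links):
        while random.random() >= p:  # geometric wait at this link
            total += 1
    return total

n, trials = 10_000, 1_000
lags = [chain_lag(n, 0.5) for _ in range(trials)]
mean = sum(lags) / trials
sd = (sum((x - mean) ** 2 for x in lags) / trials) ** 0.5
print(f"mean lag {mean:.0f} (target {n}); sd {sd:.0f} (sqrt(2n) = {(2 * n) ** 0.5:.0f})")
```

With a trigger probability of 1/2, each link waits one unit on average with variance 2, so the total lag concentrates around n with a spread of about sqrt(2n) units, here roughly 1.4% of the total.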

If the probability of each event triggering the next at an instant is made bigger than 1/2, then the expected lag time from A1 to An will be less than n, and if the probability is smaller than 1/2, the expected lag time will be bigger than n. Thus the creaturely trigger probability parameter, which we can think of as measuring the “strength” of the causal power, controls how long it takes to get to An through the “magic” of probabilistic causation and the Central Limit Theorem. Thus, the diachronic time scale is controlled precisely by creaturely causation—even though divine conservation is responsible for Ai persisting until it can cause Ai+1. This is a more significant creaturely input than I thought before, and hence it is one that makes for rather less in the way of occasionalism.
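The time-scale claim is just the mean of the geometric distribution, (1 − p)/p per link. A couple of lines (with trigger probabilities chosen arbitrarily for illustration) display the direction of the effect:

```python
# Mean wait per link, in instants, for trigger probability p: (1 - p)/p.
for p in (0.4, 0.5, 0.6):
    print(f"p = {p}: mean wait per link = {(1 - p) / p:.2f}")
# p = 0.4: 1.50 (total lag > n); p = 0.5: 1.00 (lag ≈ n); p = 0.6: 0.67 (lag < n)
```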

This looks like a pretty cool theory to me. I don’t believe it to be true, because I don’t buy the idea of all causation being simultaneous, but I think it gives a really nice picture.

Simultaneous causation and determinism

Consider the Causal Simultaneity Thesis (CST) that all causation is simultaneous. Assume that simultaneity is absolute (rather than relative). Assume there is change. Here is a consequence I will argue for: determinism is false. In fact, more strongly, there are no diachronic deterministic causal series. What is surprising is that we get this consequence without any considerations of free will or quantum mechanics.

Since there is a very plausible argument from presentism to CST (a non-simultaneous fundamental causal relation could never obtain between two existent things given presentism), we get an argument from presentism to indeterminism.

Personally, I am inclined to think of this argument as a bit of evidence against CST and hence against presentism, because it seems to me that there could be a deterministic world, even though there isn’t. But tastes differ.

Now the argument for the central thesis. The idea is simple. On CST, as soon as the deterministic causes of an effect are in place, their effect is in place. Any delay in the effect would mean a violation of the determinism. There can be nothing in the deterministic causes to explain how much delay happens, because all the causes work simultaneously. And so if determinism is true—i.e., if everything has a deterministic cause—then all the effects happen all at once, and everything is already in the final state at the first moment of time. Thus there is no change if we have determinism and CST.

The point becomes clearer when we think about how an adherent of CST explains diachronic causal series. We have an item A that starts existing at time t1, persists through time t2 (kept in existence not by its own causal power, as that would require a diachronic causal relation, but either by a conserver or a principle of existential inertia), then causes an item B, which then persists through time t3 and then causes an item C, and so on. While any two successive items in the causal series A, B, C, ... must overlap temporally (i.e., there must be a time at which they both exist), we need not have temporal overlap between A and C, say. We can thus have things perishing and new things coming into being after them.

But if the causation is deterministic, then as soon as A exists, it will cause B, which will cause C, and so on, thereby forcing the whole series to exist at once, and destroying change.

In an earlier post, I thought this made for a serious objection to CST. I asked: “Why does A ‘wait’ until t2 to cause B?” But once we realize that the issue above has to do with determinism, we see that an answer is available. All we need to do is to suppose there is probabilistic causation.

For simplicity (and because this is what fits best with causal finitism) suppose time is discrete. Then we may suppose that at each moment of time at which A exists it has a certain low probability p_AB of causing B if B does not already exist. Then the probability that A will cause B precisely after n units of time is (1 − p_AB)^n ⋅ p_AB. It follows mathematically that “on average” it will cause B after (1 − p_AB)/p_AB fundamental units of time.

It follows that for any desired average time delay, a designer of the universe can design a cause that has that delay. Let’s say that we want B to come into existence on average u fundamental units of time after A has come into existence. Then the designer can give A a causal power of producing B at any given moment of time at which B does not already exist with probability p_AB = 1/(1 + u).
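As a sanity check on the recipe, one can compare a truncated version of the series Σ n(1 − p_AB)^n ⋅ p_AB against the closed form; a minimal sketch, with the values of u picked arbitrarily:

```python
def mean_wait_partial_sum(p, terms=100_000):
    """Truncation of the series: sum over n of n * (1 - p)^n * p."""
    return sum(n * (1 - p) ** n * p for n in range(terms))

for u in (1, 10, 100):
    p = 1 / (1 + u)  # the designer's trigger probability
    print(f"u = {u}: series = {mean_wait_partial_sum(p):.4f}, "
          f"closed form (1 - p)/p = {(1 - p) / p:.4f}")
```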

The resulting setup will be indeterministic, and in particular we can expect significant random variation in how long it takes to get B from A. But if the designer wants more precise timing, that can be arranged as well. Let’s say that our designer wants B to happen very close to precisely one second after A. The designer can then ensure that, say, there are a million instants of time in a second, and that A has the power to produce an event A1 with a probability at any given instant such that the expected wait time will be 0.0001 seconds (i.e., 100 fundamental units of time), and A1 the power to produce A2 with the same probability, and so on, with A10000 = B. Then by the Central Limit Theorem, the wait time between A and B can be expected to be fairly close to 10000 × 0.0001 = 1 second, and the designer can get arbitrarily high confidence of an arbitrarily high precision of delay by inserting more instants in each second, and more intermediate causes between A and B, with each intermediate cause having an average delay time of 100 fundamental units (say). (This uses the fact that the geometric distribution has a finite third moment and the Berry-Esseen version of the Central Limit Theorem.)
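The scaling behind that claim can be made explicit. For a geometric wait with mean u, the variance is u(1 + u), so for k independent intermediate causes the ratio of the standard deviation of the total delay to its mean is sqrt(k ⋅ u(1 + u))/(k ⋅ u), which shrinks like 1/sqrt(k). Here is that computation for the example in the text, together with a tenfold refinement of my own invention:

```python
import math

def relative_spread(k, u):
    """Sd/mean of the total delay for k intermediate causes, each with
    a geometric wait of mean u (and hence variance u * (1 + u))."""
    return math.sqrt(k * u * (1 + u)) / (k * u)

# The example in the text: a million instants per second, 10000 intermediate
# causes, each with mean wait 100 units (0.0001 s): total mean delay 1 second.
print(relative_spread(10_000, 100))   # about 0.010: a spread of ~1% of a second
# A tenfold refinement: 10^7 instants per second, 100000 causes of mean 100 units.
print(relative_spread(100_000, 100))  # about 0.003: precision improves as 1/sqrt(k)
```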

Thus, a designer of the universe can make an arbitrarily precise and reliable near-deterministic changing universe despite CST. And that really blunts the force of my anti-deterministic observation as a consideration against CST.

Tuesday, November 23, 2021

Plural and singular grounding

Here’s a tempting principle:

  1. If x and y ground z, then the fusion of x and y grounds z.

In other words, we don’t need proper pluralities for grounding—their fusions do the job just as well.

But the principle is false. For the principle is only plausible if any two things have a fusion. But if x and y do not overlap, then x and y ground their fusion. And then (1) would say that the fusion grounds itself, which is absurd.

This makes it very plausible to think that plural objectual grounding does not reduce to singular objectual grounding.

Failures of supervenience on Lewis's system

Start with the concept of “narrowly physical” for facts about the arrangement of physical entities and first-order physical properties such as “charge” and “mass”.

Here are two observations I have not seen made:

  1. On Lewis-Ramsey accounts of laws, laws of nature concerning narrowly physical facts do not supervene on narrowly physical facts.

  2. On Lewis’s account of causation, causal facts about narrowly physical events do not supervene on narrowly physical facts.

This means that in a Lewisian system we have at least four things we could mean by “physical”:

  1. narrowly physical

  2. grounded in the laws concerning narrowly physical facts and/or the narrowly physical facts themselves

  3. grounded in the causal facts about narrowly physical events and/or the narrowly physical facts themselves

  4. grounded in the causal facts about narrowly physical events, the laws concerning narrowly physical facts and/or the narrowly physical facts themselves.

Here’s a corollary for the philosophy of mind:

  1. On a Lewisian system, we should not even expect the mental properties of purely narrowly physical beings to supervene on narrowly physical facts.

Argument for (1): The laws are the optimal systematization of particular facts. But now imagine a possible world where there is just a coin that is tossed a trillion times, and with no discernible pattern lands heads about half the time. In the best systematization, we attribute a chance of 1/2 to the coin landing heads. But now imagine a possible world with the same narrowly physical facts, but where there is an angel that thought about ℵ3 about a million times—each time with a good prior mental explanation of the train of thought—and each of these thoughts occurred just before the coin landed heads. Then the best systematization of the coin tosses will no longer make them simply have a chance of 1/2 of landing heads. Rather, they will have a chance 1/2 of landing heads when the angel didn’t just think about ℵ3.

Argument for (2): Add to the world in the above argument some cats and suppose that on any day when the fattest cat in the world eats n mice, that leads the angel to think about ℵn, though there are other things that can get the angel to think about ℵn. We can set things up so that the fattest cat’s eating three mice in a day causes the coin to land heads on the Lewisian counterfactual account of causation, but if we subtract the angel from the story, this will no longer be the case.

Monday, November 22, 2021

Functionalism implies the possibility of zombies

Endicott has observed that functionalism in the philosophy of mind contradicts the widely accepted supervenience of the mental on the physical, because you can have worlds where the functional features are realized by non-physical processes.

My own view is that a functionalist physicalist shouldn’t worry about this much. It seems to be a strength of a functionalist view that it makes it possible to have non-physical minds, and the physicalist should only hold that in the actual world all the minds are physical (call this “actual-world physicalism”).

But here is something that might worry a physicalist a little bit more.

  • If functionalism and actual-world physicalism are true, there is a possible world which is physically exactly like ours but where there is no pain.

Here is why. On functionalism, pain is constituted by some functional role. No doubt an essential part of that role is the detection of damage and the production of aversive behavior. Let’s suppose for simplicity that this role is realized in C-fiber firing in all beings capable of pain (the argument generalizes straightforwardly if there are multiple realizers). Now imagine a possible world physically just like this one, but with two modifications: there are lots of blissful non-physical angels, and all C-fiber equipped brains have an additional non-physical causal power to trigger C-fiber firing whenever an angel thinks about that brain. It is no longer true that the functional trigger for C-fiber firing is damage. Now, the functional trigger for C-fiber firing is the disjunction of damage and being thought about by an angel, and hence C-fiber firing no longer fulfills the functional role of pain. But now add that the angels never actually think about a brain while that brain is alive, though they easily could. Then the world is physically just like ours, but nobody feels any pain.

One might object that a functional role of a detector is unchanged by adding a disjunct to what is being detected. But that is mistaken. After all, imagine that we modify the hookups in a brain so that C-fiber firing is triggered by damage and lack of damage. Then clearly we’ve changed the functional role of C-fiber firing—now, the C-fibers are triggered 100% of the time, no matter what—even though we’ve just added a disjunct.

We can also set up a story where it is the aversive behavior side of the causal role that is removed. For instance, we may suppose that there is a magical non-physical aura normally present everywhere in the universe, and C-fiber firing interacts with this aura to magically move human beings in the opposite direction to the one their muscles are moving them to. The aura does nothing else. Thus, if the aura is present and you receive a painful stimulus, you now move closer to the stimulus; if the aura is absent, you move further away. It is no longer the case that C-fibers have the function of producing aversive behavior. However, we may further imagine that at times random abnormal holes appear in the aura, perhaps due to a sport played by non-physical pain-free imps, and completely coincidentally a hole has always appeared around any animal while its C-fibers were firing. Thus, the physical aspects of that world can be exactly the same as in ours, but there is no pain.

The arguments generalize to show that functionalists are committed to zombies: beings physically just like us but without any conscious states. Interestingly, these are implemented as the reverse of the zombies dualists think up. The dualist’s zombies lack non-physical properties that the dualist (rightly) thinks we have, and this lack makes them not be conscious. But my new zombies are non-conscious precisely because they have additional non-physical properties.

Note that the arguments assume the standard physicalist-based functionalism, rather than Koons-Pruss Aristotelian functionalism.

Friday, November 19, 2021

A privation theory of evil without lacks of entities

Taking the privation theory literally, evil is constituted by the non-existence of something that should exist. This leads to a lot of puzzling questions of what that “something” is in cases such as error and pain.

But I am now wondering whether one couldn’t have a privation theory of evil on which evil is a lack of something, but not of an entity. What do I mean? Well, imagine you’re a thoroughgoing nominalist, believing in neither tropes nor universals. Then you think that there is no such thing as red, but of course you can say that sometimes a red sign fades to gray. It is natural to say that the faded sign is lacking the due color red, and the nominalist should be able to say this, too.

Suppose that in addition to being a thoroughgoing nominalist, you are a classical theist. Then you will want to say this: the sign used to participate in God by being red, but now it no longer thusly participates in God (though it still otherwise participates in God). Even though you can’t be a literal privation theorist, and hold that some entity has perished from the sign, you can be a privation theorist of sorts, by saying that the sign has in one respect stopped participating in God.

A lot of what I said in the previous two paragraphs is fishy. The “thusly” seems to refer to redness, and “one respect” seems to involve a quantification over respects. But presumably nominalists say stuff like that in contexts other than God and evil. So they probably think they have a story to tell about such statements. Why not here, then?

Furthermore, imagine that instead of a nominalist we have a Platonist who does not believe in tropes (not even the trope of participating). Then the problems of the “thusly” and “one respect” and the like can be solved. But it is still the case that there is no entity missing from the sign. Yet we still recognizably have a privation theory.

This makes me wonder: could a privation theory that wasn’t committed to missing entities solve some of the problems that more literal privation theories face?

An omnipotence principle from Aquinas

Aquinas believes that it follows from omnipotence that:

  1. Any being that depends on creatures can be created by God without its depending on creatures.

But, plausibly:

  2. If x and y are a couple z, then z depends on x and y.

  3. If x and y are a couple z, then necessarily if z exists, z depends on x and y.

  4. Jill and Joe Biden are a couple.

  5. Jill and Joe Biden are creatures.

But this leads to a contradiction. By (4), we have a couple, call it “the Bidens”, consisting of Jill and Joe Biden, and by (2) that couple depends on Jill and Joe Biden. By (1) and (5), God can create the Bidens without either Jill or Joe Biden. But that contradicts (3).

So, Aquinas’ principle (1) implies that there are no couples. More generally, it implies that there are no beings that necessarily depend on other creatures. All our artifacts would be like that: they would depend on parts. Thus, Aquinas’ principle implies there are no artifacts.

Thomists are sometimes tempted to say that artifacts, heaps and the like are accidental beings. But the above argument shows that that won’t do. God’s power extends to all being, and whatever being creatures can bestow, God can bestow absent the creatures. If the accidental beings are beings, God can create them without their parts. But a universe with a heap and yet nothing heaped is absurd. So, I think, we need to deny the existence of accidental beings.

If we lean on (1) further, we get an argument for survivalism. Either Socrates depends on his body or not. If Socrates does not depend on his body, he can surely survive without his body after death. But if Socrates does depend on his body, then by (1) God can create Socrates disembodied, since Socrates’ body is a creature. But if God can create Socrates disembodied, surely God can sustain Socrates disembodied, and so Socrates can survive without his body. In fact, the argument does not apply merely to humans but to every embodied being: bacteria, trees and wolves can all survive death if God so pleases.

Things get even stranger once we get to the compositional structure of substances. Socrates presumably depends on his act of being. But Socrates’ act of being is itself a creature. Thus, by (1), God could create Socrates without creating Socrates’ act of being. Then Socrates would exist without having any existence.

I like the sound of (1), but the last conclusion seems disastrous. Perhaps, though, the lesson we get from this is that the esse of Socrates isn’t an entity? Or perhaps we need to reject (1)?

Valuing and behavioral tendencies

It is tempting to say that I value a wager W at x provided that I would be willing to pay any amount up to x for W and unwilling to pay an amount larger than x. But that’s not quite right. For often the fact that a wager is being offered to me would itself be relevant information that would affect how I value the wager.

Let’s say that you tossed a fair coin. Then I value a wager that pays ten dollars on heads at five dollars. But if you were to try to sell me that wager for a dollar, I wouldn’t buy it, because your offering it to me at that price would be strong evidence that you saw the coin landing tails.
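To see the effect quantitatively, here is a toy Bayesian model. The likelihoods (how often a seller would make such a cheap offer after seeing heads vs. after seeing tails) are made-up numbers, purely for illustration:

```python
def value_given_offer(prior_heads, payoff, offer_if_heads, offer_if_tails):
    """Expected payoff of the wager, conditional on its being offered cheaply."""
    posterior = (prior_heads * offer_if_heads) / (
        prior_heads * offer_if_heads + (1 - prior_heads) * offer_if_tails
    )
    return posterior * payoff

# Unconditionally the wager is worth 0.5 * $10 = $5. But if a seller who saw
# tails is 99 times more likely to offer it for a dollar than one who saw
# heads, then the offered wager is worth only about ten cents to me:
print(value_given_offer(0.5, 10.0, 0.01, 0.99))  # 0.1
```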

Thus, if we want to define how much I value a wager at in terms of what I would be willing to pay for it, we have to talk about what I would be willing to pay for it were the fact that the wager is being offered statistically independent of the events in the wager.

But sometimes this conditional does not help. Imagine a wager W that pays $123.45 if p is true, where p is the proposition that at some point in my life I get offered a wager that pays $123.45 on some eventuality. My probability of p is quite low: it is unlikely anybody will offer me such a wager. Consequently, it is right to say that I value the wager at some small amount, maybe a few dollars.

Now consider the question of what I would be willing to pay for W were the fact that the wager is being offered statistically independent of the events in the wager, i.e., independent of p. Since my being offered W entails p, the only way we can have the statistical independence is if my being offered W has credence zero or p has credence one. It is reasonable to say that the closest possible world where one of these two scenarios holds is a world where p has credence one because some wager involving a $123.45 payout has already been offered to me. In that world, however, I am willing to pay up to $123.45 for W. Yet that is not what I value W at.

Maybe when we ask what we would be willing to pay for a wager, we mean: what we would be willing to pay provided that our credences stayed unchanged despite the offer. But a scenario where our credences stay unchanged despite the offer is a very weird one. Obviously, when an offer is made, your credence that the offer is made goes up, unless you’re seriously irrational. So this new counterfactual question asks us what we would decide in worlds where we are seriously irrational. And that’s not relevant to the question of how we value the wager.

Maybe instead of asking about the prices at which I would accept an offer, I should instead ask about the prices at which I would make an offer. But that doesn't help either. Go back to the fair coin case. I value a wager that pays you ten dollars on heads at negative five dollars. But I might not offer it to you for eight dollars, because it is likely that you would pay eight dollars for this wager only if you actually saw that the coin turned out heads, in which case this would be a losing proposition for me.

The upshot is, I think, that the question of what one values a wager at is not to be defined in terms of simple behavioral tendencies or even simple counterfactualized behavioral tendencies. Perhaps we can do better with a holistic best-fit analysis.

Thursday, November 18, 2021

The Paradox of Charity

We might call the following three statements "the Paradox of Charity":

  1. In charity, we love our neighbor primarily because of our neighbor’s relation to God.

  2. In the best kind of love, we love our neighbor primarily because of our neighbor’s intrinsic properties.

  3. Charity is the best kind of love.

I think this paradox discloses something very deep.

Note that the above three statements do not by themselves constitute a strictly logical contradiction. To get a strictly logical contradiction we need a premise like:

  4. No intrinsic property of our neighbor is a relation to God.

Now, let’s think (2) through. I think our best reason for accepting (2) is not abstract considerations of intrinsicness, but particular cases of properties. In the best kind of love, perhaps, we love our neighbor because our neighbor is a human being, is a finite person, has a potential for human flourishing, etc. We may think that these features are intrinsic to our neighbor, but we directly see them as apt reasons for the best kind of love, without depending on their intrinsicness.

But suppose ontological investigation of such paradigm properties for which one loves one’s neighbor with the best kind of love showed that these properties are actually relational rather than intrinsic. Would that make us doubt that these properties are a fit reason for the best kind of love? Not at all! Rather, if we were to learn that, we would simply deny (2). (And notice that plenty of continentally-inclined philosophers do think that personhood is relational.)

And that is my solution. I think (1), (3) and (4) are true. I also think that the best kind of neighbor love is motivated by reasons such as that our neighbor is a human being, or a person, or has a potential for human flourishing. I conclude from (1), (3) and (4) that these properties are relations to God.

But how could these be relations to God? Well, all the reality in a finite being is a participation in God. Thus, being human, being a finite person and having a potential for human flourishing are all ways of participating in God, and hence are relations to God. Indeed, I think:

  5. Every property of every creature is a relation to God.

It follows that no creature has any intrinsic property. The closest we come to having intrinsic properties are what one might call “almost intrinsic properties”—properties that are relational to God alone.

We can now come back to the original argument. Once we have seen that all creaturely properties are participations in God, we have no reason to affirm (2). But we can still affirm, if we like:

  6. In the best kind of love, we love our neighbor primarily because of our neighbor’s almost intrinsic properties, i.e., our neighbor’s relations only to God.

And there is no tension with (1) any more.

Wednesday, November 17, 2021

First person survivorship bias?

Suppose I take a nasty fall while biking. But I remain conscious. Here is the obvious first thing for a formal epistemologist to do: increase my credence in the effectiveness of this brand of helmets. But by how much?

In an ordinary case of evidence gathering, I simply conditionalize on my evidence. But this is not an ordinary case, because if things had gone otherwise—namely, if I had not remained conscious—I wouldn’t have been able to update or think in any way. It seems like I am now subject to a survivorship bias. What should I do about that? Should I simply dismiss the evidence entirely, and leave unchanged my credence in the effectiveness of helmets? No! For I cannot deny that I am still conscious—my credence for that is now forced to be one. If I leave all my other credences unchanged, my credences will become inconsistent, assuming they were consistent before, and so I have to do something to my other credences to maintain consistency.

It is tempting to think that perhaps I need to compensate for survivorship bias in some way, perhaps updating my credence in the effectiveness of the helmet to be bigger than my priors but smaller than the posteriors of a bystander who had the same priors as I did but got to observe my continued consciousness without a similar bias, since they would have been able to continue to think even were I to become unconscious.

But, no. What I should do is simply update on my consciousness (and on the impact, but if I am a perfect Bayesian agent, I have already done that as soon as it was evident that I would hit the ground), and not worry about the fact that if I weren’t conscious, I wouldn’t be around to update on it. In other words, there is no such problem as survivorship bias in the first person, or at least not in cases like this.

To see this, let’s generalize the case. We have a situation where the probability space is partitioned into outcomes E1, ..., En, each with non-zero prior credence. I will call an outcome Ei normal if on that outcome you would know for sure that Ei has happened, you would have no memory loss, and would be able to maintain rationality. But some of the outcomes may be abnormal. I will have a bit more to say about the kinds of abnormality my argument can handle in a moment.

We can now approach the problem as follows: Prior to the experiment—i.e., prior to the potentially incapacitating observation—you decide rationally what kind of evidence update procedures to adopt. On the normal outcomes, you get to stick to these procedures. On the abnormal ones, you won’t be able to—you will lose rationality, and in particular your update will be statistically independent of the procedure you rationally adopted. This independence assumption is pretty restrictive, but it plausibly applies in the bike crash case. For in that case, if you become unconscious, your credences become fixed at the point of impact or become scrambled in some random way, and you have no evidence of any connection between the type of scrambling and the rational update procedure you adopted. My story can even handle cases where on some of the abnormal outcomes you don’t have any credences, say because your brain is completely wiped or you cease to exist, again assuming that this is independent of the update procedure you adopted for the normal outcomes.

It turns out to be a theorem that under conditions like this, given some additional technical assumptions, you maximize expected epistemic utility by conditionalizing when you can, i.e., whenever a normal outcome occurs. And epistemic utility arguments are formally interchangeable with pragmatic arguments (because rational decisions about wager adoption yield a proper epistemic utility), so we also get a pragmatic argument. The theorem will be given at the end of this post.

This result means we don’t have to worry in firing squad cases that you wouldn’t be there if you had been hit: you can just happily update your credences (say, regarding the number of empty guns, the accuracy of the shooters, etc.) on your not being hit. Similarly, you can update on your not getting Alzheimer’s (which is, e.g., evidence against your siblings getting it), on your not having fallen asleep yet (which may be evidence that a sleeping pill isn’t effective), etc., much as a third party who would have been able to observe you on both outcomes should. Whether this applies to cases where you wouldn’t have existed in the first place on one of the items in the partition—i.e., whether you can update on your existence, as in fine-tuning cases—is a more difficult question, but the result makes some progress towards a positive answer. (Of course, it wouldn’t surprise me if all this were known. It’s more fun to prove things oneself than to search the literature.)

Here is the promised result.

Theorem. Assume a finite probability space. Let I be the set of i such that Ei is normal. Suppose that epistemic utility is measured by a proper accuracy scoring rule si when Ei happens for i ∈ I, so that the epistemic utility of a credence assignment ci is si(ci) on Ei. Suppose that epistemic utility is measured by a random variable Ui on Ei (not dependent on the choice of the cj for j ∈ I) for i not in I. Let U(c) = ∑_{i ∈ I} 1_{Ei} ⋅ si(ci) + ∑_{i ∉ I} 1_{Ei} ⋅ Ui. Assume you have consistent priors p that assign non-zero credence to each normal Ei, and that the expectation of the second sum with respect to these priors is well defined. Then the expected value of U(c) with respect to p is maximized when ci(A) = p(A ∣ Ei) for i ∈ I. If additionally the scoring rules are strictly proper, and the p-expectation of the second sum is finite, then the expected value of U(c) is uniquely maximized by that choice of ci.

This is one of those theorems that are shorter to prove than to state, because they are pretty obvious once fairly clearly formulated.

Normally, all the si will be the same. It’s worth asking whether any useful generalization is gained by allowing them to be different. Perhaps there is. We could imagine situations where depending on what happens to you, your epistemic priorities rightly change. Thus, if an accident leaves you with some medical condition, knowing more about that medical condition will be valuable, while if you don’t get that medical condition, the value of knowing more about it will be low. Taking that into account with a single scoring rule is apt to make the scoring rule improper. But in the case where you are conditioning on that medical condition itself, the use of different but individually proper scoring rules when the condition eventuates and when it does not can model the situation rather nicely.

Proof of Theorem: Let ci be the result of conditionalizing p on Ei. Then the expectation of si(ci′) with respect to ci is maximized when (and only when, if the conditions of the last sentence of the theorem hold) ci′ = ci, by the propriety of si. But the expectation of si(ci′) with respect to ci equals 1/p(Ei) times the expectation of 1_{Ei} ⋅ si(ci′) with respect to p. So the latter expectation is maximized when (and only when, given the additional conditions) ci′ = ci.
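Here is a tiny numerical illustration of the theorem. Everything in it (the four worlds, the prior, the rival update policies) is invented for the purpose; the score is the negative Brier score, which is strictly proper:

```python
# Four worlds; E1 = {w1, w2} is the normal outcome (you stay rational and
# get scored). On its complement your epistemic utility is a fixed quantity
# independent of your policy, so it drops out of the comparison.
worlds = ["w1", "w2", "w3", "w4"]
prior = {"w1": 0.3, "w2": 0.2, "w3": 0.4, "w4": 0.1}
E1 = {"w1", "w2"}

def brier(c, actual):
    """Negative Brier score of credence function c if `actual` obtains."""
    return -sum((c[w] - (1.0 if w == actual else 0.0)) ** 2 for w in worlds)

def expected_utility(c):
    """The credence-dependent part of prior expected epistemic utility."""
    return sum(prior[w] * brier(c, w) for w in E1)

p_E1 = sum(prior[w] for w in E1)
candidates = {
    "conditionalize": {w: (prior[w] / p_E1 if w in E1 else 0.0) for w in worlds},
    "keep the prior": prior,
    "flat on E1":     {"w1": 0.5, "w2": 0.5, "w3": 0.0, "w4": 0.0},
    "overshoot":      {"w1": 0.9, "w2": 0.1, "w3": 0.0, "w4": 0.0},
}
for name, c in candidates.items():
    print(f"{name:15s} {expected_utility(c):.4f}")
# conditionalize: -0.2400; keep the prior: -0.3900;
# flat on E1: -0.2500; overshoot: -0.3300
```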

Tuesday, November 16, 2021

Functionalism and multiple realizability

Functionalism holds that two (deterministic) minds think the same thoughts when they engage in the same computation and have the same inputs. What does it mean for them to engage in the same computation?

This is a hard question. Suppose two computers run programs that sort a series of names in alphabetical order, but they use different sorting algorithms. Given the same inputs, are the two computers engaging in the same computation?

If we say “no”, then functionalism doesn’t have the degree of multiple realizability that we thought it did. We have no guarantee that aliens who behave very much like us think very much like us, or even think at all, since the alien brains may have evolved to compute using different algorithms from us.

If we say “yes”, then it seems we are much better off with respect to multiple realizability. However, there is a tricky issue here: What counts as the inputs and outputs? We just said that the computers using different sorting algorithms engage in the same computation. But the computer using a quicksort typically returns an answer sooner than a computer using a bubble sort, and heats up less. In some cases, the time at which an output is produced itself counts as an output (think of a game where timing is everything). And heat is a kind of output, too.
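A toy Python sketch can make the worry concrete (the list of names is arbitrary). Both functions below compute the very same input–output mapping, but they do measurably different amounts of work, and the amount of work is precisely the sort of thing (like timing and heat) that may or may not count as an output:

```python
def bubble_sort(names):
    """Bubble sort, returning the sorted list and a comparison count."""
    names, comparisons = list(names), 0
    for i in range(len(names)):
        for j in range(len(names) - 1 - i):
            comparisons += 1
            if names[j] > names[j + 1]:
                names[j], names[j + 1] = names[j + 1], names[j]
    return names, comparisons

def quicksort(names):
    """Quicksort, returning the sorted list and a comparison count."""
    if len(names) <= 1:
        return list(names), 0
    pivot, rest = names[0], names[1:]
    comparisons = len(rest)  # each element is compared with the pivot once
    left, cl = quicksort([x for x in rest if x < pivot])
    right, cr = quicksort([x for x in rest if x >= pivot])
    return left + [pivot] + right, comparisons + cl + cr

names = ["Smith", "Jones", "Brown", "Davis", "Wilson", "Taylor", "Clark", "Lewis"]
print(bubble_sort(names))  # (sorted list, 28 comparisons)
print(quicksort(names))    # (same sorted list, 15 comparisons)
```

If only the sorted list counts as output, the two machines compute the same function; if comparison counts (or time, or heat) count too, they do not.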

In my toy sorting algorithm example, presumably we didn’t count the timing and the heat as features of the outputs because we assumed that to the human designers and/or users of the computers the timing and heat have no semantic value, but are merely matters of convenience (sooner and cooler are better). But when we don’t have a designer or user to define the outputs, as in the case where functionalism is applied to randomly evolved brains, things are much more difficult.

So, in practice, even if we answered “yes” in the toy sorting algorithm case, in a real-life case where we have evolved brains, it is far from clear what counts as an output, and hence far from clear what counts as “engaging in the same computation”. As a result, the degree to which functionalism yields multiple realizability is much less clear.

Subjective guilt and war

One of the well-known challenges in accounting for killing in a just war is the thought that even soldiers fighting on a side without justice think they have justice on their side, hence are subjectively innocent, and thus it seems wrong to kill them.

But I wonder if there isn’t an opposite problem. As is well-known, human beings have a very strong visceral opposition to killing. Even those who kill with justice on their side are apt to feel guilty, and it wouldn’t be surprising if often they not only feel guilty but judge themselves to have done wrong. Thus, it could well be that soldiers who kill on both sides of a war have a tendency to be subjectively guilty, even if one of the sides is waging a just war.

Or perhaps things work out this way: Soldiers who kill tend to be subjectively guilty unless they are waging a clearly just war. If so, then those who are on a side without justice are indeed apt to be subjectively guilty, since rarely does a side without justice appear manifestly just. And those who are on a side with justice may very well also be subjectively guilty, unless the war is one of those where justice is manifest (as was the case for the Allies in World War II).

I doubt that things work out all that neatly.

In any case, the above considerations do show that a side with justice has very strong moral reason to make that justice as manifest as possible to the soldiers. And when that is not possible, those in charge should be persons of such evident integrity that it is easy to trust their judgment.

Monday, November 15, 2021

Intrinsic evil

Consider this argument:

  1. An action is intrinsically evil if and only if it is wrong to do no matter what.

  2. In doing anything wrong, one does something (at least) prima facie bad with insufficient moral reason.

  3. No matter what, it is wrong to do something prima facie bad with insufficient moral reason.

  4. So in doing anything wrong, one performs an intrinsically evil action.

This conclusion seems mistaken. Lightly slapping a stranger on a bus in the face is wrong, but not intrinsically wrong, because if a malefactor was going to kill everyone on the bus who wasn’t slapped by you, then you should go and slap everybody. Yet the argument would imply that in lightly slapping a stranger on a bus you do something intrinsically wrong, namely slap a stranger with insufficient moral reason. But it seems mistaken to think that in slapping a stranger lightly you perform an intrinsically evil action.

The above argument threatens to eviscerate the traditional Christian distinction between intrinsic and extrinsic evil. What should we say?

Here is a suggestion. Perhaps we should abandon (1) and instead distinguish between the reasons why an action is wrong. Intrinsically evil actions are wrong for reasons that do not depend on consideration of consequences, while extrinsically evil actions are wrong but not for any reasons that do not depend on consideration of consequences.

Thus, lightly slapping a stranger with insufficient moral reason is extrinsically evil because any reason that makes it wrong is a reason that depends on consideration of consequences. On the other hand, one can completely explain what makes an act of murder wrong without adverting to consequences.

But isn’t the death of the victim a crucial part of the wrongness of murder, and yet a consequence? After all, if the cause of death is murder, then the death is a consequence of the murder. Fortunately we can solve this: the act is no less wrong if the victim does not die. It is the intention of death, not the actuality of death, that is a part of the reasons for wrongness.

So, when we distinguish between acts made wrong by consequences and wrong acts not made wrong by consequences, by “consequences” we do not mean intended consequences, but only actual or foreseen or risked consequences.

But what if Alice slaps Bob with the intention of producing an on-balance bad outcome? That act is wrong for reasons that have nothing to do with actual, foreseen or risked consequences, but only with her intention. Here I think we can bite the bullet: to slap an innocent stranger with the intention of producing an on-balance bad outcome is intrinsically wrong, just as it is intrinsically wrong to slap an innocent stranger with the intention of causing death.

Note that this would show that an intrinsically evil action need not be very evil. A light slap with the intention of producing an on-balance slightly bad outcome is wrong, but not very wrong. (Similarly, the Christian tradition holds that every lie is intrinsically evil, but some lies are only slight wrongs.)

Here is another advantage of running the distinction in this way, given the Jewish and Christian tradition. If an intrinsically evil action is one that is evil independently of consequences, it could be that such an action could still be turned into a permissible one on the basis of circumstantial factors not based in consequences. And God’s commands can be such circumstantial factors. Thus, when God commands Abraham to kill Isaac, the killing of Isaac becomes right not because of any new consequences, but because of the circumstance of God commanding the killing.

Could we maybe narrow down the scope of intrinsically evil actions even more, by saying that not just consequences, but circumstances in general, aren’t supposed to be among the reasons for wrongness? But if we do that, then most paradigm cases of intrinsically evil actions will fail: for instance, that the victim of a murder is innocent is a circumstance (it is not a part of the agent’s intention).

Trust and scepticism

To avoid scepticism, we need to trust that human epistemic practices and reality match up. This trust is clearly at least a part of a central epistemic virtue.

Now, trusting persons is a virtue, the virtue of faith. But trusting in general, apart from trusting persons, is not. Theism can thus neatly account for how the trusting that is at the heart of human epistemic practices is virtuous: it is an implicit trust in our creator.

Friday, November 12, 2021

Another way out of the metaphysical problem of evil

The metaphysical problem of evil consists in the contradiction between:

  1. Everything that exists is God or is created by God.

  2. God is not an evil.

  3. God does not create anything that is an evil.

  4. There exists an evil.

The classic Augustinian response is to deny (4) by saying that evil “is” just a lack of a due good. This has serious problems with evil positive actions, errors, pains, etc.

Here is a different way out. Say that a non-fundamental object x is an object x such that the proposition that x exists is wholly grounded in some proposition that makes no reference to x. Now we deny (3) and replace it with:

  5. God does not create anything fundamental that is an evil.

How could God create something non-fundamental that is an evil? By a combination of creative acts and refrainings from creative acts whose joint outcome grounds the existence of the non-fundamental evil, while foreseeing without intending the non-fundamental evil. Of course, this requires the kind of story about intention that the Principle of Double Effect uses.

Thus, consider George Shaw’s (initial) erroneous belief that there are no platypuses. God creates George Shaw. He creates Shaw’s belief. He creates platypuses. The belief isn’t an evil. The platypuses aren’t an evil. The combination of the belief and the platypuses is an error. But the combination of the two is not a fundamental entity (even if the belief and the platypuses are). God can intend the belief to exist and the platypuses to exist without intending the combination to exist.

A variant virtue ethic centered on virtues and not persons

I’ve been thinking a bit about the virtue ethical claim that the right (i.e., obligatory) action is one that a virtuous person would do and the wrong one is one that a virtuous person wouldn’t do. I’ve argued in my previous posts that this is a problematic claim, since given either naturalism or the Hebrew Scriptures, it is possible for a virtuous person to do something wrong.

Maybe instead of focusing on the person, the virtue ethicist can focus on the virtues. Here is an option:

  1. An action is wrong if and only if it could not properly (non-aberrantly) flow from the relevant virtues.

This principle is compatible with a virtuous person doing something wrong, as long as that wrong thing doesn’t flow from virtue.

The “properly” in (1) is an “in the right way” condition. Once we have allowed, as I think we should, that a virtuous person can do the wrong thing, we should also allow that a wrong action can flow from virtue in some aberrant way. For instance, we can imagine a wholly virtuous person falling prey to a temptation to brag about being wholly virtuous (and instantly losing the virtue, of course). The bragging flows from the virtue—but aberrantly.

A downside of (1) is that it is a pretty strong condition on permissibility. One might think that there are some permissible morally neutral actions which can be done by a perfectly virtuous person but which do not flow from their virtue. If we accept (1), then in effect we are saying that there are no morally neutral actions. I think that is the right thing to say.

The big problem with (1) is the “properly”.

Naturalists shouldn't be virtue ethicists

Virtue ethics is committed to this claim:

  1. A choice of A is wrong if and only if a person who had the relevant virtues explanatorily prior to having chosen A and was in these circumstances would not have chosen A.

But (1) implies this generalization:

  2. A person who has the relevant virtues explanatorily prior to a choice never chooses wrongly.

In my previous post I argued that Aristotelian Jews and Christians should deny (2), and hence (1).

Additionally, I think naturalists should deny (1). For we live in a fundamentally indeterministic world given quantum mechanics. If a virtuous person were placed in a position of choosing between aiding and insulting a stranger, there would always be a tiny probability of their choosing to insult the stranger. We shouldn’t say that they wouldn’t insult the stranger, only that they would be very unlikely to do so (this is inspired by Alan Hajek’s argument against counterfactuals).

And (2) itself is dubious, unless we have such a high standard of virtue that very few people have virtues. For in our messy chaotic world, very little is at 100%. Rare exceptions should be expected when human behavior is involved.

(Perhaps a dualist virtue ethicist who does not accept the Hebrew Scriptures could accept (1) and (2), holding that a virtuous soul makes the choices and is not subject to the indeterminacy of quantum mechanics and the chaos of the world.)

There is a natural way out of the above arguments, and that is to change (1) to a probabilistic claim:

  3. A choice of A is wrong if and only if a person who had the relevant virtues explanatorily prior to having chosen A and was in these circumstances would be very unlikely to have chosen A.

But (3) is false. Suppose that Alice is a virtuous person who has a choice to help exactly one of a million strangers. Whichever stranger she chooses to help, she does no wrong. But it is mathematically guaranteed that there is at least one stranger such that her chance of helping them is at most one in a million (for if p_n is her chance of helping stranger number n, then p_1 + ... + p_1000000 ≤ 1, since she cannot help more than one; given that 0 ≤ p_n for all n, it follows mathematically that for some n we have p_n ≤ 1/1000000). So her helping such a stranger is a choice a virtuous person would be very unlikely to make, but it isn’t wrong.

Or, for a less weighty case, suppose I say something perfectly morally innocent to start off a conversation. Yet it is very unlikely that a virtuous person would have said that very thing. Why? Because there are so very many perfectly morally innocent ways to start off a conversation that it is very unlikely that they would have chosen the same one I did.

Christians and Jews should not be Aristotelian virtue ethicists

If virtue ethics is correct:

  1. A choice is wrong if and only if a person with the relevant virtues and in these circumstances wouldn’t have made that choice. (Premise)

If Aristotelian virtue ethics is correct:

  2. An adult lacking a virtue is defective. (Premise)

But:

  3. Humans became defective because of the choice of Adam and Eve to eat the forbidden fruit. (Premise)

And it seems that:

  4. Adam and Eve were adult humans when they chose to eat the forbidden fruit. (Premise)

Thus it seems:

  5. When Adam and Eve chose to eat the forbidden fruit, they were not lacking relevant virtues. (By 2–4)

  6. Thus, persons (namely Adam and Eve!) with the relevant virtues and in their circumstances did choose to eat the forbidden fruit. (By 5)

  7. Thus, their choice to eat the forbidden fruit wasn’t wrong. (By 1 and 6)

  8. But their choice was wrong. (Premise)

  9. Contradiction!

Here is one thing the classic virtue ethicist can question about this argument: the derivation of (5) depends on how we read premise (1). We could read (1) as:

  10. A choice of A is wrong if and only if a person who had the relevant virtues explanatorily prior to having chosen A and was in these circumstances would not have chosen A

or as:

  11. A choice of A is wrong if and only if a person who had the relevant virtues while having chosen A and was in these circumstances would not have chosen A.

If we opt for (10), the derivation of (5) works, and the argument stands. But if we opt for (11) then we can say that as soon as Adam and Eve chose to eat the fruit, they no longer counted as virtuous.

Could the virtue ethicist thus opt for (11) in place of (10)? I don’t think so. It seems central to virtue ethics that the right choices are ones that result from virtue. And that is what (10) captures. To a great extent (11) would trivialize virtue ethics, in that obviously in doing a bad thing one isn’t virtuous.

Ethics and multiverse interpretations of quantum mechanics

Somehow it hadn’t occurred to me until yesterday that quantum multiverse theories (without the traveling minds tweak) undercut half of ethics, just as Lewis’s extreme modal realism does.

For whatever we do, total reality is the same, and hence no suffering is relieved, no joy is added, etc. The part of ethics where consequences matter is all destroyed. There is no point to preventing any evil, since doing so just shifts which branch of the multiverse one inhabits.

At most what is left of ethics is agent-centered stuff, like deontology. But that’s only about half of ethics.

Moreover, even the agent-centered stuff may be seriously damaged, depending on how one interprets personal identity in the quantum multiverse.

Consider three theories.

On the first, I go to all the outgoing branches, with a split consciousness. On this view, no matter what, there will be branches where I act well and branches where I act badly. So much or all of the agent-centered parts of ethics will be destroyed.

On the second, whenever branching happens, the persons in the branches are new persons. If so, then there are no agent-centered outcomes—if I am deliberating between insulting or comforting a suffering person, no matter what, I will do neither, but instead a descendant of me will insult and another descendant will comfort. Again, it’s hard to fit this with the agent-centered parts of ethics.

The third is the infinitely many minds theory on which there are infinitely many minds inhabiting my body, and whenever a branching happens, infinitely many move into each branch. In particular, I will move into one particular branch. On this theory, if somehow I can control which branch I go down (which is not clear), there is room for agent-centered outcomes. But this is not the most prominent of the multiverse theories.

Thursday, November 11, 2021

Forwards causal closure and dualist theories of perception

A standard dualist theory of perception goes like this:

  1. You sense stuff physically, the data goes to the brain, the brain processes the data, and out of the processed data produces qualia.

There is a lot of discussion of the “causal closure” of the physical. What people generally mean by this is that the physical is causally backwards-closed: the cause of a physical thing is itself physical. This is a controversial doctrine, not least because it seems to imply that some physical things are uncaused. But what doesn’t get discussed much is a more plausible doctrine we might call the forwards causal closure of the physical: physical causes only have physical effects. Forwards causal closure of the physical is, I think, a very plausible candidate for a conceptual truth. The physical isn’t spooky—and it is spooky to have the power of producing something spooky. (One could leave this at this conceptual argument, or one could add the scholastic maxim that one cannot cause what one does not in some sense have.)

By forwards closure, on the standard dualist theory, the brain is not a physical thing. This is a problem. It is supposed to be one of the advantages of the standard dualist theory that it is compatible with property dualism on which people are physical but have non-physical properties. But if the brain is not physical, there is no hope for people to be physical! Personally, I don’t mind losing property dualism, but it sure sounds absurd to hold that the brain is not physical.

Recently, I have been thinking about a non-causal dualist theory that goes like this:

  1. You sense stuff physically, the data goes to the brain, the brain processes the data, and the soul “observes” the brain’s processed data. (Or, perhaps more precisely, the person "feels" the neural data through the soul.)

To expand on this, what makes one feel pain is not the existence of a pain quale, but a sui generis “observation” relation between the soul and the brain’s processed data. This observation relation is not caused by the data, but takes place whether there is data there or not (if there isn’t, we have a perceptual blank slate). The soul is not changed intrinsically by the data: the “observation” of a particular datum—say, a datum representing a sharp pain in a toe—is an extrinsic feature of the soul. Note that unlike the standard theory, this up-front requires substance dualism of some sort, since the observing entity is not physical given the sui generis nature of the “observation” relation.

The non-causal dualist theory allows one to maintain forwards closure of the physical and the physicality of the brain. For the brain doesn’t cause a non-physical effect. The brain simply gets “observed”.

It is however possible that the soul causes an effect in the brain—for instance, the “observation” relation may trigger quantum collapse. Thus, the theory may violate backwards closure. And that’s fine by me. Backwards closure does not follow conceptually from the concept of the physical—a physical thing doesn’t become spooky for having a spooky cause.

There is a difficulty here, however. Suppose that the soul acts on the “observed” data, say by causing one to say “You stepped on my foot.” Wouldn’t we want to say that the brain data correlated with the pain caused one to say “You stepped on my foot”?

I think this temptation is resistible. Ridiculously oversimplifying, we can imagine that the soul has a conditional causal power to cause an utterance of “You stepped on my foot” under the condition of “observing” a certain kind of pain-correlated neural state. And while it is tempting to say that the satisfied conditions of a conditional causal power cause the causal power to go off, we need not say that. We can simply say that the causal power goes off, and the cause is not the condition, but the thing that has the causal power, in this case the soul.

On this story, if you step on my foot, you don’t cause me to say “You stepped on my foot”, though you do cause the condition of my conditional causal power to say so. We might say that in an extended sense there is a “causal explanation” of my utterance in terms of your stepping, and your stepping is “causally prior” to my utterance, even though this causal explanation is not itself an instance of causation simpliciter. If so, then all the stuff I say in my infinity book on causation should get translated into the language of causal explanation or causal priority. Or we can just say that there is a broad and a narrow sense of “cause”, and in the broad sense you cause me to speak and in the narrow you do not.

I think there is a very good theological reason to think this makes sense. For we shouldn’t say that our actions cause God to act. The idea of causing God to do anything seems directly contrary to divine transcendence. God is beyond our causal scope! Just as by forwards closure a physical thing cannot cause a spiritual effect, so too by transcendence a created thing cannot cause a divine effect. Yet, of course, our actions explain God’s actions. God answers prayers, rewards the just and punishes the unrepentant wicked. There is, thus, some sort of quasi-causal explanatory relation here that can be used just as much for non-causal dualist perception.

Wednesday, November 10, 2021

Online talk: A Norm-Based Design Argument

Thursday November 11, 2021, at 4 pm Eastern (3 pm Central), the Rutgers Center for Philosophy of Religion and the Princeton Project in Philosophy of Religion present a joint colloquium: Alex Pruss (Baylor), "A Norm-Based Design Argument".

The location will be https://rutgers.zoom.us/s/95159158918

"The whole is bigger than the part"

Some people don’t like Cantorian ways of comparing the sizes of sets because they want a “whole is bigger than the (proper) part” principle, whose denial they consider counterintuitive.

Suppose that there is a relation ≤ which provides a way of comparing the sizes of sets of real numbers (or just the sizes of countable sets of real numbers) such that:

  1. the comparison satisfies the “the whole is bigger than the part” principle, so that if A is a proper subset of B, then A < B

  2. there are no incommensurable sets: given any A and B, at least one of A ≤ B and B ≤ A holds

  3. the relation ≤ is transitive and reflexive.

Then the Banach-Tarski paradox follows from (1)–(3) without any use of the Axiom of Choice: there is a way to decompose a ball into a finite number of pieces and move them around to form two balls of the same size as the original. And Banach-Tarski feels like a direct violation of the “whole is bigger” principle!

Thus, intuitive as the “whole is bigger” principle is, the price of being able to compare the sizes of sets of real numbers in conformity with the principle is quite high. I suspect that most people who balk at denying the “whole is bigger” principle also think Banach-Tarski is super problematic.

For our next observation, let’s add one more highly plausible condition:

  4. the relation ≤ is weakly invariant under reflections of the real line: for any reflection ρ, we have A ≤ B if and only if ρA ≤ ρB.

Proposition: Conditions (1)–(4) are contradictory.

So, I think we should deny that, in the context of comparing the number of elements of a set, the whole is bigger than the proper part.

Proof of Proposition: Write A ∼ B iff A ≤ B and B ≤ A, and write A < B iff A ≤ B but not B ≤ A. Then I claim we have A ∼ ρA for any reflection ρ. For otherwise we either have A < ρA or ρA < A by (2). If we have A < ρA, then we also have ρA < ρ2A by (4), and since ρ2A = A, we have ρA < A, a contradiction. If we have ρA < A, then we have ρ2A < ρA by (4), and hence A < ρA, again a contradiction.

Since any translation τ can be made out of two reflections, it follows that A ∼ τA as well. Let τ be translation by one unit to the right. Then {0, 1, 2, ...} ∼ τ{0, 1, 2, ...} = {1, 2, 3, ...}, which contradicts (1), since {1, 2, 3, ...} is a proper subset of {0, 1, 2, ...}.
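To check the step that any translation can be made out of two reflections, here is the routine verification (my own rendering, not part of the original argument), writing ρc for the reflection about the point c:

\[
\rho_c(x) = 2c - x, \qquad (\rho_{1/2} \circ \rho_0)(x) = 1 - (-x) = x + 1.
\]

So translation by one unit to the right is ρ1/2 ∘ ρ0, and two applications of A ∼ ρA, together with the transitivity supplied by (3), give A ∼ τA.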

Monday, November 8, 2021

Infinite Dedekind finite sets

Most paradoxes of actual infinities, such as Hilbert’s Hotel, depend on the intuition that:

  1. A collection is bigger than any proper subcollection.

A Dedekind infinite set is one that has the same cardinality as some proper subset. In other words, a Dedekind infinite set is precisely one that violates (1).
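In symbols, using the standard definition (spelled out here just for concreteness):

\[
A \text{ is Dedekind infinite} \iff \text{there is an injective } f\colon A \to A \text{ with } f(A) \subsetneq A.
\]

For instance, f(n) = n + 1 shows that the set of natural numbers is Dedekind infinite: f is injective, but 0 is not in its range.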

In Zermelo-Fraenkel (ZF) set theory, it is easy to prove that any Dedekind infinite set is infinite. More interestingly, assuming the consistency of ZF, there are models of ZF with infinite sets that are Dedekind finite.

It is easy to check that if A is a Dedekind finite set, then A and every subset of A satisfy (1). Thus an infinite but Dedekind finite set escapes most if not all of the standard paradoxes of infinity. Perhaps enemies of actual infinity should thus object only to Dedekind infinities, not to all infinities?

However, infinite Dedekind finite sets are paradoxical in their own special way: they have no countably infinite subsets—no subsets that can be put into one-to-one correspondence with the natural numbers. You might think this is absurd: shouldn’t you be able to take one element of an infinite Dedekind finite set, then another, then another, and since you’ll never run out of elements (if you did, the set wouldn’t be finite), you’d form a countably infinite sequence of elements? But, no: the problem is that repeating the “taking” requires the Axiom of Choice, and infinite Dedekind finite sets only live in set-theoretic universes without the Axiom of Choice.
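The contrast can be made precise with the standard ZF argument (a textbook sketch, not specific to this post). A single injection needs no choice: given an injective f : A → A and some a in A outside the range of f, the orbit

\[
a,\ f(a),\ f^2(a),\ f^3(a),\ \dots
\]

consists of pairwise distinct elements (if f^m(a) = f^n(a) with m < n, injectivity would put a in the range of f), and so yields a countably infinite subset. An infinite Dedekind finite set supplies no such f, and without the Axiom of Choice the infinitely many separate “takings” cannot be assembled into a single function.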

In fact, I think infinite Dedekind finite sets are much more paradoxical than run-of-the-mill Dedekind infinite sets.

Do we learn anything philosophical here? I am not sure, but perhaps. If infinite Dedekind finite sets are extremely paradoxical, then by the same token (1) seems an unreasonable condition in the infinite case. For Dedekind finitude is precisely defined by (1).

Top-down mereology and the special and general composition questions

Van Inwagen distinguishes the General Composition Question:

  • (GCQ) What are the nonmereological necessary and sufficient conditions for the xs to compose y?

from the Special Composition Question:

  • (SCQ) What are the nonmereological necessary and sufficient conditions for the xs to compose something?

He thinks that the GCQ is probably unanswerable, but attempts to give an answer to the SCQ. Note that an answer to the GCQ immediately yields an answer to the SCQ by existential quantification over y.
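Schematically (my formalization of the point, not van Inwagen’s wording): if C(the xs, y) states the nonmereological necessary and sufficient conditions that answer the GCQ, then the SCQ is answered by

\[
\text{the } xs \text{ compose something} \iff \exists y\ C(\text{the } xs, y).
\]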

There are two main families of mereological theories:

  • Bottom-Up: The proper parts explain the whole.

  • Top-Down: The whole explains the proper parts.

Van Inwagen generally eschews talk of explanation, but the spirit of his work is in the bottom-up camp.

It’s interesting to ask how the GCQ and SCQ look to theorists in the top-down camp. On top-down theories, the xs that compose y are explained by or identical to y. It seems implausible that in all cases there would be some relation among the xs, not involving y, which marks the xs out as all being parts of one whole. That would be like thinking there is a necessary and sufficient condition for Alice, Bob and Carl to be siblings that makes no reference to a parent. Therefore, it is likely that any top-down answer to the SCQ must make reference to the whole that is composed of the xs. But if we can give such an answer, then it is very likely that we can also give an answer to the GCQ.

If my plausible reasoning is right, then on top-down theories either:

  1. An answer can be given to the GCQ, or

  2. No answer can be given to the SCQ.

Wednesday, November 3, 2021

Distant collaboration

Suppose mereological universalism is true, and that I make a pizza and an alien a long time ago in a galaxy far, far away (or even in another universe) makes a sandwich. Then I and the alien have engaged in an amazing collaboration spanning time and space, and maybe even across universes, and produced a fusion of a pizza and a sandwich. Surely I cannot so very easily collaborate in the production of things with beings so far off!

Substantial change and simultaneous causation

Some philosophers hold that all fundamental instances of causal relations are simultaneous. Many of these philosophers are Aristotelians, though presentism provides a plausible route to this simultaneity doctrine independently of (other) Aristotelian considerations. I think the phenomenon of substantial change shows that this simultaneity doctrine is false.

When a horse changes into a carcass (or an electron-positron pair changes into a pair of photons) we have substantial change. Clearly, in such a case, the horse is the cause of the carcass.

Notice, however, that there is never a time at which both the horse and the carcass exist. This means that substantial change involves properly diachronic rather than simultaneous causation. And while there is a way of building a diachronic causal explanation out of simultaneous causation and persistence, I don’t see any way of doing that here. The trick there was to use the persistence of one or both of the relata of simultaneous causation to extend the relationship temporally. But here adding the earlier persistence of the horse or the later persistence of the carcass does not help, because it is still not the case that the horse and carcass have any moment of co-existence.

I think that this is where an Aristotelian will try to bring in matter. The horse has matter. When the horse perishes, its matter persists and comes to make up a carcass. We have a simultaneous relation between the horse and its matter, and then we have a simultaneous relation between the matter and the carcass.

But this doesn’t solve the problem. For the horse doesn’t just cause a heap of matter—it causes a carcass of a particular sort, made up of substances other than the substance of the horse. What persists in substantial change, on a classic Aristotelian view, is at most the prime matter. And the prime matter does not explain the form had by the carcass (or the parts of the carcass, if the carcass counts as a heap of substances).

We can even see the problem at the level of the accidents. Take the horse’s shape Sh and the carcass’s very similar shape Sc. Then, clearly, Sh causes Sc. One can see this empirically: if one rearranges the legs of the dying horse, the carcass’s shape changes correspondingly. But the only relevant thing that on the Aristotelian story persists across the change from horse to carcass is the matter: Sh does not persist, nor does anything that grounds Sh.

On reflection, the last line of thought shows that there could be a problem even for accidental change. For it seems likely to be the case that a substance can have an accident A which partially causes itself to be replaced by an accident B incompatible with it. For instance, consider my current shape S1. In a moment, my body will shift into a new shape S2. The shape S1 partially causes the shape S2. Yet there is never a moment where I have both shapes. Indeed, at any time where I have shape S1, that’s the only shape I have. So, the shape S1 cannot cause any different future shape of me, assuming causation is always simultaneous.

This problem may be less serious than for substantial change, however, because one might say that S1 does not cause S2, but there is some deeper persisting accident that first causes me to have S1 and then causes me to have S2, so that there is no more a causal relationship between S1 and S2 than between a shadow of a moving person first appearing in one place and then in another. I think it is implausible that all cases where an accident A partially causes its immediate replacement by an accident B can be accounted for by positing A and B to be mere epiphenomena, but I am not sure I have as good an argument against this as I do against the substantial change case.

I conclude from all this that while simultaneous causation is possible, it is not the case that all diachronic causation reduces to simultaneous causation.

Monotheism and anthropomorphism

Xenophanes famously lambasted Greek religion for its anthropomorphism:

if cattle or lions had hands, so as to paint with their hands and produce works of art as men do, they would paint their gods and give them bodies in form like their own: horses like horses, cattle like cattle.

Two and a half millennia later, accusations of anthropomorphism continue to be made against monotheistic religions, typically by naturalists.

I was thinking about this, and had an odd thought. According to monotheism, the root of all explanation is the activity of God. According to standard naturalism, the root of all explanation is the activity of the fundamental physical entities, either particles or fields. But humans are more like fundamental physical entities than like the God of the monotheistic religions. The difference between us and the fundamental physical entities is merely finite. The difference between us and God is infinite. Thus, in an important sense, it is standard naturalism that is more anthropomorphic in its fundamental explanatory agents than monotheism.

If we do not feel this—if we feel ourselves more God-like than electron-like—then we are infinitely elevating ourselves or infinitely demoting God or both.

That said, the three Western monotheistic religions do think that the physical universe is made for us. Thus, while the religions are not anthropomorphic, they do have an anthropocentric view of our physical universe. Interestingly, though, to some (albeit lesser) extent so does the most plausible current naturalist view, namely a multiverse theory together with the weak anthropic principle.

Tuesday, November 2, 2021

Two theories of divine conservation

Here are two theories of divine conservation, tendentiously labeled:

  • Occasionalist conservation: That a creature that previously existed continues to exist is solely explained by God’s power.

  • Concurrentist conservation: That a creature that previously existed continues to exist is explained by God’s power concurring with creaturely causal powers (typically, the creature’s power to continue to exist).

It is usual in classical theism to say that divine conservation is very similar to divine creation. This comparison might seem to favor occasionalist conservation. However, that is not so clear once we realize that classical theism holds that all finite things are created by God, and hence creation itself comes in two varieties:

  • Creation ex nihilo: God creates something by the sole exercise of his power.

  • Concurrentist creation: God creates things by concurring with a creaturely cause.

Most of the objects familiar to us are the product of concurrentist creation. Thus, an acorn is produced by God in concurrence with an oak tree, and a car in concurrence with a factory. (The human soul is an exception according to Catholic tradition.)

Because of this, even if we opt for concurrentist conservation, we can still save the comparison between conservation and creation, as long as we remember that often creation is concurrentist creation.

Which of the two theories of conservation should we prefer?

On general principles, I think we have some reason to prefer concurrentist conservation, simply because it preserves the explanatory connections within the natural world better.

However, if we insist on presentism, then we may be stuck with occasionalist conservation, because presentism makes cross-time causal relations problematic.

[Edited Nov. 4, 2021 to replace "cooperation" with the more usual term "concurrence".]

Leibniz on the PSR

According to the Principle of Sufficient Reason (PSR), every contingent fact has a sufficient reason. What does “sufficient” mean here? A natural thought is that it means that the reason is logically sufficient for the fact. My own work on the PSR rejects this natural thought. I say that a sufficient reason is one that suffices to explain the fact, not necessarily one that suffices for the fact to be true. I occasionally worry that this is too wimpy a take on the PSR, indeed a kind of bait-and-switch.

When I worry about this, it helps me to come back to Leibniz, whom nobody considers a wimp with respect to the PSR. How does Leibniz understand “sufficient”?

In the Principles of Nature and Grace, Leibniz talks of the

grand principe … qui porte que rien ne se fait sans raison suffisante; c’est-à-dire que rien n’arrive sans qu’il soit possible à celui qui connaîtrait assez les choses de rendre une raison qui suffise pour déterminer pourquoi il en est ainsi, et non pas autrement [great principle … which holds that nothing happens without sufficient reason; that is to say, that nothing happens without its being possible for someone who knows enough about how things are to give a reason that suffices to determine why it is so and not otherwise]. (my italics)

Leibniz does not say that the reason is sufficient to determine the fact. Rather, Leibniz carefully says that the reason is sufficient to determine why the fact occurred. You can read off the explanation, the answer to the why question, from the reason, but no claim is made that you can read the explained fact off from it.

Indeed, the only necessitation in the paragraph is hypothetical:

De plus, supposé que des choses doivent exister, il faut qu’on puisse rendre raison pourquoi elles doivent exister ainsi, et non autrement. [Further, supposing things must exist, it has to be possible to give a reason why they must exist so and not otherwise.] (my italics)

I wish Leibniz had held this weaker picture of sufficient reason consistently. Sadly for me, he did not. In a 1716 letter to Bourguet he writes:

Mr. Clark … n’a pas bien compris la force de cette maxime, que rien n’arrive sans une raison suffisante pour le determiner. [Mr. Clark … has not understood well the force of the maxim that nothing happens without a reason sufficing to determine it.]

Oh well.

I comfort myself, however, that my philosophical hero does, after all, have two kinds of necessity, and hopefully the determination in the PSR involves the weaker one.

Monday, November 1, 2021

Divine conservation, existential inertia, presentism and simultaneous causation

As a four-dimensionalist, I have been puzzled both by the arguments that divine conservation is necessary to secure the persistence of substances and by the idea of existential inertia as a metaphysical principle.

Temporal extent seems little different metaphysically to me from spatial thickness, the “problem of persistence” seems to me to be a pseudo-problem, and both solutions to this pseudo-problem seem to me to be confused.

On the existential inertia side, a metaphysical principle that objects continue to exist unless their existence is interrupted by some other cause seems as ridiculous to me as a principle that objects are maximally thick (and long and deep) unless and until their thickness (or length or depth) is stopped by other causes. And divine action is needed to secure persistence only to the extent that it is needed to secure thickness (and length or depth). That said, I do think divine action is needed to secure thickness, as well as all other accidents of a thing, because substances are in some sense causes of their accidents, but all creaturely causation requires divine cooperation. But that, I think, is a slightly different line of argument from the arguments for persistence of substances (in particular, I don’t have a good argument for it that doesn’t already presuppose theism, while the arguments for conservation are supposed to provide reasons for accepting theism).

However, I now see how it is that presentism yields a real problem of persistence. Here’s the line of thought. First, note that contrary to the protestations of some presentists, it is very plausible that:

  1. Presentism implies that all causation is simultaneous.

For something that is caused exists, at least at the time at which it is caused, and it cannot have as its cause something that does not then exist. But given presentism, only something present exists. So at a time at which E is caused, if the cause of E did not exist, we would have the exercise of a non-existent causal power, which is absurd.

But even if all causation is simultaneous, nonetheless:

  2. There is diachronic causal explanation.

Setting the alarm at night explains why it goes off in the morning, even if by the simultaneity thesis (1), setting the alarm cannot be the cause of the alarm going off. Diachronic causal explanation cannot simply be causation. So what is it? Here is the best presentist story I know (and it’s not original to me).

First, we can get some temporal extension by the following trick. Imagine a thing A persists over an interval of time from t1 to t2. At t2 it causes a thing B that persists over an interval of time from t2 to t3. The existence of A at t1 then causally explains the existence of B at t3. Note, however, that the existence of A at t1 does not cause the existence of B at t3. Causation happens at t2 (or perhaps over an interval of times—thus, A might persist until some time t2.5 < t3, and be causing B over all of the interval from t2 to t2.5), but not at any earlier time, since at earlier times B doesn’t exist. Thus, by supplementing the simultaneous causal relation between A and B at t2 with the persistence of A before t2 and/or the persistence of B after t2, we can extend the relation into what one might call a fundamental instance of diachronic causal explanation.
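Here is the structure of such a link in schematic form (my own diagram of the trick just described):

\[
\underbrace{A \text{ persists}}_{[t_1,\ t_2]}
\ \xrightarrow{\ \text{simultaneous causation at } t_2\ }\
\underbrace{B \text{ persists}}_{[t_2,\ t_3]}
\]

The existence of A at t1 thereby causally explains the existence of B at t3, even though the only causation occurs at t2.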

Thus, a fundamental link in diachronic causal explanation consists of an instance of causation preceded and/or followed by an instance of persistence of the causing thing and/or the caused thing respectively. And a non-fundamental instance of diachronic causal explanation is a chain of fundamental links of diachronic causal explanations. (It may be that these diachronic causal explanations are very close to what Aquinas calls per accidens causal sequences.)

But for this to be genuine explanation, the persistence of the cause and/or effect needs to have an explanation. Divine conservation provides a very neat explanation: God necessarily exists eternally, and is simultaneous with everything (there may be some complications, though, with a timeless being given presentism), so God can cause A to persist from t1 to t2 and B to persist from t2 to t3. Thus, fundamental links in diachronic causal explanations depend on divine conservation.

An existential inertia view also gives a solution, but a far inferior one. For existential inertia requires the earlier existence of A, together with the metaphysical principle of existential inertia, to explain the later existence of A. But such a cross-time explanatory relation seems too much like the already rejected idea of cross-time causation. For it’s looking like A qua existing at t1 explains A existing at t2. But at t2, according to presentism A qua existing at t1 is in the unreal past, and it is absurd to suppose that what is in the unreal past can explain something real now.

In summary, given presentism, all fundamental explanatory relations need to be simultaneous. But it is an evident fact that there are diachronic causal explanatory relations. The only way to build those out of simultaneous explanatory relations is by supposing a being that can be simultaneous with things that exist at more than one time—a timelessly eternal being—whose causal efficacy provides the diachronic aspects of the explanatory linkage.

That said, I think there are two serious weaknesses in this story. The first is that it’s a close cousin of occasionalism. For there is no purely non-divine explanatory chain from the setting of the alarm at night to the alarm going off in the morning—divine action explains the persistences that make the chain diachronic.

A second problem is the puzzle of what explains why A causes B at t2 rather than as soon as A comes into existence. Why does A “wait” until t2 to cause B? Crucial to the story is that A is the whole cause, which then persists from t1 to t2. But why doesn’t it cause B right away, with B then causing whatever effect it has right away, and with everything in the whole causal history of the universe happening at once? Again, one might give this an occasionalist solution—A causes B only because God cooperates with creaturely causation, and God might hold off his cooperation until t2. But this makes the story even more occasionalist, by making God involved in the timing of causation.

Shuffling infinitely many cards

Imagine there is an infinite stack of cards labeled with the natural numbers (so each card has a different number, and every natural number is the number of some card). In the year 2021 − n, you perfectly shuffled the bottom n cards in the stack.

Now you draw the bottom card from the deck. Whatever card you see, you are nearly certain that the next card will have a bigger number. Why? Well, let’s say that the card you drew has the number N on it. Next consider the next M cards in the deck for some number M much bigger than N. At most N − 1 of these have numbers smaller than N on them. Since these bottom M cards were perfectly shuffled during the year 2021 − (M + 1), the probability that the number you draw next is smaller than N is at most (N − 1)/M (for instance, at most 9/10,000 if N = 10 and M = 10,000). And since M can be made arbitrarily large, it follows that the probability that the next number is smaller than N is infinitesimal. And the same reasoning applies to the next card and so on. Thus, after each card you draw, you are nearly certain that the next card will have a bigger number.

And, yet, here’s something you can be pretty confident of: The bottom 100 cards are not in ascending order, since they got perfectly shuffled in 1921, and after that you’ve shuffled smaller subsets of the bottom 100 cards, which would not make the bottom 100 cards any less than perfectly shuffled. So you can be quite confident that your reasoning in the previous paragraph will fail. Indeed, intuitively, you expect it to fail about half the time. And yet you can’t rationally resist engaging in this reasoning!
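As a sanity check on the expectation that the reasoning fails about half the time, here is a minimal simulation of a finite analogue (my own sketch; the infinite setup itself cannot be simulated, and the deck size K and trial count are arbitrary choices):

import random

def next_bigger_frequency(K=100, trials=2000):
    """Finite analogue: for n = K down to 2 (the years 2021 - K through
    2019, in temporal order), perfectly shuffle the bottom n cards, with
    deck[0] as the bottom of the stack. Report how often the second card
    from the bottom is bigger than the bottom card."""
    hits = 0
    for _ in range(trials):
        deck = list(range(K))
        for n in range(K, 1, -1):  # oldest shuffle first
            bottom = deck[:n]
            random.shuffle(bottom)
            deck[:n] = bottom
        if deck[1] > deck[0]:
            hits += 1
    return hits / trials

print(next_bigger_frequency())  # hovers around 0.5, not near 1

With finitely many cards, the bottom of the deck ends up uniformly shuffled, so the “next card is bigger” frequency sits near 1/2; the near-certainty argument gets its grip only from the infinitely many cards above.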

The best explanation of what went wrong is, I think, causal finitism: you cannot have a causal process that has infinitely many causal antecedents.

Determinism and thought

Occasionally, people have thought that one can refute determinism as follows:

  1. If determinism is true, then all our thinking is determined.

  2. If our thinking is determined, then it is irrational to trust its conclusions.

  3. It is not irrational to trust the conclusions of our thinking.

  4. So, determinism is not true.

But now notice that, plausibly, even if we have indeterministic free will, other animals don’t. And yet it seems at least as reasonable to trust a dog’s epistemic judgment—say, as to the presence of an intruder—as a human’s. Nor would learning that a dog’s thinking is determined or not determined make any difference to our trust in its reliability.

One might respond that things are different in a first-person case. But I don’t see why.