Thursday, August 18, 2022

Reasons and permissions

The fact that a large animal is attacking me would give me both permission and reason to kill the animal. On the basis of cases like that, one might hypothesize that permissions to ϕ come from particularly strong reasons to ϕ.

But there are cases where things are quite different. There is an inexpensive watch on a shelf beside me that I am permitted to destroy. What gives me that permission? It is that I own it. But the very thing that gives me permission, my ownership, also gives me a reason not to smash it. So sometimes the same feature of reality that makes ϕing permissible is also a reason against ϕing.

This is a bit odd. For if it were impermissible to destroy the watch, that would be a conclusive reason against the smashing. So it seems that my ownership moves me from having a conclusive reason against smashing to not having a conclusive reason against smashing. Yet it does that while at the same time being a reason not to smash. Interesting.

I suspect there may be an argument against utilitarianism somewhere in the vicinity.

Error in "Non-classical probabilities invariant under symmetries"

Yesterday, I discovered an error in the proof of “Theorem 1” of this recent paper of mine (arxiv version). The error occurs in the harder direction of Lemma 2. I do not know how to fix the error. Here’s what I know to remain of the “Theorem”. The proof that (i) implies (ii)–(v) is unaffected. The proof that (iv) implies (ii)–(v) is also unaffected, and likewise unaffected is the equivalence of (ii), (iii) and (v).

But I no longer know if any of (ii)–(v) imply (i). However, (i) is true under the stronger assumption that G is supramenable or that there exist invariant hyperreal probabilities.

The above remarks suffice for almost all the philosophical points in the paper (though the point that behavior for countable sets is decisive is no longer supported in the full conditional probability case), and for all the applications I mention in the paper.

I do not know if “Theorem 1” is true. This is an interesting mathematical question.

Update: The error has been fixed and Theorem 1's proof now works.

Non-uniqueness of "uniform" full conditional probabilities

Consider a fair spinner that uniformly chooses an angle between 0 and 360. Intuitively, I’ve just fully described a probabilistic situation. In classical probability theory, there is indeed a very natural model of this: Lebesgue probability measure on the unit circle. This model’s probability measure can be proved to be the unique function λ on the subsets of the unit circle that satisfies these conditions:

  1. Kolmogorov axioms with countable additivity

  2. completeness: if λ(B) is zero and A ⊆ B, then λ is defined for A

  3. rotational invariance

  4. at least one arc on the circle of length greater than zero and less than 360 has an assigned probability

  5. minimality: any other function that satisfies 1-4 agrees with λ on the sets where λ is defined.

In that sense “uniformly chooses” can be given a precise and unique meaning.

But we may be philosophically unhappy with λ as our probabilistic model of the spinner for one of two reasons. First, but less importantly, we may want to have meaningful probabilities for all subsets of the unit circle, while λ famously has “non-measurable sets” where it is not defined. Second, we may want to do justice to such intuitions as that it is more likely that the spinner will land exactly at 0 or 180 than that it will land exactly at 0. But λ as applied to any finite (in fact, any countable) set of positions yields zero: there is no chance of the spinner landing there. Moreover, we want to be able to update our probabilities on learning, say, that the spinner landed on 0 or 180—presumably, after learning that disjunction, we want 0 and 180 to have probability 1/2—but λ provides no guidance how to do that.

One way to solve this is to move to probabilities whose values are in some field extending the reals, say the hyperreals. Then we can assign a non-zero (but in some cases infinitesimal) probability to every subset of the circle. But this comes with two serious costs. First, we lose rotational invariance: it is easy to prove that we cannot have rotational invariance in such a context. Second, we lose uniqueness: there are many ways of assigning non-zero probabilities, and we know of no plausible set of conditions that makes the assignment unique. Both costs put in serious question whether we have captured the notion of “uniform distribution”, because uniformity sure sounds like it should involve rotational invariance and be the kind of property that should uniquely determine the probability model given some plausible assumptions like (1)–(5).

There is another approach for which one might have hope: use Popper functions, i.e., take conditional probabilities to be primitive. It follows from results of Armstrong and the supramenability of the group of rotations on the circle that there is a rotation-invariant (and, if we like, rotation and reflection invariant) finitely-additive full conditional probability on the circle, which assigns a meaningful real number to P(A|B) for any subsets A and B with B non-empty. Moreover, if Ω is the whole circle, then we can further require that P(A|Ω) = λ(A) if λ(A) is defined. And now we can compare the probability of two points and the probability of one point. For although P({x,y}|Ω) = λ({x,y}) = 0 = λ({x}) = P({x}|Ω) when x ≠ y, there is a natural sense in which {x, y} is more likely than {x} because P({x}|{x,y}) = 1/2.
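As a minimal illustration of the finite-condition fragment of such a conditional probability (my sketch, not Armstrong's construction: I am assuming that finite non-empty condition sets get the uniform counting rule):

```python
from fractions import Fraction

# Toy fragment of a full conditional probability, restricted to finite
# condition sets: conditioning on a finite non-empty B is done by
# counting, the natural uniform choice on finite sets (an assumption of
# this sketch, not something proved here).

def cond_prob(A, B):
    """P(A | B) for finite non-empty B, using counting measure on B."""
    A, B = set(A), set(B)
    if not B:
        raise ValueError("the condition must be non-empty")
    return Fraction(len(A & B), len(B))

# The sense in which {x, y} is "more likely" than {x}, even though both
# get unconditional probability 0 under Lebesgue measure:
x, y = 0, 180
print(cond_prob({x}, {x, y}))   # 1/2
print(cond_prob({x}, {x}))      # 1
```

Counting on finite conditions is what delivers P({x}|{x,y}) = 1/2 in the comparison above.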

Unfortunately, the conditional probability approach still doesn’t have uniqueness, and this is the point of this post. Let’s say that what we require of our conditional probability assignment P is this:

  6. standard axioms of finitely-additive full conditional probabilities

  7. (strong) rotational and reflection invariance

  8. being defined for all pairs of subsets of the circle with the second one non-empty

  9. P(A|Ω) = λ(A) for any Lebesgue-measurable A.

Unfortunately, these conditions fail to uniquely define P. In fact, they fail to uniquely define P(A|B) for countably infinite B.

Here’s why. Let E be a countably infinite subset of the circle with the following property: for any non-identity isometry ρ of the circle (combination of rotations and reflections), E ∩ ρE is finite. (One way to generate E is this. Let E0 be any singleton. Given En, let Gn be the set of isometries ρ such that ρx = y for some x, y in En. Then Gn is finite. Let z be any point not in {ρx : ρ ∈ Gn, x ∈ En}. Let En+1 = En ∪ {z} (since z is not unique, we’re using the Axiom of Dependent Choice, but a lot of other stuff depends on stronger versions of Choice anyway). Let E be the union of the En. Then it’s easy to see that E ∩ ρE is finite for any non-identity isometry ρ.)
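The finite stages of this construction can be imitated numerically. The sketch below (a toy of mine with random floating-point angles standing in for the choice of z) keeps all pairwise differences and all pairwise sums of points of E distinct, which guarantees that any rotation moves at most one point of E back into E and any reflection matches up at most one pair of points, so E ∩ ρE stays finite:

```python
import random

# A non-identity isometry of the circle is a rotation x -> (x + t) % 360
# or a reflection x -> (s - x) % 360. A rotation carries two points of E
# back into E only if some nonzero pairwise difference repeats; a
# reflection does so for more than one pair only if some pairwise sum
# repeats. So distinct differences and sums keep E ∩ ρE finite.

random.seed(1)

def key(a):
    return round(a % 360, 6)

E = []
while len(E) < 25:
    z = random.uniform(0, 360)
    diffs = {key(a - b) for a in E for b in E if a != b}
    sums = {key(a + b) for i, a in enumerate(E) for b in E[:i]}
    new_diffs = {key(z - x) for x in E} | {key(x - z) for x in E}
    new_sums = {key(z + x) for x in E}
    if not (new_diffs & diffs) and not (new_sums & sums) \
            and len(new_diffs) == 2 * len(E) and len(new_sums) == len(E):
        E.append(z)

# Check the defining property at this finite stage.
diffs = [key(a - b) for a in E for b in E if a != b]
sums = [key(a + b) for i, a in enumerate(E) for b in E[:i]]
assert len(diffs) == len(set(diffs))
assert len(sums) == len(set(sums))
```

With random reals the distinctness conditions hold almost surely, so the loop is really just making the Dependent Choice steps explicit.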

Let μ be any finitely additive probability on E that assigns zero to finite subsets. Note that μ is not unique: there are many such μ. Now define a finitely additive measure ν on Ω as follows. If A is uncountable, let ν(A) = ∞. Otherwise, let ν(A) = ∑ρ μ(E ∩ ρA), where the sum is taken over all isometries ρ. The condition that E ∩ ρE is finite for non-identity ρ and that μ is zero for finite sets ensures that if A ⊆ E, then ν(A) = μ(A). It is clear that ν is isometrically invariant.

Let λ* be any invariant extension of Lebesgue measure to a finitely additive measure on all subsets of the circle. By Armstrong’s results (most relevantly Proposition 1.7), there is a full conditional probability P satisfying (6)–(8) and such that P(A|E) = μ(A ∩ E) and P(A|Ω) = λ*(A) (here we use the fact that ν(A) = ∞ whenever λ*(A) > 0, since λ*(A) > 0 only for uncountable A). Since μ wasn’t unique and E is countable, conditions (6)–(9) fail to uniquely define P for countably infinite conditions.

Wednesday, August 17, 2022

Murder without an intention of harm

I used to think that every murder is an intentional killing. But this is incorrect: beheading John the Baptist was murder even if the intention was solely to put his head on a platter rather than to kill him. Cases like that once made me think something like this: murder is an intentional injury that one expects to be lethal. (Cf. Ramraj 2000.)

But now I think there can be cases of murder where there is no intent to injure at all. Suppose that amoral Alice wants to learn what an exploding aircraft looks like. To that end, she launches an anti-aircraft missile at a civilian jetliner. She has the ordinary knowledge that the explosion will kill everyone on board, but in her total amorality she no more intends this than the ordinary person intends to contribute to wearing out shoes when going for a walk. Alice has committed murder, but without any intention to kill.

In terms of the Principle of Double Effect, Alice’s wrongdoing lies in the lack of proportionality between the foreseen gravely bad effect (mass slaughter) and the foreseen trivially good effect (satisfaction of desire for unimportant knowledge), rather than in a wrongful intention, at least if we bracket questions of positive law.

It is tempting to conclude that every immoral killing is a murder. But that’s not right, either. If Bob is engaged in a defensive just war and has been legitimately ordered not to kill any of the enemy before 7 pm no matter what (so as not to alert the enemy, say), and at 6 pm he kills an enemy invader in self-defense, then he does not commit murder, but he acts wrongly in disobeying an order.

It seems that for an immoral act to be a murder it needs to be wrong because of the lethality of the harm as such, rather than due to some incidental reason, such as the lethality of the harm as contrary to a valid order.

Friday, August 12, 2022

Two kinds of norms

On a natural law theory of morality, some moral facts are nature-relative, i.e., grounded in the particular kind of nature the particular rational being has, and other moral facts are structural, a part of the very structure of rationality, and will apply to any possible rational being.

Thus, the norm of monogamy for human beings (assuming, as I think, that there is one) is surely nature-relative—it seems very likely that there could be rational animals to whom a different reproductive strategy would be natural. But what Aquinas calls the first precept of the natural law, namely that the good is to be pursued and the bad is to be avoided, is structural—it applies to any rational being.

I think that evidence for a human norm being nature-relative is that either the space of norms contains other very similar norms nearby or it’s vague which precise norm obtains. For instance, take the norm of respecting one’s parents. This norm implies that we should favor our parents over strangers in our beneficence. However, how much more should we favor our parents over strangers? If there is a precise answer to that question, then there will be other nearby norms—not human ones—that give a slightly different precise answer (requiring a greater or smaller degree of favoring). On the other hand, that the good is to be pursued does not seem to have very similar norms near it—it has an elegant simplicity that a variant norm like “The good is to be pursued except when it is an aesthetic good” does not have.

I used to think the norms involved in double-effect reasoning were structural and hence applied to all possible rational beings. I am no longer confident of this. Take the case of pushing a large person in front of a trolley without their permission in order to stop the trolley from hitting five others. We can now imagine a continuum of cases depending on how thick the clothing worn by the large person is. If the clothing is sufficiently thick, the large person has an extremely small probability of being hurt. If the clothing is ordinary clothing, the large person is nearly certain to die. In between is a continuum of probabilities of death ranging from negligibly close to zero to negligibly close to one, and a continuum of probabilities of other forms of injury. It is wrong to push the large person if the chance of survival is one in a trillion. It is not wrong to push the large person if the chance of their being at all hurt is one in a trillion. Somewhere there is a transition between impermissibility and permissibility. Either that transition is vague or it’s sharp. If it’s sharp, then there are norms very similar to the one we have. If it’s vague, then it’s vague which of many very similar norms we have.

In either case, I think this is evidence that the relevant norm here is nature-relative rather than structural. If this is right, then even if it is wrong for us to push the large person in the paradigmatic case where death is, say, 99.99% certain, there could be rational beings for whom this is not wrong.

This leads to an interesting hypothesis about God’s ethics (somewhat similar to things Mark Murphy has considered):

  1. God is only subject to structural moral norms and does not have any nature-relative moral norms.

I do not endorse (1), but I think it is a hypothesis well worth considering.

Monday, August 8, 2022

Might well

It’s occurred to me that the “might well happen that” operator makes for an interesting modality. It divides into an epistemic and a metaphysical version. In both cases, if it might well happen that p, then p is possible (in the respective sense). In both cases, there is a tempting paraphrase of the operator into a probability: on the epistemic side, one might say that it might well happen that p if and only if p has a sufficiently high epistemic probability, and on the metaphysical side, one might say that it might well happen that p if and only if p has a sufficiently high chance given the contextually relevant background. In both cases, it is not clear that the probabilistic paraphrase is correct—there may be (might well be!) cases of “might well happen that” where numerical probabilities have no place. And in both cases, “might well happen that” seems context-sensitive and vague. It might well be that thinking about this operator could lead to progress on something interesting.

Monday, August 1, 2022

Triple effect, looping trolley and felix culpa

Frances Kamm uses her principle of triple effect to resolve the loop version of the trolley problem. On the loop version, as usual, the main track branches into two tracks, track A with five people and track B with one person, and the trolley is heading for track A. But now the two tracks join via a loop, so if there were no one on either track, a trolley that goes on track A will come back on track B and vice versa. If we had five people on track A and no one on track B, and we redirected the trolley to track B, it would go on track B, loop around, and fatally hit the people on track A anyway. But the one person actually on track B is big enough that if the trolley goes on track B, it will be stopped by the impact and the five people will be saved.

The problem with redirecting to track B on the loop version of the trolley problem is that it seems that a part of your intention is that the trolley should hit the person on track B, since it is that impact which stops the trolley from hitting the five people on track A. And so you are intending harm to the person on track B.

In her Intricate Ethics book, Kamm gives basically this story about redirecting the trolley in the loop case:

  • Initial Intention: Redirect the trolley to track B to prevent the danger of the five people being hit from the front.

  • Initial Defeater: The five people come to be in danger of being hit from the back by the trolley.

  • Defeater to Initial Defeater: The one person on track B blocks the trolley and prevents the danger of their being hit from the back.

The important point here is that the defeater to the defeater is not intended—it is just a defeater to a defeater. Thus there is no intention to block the trolley via the one person on track B, and hence that person’s being hit is not a case of their intentionally being used as a means to saving lives.

But this defeater-defeater story is mistaken as it stands. For given the presence of the person on track B, there is no danger of the five people being hit from the back. Thus, there is no initial defeater here.

Now, if you don’t know about the one person on track B, you would have a defeater to the redirection, namely the defeater that there is danger of being hit from the back. But learning about the person on track B would not provide a defeater to that defeater—it would simply remove the defeater by showing that the danger doesn’t exist.

That the story doesn’t have a defeater-defeater structure does not mean that one is intending the one person to be hit. Kamm might still be right in thinking there is no intention to block the trolley via the one person on track B. But I am dubious of Kamm’s story now, because I am dubious that the danger of being hit from the front yields a worthy initial intention. For there is nothing particularly bad about being hit from the front. It is only the danger of being hit simpliciter that seems worth preventing.

It is interesting to me to note that even if Kamm’s story doesn’t have defeater-defeater form, the main place where I want to use her triple effect account seems to still have defeater-defeater form. That place is the felix culpa, where God allows Adam and Eve to exercise their free will, even though he knows that this would or might well (depending on details about theories of foreknowledge and middle knowledge) result in their sinning, and God’s reasoning involves the great goods of salvation history that come from Adam and Eve’s sin.

  • Initial Intention: Allow Adam and Eve to exercise their free will.

  • Initial Defeater: They will or might well sin.

  • Defeater to Initial Defeater: Great goods will come about.

Here the initial defeater is not mistaken as in the looping trolley case—the sin or its possibility is really real. Moreover, while it’s not an initially worthy intention to prevent people from being hit from the front, unless they aren’t going to be hit from behind (or some other direction) either, it is an initially worthy intention to allow Adam and Eve to exercise their free will, even if no further goods come about, because free will is intrinsically good.

Thus we can criticize Kamm’s own use of triple effect while yet preserving what I think is a really important theological application.

Wednesday, July 27, 2022

The accuracy argument for probabilism

A standard scoring rule argument for probabilism—the doctrine that credence assignments should satisfy the axioms of probability—goes as follows. If s is a scoring rule on a finite probability space Ω, so that s(c)(ω) is the epistemic utility of credence assignment c at ω in Ω, and (a) s is strictly proper and (b) s is continuous, then for any credence c that does not satisfy the axioms of probability, there is a credence p that does satisfy them such that s(p)(ω) is better than s(c)(ω) for all ω. This means that it’s stupid to have a non-probabilistic credence c, since you could instead replace it with p, and do better, no matter what.
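Here is a concrete instance of the dominance phenomenon (a textbook-style illustration with the Brier score on a two-point space; the theorem itself covers any strictly proper continuous rule): projecting a non-probabilistic credence orthogonally onto the probability simplex yields a probability that scores strictly better at every world.

```python
import random

# Omega = {0, 1}; a credence is a pair (c0, c1) = (c({omega=0}), c({omega=1})).
# It is probabilistic iff c0 + c1 == 1 with c0, c1 in [0, 1].

def brier(c, omega):
    """Epistemic utility: negative squared distance to the truth vertex."""
    truth = (1.0, 0.0) if omega == 0 else (0.0, 1.0)
    return -((c[0] - truth[0]) ** 2 + (c[1] - truth[1]) ** 2)

def project(c):
    """Orthogonal projection of c onto the line c0 + c1 = 1."""
    shift = (1.0 - c[0] - c[1]) / 2.0
    return (c[0] + shift, c[1] + shift)

random.seed(0)
for _ in range(1000):
    c = (random.uniform(0, 1), random.uniform(0, 1))
    if abs(c[0] + c[1] - 1.0) < 1e-6:
        continue  # (essentially) probabilistic already
    p = project(c)
    # p dominates c: better at every world.
    assert brier(p, 0) > brier(c, 0) and brier(p, 1) > brier(c, 1)
```

The geometry does the work: since c − p is orthogonal to the simplex and p − v (for each truth vertex v) lies along it, the squared distance from c to v exceeds that from p to v at both worlds.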

Here is a problem with the dialectics behind this argument. Let P be the set of all credence assignments that satisfy the axioms of probability. But suppose that I think that there is some nonempty set M of credence assignments that do not satisfy the axioms of probability but are rationally just as good as those in P. Then I will think there is some way of making decisions using credences in M, just as good as the way of making decisions using credences in P. The best candidate in the literature for this is to use a level set integral, which allows one to assign an expected value EcU to any utility assignment U even if c is not a probability. Note that EpU is the standard mathematical expectation with respect to p if p is a probability.
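A discrete sketch of a level set integral on a finite space may help (my Choquet-style formulation, assumed here to match the intended definition in spirit if not in detail): for a nonnegative utility U, sum the credence of the upper level sets weighted by the level increments. For an additive credence this recovers ordinary expectation, while it still returns a value for non-additive credences.

```python
# c maps subsets (frozensets) of omega to [0, 1], with c(empty) = 0 and
# c(omega) = 1; additivity is NOT assumed.

def level_set_integral(U, c, omega):
    """E_c U = sum over ascending levels u of c({U >= u}) * (u - prev),
    for nonnegative U on a finite space."""
    levels = sorted(set(U.values()))
    total, prev = 0.0, 0.0
    for u in levels:
        upper = frozenset(w for w in omega if U[w] >= u)
        total += c(upper) * (u - prev)
        prev = u
    return total

omega = frozenset({"a", "b", "c"})
U = {"a": 1.0, "b": 4.0, "c": 10.0}

# For a probability, the level set integral is the usual expectation.
prob = {"a": 0.5, "b": 0.3, "c": 0.2}
c_additive = lambda S: sum(prob[w] for w in S)
assert abs(level_set_integral(U, c_additive, omega)
           - sum(prob[w] * U[w] for w in omega)) < 1e-9

# A non-additive credence still gets a well-defined "expected value".
c_nonadd = lambda S: (len(S) / 3) ** 2   # monotone but not additive
print(level_set_integral(U, c_nonadd, omega))   # ≈ 3.0
```

Handling utilities that take negative values requires an extra correction term, which I omit in this sketch.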

The argument for probabilism assumed two things about the scoring rule: strict propriety and continuity. Strict propriety is the claim that:

  1. Eps(p) > Eps(c) whenever c is a credence other than p

for any probability p. In words, by the lights of a probability p, we get the best expected epistemic utility if we make p our credence.
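Condition (1) can be checked numerically for a concrete strictly proper rule. The sketch below uses the Brier score on a two-point space (a standard example of mine, not something assumed by the argument above) and verifies that E_p s(p) > E_p s(c) for randomly chosen credences c ≠ p:

```python
import random

# Omega = {0, 1}; credences are pairs over the two atoms.
# s(c)(omega) = -sum over atoms A of (c(A) - 1_A(omega))^2.

def brier(c, omega):
    truth = (1.0, 0.0) if omega == 0 else (0.0, 1.0)
    return -((c[0] - truth[0]) ** 2 + (c[1] - truth[1]) ** 2)

def expect(p, score_of):
    """E_p of a function of omega, for a probability p."""
    return p[0] * score_of(0) + p[1] * score_of(1)

random.seed(0)
for _ in range(1000):
    p0 = random.uniform(0, 1)
    p = (p0, 1 - p0)
    c = (random.uniform(0, 1), random.uniform(0, 1))
    if max(abs(c[0] - p[0]), abs(c[1] - p[1])) < 1e-6:
        continue  # too close to p for a strict comparison
    assert expect(p, lambda w: brier(p, w)) > expect(p, lambda w: brier(c, w))
```

Algebraically, E_p s(p) − E_p s(c) works out to (c0 − p0)² + (c1 − p1)², which is strictly positive whenever c ≠ p; that is exactly strict propriety for this rule.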

Now, if I am not convinced by the argument that (1) should hold for any probability p and any credence c other than p, then I will be unmoved by the scoring rule argument for probabilism. So suppose that I am convinced. But recall that I think that credences in M are just as rationally good as the probabilities in P. Because of this, if I find (1) convincing for all probabilities p, I will also find it convincing for all credences p in M, where Ep is my preferred way of calculating expected utilities—say, a level set integral.

Thus, if I am convinced by the argument for strict propriety, I will just as much accept (1) for p in M as for p in P. But now we have:

Theorem 1. If Ep is strongly monotonic for all p ∈ P ∪ M and coincides with mathematical expectation for p ∈ P, and (1) holds for all p in P ∪ M, where M is non-empty, then s is not continuous on P.

(Strong monotonicity means that if U < V everywhere then EpU < EpV. The Theorem follows immediately from the Pettigrew-Nielsen-Pruss domination theorem.)

Suppose then that I am convinced that a scoring rule s should be continuous (either on P or on all of P ∪ M). Then the conclusion I am apt to draw is that there just is no scoring rule that satisfies all the desiderata I want: continuity as well as (1) holding for all p ∈ P ∪ M.

In other words, the argument for probabilism will be convincing to me only if my reason to think (1) is true for all p in P is significantly stronger than my reason to think (1) is true for all p in M, and if I have a sufficiently strong reason to think that there is a scoring rule satisfying all the true rational desiderata to conclude that (1)’s holding for all p in M is not among the true rational desiderata even though its holding for all p in P is.

And once I additionally learn about the difficulties in defining sensible scoring rules on infinite spaces, I will be less confident in thinking there is a scoring rule that satisfies all the true rational desiderata on a scoring rule.

Tuesday, July 26, 2022

Instrumental bads

Suppose that a process Q has a chance r of producing some non-instrumentally bad result B, and nothing else of relevance. That fact gives us reason not to actualize Q. But suppose Q is actualized. Is it bad?

Well, if it’s bad, it seems it’s only instrumentally bad. It is no worse to be killed by a well-aimed arrow than by a well-aimed bullet, even though in the case of the well-aimed arrow the process of a deadly projectile’s flight lasts longer. Yet if a process producing a bad result were non-instrumentally bad, it would be worse if it lasted longer.

So we now have four options:

  1. Q is always instrumentally bad (whether or not B eventuates)

  2. Q is never instrumentally bad

  3. Q is instrumentally bad if and only if B eventuates

  4. Q is instrumentally bad if and only if B does not eventuate.

Option (4) is crazy. Option (2) destroys the very idea of an instrumental bad. So that leaves options (1) and (3).

If we opt for option (1), then we can have a world that contains instrumental bads without any non-instrumental bads—just imagine that Q obtains, B does not eventuate, and nothing else that’s bad ever happens. This seems a little counterintuitive: instrumental bads are derivatively bad, but how can something be derivatively bad without anything that is non-derivatively bad?

That suggests we should go for option (3): a process that has a chance of leading to a non-instrumental bad is bad only when the non-instrumental bad eventuates.

But now imagine Molinism is true. Suppose that God knows that Q, if actualized, would not lead to B, even though it has a non-zero chance r of doing so. In that case, the fact that Q has a chance r > 0 of leading to B is no reason for God not to actualize Q. But that something is bad is always a reason not to actualize it. If instrumental bads are an exception to this, then instrumental bads aren’t bads.

Now, I think Molinism is false. But whether (3) is true should not, it seems, depend on whether Molinism is true. So if (3) is false on Molinism, it is simply false.

So we seem to be stuck!

Maybe the right move is this. Fake money isn’t money and merely instrumental bads aren’t bad. This allows us an escape from the Molinism argument. For if merely instrumental bads aren’t bad, there is no problem about the fact that the Molinist God has no reason not to produce them.

Another move might be to say that (3) is true, but disproves Molinism. This doesn’t strike me as right, but maybe it’s defensible.

Until this is resolved, one really shouldn’t be running any arguments that depend on instrumental bads being actually bad.

The intrinsic badness of certain future tensed facts on presentism

It is bad that tomorrow someone will be in intense pain. On eternalism, we can easily explain this: tomorrow’s pain is just as real as today’s. But on presentism and growing block, future pains don’t exist.

Presumably, the presentist and growing blocker will say that the tensed fact of there being an intense pain tomorrow is bad, and this bad tensed fact presently exists.

Is this badness of the future tensed fact about the pain an instrumental or non-instrumental badness? If it’s instrumental, it is not clear what it could be instrumental to. The main candidate (apart from special cases where there is an obvious candidate, such as when the pain leads to despair) is that the fact that there will be a pain tomorrow is instrumental to tomorrow’s pain. But the fact that tomorrow there will be pain won’t cause that pain—otherwise, it would be trivial that every future event has a cause.

So the present badness of there being a pain tomorrow would be non-instrumental. But now imagine two scenarios with finite time lines.

  • Scenario A: There is a mindless universe with a day of random particle movement, followed by the formation of a brain which has intense pain for a minute, followed by the end of time.

  • Scenario B: There is a mindless universe with a century of random particle movement, followed by the formation of a brain which has intense pain for a minute, followed by the end of time.

Let’s suppose we find ourselves at the last moment of time in one scenario or the other. Then in Scenario A, there was a day of the obtaining of a “future pain fact”, and in Scenario B, there was a century of the obtaining of a “future pain fact”. If a future pain fact is a non-instrumentally bad thing, then there was non-instrumentally bad stuff in Scenario B for a much longer period of time than in Scenario A, and so Scenario B is much worse than Scenario A with respect to future pain. But that seems mistaken: the greater length of time during which there is a future pain fact does not seem any reason to prefer one scenario over another.

Friday, July 22, 2022

Should the A-theorist talk of tensed worlds?

For this post, suppose that an A-theory of time is true, so there is an absolute present. If we think of possible worlds as fully encoding how things can be so that:

  1. A proposition p is possible if and only if p holds at some world,

then we live in different possible worlds at different times. For today a Friday is absolutely present and tomorrow a Saturday is absolutely present, and so how things are is different between today and tomorrow (or, in terms of propositions, that it’s Saturday is false but possible, so there must be a world where it’s true). In other words, given (1), the A-theorist is forced to think of worlds as tensed, as centered on a time.

But there is something a little counterintuitive about us living in different worlds at different times.

However, the A-theorist can avoid the counterintuitive conclusion by limiting truth at worlds to propositions that cannot change their truth value. The most straightforward way of doing that is to say:

  2. Only propositions whose truth value cannot change hold at worlds

and restrict (1) to such propositions.

This, however, requires the rejection of the following plausible claim:

  3. If (p or q) is true at a world w then p is true at w or q is true at w.

For the disjunction that it’s Friday or it’s not Friday is true at some world, since it’s a proposition that can’t change truth value, but neither disjunct can be true at a world by (2).

Alternately, we might limit the propositions true at a world to those expressible in B-language. But if our A-theorist is a presentist, then this still leads to a rejection of (3). For on presentism, the fundamental quantifiers quantify over present things, and the quantifiers of B-language are defined in terms of them. In particular, the B-language statement “There exist (tenselessly) dinosaurs” is to be understood as the disjunction “There existed, exist or will exist dinosaurs.” But if we have (3), then worlds will have to be tensed, because different disjuncts of “There existed, exist or will exist dinosaurs” will hold at different times. A similar issue comes up for growing block.

So on the most popular A-theories (presentism and growing block), we have to either allow that we inhabit different worlds at different times or deny (3). I think the better move is to allow that we inhabit different worlds at different times.

Thursday, July 21, 2022

Mill on injustice

Mill thinks that:

  1. An action is unjust if society has a utility-based reason to punish actions of that type.

  2. An action is wrong if there is a utility-based reason not to perform that action.

Mill writes as if the unjust were a subset of the wrong. But it need not be. Suppose that powerful aliens have a weird religious view on which dyeing one’s hair green ought to be punished with a week in jail, and they announce that any country that refuses to enforce such a punishment as part of the criminal code will be completely annihilated. In that case, according to (1), dyeing one’s hair green is unjust. But it is not guaranteed to be wrong according to (2). The pleasure of having green hair could be greater than the unpleasantness of a week in jail, depending on details about the prison system and one’s aesthetic preferences.

The problem with (1), I think, is that utility-based reasons to punish actions of some type need have little to do with moral reasons, utilitarian or not, against actions of that type.

Tuesday, July 19, 2022

The three big mysteries of the concrete world

There are three big mysterious aspects of the concrete world around us:

  • the causal

  • the mental

  • the normative.

The three mysteries are interwoven. Teleology is the domain of the interplay of the causal and the normative. And the mental always comes along with the normative, and often with the causal.

There is no hope of reducing the normative or the mental to the causal. Some have tried to reduce the normative to the mental, either via relativism (reducing to the finite mental) or Plantingan proper functionalism (reducing to the divine mental), neither of which appears particularly appealing in the end. I’ve toyed with reducing the mental to the normative, but while there is some hope of making progress on intentionality in this way, I doubt that there is a solution to the problem of consciousness in this direction.

Theism provides an elegant non-reductive story on which the three mysterious aspects of concrete reality are all found interwoven in one perfect being, and indeed follow from the perfection of that perfect being.

I wonder, too, if there is some way of seeing the three mysteries as reflective of the persons of the Trinity. Maybe the Father, the ultimate source of the other persons, is reflected in causality. The Son, the Logos, in the mental. And the Spirit, the loving concord of the Father and the Son may be reflected in the normative. But such analogies can be drawn in many ways, and I wouldn’t be very confident of them.

Friday, July 15, 2022

Necessity and the open future

Suppose the future is open. Then it is not true that tomorrow Jones will freely mow the lawn. Moreover, it is necessarily not true that Jones will freely mow the lawn, since on open future views it is impossible for an open claim about future free actions to be true. But what is necessarily not true is impossible. Hence it is impossible that Jones will freely mow the lawn. But that seems precisely the kind of thing the open futurist wishes to avoid saying.

Wednesday, July 13, 2022

Two difficulties for wavefunction realism

According to wavefunction realism, we should think of the wavefunction of the universe—considered as a square-integrable function on R^(3n), where n is the number of particles—as a kind of fundamental physical field.

Here are two interesting consequences of wavefunction realism. First, it seems like it should be logically possible for the fundamental physical field to take any logically coherent combination of values on R^(3n). But now imagine that the initial conditions of the wavefunction “field” have it take a combination of values that is not a square-integrable function, either because it is nonmeasurable or because it is measurable but not square-integrable. Then the Schroedinger equation “wouldn’t know” what to do with the wavefunction. In other words, for quantum physics to work, given wavefunction realism, we need a very special initial combination of values of the “wavefunction field”. This is not a knockdown argument, but it does suggest an underexplored need for fine-tuning of initial conditions.
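A one-dimensional toy calculation (my illustration; the particular functions are not from the post) shows how a perfectly definite field configuration can fail square-integrability:

```python
# Compare psi1(x) = 1/x and psi2(x) = x**(-0.25) on (0, 1]: the integral
# of |psi1|^2 diverges as the cutoff shrinks, while |psi2|^2 converges
# (to 2), so only psi2 could be a wavefunction. (One-dimensional toy;
# the space in the post is R^(3n).)

def integral_of_square(psi, eps, n=100000):
    """Midpoint-rule integral of |psi|^2 over [eps, 1]."""
    h = (1.0 - eps) / n
    return sum(psi(eps + (i + 0.5) * h) ** 2 * h for i in range(n))

for eps in (1e-2, 1e-4):
    print(eps, integral_of_square(lambda x: 1.0 / x, eps))     # grows like 1/eps
    print(eps, integral_of_square(lambda x: x ** -0.25, eps))  # approaches 2
```

Both configurations are equally "logically coherent" as field assignments; only the integrability of the square separates the physically admissible one from the other.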

Second, the solutions to the Schroedinger equation, understood distributionally, are only defined up to sets of measure zero. In other words, even though the Schroedinger equation is generally considered to be deterministic (any indeterminism in quantum mechanics comes in elsewhere, say in collapse), nonetheless the solutions to the equation are underdetermined when they are considered as square-integrable fields on R^(3n)—if ψ(⋅,t) is a solution for a given set of initial conditions, so is any function that differs from ψ(⋅,t) only on a set of measure zero. Granted, any two candidates for the wavefunction that differ only on a set of measure zero provide the exact same empirical predictions. However, it is still troubling to think that so much of physical reality would be ungoverned by the laws. (There might be a solution using the lifting theorem mentioned in footnote 6 here, though.)