## Sunday, December 31, 2017

### Reiter snowflake growth model

The previous post's quick cellular automaton approach to snowflakes was a bit crude. For more realism (but still with a lot of oversimplification), I wrote code for the Reiter snowflake growth model. This is a "real-valued" cellular automaton (or a finite-difference evolution).

With some randomness, things look even more realistic.
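In outline, the model can be sketched in Python as follows. This is a minimal sketch under my reading of Reiter's paper, not the code used for the post: the parameter names `alpha` (diffusion), `beta` (background vapor) and `gamma` (vapor added to receptive cells) follow Reiter, while the square array with wrap-around `np.roll` neighborhoods standing in for a hexagonal grid is my simplification.

```python
import numpy as np

# Axial-coordinate shifts giving the six hex neighbors on a sheared square grid.
HEX_SHIFTS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def reiter_step(s, alpha=1.0, beta=0.4, gamma=0.0001):
    """One update of Reiter's model. A cell is ice when its value reaches 1."""
    ice = s >= 1.0
    # Receptive cells: frozen cells and their hex neighbors.
    receptive = ice.copy()
    for di, dj in HEX_SHIFTS:
        receptive |= np.roll(np.roll(ice, di, axis=0), dj, axis=1)
    u = np.where(receptive, 0.0, s)          # diffusing part of the field
    v = np.where(receptive, s + gamma, 0.0)  # frozen part; receptive cells gain gamma
    # Diffuse u toward the average of the six neighbors.
    nbr = np.zeros_like(u)
    for di, dj in HEX_SHIFTS:
        nbr += np.roll(np.roll(u, di, axis=0), dj, axis=1)
    u = u + (alpha / 2.0) * (nbr / 6.0 - u)
    s = u + v
    # Hold the grid edge at the background vapor level.
    s[0, :] = s[-1, :] = s[:, 0] = s[:, -1] = beta
    return s

def grow(n=81, steps=30, alpha=1.0, beta=0.4, gamma=0.0001):
    s = np.full((n, n), beta)
    s[n // 2, n // 2] = 1.0  # a single seed of ice in the middle
    for _ in range(steps):
        s = reiter_step(s, alpha, beta, gamma)
    return s
```

Varying `beta` and `gamma` changes the flake's morphology, and adding noise to `gamma` is one way to introduce randomness.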

## Saturday, December 23, 2017

### Cellular automaton snowflake generator

I made a simple cellular automaton snowflake generator in OpenSCAD. By default it uses Stephen Wolfram's rule that a hex cell stays alive once alive and a cell is generated if it has exactly one neighbor.

Adding a tiny bit of indeterminism (a 0.5 chance of generating a cell instead of certainty) makes things look more like a real snowflake, though. Tap on Customizer in the above link if you want to play with it.
And here it is on our Christmas tree. Merry Christmas!
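The OpenSCAD source isn't reproduced here, but the rule itself is easy to sketch. Here is a rough Python rendering (my own function and variable names, not the Customizer's), with `p` as the generation probability:

```python
import random

# Axial-coordinate shifts giving the six hex neighbors.
HEX_SHIFTS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def step(alive, p=1.0, rng=random):
    """One generation: living cells persist; a dead cell with exactly one
    living neighbor is generated with probability p."""
    new = set(alive)
    counts = {}
    for q, r in alive:
        for dq, dr in HEX_SHIFTS:
            cell = (q + dq, r + dr)
            if cell not in alive:
                counts[cell] = counts.get(cell, 0) + 1
    for cell, n in counts.items():
        if n == 1 and rng.random() < p:
            new.add(cell)
    return new

def snowflake(generations=5, p=1.0, seed=0):
    rng = random.Random(seed)
    alive = {(0, 0)}  # start from a single hex cell
    for _ in range(generations):
        alive = step(alive, p, rng)
    return alive
```

With `p=1.0` this is Wolfram's deterministic rule; with `p=0.5` each eligible cell is a coin flip, which is what roughens the arms.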

## Monday, December 18, 2017

### What are properties?

A difficult metaphysical question is what makes something be a property rather than a particular.

In general, heavy-weight Platonism answers the question of what makes x be F, when being F is fundamental, as follows: x instantiates the property of Fness.

It is hard to see what could be more fundamental on Platonism than being a property. So, a heavy-weight Platonist has an elegant answer as to what makes something be a property: it instantiates the second-order property of propertyhood.

## Tuesday, December 12, 2017

### Zero chance events

A standard thing to say in the philosophy of science about such stochastic explanation questions is that one can give an answer in terms of the objective chance of the event, even when that chance is less than 1/2.

But consider the question: Why did this atom decay exactly at t1?

Here, the objective chance may well be zero. And surely that an event had zero chance of happening does nothing to explain the event. After all, that the decay at t1 had zero chance does not distinguish the atom’s decaying at t1 from the atom’s turning into a square circle at t1. And to explain something we minimally need to say something that distinguishes it from an impossibility.

Here, I think, the causal powers theorist can say something (even though I may just want to reject the presuppositions; see the Response to Objection 2, below). Stochastic systems have a plurality of causal powers for incompatible outcomes. The electron in a mixed-spin state may have both a causal power to have its spin measured as up and a causal power to have its spin measured as down. Normally, some of the causal powers are apt to prevail more than others, and hence have a greater chance than others. But even the weaker causal powers are there, and we can explain the event by citing them. The electron’s spin was measured as, say, up because it had a causal power for that outcome; had it been measured as down, that would have been because it had a causal power for that outcome. We can give further detail here: we can say that one of these causal powers is stronger than the other. And the stronger causal power has, because it is stronger, a higher chance of prevailing. But even the weaker causal power can prevail, and when it does, we can explain the outcome in terms of it.

This story works just fine even when the chances are zero. The weaker causal power could be so weak that the chance associated with it has to be quantified as zero. But we can still explain the activation of the weaker causal power.

So, going back to the decay, we can say that the atom had a causal power to decay at t1, and that’s why it decayed at t1. That causal power was of minimal strength, and so the chance of the decay has to be quantified as zero. But we still have an explanation.

The causal powers story about the atom encodes information that the chances do not. The chances do not distinguish the atom’s turning into a square circle from the atom’s decaying exactly at t1. The causal powers do, since the atom has a power to decay but no power to turn into a square circle.

Objection 1: Let’s say that the atom has twice as high a chance of decaying over the interval of times [0, 2] as over the interval of times [0, 1]. How do we explain that in terms of causal powers, given that there are equally many (i.e., continuum many) causal powers to decay at precise times in [0, 2] as there are causal powers to decay at precise times in [0, 1]?

Response: It could be that just as the causal power story carries information the chance story does not, the chance story could carry information the causal power story does not, and both stories reflect aspects of reality.

Another story could be that there are causal powers associated with intervals as well as points of time, and the causal power to decay at a time in [0, 2] is twice as strong as the causal power to decay at a time in [0, 1]. There are difficulties here, however, with thinking about the fundamentality relations between the powers associated with different intervals. I fear that there is no avoiding an infinite sequence of causal powers that violates causal finitism, and I am inclined to reject the possibility of exact decay times, and hence to reject the explanatory question I started this post with. I don’t see much hope for a measurement of an exact time after all. But someone with other commitments about finitism could have a story.

Objection 2: This is just like a dormitive power explanation of opium making someone sleepy.

Response: Opium’s dormitive power is fundamental or not. If opium has a fundamental dormitive power, then the dormitive power explanation is perfectly fine. That’s just the kind of explanation we have to have at the fundamental level. If the dormitive power explanation is not fundamental, then the explanation is correct but not as informative as an explanation in terms of more fundamental things would be.

Likewise, the power to decay at t1 either is or is not fundamental. If it is fundamental, then the explanation in terms of the power is perfectly fine. If it is not, then there is a more fundamental explanation. But probably the more fundamental explanation will also involve minimal strength powers with zero activation chances.

## Friday, December 8, 2017

### Two explanatory stories in Natural Law

One of the most fundamental claims of classical Natural Law (NL), as I understand it, is that:

1. The right exercise of our wills is precisely that which fulfills the proper functions of the will.

This claim is, I think, close to trivial. What is much less trivial is the further NL claim that the “fulfills the proper functions” explains the “right”. There are two (at least) ways of running this explanatory story:

A. To fulfill the proper function of the will is good for us, and it’s right to pursue what’s good for us.

B. It is directly true that the right is what fulfills the will’s proper function. Exercising the proper function of the will, like exercising any other natural faculty, is of course good for us, but that isn’t what makes it right.

Story A makes the theory a form of eudaimonism, since it implies that what is good for us is generally to be pursued.

Story B does not claim that what is good for us is generally to be pursued, though it is compatible with that claim. Story B claims that one of the things that are good for us, the proper exercise of the will, is to be done, but it does not claim that other things good for us are to be pursued, and does not even claim that that one thing is to be pursued (for it is one thing to do what is right and another to pursue doing what is right).

As far as it goes, Story B is compatible with, say, total selflessness, the theory that the one thing to be pursued is the good of everybody else. To get total selflessness, all one needs is to supplement Story B with the theory that the proper function of our will is fulfilled precisely in the pursuit of the good of everybody else. Likewise, Story B is compatible with eudaimonism: one just needs to add that the pursuit of our good is what in fact fulfills our will. But it is also compatible with kakodaimonism, the theory that the one thing to be pursued is one’s own languishing. (One might think that it would be self-defeating to pursue one’s own harm if pursuit of one’s harm were the proper function of our wills, since the pursuit would fulfill one’s will and hence be good for one. But that would be to confuse the good pursued with the good of pursuit.)

In other words, Story B has much less in the way of normative ethics implications: it is very strictly a story about the meta-level.

There is reason to prefer Story A: it leads to a helpful normative ethics by itself.

There is reason to prefer Story B: the normative ethics that Story A leads to is a form of rational egoism.

I like Story B. But Story B must be supplemented with an account of what fulfills the will.

The answer to that is love.

### From particular perfections to necessary existence

This argument is valid:

1. Necessarily, any morally perfect being can morally perfectly deal with any possible situation.

2. Necessarily, one can only morally deal with a situation one would exist in.

3. So, necessarily, any morally perfect being is a necessary being.

That said, (1) sounds a bit fishy to me. One may want to say instead:

4. Necessarily, any morally perfect being can morally perfectly deal with any possible situation in which it exists.

But that’s actually a bit weaker than we want. Imagine a being that can morally perfectly deal with one and only one situation: the one where it has promised to eat a delicious cookie that is being offered to it. But imagine, too, that the being can only exist in that one situation. Then (4) is satisfied, but surely being able to fulfill a promise to eat a cookie isn’t enough for moral perfection. So we do actually want to strengthen (4). Maybe there is something in between (1) and (4) that works. Maybe there isn’t.

There are other arguments of the above sort that one can run, based on premises like:

5. A maximally powerful being can weakly actualize any possibility.

6. An epistemically perfect being can know any possible proposition.

7. A rationally perfect being can rationally deal with any possible situation.

It is looking like moral perfection, maximal power, epistemic perfection and rational perfection each individually imply necessary existence.

If this is right, then we have an ontological argument:

8. Possibly, there is a morally perfect or a maximally powerful or an epistemically perfect or a rationally perfect being.

9. So, possibly there is a necessary being. (By arguments like above.)

10. So, there is a necessary being.
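The move from the possibility of a necessary being to its actuality in the last two steps is the characteristic S5 inference. Schematically (this gloss is mine, not the post's, and the formalization of "necessary being" as a boxed existence claim is itself contestable), writing p for "a necessary being exists":

```latex
\Diamond \Box p \to \Box p \quad \text{(S5: the possibly necessary is necessary)}
\Box p \to p \quad \text{(T: the necessary is actual)}
\therefore \quad \Diamond \Box p \to p
```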

I am not saying that this is a super-convincing argument. But it does provide some evidence for its conclusion.

## Thursday, December 7, 2017

### Pro-life outreach to fellow Christians

At a recent pro-life event that I participated in, the panel was asked how one might convince a pastor that one’s church should support the pro-life cause, notwithstanding pro-choice congregation members. A panelist offered an answer that drew helpfully on the standard texts of Scripture that bear on the humanity of the fetus.

A day after the event, one of the audience members told me that there was too much focus on the status of the fetus, because even if students are convinced that human life starts at conception, they still think that because of conflict between the rights of the fetus and the rights of the mother, abortion is permissible.

In light of this, it seems to me that a crucial part of pro-life outreach to fellow Christians—including but not just pastors—is to focus on more general texts about our duties towards the vulnerable and needy. While a major part of the debate over abortion is indeed focused on the moral status of the fetus, both motivationally and intellectually it seems really important to focus on a deep underlying assumption that we do not have much in the way of onerous duties towards others, unless we have voluntarily undertaken those duties. Yet the Gospel teaches that we do have such duties, duties binding under pain of eternal damnation. Thus in addition to a reliance on texts about the status of the unborn, one needs motivationally powerful texts like:

Then he will say to those at his left hand, ‘Depart from me, you cursed, into the eternal fire prepared for the devil and his angels; for I was hungry and you gave me no food, I was thirsty and you gave me no drink, I was a stranger and you did not welcome me, naked and you did not clothe me, sick and in prison and you did not visit me.’ Then they also will answer, ‘Lord, when did we see thee hungry or thirsty or a stranger or naked or sick or in prison, and did not minister to thee?’ Then he will answer them, ‘Truly, I say to you, as you did it not to one of the least of these, you did it not to me.’ And they will go away into eternal punishment, but the righteous into eternal life. (Matthew 25:41-46).

These texts make it clear that we have highly onerous duties towards others, duties we may have done nothing to acquire. It is very difficult to defend disconnection from the violinist while thinking about such texts.

But of course if we use such texts then we had better be sure that we live so that they do not condemn us, too. For they are indeed terrifying texts on many fronts. May God have mercy on all our souls!

## Wednesday, December 6, 2017

### Being a bad person and doing wrong

Until recently, I assumed everyone agreed to something like this principle:

1. If performing an action constitutes you as a bad person, the action is morally wrong.

Virtue ethicists, of course, make this a biconditional that defines wrongness, but I would have assumed that just about everybody would agree that the conditional (1) is true.

But I am now thinking that (1) is not as widely accepted as I thought. What makes me think this is the way that Thomson’s violinist case resonates so strongly with so many people, and presumably continues to do so even if one adds the necessary proviso that the violinist is one’s own minor child (otherwise it wouldn’t be applicable to typical cases of abortion). Yet it seems utterly obvious to me that:

2. If the violinist is your own minor child, disconnecting from the violinist makes you a bad parent and a bad person.

But one cannot consistently accept (1) and (2) and think it is morally permissible to disconnect from the violinist when the violinist is one’s own child.

I would love to see empirical data on whether people who find the violinist case compelling deny (1) or deny (2). Thomson herself probably denies (2).

## Tuesday, December 5, 2017

### More on omniscience

In an earlier post, I argued that the definition of omniscience as knowing every truth and believing nothing but truths is insufficient for omniscience because an omniscient being would also be certain, and knowledge of every truth does not guarantee certainty of every truth.

Here’s another thing that the definition leaves out. Normally, when we say that someone knows or believes p, we are talking about non-occurrent knowledge. We say things like: “Alice knows the atomic number of carbon”, even while Alice is not thinking about carbon. However, I think an omniscient being (one that enjoys the perfection of knowledge) will need to have occurrent knowledge of all truths. Moreover, the omniscient being will need to always be attending to every piece of that knowledge to a maximal degree. (It is not a perfection in us to attend maximally to everything we think about, because for us attending to one thing often excludes attending to another. But that’s due to our imperfection.)

### Symmetry and Thomson's violinist

I’ve been thinking about Thomson’s Violinist case. I should say about that case that it seems utterly obvious to me that in the case where the violinist is your child and you are in no long term danger from the connection, it’s a vicious failure of parental duties to disconnect.

But my current interest is not so much in figuring out the case itself, as trying to figure out why so many people find it compelling. To that end, I’ve been thinking about two symmetric cases.

Lifeboat: You, who are a really bad pianist, and a really bad violinist find yourselves drifting in a lifeboat in space, kidnapped and put there by the Music Lovers’ Society, to keep you both from performing publicly. The lifeboat is designed for one person, and the hyperspace engines don’t work with the mass of two people. Fortunately, your calculations show that in nine months the lifeboat drift will get you back to earth, and there is air, food and water enough for two for nine months. But it’s really uncomfortable. You both have to sit squished together on an uncomfortable chair, you’re away from your friends (though you can talk to them on hyperspace Skype whenever you like), from your job, etc. Waste disposal is handled hygienically but it’s a rather disgusting process under the circumstances. However, when the violinist is asleep, you could just push him out of the airlock, and then use the hyperspace engines to get back home in a day. Of course, the violinist could do the same to you.

Frankenkidney: Both you, who are an excellent pianist, and an excellent violinist are suffering from kidney failure. The Music Lovers’ Society kidnapped both of you and out of two malfunctioning kidneys made a single functioning frankenkidney. When you awake, you are taped to the violinist, with your abdomens touching. You know that where the abdomens are touching you each have a hole, and in that hole is that frankenkidney. You could wait nine months because that’s how long it takes for the lab to culture brand new kidneys for you and the violinist. Or you could pull the frankenkidney from the violinist’s body, put it in yours, and then have some straightforward surgery to close up the hole. Of course, the violinist could do the same to you.

I am thinking—perhaps too optimistically—that in these cases people would say: “You’re in the same boat as the violinist and just need to make the best of it.”

Notice that in terms of the consequences of your decision, as well as desert and contract, these cases are very much like the original violinist story. Your personal space is encroached on by a violinist who is innocent of the encroachment. You can end the encroachment at the expense of the violinist’s life.

But the cases are different from the original. In the original, the situation benefits the violinist and harms you. In the symmetric cases, either you are both harmed (Lifeboat) or both benefited (Frankenkidney). Moreover, in the original case, the violinist continues to derive a benefit from you without giving anything back. (I think that’s something disanalogous to the abortion case, by the way, since the life of one’s child is a benefit to one, even if one does not see it this way.)

However, I do not know that these differences about the flow of benefits and harms matter, given that neither you nor the violinist have any responsibility for being in the situation in any of the three cases. Suppose we take the Lifeboat case and add this to the story:

Lifeboat Supplement: After a few hours in the lifeboat, the violinist’s body’s thermoregulation has shut down and the lifeboat’s heating system is working poorly. Your body heat is enough to maintain a liveable temperature for the two of you in the lifeboat, and medicine can fix his problem once you get back to civilization, but if the violinist threw you out the airlock, he’d soon die of hypothermia. You'd do fine physically without him, though.

In the supplemented lifeboat story, there is a net flow of life-giving benefits from you to the violinist, just as in the original violinist case. But it would be absurd that as soon as the violinist’s body’s thermoregulation shuts down you can throw him out the airlock. Yet the resulting story is now very much like the original violinist story, I think.

My conjecture is that a central reason the violinist case seems compelling to so many has to do with the asymmetry between the two parties, and when one primes the thinking by starting with a more symmetric situation, as in the Lifeboat case, the intuitions change, and perhaps stay changed even if one adds an asymmetric supplement.

By the way, for another symmetric analogy, see Himma.

## Monday, December 4, 2017

### Omniscience, omnipotence and perfection

Recently, I’ve been worried about arguments like this:

1. It is always more perfect to be able to do more things.

2. Being able to do impossible things is a way of being able to do more things.

3. So, a perfect being can do impossible things.

But I really don’t want to embrace 3.

It’s just occurred to me, though, that the argument 1-3 is parallel to the clearly silly argument:

4. It is always more perfect to know more things.

5. Knowing falsehoods is a way of knowing more things.

6. So, a perfect being knows falsehoods.

Once we realize that among “more things” there could be falsehoods, it becomes clear that 4 as it stands is false and needs to be restricted to truths. But arguably what truths are to knowledge, possibles are to power (I think this may be a Jon Kvanvig point, actually). So we should restrict 1 to the possibles.

## Friday, December 1, 2017

### Laws of nature and moral rules

There is a lot to be said for the Mill-Ramsey-Lewis (MRL) account of laws as the axioms of a system that optimizes a balance of informativeness and simplicity. But there are really serious problems. The deepest is that the MRL regularities seem to systematize but not explain.

Similarly, there is a lot to be said for rule utilitarianism, but it also suffers from really serious problems. The deepest is probably that the fact that an action-type is beneficial under normal circumstances just does not seem to be a compelling moral reason to perform a harmful instance of it.

The MRL account of laws and rule utilitarianism are similar and a number of the problems facing them are structurally similar. Most deeply, the MRL laws don’t move things physically and rule utilitarian rules don’t move us morally. But there are also structurally similar technical problems, such as the account of simplicity, the way in which simplicity is to be balanced with informativeness or beneficiality, the apparent influence of future facts on present laws or moral truths, etc.

It is interesting that many of the problems of both accounts can be solved by bringing in theism. For instance, one can get a theistic MRL account of laws by saying that laws are the divinely willed axioms of a system that optimizes a divinely defined balance of informativeness and simplicity. And one can get a theistic rule utilitarian account by saying that the moral rules are the divinely commanded rules that optimize a divinely defined balance of beneficiality and simplicity.

(I myself would prefer not to go for something quite so simple on the moral side: I’d prefer to insert our natures to mediate between God and our duties.)

## Thursday, November 30, 2017

### Self-sacrifice and bigotry

Consider:
Case 1: A child is drowning in a dirty pond. You can easily pull out the child. But you’ve got cuts all over your dominant arm and the water is full of nasty bacteria and medical help is a week away. If you go in the water to pull out the child, your arm will get infected, become gangrenous and in a week it will be amputated. There will be no social losses or gains to you.
Case 2: A child is drowning in a clean pond. You can easily pull out the child. But the child is a member of a despised minority group, and you will be ostracized by your friends and family for life for your rescue. There will be no physical losses or gains to you.

Here is my intuition. In both cases, it would be a good thing to rescue the child. But in Case 1, unless you have special duties (e.g., it’s your own child), you do not have a duty to rescue given the physical costs. In Case 2, however, you do have a duty to rescue, despite the social costs.

The difference between the two cases does not, I think, lie in its being worse to lose an arm than to be ostracized. Imagine your community has a rite of passage that involves swimming in the dirty pond with the cuts on your arm, and you’d be ostracized if you don’t. You might well reasonably judge it worthwhile—but still, I think, the intuition remains that in Case 2 you ought to pull out the child, while in Case 1 it’s supererogatory. So it seems then you might have a duty to undertake the greater sacrifice (facing social stigma in Case 2) without a duty to undertake the lesser sacrifice (amputation in Case 1). But for simplicity let’s just suppose that the harms to you in the two cases are on par.

Is it that physical harm excuses one from the duty to rescue the child but social harm does not? I don’t think so.

Case 3: A child is being murdered by drowning in a clean pond. You can easily pull out the child. But if you do, the murderer will punish you for it by transporting you away from your home community to a foreign community where you will never learn the difficult language and hence will not have friends.

We can set this up so the harm in all three cases is equal. But my intuition is that Case 3 is like Case 1: in both cases it is supererogatory to rescue the child but there is no duty.

In Cases 2 and 3 we have equal social harms, but I feel a difference. (Maybe you don’t!) Here’s one consideration that would explain the difference. That an action gains one the praise and friendship of bigots qua bigots does not count much in favor of the action, even if, and perhaps even especially if, such praise and friendship would make one’s life significantly more pleasant. Similarly, that an action loses one the friendship of bigots, and does so precisely through their bigotry, is not much of a consideration against the action. I say “not much”, because there might be instrumental gains and losses in both cases to be accounted for.

Here’s a second consideration. Perhaps if I refrain from doing something directly because doing it will lose me bigots’ friendship or gain me their stigma, I am thereby complicit in the bigotry. In Case 2, then, I need to ignore the direct loss of goods of social connectedness in considering whether to rescue the child. I need to say to myself: “When those are the conditions of their friendship, so much the worse for their friendship.” In Case 3, I have similar social losses, but I don't lose the friendship of bigots qua bigots, so the loss counts a lot more.

But note that one can still legitimately consider the instrumental harms from the loss of goods of social connectedness. Consider:

Case 4: A child is drowning in a clean pond, but you have a wound that will become gangrenous and force amputation absent medical help. You can easily pull out the child. But the child is a member of a despised minority group, and if you rescue the child, the only doctor in town will refuse to have anything to do with you. As a result, your wound will become gangrenous by the time you find another doctor, and you will require amputation.

I think in Case 4, you are permitted not to rescue the child, just as in Case 1.

## Wednesday, November 29, 2017

### Inductive evidence of the existence of non-spatial things

Think about other plausibly fundamental qualities beyond location and extension: thought, charge, mass, etc. For each one of these, there are things that have it and things that don’t have it. So we have some inductive reason to think that there are things that have location and things that don’t, things that have extension and things that don’t. Admittedly, the evidence is probably pretty weak.

## Tuesday, November 28, 2017

### Wronger and wronging

Here’s an interesting thing. An act doesn’t necessarily become any more wrong for wronging someone.

Alice and Bob each come across a derelict spaceship, and each one’s sensors show that there is intelligent life aboard. Each blasts the ship as target practice. Bob’s sensors malfunctioned: there was no intelligent life on the ship he blasted. Alice’s sensors were just fine. Alice wronged the people she killed. Bob wronged no one, as there was no one there to be wronged. But what Bob did was no less wrong than what Alice did.

Note 1: Bob’s case differs from standard cases of attempted murder. For in standard cases of attempted murder, the intended victim is wronged.

Note 2: I am not claiming that Bob wrongs no one. Bob wrongs both God and himself. But Alice also wrongs God and herself, just as much as Bob does, and additionally wrongs the people she kills. That additional wronging doesn’t make her act wronger, though.

Note 3: One might argue that Bob and Alice wrong all the people who have the property that they might (epistemically? alethically?) have been on the ship. Sure, but what if there are no such people in Bob's case? Perhaps Bob, unbeknownst to himself, is alone in his universe.

### An anti-Aristotelian argument for divine simplicity

The doctrine of divine simplicity fits comfortably with Aquinas’s Aristotelian framework. But it is interesting that anti-Aristotelianism also leads to divine simplicity.

1. The proper parts are more fundamental than the whole. (Mereological anti-Aristotelianism.)

2. Nothing is more fundamental than God.

3. So, God has no proper parts.

Of course, as an Aristotelian I reject 1, so while I accept the conclusion of this argument, I can’t use the argument myself.

## Monday, November 27, 2017

### First-orderism

It’s notoriously hard to characterize the physical precisely enough to attack or defend physicalism. To attack physicalism, however, it is enough to attack a characterization broader than physicalism, and to defend physicalism, a narrower characterization will do.

Here’s a suggestion along the “broader” lines. We can characterize reductive physicalism over-broadly as:

• Reductive first-orderism: All facts about the concrete (variants: contingent, spatiotemporal) features of the world reduce to first-order facts.

This may take in some theories other than what one intuitively counts as reductive physicalism, but if the object is criticism, that’s all we need.

Note how this characterization nicely shows how paradigm examples of magic would violate reductive physicalism: for paradigm examples of magic involve causation irreducibly by virtue of the meaning of a spell, gesture, etc., and meaning is a higher-order property. It also shows why irreducible Aristotelian teleology has no place in a reductive physicalist story: for teleological properties are second-order (I think).

Moreover, if we see reductive physicalism in the above way, it’s also easy to see that it’s false, by an argument of Leon Porter. For any first-order fact can be expressed in a first-order language. But, famously, the property of truth cannot be reduced to properties expressible in a first-order language (Tarski’s indefinability of truth; or, more simply, note that if you could express something equivalent to the property of truth in a first-order language, you could express the Liar Sentence in a first-order language). And some concrete objects, namely inscriptions, have this property of truth. Hence, some concrete objects have a property that cannot be expressed in first-order terms, contrary to reductive first-orderism.

### Change in transubstantiation

The two main parts of the doctrine of transubstantiation that get philosophically discussed are that after consecration we have:

• Real Presence: Christ's body and blood are really there.
• Real Absence: the bread and wine are no longer there.

But there may be another part: that the bread and wine change into the body and blood rather than simply being replaced by them. Certainly the Council of Trent uses the language of "conversion" of the bread and wine, but it is not completely clear to me that they mean to define there to be something more than replacement. Aquinas talks unclearly (to me) of the substantial change as a kind of "order" in the two substances.

Besides the general puzzle of how change differs from replacement, there are at least two philosophical difficulties about the change. The first is that on some versions--not mine--of Aristotelian metaphysics, what makes substantial change be a change is the persistence of matter. But there is no matter persisting here (indeed, Aquinas' remark emphasizes this). The second is that what the bread and wine change into, namely Christ's body, is already there. But it seems that if x changes into y, then y doesn't exist prior to the change.

Leibniz considers a theory on which the bread and wine change into new parts of Christ's body. This solves the second problem, but at the expense of having to say that the bread changes into a mere part of Christ's body, which does not appear to be what the Church means. Trent does say that the whole Christ comes to be present. I suppose one could have a hybrid theory on which the bread and wine change into new parts of Christ's body, and the rest of Christ's body then additionally comes to be present, but not by conversion. While I do not have decisive textual evidence, this does not seem to me to be what Trent means. And it is grotesque to think that Christ gets fatter at transubstantiation.

It could well be that the Council doesn't mean anything beefy by the "conversion": perhaps all it is, is an "order" between the two substances (cf. Aquinas), an order constituted by non-coincidental replacement in the same location. That would simplify things metaphysically. But I want to try for something metaphysically thicker.

Here's the thought. On my Aristotelian metaphysics, nothing persists in substantial change. But when substance x changes into substance y, a rather special causal power is triggered in x: the causal power of giving rise to y while perishing. The exercise of such a causal power is what makes it be the case that x has changed into y. There isn't any matter persisting in the change, so the first of the two philosophical problems with the Eucharistic change disappears. What about the second? Here's my suggestion. Normally, the existence of Christ's body at later times is caused by its existence at earlier times. But what if we say that the bread miraculously gets a special causal power, the power of causing Christ's body to exist just as the bread perishes? Then the existence of Christ's body after consecration will be causally overdetermined by two things: the bread's exercising that causal power and Christ's body exercising its ordinary causal power to make itself persist.

The bread in perishing is an overdetermining cause of the existence of Christ's body, and that is exactly how substantial change happens on my view. The main metaphysical difference here is that normally substantial change is not overdetermined, while here it is.

## Tuesday, November 21, 2017

### Omniscience

A standard definition of omniscience is:

• x is omniscient if and only if x knows all truths and does not believe anything but truths.

But knowing all truths and not believing anything but truths is not good enough for omniscience. One can know a proposition without being certain of it, assigning a credence less than 1 to it. But surely such knowledge is not good enough for omniscience. So we need to say: “knows all truths with absolute certainty”.

I wonder if this is good enough. I am a bit worried that maybe one can know all the truths in a given subject area but not understand how they fit together—knowing a proposition about how they fit together might not be good enough for this understanding.

Anyway, it’s kind of interesting that even apart from open theist considerations, omniscience isn’t quite as cut and dried as one might think.

### Perfect rationality and omniscience

1. A perfectly rational agent who is not omniscient can find itself in lottery situations, i.e., situations where it is clear that there are many options, exactly one of which can be true, with each option having approximately the same epistemic probability as any other.

2. A perfectly rational agent must believe anything there is overwhelming evidence for.

3. A perfectly rational agent must have consistent beliefs.

4. In lottery situations, there is overwhelming evidence for each of a set of inconsistent claims: the claim that one of options 1,2,3,… is the case, and the claims that option 1 is not the case, that option 2 is not the case, that option 3 is not the case, etc.

5. So, in lottery situations, a perfectly rational agent has inconsistent beliefs. (2,4)

6. So, a perfectly rational agent is never in a lottery situation. (3,5)

7. So, a perfectly rational agent is omniscient. (1,6)
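A toy numeric version of the lottery situation in (4) — the 1000-ticket lottery is my own illustrative assumption:

```python
# A fair lottery with n tickets, exactly one of which wins.
n = 1000
p_not_i = 1 - 1 / n   # probability that a given ticket i loses

# Each of the n claims "ticket i loses" has probability 0.999 (overwhelming),
# and "some ticket wins" is certain; yet the n+1 claims are jointly
# inconsistent, since believing them all means believing that every ticket
# loses and that some ticket wins.
print(p_not_i)        # ≈ 0.999 for each of the n claims
```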

The standard thing people like to say about arguments like this is that they are a reductio of the conjunction of the premises 2 through 4. But I think it might be interesting to take it as a straightforward argument for the conclusion 7. Maybe one cannot separate out procedural epistemic perfection (perfect rationality) from substantive epistemic perfection (omniscience).

That said, I am inclined to deny 3.

It’s worth noting that this yields another variant on an argument against open theism. For even though I am inclined to think that inconsistency in beliefs is not an imperfection of rationality, it is surely an imperfection simpliciter, and hence a perfect being will not have inconsistent beliefs.

## Saturday, November 18, 2017

### Bayesianism and anomaly

One part of the problem of anomaly is this. If a well-established scientific theory seems to predict something contrary to what we observe, we tend to stick to the theory, with barely a change in credence, while being dubious of the auxiliary hypotheses. What, if anything, justifies this procedure?

Here’s my setup. We have a well-established scientific theory T and (conjoined) auxiliary hypotheses A, and T together with A uncontroversially entails the denial of some piece of observational evidence E which we uncontroversially have (“the anomaly”). The auxiliary hypotheses will typically include claims about the experimental setup, the calibration of equipment, the lack of further causal influences, mathematical claims about the derivation of not-E from T and the above, and maybe some final catch-all thesis like the material conditional that if T and all the other auxiliary hypotheses obtain, then E does not obtain.

For simplicity I will suppose that A and T are independent, though of course that simplifying assumption is rarely true.

I suspect that often this happens: T is much better confirmed than A. For T tends to be a unified theoretical body that has been confirmed as a whole by a multitude of different kinds of observations, while A is a conjunction of a large number of claims that have been individually confirmed. Suppose, say, that P(T)=0.999 while P(A)=0.9, where all my probabilities are implicitly conditional on some background K. Given the observation E, and the fact that T and A entail its negation, we now know that the conjunction of T and A is false. But we don’t know where the falsehood lies. Here’s a quick and intuitive thought. There is a region of probability space where the conjunction of T and A is false. That area is divided into three sub-regions:

1. T is true and A is false

2. T is false and A is true

3. both are false.

The initial probabilities of the three regions are, respectively, 0.0999, 0.0009 and 0.0001. We know we are in one of these three regions, and that’s all we now know. Most likely we are in the first one, and the probability that we are in that one given that we are in one of the three is around 0.99. So our credence in T has gone down from three nines (0.999) to two nines (0.99), but it’s still high, so we get to hold on to T.
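The intuitive region calculation is easy to check in a few lines (the priors 0.999 and 0.9 are the illustrative figures from the text, with T and A taken as independent):

```python
# Illustrative priors from the text, with T and A probabilistically independent.
P_T, P_A = 0.999, 0.9

# The three sub-regions of probability space where (T and A) fails.
r1 = P_T * (1 - P_A)         # T true,  A false
r2 = (1 - P_T) * P_A         # T false, A true
r3 = (1 - P_T) * (1 - P_A)   # both false

total = r1 + r2 + r3          # = 1 - P_T * P_A: probability that (T and A) fails
print(r1, r2, r3)             # ≈ 0.0999, 0.0009, 0.0001
print(r1 / total)             # ≈ 0.990: credence in T after learning not-(T and A)
```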

Still, this answer isn’t optimistic. A move from 0.999 to 0.99 is actually an enormous decrease in confidence.

But there is a much more optimistic thought. Note that the above wasn’t a real Bayesian calculation, just a rough informal intuition. The tip-off is that I said nothing about the conditional probabilities of E on the relevant hypotheses, i.e., the “likelihoods”.

Now the setup ensures:

1. P(E|A ∧ T)=0.

What can we say about the other relevant likelihoods? Well, if some auxiliary hypothesis is false, then E is up for grabs. So, conservatively:

1. P(E|∼A ∧ T)=0.5
2. P(E|∼A ∧ ∼T)=0.5

But here is something that I think is really, really interesting. I think that in typical cases where T is a well-established scientific theory and A ∧ T entails the negation of E, the probability P(E|A ∧ ∼T) is still low.

The reason is that all the evidence that we have gathered for T even better confirms the hypothesis that T holds to a high degree of approximation in most cases. Thus, even if T is false, the typical predictions of T, assuming they have conservative error bounds, are likely to still be true. Newtonian physics is false, but even conditionally on its being false we take individual predictions of Newtonian physics to have a high probability. Thus, conservatively:

1. P(E|A ∧ ∼T)=0.1

Very well, let’s put all our assumptions together, including the ones about A and T being independent and the values of P(A) and P(T). Here’s what we get:

1. P(E|T)=P(E|A ∧ T)P(A|T)+P(E|∼A ∧ T)P(∼A|T)=0.05
2. P(E|∼T)=P(E|A ∧ ∼T)P(A|∼T)+P(E|∼A ∧ ∼T)P(∼A|∼T) = 0.14.

Plugging this into Bayes’ theorem, we get P(T|E)=0.997. So our credence has crept down, but only a little: from 0.999 to 0.997. This is much more optimistic (and conservative) than the big move from 0.999 to 0.99 that the intuitive calculation predicted.
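The full computation can be reproduced directly, using the priors and the assumed likelihoods from the text:

```python
# Priors (T and A independent) and the conservatively assumed likelihoods.
P_T, P_A = 0.999, 0.9
P_E_given_nA = 0.5       # if an auxiliary hypothesis fails, E is "up for grabs"
P_E_given_A_nT = 0.1     # the anomaly is still surprising even if only T fails

# Total likelihoods by the law of total probability (P(E | A and T) = 0).
P_E_given_T  = 0.0 * P_A + P_E_given_nA * (1 - P_A)              # = 0.05
P_E_given_nT = P_E_given_A_nT * P_A + P_E_given_nA * (1 - P_A)   # = 0.14

# Bayes' theorem.
P_T_given_E = (P_E_given_T * P_T) / (
    P_E_given_T * P_T + P_E_given_nT * (1 - P_T))
print(round(P_T_given_E, 3))   # 0.997
```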

So, if I am right, at least one of the reasons why anomalies don’t do much damage to scientific theories is that when the scientific theory T is well-confirmed, the anomaly is not only surprising on the theory, but it is surprising on the denial of the theory—because the background includes the data that makes T “well-confirmed” and would make E surprising even if we knew that T was false.

Note that this argument works less well if the anomalous case is significantly different from the cases that went into the confirmation of T. In such a case, there might be much less reason to think E won’t occur if T is false. And that means that anomalies are more powerful as evidence against a theory the more distant they are from the situations we explored before when we were confirming T. This, I think, matches our intuitions: We would put almost no weight on someone’s finding an anomaly in the course of an undergraduate physics lab—not just because an undergraduate student is likely doing it (it could be the professor testing the equipment, though), but because this is ground well gone over, where we expect the theory’s predictions to hold even if the theory is false. But if new observations of the center of our galaxy don’t fit our theory, that is much more compelling—in a regime so different from many of our previous observations, we might well expect that things would be different if our theory were false.

And this helps with the second half of the problem of anomaly: How do we keep from holding on to T too long in the light of contrary evidence? How do we allow anomalies their rightful place in undermining theories? The answer is: to undermine a theory effectively, we need anomalies that occur in situations significantly different from those that have already been explored.

Note that this post weakens, but does not destroy, the central arguments of this paper.

### A consideration making the theodical defeat of evil a bit easier

For an evil to be defeated, in the theodical sense, the evil needs to be not only compensated for in the sufferer’s life, but it needs to be interwoven into a good in the sufferer’s life in such a way that the meaning of the evil is radically transformed in that life.

A requirement of the defeat of evil guards against theodicies where the sufferer gets the short end of the stick, the evil being permitted for the sake of goods to other individuals, or abstract impersonal goods like elegant laws of nature. Defeat appears to have an innate intrapersonality to it.

It occurs to me, however, that in heaven the requirement of defeat can sometimes be met through goods that happen to someone other than the sufferer. For all in heaven are friends of the best sort, and as Aristotle says, a friend (of the best sort) is another self, so that what happens to the friend happens to one. So if Alice has suffered an evil and Bob got a proportionate good out of God’s permitting the evil to Alice, if Alice and Bob are friends in the deepest sense, then the evil that happened to Alice is just as much a part of Bob’s life, and the good to Bob is just as much a part of Alice’s. Thus, defeat can be achieved interpersonally given friendship, without any worries about Alice getting the short end of the stick.

And abstract impersonal goods—like aesthetic ones—can become deeply personal through appreciation.

Thus, the intrapersonality condition in defeat can be met more easily than seems at first sight.

## Thursday, November 16, 2017

### Truth-value open theism

Consider the view that there are truth values about future contingents, but (as Swinburne and van Inwagen think) God doesn’t know future contingents. Call this “truth-value open theism”.

1. Necessarily, a perfectly rational being believes anything there is overwhelming evidence for.

2. Given truth-value open theism, God has overwhelming but non-necessitating evidence for some future contingent proposition p.

3. If God has overwhelming but non-necessitating evidence for some contingent proposition p, there is a possible world where God has overwhelming evidence for p and p is false.

4. So, if truth-value open theism is true, either (a) there is a possible world where God fails to believe something he has overwhelming evidence for or (b) there is a possible world where God believes something false. (2-3)

5. So, if truth-value open theism is true, either (a) there is a possible world where God fails to be perfectly rational or (b) there is a possible world where God believes something false. (1,4)

6. It is an imperfection to possibly fail to be perfectly rational.

7. It is an imperfection to possibly believe something false.

8. So, if truth-value open theism is true, God has an imperfection. (5-7)

And God has no imperfections.

To argue for (2), just let p be the proposition that somebody will freely do something wrong over the next month. There is incredibly strong inductive evidence for p.

### A version of the cosmological argument from preservation

Suppose that all immediate causation is simultaneous. The only way to make this fit with the obvious fact that there is diachronic causation is to make diachronic causation be mediate. And there is one standard way of making mediate diachronic causation out of immediate synchronic causation: temporally extended causal relata. Suppose that A lasts from time 0 to time 3, B lasts from time 2 to time 5, and C lasts from time 4 to time 10 (these can be substances or events). Then A can synchronically cause B at time 2 or 3, B can synchronically cause C at time 4 or 5, and one can combine the two immediate synchronic causal relations into a mediate diachronic causal relation between A and C, even though there is no time at which we have both A and C.
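The interval structure can be sanity-checked in a few lines (the endpoints are from the example in the text; the representation as pairs is my own):

```python
# Closed temporal intervals as (start, end) pairs, from the example in the text:
# A lasts from 0 to 3, B from 2 to 5, C from 4 to 10.
A, B, C = (0, 3), (2, 5), (4, 10)

def overlaps(x, y):
    """True if the two closed intervals share at least one time."""
    return max(x[0], y[0]) <= min(x[1], y[1])

# A and B coexist (times 2-3), B and C coexist (times 4-5), but A and C never
# coexist: so diachronic causation from A to C must be mediated by B.
print(overlaps(A, B), overlaps(B, C), overlaps(A, C))  # True True False
```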

The problem with this approach is explaining the persistence of A, B and C over time. If we believe in irreducibly diachronic causation, then we can say that B’s existence at time 2 causes B’s existence at time 3, and so on. But this move is not available to the defender of purely simultaneous causation, except maybe at the cost of an infinite regress: maybe B’s existence from time 2.00 to time 2.75 causes B’s existence from time 2.50 to time 3.00; but now we ask about the causal relationship between B’s existence at time 2.00 and time 2.75.

So if we are to give a causal explanation of B’s persistence from time 2 to time 5, it will have to be in terms of the simultaneous causal efficacy of some other persisting entity. But this leads to a regress that is intuitively vicious.

Thus, we must come at the end to at least one persisting entity E such that E’s persistence from some time t1 to some time t2 has no causal explanation. And if we started our question with asking about the persistence of something that persists over some times today, then these times t1 and t2 are today.

Even if we allow for some facts to be unexplained contingent “brute” facts, the persistence of ordinary objects over time shouldn’t be like that. Moreover, it doesn’t seem right to suppose that the ultimate explanations of the persistence of objects involve objects whose own persistence is brute. For that makes it ultimately be a brute fact that reality as a whole persists, a brute and surprising fact.

So, plausibly, we have to say that although E’s persistence from t1 to t2 has no causal explanation, it has some other kind of explanation. The most plausible candidate for this kind of explanation is that E is imperishable: that it is logically impossible for E to perish.

Hence, if all immediate causation is simultaneous, very likely there is something imperishable. And the imperishable entity or entities then cause things to exist at the time at which they exist, thereby explaining their persistence.

On the theory that God is the imperishable entity, the above explains why for Aquinas preservation and creation are the same.

(It’s a pity that I don’t think all immediate causation is simultaneous.)

Problem: Suppose E immediately makes B persist from time 2 to time 4, by immediately causing it to exist at all the times from 2 to 4. Surely, though, E exists at time 4 because it existed at time 2. And this “because” is hard to explain.

Response: We can say that B exists at time 4 because of its esse (or act of being) at time 2, provided that (a) B’s esse at time 2 is its being caused by E to exist at time 2, and (b) E causes B to exist at time 4 because (non-causally because) E caused B to exist at time 2. But once we say that B exists at time 4 because of its very own esse at time 2, it seems we’ve saved the “because” claim in the problem.

### Two moment presentism

The biggest problem for presentism is the problem of diachronic relations, especially causation. If E is earlier than F and E causes F, then at any given time, this instance of causation will have to either be a relation between two non-existent relata or a relation between one existent and one non-existent relatum, and this is problematic. Here’s a variant on presentism that solves that problem.

Suppose time is discrete, but instead of supposing that a single moment is always actual, suppose that two successive moments are always actual. Thus, if the moments are numbered 0, 1, 2, 3, …, first 0 and 1 are actual, then 1 and 2 are actual, then 2 and 3 are actual, and so on. We then say that the present contains both of the successive moments: the present is not a moment. It is never the case that a single moment is actual, except maybe at the beginning or end of the sequence (those are variants whose strengths and weaknesses need evaluation). Strictly speaking, then, we should label times with pairs of moments: time 1–2, time 2–3, etc. (There are two further variants: on one of them, time 2–3 consists of nothing but the two moments; on the other, it also has an “in between”.)

We then introduce two primitive tense operators: “Just was” and “Is about to be”. Thus, if an object is yellow from times 0 through 2 and blue from time 3 onward, then at time 2–3 it just was yellow and is about to be blue. We can say that an object is F at time 2–3, where Fness is something stative rather than processive, provided that it just was F and is about to be F. We might want to say that it is changing from being F1 to being F2 if it just was F1 and is about to be F2 instead (or maybe there is something more to change than that).
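The yellow-to-blue example can be sketched in a few lines (the moment numbering follows the text; the representation is my own illustration):

```python
# An object that is yellow at moments 0 through 2 and blue from moment 3 on,
# as in the example in the text.
def color(moment):
    return "yellow" if moment <= 2 else "blue"

# On two-moment presentism, the present at time k-(k+1) contains both moments,
# so the two primitive tense operators can be read off directly.
def present(k):
    return {"just_was": color(k), "is_about_to_be": color(k + 1)}

print(present(1))  # both moments yellow: the object *is* yellow at time 1-2
print(present(2))  # at time 2-3 it just was yellow and is about to be blue
```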

We can now get cases of direct diachronic causation between events at neighboring moments, and because both of the moments are present, our “two-moment presentist” will say that when the two moments are both present, causation is a relation between two existent relata, one at the earlier moment and the other at the later. Of course, there will be cases of indirect diachronic causation to talk about, where some event at time 2 causes an event at time 4 by means of an event at time 3, but the two-moment presentist can break this up into two direct instances of diachronic causation, one of which did/does/will take place at time 2–3 and the other of which did/does/will take place at time 3–4.

I bet this view is in the literature. It’s too neat a solution to the problem not to have been noticed.

### A spatial "in between"

In my last post I offered the suggestion that someone who thinks time is discrete has reason to think that there is something in between the moments—a continuous unbroken (but perhaps breakable) interval.

Consideration 1: Imagine that space is discrete, arranged on a grid pattern, and I touch left and right index fingers together. It could happen that the rightmost spatial points of my left fingertip are side-by-side with the leftmost spatial points of my right fingertip, but nonetheless my hands aren’t joined into a single solid. One way to represent this setup would be to say that a spatial point in my left fingertip is right next to a spatial point in my right fingertip, but the interval between these spatial points is not within me.

But positing a spatial “in between” isn’t the only solution: distinguishing internal and external geometry is another.

Consideration 2: Zeno’s Stadium argument can be read as noting that if space and time are discrete, then an object moving at one point per unit of time rightward and an equal length object moving at one point per unit of time leftward can pass by each other without ever being side-by-side. Positing an “in between”, such that objects may be “in between” places when they are in between times, may make this less problematic.

## Wednesday, November 15, 2017

### A non-reductive eternalist theory of change

It is sometimes said that B-theorists see change as reducible to temporal variation of properties—being non-F at t1 but F at t2 (the “at-at theory of change”)—while A-theorists have a deeper view of change.

But isn’t the A-theorist’s view of change just something like: having been non-F but now being F? But that’s just as reductive as the B-theorist’s at-at theory of change, and it seems just as much to be a matter of temporal variation. Both approaches have this feature: they analyze change in terms of the having and not having of a property. Note, also, that the A-theorist who gives the having-been-but-now-being story about change is committed to the at-at theory being logically sufficient for change from being non-F to being F.

I think there may be something to the intuition that the at-at theory doesn’t wholly capture change. But moving to the A-theory does not by itself solve the problem. In fact, I think the B-theory can do better than the best version of the A-theory.

Let me sketch an Aristotelian story about time. Time is discrete. It has moments. But it is not exhausted by moments. In addition to moments there are intervals between moments. These intervals are in fact undivided, though they might be divisible (Aristotle will think they are). At moments, things are. Between moments, things become. Change is when at one moment t1 something is non-F, at the next moment t2 it is F, and during the interval between t1 and t2 it is changing from non-F to F.

On this story, the at-at theory gives a necessary condition for changing from non-F to F, but perhaps not a sufficient one. For suppose temporally gappy existence is possible, so that an object can cease to exist and come back. Then it is conceivable that an object exist at t1 and at t2, but not during the interval between t1 and t2. Such an object might be brought back into existence at t2 with the property of Fness which it lacked at t1, but it wouldn’t have changed from being non-F to being F.

But there is a serious logical difficulty with the above story: the law of excluded middle. Suppose that a banana turns from non-blue (say, yellow) to blue over the interval I from t1 to t2. What happens during the interval? By excluded middle, the banana is non-blue or blue. But which is it? It cannot be non-blue on a part of the interval I and blue on another part, for that would imply a subdivision of the interval on the Aristotelian view of time. So it must be blue over the whole interval or non-blue over the whole interval. But neither option seems satisfactory. The interval is when it is changing from non-blue to blue; it shouldn’t already be at either endpoint during the interval. Thus, it seems, during I the banana is neither non-blue nor blue, which seems a contradiction.

But the B-theorist has a way of blocking the contradiction. She can take one of the standard B-theoretic solutions to the problem of temporary intrinsics and use that. For instance, she can say that the banana is neither blue-during-I nor non-blue-during-I. There is no contradiction here, nor any denial of excluded middle.

What the theory denies is temporalized excluded middle:

1. For any period of time u, either s during u or (not s) during u

but it affirms:

2. For any period of time u, either s during u or not (s during u).

A typical presentist is unable to say that. For a typical presentist thinks that if u is present, then s during u if and only if s simpliciter, so that (1) follows from (2), at least if u is present (and then, generalizing, even if it’s not). Such a typical presentism, which identifies present truth with truth simpliciter, is, I think, the best version of the A-theory.

Thinking of time as made up of moments and intervals is, I think, quite fruitful.

## Tuesday, November 14, 2017

### Freedom, responsibility and the open future

Assume the open futurist view on which freedom is incompatible with there being a positive fact about what I choose, and so there are no positive facts about future (non-derivatively) free actions.

Suppose for simplicity that time is discrete. (If it’s not, the argument will be more complicated, but I think not very different.) Suppose that at t2 I freely choose A. Let t1 be the preceding moment of time.

Then:

1. At t2, it is already a fact that I choose A, and so I am no longer free with respect to A.

2. At t1, I am still free with respect to choosing A, but I am not yet responsible with respect to A.

Thus:

3. At no time am I both free and responsible with respect to A.

This seems counterintuitive to me.

### Open theism and divine perfection

1. It is an imperfection to have been close to certain of something that turned out false.

2. If open theism is true, God was close to certain of propositions that turned out false.

3. So, if open theism is true, God has an imperfection.

4. God has no imperfections.

5. So, open theism is not true.

I think (1) is very intuitive and (4) is central to theism. It is easy to argue for (2). Consider a giant sentence of the form:

6. Alice’s first free choice on Monday is F1, Bob’s first free choice on Monday is F2, Carol’s first free choice on Monday is F3, …

where the list of names ranges over the names of all people living on Monday, and the Fi are "right", "not right" and "not made" (the last means that the agent will not make any free choices on Monday).

Exactly one proposition of the form (6) ends up being true by the end of Monday.

Suppose we’re back on the Sunday before that Monday. Absent the kind of knowledge of the future that the open theist denies to God, God will rationally assign probabilities to propositions of the form (6). These probabilities will all be astronomically low. Even though Alice may be very virtuous and her next choice is very likely to be right, and Bob is vicious and his next choice is very likely to be wrong, etc., given that any proposition of the form (6) has 7.6 billion conjuncts, the probability of that proposition is tiny.
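A quick back-of-the-envelope check of how tiny these probabilities are (the per-conjunct probability of 0.99 is an assumed, generously high, illustrative figure):

```python
import math

# Suppose, very generously, that God can assign probability 0.99 to each
# individual conjunct about a person's first free choice on Monday.
n = 7_600_000_000      # roughly the number of people alive
p_each = 0.99

# 0.99 ** n underflows ordinary floating point to 0.0, so work in logarithms.
log10_p = n * math.log10(p_each)
print(log10_p)         # about -3.3e7: probability on the order of 10**(-33 million)
```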

Thus, on Sunday God assigns minuscule probabilities to all the propositions of the form (6), and hence God is close to certain of the negations of all such propositions. But come Tuesday, one of these negated propositions turns out to be false. Therefore, on Tuesday—i.e., today—there is a proposition that turned out false that God was close to certain of. And that yields premise (2).

(I mean all my wording to be neutral between the version of open theism where future contingents have a truth value and the one where they do not.)

Moreover, even without considerations of perfections, being close to certain of something that will turn out to be false is surely inimical to any plausible notion of omniscience.

## Monday, November 13, 2017

### Flying rings

My five-year-old has been really enjoying our Aerobie Pro flying disk, but it has too much range to use at home or in a backyard. The patent has expired, so I designed a 3D-printable version with a similar airfoil profile and customizable diameter and wing-chord. The inner one is 100mm diameter (20mm chord), and can be used indoors. Here are the files.

### Open theism and utilitarianism

Here’s an amusing little fact. You can’t be both an open theist and an act utilitarian. For according to the act utilitarian, to fail to maximize utility is wrong. It is impossible for God to do the wrong thing. But given open theism, it does not seem that God can know enough about the future in order to be necessarily able to maximize utility.

## Thursday, November 9, 2017

### Proportionality in Double Effect is not a simple comparison

It is tempting to make the final “proportionality” condition of the Principle of Double Effect say that the overall consequences of the action are good or neutral, perhaps after screening off any consequences that come through evil (cf. the discussion here).

But “good or neutral” is not a necessary condition for permissibility. Alice is on a bridge above Bob, and sees an active grenade roll towards Bob. If she does nothing, Alice will be shielded by the bridge from the explosion. But instead she leaps off the bridge and covers the grenade with her body, saving Bob’s life at the cost of her own.

If “good or neutral” consequences are required for permissibility, then to evaluate the permissibility of Alice’s action it seems we would need to evaluate whether Alice’s death is a worse thing than Bob’s. Suppose Alice owns three goldfish while Bob owns two goldfish, and in either case the goldfish will be less well cared for by the heirs (and to the same degree). Then Alice’s death is mildly worse than Bob’s death, other things being equal. But it would be absurd to say that Alice acted wrongly in jumping on the grenade because of the impact of this act on her goldfish.

Thus, the proportionality condition in PDE needs to be able to tolerate some differences in the size of the evils, even when these differences disfavor the course of action that is being taken. In other words, although the consequences of jumping on the grenade are slightly worse than those of not doing so, because of the impact on the goldfish, the bad consequences of jumping are not disproportionate to the bad consequences of not jumping.

On the other hand, if it was Bob’s goldfish bowl, rather than Bob, that was near the grenade, the consequences of jumping would be disproportionate to the consequences of not jumping, since Alice’s death is disproportionately bad as compared to the death of Bob’s goldfish.

Objection: The initial case where Alice jumps to save Bob’s life fails to take into account the fact that Alice’s act of self-sacrifice adds great value to the consequences of jumping, because it is a heroic act of self-sacrifice. This added increment of value outweighs the loss to Alice’s extra goldfish, and so I was incorrect to judge that the consequences are mildly negative.

Response: First, it seems to be circular to count the value of the act itself when evaluating the act’s permissibility, since the act itself only has positive value if it is permissible. And anyway one can tweak the case to avoid this difficulty. Suppose that it is known that if Alice does not jump on the grenade, Carl, who is standing beside her, will. And Carl only owns one goldfish. Then whether Alice jumps or not, the world includes a heroic act. And it is better that Carl jump than that Alice do so, other things being equal, as Carl only has one goldfish depending on him. But it is absurd that Alice is forbidden from jumping in order that a man with fewer goldfish might do it in her place.

Question: How much of a difference in value can proportionality tolerate?

Response: I don’t know. And I suspect that this is one of those parameters in ethics that needs explaining.

### A simple "construction" of non-measurable sets from coin-toss sequences

Here’s a simple “construction” of a non-measurable set out of coin-toss sequences, i.e., of an event that doesn’t have a well-defined probability, going back to Blackwell and Diaconis, but simplified by me not to use ultrafilters. I’m grateful to John Norton for drawing my attention to this.

Let Ω be the set of all countably infinite coin-toss sequences. If a and b are two such sequences, say that a ∼ b if and only if a and b differ only in finitely many places. Clearly ∼ is an equivalence relation (it is reflexive, symmetric and transitive).

For any infinite coin-toss sequence a, let ra be the reversed sequence: the one that is heads wherever a is tails and vice-versa. For any set A of sequences, let rA be the set of the reversals of the members of A. Observe that we never have a ∼ ra, and that U is an equivalence class under ∼ (i.e., a maximal set all of whose members are ∼-equivalent) if and only if rU is an equivalence class. Also, if U is an equivalence class, then rU ≠ U.
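Since infinite sequences cannot be represented directly, the reversal map r and the finite-difference relation ∼ can only be illustrated on finite prefixes. Here is a minimal Python sketch (the encoding of heads as 1 and tails as 0 is my choice, not part of the construction):

```python
def reverse_seq(a):
    # r: flip every toss (heads <-> tails), with heads = 1, tails = 0
    return tuple(1 - x for x in a)

def differ_at(a, b):
    # positions at which two equal-length prefixes differ
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

a = (1, 0, 1, 1, 0, 0, 1, 0)

# a and ra differ at *every* position, which is why, for infinite
# sequences, a ~ ra can never hold: they differ in infinitely many places
assert differ_at(a, reverse_seq(a)) == list(range(len(a)))

# b differs from a only at index 2; the reversals then differ at exactly
# the same places, which is why r maps equivalence classes to
# equivalence classes
b = (1, 0, 0, 1, 0, 0, 1, 0)
assert differ_at(reverse_seq(a), reverse_seq(b)) == differ_at(a, b)
```

The second assertion is the finite shadow of the observation that U is an equivalence class if and only if rU is.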

Let C be the set of all unordered pairs {U, rU} where U is an equivalence class under ∼. (Note that every equivalence class lies in exactly one such unordered pair.) By the Axiom of Choice (for collections of two-membered sets), choose one member of each pair in C. Call the chosen member “selected”. Then let N be the union of all the selected sets.

Here are two cool properties of N:

1. Every coin-toss sequence is in exactly one of N and rN.

2. If a and b are coin-toss sequences that differ in only finitely many places, then a is in N if and only if b is in N.

We can now prove that N is not measurable. Suppose N is measurable. Then by symmetry P(rN)=P(N). By (1) and additivity, 1 = P(N)+P(rN), so P(N)=1/2. But by (2), N is a tail set, i.e., an event independent of any finite subset of the tosses. The Kolmogorov Zero-One Law says that every (measurable) tail set has probability 0 or 1. But that contradicts the fact that P(N)=1/2, so N cannot be measurable.
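N itself cannot be simulated (its non-constructibility is the point), but the Zero-One Law can be illustrated with a measurable tail event: "the limiting frequency of heads is 1/2" is unaffected by changing finitely many tosses, and the law of large numbers makes its probability 1. A rough Monte Carlo sketch (the sequence length and tolerance are arbitrary choices of mine):

```python
import random

random.seed(0)

def heads_frequency(n):
    # empirical frequency of heads in n fair tosses
    return sum(random.random() < 0.5 for _ in range(n)) / n

# "limiting frequency of heads is 1/2" is a tail event: no finite set of
# tosses affects the limit. The Zero-One Law forces its probability to be
# 0 or 1; by the law of large numbers it is 1.
trials = 200
near_half = sum(abs(heads_frequency(10_000) - 0.5) < 0.02
                for _ in range(trials))
print(near_half / trials)  # very close to 1
```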

An interesting property of N is that intuitively we would think that P(N)=1/2, given that for every sequence a, exactly one of a and ra is in N. But if we do say that P(N)=1/2, then no finite number of observations of coin tosses provides any Bayesian information on whether the whole infinite sequence is in N, because no finite subsequence has any bearing on whether the whole sequence is in N by (2). Thus, if we were to assign the intuitive probability 1/2 to P(N), then no matter what finite number of observations we made of coin tosses, our posterior probability that the sequence is in N would still have to be 1/2—we would not be getting any Bayesian convergence. This is another way to see that N is non-measurable—if it were measurable, it would violate Bayesian convergence theorems.

And this is another way of highlighting how non-measurability vitiates Bayesian reasoning (see also this).

We can now use Bayesian convergence to sketch a proof that N is saturated non-measurable, i.e., that if A ⊆ N is measurable, then P(A)=0 and if A ⊇ N is measurable, then P(A)=1. For suppose A ⊆ N is measurable. Suppose that we are sequentially observing coin tosses and forming posteriors for A. These posteriors cannot ever exceed 1/2. Here is why. For a coin toss sequence a, let rna be the sequence obtained by keeping the first n tosses fixed and reversing the rest of the tosses. For any finite sequence o1, ..., on of observations, and any infinite sequence a of coin-tosses compatible with these observations, at most one of a and rna is a member of N (this follows from (1) and the fact that ra ∈ N if and only if rna ∈ N by (2)). By symmetry P(A ∣ o1...on)=P(rnA ∣ rn(o1...on)) (where rnA is the result of applying rn to every member of A). But rn(o1...on) is the same as o1...on, so P(A ∣ o1...on)=P(rnA ∣ o1...on). But A and rnA are disjoint, so P(A ∣ o1...on)+P(rnA ∣ o1...on)≤1 by additivity, and hence P(A ∣ o1...on)≤1/2. Thus, the posteriors for A are always at most 1/2. By Bayesian convergence, however, almost surely the posteriors will converge to 1 or to 0 according to whether the sequence being observed is actually in A. They cannot converge to 1, so the probability that the sequence is in A must be 0. Thus, P(A)=0.

The claim that if A ⊇ N is measurable then P(A)=1 is proved by noting that then Ω − A ⊆ rN (as rN is the complement of N), and so by the above argument with rN in place of N, we have P(Ω − A)=0 and thus P(A)=1.

## Tuesday, November 7, 2017

### Why might God refrain from creating?

Traditional Jewish and Christian theism holds that God didn’t have to create anything at all. But it is puzzling what motive a perfectly good being would have not to create anything. Here’s a cute (I think) answer:

• If (and only if) God doesn’t create anything, then everything is God. And that’s a very valuable state of affairs.

Bob has the belief that there are infinitely many people in a parallel universe, and that they wear numbered jerseys: 1, 2, 3, …. He also believes that he has a system in a laboratory that can cause indigestion to any subset of these people that he can describe to a computer. Bob has good evidence for these beliefs and is (mirabile!) sane.

Consider four scenarios:

1. Bob attempts to cause indigestion to all the odd-numbered people.

2. Bob attempts to cause indigestion to all the people whose number is divisible by four.

3. Bob attempts to cause indigestion to all the people whose number is either odd or divisible by four.

4. Bob yesterday attempted to cause indigestion to all the odd-numbered people and on a later occasion to all the people whose number is divisible by four.

In each scenario, Bob has done something very bad, indeed apparently infinitely bad: he has attempted infinite mass sickening.

In scenarios 1-3, other things being equal, Bob is equally guilty, because the number of people he attempted to cause indigestion to is the same—a countable infinity.

But now we have two arguments about how bad Bob’s action in scenario 4 is. On the one hand, in scenario 4 he has attempted to sicken the exact same people as in scenario 3. So, he is equally guilty in scenario 4 as in scenario 3.

On the other hand, in scenario 4, Bob is guilty of two wrong actions: the action of scenario 1 and that of scenario 2. Moreover, as we saw before, each of these actions on its own makes him just as guilty as the action in scenario 3 does. Doing two wrongs, even two infinite wrongs, is worse than doing just one of the same magnitude. So in scenario 4, Bob is guiltier than in scenario 3. One becomes worse off for acquiring more guilt. But if the action in 4 made Bob no guiltier than the action in 3 would have, it would make him no guiltier than the action in 1 would have. In that case, after committing the first wrong in 4, Bob would already bear all the guilt of 1, and so he would have no guilt-avoidance reason to refrain from the second wrong in 4, which is absurd.
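The set identity driving the puzzle, namely that scenario 4 targets exactly the same people as scenario 3 while consisting of two non-overlapping sub-targetings, can be checked on any finite prefix of the jersey numbers. A small Python sketch (the cutoff is an arbitrary choice of mine):

```python
def odd(n):
    return n % 2 == 1

def div4(n):
    return n % 4 == 0

CUTOFF = 10_000  # finite prefix of the jersey numbers 1, 2, 3, ...
people = range(1, CUTOFF + 1)

scenario3 = {n for n in people if odd(n) or div4(n)}
scenario4 = {n for n in people if odd(n)} | {n for n in people if div4(n)}

# scenario 4 targets exactly the same people as scenario 3...
assert scenario3 == scenario4
# ...via two targetings that never overlap
assert not any(odd(n) and div4(n) for n in people)
```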

How to resolve this? I think as follows: when accounting guilt, we should look at guilty acts of will rather than consequences or attempted consequences. In scenario 4, although the total attempted harm is the same as in each of scenarios 1-3, there are two guilty acts of will, and that makes Bob guiltier in scenario 4.

We could tell the story in 4 so that there is only one act of will. We could suppose that Bob can self-hypnotize so that today he orders his computer to sicken the odd-numbered people and tomorrow those whose number is divisible by four. In that case, there would be only one act of will, which will be less bad. It’s a bit weird to think that Bob might be better off morally for such self-hypnosis, but I think one can bite the bullet on that.

### Evidence that I am dead

I just got evidence that I am dead, in an email that starts:

> Dear expired [organization] member,

You might think this is pretty weak evidence. Maybe "expired" doesn't mean "dead" here. But the email continues:

> Thank you for your past support of [organization]. Your membership has recently expired, and we would like to take this opportunity to urge you to renew your membership.

But last year I acquired a life membership...

Sorry, I couldn't resist sharing this.

### From a dualism to a theory of time

This argument is valid:

1. Some human mental events are fundamental.

2. No human mental event happens in an instant.

3. If presentism is true, every fundamental event happens in an instant.

4. So, presentism is not true.

Premise (1) is widely accepted by dualists. Premise (2) is very, very plausible. That leaves (3). Here is the thought. Given presentism, that a non-instantaneous event is happening is a conjunctive fact with one conjunct about what is happening now and another conjunct about what happened or will happen. Conjunctive facts are grounded in their conjuncts and hence not fundamental, and for the same reason the event would not be fundamental.

But lest four-dimensionalist dualists cheer, we can continue adding to the argument:

5. If temporal-parts four-dimensionalism is true, every fundamental event happens in an instant.

6. So, temporal-parts four-dimensionalism is not true.

For on temporal-parts four-dimensionalism, any temporally extended event will be grounded in its proper temporal parts.

The growing block dualist may be feeling pretty smug. But suppose that we currently have a temporally extended event E that started at t−2 and ends at the present moment t0. At an intermediate time t−1, only a proper part of E existed. A part is either partly grounded in the whole or the whole in the parts. Since the whole doesn’t exist at t−1, the part cannot be grounded in it. So the whole must be partly grounded in the part. But an event that is partly grounded in its part is not fundamental. Hence:

7. If growing block is true, every fundamental event happens in an instant.

8. So, growing block is not true.

There is one theory of time left. It is what one might call Aristotelian four-dimensionalism. Aristotelians think that wholes are prior to their parts. An Aristotelian four-dimensionalist thinks that temporal wholes are prior to their temporal parts, so that there are temporally extended fundamental events. We can then complete the argument:

9. Either presentism, temporal-parts four-dimensionalism, growing block or Aristotelian four-dimensionalism is true.

10. So, Aristotelian four-dimensionalism is true.

## Monday, November 6, 2017

### Statistically contrastive explanations of both heads and tails

Say that an explanation e of p rather than q is statistically contrastive if and only if P(p|e) > P(q|e).

For instance, suppose I rolled an indeterministic die and got a six. Then I can give a statistically contrastive explanation of why I rolled more than one (p) rather than rolling one (q). The explanation (e) is that I rolled a fair six-sided die. In that case: P(p|e)=5/6 > 1/6 = P(q|e). Suppose I had rolled a one. Then e would still have been an explanation of the outcome, but not a statistically contrastive one.
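The die example can be made fully explicit with exact arithmetic; a minimal sketch:

```python
from fractions import Fraction

outcomes = range(1, 7)  # e: a fair six-sided die was rolled

def P(event):
    # uniform probability of an event, given as a predicate on outcomes
    favorable = sum(1 for o in outcomes if event(o))
    return Fraction(favorable, len(outcomes))

p = lambda o: o > 1   # p: rolled more than one
q = lambda o: o == 1  # q: rolled one

assert P(p) == Fraction(5, 6)
assert P(q) == Fraction(1, 6)
assert P(p) > P(q)  # e is statistically contrastive for p rather than q
```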

One might try to generalize the above remarks to arrive at this thesis:

1. In indeterministic stochastic setups, there will always be a possible outcome that does not admit of a statistically contrastive explanation.

The intuitive argument for (1) is this. If one indeterministic stochastic outcome is p, either there is or is not a statistically contrastive explanation e of why p rather than not-p is the case. If there is no such statistically contrastive explanation, then the consequent of (1) is indeed true. Suppose that there is a statistically contrastive explanation e, and let q be the negation of p. Then P(p|e)>P(q|e). Thus, e is a statistically contrastive explanation of why p rather than q, but it is obvious that it cannot be a statistically contrastive explanation of why q rather than p.

The intuitive argument for (1) is logically invalid. For it only shows that e is not a statistically contrastive explanation for why q rather than p, while what needed to be shown is that there is no statistically contrastive explanation at all.

In fact, (1) is false. Consider this indeterministic stochastic situation: Alice flips a coin. There are two outcomes: heads and tails. But prior to the coin getting flipped, Bob uniformly chooses a random number r such that 0 < r < 1 and loads the coin in such a way that the chance of heads is r. Suppose that in the situation at hand r = 0.8. Let H be the heads outcome and T the tails outcome. Then here is a contrastive explanation for H rather than T:

• e1: an unfair coin with chance 0.8 of heads was flipped.

Clearly P(H|e1)=0.8 > 0.2 = P(T|e1). But suppose that instead tails was obtained. We can give a contrastive explanation of that, too:

• e2: an unfair coin with chance at least 0.2 of tails was flipped.

Given only e2, the chance of tails is somewhere between 0.2 and 1.0, with the distribution uniform. Thus, on average, given e2 the chance of tails will be 0.6: P(T|e2)=0.6. And P(H|e2)=1 − P(T|e2)=0.4. Thus, e2 is actually a statistically contrastive explanation of T. And note that something like this will work no matter what value r has as long as it’s strictly between 0 and 1.
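The value P(T|e2) = 0.6 can be checked by Monte Carlo simulation: draw r uniformly, keep only the cases where the chance of tails is at least 0.2, and flip the loaded coin once. A rough sketch (the sample size is an arbitrary choice of mine):

```python
import random

random.seed(1)

tails_flips = []
for _ in range(200_000):
    r = random.random()   # chance of heads, uniform on (0, 1)
    if 1 - r >= 0.2:      # condition on e2: chance of tails at least 0.2
        tails_flips.append(random.random() < 1 - r)  # one flip of that coin

p_tails = sum(tails_flips) / len(tails_flips)
print(round(p_tails, 2))  # approximately 0.6, matching P(T|e2)
```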

It might still be arguable that given indeterministic stochastic situations, something will lack a statistically contrastive explanation. For instance, we can give a statistically contrastive explanation of heads rather than tails, and a statistically contrastive explanation of tails rather than heads. But it does not seem that we can give a statistically contrastive explanation of why the coin was loaded exactly to degree 0.8, since that has zero probability. Of course, that’s an outcome of a different stochastic process than the coin flip one, so it doesn't support (1). And the argument needs to be more complicated than the invalid argument for (1).

### Cheap Makey Makey alternative

The Makey Makey is a cool electronic gadget that lets kids make a USB controller out of any somewhat conductive stuff, like bananas, play dough, etc. Unfortunately, it's about \$50 (there is also a \$30 clone). Also, annoyingly, it requires a ground connection for the user. I made a capacitive version that costs about \$3 using a \$2 stm32f103c8 board. It emulates either a keyboard or a gamepad/joystick.

Here are instructions.

### Projection and the imago Dei

There is some pleasing initial symmetry between how a theist (or at least Jew, Christian or Muslim) can explain features of human nature by invoking the doctrine that we are in the image of God and using this explanatory schema:

1. Humans are (actually, normally or ideally) F because God is actually F

and how an atheist can explain features attributed to God by projection:

2. The concept of God includes being actually F because humans are (actually, normally or ideally) F.

Note, however, that while schemata (1) and (2) are formally on par, schema (1) has the advantage that it has a broader explanatory scope than (2) does. Schema (1) explains a number of features (whether actual or normative) of the nature of all human beings, while schema (2) only explains a number of features of the thinking of a modest majority (the 55% who are monotheists) of human beings.

There is also another interesting asymmetry between (1) and (2). Theists can, without any damage to their intellectual system, embrace both (1) and a number of the instances of (2) that the atheist embraces, since given the imago Dei doctrine, projection of normative or ideal human features onto God can be expected to track truth with some probability. On the other hand, the atheist cannot embrace any instances of (1).

Note, too, that evolutionary explanations do not undercut (1), since there can be multiple correct explanations of one phenomenon. (This phenomenon is known to people working on Bayesian inference.)

## Saturday, November 4, 2017

### Neo-Aristotelian Perspectives on Contemporary Science

The collection Neo-Aristotelian Perspectives on Contemporary Science (eds: Simpson, Koons and Teh) is now available. It's divided into a physical sciences and a life sciences part.

My piece on the Traveling Forms interpretation is in the physical sciences part (interestingly, though, that interpretation is more about us than about physics).

## Thursday, November 2, 2017

### Four problems and a unified solution

A similar problem occurs in at least four different areas.

1. Physics: What explains the values of the constants in the laws of nature?

2. Ethics: What explains parameters in moral laws, such as the degree to which we should favor benefits to our parents over benefits to strangers?

3. Epistemology: What explains parameters in epistemic principles, such as the parameters in how quickly we should take our evidence to justify inductive generalizations, or how much epistemic weight we should put on simplicity?

4. Semantics: What explains where the lines are drawn for the extensions of our words?

There are some solutions that have a hope of working in some but not all the areas. For instance, a view on which there is a universe-spawning mechanism that induces random values of the constants in laws of nature solves the physics problem, but does little for the other three.

On the other hand, vagueness solutions to 2-4 have little hope of helping in the physics case. Actually, though, vagueness doesn’t help much in 2-4, because there will still be the question of explaining why the vague regions are where they are and why they are fuzzy in the way they are—we just shift the parameter question.

In some areas, there might be some hope of having a theory on which there are no objective parameters. For instance, Bayesianism holds that the parameters are set by the priors, and subjective Bayesianism then says that there are no objective priors. Non-realist ethical theories do something similar. But such a move in the case of physics is implausible.

In each area, there might be some hope that there are simple and elegant principles that of necessity give rise to and explain the values of the parameters. But that hope has yet to be borne out in any of the four cases.

In each area, one can opt for a brute necessity. But that should be a last resort.

In each area, there are things that can be said that simply shift the question about parameters to a similar question about other parameters. For instance, objective Bayesianism shifts the question of how much epistemic weight we should put on simplicity to the question of priors.

When the questions are so similar, there is significant value in giving a uniform solution. The theist can do that. She does so by opting for these views:

1. Physics: God makes the universe have the fundamental laws of nature it does.

2. Ethics: God institutes the fundamental moral principles.

3. Epistemology: God institutes the fundamental epistemic principles for us.

4. Semantics: God institutes some fundamental level of our language.

In each of the four cases there is a question of how God does this. And in each there is a “divine command” style answer and a “natural law” style answer, and likely others.

In physics, the “divine command” style answer is occasionalism; in ethics and epistemology it just is “divine command”; and in semantics it is a view on which God is the first speaker and his meanings for fundamental linguistic structures are normative. None of these appeal very much to me, and for the same reason: they all make the relevant features extrinsic to us.

In physics, the “natural law” answer is theistic Aristotelianism: laws supervene on the natures of things, and God chooses which natures to instantiate; theistic natural law is a well-developed ethical theory, and there are analogues in epistemology and semantics, albeit not very popular ones.

## Wednesday, November 1, 2017

### Theistic Natural Law and the Euthyphro Problem

Theistic Natural Law (TNL) theory seems to be subject to the Euthyphro problem much as divine command theory (DCT) is. On DCT, the Euthyphro problem takes the form of the question:

1. Why did God command what he commanded rather than commanding otherwise?

On TNL, the Euthyphro problem takes the form of the question:

2. Why did God create beings with the natures he did rather than creating beings with other natures?

In both cases, one can respond by talking of the essential goodness of God, by virtue of which he makes a good choice as to how to fittingly match the non-normative with the normative features of creatures. In the DCT case, God makes the match by benevolently choosing what sorts of creatures to create and what sorts of commands to give them. In the TNL case, God makes the match by benevolently choosing the non-deontic and deontic features of natures and then creating creatures with these natures. Thus, in the DCT case, God has reason to coordinate the sociality of creatures with the command to cooperate, while in the TNL case God has reason to actualize natures that either both include sociality and the duty to cooperate or to actualize natures that include neither.

So in what way is TNL better off than DCT with regard to the Euthyphro problem? The one thing I can think of in the vicinity is this: TNL allows for there to be deontic features that necessarily every nature includes, and it allows for there to be some deontic features of creatures that are entailed by the non-deontic features. For instance, perhaps every possible nature of an agent includes a prohibition against pointless imposition of torture, and every possible nature of a linguistic agent includes a prohibition against lying. But I am not sure this difference is really relevant to the Euthyphro problem.

I do prefer TNL to DCT, but not because of the Euthyphro problem. My reason for the preference is that many moral obligations appear to be intrinsic features of us.

Of course, the above arguments presuppose a particular picture of how natural law works. But I like that picture.