Thursday, November 30, 2017

Self-sacrifice and bigotry

Case 1: A child is drowning in a dirty pond. You can easily pull out the child. But you’ve got cuts all over your dominant arm and the water is full of nasty bacteria and medical help is a week away. If you go in the water to pull out the child, your arm will get infected, become gangrenous and in a week it will be amputated. There will be no social losses or gains to you.
Case 2: A child is drowning in a clean pond. You can easily pull out the child. But the child is a member of a despised minority group, and you will be ostracized by your friends and family for life for your rescue. There will be no physical losses or gains to you.

Here is my intuition. In both cases, it would be a good thing to rescue the child. But in Case 1, unless you have special duties (e.g., it’s your own child), you do not have a duty to rescue given the physical costs. In Case 2, however, you do have a duty to rescue, despite the social costs.

The difference between the two cases does not, I think, lie in its being worse to lose an arm than to be ostracized. Imagine your community has a rite of passage that involves swimming in the dirty pond with the cuts on your arm, and you’d be ostracized if you don’t. You might well reasonably judge it worthwhile—but still, I think, the intuition remains that in Case 2 you ought to pull out the child, while in Case 1 it’s supererogatory. So it seems then you might have a duty to undertake the greater sacrifice (facing social stigma in Case 2) without a duty to undertake the lesser sacrifice (amputation in Case 1). But for simplicity let’s just suppose that the harms to you in the two cases are on par.

Is it that physical harm excuses one from the duty to rescue the child but social harm does not? I don’t think so.

Case 3: A child is being murdered by drowning in a clean pond. You can easily pull out the child. But if you do, the murderer will punish you for it by transporting you away from your home community to a foreign community where you will never learn the difficult language and hence will not have friends.

We can set this up so the harm in all three cases is equal. But my intuition is that Case 3 is like Case 1: in both cases it is supererogatory to rescue the child, but there is no duty to do so.

In Cases 2 and 3 we have equal social harms, but I feel a difference. (Maybe you don’t!) Here’s one consideration that would explain the difference. That an action gains one the praise and friendship of bigots qua bigots does not count much in favor of the action, even if, and perhaps even especially if, such praise and friendship would make one’s life significantly more pleasant. Similarly, that an action loses one the friendship of bigots, and does so precisely through their bigotry, is not much of a consideration against the action. I say “not much”, because there might be instrumental gains and losses in both cases to be accounted for.

Here’s a second consideration. Perhaps if I refrain from doing something directly because doing it will lose me bigots’ friendship or gain me their stigma, I am thereby complicit in the bigotry. In Case 2, then, I need to ignore the direct loss of goods of social connectedness in considering whether to rescue the child. I need to say to myself: “When those are the conditions of their friendship, so much the worse for their friendship.” In Case 3, I have similar social losses, but I don't lose the friendship of bigots qua bigots, so the loss counts a lot more.

But note that one can still legitimately consider the instrumental harms from the loss of goods of social connectedness. Consider:

Case 4: A child is drowning in a clean pond, but you have a wound that will become gangrenous and force amputation absent medical help. You can easily pull out the child. But the child is a member of a despised minority group, and if you rescue the child, the only doctor in town will refuse to have anything to do with you. As a result, your wound will become gangrenous by the time you find another doctor, and you will require amputation.

I think in Case 4, you are permitted not to rescue the child, just as in Case 1.

Wednesday, November 29, 2017

Inductive evidence of the existence of non-spatial things

Think about other plausibly fundamental qualities beyond location and extension: thought, charge, mass, etc. For each one of these, there are things that have it and things that don’t have it. So we have some inductive reason to think that there are things that have location and things that don’t, things that have extension and things that don’t. Admittedly, the evidence is probably pretty weak.

Tuesday, November 28, 2017

Wronger and wronging

Here’s an interesting thing. An act doesn’t necessarily become any more wrong for wronging someone.

Alice and Bob each come across a derelict spaceship. Their sensors show that there is intelligent life aboard. Each blasts the respective spaceship as target practice. Bob’s sensors malfunctioned: there was no intelligent life on the ship he blasted. Alice’s sensors were just fine. Alice wronged the people she killed. Bob wronged no one, as there was no one there to be wronged. But what Bob did was no less wrong than what Alice did.

Note 1: Bob’s case differs from standard cases of attempted murder. For in standard cases of attempted murder, the intended victim is wronged.

Note 2: I am not claiming that Bob wrongs no one. Bob wrongs both God and himself. But Alice also wrongs God and herself, just as much as Bob does, and additionally wrongs the people she kills. That additional wronging doesn’t make her act wronger, though.

Note 3: One might argue that Bob and Alice wrong all the people who have the property that they might (epistemically? alethically?) have been on the ship. Sure, but what if there are no such people in Bob's case? Perhaps Bob, unbeknownst to himself, is alone in his universe.

An anti-Aristotelian argument for divine simplicity

The doctrine of divine simplicity fits comfortably with Aquinas’s Aristotelian framework. But it is interesting that anti-Aristotelianism also leads to divine simplicity.

  1. The proper parts are more fundamental than the whole. (Mereological anti-Aristotelianism.)

  2. Nothing is more fundamental than God.

  3. So, God has no proper parts.

Of course, as an Aristotelian I reject 1, so while I accept the conclusion of this argument, I can’t use the argument myself.

Monday, November 27, 2017


It’s notoriously hard to characterize the physical precisely enough to attack or defend physicalism. To attack physicalism, however, it is enough to attack a characterization broader than physicalism, and to defend physicalism, a narrower characterization will do.

Here’s a suggestion along the “broader” lines. We can characterize reductive physicalism over-broadly as:

  • Reductive first-orderism: All facts about the concrete (variants: contingent, spatiotemporal) features of the world reduce to first-order facts.

This may take in some theories other than what one intuitively counts as reductive physicalism, but if the object is criticism, that’s all we need.

Note how this characterization nicely shows how paradigm examples of magic would violate reductive physicalism: for paradigm examples of magic involve causation irreducibly by virtue of the meaning of a spell, gesture, etc., and meaning is a higher-order property. It also shows why irreducible Aristotelian teleology has no place in a reductive physicalist story: for teleological properties are second-order (I think).

Moreover, if we see reductive physicalism in the above way, it’s also easy to see that it’s false, by an argument of Leon Porter. For any first-order fact can be expressed in a first-order language. But, famously, the property of truth cannot be reduced to properties expressible in a first-order language (Tarski’s indefinability of truth; or, more simply, note that if you could express something equivalent to the property of truth in a first-order language, you could express the Liar Sentence in a first-order language). And some concrete objects, namely inscriptions, have this property of truth. Hence, some concrete objects have a property that cannot be expressed in first-order terms, contrary to reductive first-orderism.

Change in transubstantiation

The two main parts of the doctrine of transubstantiation that get philosophically discussed are that after consecration we have:

  • Real Presence: Christ's body and blood are really there.
  • Real Absence: the bread and wine are no longer there.

But there may be another part: that the bread and wine change into the body and blood rather than simply being replaced by the body and blood. Certainly the Council of Trent uses the language of "conversion" of the bread and wine, but it is not completely clear to me that they mean to define there to be something more than replacement. Aquinas talks unclearly (to me) of the substantial change as a kind of "order" in the two substances.

Besides the general puzzle of how change differs from replacement, there are at least two philosophical difficulties about the change. The first is that on some versions—not mine—of Aristotelian metaphysics, what makes substantial change be a change is the persistence of matter. But there is no matter persisting here (indeed, Aquinas's remark emphasizes this). The second is that what the bread and wine change into, namely Christ's body, is already there. But it seems that if x changes into y, then y doesn't exist prior to the change.

Leibniz considers a theory on which the bread and wine change into new parts of Christ's body. This solves the second problem, but at the expense of having to say that the bread changes into a mere part of Christ's body, which does not appear to be what the Church means. Trent does say that whole Christ comes to be present. I suppose one could have a hybrid theory on which the bread and wine change into new parts of Christ's body, and the rest of Christ's body then additionally comes to be present, but not by conversion. While I do not have decisive textual evidence, this does not seem to me to be what Trent means. And it is grotesque to think that Christ gets fatter at transubstantiation.

It could well be that the Council doesn't mean anything beefy by the "conversion"; perhaps all it comes to is an "order" between the two substances (cf. Aquinas), an order constituted by non-coincidental replacement in the same location. That would simplify things metaphysically. But I want to try for something metaphysically thicker.

Here's the thought. On my Aristotelian metaphysics, nothing persists in substantial change. But when substance x changes into substance y, a rather special causal power is triggered in x: the causal power of giving rise to y while perishing. The exercise of such a causal power is what makes it be the case that x has changed into y. There isn't any matter persisting in the change, so the first of the two philosophical problems with the Eucharistic change disappears. What about the second? Here's my suggestion. Normally, the existence of Christ's body at later times is caused by its existence at earlier times. But what if we say that the bread miraculously gets a special causal power, the power of causing Christ's body to exist just as the bread perishes? Then the existence of Christ's body after consecration will be causally overdetermined by two things: the bread's exercising that causal power and Christ's body exercising its ordinary causal power to make itself persist.

The bread in perishing is an overdetermining cause of the existence of Christ's body, and that is exactly how substantial change happens on my view. The main metaphysical difference here is that normally substantial change is not overdetermined, while here it is.

Tuesday, November 21, 2017


A standard definition of omniscience is:

  • x is omniscient if and only if x knows all truths and does not believe anything but truths.

But knowing all truths and not believing anything but truths is not good enough for omniscience. One can know a proposition without being certain of it, assigning a credence less than 1 to it. But surely such knowledge is not good enough for omniscience. So we need to say: “knows all truths with absolute certainty”.

I wonder if this is good enough. I am a bit worried that maybe one can know all the truths in a given subject area but not understand how they fit together—knowing a proposition about how they fit together might not be good enough for this understanding.

Anyway, it’s kind of interesting that even apart from open theist considerations, omniscience isn’t quite as cut and dried as one might think.

Perfect rationality and omniscience

  1. A perfectly rational agent who is not omniscient can find itself in lottery situations, i.e., situations where it is clear that there are many options, exactly one of which can be true, with each option having approximately the same epistemic probability as any other.

  2. A perfectly rational agent must believe anything there is overwhelming evidence for.

  3. A perfectly rational agent must have consistent beliefs.

  4. In lottery situations, there is overwhelming evidence for each of a set of inconsistent claims, namely for the claims that one of options 1,2,3,… is the case, but that option 1 is not the case, that option 2 is not the case, that option 3 is not the case, etc.

  5. So, in lottery situations, a perfectly rational agent has inconsistent beliefs. (2,4)

  6. So, a perfectly rational agent is never in a lottery situation. (3,5)

  7. So, a perfectly rational agent is omniscient. (1,6)
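The lottery structure in premise 4 can be made numerically concrete. Here is a toy sketch of my own (the value n = 1000 is an illustrative choice, not from the text): each member of the inconsistent set of claims individually enjoys overwhelming probability.

```python
# Toy illustration of premise 4: in a fair n-option lottery, each claim
# in a jointly inconsistent set has overwhelming probability.

n = 1000  # number of lottery options (illustrative choice)

# Claim D: "one of options 1..n is the case" -- probability 1.
p_disjunction = 1.0

# Claims N_i: "option i is not the case" -- each has probability (n-1)/n.
p_each_negation = (n - 1) / n

# Each claim individually has overwhelming evidence behind it:
assert p_disjunction == 1.0
assert p_each_negation > 0.99

# Yet D together with all the N_i is inconsistent: if every option
# fails to obtain, then no option obtains, contradicting D.
```

So a believer who follows the evidence claim-by-claim ends up with an inconsistent belief set, which is premise 5.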

The standard thing people like to say about arguments like this is that they are a reductio of the conjunction of the premises 2 through 4. But I think it might be interesting to take it as a straightforward argument for the conclusion 7. Maybe one cannot separate out procedural epistemic perfection (perfect rationality) from substantive epistemic perfection (omniscience).

That said, I am inclined to deny 3.

It’s worth noting that this yields another variant on an argument against open theism. For even though I am inclined to think that inconsistency in beliefs is not an imperfection of rationality, it is surely an imperfection simpliciter, and hence a perfect being will not have inconsistent beliefs.

Saturday, November 18, 2017

Bayesianism and anomaly

One part of the problem of anomaly is this. If a well-established scientific theory seems to predict something contrary to what we observe, we tend to stick to the theory, with barely a change in credence, while being dubious of the auxiliary hypotheses. What, if anything, justifies this procedure?

Here’s my setup. We have a well-established scientific theory T and (conjoined) auxiliary hypotheses A, and T together with A uncontroversially entails the denial of some piece of observational evidence E which we uncontroversially have (“the anomaly”). The auxiliary hypotheses will typically include claims about the experimental setup, the calibration of equipment, the lack of further causal influences, mathematical claims about the derivation of not-E from T and the above, and maybe some final catch-all thesis like the material conditional that if T and all the other auxiliary hypotheses obtain, then E does not obtain.

For simplicity I will suppose that A and T are independent, though of course that simplifying assumption is rarely true.

I suspect that often this happens: T is much better confirmed than A. For T tends to be a unified theoretical body that has been confirmed as a whole by a multitude of different kinds of observations, while A is a conjunction of a large number of claims that have been individually confirmed. Suppose, say, that P(T)=0.999 while P(A)=0.9, where all my probabilities are implicitly conditional on some background K. Given the observation E, and the fact that T and A entail its negation, we now know that the conjunction of T and A is false. But we don’t know where the falsehood lies. Here’s a quick and intuitive thought. There is a region of probability space where the conjunction of T and A is false. That area is divided into three sub-regions:

  1. T is true and A is false

  2. T is false and A is true

  3. both are false.

The initial probabilities of the three regions are, respectively, 0.0999, 0.0009 and 0.0001. We know we are in one of these three regions, and that’s all we now know. Most likely we are in the first one, and the probability that we are in that one given that we are in one of the three is around 0.99. So our credence in T has gone down from three nines (0.999) to two nines (0.99), but it’s still high, so we get to hold on to T.
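The region arithmetic can be checked in a few lines of Python (my own sketch; the priors are the ones stipulated above):

```python
# Check the intuitive three-region calculation.
# Priors: P(T) = 0.999, P(A) = 0.9, with T and A independent.
p_T, p_A = 0.999, 0.9

r1 = p_T * (1 - p_A)        # T true,  A false
r2 = (1 - p_T) * p_A        # T false, A true
r3 = (1 - p_T) * (1 - p_A)  # both false

# Given that the conjunction of T and A is false, the probability
# that we are in the first region:
p_T_given_not_both = r1 / (r1 + r2 + r3)
print(round(p_T_given_not_both, 3))  # about 0.99
```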

Still, this answer isn’t optimistic. A move from 0.999 to 0.99 is actually an enormous decrease in confidence.

But there is a much more optimistic thought. Note that the above wasn’t a real Bayesian calculation, just a rough informal intuition. The tip-off is that I said nothing about the conditional probabilities of E on the relevant hypotheses, i.e., the “likelihoods”.

Now the setup ensures:

  1. P(E|A ∧ T)=0.

What can we say about the other relevant likelihoods? Well, if some auxiliary hypothesis is false, then E is up for grabs. So, conservatively:

  2. P(E|∼A ∧ T)=0.5
  3. P(E|∼A ∧ ∼T)=0.5

But here is something that I think is really, really interesting. I think that in typical cases where T is a well-established scientific theory and A ∧ T entails the negation of E, the probability P(E|A ∧ ∼T) is still low.

The reason is that all the evidence that we have gathered for T even better confirms the hypothesis that T holds to a high degree of approximation in most cases. Thus, even if T is false, the typical predictions of T, assuming they have conservative error bounds, are likely to still be true. Newtonian physics is false, but even conditionally on its being false we take individual predictions of Newtonian physics to have a high probability. Thus, conservatively:

  4. P(E|A ∧ ∼T)=0.1

Very well, let’s put all our assumptions together, including the ones about A and T being independent and the values of P(A) and P(T). Here’s what we get:

  5. P(E|T)=P(E|A ∧ T)P(A|T)+P(E|∼A ∧ T)P(∼A|T)=0.05
  6. P(E|∼T)=P(E|A ∧ ∼T)P(A|∼T)+P(E|∼A ∧ ∼T)P(∼A|∼T) = 0.14.

Plugging this into Bayes’ theorem, we get P(T|E)=0.997. So our credence has crept down, but only a little: from 0.999 to 0.997. This is much more optimistic (and conservative) than the big move from 0.999 to 0.99 that the intuitive calculation predicted.
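The full update can be verified explicitly (a sketch of my own; every value is one stipulated above):

```python
# Full Bayesian update from the stipulated priors and likelihoods.
p_T, p_A = 0.999, 0.9

p_E_AT   = 0.0   # P(E |  A &  T): the theory plus auxiliaries entail not-E
p_E_nAT  = 0.5   # P(E | ~A &  T): auxiliaries false, E up for grabs
p_E_AnT  = 0.1   # P(E |  A & ~T): T false but still approximately right
p_E_nAnT = 0.5   # P(E | ~A & ~T)

# With A and T independent, P(A|T) = P(A|~T) = P(A).
p_E_T  = p_E_AT * p_A + p_E_nAT * (1 - p_A)    # = 0.05
p_E_nT = p_E_AnT * p_A + p_E_nAnT * (1 - p_A)  # = 0.14

# Bayes' theorem:
p_T_E = p_E_T * p_T / (p_E_T * p_T + p_E_nT * (1 - p_T))
print(round(p_T_E, 3))  # about 0.997
```

The posterior is roughly 0.997, agreeing with the calculation in the text.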

So, if I am right, at least one of the reasons why anomalies don’t do much damage to scientific theories is that when the scientific theory T is well-confirmed, the anomaly is not only surprising on the theory, but it is surprising on the denial of the theory—because the background includes the data that makes T “well-confirmed” and would make E surprising even if we knew that T was false.

Note that this argument works less well if the anomalous case is significantly different from the cases that went into the confirmation of T. In such a case, there might be much less reason to think E won’t occur if T is false. And that means that anomalies are more powerful as evidence against a theory the more distant they are from the situations we explored before when we were confirming T. This, I think, matches our intuitions: we would put almost no weight on someone finding an anomaly in the course of an undergraduate physics lab—not just because an undergraduate student is likely doing it (it could be the professor testing the equipment, though), but because this is ground well gone over, where we expect the theory’s predictions to hold even if the theory is false. But if new observations of the center of our galaxy don’t fit our theory, that is much more compelling—in a regime so different from many of our previous observations, we might well expect that things would be different if our theory were false.

And this helps with the second half of the problem of anomaly: How do we keep from holding on to T too long in the light of contrary evidence, how do we allow anomalies to have a rightful place in undermining theories? The answer is: To undermine a theory effectively, we need anomalies that occur in situations significantly different from those that have already been explored.

Note that this post weakens, but does not destroy, the central arguments of this paper.

A consideration making the theodical defeat of evil a bit easier

For an evil to be defeated, in the theodical sense, the evil needs to be not only compensated for in the sufferer’s life, but it needs to be interwoven into a good in the sufferer’s life in such a way that the meaning of the evil is radically transformed in that life.

A requirement of the defeat of evil guards against theodicies where the sufferer gets the short end of the stick, the evil being permitted for the sake of goods to other individuals, or abstract impersonal goods like elegant laws of nature. Defeat appears to have an innate intrapersonality to it.

It occurs to me, however, that in heaven the requirement of defeat can sometimes be met through goods that happen to someone other than the sufferer. For all in heaven are friends of the best sort, and as Aristotle says, a friend (of the best sort) is another self, so that what happens to the friend happens to one. So if Alice has suffered an evil and Bob got a proportionate good out of God’s permitting the evil to Alice, if Alice and Bob are friends in the deepest sense, then the evil that happened to Alice is just as much a part of Bob’s life, and the good to Bob is just as much a part of Alice’s. Thus, defeat can be achieved interpersonally given friendship, without any worries about Alice getting the short end of the stick.

And abstract impersonal goods—like aesthetic ones—can become deeply personal through appreciation.

Thus, the intrapersonality condition in defeat can be met more easily than seems at first sight.

Thursday, November 16, 2017

Truth-value open theism

Consider the view that there are truth values about future contingents, but (as Swinburne and van Inwagen think) God doesn’t know future contingents. Call this “truth-value open theism”.

  1. Necessarily, a perfectly rational being believes anything there is overwhelming evidence for.

  2. Given truth-value open theism, God has overwhelming but non-necessitating evidence for some future contingent proposition p.

  3. If God has overwhelming but non-necessitating evidence for some contingent proposition p, there is a possible world where God has overwhelming evidence for p and p is false.

  4. So, if truth-value open theism is true, either (a) there is a possible world where God fails to believe something he has overwhelming evidence for or (b) there is a possible world where God believes something false. (2-3)

  5. So, if truth-value open theism is true, either (a) there is a possible world where God fails to be perfectly rational or (b) there is a possible world where God believes something false. (1,4)

  6. It is an imperfection to possibly fail to be perfectly rational.

  7. It is an imperfection to possibly believe something false.

  8. So, if truth-value open theism is true, God has an imperfection. (6-7)

And God has no imperfections.

To argue for (2), just let p be the proposition that somebody will freely do something wrong over the next month. There is incredibly strong inductive evidence for (2).

A version of the cosmological argument from preservation

Suppose that all immediate causation is simultaneous. The only way to make this fit with the obvious fact that there is diachronic causation is to make diachronic causation be mediate. And there is one standard way of making mediate diachronic causation out of immediate synchronic causation: temporally extended causal relata. Suppose that A lasts from time 0 to time 3, B lasts from time 2 to time 5, and C lasts from time 4 to time 10 (these can be substances or events). Then A can synchronically cause B at time 2 or 3, B can synchronically cause C at time 4 or 5, and one can combine the two immediate synchronic causal relations into a mediate diachronic causal relation between A and C, even though there is no time at which we have both A and C.

The problem with this approach is explaining the persistence of A, B and C over time. If we believe in irreducibly diachronic causation, then we can say that B’s existence at time 2 causes B’s existence at time 3, and so on. But this move is not available to the defender of purely simultaneous causation, except maybe at the cost of an infinite regress: maybe B’s existence from time 2.00 to time 2.75 causes B’s existence from time 2.50 to time 3.00; but now we ask about the causal relationship between B’s existence at time 2.00 and time 2.75.

So if we are to give a causal explanation of B’s persistence from time 2 to time 5, it will have to be in terms of the simultaneous causal efficacy of some other persisting entity. But this leads to a regress that is intuitively vicious.

Thus, we must come at the end to at least one persisting entity E such that E’s persistence from some time t1 to some time t2 has no causal explanation. And if we started our question with asking about the persistence of something that persists over some times today, then these times t1 and t2 are today.

Even if we allow for some facts to be unexplained contingent “brute” facts, the persistence of ordinary objects over time shouldn’t be like that. Moreover, it doesn’t seem right to suppose that the ultimate explanations of the persistence of objects involve objects whose own persistence is brute. For that makes it ultimately be a brute fact that reality as a whole persists, a brute and surprising fact.

So, plausibly, we have to say that although E’s persistence from t1 to t2 has no causal explanation, it has some other kind of explanation. The most plausible candidate for this kind of explanation is that E is imperishable: that it is logically impossible for E to perish.

Hence, if all immediate causation is simultaneous, very likely there is something imperishable. And the imperishable entity or entities then cause things to exist at the time at which they exist, thereby explaining their persistence.

On the theory that God is the imperishable entity, the above explains why for Aquinas preservation and creation are the same.

(It’s a pity that I don’t think all immediate causation is simultaneous.)

Problem: Suppose E immediately makes B persist from time 2 to time 4, by immediately causing it to exist at all the times from 2 to 4. Surely, though, E exists at time 4 because it existed at time 2. And this “because” is hard to explain.

Response: We can say that B exists at time 4 because of its esse (or act of being) at time 2, provided that (a) B’s esse at time 2 is its being caused by E to exist at time 2, and (b) E causes B to exist at time 4 because (non-causally because) E caused B to exist at time 2. But once we say that B exists at time 4 because of its very own esse at time 2, it seems we’ve saved the “because” claim in the problem.

Two moment presentism

The biggest problem for presentism is the problem of diachronic relations, especially causation. If E is earlier than F and E causes F, then at any given time, this instance of causation will have to either be a relation between two non-existent relata or a relation between one existent and one non-existent relatum, and this is problematic. Here’s a variant on presentism that solves that problem.

Suppose time is discrete, but instead of supposing that a single moment is always actual, suppose that always two successive moments are actual. Thus, if the moments are numbered 0, 1, 2, 3, …, first 0 and 1 are actual, then 1 and 2 are actual, then 2 and 3 are actual, and so on. We then say that the present contains both of the successive moments: the present is not a moment. It is never the case that a single moment is actual, except maybe at the beginning or end of the sequence (those are variants whose strengths and weaknesses need evaluation). Strictly speaking, then, we should label times with pairs of moments: time 1–2, time 2–3, etc. (There are now two variants: on one of them, time 2–3 consists of nothing but the two moments; on the other, it also has an “in between”.)

We then introduce two primitive tense operators: “Just was” and “Is about to be”. Thus, if an object is yellow from times 0 through 2 and blue from time 3 onward, then at time 2–3 it just was yellow and is about to be blue. We can say that an object is F at time 2–3, where Fness is something stative rather than processive, provided that it just was F and is about to be F. We might want to say that it is changing from being F1 to being F2 if it just was F1 and is about to be F2 instead (or maybe there is something more to change than that).

We can now get cases of direct diachronic causation between events at neighboring moments, and because both of the moments are present, our “two-moment presentist” will say that when the two moments are both present, causation is a relation between two existent relata, one at the earlier moment and the other at the later. Of course, there will be cases of indirect diachronic causation to talk about, where some event at time 2 causes an event at time 4 by means of an event at time 3, but the two-moment presentist can break this up into two direct instances of diachronic causation, one of which did/does/will take place at time 2–3 and the other of which did/does/will take place at time 3–4.

I bet this view is in the literature. It’s too neat a solution to the problem not to have been noticed.

A spatial "in between"

In my last post I offered the suggestion that someone who thinks time is discrete has reason to think that there is something in between the moments—a continuous unbroken (but perhaps breakable) interval.

I think a similar thought can be had about discrete space.

Consideration 1: Imagine that space is discrete, arranged on a grid pattern, and I touch left and right index fingers together. It could happen that the rightmost spatial points of my left fingertip are side-by-side with the leftmost spatial points of my right fingertip, but nonetheless my hands aren’t joined into a single solid. One way to represent this setup would be to say that a spatial point in my left fingertip is right next to a spatial point in my right fingertip, but the interval between these spatial points is not within me.

But positing a spatial “in between” isn’t the only solution: distinguishing internal and external geometry is another.

Consideration 2: Zeno’s Stadium argument can be read as noting that if space and time are discrete, then an object moving at one point per unit of time rightward and an equal length object moving at one point per unit of time leftward can pass by each other without ever being side-by-side. Positing an “in between”, such that objects may be in between places when they are in between times, may make this less problematic.
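The Stadium point can be made vivid with a toy discrete model (my own illustration; the grid size and speeds are illustrative choices): two point-objects swap sides in one tick without ever occupying the same cell at any moment.

```python
# Toy discrete Stadium: A moves right one cell per tick, B moves left
# one cell per tick. They pass each other between t=0 and t=1, yet at
# no moment are they at the same cell.

a_positions = [0, 1]  # A: cell 0 at t=0, cell 1 at t=1
b_positions = [1, 0]  # B: cell 1 at t=0, cell 0 at t=1

for t in (0, 1):
    # At every actual moment the two objects occupy distinct cells.
    assert a_positions[t] != b_positions[t]

# They have swapped sides, so the passing "happened", but it happened
# at no moment -- which is what an "in between" of moments would
# give it room to do.
```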

Wednesday, November 15, 2017

A non-reductive eternalist theory of change

It is sometimes said that B-theorists see change as reducible to temporal variation of properties—being non-F at t1 but F at t2 (the “at-at theory of change”)—while A-theorists have a deeper view of change.

But isn’t the A-theorist’s view of change just something like: having been non-F but now being F? But that’s just as reductive as the B-theorist’s at-at theory of change, and it seems just as much to be a matter of temporal variation. Both approaches have this feature: they analyze change in terms of the having and not having of a property. Note, also, that the A-theorist who gives the having-been-but-now-being story about change is committed to the at-at theory being logically sufficient for change from being non-F to being F.

I think there may be something to the intuition that the at-at theory doesn’t wholly capture change. But moving to the A-theory does not by itself solve the problem. In fact, I think the B-theory can do better than the best version of the A-theory.

Let me sketch an Aristotelian story about time. Time is discrete. It has moments. But it is not exhausted by moments. In addition to moments there are intervals between moments. These intervals are in fact undivided, though they might be divisible (Aristotle thinks they are). At moments, things are. Between moments, things become. Change is when at one moment t1 something is non-F, at the next moment t2 it is F, and during the interval between t1 and t2 it is changing from non-F to F.

On this story, the at-at theory gives a necessary condition for changing from non-F to F, but perhaps not a sufficient one. For suppose temporally gappy existence is possible, so that an object can cease to exist and come back. Then it is conceivable that an object exist at t1 and at t2, but not during the interval between t1 and t2. Such an object might be brought back into existence at t2 with the property of Fness which it lacked at t1, but it wouldn’t have changed from being non-F to being F.

But there is a serious logical difficulty with the above story: the law of excluded middle. Suppose that a banana turns from non-blue (say, yellow) to blue over the interval I from t1 to t2. What happens during the interval? By excluded middle, the banana is non-blue or blue. But which is it? It cannot be non-blue on a part of the interval I and blue on another part, for that would imply a subdivision of the interval on the Aristotelian view of time. So it must be blue over the whole interval or non-blue over the whole interval. But neither option seems satisfactory. The interval is when it is changing from non-blue to blue; it shouldn’t already be at either endpoint during the interval. Thus, it seems, during I the banana is neither non-blue nor blue, which seems a contradiction.

But the B-theorist has a way of blocking the contradiction. She can take one of the standard B-theoretic solutions to the problem of temporary intrinsics and use that. For instance, she can say that the banana is neither blue-during-I nor non-blue-during-I. There is no contradiction here, nor any denial of excluded middle.

What the theory denies is temporalized excluded middle:

  1. For any period of time u, either s during u or (not s) during u

but it affirms:

  2. For any period of time u, either s during u or not (s during u).

A typical presentist is unable to say that. For a typical presentist thinks that if u is present, then s during u if and only if s simpliciter, so that (1) follows from (2), at least if u is present (and then, generalizing, even if it’s not). Such a typical presentism, which identifies present truth with truth simpliciter, is, I think, the best version of the A-theory.

Thinking of time as made up of moments and intervals is, I think, quite fruitful.

Tuesday, November 14, 2017

Freedom, responsibility and the open future

Assume the open futurist view on which freedom is incompatible with there being a positive fact about what I choose, and so there are no positive facts about future (non-derivatively) free actions.

Suppose for simplicity that time is discrete. (If it’s not, the argument will be more complicated, but I think not very different.) Suppose that at t2 I freely choose A. Let t1 be the preceding moment of time.


  1. At t2, it is already a fact that I choose A, and so I am no longer free with respect to A.

  2. At t1, I am still free with respect to choosing A, but I am not yet responsible with respect to A.


  3. At no time am I both free and responsible with respect to A.

This seems counterintuitive to me.

Open theism and divine perfection

  1. It is an imperfection to have been close to certain of something that turned out false.

  2. If open theism is true, God was close to certain of propositions that turned out false.

  3. So, if open theism is true, God has an imperfection.

  4. God has no imperfections.

  5. So, open theism is not true.

I think (1) is very intuitive and (4) is central to theism. It is easy to argue for (2). Consider a giant sentence of the form:

  6. Alice’s first free choice on Monday is F1, Bob’s first free choice on Monday is F2, Carol’s first free choice on Monday is F3, …

where the list of names ranges over the names of all people living on Monday, and the Fi are "right", "not right" and "not made" (the last means that the agent will not make any free choices on Monday).

Exactly one proposition of the form (6) ends up being true by the end of Monday.

Suppose we’re back on the Sunday before that Monday. Absent the kind of knowledge of the future that the open theist denies to God, God will rationally assign probabilities to propositions of the form (6). These probabilities will all be astronomically low. Even though Alice may be very virtuous and her next choice is very likely to be right, and Bob is vicious and his next choice is very likely to be wrong, etc., given that any proposition of the form (6) has 7.6 billion conjuncts, the probability of that proposition is tiny.
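To see just how astronomically low these probabilities are, here is a back-of-the-envelope calculation; the per-conjunct probability of 0.99 is a hypothetical figure of my own, and the conjuncts are treated as independent for simplicity.

```python
import math

# 7.6 billion conjuncts, each given a (hypothetical) probability of 0.99
# and treated as independent. Compute log base 10 of the conjunction's
# probability rather than the probability itself, which would underflow.
n_conjuncts = 7_600_000_000
p_each = 0.99
log10_p = n_conjuncts * math.log10(p_each)
```

The result is about −33 million: the conjunction has probability on the order of 10 to the −33,000,000, so God would be all but certain of its negation.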

Thus, on Sunday God assigns minuscule probabilities to all the propositions of the form (6), and hence God is close to certain of the negations of all such propositions. But come Tuesday, one of these negated propositions turns out to be false. Therefore, on Tuesday—i.e., today—there is a proposition that turned out false that God was close to certain of. And that yields premise (2).

(I mean all my wording to be neutral between the version of open theism where future contingents have a truth value and the one where they do not.)

Moreover, even without considerations of perfections, being close to certain of something that will turn out to be false is surely inimical to any plausible notion of omniscience.

Monday, November 13, 2017

Flying rings

My five-year-old has been really enjoying our Aerobie Pro flying ring, but it has too much range to use at home or in a backyard. The patent has expired, so I designed a 3D-printable version with a similar airfoil profile and customizable diameter and wing chord. The inner one is 100mm in diameter (20mm chord), and can be used indoors. Here are the files.

Open theism and utilitarianism

Here’s an amusing little fact. You can’t be both an open theist and an act utilitarian. For according to the act utilitarian, to fail to maximize utility is wrong. It is impossible for God to do the wrong thing. But given open theism, it does not seem that God can know enough about the future to be able, of necessity, to maximize utility.

Thursday, November 9, 2017

Proportionality in Double Effect is not a simple comparison

It is tempting to make the final “proportionality” condition of the Principle of Double Effect say that the overall consequences of the action are good or neutral, perhaps after screening off any consequences that come through evil (cf. the discussion here).

But “good or neutral” is not a necessary condition for permissibility. Alice is on a bridge above Bob, and sees an active grenade roll towards Bob. If she does nothing, Alice will be shielded by the bridge from the explosion. But instead she leaps off the bridge and covers the grenade with her body, saving Bob’s life at the cost of her own.

If “good or neutral” consequences are required for permissibility, then to evaluate the permissibility of Alice’s action it seems we would need to evaluate whether Alice’s death is a worse thing than Bob’s. Suppose Alice owns three goldfish while Bob owns two goldfish, and in either case the goldfish will be less well cared for by the heirs (and to the same degree). Then Alice’s death is mildly worse than Bob’s death, other things being equal. But it would be absurd to say that Alice acted wrongly in jumping on the grenade because of the impact of this act on her goldfish.

Thus, the proportionality condition in PDE needs to be able to tolerate some differences in the size of the evils, even when these differences disfavor the course of action that is being taken. In other words, although the consequences of jumping on the grenade are slightly worse than those of not doing so, because of the impact on the goldfish, the bad consequences of jumping are not disproportionate to the bad consequences of not jumping.

On the other hand, if it was Bob’s goldfish bowl, rather than Bob, that was near the grenade, the consequences of jumping would be disproportionate to the consequences of not jumping, since Alice’s death is disproportionately bad as compared to the death of Bob’s goldfish.

Objection: The initial case where Alice jumps to save Bob’s life fails to take into account the fact that Alice’s act of self-sacrifice adds great value to the consequences of jumping, because it is a heroic act of self-sacrifice. This added increment of value outweighs the loss to Alice’s extra goldfish, and so I was incorrect to judge that the consequences are mildly negative.

Response: First, it seems to be circular to count the value of the act itself when evaluating the act’s permissibility, since the act itself only has positive value if it is permissible. And anyway one can tweak the case to avoid this difficulty. Suppose that it is known that if Alice does not jump on the grenade, Carl, who is standing beside her, will. And Carl only owns one goldfish. Then whether Alice jumps or not, the world includes a heroic act. And it is better that Carl jump than that Alice do so, other things being equal, as Carl has only one goldfish depending on him. But it is absurd that Alice is forbidden from jumping in order that a man with fewer goldfish might do it in her place.

Question: How much of a difference in value can proportionality tolerate?

Response: I don’t know. And I suspect that this is one of those parameters in ethics that needs explaining.

A simple "construction" of non-measurable sets from coin-toss sequences

Here’s a simple “construction” of a non-measurable set out of coin-toss sequences, i.e., of an event that doesn’t have a well-defined probability, going back to Blackwell and Diaconis, but simplified by me not to use ultrafilters. I’m grateful to John Norton for drawing my attention to this.

Let Ω be the set of all countably infinite coin-toss sequences. If a and b are two such sequences, say that a ∼ b if and only if a and b differ only in finitely many places. Clearly ∼ is an equivalence relation (it is reflexive, symmetric and transitive).

For any infinite coin-toss sequence a, let ra be the reversed sequence: the one that is heads wherever a is tails and vice-versa. For any set A of sequences, let rA be the set of the reversals of the sequences in A. Observe that we never have a ∼ ra, and that U is an equivalence class under ∼ (i.e., a maximal set all of whose members are ∼-equivalent) if and only if rU is an equivalence class. Also, if U is an equivalence class, then rU ≠ U.
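The reversal map can be illustrated concretely on finite prefixes; here is a toy sketch (the helper names r and r_n are mine, and of course the set N itself requires the Axiom of Choice and cannot be computed).

```python
# Finite-prefix toy model: a "sequence" is a tuple of bits, 1 for heads.
def r(a):
    """Reverse every toss: heads <-> tails."""
    return tuple(1 - x for x in a)

def r_n(a, n):
    """Keep the first n tosses fixed and reverse the rest."""
    return a[:n] + r(a[n:])

a = (1, 0, 1, 1, 0, 0, 1)
involution = r(r(a)) == a                              # r undoes itself
differs_everywhere = all(x != y for x, y in zip(a, r(a)))
fixes_prefix = r_n(a, 3)[:3] == a[:3]                  # r_n agrees with a on the first 3 tosses
```

The first two checks correspond to the observations above (r pairs each sequence with one differing everywhere from it); the third is the property of the truncated reversal used in the saturation argument later in the post.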

Let C be the set of all unordered pairs {U, rU} where U is an equivalence class under ∼. (Note that every equivalence class lies in exactly one such unordered pair.) By the Axiom of Choice (for collections of two-membered sets), choose one member of each pair in C. Call the chosen member “selected”. Then let N be the union of all the selected sets.

Here are two cool properties of N:

  1. Every coin-toss sequence is in exactly one of N and rN.

  2. If a and b are coin-toss sequences that differ in only finitely many places, then a is in N if and only if b is in N.

We can now prove that N is not measurable. Suppose N is measurable. Then by symmetry P(rN)=P(N). By (1) and additivity, 1 = P(N)+P(rN), so P(N)=1/2. But by (2), N is a tail set, i.e., an event independent of any finite subset of the tosses. The Kolmogorov Zero-One Law says that every (measurable) tail set has probability 0 or 1. But that contradicts the fact that P(N)=1/2, so N cannot be measurable.

An interesting property of N is that intuitively we would think that P(N)=1/2, given that for every sequence a, exactly one of a and ra is in N. But if we do say that P(N)=1/2, then no finite number of observations of coin tosses provides any Bayesian information on whether the whole infinite sequence is in N, because no finite subsequence has any bearing on whether the whole sequence is in N by (2). Thus, if we were to assign the intuitive probability 1/2 to P(N), then no matter what finite number of observations we made of coin tosses, our posterior probability that the sequence is in N would still have to be 1/2—we would not be getting any Bayesian convergence. This is another way to see that N is non-measurable—if it were measurable, it would violate Bayesian convergence theorems.

And this is another way of highlighting how non-measurability vitiates Bayesian reasoning (see also this).

We can now use Bayesian convergence to sketch a proof that N is saturated non-measurable, i.e., that if A ⊆ N is measurable, then P(A)=0, and if A ⊇ N is measurable, then P(A)=1. For suppose A ⊆ N is measurable. Suppose that we are sequentially observing coin tosses and forming posteriors for A. These posteriors cannot ever exceed 1/2. Here is why. For a coin toss sequence a, let rna be the sequence obtained by keeping the first n tosses fixed and reversing the rest of the tosses. For any finite sequence o1, ..., on of observations, and any infinite sequence a of coin-tosses compatible with these observations, at most one of a and rna is a member of N (this follows from (1) and the fact that ra ∈ N if and only if rna ∈ N by (2)). By symmetry P(A ∣ o1...on)=P(rnA ∣ rn(o1...on)) (where rnA is the result of applying rn to every member of A). But rn(o1...on) is the same as o1...on, so P(A ∣ o1...on)=P(rnA ∣ o1...on). But A and rnA are disjoint, so P(A ∣ o1...on)+P(rnA ∣ o1...on)≤1 by additivity, and hence P(A ∣ o1...on)≤1/2. Thus, the posteriors for A are always at most 1/2. By Bayesian convergence, however, almost surely the posteriors will converge to 1 or to 0, depending on whether the sequence being observed is actually in A or not. They cannot converge to 1, so the probability that the sequence is in A must be equal to 0. Thus, P(A)=0. The claim that if A ⊇ N is measurable then P(A)=1 is proved by noting that then Ω − A ⊆ rN (as rN is the complement of N), and so by the above argument with rN in place of N, we have P(Ω − A)=0 and thus P(A)=1.

Tuesday, November 7, 2017

Why might God refrain from creating?

Traditional Jewish and Christian theism holds that God didn’t have to create anything at all. But it is puzzling what motive a perfectly good being would have not to create anything. Here’s a cute (I think) answer:

  • If (and only if) God doesn’t create anything, then everything is God. And that’s a very valuable state of affairs.

Adding infinite guilt

Bob has the belief that there are infinitely many people in a parallel universe, and that they wear numbered jerseys: 1, 2, 3, …. He also believes that he has a system in a laboratory that can cause indigestion to any subset of these people that he can describe to a computer. Bob has good evidence for these beliefs and is (mirabile!) sane.

Consider four scenarios:

  1. Bob attempts to cause indigestion to all the odd-numbered people.

  2. Bob attempts to cause indigestion to all the people whose number is divisible by four.

  3. Bob attempts to cause indigestion to all the people whose number is either odd or divisible by four.

  4. Bob yesterday attempted to cause indigestion to all the odd-numbered people and on a later occasion to all the people whose number is divisible by four.

In each scenario, Bob has done something very bad, indeed apparently infinitely bad: he has attempted infinite mass sickening.

In scenarios 1-3, other things being equal, Bob’s guilt is equal, because the number of people he attempted to cause indigestion to is the same—a countable infinity.
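Restricting attention to jersey numbers 1..N gives a finite stand-in (a toy check of my own, not part of the argument) for how the target sets in the scenarios relate:

```python
# The target sets of scenarios 1-3, restricted to jersey numbers 1..N.
N = 1000
scenario1 = {k for k in range(1, N + 1) if k % 2 == 1}             # odd numbers
scenario2 = {k for k in range(1, N + 1) if k % 4 == 0}             # divisible by four
scenario3 = {k for k in range(1, N + 1) if k % 2 == 1 or k % 4 == 0}

disjoint = not (scenario1 & scenario2)            # the two attempts in scenario 4 overlap nowhere
union_matches = (scenario1 | scenario2) == scenario3
```

Both checks come out true: the two attempts in scenario 4 target disjoint sets whose union is exactly the set targeted in scenario 3.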

But now we have two arguments about how bad Bob’s action in scenario 4 is. On the one hand, in scenario 4 he has attempted to sicken the exact same people as in scenario 3. So, he is equally guilty in scenario 4 as in scenario 3.

On the other hand, in scenario 4, Bob is guilty of two wrong actions, the action of scenario 1 and that of scenario 2. Moreover, as we saw before, each of these actions on its own makes him just as guilty as the action in scenario 3 does. Doing two wrongs, even two infinite wrongs, is worse than just doing one, if they are all of the same magnitude. So in scenario 4, Bob is guiltier than in scenario 3. One becomes the worse off for acquiring more guilt. But if 4 made Bob no guiltier than 3 would have, it would make Bob no guiltier than 1 would have, and so after committing the first wrong in 4, since he would already have the guilt of 1, Bob would have no guilt-avoidance reason to refrain from the second wrong in 4, which is absurd.

How to resolve this? I think as follows: when accounting guilt, we should look at guilty acts of will rather than consequences or attempted consequences. In scenario 4, although the total attempted harm is the same as in each of scenarios 1-3, there are two guilty acts of will, and that makes Bob guiltier in scenario 4.

We could tell the story in 4 so that there is only one act of will. We could suppose that Bob can self-hypnotize so that today he orders his computer to sicken the odd-numbered people and tomorrow those whose number is divisible by four. In that case, there would be only one act of will, which will be less bad. It’s a bit weird to think that Bob might be better off morally for such self-hypnosis, but I think one can bite the bullet on that.

Evidence that I am dead

I just got evidence that I am dead, in an email that starts:

  Dear expired [organization] member,

You might think this is pretty weak evidence. Maybe "expired" doesn't mean "dead" here. But the email continues:

  Thank you for your past support of [organization]. Your membership has recently expired, and we would like to take this opportunity to urge you to renew your membership.

But last year I acquired a life membership...

Sorry, I couldn't resist sharing this.

From a dualism to a theory of time

This argument is valid:

  1. Some human mental events are fundamental.

  2. No human mental event happens in an instant.

  3. If presentism is true, every fundamental event happens in an instant.

  4. So, presentism is not true.

Premise (1) is widely accepted by dualists. Premise (2) is very, very plausible. That leaves (3). Here is the thought. Given presentism, that a non-instantaneous event is happening is a conjunctive fact with one conjunct about what is happening now and another conjunct about what happened or will happen. Conjunctive facts are grounded in their conjuncts and hence not fundamental, and for the same reason the event would not be fundamental.

But lest four-dimensionalist dualists cheer, we can continue adding to the argument:

  5. If temporal-parts four-dimensionalism is true, every fundamental event happens in an instant.

  6. So, temporal-parts four-dimensionalism is not true.

For on temporal-parts four-dimensionalism, any temporally extended event will be grounded in its proper temporal parts.

The growing block dualist may be feeling pretty smug. But suppose that we currently have a temporally extended event E that started at t−2 and ends at the present moment t0. At an intermediate time t−1, only a proper part of E existed. A part is either partly grounded in the whole or the whole in the parts. Since the whole doesn’t exist at t−1, the part cannot be grounded in it. So the whole must be partly grounded in the part. But an event that is partly grounded in its part is not fundamental. Hence:

  7. If growing block is true, every fundamental event happens in an instant.

  8. So, growing block is not true.

There is one theory of time left. It is what one might call Aristotelian four-dimensionalism. Aristotelians think that wholes are prior to their parts. An Aristotelian four-dimensionalist thinks that temporal wholes are prior to their temporal parts, so that there are temporally extended fundamental events. We can then complete the argument:

  9. Either presentism, temporal-parts four-dimensionalism, growing block or Aristotelian four-dimensionalism is true.

  10. So, Aristotelian four-dimensionalism is true.

Monday, November 6, 2017

Statistically contrastive explanations of both heads and tails

Say that an explanation e of p rather than q is statistically contrastive if and only if P(p|e)>P(q|e).

For instance, suppose I rolled an indeterministic die and got a six. Then I can give a statistically contrastive explanation of why I rolled more than one (p) rather than rolling one (q). The explanation (e) is that I rolled a fair six-sided die. In that case: P(p|e)=5/6 > 1/6 = P(q|e). Suppose I had rolled a one. Then e would still have been an explanation of the outcome, but not a statistically contrastive one.

One might try to generalize the above remarks to conclude to this thesis:

  1. In indeterministic stochastic setups, there will always be a possible outcome that does not admit of a statistically contrastive explanation.

The intuitive argument for (1) is this. If one indeterministic stochastic outcome is p, either there is or is not a statistically contrastive explanation e of why p rather than not-p is the case. If there is no such statistically contrastive explanation, then the consequent of (1) is indeed true. Suppose that there is a statistically contrastive explanation e, and let q be the negation of p. Then P(p|e)>P(q|e). Thus, e is a statistically contrastive explanation of why p rather than q, but it is obvious that it cannot be a statistically contrastive explanation of why q rather than p.

The intuitive argument for (1) is logically invalid. For it only shows that e is not a statistically contrastive explanation for why q rather than p, while what needed to be shown is that there is no statistically contrastive explanation at all.

In fact, (1) is false. Consider an indeterministic stochastic situation: Alice’s flipping of a coin. There are two outcomes: heads and tails. But prior to the coin getting flipped, Bob uniformly chooses a random number r such that 0 < r < 1 and loads the coin in such a way that the chance of heads is r. Suppose that in the situation at hand r = 0.8. Let H be the heads outcome and T the tails outcome. Then here is a contrastive explanation for H rather than T:

  • e1: an unfair coin with chance 0.8 of heads was flipped.

Clearly P(H|e1)=0.8 > 0.2 = P(T|e1). But suppose that instead tails was obtained. We can give a contrastive explanation of that, too:

  • e2: an unfair coin with chance at least 0.2 of tails was flipped.

Given only e2, the chance of tails is somewhere between 0.2 and 1.0, uniformly distributed. Thus, on average, given e2 the chance of tails will be 0.6: P(T|e2)=0.6. And P(H|e2)=1 − P(T|e2)=0.4. Thus, e2 is actually a statistically contrastive explanation of T. And note that something like this will work no matter what value r has, as long as it’s strictly between 0 and 1.
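The value P(T|e2)=0.6 can be checked numerically: conditional on e2 (a chance of tails of at least 0.2), the chance r of heads is uniform on (0, 0.8), so the chance 1 − r of tails is uniform on (0.2, 1), and P(T|e2) is its average. A quick sketch (the grid size is my own choice):

```python
# Average the conditional chance of tails over a midpoint grid for r on (0, 0.8).
steps = 100_000
rs = [0.8 * (i + 0.5) / steps for i in range(steps)]  # midpoint grid on (0, 0.8)
p_tails = sum(1 - r for r in rs) / steps              # approximately 0.6
p_heads = 1 - p_tails                                 # approximately 0.4
```

The midpoint rule gives 0.6 essentially exactly here, matching the analytic computation in the text.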

It might still be arguable that given indeterministic stochastic situations, something will lack a statistically contrastive explanation. For instance, we can give a statistically contrastive explanation of heads rather than tails, and a statistically contrastive explanation of tails rather than heads. But it does not seem that we can give a statistically contrastive explanation of why the coin was loaded exactly to degree 0.8, since that has zero probability. Of course, that’s an outcome of a different stochastic process than the coin flip, so it doesn't support (1). And the argument needs to be more complicated than the invalid argument for (1).

Cheap Makey Makey alternative

The Makey Makey is a cool electronic gadget that lets kids make a USB controller out of any somewhat conductive stuff, like bananas, play dough, etc. Unfortunately, it's about $50 (there is also a $30 clone). Also, annoyingly, it requires a ground connection for the user. I made a capacitive version that costs about $3 using a $2 stm32f103c8 board. It emulates either a keyboard or a gamepad/joystick.

Here are instructions.

Projection and the imago Dei

There is some pleasing initial symmetry between how a theist (or at least Jew, Christian or Muslim) can explain features of human nature by invoking the doctrine that we are in the image of God and using this explanatory schema:

  1. Humans are (actually, normally or ideally) F because God is actually F

and how an atheist can explain features attributed to God by projection:

  2. The concept of God includes being actually F because humans are (actually, normally or ideally) F.

Note, however, that while schemata (1) and (2) are formally on par, schema (1) has the advantage that it has a broader explanatory scope than (2) does. Schema (1) explains a number of features (whether actual or normative) of the nature of all human beings, while schema (2) only explains a number of features of the thinking of a modest majority (the 55% who are monotheists) of human beings.

There is also another interesting asymmetry between (1) and (2). Theists can, without any damage to their intellectual system, embrace both (1) and a number of the instances of (2) that the atheist embraces, since given the imago Dei doctrine, projection of normative or ideal human features onto God can be expected to track truth with some probability. On the other hand, the atheist cannot embrace any instances of (1).

Note, too, that evolutionary explanations do not undercut (1), since there can be multiple correct explanations of one phenomenon. (This phenomenon is known to people working on Bayesian inference.)

Saturday, November 4, 2017

Neo-Aristotelian Perspectives on Contemporary Science

The collection Neo-Aristotelian Perspectives on Contemporary Science (eds: Simpson, Koons and Teh) is now available. It's divided into a physical sciences and a life sciences part.

My piece on the Traveling Forms interpretation is in the physical sciences part (interestingly, though, that interpretation is more about us than about physics).

Thursday, November 2, 2017

Four problems and a unified solution

A similar problem occurs in at least four different areas.

  1. Physics: What explains the values of the constants in the laws of nature?

  2. Ethics: What explains parameters in moral laws, such as the degree to which we should favor benefits to our parents over benefits to strangers?

  3. Epistemology: What explains parameters in epistemic principles, such as the parameters in how quickly we should take our evidence to justify inductive generalizations, or how much epistemic weight we should put on simplicity?

  4. Semantics: What explains where the lines are drawn for the extensions of our words?

There are some solutions that have a hope of working in some but not all the areas. For instance, a view on which there is a universe-spawning mechanism that induces random values of the constants in laws of nature solves the physics problem, but does little for the other three.

On the other hand, vagueness solutions to 2-4 have little hope of helping in the physics case. Actually, though, vagueness doesn’t help much in 2-4, because there will still be the question of explaining why the vague regions are where they are and why they are fuzzy in the way they are—we just shift the parameter question.

In some areas, there might be some hope of having a theory on which there are no objective parameters. For instance, Bayesianism holds that the parameters are set by the priors, and subjective Bayesianism then says that there are no objective priors. Non-realist ethical theories do something similar. But such a move in the case of physics is implausible.

In each area, there might be some hope that there are simple and elegant principles that of necessity give rise to and explain the values of the parameters. But that hope has yet to be borne out in any of the four cases.

In each area, one can opt for a brute necessity. But that should be a last resort.

In each area, there are things that can be said that simply shift the question about parameters to a similar question about other parameters. For instance, objective Bayesianism shifts the question of how much epistemic weight we should put on simplicity to the question of priors.

When the questions are so similar, there is significant value in giving a uniform solution. The theist can do that. She does so by opting for these views:

  1. Physics: God makes the universe have the fundamental laws of nature it does.

  2. Ethics: God institutes the fundamental moral principles.

  3. Epistemology: God institutes the fundamental epistemic principles for us.

  4. Semantics: God institutes some fundamental level of our language.

In each of the four cases there is a question of how God does this. And in each there is a “divine command” style answer and a “natural law” style answer, and likely others.

In physics, the “divine command” style answer is occasionalism; in ethics and epistemology it just is “divine command”; and in semantics it is a view on which God is the first speaker and his meanings for fundamental linguistic structures are normative. None of these appeal very much to me, and for the same reason: they all make the relevant features extrinsic to us.

In physics, the “natural law” answer is theistic Aristotelianism: laws supervene on the natures of things, and God chooses which natures to instantiate; theistic natural law is a well-developed ethical theory, and there are analogues in epistemology and semantics, albeit not very popular ones.

Wednesday, November 1, 2017

Theistic Natural Law and the Euthyphro Problem

Theistic Natural Law (TNL) theory seems to be subject to the Euthyphro problem much as divine command theory (DCT) is. On DCT, the Euthyphro problem takes the form of the question:

  1. Why did God command what he commanded rather than commanding otherwise?

On TNL, the Euthyphro problem takes the form of the question:

  1. Why did God create beings with the natures he did rather than creating beings with other natures?

In both cases, one can respond by talking of the essential goodness of God, by virtue of which he makes a good choice as to how to fittingly match the non-normative with the normative features of creatures. In the DCT case, God makes the match by benevolently choosing what sorts of creatures to create and what sorts of commands to give them. In the TNL case, God makes the match by benevolently choosing the non-deontic and deontic features of natures and then creating creatures with these natures. Thus, in the DCT case, God has reason to coordinate the sociality of creatures with the command to cooperate, while in the TNL case God has reason to actualize natures that either both include sociality and the duty to cooperate or to actualize natures that include neither.

So in what way is TNL better off than DCT with regard to the Euthyphro problem? The one thing I can think of in the vicinity is this: TNL allows for there to be deontic features that necessarily every nature includes, and it allows for there to be some deontic features of creatures that are entailed by the non-deontic features. For instance, perhaps every possible nature of an agent includes a prohibition against pointless imposition of torture, and every possible nature of a linguistic agent includes a prohibition against lying. But I am not sure this difference is really relevant to the Euthyphro problem.

I do prefer TNL to DCT, but not because of the Euthyphro problem. My reason for the preference is that many moral obligations appear to be intrinsic features of us.

Of course, the above arguments presuppose a particular picture of how natural law works. But I like that picture.

Captain Proton's Ray Gun

The kids and I are big Star Trek fans (well, the 5-year-old is just a fan, not a big fan, as yet), and my son wanted to have Captain Proton's Ray Gun. Captain Proton is a cheesy character in a fictional series of 1950s movies in Star Trek Voyager. So, I guess, he's a fictional fictional character. I found some photos of a prop, traced the images in Inkscape, exported to OpenSCAD, and made 3D printable files, which are here. I printed it (it prints in two halves that join together), but we have yet to paint it (may not paint it right away, as in silver and gray it will look too much like a real gun at a distance to use outside the house).