Friday, October 29, 2021

Evidentialism and epistemic utilities

Epistemic value is the value of true belief and disvalue of false belief.

Let p be the proposition that there is such a thing as epistemic value.

Suppose p is true. Then, plausibly, the higher your credence in p, the more epistemic value your credence has. The closer your credence is to certainty, the closer to truth your representation is. Let tp(r) be the value of having credence r in p when in fact p is true. Then tp(r) is a strictly increasing function of r.

Suppose p is false. Then whatever credence you have in p, the epistemic value of that credence is zero.

Now suppose you are not sure about p, so your credence in p is an r such that 0 < r < 1. Consider now the idea of setting your credence to some other value r′. What is the expected epistemic value of doing so? Well, if p is false, there will be no epistemic value, and if p is true, you will have epistemic value tp(r′). Your current probability for p is r. So your expected epistemic value is

• r⋅tp(r′) + (1 − r)⋅0 = r⋅tp(r′).

Thus, to maximize your expected epistemic value, you should set r′ = 1. In other words, if you should maximize expected epistemic value, then you should have credence one in p no matter how little your evidence supports p.
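The maximization is easy to check numerically. Here is a small sketch, with an arbitrary strictly increasing tp chosen purely for illustration, confirming that the expected value r⋅tp(r′) is maximized at r′ = 1 whatever the current credence r:

```python
# Sketch: expected epistemic value of moving credence in p to r_prime, when
# credence has value t(r') if p is true and zero value if p is false.
# t below is a sample strictly increasing function (an assumption; any
# strictly increasing function gives the same conclusion).

def t(r_prime):
    return r_prime ** 2

def expected_value(r, r_prime):
    # r * t(r') + (1 - r) * 0
    return r * t(r_prime)

r = 0.3  # current, uncertain credence in p
candidates = [i / 100 for i in range(101)]
best = max(candidates, key=lambda rp: expected_value(r, rp))
print(best)  # 1.0: the maximizer is r' = 1 regardless of r
```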

What do we learn from this?

First, either evidentialism (the view that your degree of belief should be proportioned to the evidence) is false or else expected epistemic utility maximization is the wrong way to think about epistemic normativity.

Second, there are cases where the right epistemic scoring rule is improper. For given a proper epistemic scoring rule and a consistent credence assignment, we never get a recommendation of a change of credence. The scoring rule underlying the above epistemic value assignments is clearly improper, and yet is also clearly right.
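For contrast, a proper scoring rule never recommends a change of credence; a quick grid search with the Brier score (here in its penalty form, a standard proper rule) illustrates this:

```python
# Minimal sketch: under the Brier penalty, the expected penalty of moving
# your credence from r to r' is minimized at r' = r, so a coherent agent
# is never advised to change credence.
def expected_brier_penalty(r, r_prime):
    # truth contributes (1 - r')^2 with probability r, falsity r'^2 with 1 - r
    return r * (1 - r_prime) ** 2 + (1 - r) * r_prime ** 2

r = 0.3
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda rp: expected_brier_penalty(r, rp))
print(best)  # 0.3: staying put is optimal under a proper rule
```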

Thursday, October 28, 2021

Symmetric qualitative (and other) probabilities

I recently worked out the precise conditions under which one can have Popper functions, hyperreal probabilities or qualitative probabilities that are invariant under some group of symmetries and are regular in the sense that they assign a bigger probability to non-empty sets than to the empty set.

But what if we don’t require regularity? Then the following is mainly a matter of putting together known theorems:

Proposition. Suppose G is a group acting on set Ω* ⊇ Ω where Ω is non-empty. Then the following are equivalent:

(a) There is a finitely additive G-invariant real-valued probability measure on the powerset of Ω.

(b) There is a finitely additive G-invariant hyperreal probability measure on the powerset of Ω.

(c) There is a finitely additive approximately G-invariant hyperreal probability measure on the powerset of Ω.

(d) There is a strongly G-invariant total qualitative probability ⪅ on the powerset of Ω such that ⌀ < Ω.

(e) There is a strongly G-invariant partial qualitative probability ⪅ on the powerset of Ω such that ⌀ < Ω.

(f) The set Ω is not G-paradoxical.

The definitions are in the paper I linked to at the top, except that approximate G-invariance only requires that P(A)−P(gA) be infinitesimal rather than requiring that it be zero.

Proof: Trivially, (a) implies (b) which implies (c). The standard part of a finitely additive approximately G-invariant hyperreal probability measure will be a finitely additive G-invariant real-valued probability measure, so (c) implies (a). Thus, (a)–(c) are equivalent.

Condition (a) implies condition (d): just define A ⪅ B iff P(A)≤P(B) where P is the measure in (a). And (d) implies (e) trivially.

Now we show that not-(f) implies not-(e). Suppose Ω is G-paradoxical, so Ω has disjoint subsets A and B with partitions A1, ..., Am and B1, ..., Bn respectively, and there are elements g1, ..., gm and h1, ..., hn of G such that g1A1, ..., gmAm and h1B1, ..., hnBn are each a partition of Ω. Then by a standard result on qualitative probabilities (use the proof of Krantz, et al., Lemma 5.3.1.2):

1. A = A1 ∪ ... ∪ Am ≈ g1A1 ∪ ... ∪ gmAm = Ω

2. B = B1 ∪ ... ∪ Bn ≈ h1B1 ∪ ... ∪ hnBn = Ω.

Since ⌀ < Ω, we have ⌀ < A by (1). By the proof of Corollary 5.3.1.2 in Krantz, et al., we have B < Ω iff ⌀ < Ω − B. But A ⊆ Ω − B, and ⌀ < A, so indeed we must have B < Ω, which contradicts (2).

Finally, Tarski’s Theorem says that (f) implies (a). □

Note 1: The two results from Krantz et al. are given for total qualitative probabilities, but the proofs do not use totality. (In the linked paper, I didn’t notice that Krantz et al. are working with total qualitative probabilities, but fortunately all works out.)

Note 2: There is a pleasing direct construction of a partial qualitative probability satisfying (e). For each A ⊆ Ω, let [A] be the corresponding member of the equidecomposability type semigroup. Then define A ⪅ B providing there is a c in the semigroup such that [A]+c ≤ [B]+c. It turns out that the condition ⌀ < Ω is then equivalent to 2[Ω]≠[Ω], i.e., is equivalent to the non-paradoxicality of Ω under G.

Wednesday, October 27, 2021

More on attempted murder and attempted theft

In an old post, I observe the curious phenomenon that a typical attempted murder is not an attempt to murder and a typical attempted theft is not an attempt to steal. For one only attempts to do something that one intends to do. But that the killing or the taking in fact constitutes a murder or a theft is, in typical cases, irrelevant to the criminal’s ends. For instance, in typical cases of theft, if it were to turn out that the object is in fact abandoned property, the thief’s ends would be just as well served by taking the object. Hence, the thief’s end is to take the object, and whether the object is owned by someone, and hence whether the taking constitutes theft, is irrelevant to the thief’s ends, and hence is not intended.

I then attempted to come up with an account of “attempted M” for a broad spectrum of misdeeds M. The idea was that “M” is a thick and morally loaded description, such as “murder” or “theft”, while there is thin and morally unloaded description “N”, such as “killing” or “taking”. Then I suggested that:

1. An action is an attempted M if and only if the agent is trying for N in circumstances in which success at N would constitute M.

But I wasn’t happy with (1) in light of a weird counterexample of trying to shoot someone with a smart raygun that, unbeknownst to the shooter, only shoots people whom it is just to kill, and doing so in a case where the killing would in fact be unjust. This seems a clear case of attempted murder (only attempted, because the raygun recognized that the killing would be unjust and refused to fire). I said that the problem with (1) is that in these circumstances success at killing would not constitute murder, since the raygun would only succeed if the killing weren’t a case of murder.

My analysis of the counterexample needs a bit of work to spell out. The actual circumstances include two kinds of facts:

(a) the facts in virtue of which killing the victim would be murder (the victim’s innocence, etc.), and

(b) the fact that the raygun cannot be used to commit murder.

When we ask whether success at killing would constitute murder, we are asking a counterfactual question, and we now need to be clear on whether we keep fixed (a) and drop (b) or keep (b) fixed and drop (a). To have a counterexample to (1), we need to ensure that the right way to evaluate the counterfactual about success at killing involves fixing (b) and dropping (a). I think we can ensure this. We can presumably set things up so that the raygun refuses to commit murder at all nearby worlds, but at some nearby world the victim is an aggressor whom it is permissible to kill. But this should have been stated.

So, it does seem we have a counterexample to (1). One might attempt to fully subjectivize (1) as follows instead:

2. An action is an attempted M if and only if the agent is trying for N and believes that success at N would constitute M.

But this is mistaken. An SS officer might have convinced himself that the killing of innocents that he is attempting is not in fact a murder, but that doesn’t make it not be a murder (whether the convincing reduces culpability is a separate question).

I think what we actually want to do is keep the moral standards objective while subjectivizing everything else. Roughly, we want something like this:

3. An action is an attempted M if and only if the agent is trying for N and were the moral standards fixed as they actually are and were the rest of the circumstances as the agent believes them to be, then the success at N would constitute M.

I doubt this captures all the cases, but it makes some progress over (1) I think. I suspect that our concept of an attempted murder or an attempted theft is rather messy and gerrymandered.

Note that (3) does not fit with the legal doctrine of “impossible attempts” on which an attempt that “couldn’t succeed” doesn’t count. Thus, attempting to kill with magic spells does not legally count as attempted murder, even though (3) says it is attempted murder. In this case, I am inclined to just say that the legal doctrine is false to the phrase “attempted murder”, but there is good reason not to prosecute such impossible attempts (say, because doing so leads to prosecution of “thought crimes”). If we want to build in a doctrine of impossible attempts, we can add to (3) the claim that there is an epistemically nearby world where the circumstances other than moral standards are as the agent believes them to be, where the epistemic nearness is measured by the standards of a reasonable person rather than perhaps the agent.

Are free actions a counterexample to the PSR?

I’ve argued somewhat as follows in the past:

1. Necessarily, no one is responsible for a brute fact—an unexplained contingent fact.

2. Necessarily, someone is responsible for every free decision or free action.

3. So, it is impossible for a free decision or free action to be a brute fact.

But then:

4. Necessarily, a counterexample to the Principle of Sufficient Reason (PSR) is a brute fact.

5. So, no free decision or free action can be a counterexample to the PSR.

One may imagine someone, however, arguing that although a free decision or a free action cannot be a counterexample to the Principle of Sufficient Reason, a contrastive report, such as that x freely chose to do A rather than B, could be a counterexample to the Principle of Sufficient Reason. But notice that if x freely chose to do A rather than B, then x is responsible for choosing to do A rather than B. Similarly if x freely chose to do A for reason R rather than B for S, then x is responsible for doing so. Freedom implies responsibility. But no one is responsible for a brute fact, so such contrastive reports cannot be reports of a brute fact.

Objection 1: Incompatibilism is true, and on incompatibilism it is obvious that no possible explanation can be given for why x freely chose to do A for R rather than B for S. Hence the Principle of Sufficient Reason is false.

Response: Given that no one is responsible for what has no explanation, if the “no possible explanation” claim is correct, then free will is impossible. Thus, rather than showing that the PSR is false, the argument would show that if incompatibilism is true, free will is impossible. As a libertarian, I think free will is possible (and actual). But it is important to keep clear on what it is that is really endangered by the argument: it is free will and not the PSR.

Objection 2: Freedom is a necessary but not sufficient condition for responsibility.

Response: I am not sure about this. When I think about what other conditions we need to add to freedom to yield responsibility, the only one I can think of is something like knowledge of what is at stake. But it is arguable that without knowledge of what is at stake, a choice is not free. Moreover, even if one does not know what is at stake with A and B beyond what is contained in the respective reasons R and S, one will still be responsible for choosing A for R rather than B for S if one chooses freely for these reasons. One just won’t be responsible for the further aspects, beyond those captured by R and S, that one does not know.

But let’s grant for the sake of argument that other conditions need to be added to freedom to yield responsibility. If so, then the claim has to be that free but non-responsible decisions or actions or contrastive reports thereof are a counterexample to the PSR although free and responsible ones are not. In other words, one has to hold that the alleged additional conditions that need to be added to freedom to yield responsibility are what secures explicability. But given that the most plausible candidate for the other conditions is knowledge of what is at stake, this is implausible. For a free action based on mistaken or limited knowledge is no less explicable than an action based on full knowledge, once one takes into account the agent’s epistemic deficiency.

Tuesday, October 26, 2021

Risibility

Aristotle says that a necessary accident of the human being is risibility—the capability for laughter. As far as I can tell, necessary accidents are supposed to derive from the essence of a thing. So, how do we derive risibility from the essence of the human being?

Here’s an idea. The essence is to be a rational animal. A rational being reflects on itself. But to have an animal that is simultaneously rational—that’s objectively funny. Thus, a rational being that is an animal is always in a position to discover something objectively funny, namely itself. And it just wouldn’t be rational not to laugh at that funny thing!

Can one lie without asserting a proposition?

I am starting to think that one can lie without asserting a proposition.

Let’s say that a counterintelligence agent tells an enemy spy that a new weapons technology has just been deployed, in order to dissuade the enemy from invading. The description of the technology contains nonsensical technobabble. This seems to be a lie. If it is, my argument is complete, because nonsense does not express a proposition.

But suppose we say it’s mere BS. Let’s now complicate the case. The counterintelligence agent passes to the enemy spy a fake classified document saying “We have just built a weapon that shoots three simultaneous hyperquark beams.” The spy is taken in by the BS, but also wishes to deter war. And thus the spy reports to her government: “The enemy has just built a weapon that shoots ten simultaneous hyperquark beams.” It is clear that the spy is not merely engaging in BS. The spy sure seems to be lying. But the spy is no more asserting a proposition than the counterintelligence agent did.

If we say that the counterintelligence agent is lying, then we have to allow that one can lie without even taking oneself to assert a proposition.

If we think that the counterintelligence agent is only BSing, but that in my more complicated case the enemy spy is lying to her government, then we should say that to lie one needs to take oneself to be asserting a false proposition, but one need not be actually asserting a proposition, true or false.

In either case, one can lie without asserting a proposition.

Perhaps I am wrong. Perhaps what the spy does in the more complicated case is neither BS nor a lie, but engaging in a verbal deceit we don’t have a good name for.

Monday, October 25, 2021

A quick argument against subjective Bayesianism

1. You should assign a prior probability less than 1/2 to the hypothesis that over the lifetime of the universe there were exactly 100 tosses of a fair coin and they were all heads.

2. The hypothesis in (1) is contingent.

3. If there is a contingent hypothesis to which you should assign a prior probability less than 1/2, then subjective Bayesianism is false.

4. So, subjective Bayesianism is false.
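For what it is worth, the intuition behind (1) is easy to make vivid: conditional on there being exactly 100 fair tosses, the chance of all heads is astronomically far below 1/2 (a quick check):

```python
# Conditional on there being exactly 100 fair tosses, the chance that all
# of them land heads is (1/2)^100.
chance = 0.5 ** 100
print(chance)  # about 7.9e-31, vastly below 1/2
```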

On two arguments for Bayesian regularity

Standard Bayesianism requires regularity: it requires that I not assign prior probability zero to any contingent proposition. There are two main reasons for this: one technical and one epistemological.

The technical reason is that it is difficult to make sense of conditionalizing on events with probability zero. (Granted, there are technical ways around this, but there are also problems with these.) But the difficulty of conditionalizing on events with probability zero does not give one any reason to prohibit assigning probability zero to events that one would never conditionalize on.

The Bayesian agent conditionalizes on evidence. But while the question of what constitutes evidence is highly controversial, there are some plausible things we could say about what could and could not be evidence for beings like us. Thus, the proposition that it’s looking like the multimeter is showing 3.1V seems like the sort of thing that could be evidence for a being like us, but the conjunction of the propositions constituting Relativity Theory does not seem like the sort of thing that could be evidence for a being like us (maybe it could be evidence for some supernatural being that has an infallible vision of the laws of nature; and maybe God could make us be such beings; but we don’t need to adapt our epistemology to such out-of-this-world possibilities).

If this is right, then the technical difficulties with conditionalizing on events with probability zero do not give us a good reason to assign a non-zero prior probability to Relativity Theory, or any other proposition that is not of the right sort to constitute a body of potential evidence (where a body of potential evidence is a consistent finite conjunction of individual pieces of evidence).

There is, however, a second reason not to assign prior probability zero to any contingent proposition. If we assign prior probability zero to some hypothesis H, say Relativity Theory, then the only way a body of evidence E could raise the probability of H to something non-zero would be if P(E)=0 (for if P(E)>0, then P(H|E)=P(HE)/P(E)≤P(H)/P(E)=0). Thus, if we assign prior probability zero to a hypothesis, it seems that we will be unacceptably stuck at probability zero for that hypothesis no matter what evidence comes in. This is not a merely technical reason: it is an epistemological one.
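The arithmetic here is trivial but worth seeing: with a zero prior, Bayes’ theorem can never move the hypothesis off zero, however strongly the evidence favors it (a minimal sketch):

```python
# If P(H) = 0 and P(E) > 0, then P(H|E) = P(E|H) P(H) / P(E) = 0:
# conditionalization leaves the hypothesis stuck at zero forever.
def posterior(prior_h, likelihood, prob_e):
    # Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E)
    return likelihood * prior_h / prob_e

print(posterior(0.0, 1.0, 0.2))   # 0.0 even when H entails the evidence
print(posterior(1e-6, 1.0, 0.2))  # a tiny but nonzero prior can grow
```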

Note that this formulation of the second reason for regularity depends on the first, though in a subtle way. The first reason gave us reason to have regularity for evidential propositions, i.e., propositions reporting a body of evidence. The second reason, if formulated as above, tells us that if we should have regularity for evidential propositions, then we should also have regularity for contingent hypotheses that are not themselves evidential propositions.

But now notice that the second reason for regularity seems to show rather more if we think it through. The reasoning here is that propositions like Relativity Theory should be confirmable but if we assign credence zero to them, they are not confirmable (assuming the first reason successfully shows that all bodies of evidence have non-zero probability). But now notice that the requirement of confirmability for a hypothesis shows something a lot stronger than that the hypothesis have non-zero probability. For surely it is not merely our view that Relativity Theory should be confirmable given infinite time. Rather, Relativity Theory should be the sort of proposition that would be confirmable by observation prior to the heat death of the universe, or maybe even within a single human lifetime. But the number of potential pieces of observational evidence for a being like us is finite (there are only finitely many perceptual states our brain can distinguish), and gathering a piece of evidence takes a minimum amount of time, and if Relativity Theory starts with a sufficiently low prior probability, we have no hope of confirming it before the deadline.
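The deadline point can be quantified in a back-of-the-envelope way. If each observation carries at most some bounded Bayes factor K in favor of a hypothesis, then raising it from a tiny prior to even odds requires at least log base K of the prior odds against it. The numbers below (a prior of 10⁻¹⁰⁰, K = 2) are purely illustrative assumptions, not estimates from the post:

```python
import math

# How many maximally favorable observations are needed to raise a hypothesis
# from a tiny prior to even odds, if each observation's Bayes factor is at
# most k? (Illustrative numbers only.)
def updates_needed(prior, target, k):
    prior_odds = prior / (1 - prior)
    target_odds = target / (1 - target)
    return math.ceil(math.log(target_odds / prior_odds, k))

print(updates_needed(1e-100, 0.5, 2))  # hundreds of observations even with k = 2
```

With a sufficiently low prior and a tight enough deadline on the number of possible observations, confirmation becomes hopeless, which is the point in the text.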

Hence, the confirmability intuition, if correct, yields a lot more than regularity: it yields substantive non-formal constraints on the priors. We shouldn’t assign a prior of, say, 10⁻¹⁰⁰ to Relativity Theory, at least not if our priors for observational evidence propositions are anything like what we tend to think they are. I am not, however, claiming that every contingent proposition should be confirmable before the heat death of the universe. We would not expect the proposition that there have been 10¹⁰⁰⁰⁰ fair and independent coin tosses made over the lifetime of the universe and that they all turned out to be heads to be confirmable in this strong sense.

In any case, here is what I think has happened. The first reason for regularity, the technical one, only applied to potential bodies of evidence. The second, on the other hand, shows more than it claims: it yields non-formal constraints on priors that go beyond regularity. In particular, I think, the subjective Bayesian is on thin ice if they want to require regularity.

Wednesday, October 20, 2021

A New Testament argument against young earth creationism

The New Testament says, in multiple places, that the end of the world will come soon.

It’s been about two thousand years and the end of the world has not come.

If the world is only about 10,000 years old, then 2,000 years is about 20% of the age of the world. And that’s not soon. So, if the New Testament is right, the world must be rather more than 10,000 years old.

Indeed, we have this: the older we think the world to be, the easier it is to accept the New Testament teaching that the end of the world would come soon after apostolic times.

On standard scientific views, the 2000 years that we’ve had since the time of Jesus is about one percent of the time humans have been on earth, one two millionth of the age of the earth, and one seven millionth of the age of the universe. A blip.

Is Lewis's identity theory a type-type identity theory?

David Lewis’s 1983 identity theory of mind holds that:

1. For each mental state type M there is a causal role RM such that to be a state of type M is to fulfill RM.

2. For each actually occurring mental state type M, the causal role RM is fulfilled by physical states and only physical states.

It is normal to take Lewis’s identity theory to be a type-type identity theory.

But a type-type identity theory identifies being a state of type M with some physical state type. So whether Lewis’s identity theory is a type-type identity theory depends then on whether fulfilling RM counts as a physical state type.

Here are two accounts of what makes a type T be a physical type:

3. Everything falling under T is physical.

4. Necessarily everything falling under T is physical.

If (3) is the right account of the physicality of a state type, then Lewis’s theory is a type-type identity theory, because everything that fulfills RM is physical according to (2).

However, (3) is an inadequate account of the physicality of a type. Consider the type ghost. That’s paradigmatically not a physical type. But in fact, trivially, everything that is a ghost is physical, simply because there are no ghosts. If one objects that only instantiated types count, then we can note that by (3) the type ghost-or-pig also counts as a physical type, whereas it surely does not.

It seems to me that (4) is a much better account of a physical type. However, on (4) for Lewis’s theory to count as a type-type identity theory, he would need a version of (2) strengthened by deleting “actually” and inserting “Necessarily” in front. And Lewis’s arguments do not establish such a stronger version of (2). Lewis’s arguments are quite compatible with RM having non-physical realizers in other possible worlds.

That said, perhaps (4) is not the right account of the physicality of a type either. Consider the type believed by God to be an electron. Necessarily, everything falling under this type is an electron, hence physical. But because the definition of the type makes use of supernaturalist vocabulary, the type does not seem to be physical. This criticism points towards an account of physical type like this:

5. The type T is expressible wholly in terms that natural science uses.

It’s essential for this to fit with Lewis’s theory that causation be one of the terms that natural science uses. But now imagine that we live in a world where one being causes spacetime, and it’s a non-physical being. Clearly, the type cause of spacetime is expressible wholly in natural scientific vocabulary, but given that the one and only instance of this type is non-physical, it sure doesn’t sound like a physical type! Indeed, if (5) is how we understand physical types, then a type-type identity theory does not even imply a token-token identity theory!

We might try to combine (3) with (5):

6. Everything falling under T is physical and the type T is expressible wholly in terms that natural science uses.

But now imagine that there is no being that causes spacetime and all spatiotemporal entities, but that it is possible for there to be such a being, and that any such being would necessarily be non-physical. In that case causes spacetime and all spatiotemporal entities satisfies (6) trivially, but is surely not a physical type, because the only possible instances of it would be non-physical. (If one objects that types need to be instantiated, just disjoin this type with the type pig, as we did in the ghost case.)

So perhaps our best bet is to combine (4) with (5). But any account on which (4) is a necessary condition for the physicality of a type is an account that goes beyond Lewis’s, because it requires the stronger version of (2) with actuality replaced by necessity.

I conclude that Lewis’s account isn’t really a type-type identity theory, except in the inadequate senses of physicality of type given by (3), (5) or (6).

If we reshuffled the atoms in the observable universe, how likely is it we would get any molecules?

Here’s an amusing question. Let’s say that I took all the atoms in the observable universe and shuffled their positions by independently randomly and uniformly choosing positions for them through the volume of the observable universe. What is the probability that I would get any molecules?

It turns out not to be hard to answer this if we just want to get a very rough upper bound. The above-linked Wikipedia article on the observable universe gives us some useful data:

• Volume of observable universe (V): about 4 × 10⁸⁰ m³

• Number of atoms in observable universe (N): about 10⁸⁰

Next, all the bonds in molecules that I can find references to are under about 4 × 10⁻¹⁰ m, and anyway the most relevant one is hydrogen-hydrogen, which is much smaller. Thus, if we keep a sphere of volume v = (4/3)π(4 × 10⁻¹⁰)³ m³ ≈ 3 × 10⁻²⁸ m³ around each atom empty of other atoms, we can suppose there are no molecules. What’s the probability of doing that? It’s

• p = ((V − v)/V)((V − 2v)/(V − v))⋯((V − Nv)/(V − (N − 1)v)).

Lots of stuff cancels out and we get:

• p = (V − Nv)/V = 1 − Nv/V.

Thus, the probability that we won’t succeed in clearing such a space around each atom is:

• 1 − p = Nv/V ≈ 10⁻²⁸.

So, it’s extremely unlikely that a random rearrangement of the atoms in the observable universe would result in a single molecule.
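The estimate is easy to reproduce. Here is the whole calculation as a short script, using the same numbers as in the post, so everything is order-of-magnitude only:

```python
import math

# Rough upper bound on the chance that a uniform reshuffle of all atoms
# in the observable universe leaves any two atoms within bonding distance.
V = 4e80        # volume of the observable universe, m^3
N = 1e80        # number of atoms
r_bond = 4e-10  # generous upper bound on a bond length, m

v = (4.0 / 3.0) * math.pi * r_bond ** 3  # exclusion sphere per atom, ~3e-28 m^3
# The telescoping product ((V - v)/V)((V - 2v)/(V - v))...((V - Nv)/(V - (N-1)v))
# collapses to (V - Nv)/V, so the failure probability is at most:
prob_any_pair_close = N * v / V
print(prob_any_pair_close)  # on the order of 10^-28
```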

Does this have any interesting philosophical consequences? I don’t know. I wanted to do this calculation to have a better intuitive picture of how incredibly unlikely it would be to get the observable universe by chance to be remotely like what we have—having at least one molecule is my “remotely like what we have” condition.

Of course, nobody thinks our current observable universe was produced directly by chance in its current state. But if something like Liouville’s theorem is applicable to the observable universe, and my above estimate gets close to the probability in the relevant phase space, then the probability of an initial state that results in at least one molecule in 13.8 billion years is going to be the same as the probability of getting at least one molecule directly by the chance arrangement. But I know little about this kind of physics stuff.

Tuesday, October 19, 2021

Civility

I am planning in the future to delete comments whose style falls short of academic standards of civility due to such things as sarcasm, insults, ungrounded accusations, or a general failure of a measured, calm and respectful tone. I would probably already have done so had people other than me been the targets of the violations of civility, but in the future I plan to do so even when I am the target, in the interests of discouraging uncivil discourse. Moreover, commenters should count on a high likelihood of being banned after about three violations, and earlier if the violations are more egregious. If your comment is deleted, feel free to re-post in better style. If you've been banned and want to be reinstated, email me.

Spacetime and Aristotelianism

For a long time I’ve been inclining towards relationalism about space (or more generally spacetime), but lately my intuitions have been shifting. And here is an argument that seems to move me pretty far from it.

Given general relativity, the most plausible relationalism is about spacetime, not about space.

Given Aristotelianism, relations must be grounded in substances.

Here is one possibility for this grounding:

1. All spatiotemporal relations are symmetrically grounded: if x and y are spatiotemporally related, then there is an x-to-y token relation inherent in x and a y-to-x token relation inherent in y.

But this has the implausible consequence that there is routine backwards causation, because if I walk a step to the right, then that causes different tokens of Napoleon-to-me spatiotemporal relations to be found in Napoleon than would have been found in him had I walked a step to the left.

So, we need to suppose:

2. Properly timelike spatiotemporal relations are grounded only in the later substance.

But what about spacelike spatiotemporal relations? Presumably, they are symmetrically or asymmetrically grounded.

If they are symmetrically grounded, then we have routine faster-than-light causation, because if I walk a step to the right, then that causes different tokens of x-to-me spatiotemporal relations to be found in distant objects throughout the universe.

Moreover, on the symmetric grounding, we get the odd consequence that it is only the goodness of God that guarantees that you are the same distance from me as I am from you.

If they are asymmetrically grounded, then we have arbitrariness as to which side they are grounded on, and it is a regulative ideal to avoid arbitrariness. And we still have routine faster-than-light causation. For presumably it often happens that I make a voluntary movement and someone on the other side of the earth makes a voluntary movement spacelike related to my movement (because there are so many people!), and now wherever the spatiotemporal relation is grounded, it will have to be affected by the other’s movement.

I suppose routine faster-than-light causation isn’t too terrible if it can’t be used to send signals, but it still does seem implausible. It seems to me to be another regulative ideal to avoid nonlocality in our theories.

What are the alternatives to relationalism? Substantivalism is one. We can think of spacetime as a substance with an accident corresponding to every point. And then we have relationships to these accidents. There is a lot of technical detail to work out here as to how the causal relationships between objects and spacetime points and the geometry of spacetime work out, and whether it fits with an Aristotelian view. I am mildly optimistic.

Another approach I like is a view on which spacetime position is a nonrelational position determinable accident. Determinable accidents have determinates which one can represent as values. These values may be numerical (e.g., mass or charge), but they may be more complex than that. It’s easiest in a flat spacetime: spacetime position is then a determinable whose determinates can be represented as quadruples of real numbers. In a non-flat spacetime, it’s more complicated. One option for the values of determinate positions is that they are “pointed spacetime manifold portions”, i.e., intersections of a Lorentzian manifold with a backwards lightcone (with the intended interpretation that the position of the object is at the tip of the lightcone). (What we don’t want is for the positions to be points in a single fixed manifold, because then we have backwards causation problems, since as I walk around, the shifting of my mass affects which spacetime manifold Napoleon lived in.)

Monday, October 18, 2021

Talk against privation theory of evil

I'm giving a Zoom talk against the privation theory of evil (with an alternative provided) for Liverpool University on Thursday at 9 am Central Time / 15:00 UK time. You need to register if you are interested in attending.

A potential explanation why we don't observe violations of the PSR

A standard puzzle for the opponent of the Principle of Sufficient Reason (PSR) is to explain why we don’t observe objects coming into existence ex nihilo. Here is a thought that I think hasn’t been explored enough. Maybe when an object comes into existence ex nihilo, it is unlikely that the object would end up being spatiotemporally related to things already in existence. In other words, perhaps the typical object coming into existence ex nihilo forms a new universe, not spatiotemporally related with any other universe.

If this is right, then the opponent of the PSR should take multiverse hypotheses very seriously.

That said, such random multiverse hypotheses lead to very compelling sceptical scenarios.

Physicalism, persons, fission and eliminativism

People are philosophically unhappy about nonlocality in quantum mechanics. It is interesting to me that there is an eerily similar nonlocality on standard psychological theories of personal identity. For on those theories:

1. You survive if your memories survive in one living person.

2. You perish if your memories fission between more than one living person.

Now imagine that your brain is frozen, the data from it is destructively read, and then sent to two different stations, A and B, located in opposite directions five light minutes away from your original brain. At each station, a coin is simultaneously flipped (say, in the rest frame of your original brain). If it’s heads (!), the data is put into a freshly cloned brain in a vat, and if it’s tails, the data is deleted.

On a psychological theory, if both coins land heads you perish by (2). But if exactly one coin lands heads, you survive at that station. So whether you exist at one station depends on what happens simultaneously (according to one frame) at a station ten light minutes away.

Note, however, that this is not explicable via quantum nonlocality, because quantum nonlocality depends on entanglement, and there is no relevant entanglement in this thought experiment. It would be a nonlocality beyond physics.

I think one lesson here is that ostensibly physicalist or physicalist-friendly theories of persons or minds can end up sounding oddly dualist. For if dualism were true, it wouldn’t be utterly surprising if facts about where your soul reappears could have a faster-than-light dependence on far away events, since souls aren’t governed by the laws of physics. Similarly, on functionalism plus psychological theories of personal identity, you could move between radically different physical embodiments or even between a physical embodiment and a nonphysical realization. That, too, sounds rather like what you would expect dualism to say.

If I were a physicalist, I would perhaps be inclined to be drawn by these observations towards eliminativism about persons. For these observations suggest that even physicalist pictures of the person may be too deeply influenced by the dualist roots of philosophical and theological reflection on personhood. If these roots are seen as intellectually corrupt by the physicalist, then it should be somewhat attractive to deny the existence of persons.

Friday, October 15, 2021

Another problem with evolutionary accounts of teleology

The crucial thought behind evolutionary accounts of proper function or teleology is that organisms succeed in reproducing because they are fulfilling a function, and wouldn’t have reproduced otherwise. In a paper with Koons, I offer a Great Grazing Ground objection to all such accounts.

Here, I want to offer a perhaps neater objection. Imagine Twin Earth just like Earth, with a biological history just like ours. But there is an extremely powerful alien, akin to Frankfurt’s counterfactual intervener, who has a script for all the details of biological history on that planet. That script, completely by chance, matches all the events on both Twin Earth and Earth. But the counterfactual intervener has the following immovable policy: if there is any deviation from the script on Twin Earth, the alien restores conditions to the same ones that would have resulted according to the script. Moreover, the alien’s restoration would occur before there is any deviation in reproduction or survival. But, by good fortune, no intervention is ever needed: everything actually follows the script.

Imagine, for instance, that a bird is attacked by a predator. On Earth, it escapes on its wings and reproduces. The same happens on Twin Earth. But on Twin Earth, had the bird not flown, the alien would have intervened, moved the bird out of danger, and then restored everything to the post-flight situation in the script. Consequently, on Twin Earth, it is false that had the bird not flown, it wouldn’t have reproduced. Indeed, apart from trivial cases like “Had x not reproduced, it wouldn’t have reproduced”, on Twin Earth the counterfactuals allegedly defining proper function or teleology fail to hold. Yet events on Twin Earth are just as they are on Earth, and the alien doesn’t do anything but watch. And it is deeply implausible that simply by watching the alien destroys proper function or teleology.

An asymmetry between physical and emotional pain

Here is a puzzling asymmetry. It seems that:

• Typically, we should seek to remove serious physical pains, even when these pains are normal and we are unable to alleviate the underlying problem.

• Typically, when emotional pains are serious but normal, we should not seek to remove them, except by alleviating the underlying problem.

Thus, if one has lost a leg in an accident, it seems one should be given pain killers, whether or not the leg can be reattached, and even if one’s degree of pain is proper to the loss. But if one has lost a friend, the grief should not be removed, unless it can be done by restoring the friend (there is, after all, more than one sense of “lost a friend”).

Structurally, it seems that leg and friend cases are parallel: In both cases, there is a harm, which it is normal to perceive painfully.

Solution 1: The difference is due to instrumental factors. In the case of the loss of a friend, the pain helps one to restructure oneself mentally in the tragic new circumstances. In the case of the loss of a leg, however, assuming one is already seeking medical attention, the pain is unlikely to lead to any further goods.

Solution 2: Due to the Fall, typically our physical pains are excessive. We feel more pain for a physical loss than we should given that our primary ends are not physical in nature. The appearance of asymmetry is due to an equivocation on “normal”: the kind of pain we feel at physical damage is statistically normal for fallen human beings, but is not really normal. On the other hand, when we talk of normal emotional pains, there the pains are either really normal, correctly grasping the tragedy of the situation, or else they are actually deficient. (A standard theological intuition is that Jesus suffered mentally more at evils than any of us, because his virtue made him more acutely aware of the badness of these evils.)

Thursday, October 14, 2021

Constructive presence

This morning, I was reading the Georgia Supreme Court’s Simpson v. State (1893) decision on a cross-state shooting, and loved this example, which is exactly the kind of example contemporary analytic philosophers like to give: "a burglary may be committed by inserting into a building a hook, or other contrivance, by means of which goods are withdrawn therefrom; and there can be no doubt that, under these circumstances, the burglar, in legal contemplation, enters the building."

Dualist eliminativism

Eliminativism holds that our standard folk-psychological concepts of mental functioning—say, thoughts, desires, intentions and awareness—have no application or are nonsense. Usually, eliminativism goes hand in hand with physicalism and scientism: the justification for eliminativism is the idea that the truly applicable concepts of mental functioning are going to be the ones of a developed neuroscience, and it is unlikely that these will match up with our current folk psychology.

But we can make a case for eliminativism on deeply humanistic grounds independent of neuroscience. We start with the intuition that the human being is very mysterious and complex. Our best ways of capturing the depths of human mental functioning are found neither in philosophy nor in science, but in literature. This is very much what we would expect if our standard concepts did not correctly apply to the mind’s functioning, but were only rough approximations. Art flourishes in limitations of medium, and the novelist and poet use the poor tool of these concepts to express the human heart. Similarly, the face expresses the soul (to tweak Wittgenstein’s famous dictum), and yet what we see in the face is more complex, more mysterious than what we express with our folk psychological vocabulary.

There is thus a shallowness to our folk-psychological vocabulary which simply does not match the wondrous mystery of the human being.

Finally, and here we have some intersection with the more usual arguments for eliminativism, our predictive ability with respect to human behavior is very poor. Just think how rarely we can predict what will be said next in conversation. And even our prediction of our own behavior, even our mental behavior, is quite poor.

The above considerations may be compatible with physicalism, but I think it is reasonable to think that they actually support dualism better. For on physicalism, ultimately human mental function would be explicable in the mechanistic terminology of physics, and my considerations suggest an ineffability to the human being that may be reasonably thought to outpace mechanistic expressions.

But whether or not these considerations in fact support dualism over physicalism, they are clearly compatible with dualism. And so we have a corner of logical space not much explored by (at least Western) philosophers: dualist eliminativism. I do not endorse this view, but in some moods I find it attractive. Though I would like it to come along with some kind of a story about the approximate truth of our ordinary claims about the mind.

Disembodied existence and physicalism

Consider the following standard Cartesian argument:

1. I can imagine myself existing without a body.

2. So, probably, I can exist without a body.

3. If I can exist without a body, I am not physical.

4. So, I am not physical.

It is my impression that the bulk of physicalist concern about this argument focuses on the inference from (1) to (2). But it seems to me that it would be much more reasonable for the physicalist to agree to (2) but deny (3). After all, our best physicalist theory of the person is functionalism combined with a psychological account of personal identity. But on that theory, for me to exist without a body all that’s needed is for my memories to be transferred into a spiritual computational system which is functionally equivalent to my current neural computational system, and that seems quite plausibly possible.

The physicalist need not claim that I am essentially physical, only that I am in fact physical, i.e., that in the actual world, the realizer of my functioning is physical.

Wednesday, October 13, 2021

A pedagogical universe

Our science developed over millennia, progressing from false theory to less false theory. Why did we not give up long ago? I take it this is because the false theories, nonetheless, had rewards associated with them: although false, they allowed for prediction and technological control in ways that were useful (in a broad sense) to us.

Thus, the success of our science depends not just on a “uniformity of nature” on which the correct fundamental scientific theories are elegant and uniform. Most of our historical progress in physics has not involved correct scientific theories—and quite possibly, we do not have any correct fundamental theories in physics yet. The success of our science required low-hanging fruit for us to pick along the way, fruit that would guide us in the direction of truth.

We can imagine worlds where the ultimate physics requires an enormous degree of sophistication (much as we expect to be the case in our world) and there is little in the way of low-hanging fruit (except maybe for the lowest level of low-hanging fruit, involving the regularities needed to enable the evolution of intelligence in the first place) in the form of approximately true theories that reward us with prediction and control, so that beings like us would just give up on science. Our world is better than that.

Indeed, our world seems to be pedagogically arranged for us, arranged to gradually teach us science (and other things), much as we teach our children, with intellectual and practical rewards. There is a design argument for the existence of God from this (closely related to this one).

Friday, October 8, 2021

Deceit and Double Effect

Suppose you inform me of something true, p, and as a result I come to believe it. Then very likely you’ve deceived me about something!

For there is surely some falsehood q that I believed previously with high confidence. But presumably I did not believe the conjunction p&q before I got your information, since I didn’t believe p. But now that you’ve informed me of p, I am likely to believe p&q, and yet that is a falsehood.

Sometimes this argument doesn’t work (maybe sometimes I already believed p&q even though I didn’t believe p, and maybe sometimes my belief in p is sufficiently marginal that I still don’t believe p&q), but most of the time it does.
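The mechanics can be illustrated with a toy threshold (“Lockean”) model of belief; the threshold, the particular numbers, and the independence of p and q are all my own assumptions, not part of the argument above.

```python
# Toy "Lockean" model, for illustration only: believe X iff Cr(X) > 0.9.
# The threshold, the numbers, and the independence of p and q are assumed.
THRESHOLD = 0.9

def believes(credence):
    return credence > THRESHOLD

cr_q = 0.95          # the falsehood q, believed with high confidence
cr_p_before = 0.5    # my credence in p before you inform me of it
cr_p_after = 0.99    # my credence in p after you inform me of it

# Treating p and q as independent, Cr(p&q) = Cr(p) * Cr(q):
assert believes(cr_q)                      # I believe the falsehood q
assert not believes(cr_p_before * cr_q)    # 0.475: I don't yet believe p&q
assert believes(cr_p_after * cr_q)         # 0.9405: now I believe the falsehood p&q
```

On these numbers, informing me of the truth p pushes my credence in the false conjunction p&q over the belief threshold, which is the pattern the argument describes.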

This means that we are typically deceiving people all the time in conversation! This sounds bad, unless we make a distinction between foreseeing and intending. You can foresee (now that you saw the above argument) that whenever you inform me of something this is likely to deceive me about something else. But merely foreseen deceit counts for very little morally as long as you don’t intend the deceit.

Thursday, October 7, 2021

Could the PSR be contingent?

The Principle of Sufficient Reason (PSR) says that every contingent truth has an explanation. Most people who accept the PSR think it is a necessary principle. And with good reason: the PSR seems more like a candidate for a fundamental necessary truth than a merely contingent fact. And epistemically, if the PSR is contingent, it is hard to see why we should think ourselves lucky enough for it to be true.

All that said, it is interesting to investigate the question a bit more. So, let’s suppose the PSR is contingently true. Then according to the PSR, the PSR has an explanation, like every other contingent truth. What could the explanation of the PSR be like?

Since we’ve assumed the PSR to be contingent, the explanation can’t simply involve derivation of the PSR from necessary metaphysical principles.

To explain a contingent PSR is to explain why no contingent unexplained thing has happened.

Here is one suggestion. Perhaps there is a necessary being which has the power to prevent the existence of contingent unexplained events. This necessary being freely, but with good reason that the necessary being necessarily has, chooses to exercise this power. Thus, the explanation of why no contingent unexplained thing has happened is that the necessary being freely chose to prevent all such things. And the necessary being’s free choice is explained by reasons.

I am not sure what I think of the plausibility of the hypothesis of a being that has the power to prevent things from popping into existence causelessly, if such popping is otherwise metaphysically possible.

Here is another much less metaphysically loaded attempt. It seems to me that whether one accepts the PSR or not, one should accept instances of the following kind of explanatory schema for contingent events E:

1. Event E did not happen because there is no explanation of E.

If the PSR is necessarily true, then the fact that there is no explanation of E entails that E did not happen. However, I think we should accept instances of (1) even if the PSR is contingently true and even if it is not true at all. In those cases, that there is no explanation of E may not entail that E did not happen, but we shouldn’t think that explanations must entail the events they explain. (If we thought that, we would have to reject most scientific explanations.)

Now imagine we have an infinite list of all possible contingent events that could happen but did not happen, E1, E2, ..., and an infinite list of all contingent events that did happen, F1, F2, .... We can then say:

2. The PSR is true because E1 did not happen, E2 did not happen, E3 did not happen, ..., while on the other hand F1, F2, ... did happen.

And why did Ei not happen?

3. Ei did not happen because there is no explanation of Ei.

And of course each of the Fi does have an explanation, because the PSR is, we have assumed, true.

This seems like an explanation of the contingent truth of the PSR.

Both options seem a bit fishy, though I can’t say exactly what’s wrong with them.

Wednesday, October 6, 2021

A cosmological argument from a PSR for ordinary truths

Often in cosmological arguments the Principle of Sufficient Reason (PSR) is cleverly applied to vast propositions like the conjunction of all contingent truths or to highly philosophical claims like that there is something rather than nothing or that there is a positive contingent fact. But at the same time, the rhetoric that is used to argue for the PSR is often based on much more ordinary propositions, such as Rescher’s example of an airplane crash which I re-use at the start of my PSR book. And this can feel like a bait-and-switch.

To avoid this criticism, let’s suppose a PSR limited to “ordinary” propositions, i.e., the kind that occur in scientific practice or daily life.

1. Necessarily we have the Ordinary PSR that every contingent ordinary truth has an explanation. (Premise)

2. That there is an electron is an ordinary proposition. (Premise)

3. It is possible that there is exactly one contingent being, an electron. (Premise)

4. Necessarily, if no electron is a necessary being, then any explanation of why there is an electron involves the causal activity of a non-electron. (Premise)

5. Let w be a possible world where there is exactly one contingent being, an electron. (By 3)

6. At w, there is an explanation of why there is an electron. (By 1, 2 and 4)

7. At w, there is a non-electron that engages in causal activity. (By 4, 5 and 6)

8. At w, every non-electron is a necessary being. (By 5)

9. At w, there is a necessary being that engages in causal activity. (By 7 and 8)

10. So, there is a necessary being that possibly engages in causal activity. (By 9 and S5)

So, we have a cosmological argument from the necessity of the Ordinary PSR.

Objection: All that the ordinary cases of the PSR show is that actually the Ordinary PSR is true, not that it is necessarily true.

Response: If the Ordinary PSR is merely contingently true, then it looks like we are immensely lucky that there are no exceptions whatsoever to the Ordinary PSR. In other words, if the Ordinary PSR is merely contingently true, we really shouldn’t believe it to be true—we shouldn’t think ourselves this lucky. So if we are justified in believing the Ordinary PSR to be at least contingently true, we are justified in believing it to be necessarily true.

Tuesday, October 5, 2021

Preliminary notes on Cartesian scoring rules

Imagine an agent for whom being certain that a proposition p is true has infinite value if p is in fact true. This could be a general Cartesian attitude about all propositions, or it could be a special attitude to a particular proposition p.

Here is one way to model this kind of Cartesian attitude. Suppose we have a single-proposition accuracy scoring rule s(r, i) which represents the epistemic utility of having credence r when the proposition in fact has truth value i, where i is either 0 (false) or 1 (true). The scores can range over the whole interval [−∞, ∞], and I will assume that s(r, i) is finite whenever 0 < r < 1, and continuous at r = 0 and r = 1. Additionally, I suppose that the scoring rule is proper, in the sense that the expected utility of sticking to your current credence r by your own lights is at least as good as the expected utility of any other credence. (When evaluating expected utilities with infinities, I use the rule 0 ⋅ (±∞) = 0.)

Finally, I say the scoring rule is Cartesian with respect to p provided that s(1, 1)=∞. (We might also have s(0, 0)=∞, but I do not assume it. There are cases where being certain and right that p is much more valuable than being certain and right that ∼p.)

Pretty much all research on scoring rules focuses on regular scoring rules. With a regular scoring rule, one is allowed to have epistemic utility −∞ when certain of a falsehood (i.e., s(1, 0) = −∞ and/or s(0, 1) = −∞), but the possibility of a +∞ epistemic utility is ruled out, and indeed epistemic utilities are taken to be bounded above. Our Cartesian rules are all non-regular.

I’ve been thinking about proper Cartesian scoring rules for about a day, and here are some simple things that I think I can show:

1. They exist. (As do strictly proper ones.)

2. One can have an arbitrarily fast rate of growth of s(r, 1) as r approaches 1.

3. However, s(r, 1)/s(r, 0) always goes to zero as r approaches 1.

Claim (2) shows that we can value near-certainty-in-the-truth to an arbitrarily high degree, but there is a price to be paid: one must disvalue near-certainty-in-a-falsehood way more.

One thing that’s interesting to me is that (3) is not true for non-Cartesian proper scoring rules. There are bounded proper scoring rules, and then s(1, 1)/s(1, 0) can be some non-zero ratio. (Relevant to this is this post.) Thus, assuming propriety, going Cartesian—i.e., valuing certainty of truth infinitely—implies an infinitely greater revulsion from certainty in a falsehood.

A consequence of (2) is that you can have proper Cartesian scoring rules that support what one might call obsessive hypothesis confirmation: even if gathering further evidence grows increasingly costly for roughly the same Bayes factors, given a linear conversion between epistemic and practical utilities, it could be worthwhile to continue gathering evidence for a hypothesis no matter how close to certain one is. I don’t think all Cartesian scoring rules support obsessive hypothesis confirmation, however.
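As a concrete sketch (my own construction, offered only as an existence illustration, not as the rules meant above): the standard Savage construction s(r, 1) = G(r) + (1 − r)G′(r), s(r, 0) = G(r) − rG′(r) is proper for any convex G, and the choice G(r) = −log(1 − r) makes it Cartesian, since s(r, 1) → ∞ as r → 1. A quick numerical check of propriety and of the vanishing ratio in claim (3):

```python
import math

# A candidate proper Cartesian rule via the Savage construction with the
# convex generator G(r) = -log(1-r). This particular G is my own choice.
def s(r, i):
    if i == 1:
        return 1.0 - math.log(1.0 - r)             # s(r,1) -> +inf as r -> 1
    return -math.log(1.0 - r) - r / (1.0 - r)      # s(r,0) -> -inf as r -> 1

def expected(q, r):
    """Expected score of reporting credence r when your credence is q."""
    return q * s(r, 1) + (1.0 - q) * s(r, 0)

# Propriety: reporting your actual credence is optimal (checked on a grid).
grid = [k / 100 for k in range(1, 100)]
for q in (0.1, 0.5, 0.9):
    assert all(expected(q, q) >= expected(q, r) - 1e-9 for r in grid)

# Claim (3): |s(r,1)/s(r,0)| shrinks toward 0 as r -> 1, so the infinite
# value of certainty-in-truth is bought with a far larger disvalue of
# certainty-in-falsehood.
ratios = [abs(s(r, 1) / s(r, 0)) for r in (0.9, 0.99, 0.9999)]
assert ratios[0] > ratios[1] > ratios[2]
```

Propriety here follows from the convexity of G: the expected score of reporting r given credence q is G(r) + (q − r)G′(r), which is maximized at r = q.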

Friday, October 1, 2021

A simple moral preference circle with infinities

Here is a simple moral preferability circle. Suppose there are infinitely many human strangers numbered ..., −3, −2, −1, 0, 1, 2, 3, ..., all of whom, in addition to two cats, are about to drown. Consider these options:

A. Save the strangers numbered 0, 1, 2, ....

B. Save the strangers numbered −1, −2, −3, ... and one cat.

C. Save the strangers numbered 1, 2, 3, ... and both cats.

Option B beats Option A: If we had to choose between strangers 0, 1, 2, ... and strangers −1, −2, −3, ..., we should clearly be indifferent. Toss in the cat, and now it looks like we have a reason to save the second set of strangers.

Option C beats Option B: If we had to choose between strangers −1, −2, −3, ... and strangers 1, 2, 3, ..., we should be indifferent. But now observe that in Option C one more cat is saved, and it sure looks like we should go for C.

Option A beats Option C: Option A replaces the two cats with stranger 0, and surely it’s better to save one human over two cats.
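The pairwise comparisons above turn on pairing the rescued strangers one-to-one. A minimal sketch of the pairings, checked on finite truncations of the infinite sets (the function names are mine):

```python
# Each "beats" step pairs the strangers saved in one option one-to-one with
# those saved in the other; the extra cat (or stranger 0) then breaks the tie.
# We exhibit the pairings on finite truncations of the infinite sets.
def a_saves(n): return set(range(0, n))               # strangers 0, 1, 2, ...
def b_saves(n): return {-k for k in range(1, n + 1)}  # strangers -1, -2, -3, ...
def c_saves(n): return set(range(1, n + 1))           # strangers 1, 2, 3, ...

N = 1000
# B vs A: the map k -> -k-1 pairs A's strangers with B's; B's cat tips the scale.
assert {-k - 1 for k in a_saves(N)} == b_saves(N)
# C vs B: the map k -> -k pairs B's strangers with C's; C's extra cat tips it.
assert {-k for k in b_saves(N)} == c_saves(N)
# A vs C: A saves everyone C does, plus stranger 0 in place of the two cats.
assert a_saves(N + 1) == c_saves(N) | {0}
```

The circle arises because each comparison uses a different pairing of the same infinite crowd, which is exactly what finite-additivity intuitions do not prepare us for.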

If you don’t think we have moral reasons to save cats, replace saving the cats from drowning with saving two human strangers from ten minutes of pain.

I am now toying with an intuitively very appealing solution to problems like the above: we have no moral rules in such outlandish cases. I think this can be said on either natural law or divine command theory. On natural law, it is unsurprising if our nature does not provide guidance in situations where we are far from our natural environment. On divine command theory, why would God bother giving us commands that apply to situations so far from ones we are going to be in?

Musings on personal qualitative identity

Consider the popular concept of “identity”, in the sense of what one “identifies with/as”. Let’s call this “personal qualitative identity”. We can think of someone’s personal qualitative identity as a plurality of properties that the person correctly takes themselves to have and that are important, in a way that needs explication, to the person’s image of themselves.

There are a few analytic quibbles we could raise about what I just said. Couldn’t someone have properties they do not actually have as part of their identity? Surely there are lots of people who have excellences of various sorts at the heart of their self-image but lack these excellences. I don’t want to count mistakenly self-attributed properties as part of a person’s identity, because there is a kind of respect we have towards another’s personal qualitative identity that requires it to be factive. In these cases, maybe I would say that the person’s taking themselves to have the excellences is a part of their identity, but not the actual possession of the excellences.

In an opposed criticism, one might want to require the person to know that they have the properties, and not merely to correctly think they have them. But that is asking for too much. Suppose Alice identifies as ethnically Slovak, on the basis of misreading the handwriting on an old genealogical document that actually said she was Slovenian. But suppose the document was wrong, and Alice in fact is Slovak rather than Slovenian. Surely it is correct to say that being Slovak is a part of her identity, even though Alice does not know that she is Slovak.

But the really central and difficult thing in the concept of personal qualitative identity is the kind of “self-identificational” importance that the person attaches to them. We have plenty of properties that we correctly believe, and even know, ourselves to have, but which lack the kind of first-person importance that makes them a part of the personal qualitative identity. There is a contradiction in saying: “It is a part of my (personal qualitative) identity that I am F, but I don’t care about being F.”

In particular, the properties that are a part of the personal qualitative identity enjoy an important role in motivating the person’s actions. Of course, any property one takes oneself to have can motivate action. I don’t much care that my eyes are blue, but my self-attribution of the blueness of my eyes motivates me to write “blue” under “eye color” on government forms. But the properties that are a part of the personal qualitative identity enter into one’s motivations more often, in a wider range of contexts, and in a way more significant to oneself.

There is an ambiguity here, though. When one is motivated to act a certain way by a property in one’s identity, is one motivated by the fact that one has the property or by the fact that one identifies with that property? I want to suggest that the right answer should often be the first-order one. It is my duty as a parent to provide for my children, and I identify with my having that duty. But whether I identify with having that duty or not is irrelevant to the reason-giving force of that duty: if I didn’t identify with that duty, I would be just as obligated by it. Indeed, it seems to me to be a failure when I am moved not by my duty but by my identification with the duty. The thought “this is my duty” can be a healthy thought, but adding “and I identify as having it” is morally a thought too many, though sometimes, morally deficient as we are, we need the kick in the behind that the extra thought provides.

In fact, there is an interesting moral danger here that I think has not been much talked about. If the property F is in my personal qualitative identity, then I also have the higher order property IF of having F in my identity. Logically speaking, this higher order property may or may not itself be a part of my identity. While in some cases it may be appropriate for IF to be a part of my identity in addition to F, in most if not all of those cases, IF should be a less central part of my identity than F, and in many cases it should not be a part of my identity at all. This is because the actual rational motivational force is often largely exhausted by one’s having F, while a focus on IF adds an illusion of additional rational force.

In general, I think that it is important to be critical about our personal qualitative identities. There are substantive and personally important normative questions about which of one’s properties should enter into the identity. A failing I know myself to have is that I end up promoting generalizations about myself into parts of my personal qualitative identity by having them play too strong a motivational role. That “I am the kind of person who ϕs” should not play much of a role in my deliberations. What matters is whether ϕing, on a given occasion, is a good or a bad thing. Yet I find myself often deciding things on the basis of being, or not being, a certain kind of person. That's deciding on the basis of navel-gazing.

I find the following norm appealing: a property F should be a part of my identity if and only if independently of my attitude to F, my having F has significant rational importance to a broad range of my deliberations. But this austere norm is probably too austere.

Derivative value

Some things have derivative value. One kind of derivation is from whole to parts: a stone can have a special value by virtue of being a part of something of great significance, say a temple. Another kind of derivation is from parts to whole: a golden statue has a value deriving from the value of its atoms. Yet another kind is from friend to friend: if I do good directly to a friend of yours, I benefit you as well.

The distinction between derivative and original value is orthogonal to that between instrumental and non-instrumental value, and probably also to that between intrinsic and extrinsic value.

It is easy to create puzzles with derivative value, because derivative value is not simply additive and double counting must be avoided. Imagine a golden statue made by someone with minimal artistic skill. The maker of that statue then produced something literally worth its weight in gold, and yet they added almost no value to the world, because almost all of the value of the poorly made statue is derivative. Melting down a golden statue worth exactly its weight in gold does no harm to the world! Similarly, dissolving a ten-member committee need be no more harmful than dissolving a five-member one.
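A toy accounting of the statue example, with invented numbers and an invented aggregation rule, just to make the double-counting point vivid:

```python
# Toy illustration (numbers and the max-aggregation rule are invented):
# the inartistic statue's value derives wholly from its gold, so adding
# statue-value and gold-value would double-count.
gold_value = 100.0     # original value of the gold atoms
statue_value = 100.0   # the statue is worth its weight in gold, derivatively

naive_total = gold_value + statue_value  # 200.0: double-counts the same value
# Counting derivative value only once (here, by taking the greater of the two):
total_with_statue = max(gold_value, statue_value)  # 100.0
total_after_melting = gold_value                   # 100.0
assert total_with_statue == total_after_melting    # melting does no harm
```

The naive additive total would make melting the statue destroy 100 units of value; once the derivative value is counted only once, melting is harmless, as the example says.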

If two people are drowning, one friendless and one with ten friends, perhaps there is additional reason to save the one with ten friends, though the point is not clear. But if there is additional reason, it does not scale linearly with the number of friends. If someone had a thousand friends, that needn’t create much more of a reason to save them than if they had a hundred, I suspect.

It is tempting to initially think of derivative value as a faint shadow of original value. Sometimes this is true: the death of Alice considered as a derivative harm to her distant friends is a mere shadow of the badness of that death considered as a harm to Alice. But sometimes it’s not true: the death of Alice considered as a derivative harm to her closest friends approaches the original badness of that death considered as a harm to her. And the inartistic golden statue’s derivative value is not a whit less than the original value of its gold components.

Can we at least say that derivative value is always at most equal to the original value? Maybe, but even that is not completely clear. That Alice is loved by God makes it be the case that a harm to Alice is a harm to God. But it could be that the derivative badness to God gives us reasons to protect Alice that are stronger than those coming from the original badness to Alice, and the derivative badness here might exceed the original badness. (Recall here Anselm’s idea that sin is infinitely bad, because it offends the infinite God.) Perhaps, though, cases of love do not give rise to purely derivative value, because the derivative value is created by an interaction between the original value of the beloved and the original value of the lover. On the other hand, insofar as the inartistic golden statue’s value is purely derivative, it cannot exceed the original value of the parts.

The non-additiveness of derivative value throws a wrench in simple consequentialist systems on which we maximize the total value of everything. Perhaps, though, it is possible to talk about overall value, which is not additive in nature, so this need not be a knock-down argument against consequentialism. But it definitely seems to complicate things.

Note that similar phenomena occur for other properties than value. When one takes ten pounds of gold and makes a statue of it, one may create a ten pound object (assuming for the sake of argument that statues really exist), but one doesn’t add ten pounds to reality. We need to avoid double-counting in the case of derivative mass just as much as for derivative value.