Monday, September 25, 2023

The Principles of Sufficient and Partial Reasons

I have argued that the causal account of metaphysical possibility implies the Principle of Sufficient Reason (see Section 2.2.6.6 here). The argument was basically this: If p is contingently true but unexplained, then let q be the proposition that p is unexplained but true. Consider now a world w where p is false. In w, the proposition q will be possible (by the Brouwer axiom). So by the causal account of modality, something can start a chain of causes leading to q being true. Which, I claimed, is absurd, since that chain would lead both to p being true and to p being unexplained. But the chain would explain p, so we have absurdity.
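
For readers who like the argument regimented, here is a sketch in symbols; the abbreviation E(p) for "p has an explanation" and the numbering are mine, not part of the original argument:

    Let q = p ∧ ¬E(p), where p is contingently true.
    1. q is true in the actual world.                                  (assumption)
    2. □◇q.                                                            (Brouwer axiom applied to 1)
    3. In a world w where p is false, ◇q.                              (from 2)
    4. In w, something can start a causal chain issuing in q's truth.  (causal account of possibility)
    5. Such a chain would make p true while also explaining p, contradicting the second conjunct of q.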

But it isn’t absurd, or at least not immediately! For the chain need not explain p. It might only explain the aspects of p that do not obtain in w. For a concrete example, suppose that p is a conjunction of p1 and p2, and p1 is false in w but p2 is true. Then a chain that leads to p being true need not explain p2: it might only explain p1, and might leave p2 as is.

I think what my argument for the PSR establishes is a weaker conclusion than the PSR: the Principle of Partial Reason (PPR), that every contingent truth has a partial explanation.

I am pretty sure that PPR plus causal finitism implies PSR, and so the modality argument for PSR can be rescued, albeit at the cost of assuming causal finitism. And, intuitively, it would be weird if PPR were true but PSR were not.

Thursday, September 21, 2023

Dry eternity

Koons and I have used causal paradoxes of infinity, such as Grim Reapers, to argue against infinite causal chains, and hence against an infinite causally-interconnected past. A couple of times people have asked me what I think of Alex Malpass’s Dry Eternity paradox, which is supposed to show that similar problems arise if you have God and an infinite future. The idea is that God is going to stop drinking (holy water, apparently!) at some point, and so he determines henceforth to act by the following rule:

  1. “Every day, God will check his comprehensive knowledge of all future events to see if he will ever drink again. If he finds that he does not ever drink again, he will celebrate with his final drink. On the other hand, if he finds that his final drink is at some day in the future, he does not reward himself in any way (specifically, he does not have a drink all day).”

This leads to a contradiction. (Either there is or is not a day n such that God does not drink on any day after n. If there is such a day, then on day n + 1 God sees that he does not drink on any day after n + 1 and so by the rule God drinks on day n + 1. Contradiction! If there is no such day, then on every day n God sees that he will drink on a day later than n, and so he doesn’t drink on n, and hence he doesn’t ever drink, so that today is a day such that God does not drink on any day after it. Contradiction, again!)
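
Writing D(n) for "God drinks on day n" (my notation, not Malpass's), the rule and the dilemma can be put compactly:

    Rule (1): for every day n, D(n) if and only if ¬D(m) for every m > n.
    Case A: there is a day n with ¬D(m) for all m > n. Then also ¬D(m) for all m > n+1, so the rule gives D(n+1); but n+1 > n, so ¬D(n+1). Contradiction.
    Case B: there is no such day. Then for every n there is a later drinking day, so the rule gives ¬D(n) for every n. But then God never drinks, and so today is a day after which he never drinks, which lands us back in Case A. Contradiction.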

Is this a problem for an infinite future? I don’t think so. For consider this rule:

  2. On Monday, God will drink if and only if he foresees that he won’t drink on Tuesday. On Tuesday, God will drink if and only if he remembers that he drank on Monday.

Obviously, this is a rule God cannot adopt for Monday and Tuesday, since then God drinks on Monday if and only if God doesn’t drink on Monday. But this paradox doesn’t involve an infinite future, just two days.

What’s going on? Well, it looks like in (2) there are two divine-knowledge-based rules—one for Monday and one for Tuesday—each of which can be adopted individually, but which cannot both be adopted, much like in (1) there are infinitely many divine-knowledge-based rules—one for each future day—any finite number of which can be adopted, but where one cannot adopt infinitely many of them.

What we learn from (2) is that there are logical limits to the ways that God can make use of divine foreknowledge. From (2), we seem to learn that one of these logical limits is that circularity needs to be avoided: a decision on Monday that depends on a decision on Tuesday and vice versa. From (1), we seem to learn that another one of these logical limits is that ungrounded decisional regresses need to be avoided: a decision that depends on a decision that depends on a decision and so on ad infinitum. This last is a divine analogue to causal finitism (the doctrine that nothing can have infinitely many things in its causal history), while what we got from (2) was a divine analogue to the rejection of causal circularity. It would be nice if there were some set of principles that would encompass both the divine and the non-divine cases. But in any case, Malpass’s clever paradox does no harm to causal finitism, and only suggests that causal finitism is a special case of a more general theory that I have yet to discover the formulation of.

The infinite future problem for causal accounts of metaphysical possibility

Starting with my dissertation, I’ve defended an account of metaphysical possibility on which it is nothing other than causal possibility. I would try to define this as follows:

  • p is possible_0 iff p is actually true.

  • p is possible_(n+1) iff things have the causal power to make it be that p is possible_n.

  • p is possible iff p is possible_n for some n.
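
Read purely structurally, this is a recursion followed by an existential quantification over n. Here is a deliberately crude Python toy of that shape; the "worlds" and the causal accessibility relation are made up, and the exercise of causal power is flattened into a mere reachability relation, so this is only a sketch of the definition's form, not of the metaphysics:

    # Toy model: worlds are nodes, and an edge w -> v means that in w something
    # has the causal power to bring it about that v is actual.
    can_cause = {
        "actual": ["w1"],
        "w1": ["w2"],
        "w2": [],
    }

    def possible_n(holds_in, n, world="actual"):
        """p is possible_n iff p holds somewhere reachable in at most n causal steps."""
        if holds_in(world):
            return True          # possible_0: p is true at `world`
        if n == 0:
            return False
        return any(possible_n(holds_in, n - 1, v) for v in can_cause[world])

    def possible(holds_in, max_n=10):
        return any(possible_n(holds_in, n) for n in range(max_n + 1))

    print(possible(lambda w: w == "w2"))  # True: reachable in two causal steps

The point of the toy is only that possibility, so defined, requires reachability in some finite number of causal steps.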

I eventually realized that this runs into problems with infinite future cases. Suppose a coin will be tossed infinitely many times, and, as we expect, will come up heads infinitely many times and tails infinitely many times. Let p be the proposition that all the tosses will be heads. Then p is false but possible. Moreover, it is easy to convince oneself that it’s not possible_n for any finite n. Possibility_n involves n branchings from the actual world, while p requires infinitely many branchings from the actual world.

This has worried me for years, and I still don’t have a satisfying solution.

But yesterday I realized a delightful fact. This problem does nothing to undercut the basic insight of my account of metaphysical possibility, namely that metaphysical possibility is causal possibility. All the problem does is undercut one initially plausible way to give an account of causal possibility. But if we agree that there is such a thing as causal possibility, and I think we should, then we can still say that metaphysical possibility is causal possibility, even if we do not know exactly how to define causal possibility in terms of causal powers.

(There is one danger. Maybe the true account of causal possibility depends on metaphysical possibility.)

Wednesday, September 20, 2023

A dilemma for best-systems accounts of laws

Here is a dilemma for best-systems accounts of laws.

Either:

  1. law-based scientific explanations invoke the lawlike generalization itself as part of the explanation, or

  2. they invoke the further fact that this generalization is a law.

Thus, if it is a law that all electrons are charged, and Bob is an electron, on (1) we explain Bob’s charge as follows:

  3. All electrons are charged.

  4. Bob is an electron.

  5. So, and that’s why, Bob is charged.

But on (2), we replace (3) with:

  6. It is a law that all electrons are charged.

Both options provide the Humean with problems.

If it is just the lawlike generalization that explains, then the explanation is fishy. The explanation of why Bob is charged in terms of all electrons being charged seems too close to explaining a proposition by a conjunction that includes it:

  7. Bob is charged because Bob is charged and Alice is charged.

Indeed, both (3)–(5) and (7) are objectionable cases of explaining the mysterious by the more mysterious: the conjunction is more mysterious than its conjunct and the universal generalization is more mysterious than its instances.

On the other hand, suppose that our explanation of why Bob is charged is that it’s a law that all electrons are charged. This sounds correct in general, but is not appealing on a best-systems view. For on a best-systems view, what the claim that it’s a law that all electrons are charged adds to the claim that all electrons are charged is that the generalization that all electrons are charged is sufficiently informative and brief to make it into the best system. But the fact that it is thus informative and brief does not help it explain anything.

Moreover, if the problem with (3)–(5) was that universal generalizations are too much like conjunctions, the problem will not be relieved by adding more conjuncts to the explanation, namely that the generalization is sufficiently informative and brief.

Gratuitous and objective evil

Suppose, highly controversially, that no defensible atheist account of objective value is possible. Now consider a paradigmatic apparently gratuitous horrendous evil E—say, one of the really awful things done to children described by Ivan in the Brothers Karamazov. The following two claims are both intuitive:

  1. E is gratuitous

  2. E is objectively evil.

But if there is no defensible account of objective evil on atheism, then (1) and (2) are in serious tension. For if there cannot be objective evil on atheism, then (2) cannot be true on atheism. Thus, (2) implies theism. But on the other hand, (1) implies atheism, since E is gratuitous just in case, if God existed, E would be an evil that God has conclusive moral reason to prevent.

On our initial assumption about atheism, then, we need to choose between (1) and (2). And here there is no difficulty. That the things described by Ivan are objectively evil is way more clear than that God would have conclusive moral reason to prevent them, even if the latter claim is very likely in isolation.

Is a defensible atheist account of objective value possible? I used to think there was no special difficulty, but I’ve since come to be convinced that probably the only tenable account of objective value is an Aristotelian one based on form, and that human form requires something like a divine source. That said, even if objective value is something the atheist can defend, nonetheless knowledge of objective value is very difficult for the atheist. For objective value has to be (I know this is controversial) non-natural, and on atheism it is very difficult to explain how we could acquire the power to get in touch with non-natural aspects of reality.

But if knowledge of objective value is very difficult for the atheist, then we have tension between:

  1. E is gratuitous

  3. I know that E is objectively evil.

And (3) is still, I think, significantly more plausible than (1).

Tuesday, September 19, 2023

The evidential force of there being at least one gratuitous evil is low

Suppose we keep fixed in our epistemic background K general facts about human life and the breadth and depth of evil in the world, and consider the impact on theism of the additional piece of evidence that at least one of the evils is apparently gratuitous—i.e., one for which strenuous investigation has failed to turn up a theodicy.

Now, clearly, finding that there is not even one apparently gratuitous evil would be extremely good evidence for the existence of God—for if there is no God, it is amazing if, of the many evils there are, none were apparently gratuitous, but less amazing if there is a God. And hence, by a standard Bayesian theorem, finding that there is at least one apparently gratuitous evil must be some evidence against the existence of God. But at the same time, the fact that F is strong evidence for T does not mean that the absence of F is strong evidence against T. Whether it is or is not depends on details.

But the background K contains some relevant facts. One of these is that we are limited knowers, and while we have had spectacular successes in our ability to understand the world and events around us, it is not incredibly uncommon to find things that have (so far) defeated our strenuous investigation. Some of these are scientific questions, and some are interpersonal questions—“Why did he do that?” Given this, it seems unsurprising, even if God exists, that we would sometimes be stymied in figuring out why God did something, including why he failed to prevent some evils. Thus, the probability of at least one of the vast number of evils in K being apparently gratuitous, given the existence of God, is pretty high, though slightly lower than given the non-existence of God. This means that the evidential force for atheism of there being at least one apparently gratuitous evil is fairly low.

Furthermore, one can come up with a theodicy for the apparent gratuity of a gratuitous evil. When a person’s motives are not transparent to us, we are thereby provided with an opportunity for exercising the virtue of trust. Conversely, a person’s always explaining themselves when they appear to have acted unjustifiably builds not trust but suspicion. Given the evils themselves as part of the background K, that some of them be apparently gratuitous provides us with an opportunity to exercise trust in God in a way that we would not be able to if none of the evils were apparently gratuitous. Given K (which presumably includes facts about us not being always in the luminous presence of God), it would be somewhat surprising if God always made sure we could figure out why he allowed evils. Again, this makes the evidential force for atheism of the apparent gratuity of evil fairly low.

Now, it may well be that when we consider the number or the type (perhaps they are of a type where divine explanations of permission would be reasonably expected) of apparently gratuitous evils, things change. Nothing I have said in this post undermines that claim. My only point is that the mere existence of an apparently gratuitous evil is very little evidence against theism.

Monday, September 18, 2023

Hiddenness and evil

There is some discussion in the literature about whether the problem of hiddenness is a species of the problem of evil. I think the theist should say that it is, and can even identify the type of evil it is. There are some important propositions which it is normal for a human being to know, or at least to believe, and ignorance of which is constitutive of not having a flourishing life. Examples include not realizing that one’s fellows are persons, not realizing that one is a person, not possessing basic moral truths, etc. If God in fact exists, then the proposition that God exists falls in the same category of propositions it is normal for humans to believe, and without believing which we cannot flourish.

A corollary of this is that if we can find a good theodicy for other cases of ignorance of truths needed for a flourishing human life, then we have hope that that theodicy would apply to ignorance of the existence of God.

Wednesday, September 13, 2023

Ontology and duck typing

Some computer languages (notably Python) favor duck-typing: instead of relying on checking whether an object officially falls under a type like duck, one checks whether it quacks, i.e., whether it has the capabilities of a duck object. You can have a dog object that behaves like a vector, and a vector object that behaves like a dog.
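
For readers unfamiliar with the idea, here is a minimal Python sketch; the Dog class and the double function are of course made up for illustration:

    class Dog:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def bark(self):
            return "Woof!"

        # The dog also "quacks like a vector": it supports addition and scaling.
        def __add__(self, other):
            return Dog(self.x + other.x, self.y + other.y)

        def __mul__(self, scalar):
            return Dog(self.x * scalar, self.y * scalar)

    def double(v):
        # No isinstance check: anything that can be multiplied by 2 counts as a vector here.
        return v * 2

    print(double(Dog(1, 2)).x)  # prints 2: the dog is happily treated as a vector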

It would be useful to explore how well one could develop an ontology based on duck-typing rather than on categories. For instance, instead of some kind of categorical distinction between particulars and universals, one simply distinguishes between objects that have the capability to instantiate and objects that have the capability to be instantiated, without any prior insistence that if you can be instantiated, then you are abstract, non-spatiotemporal, etc. Now it may turn out that either contingently or necessarily none of the things that are spatiotemporal can be instantiated, but on the paradigm I am suggesting, the explanation of this would not lie in a categorical difference between spatiotemporal entities and entities that have the capability of being instantiated. It may lie in some incompatibility between the capabilities of being instantiated and occupying spacetime (though it’s hard to see what that incompatibility would be) or it may just be a contingent fact that no object has both capabilities.

As a theist, I think there is a limit to the duck typing. There will, at least, need to be a categorical difference between God and creature. But what if that’s the only categorical difference?

Tuesday, September 12, 2023

On two problems for non-Humean accounts of laws

There are three main views of laws:

  • Humeanism: Laws are a summing up of the most important patterns in the arrangement of things in spacetime.

  • Nomism: Laws are necessary relations between universals.

  • Powerism: Laws are grounded in the essential powers of things.

The deficiencies of Humeanism are well known. There are also deficiencies in nomism and powerism, and I want to focus on two.

The first is that they counterintuitively imply that laws are metaphysically necessary. This is well-known.

The second is perhaps less well-known. Nomism and powerism work great for fundamental laws, and for those non-fundamental laws that are logical deductions from the fundamental laws. But there is a category of non-fundamental laws, which I will call impure laws, which are not derivable solely from the fundamental laws, but from the fundamental laws conjoined with certain facts about the arrangement of things in spacetime.

The most notorious of the impure laws is the second law of thermodynamics, that entropy tends to increase. To derive this from the fundamental laws, we need to add some fact about the initial conditions, such as that they have a low entropy. The nomic relations between universals and the essential powers of things do not yield the second law of thermodynamics unless they are combined with facts about which universals are instantiated or which things with which essential powers exist.

A less obvious example of an impure law seems to be conservation of energy. The necessary relations between universals will tell us that in interactions between things with precisely such-and-such universals energy is conserved. And it might well be that the physical things in our world only have these kinds of energy-conserving universals. But things whose universals don’t conserve energy are surely metaphysically possible, and the fact that such things don’t exist is a contingent fact, not grounded in the necessary relations between universals. Similarly, substances with causal powers that do not conserve energy are metaphysically possible, and the non-existence of such things is at best a contingent fact. Thus, to derive the law of conservation of energy, we need not only the fundamental laws grounded in relations between universals or essential powers, but we also need the contingent fact that conservation-violators don’t exist.

Finally, the special sciences (geology, biology, etc.) are surely full of impure laws. Some of them perhaps even merely local ones.

One might bite the bullet and say that the impure laws are not laws at all. But that makes the nomist and powerist accounts inadequate to how “law” gets used in science.

The Humean stands in a different position. If they can account for fundamental laws, impure laws are easy, since the additional grounding is precisely a function of patterns of arrangement. The Humean’s difficulty is with the fundamental laws.

There is a solution, and this is for the nomist and powerist to say that “law of nature” is spoken in many ways, analogically. The primary sense is the fundamental laws that the theories nicely account for. But there are also non-fundamental laws. The pure ones are logical consequences of the fundamental laws, and the impure ones are particularly important consequences of the fundamental laws conjoined with important patterns of things in nature. In other words, impure laws are to be accounted for by a hybrid of the non-Humean theory and the Humean theory.

Now let’s come back to the other difficulty: the necessity worry. I submit that our intuitions about the contingency of laws of nature are much stronger in the case of impure laws than fundamental laws or pure non-fundamental laws. It is not much of a bullet to bite to say that matching charges metaphysically cannot attract—it is quite plausible that this is explained by the very nature of charge. It is the impure laws where contingency is most obvious: it is metaphysically possible for entropy to decrease (funnily enough, many Humeans deny this, because they define the direction of time in terms of the increase of entropy), and it is metaphysically possible for energy conservation to be violated. But on our hybrid account, the contingency of impure laws is accounted for by the Humean element in them.

Of course, we have to check whether the objections to Humeanism apply to the hybrid theory. Perhaps the most powerful objection to a Humean account of laws is that it only sums up and does not explain. But the hybrid theory can explain, because it doesn’t just sum up—it also cites some fundamental laws. Moreover, the patterns that need to be added to get the impure laws may just be initial conditions, such as that the initial entropy is low or that no conservation-violators come into existence. But fundamental law plus initial conditions is a perfectly respectable form of explanation.

Ontology as a contingent science

Consider major dividing lines in ontology, such as between trope theory and Platonism. Assume theism. Then all possibilities for everything other than God are grounded in God.

If God is ontologically like us, and in particular not simple, then it is reasonable to think that the correct ontological theory is necessarily determined by God’s nature. For instance, if God has tropes, then necessarily trope theory holds for creatures. If God participates in distinct Platonic forms like Divinity and Wisdom, then necessarily Platonism holds for creatures.

But the orthodox view (at least in Christianity and Judaism) is that God is absolutely simple, and predication works for God very differently from how it works for us. In light of this, why should we think that God had to create a tropist world rather than a Platonic one, or a Platonic one rather than a tropist one? Neither seems more or less suited to being created by God. It seems natural, in light of the radical difference between God and creatures, to think that God could create either kind of world.

If so, then many ontological questions seem to become contingent. And that’s surprising and counterintuitive.

Well, maybe. But I think there is still a way—perhaps not fully satisfactory—of bringing some of these questions back to the realm of necessity. Our language is tied to our reality. Suppose that we live in a tropist world. It seems that the correct account of predication is then a tropist one: A creature is wise if and only if it has a wisdom trope. A Platonic world has no wisdom tropes, and hence no wise creatures. Indeed, nothing can be predicated of any creature in it. What might be going on in the Platonic world is that there are things there that are structurally analogous to wise things, or to predication. We can now understand our words “wise” and “predicated” narrowly, in the way they apply to creatures in our world, or we can understand them broadly as including anything structurally analogous to these meanings. If we understand them narrowly, then it is correct to say that “Nothing in the Platonist world is wise” and “Nothing is correctly predicated of anything in the Platonist world.” But in the wide, analogical sense, there are wise things and there is predication in the Platonist world. Note, too, that even in our world it is correct to say “God is wise” and “Something is correctly predicated of God” only in the wide senses of the terms.

On this account, necessity returns to ontology—when we understand things narrowly. But the pretensions of ontology should be chastened by realizing that God could have made a radically different world.

And maybe there is an advantage to this contingentism. Our reasoning in ontology is always somewhat driven by principles of parsimony. But while one can understand why parsimony is appropriately pursued in study of the contingent—for God can be expected to create the contingent parsimoniously, both for aesthetic reasons and to fit reality to our understanding—I have always been mystified why it is appropriately pursued in the study of the necessary. But if ontology is largely a matter of divine creative choice, then parsimony is to be sought in ontological theories just as in physical ones, and with the same theological justification.

The above sounds plausible. But I have a hard time believing in ontology as a contingent science.

Thursday, September 7, 2023

Reverse valve masking

I was exposed to Covid recently, so by University rules I need to mask for a while. I don't particularly love masking at the gym, but I found a nice solution. My go-to mask for physical activity during the pandemic was the Trend Air Stealth N100 respirator, with the valve replaced by a 3D-printed blocker. But now I don't need to protect myself, just others. So I simply put the valve back in, but in reverse, so I get clear air intake but my exhalations go through the N100 filters. The respirator was already pretty breathable, but now it's even better, though it still looks super-weird and I need to remember not to use the respirator with this modification when I actually need protection, e.g., when doing woodworking.



Wednesday, September 6, 2023

On the plurality of bestnesses

According to the best-systems account of laws (BSA), the fundamental laws of nature are the axioms of the true system that optimally balances informativeness and brevity in a perfectly natural language (i.e., a language that cuts reality perfectly at the joints). There are some complications in probabilistic cases, but those will only make my argument below more compelling.

Here is the issue I want to think about: There are many reasonable ways of defining the “balance of informativeness and brevity”.

First, in the case of theories that rule out all but a finite number of worlds, we can say that a theory is more informative if it is compatible with fewer worlds. In such a case, there may be some natural information-theoretic way of measuring informativeness. But in fact, we do not expect the laws of nature to rule out all but a finite number of worlds. We expect them to be compatible with an infinite number of worlds.

Perhaps, though, we get lucky and the laws place restrictions on the determinables in such a way that provides for a natural state space. Then we can try to measure what proportion of that state space is compatible with the laws. This is going to be technically quite difficult. The state space may well turn out to be unbounded and/or infinite dimensional, without a natural volume measure. But even if there is a natural volume measure, it is quite likely that the restrictions placed by the laws make the permitted subset of the state space have zero volume (e.g., if the state space includes inertial and gravitational mass, then laws that say that inertial mass equals gravitational mass will reduce the number of dimensions of the state space, and the reduced space is apt to have zero volume relative to the full space). So we need some way of comparing subsets with zero volume. And mathematically there are many, many tools for this.
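
To make the zero-volume worry concrete, here is a toy example of my own, not drawn from any particular physical theory:

    In the two-dimensional state space of pairs (m_i, m_g) of inertial and gravitational mass, the law m_i = m_g permits only the diagonal line, and the far more restrictive law m_i = m_g = 17 kg permits a single point. Both permitted sets have two-dimensional Lebesgue volume 0, so "proportion of the state space permitted" cannot distinguish them. Something finer is needed—dimension (1 versus 0) or one-dimensional Hausdorff measure (infinite versus 0) would each do it—and there are many mathematically respectable candidates.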

Second, brevity is always measured relative to a language. And while the requirement that the language be perfectly natural, i.e., that it cut nature at the joints, rules out some languages, there will be a lot of options remaining. Minimally, we will have a choice point about how to indicate grouping (Polish notation, dot notation, parentheses, and a slew of other options we haven’t thought of yet), and we will have choice points about the primitive logical operators.

Finally, we have a lot of freedom in how we combine the informativeness and brevity measures. This is especially true since it is unlikely that the informativeness measure is a simple numerical measure, given the zero-volume issue.

We could suppose that there is some objective fact, unknowable to humans, as to what is the right way to define the informativeness and brevity balance, a fact that yields the truth about the laws of nature. This seems implausible. Absent such a fact, what the laws are will be relative to the choice of informativeness and brevity measure ρ. We might have gotten lucky, and in our world all the measures yield the same laws, but we have little reason to hope for that, and even if this is correct, that’s just our world.

Thus, the story implies that for any reasonable informativeness and brevity measure ρ, we have a concept of a law_ρ. This in itself sounds a bit wrong. It makes the concept of a law not sound objective enough. Moreover, corresponding to each reasonable choice of ρ, it seems we will have a potentially different way to give a scientific explanation, and so the objectivity of scientific explanations is also endangered.
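
As a toy illustration of the relativity (the candidate systems S1 and S2, their scores, and the weighting scheme are all made up for the sketch), different ways of combining informativeness and brevity can crown different best systems:

    # Toy candidate axiom systems with made-up informativeness and brevity scores.
    candidates = {
        "S1": {"informativeness": 0.90, "brevity": 0.40},
        "S2": {"informativeness": 0.60, "brevity": 0.95},
    }

    def best_system(weight):
        """Pick the system maximizing weight*informativeness + (1-weight)*brevity."""
        return max(candidates,
                   key=lambda s: weight * candidates[s]["informativeness"]
                                 + (1 - weight) * candidates[s]["brevity"])

    print(best_system(0.8))  # S1 wins under an informativeness-heavy weighting
    print(best_system(0.3))  # S2 wins under a brevity-heavy weighting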

But perhaps worst of all, what BSA had going for it was simplicity: we don’t need any fundamental law or causal concepts, just a Humean mosaic of the distribution of powerless properties. However, the above shows that there is enormous complexity in the account of laws. This is not ideological complexity, but it is great complexity nonetheless. If I am right in my preceding post that at least on probabilistic BSA the fact that something is a law actually enters into explanation, and if I am right in this post that the BSA concept of law has great complexity, then this will end up greatly complicating not just philosophy of science, but scientific explanations.

On probabilistic best-systems accounts, laws aren't propositions

According to the probabilistic best-systems account of laws (PBSA), the fundamental laws of nature are the axioms of the system that optimizes a balance of probabilistic fit to reality, informativeness, and brevity in a perfectly natural language.

But here is a tricky little thing. Probabilistic laws include statements about chances, such as that an event of a certain type E has a chance of 1/3. But on PBSA, chances are themselves defined by PBSA. What it means to say “E has a chance of 1/3” seems to be that the best system entails that E has a chance of 1/3. On its face, this is circular: chance is defined in terms of entailment of chance.

I think there may be a way out of this, but it is to make the fundamental laws be sentences that need not express propositions. Here’s the idea. The fundamental laws are sentences in a formal language (with terms having perfectly natural meanings) and an additional uninterpreted chance operator. There are a bunch of choice-points here: is the chance operator unary (unconditional) or binary (conditional)? is it a function? does it apply to formulas, sentences, event tokens, event types or propositions? For simplicity, I will suppose it’s a unary function applying to event types, even though that’s likely not the best solution in the final analysis. We now say that the laws are the sentences provable from the axioms of our best system. These sentences include the uninterpreted chance(x) function. We then say stuff like this:

  1. When a sentence that does not use the chance operator is provable from the axioms, that sentence contributes to informativeness, but when that sentence is in fact false, the fit of the whole system becomes −∞.

  2. When a sentence of the form chance(E) = p is provable from the axioms, then the closeness of the frequency of event type E to p contributes to fit (unless the fit is −∞ because of the previous rule), and the statement as such contributes to informativeness.

I have no idea how fit is to be measured when instead of being able to prove things like chance(E) = p, we can prove less precise statements like chance(E) = chance(F) or chance(E) ≥ p. Perhaps we need clauses to cover cases like that, or maybe we can hope that we don’t need to deal with this.
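
Here is a toy Python sketch of the scoring scheme in (1) and (2); the representation of provable sentences, the frequency data, and the particular fit metric are all stand-ins of my own rather than anything in the account itself:

    import math

    # Each "theorem" is either ("plain", is_true) or ("chance", event_type, p).
    def score(theorems, frequencies):
        """Return (fit, informativeness) for the theorems of a candidate system."""
        fit = 0.0
        informativeness = 0
        for t in theorems:
            informativeness += 1
            if t[0] == "plain":
                if not t[1]:          # a provable chance-free sentence that is false
                    return -math.inf, informativeness
            elif t[0] == "chance":
                _, event_type, p = t
                # Closeness of the actual frequency to the asserted chance contributes to fit.
                fit -= abs(frequencies[event_type] - p)
        return fit, informativeness

    theorems = [("plain", True), ("chance", "heads", 0.5)]
    print(score(theorems, {"heads": 0.48}))  # roughly (-0.02, 2)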

An immediate problem with this approach is that the laws are no longer propositions. We can no longer say that the laws explain, because sentences in a language that is not fully interpreted do not explain. But we can form propositions from the sentences: instead of invoking a law s as itself an explanation, we can invoke as our explanation the second order fact that s is a law, i.e., that s is provable from the axioms of the best system.

This is counterintuitive. The explanation of the evolution of amoebae should not include meta-linguistic facts about a formal language!

Friday, September 1, 2023

Where are we?

Unless something like the Bohmian interpretation or a spatial collapse theory is right, quantum mechanics gives us good reason to think that the position wavefunction of all our particles is spread across pretty much all of the observable universe. Of course, except in the close vicinity of what we pre-theoretically call “our body”, the wavefunction is incredibly tiny.

What are we to make of that for the “Where am I?” question? One move is to say that we all overlap spatially, occupying most of the observable universe. On a view like this, we better not have position do serious metaphysical or ethical work, such as individuating substances or making moral distinctions based on whether one individual (say, a fetus) is within the space occupied by another.

The other move is to say I am where the wavefunction of my particles is not small. On a view like this, my location is something that comes in degrees depending on what our cut-off for “small” is. We get to save the intuition that we don’t overlap spatially. But the cost of this is that our location is far from a fundamental thing. It is a vague concept, dependent on a cut-off. A more precise thing would be to say things like: “Here I am up to 0.99, and here I am up to 0.50.”
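
Here is a toy sketch of the cut-off picture in one dimension; the wavefunction and the thresholds are, of course, made up, and thresholding the density relative to its peak is just one way of cashing out "not small":

    import numpy as np

    # A made-up 1D position wavefunction: a Gaussian centered on what we
    # pre-theoretically call "my body", with tiny but nonzero tails elsewhere.
    x = np.linspace(-10, 10, 2001)
    density = np.exp(-x**2 / 2) ** 2   # |psi|^2, up to normalization

    def where_am_i(threshold):
        """The interval on which |psi|^2 is at least `threshold` of its peak value."""
        region = x[density >= threshold * density.max()]
        return region.min(), region.max()

    print(where_am_i(0.99))  # a narrow interval: "here I am up to 0.99"
    print(where_am_i(0.50))  # a wider interval: "here I am up to 0.50"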

Thursday, August 31, 2023

Substantivalism and locality

I find myself going back and forth between substantivalism and relationalism about spacetime. On a substantivalist theory, the points of spacetime are real.

But here is a problem. It seems essential to the concept of a point that geometric relations between points are essential to them. If two points are a certain distance apart, say, then they couldn’t be a different distance apart. But on General Relativity, where geometric properties are determined by the distribution of mass-energy in the universe, if geometric relations between points are essential to them, locality is violated. For imagine two events that are distantly spacelike separated. Then the geometric relation between the points at which the events are found depends on the distribution of mass-energy between the events. If the geometric properties are essential to the points, then influencing the mass-energy between the events will affect which points these events happen at. And that will be a non-local influence.

Perhaps we can say that only local geometric relations are essential to points. Perhaps the way to say this is that if a point x exists in worlds w1 and w2, then there is a set N of points such that every member of N exists in both worlds, and N is a neighborhood of x in both worlds, and the geometry on N is the same in both worlds.

Wednesday, August 30, 2023

An Aristotelian argument for presentism

Here is a valid argument:

  1. Matter survives substantial change.

  2. It is not possible that there exist two substances of the same species with the very same matter.

  3. If matter survives substantial change, it is possible to have two substances of the same species existing at different times with the very same matter.

  4. So, it is possible to have two substances of the same species existing at different times with the very same matter. (1,3)

  5. If presentism is not true, and it is possible to have two substances of the same species existing at different times with the very same matter, then it is possible to have two substances of the same species existing with the very same matter.

  6. So, if presentism is not true, it is possible to have two substances of the same species existing with the very same matter.

  7. So, presentism is true. (2, 6)

Let’s think about the premises. I think Aristotle is committed to (1)—it’s essential to his solution to the alleged problem of change. Claim (2) is a famous Aristotelian commitment. Claim (3) is very, very plausible—surely matter moves around in the world, and it is possible to set things up so that I have the same atoms that Henry VIII had at some point in his life. Claim (5) follows when we note that the only two plausible alternatives to presentism are eternalism and growing block, and on both views if two substances of the same species exist at different times with the very same matter, then at the later time it is true that they both exist simpliciter.

However, given that there is excellent Aristotelian reason to deny presentism, the above argument gives some reason for Aristotelians to deny (1) or (2). Or to be more radical, and just deny that there is any such thing as the “matter” of traditional Aristotelianism.

Tuesday, August 29, 2023

Matter and distinctness of substance

According to Aristotelianism, the distinctness of two items of the same species is grounded in the distinctness of their matter. This had better be initial matter, since an item might change all of its matter as it grows.

But now imagine a seed A which grows into a tree. That tree in time produces a new seed B. The following seems possible: the chunk of matter making up A moves around in the tree, and all of it ends up forming B. Thus, A and B are made of the same matter, yet they are distinct. (If one wants them to be at the same time, one can then add a bout of time-travel.)

Probably the best response, short of giving up the distinctness-matter link (which I am happy to give up myself), is to insist that a chunk of matter cannot survive substantial change. Thus, a new seed, being a new substance, must have new matter. But I worry that we now have circularity. Seed B has different matter from seed A, because seed B is a new substance, which does not allow the matter to survive. But what makes it a new substance is supposed to be the difference in matter.

Monday, August 28, 2023

Are we finite?

Here’s a valid argument with plausible premises:

  1. A finite being has finite value.

  2. Any being with finite value may be permissibly sacrificed for a sufficiently large finite benefit.

  3. It is wrong to sacrifice a human for any finite benefit.

  4. So, a human has infinite value. (2 and 3)

  5. So, a human is an infinite being. (1 and 4)

That conclusion itself is interesting. But also:

  6. Any purely material being made of a finite amount of matter is a finite being.

  7. If human beings are purely material, they are made of a finite amount of matter.

  8. So, human beings are not purely material. (5, 6 and 7)

I am not sure, all that said, whether I buy (2). I think a deontology might provide a way of denying it.

And, of course, work needs to be done to reconcile (5) with the tradition that holds that all creatures are finite, and only God is infinite. Off-hand, I think one would need to distinguish between senses of being “infinite”. Famously, Augustine said that the numbers are finite because they are contained in the mind of God. There is, thus, an absolute sense of the infinite, where only God is infinite, and anything fully contained in the divine mind is absolutely finite. But surely there is also a sense in which there are infinitely many numbers! So there must be another sense of the infinite, and that might be a sense in which humans might be infinite.

Nor do I really know what it means to say that a human is infinite.

Lots of room for further research if one doesn’t just reject the whole line of thought.

Thursday, August 24, 2023

A sharp world

Here is one way of believing in a totally sharp world:

  1. Epistemicism: All meaningful sentences have a definite truth value, but sometimes it’s not accessible to us.

This has the implausible consequence that there is a fact of the matter whether, say, four rocks can make a heap, or about exactly how much money one needs to have to be filthy rich.

A way of escaping such consequences is:

  2. Second-level epistemicism: For any meaningful sentence s, it is definitely true that s is definitely true, or s is definitely false, or s is definitely vague.

While this allows us to save the common-sense idea that there are people who are vaguely filthy rich, it still has the somewhat implausible consequence that it is always definite whether someone is definitely filthy rich, vaguely filthy rich, or definitely not filthy rich. I think it is easier to bite the bullet here. For while we can expect our intuitions about the meaning of first-order claims like “Sally is filthy rich” to be pretty reliable, our intuitions about the meaning of claims like “It’s vague that Sally is filthy rich” are less likely to be reliable.

Still, we can do justice to the second-level vagueness intuition by going for one of these:

  3. nth level epistemicism: For any meaningful sentence s, and any sequence D_1, ..., D_(n−1) of vagueness operators (from among "vaguely", "definitely" and "definitely not"), the sentence D_1...D_(n−1)s is definitely true or definitely false.

(Say, with n = 3.)

  4. Bounded-level epistemicism: For some finite n, we have nth level epistemicism.

  5. Finite-level epistemicism: For any meaningful sentence s, there is a finite n such that for any sequence D_1, ..., D_(n−1) of vagueness operators, the sentence D_1...D_(n−1)s is definitely true or definitely false.

The difference between finite-level and bounded-level epistemicism is that the finite-level option allows the level at which vagueness disappears to vary from sentence to sentence, while on the bounded-level option, there is some level at which it always disappears.

I suspect that if we have finite-level epistemicism, then we have bounded-level epistemicism. For my feeling is that the level of vagueness of a sentence is determined by something like the maximum level of vagueness of its basic predicates and names. Since there are only finitely many basic predicates and names in our languages, if each predicate and name has a finite level of vagueness, there will be a maximal finite level of vagueness for all our basic predicates and names, and hence for all our sentences. But I am not completely confident about this hand-wavy argument.
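
Spelled out a bit, the shape of the argument (in my reconstruction) is this:

    Let t_1, ..., t_m be the basic predicates and names, with finite vagueness levels k_1, ..., k_m. If the level of a sentence s never exceeds the maximum level of the basic terms occurring in s, then every sentence has level at most N = max(k_1, ..., k_m), which is Nth level epistemicism, and hence bounded-level epistemicism.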

In any case, I find it pretty plausible that we have bounded-level epistemicism for our languages, but we can extend the level if we so wish by careful stipulation of new predicates. And bounded-level epistemicism is, I think, enough to do justice to the idea that our world is really sharp.

Wednesday, August 23, 2023

The Alpha and the Omega

For a long time I have thought that the identification of God as the Alpha and the Omega in the Book of Revelation is very Aristotelian: God is the efficient and final cause of all. Indeed, Revelation 22:13 explicitly glosses the phrase as he arche kai ho telos. This may initially seem an over-metaphysicalization of Scripture, but I think it is a very Scriptural idea that particular aspects of God’s involvement in the world—our being comforted (in a way) that God is the arche and the telos of the upheavals in the Book of Revelation—are mirrors of God’s innate nature.

Tuesday, August 22, 2023

Substances and their existences

I used to think that:

  1. x is a substance just in case <x exists> is not grounded in any fact about any entity other than x.

But it is plausible that a finite creature’s existing is its being created. And that Alice is created seems to be grounded in God creating Alice, which seems to be a fact about God.

There is a nice response to this worry. On standard medieval views, creation is a one-way relation. When God creates Alice, there is a relation of being-created-by-God in Alice but no relation of creating-Alice in God. We can say, then, that in an important sense the fact that God creates Alice is not a fact about God, but about Alice, where we say that a fact is about x provided the fact is in part a fact of x’s existing or there being some property or relation in x.

It’s interesting that the very plausible account (1) of substance combined with a theistically plausible view that the esse of a finite thing is its being-created yields the rather abstruse one-way relation thesis.

This line of thought does not, however, fit well with the claim that I made in The Principle of Sufficient Reason that for a caused entity, its esse is its being caused. For Alice is also caused by her parents. And while divine causation may be a one-way relation, it seems unlikely that creaturely causation is.

There are three ways out of this worry. (i) We could say that creaturely causation is also a one-way relation. (ii) We could say that I was slightly wrong, and for a caused entity, its esse is its being primarily caused, i.e., caused by God. (iii) We could modify (1) to:

  2. x is a substance just in case <x exists> has a grounding in a fact that is neither about any entity other than x nor grounded in a fact about any entity other than x.

For we can then say that while Alice’s being-caused is grounded in her parents’ activity, which is a fact about her parents, it is also grounded in God’s causing Alice, which is not a fact about God in the sense of being grounded in a relation or property of God’s.

I like both (ii) and (iii). What is especially attractive about (ii) is that if the esse of Alice is her being caused, then the esse of Alice is highly disjunctive, being multiply grounded—in God’s causing Alice, in her parents causing her, in her parents’ gametes causing her, maybe even in her grandparents’ causing her, etc. But it doesn’t seem right to say that Alice’s esse is highly disjunctive. So a focus on primary causation seems attractive. And I think—but without a careful examination—that the arguments in Principle of Sufficient Reason work with that modification still.

Monday, August 21, 2023

Two looping trolley scenarios

As part of an argument against the Principle of Double Effect, Thomson argued that if one thinks that it is permissible to redirect a trolley that is heading towards a branch with five people (“Branch” in my diagram) so it heads on a branch towards one, then this redirection remains permissible if one adds a looping track to the right branch that comes back to the left branch, as long as the one person on the right branch is large enough to stop the trolley from hitting the five.

But, Thomson insists, in the looping case the trolley’s hitting the large person on the right branch is a means to the five being saved, and so the defender of Double Effect cannot hold that there is something especially bad about intentionally harming someone.

Subsequently, it’s been noted, implicitly or explicitly (Liao et al.), that there is an ambiguity in Thomson’s story. On one version, “SymLoop” in my diagram, the track becomes symmetric, so that just as the one would block the trolley from hitting the five if the trolley went to the right branch, the five would block the trolley from hitting the one if the trolley stayed on the left branch. On the other hand, in AsymLoop, the left branch continues on, and if the trolley were to go on the left branch without the five being there, inertia would carry it harmlessly forward and away from everyone concerned.

While talking about looping trolley cases with Harrison Lee, it occurred to me that there is a not implausible view on which:

  1. Redirection in Branch is permissible.

  2. Redirection in SymLoop is permissible.

  3. Redirection in AsymLoop is impermissible.

Here is why. In Branch, we have the standard Double Effect considerations, which I won’t rehearse.

Now, redirecting in AsymLoop is morally the same as a case where a trolley is heading down a straight unbranching path towards five people, and you grab a random large bystander and push them in front of the trolley to save the five (call this “Bystander Push”). For in both AsymLoop and Bystander Push, you are interposing a bystander between the trolley and the five. The only difference is the mechanics of who or what is moved (and motion is relative anyway). And most non-utilitarians agree that pushing the bystander in front of the trolley is wrong.

However, SymLoop is a bit different. Here we have six people towards whom a dangerous trolley is heading, and we try to rearrange the six people in danger in such a way that as few of them die as possible. What is analogous to SymLoop is not Bystander Push, but a case where the trolley is heading down a single straight path in a narrow tunnel (so narrow that stepping off the track won’t save one), on which there are five small people just in front of one large person, and we rearrange the people so that the large person is in front of the small ones. Call this Reorder Push.

I think there is good reason to think Reorder Push is permissible. We have a group in danger. By chance, the status quo is that the five small people are protecting the large person. But is that fair? They are smaller in body, but no smaller in dignity. If they were all the same size, so that no matter what order they were in, the same number would die, it would be fair to roll dice to figure out the order—or to just count the status quo as “the dice having already been rolled”. But when they are not the same size, there is a naturally preferred arrangement of the people in danger: the large one first, and then the small ones. (For a variant case, suppose the six people are all standing in a line in the tunnel perpendicular to the track, so that when the trolley comes, they all will die. It would be perfectly reasonable for the five small ones to move behind the one large one, and utterly unreasonable for the large one to move behind the five small ones—the large person shouldn’t get defended at the expense of five.)

If Reorder Push is permissible, so is redirection in SymLoop. In both cases, the trolley is heading towards six, and we are just rearranging.

Now, it may seem that the reasoning behind Reorder Push should be rejected by a non-consequentialist. But I don’t think so. Prior to learning of Thomson’s Loop case (and hence not in order to generate a response to Loop), I wrote a paper on Double Effect where, using an idea of Murphy’s, I defend a distinction between accomplishing someone’s death and accomplishing someone’s being in lethal danger. On the view I defend, it’s always wrong to accomplish someone’s death, at least under such conditions as juridical innocence, but accomplishing someone’s being endangered, even lethally, is not always wrong. In particular, it’s not always wrong when the person consents to it, or when one has appropriate authority over the person. Thus, just as it is permissible to jump on a grenade to save comrades, it is permissible to push someone on a grenade with their consent (suppose that the hero is unable to jump themselves, and the person pushing the hero is unable to reach the grenade with their own body), and it may be permissible for an officer to push a non-consenting soldier onto the grenade.

Now, the trolley case is not a case of intentional killing but of intentionally setting up a situation that in fact has lethal danger in it. One does not intend the death of the one in redirecting the trolley, but instead one intends the absorption of kinetic energy—which absorption happens to be a lethal danger to the absorber. This is not absolutely morally forbidden, but is only forbidden in some cases. In particular, it is not forbidden in cases of consent. That’s why pushing a random bystander is wrong, but it is not wrong to push a volunteer who is otherwise unable to move. In the same way, redirection in either SymLoop or AsymLoop would be permissible with the consent of the large person on the right track. But as the case is normally set up, you don’t have this consent.

Now, without the consent of the large person, AsymLoop and SymLoop come apart, as do Bystander and Reorder Push. Grabbing someone towards whom the trolley is not heading, and putting them in front of the trolley, whether by pushing (Bystander Push) or by moving the trolley (AsymLoop), is a wrongful case of accomplishing their lethal endangerment. But when that person happens to be in the lucky status quo where they are in the path of the trolley, but are being protected by the bodies of the five, they ought to refuse that costly protection. They ought in justice to consent to reordering or redirection. Now, in some cases, actual consent and obligation-in-justice to consent have different moral effects (e.g., in sexual cases the difference is very significant), but in other cases they may have similar moral effects. It is quite reasonable to say that in endangerment cases, actual consent and obligation to consent have similar moral effects. (One hint of this is that endangerment cases are ones where authority can have an effect akin to consent; sexual cases, for instance, are not like that—authority does nothing in the absence of consent there.) Thus, even without consent, redirection in SymLoop is permissible—but not so in AsymLoop.

Final remark: I wonder if it matters whether it is justice or something else that requires the consent in these kinds of cases. Intuition: One has a moral duty to jump in front of a trolley that is heading towards a hundred (but maybe not towards five) people. If so, and if it doesn’t matter whether the obligation is in justice or in some other way (say, charity), then once enough lives come to be at stake, redirection in AsymLoop and pushing the non-consenting bystander become permissible. But if the obligation has to be one of justice, then one might hold that the redirection and pushing remain wrong even when there are more lives at stake.

Acknowledgment: The thinking here is greatly influenced by arguments from Harrison Lee about volunteering in loop trolley cases, but the conclusions differ.

Full professor position at Baylor Philosophy Department

We have an open area full professor position in our Department. If you qualify, I encourage you to apply. If you know someone who qualifies, I encourage you to encourage them to apply. Email me if you need more information or encouragement.

Waco is a lovely place. Here is a bittern at sunset last week. The spot is an easy one-mile bike-trail ride along the river from campus.


Sony A7RII with Retina Xenon Schneider-Kreuznach F/1.9 50mm lens (wide-open, cropped).

Friday, August 18, 2023

Taking, not stealing

Aquinas says that when a starving person takes food needed for survival from someone who has too much, the act is a case of taking but not stealing. Aquinas’ reasoning is that property rights subserve survival, and in case of conflict the property rights cease, and the food ceases to be the property of the one who has too much, and so it is not theft for the poor to take it.

I think what is going on may be a bit more subtle than that. Suppose Alice and Bob both have too much and Carl is starving. Both Alice and Bob refuse to give their surplus to feed Carl. According to Aquinas’ analysis, both Alice and Bob lose their ownership.

But now suppose that shortly after Alice and Bob’s wrongful refusal, Carl suddenly wins the lottery. It does not seem right to say that Carl can now take Alice and Bob’s surplus. Yet if Alice and Bob lost their ownership upon refusal to feed Carl, then either the surplus now belongs to Carl or it belongs to nobody, and in either case it wouldn’t be stealing from Alice and Bob for Carl to take it. Similarly, if after Carl’s lottery win Alice were to take Bob’s surplus food, Alice would intuitively be stealing from Bob, which again does not fit the claim that Bob has lost his ownership.

We could say that Alice and Bob regain their property when Carl wins the lottery, but it is strange to think that something that belongs to nobody or to Carl suddenly becomes Alice’s, despite Alice having no deep need of it, just because Carl won the lottery.

Here is a different kind of case that I think may shed some light on the matter. As before, suppose Alice has a surplus. Suppose Eva the mobster has informed David that if David doesn’t take Alice’s surplus, then Eva will murder Alice. Any reasonable person in Alice’s place would agree to having her surplus taken by David, but Alice is not a reasonable person. David nonetheless takes Alice’s surplus, thereby saving her life.

I think David acts rightly, precisely because, as Aquinas thinks, one needs to resolve a conflict between property and life in favor of life. But I don’t think we can analyze this case using Aquinas’ loss-of-ownership account. For if David takes Alice’s stuff, then Eva, who made David do it, is a thief (by proxy). But if under the circumstances Alice loses her ownership, then Eva is not a thief. I think the right thing to say is that Alice retains her ownership, but it is not wrong for David to take her stuff in order to save her life.

What should we say, then? Is David a thief, but a rightly acting thief? That is indeed one option. But I prefer this one. When you own something, that gives you a set of rights over it and against others. I suggest that these rights do not include an unconditional right not to be deprived of the use of the item. Specifically, there is no right not to be deprived of the use of the item when deprivation of use is the only way for someone’s life to be saved. This applies both in Aquinas’s case of starvation and in my mobster case. It is not an infringement on Alice’s ownership over her surplus when Carl takes her stuff to survive or when David takes her stuff in order to save Alice’s life. But when Carl’s need terminates, he does not get to then take Alice’s stuff, as if Alice had lost ownership, and the mastermind behind David’s taking the stuff, who unlike David isn’t acting to save Alice’s life, is a thief.

In fact, if we think about it, it becomes obvious that there is no unconditional right not to be deprived of the use of an owned item. Suppose I have my car on a plot of land that I own, and I foolishly sell you all the land surrounding the small rectangle that the car is physically on top of. By buying the land, you deprive me of the use of my car, barring your good will—I cannot drive the car off the rectangle without trespass. But you don’t steal my car by thus depriving me of its use.

Thus, neither Carl (when in need) nor David is stealing, even though both take something owned by someone else.

Aquinas quotes St. Ambrose with approval: “It is the hungry man’s bread that you withhold, the naked man’s cloak that you store away, the money that you bury in the earth is the price of the poor man’s ransom and freedom.” While St. Ambrose’s sentiment is very plausibly correct, on my account above it is not correct to take it literally. When Alice wrongfully withholds her surplus from starving Carl, the surplus is not literally owned by Carl. It is still owned by Alice, who has a duty to pass ownership to Carl, and Carl in turn is permitted to use Alice’s surplus—but it remains Alice’s, even if wrongfully so.

Indeed, here is an argument against the hypothesis that ownership literally passes to the needy. Suppose Carl is starving, and Alice and Bob refuse their surplus. Now, shortly after Carl comes to be starving, so does Fred. On an account on which ownership literally passes to the needy, Alice’s and Bob’s surplus belongs to Carl, and if Carl comes to claim it and at the same time so does Fred, Carl gets to defend that surplus, violently if necessary, from Fred, as long as that surplus is all needed for Carl’s survival. But it seems plausible that as long as Carl’s and Fred’s need is now equal, they have equal rights, even if Carl came to be needy slightly earlier. Furthermore, suppose Alice has ten loaves of bread and Carl needs one to survive. Which loaf of bread becomes Carl’s possession? Surely not all of them, and surely no specific one. It seems better to say: while Carl is in dire need, Alice has no right to withhold her surplus from him. As soon as Carl and any other needy person have taken enough not to be in dire need, Alice may defend the rest of her surplus.

Thursday, August 17, 2023

Tiebreakers

You need to lay off Alice or Bob, or else the company goes broke. For private reasons, you dislike Bob and want to see him suffer. What should you do?

The obvious answer is: choose randomly.

But suppose that there is no way to choose randomly. For instance, perhaps an annoying oracle has told you the outcome of any process that you could have used to make a random decision. The oracle says “If you flip the penny in your pocket, it will come up heads”, and now deciding that Alice is laid off on heads is tantamount to deciding that Alice is laid off.

So what should you do?

There seems to be something rationally and maybe morally perverse in one’s treatment of Alice if one fires her to avoid firing the person that one wants to fire.

But it seems that if one fires Bob, one does so in order to see him suffer, and that’s wrong.

I have two solutions, not mutually exclusive.

The first is that various rules of morality and rationality only make sense in certain normal conditions. Typical rules of rationality simply break down if one is in the unhappy circumstance of knowing that one’s ability to reason rationally is so severely impaired that there is no correlation between what seems rational and what is rational. Similarly, if one is brainwashed into having to kill someone, but is left with the freedom to choose the means, then one may end up virtuously beheading an innocent person if beheading is less painful than any other method of murder available, because the moral rules against murder presuppose that one has freedom of will. It could be that some of our moral rules also presuppose an ability to engage in random processes, and when that ability is missing, then the rules are no longer applicable. And since circumstances where random choices are possible are so normal, our moral intuitions are closely tied to these circumstances, and hence no answer to the question of what is the right thing to do is counterintuitive.

The second is that there is a special kind of reason, a tie-breaker reason. When one fires Bob with the fact that one wants to see him suffer serving merely as a tie-breaker, one is not intending to see him suffer. Perhaps what one is intending, instead, is a conditional: if one of Alice and Bob suffers, it’s Bob.

Wednesday, August 9, 2023

Playing with photo developing

My ten-year-old and I developed a roll of 35mm film that I shot over the past year in my grandfather's Voigtlander Vito I camera. I've never developed film before.


2022 Heart of Texas Fair. Fomapan 200.

Monday, August 7, 2023

A deterministic collapsing local quantum mechanics without hidden variables beyond the wavefunction

I will give a really, really wacky version of quantum mechanics as a proof of concept that if one wants, one can have all of the following:

  1. Compatibility with experiment

  2. Determinism

  3. Collapse

  4. No “hidden variables” beyond the wavefunction: the wavefunction encompasses all the information about the world

  5. Locality

  6. Schroedinger evolution between collapses.

Here’s the idea. We suppose that the Hilbert space for quantum mechanics is separable (i.e., has a countable basis). A separable Hilbert space has continuum-many vectors, so each quantum state vector can be encoded as a single real number. We suppose, further, that collapse occurs countably many times over the history of the universe. We can now encode all the times and outcomes of the collapses over the history of the universe as a single real number: the outcome of a collapse is a quantum state vector, encodable as a real number, the time of collapse is of course a real number, and a countable sequence of pairs of real numbers can be encoded as a single real number.
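To make the encoding step concrete, here is a toy, finite-precision sketch in Python (the function names are mine, and a genuine construction needs infinite precision and some care about trailing 9s): it interleaves the decimal digits of two numbers in [0, 1) so that both can be recovered from the single resulting number, which is the standard trick behind packing pairs, and by iteration countable sequences, of reals into one real.

```python
# Toy, fixed-precision illustration of digit interleaving.  Numbers are decimal
# strings in [0, 1); the real construction needs infinitely many digits.

def encode_pair(a: str, b: str, digits: int = 8) -> str:
    """Interleave the first `digits` decimal digits of a and b (both '0.xxx...')."""
    da = (a.split(".")[1] + "0" * digits)[:digits]
    db = (b.split(".")[1] + "0" * digits)[:digits]
    return "0." + "".join(x + y for x, y in zip(da, db))

def decode_pair(c: str, digits: int = 8) -> tuple[str, str]:
    """Recover the two encoded numbers from the interleaved digits of c."""
    dc = c.split(".")[1]
    return "0." + dc[0::2][:digits], "0." + dc[1::2][:digits]

if __name__ == "__main__":
    code = encode_pair("0.12345678", "0.87654321")
    print(code)               # 0.1827364554637281
    print(decode_pair(code))  # ('0.12345678', '0.87654321')
```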

We now consider the wavefunction ψ of the universe. For simplicity, consider this as a function on R^(3n) × R, where n is the number of particles (if the number of particles changes over time, we will need to tweak this). Say that x ∈ R^(3n) is rational provided that every coordinate of it is a rational number. We now add a new law of nature: ψ(x,t) has the same value for every rational x and every time t, which value encodes the history of all the collapses that ever happen in the history of the universe.

Since standard quantum mechanics does not care about what happens to the wavefunction on sets of measure zero, and the set of rational points of R^(3n) has measure zero, this does not affect Schroedinger evolution between collapses, and so we have 6. We also clearly have 2, 3 and 4. If we suppose a prior probability distribution on the collapses that fits with the Born rule, we get 1. We also have 5, since any open region of space that contains an experiment will also contain rational points, at which the value of the wavefunction encodes the collapse history.

Of course, this is rather nutty. It just shows that because the wavefunction has more room for information than just the quantum state vector—the quantum state vector can be thought of as an equivalence class of wavefunctions differing on sets of measure zero—we can stuff the hidden variables into the wavefunction. Those of us who think that the state vector is the real thing, not the wavefunction, will be quite unimpressed.

Monday, July 31, 2023

Values of disagreement

We live in a deeply epistemically divided society, with lots of different views, including on some of the most important things.

Say that two people disagree significantly on a proposition if one believes it and one disbelieves it. The deep epistemic division in society includes significant disagreement on many important propositions. But whenever two people significantly disagree on a proposition, one of them is wrong. Being wrong about an important proposition is a very bad thing. So the deep division implies some very bad stuff.

Nonetheless, I’ve been thinking that our deep social disagreement leads to some important advantages as well. Here are three that come to mind:

  1. If two people significantly disagree on a proposition, then by bivalence, one of them is right. There is a value in someone getting a matter right, rather than everyone getting it wrong or suspending judgment.

  2. Given our deep-seated psychological desire to convince others that we’re right, if others disagree with us, we will continue seeking evidence in order to convince them. Thus disagreement keeps us investigating, which is beneficial whether or not we are right. If everyone agreed with us, we would be apt to stop investigating, which would either get us stuck with a falsehood or at least likely leave us with less evidence of the truth than is available. Moreover, continued investigation is apt to refine our theory, even if the theory was already basically right.

  3. To avoid getting stuck in local maxima in our search for the best theory, it is good if people are searching in very different areas of epistemic space. Disagreement helps make that happen.

Wednesday, July 26, 2023

Committee credences

Suppose the members of a committee individually assign credences or probabilities to a bunch of propositions—maybe propositions about climate change or about whether a particular individual is guilty or innocent of some alleged crimes. What should we take to be “the committee’s credences” on the matter?

Here is one way to think about this. There is a scoring rule s, appropriate to the epistemic matter at hand, that measures the closeness of a probability assignment to the truth. The scoring rule is strictly proper (i.e., such that by an individual’s own lights their expected score is uniquely maximized by sticking with their own probabilities, so that they are never permitted to switch probabilities without evidence). The committee can then be imagined to go through all the infinitely many possible probability assignments q, and for each one, member i calculates the expected value of the score s(q) by the lights of the member’s own probability assignment pi.

We now need a voting procedure between the assignments q. Here is one suggestion: calculate a “committee score estimate” for q in the most straightforward way possible—namely, by adding the individuals’ expected scores, and choose an assignment that maximizes the committee score estimate.

It’s easy to prove that given that the common scoring rule is strictly proper, the probability assignment that wins out in this procedure is precisely the average (p1 + ... + pn)/n of the individuals’ probability assignments. So it is natural to think of “the committee’s credence” as the average of the members’ credences, if the above notional procedure is natural, which it seems to be.
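Here is a quick numerical check of that claim, a minimal sketch using the (strictly proper) logarithmic score for a single binary proposition; the member credences and the grid search are just illustrative choices of mine.

```python
import numpy as np

# Each member's credence in a single binary proposition (illustrative numbers).
p = np.array([0.2, 0.5, 0.9])

def log_score(q: float, truth: bool) -> float:
    """Logarithmic score for announcing probability q when the proposition has truth value `truth`."""
    return float(np.log(q) if truth else np.log(1.0 - q))

def committee_estimate(q: float) -> float:
    """Sum over members of member i's expected score of q, by member i's own lights."""
    return sum(pi * log_score(q, True) + (1 - pi) * log_score(q, False) for pi in p)

# Grid-search the candidate assignments q and pick the one with the best committee estimate.
grid = np.linspace(0.001, 0.999, 999)
best_q = grid[np.argmax([committee_estimate(q) for q in grid])]

print(best_q)    # ~0.533: the grid point closest to the average credence
print(p.mean())  # 0.5333..., the average of the members' credences
```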

But is the above notional voting procedure the right one? I don’t really know. But here are some thoughts.

First, there is a limitation in the above setup: we assumed that each committee member had the same strictly proper scoring rule. But in practice, people don’t. People differ with regard to how important they regard getting different propositions right. I think there is a way of arguing that this doesn’t matter, however. There is a natural “committee scoring rule”: it is just the sum of the individual scoring rules. And then we ask each member i when acting as a committee member to use the committee scoring rule in their voting. Thus, each member calculates the expected committee score of q, still by their own epistemic lights, and these are added, and we maximize, and once again the average will be optimal. (This uses the fact that a sum of strictly proper scoring rules is strictly proper.)

Second, there is another way to arrive at the credence-averaging procedure. Presumably most of the reason why we care about a committee’s credence assignments is practical rather than purely theoretical. In cases where consequentialism works, we can model this by supposing a joint committee utility assignment (which might be the sum of individual utility assignments, or might be a consensus utility assignment), and we can imagine the committee to be choosing between wagers so as to maximize the agreed-on committee utility function. It seems natural to imagine doing this as follows. The committee expectations or previsions for different wagers are obtained by summing individual expectations, with each individual using the agreed-on committee utility function but their own individual credences to calculate the expectations. And then the committee chooses a wager that maximizes its prevision.

But now it’s easy to see that the above procedure yields exactly the same result as the committee maximizing committee utility calculated with respect to the average of the individuals’ credence assignments.
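Here is a matching sketch of the pragmatic side (the wagers, utilities and credences are made-up numbers): summing the members’ expected committee utilities for each wager ranks the wagers exactly as expected committee utility under the average credence does.

```python
import numpy as np

# Agreed-on committee utilities of three wagers (rows) in two states of the world (columns).
U = np.array([[10.0, -5.0],
              [ 3.0,  2.0],
              [ 0.0,  6.0]])

# Each member's credence that state 0 obtains; P is members x states.
p = np.array([0.2, 0.5, 0.9])
P = np.column_stack([p, 1 - p])

# Procedure 1: the committee prevision of each wager is the sum of the members' expectations.
summed = (P @ U.T).sum(axis=0)

# Procedure 2: expected committee utility under the average credence, scaled by group size.
averaged = len(p) * (U @ P.mean(axis=0))

print(summed)                                # identical to `averaged`
print(averaged)
print(summed.argmax() == averaged.argmax())  # True: both procedures pick the same wager
```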

So there is a rather nice coherence between the committee credences generated by our epistemic “accuracy-first” procedure and what one gets in a pragmatic approach.

But still all this depends on the plausible, but unjustified, assumption that addition is the right way to go, whether for epistemic or pragmatic utility expectations. But given this assumption, it really does seem like the committee’s credences are reasonably taken to be the average of the members’ credences.

Thursday, July 20, 2023

Rachels on doing and allowing

Rachels famously gives us these two cases to argue that the doing–allowing distinction is morally vacuous:

  1. You stand to inherit money from a young cousin, so you push them into a tub so they drown.

  2. You stand to inherit money from a young cousin, and so when you see them drowning in a tub, you don’t pull them out.

The idea is that if there is a doing–allowing distinction, then (1) should be worse than (2), but they both seem equally wicked.

But it’s interesting to notice how things change if you change the reasons from profit to personal survival:

  3. A malefactor informs you that if you don’t push the young cousin into the tub so they drown, you will be shot dead.

  4. A malefactor informs you that if your currently drowning cousin survives, you will be shot dead.

It’s clear that it’s wrong to drown your cousin to save your life. But while it’s praiseworthy to rescue them at the expense of your life, unless you have a special obligation to them beyond cousinage, you don’t do wrong by failing to pull them out. And it seems that the relevant difference between (3) and (4) is precisely that between doing and allowing: you may not execute a drowning to save your life, but you may allow one.

Or consider this variant:

  5. A malefactor informs you that if you don’t push the young cousin into the tub so they drown, two other cousins will be shot dead.

  6. A malefactor informs you that if your currently drowning cousin survives, two other cousins will be shot dead.

I take it that pretty much every non-consequentialist will agree that in (5) it’s wrong to drown your cousin, but everyone (consequentialist or not) will also say that in (6) it’s wrong to rescue your cousin.

So there is very good reason to think there is a morally relevant doing–allowing distinction, and cases similar to Rachels’ show it. At this point it is tempting to diagnose our intuitions about Rachels’ original case as based on the fact that the benefit to you is not sufficiently great to justify allowing the drowning: the cousin’s death is disproportionately bad relative to the benefit gained. So we want to blame the agent who cares more about their financial good than about the life of their young cousin, and we don’t care whether they are actively or passively killing the cousin.

But things are more complicated. Consider this pair of cases:

  7. Your recently retired cousin has left all their money to famine relief, where it will save fifty lives; but if your cousin survives another ten years, their retirement savings will be largely spent and won’t be enough to save any lives. So you push the cousin into the tub to drown them.

  8. Your recently retired cousin has left all their money to famine relief, where it will save fifty lives; but if your cousin survives another ten years, their retirement savings will be largely spent and won’t be enough to save any lives. So when your cousin is drowning in the tub, you don’t rescue them.

Now it seems we have proportionality: your cousin’s death is not disproportionately bad given the benefit. Yet I have the strong intuition that it’s wrong both to drown them in (7) and to fail to save them in (8). I can’t confidently put my finger on the relevant difference between (8), on the one hand, and (4) and (6), on the other.

But maybe it’s this. In (8), your rescue of your cousin isn’t a cause of the death of the people. The cause of their death is famine. It’s just that you have failed to prevent their death. On the other hand, in (4) and (6), if you rescue, you have caused your own death or the death of the two other cousins, admittedly by means of the malefactor’s wicked agency. In (8), rescuing blocks prevention of deaths; in (4) and (6), rescuing causes deaths. Blocking prevention is different from causing.

This is tricky, though. For drowning someone can be seen as blocking prevention of death. For their breathing prevents death and drowning blocks the breathing!

Maybe the difference lies between blocking a natural process of life-preservation (breathing, say) and blocking an artificial process of life-preservation (sending famine relief, say).

Or maybe I am mistaken about (4) and (6) being cases where rescue is not obligatory. Maybe in (4) and (6) rescue is obligatory, but it wouldn’t be if instead the malefactor told you that if you rescue, then the deadly consequences would follow. For maybe in (4) and (6), you are intending death, while in the modified cases, you are only intending non-rescue? I am somewhat sceptical.

There is thus a lot of hard stuff here. But I think there is still enough clarity to see that there is a difference between doing and allowing in some cases.

Wednesday, July 19, 2023

Video splitter python script

I'm working on submitting my climbing record to Guinness. They require video--including slow motion video!--but they have a 1 GB limit on uploads, and recommend splitting videos into 1 GB portions with a five-second overlap. I made a little python script to do this using ffmpeg. You can specify the maximum size (default: 999999999 bytes) and the aimed-at overlap (default: 6 seconds, to be on the safe side for Guinness), and it will estimate how many parts you need and split the file into approximately that many. If any of the resulting parts is too big, it will try again with more parts.
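The logic is roughly as follows; this is a simplified sketch rather than the script itself (the file naming, the mp4 assumption and the helper names are illustrative, and it assumes ffmpeg and ffprobe are on the PATH):

```python
#!/usr/bin/env python3
# Simplified sketch: split a video into parts below a maximum size, with a few
# seconds of overlap between consecutive parts, retrying with more parts if any
# part comes out too big.
import os
import subprocess
import sys

MAX_BYTES = 999_999_999   # just under 1 GB
OVERLAP = 6.0             # seconds of overlap between consecutive parts

def duration(path: str) -> float:
    """Total duration of the video in seconds, via ffprobe."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error", "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1", path])
    return float(out)

def split(path: str, parts: int) -> list[str]:
    """Cut the video into `parts` pieces, each overlapping the previous one."""
    total = duration(path)
    chunk = total / parts
    names = []
    for i in range(parts):
        start = max(0.0, i * chunk - (OVERLAP if i > 0 else 0.0))
        length = chunk + (OVERLAP if i > 0 else 0.0)
        name = f"{os.path.splitext(path)[0]}_part{i + 1}.mp4"
        subprocess.check_call([
            "ffmpeg", "-y", "-ss", str(start), "-i", path,
            "-t", str(length), "-c", "copy", name])
        names.append(name)
    return names

def main(path: str) -> None:
    # Initial estimate of the number of parts, by ceiling division of the file size.
    parts = max(1, -(-os.path.getsize(path) // MAX_BYTES))
    while True:
        names = split(path, parts)
        if all(os.path.getsize(n) <= MAX_BYTES for n in names):
            print("Wrote", names)
            return
        parts += 1  # some part was too big; try again with more parts

if __name__ == "__main__":
    main(sys.argv[1])
```

Since `-c copy` cuts at keyframes, the part boundaries and sizes come out only approximately, which is why the retry loop is there.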

Tuesday, July 18, 2023

Doing, allowing and trolleys

Consider a trolley problem where the trolley is heading for:

  • Path A with two people,

but you can redirect it to:

  • Path B with one person.

If that’s the whole story, and everyone is a stranger to you, redirection is surely permitted, and probably even required.

But add one more ingredient: the one person on Path B is you yourself. I am far from sure of this, but I suspect that you aren’t morally required to save two strangers at the expense of your life, though of course it would be praiseworthy if you did. (On the other hand, once the number on Path A is large enough, I think it becomes obligatory to save them.)

Now consider a reverse version. Suppose that the trolley is heading for Path B, where you are. Are you permitted to redirect it to Path A? I am inclined to think not.

So we have these two judgments:

  1. You aren’t obligated to redirect from two people to yourself.

  2. You aren’t permitted to redirect from yourself to two people.

This suggests that in the vicinity of the Principle of Double Effect there is an asymmetry between doing and allowing. For you are permitted to allow two people to be hit by the trolley rather than sacrifice your life, but you are not permitted to redirect the trolley from yourself to the two.

Now, you might object that the whole thing here is founded on the idea, which I am not sure of, that you are not obligated to save two strangers at the expense of your life. While I am pretty confident that you are not obligated to save one stranger at the expense of your life, with two I become unsure. If this is the sticking point, I can modify my case. Instead of having two people fully on Path A, we could suppose that there is one person fully on Path A and the other has a limb on the track. I don’t think you are obligated to sacrifice your life to save one stranger’s life and another’s limb. But it still seems wrong to redirect the trolley from yourself at the expense of a stranger and a limb. So we still have an allowing-doing asymmetry.

Another interesting question: Are you permitted to redirect a trolley heading for you in a way that kills one stranger? I am not sure.

A second indoor climbing world record (uncertified)

On July 15, 2023, I set my second (still uncertified) indoor climbing world record: fastest vertical mile (male), doing 112 climbs, at 14.4 meters each, in a total of 1 hour 42 minutes and 58 seconds (continuous time, including descents and breaks; descents do not count towards the mile). The official best time was Andrew Dahir's 1 hour 51 minutes and 37.5 seconds. I am still working on preparing all the materials for submission to Guinness.

This was a fastest-time for fixed distance (one mile) record. In December, I got a longest-distance (about a kilometer) for a fixed-time (one hour) record. The video below shows the first and last climbs at normal speed and runs the middle 110 climbs at 30X.




I am grateful to Baylor Recreation for all the encouragement I have received, and to the volunteers who made this possible (two timekeepers, two witnesses, two additional safety officers).

Here are some details:
  • I am incredibly impressed with Andrew Dahir who had set both of the records in one day! There is no way I would have the endurance for that.
  • My vertical speed was 938 meters per hour, somewhat lower than the 1014 meters per hour of my December record, but I had to keep it up for a longer time. Still, I think I was less tired this time: the lower pace compensated for the greater distance.
  • The route was a 5.6. 
  • Unlike in my previous record, an auto-belay was used.
  • I started by doing 12 climbs at a slightly higher pace than I could keep up for the full length, followed by a  minute break, followed by ten sets of ten, with about 1.5-2 minute breaks in between. 
  • I got a cramp in the upper right thigh around climb #100, and had to rely more on upper body for the remainder.
  • I had a pacing sheet with dual target times both for beating the record by about 1.5 minutes and for beating the record by about 5 minutes. I consistently stayed ahead of both.
  • I wore my comfy 5.10 lace-up Anasazi shoes (pinks).
  • Mid-way I ducked into the storage room to change into a dry T-shirt.
  • I did a lot of short practices with 1-5 climbs at maximum pace (which I wouldn't be able to keep up much longer) to get my muscle memory of all the moves.
  • I did three full-length practices starting around May. The first one was slightly slower than Dahir's time. The second was about two minutes ahead of the record, and the third about five.
  • I did one mid-length practice about a week ahead, where I unofficially beat my December one hour record.
  • To avoid mishaps with video evidence, I had five cameras pointed at the event. Guinness rules require slow motion footage to be available for one-mile events. That makes sense for a run, but is surprising for a nearly two-hour climb, and to satisfy this requirement one of the cameras was a GoPro capturing at 120fps.
Because Guinness wanted the witnesses to log the individual time of each climb, I have a nice graph of how long each ascent took. I started a little faster and slowed down towards the end. The average ascent was 36 seconds. The fastest was 26 seconds (#1) and the slowest was 50 seconds (#111).
 


Monday, July 10, 2023

Partially defined predicates

Is cutting one head off a two-headed person a case of beheading?

Examples like this are normally used as illustrations of vagueness. It’s natural to think of such cases as ones where we have a predicate defined over a domain and applied outside it. Thus, “is being beheaded” is defined over n-headed animals that are being deprived of all heads or of no heads.

I don’t like vagueness. So let’s put aside the vagueness option. What else can we say?

First, we could say that somehow there are deep facts about the language and/or the world that determine the extension of the predicate outside of the domain where we thought we had defined it. Thus, perhaps, n-headed people are beheaded when all heads are cut off, or when one head is cut off, or when the number of heads cut off is sufficient to kill. But I would rather not suppose a slew of facts about what words mean that are rather mysterious.

Second, we could say that sentences using predicates outside of their domain lack truth value. But that leads to a non-classical logic. Let’s put that aside.

I want to consider two other options. The first, and simplest, is to take the predicates to never apply outside of their domain of definition. Thus,

  1. False: Cutting one head off Dikefalos (who is two headed) is a beheading.

  2. True: Cutting one head off Dikefalos is not a beheading.

  3. False: Cutting one head off Dikefalos is a non-beheading.

  4. True: Cutting one head off Dikefalos is not a non-beheading.

(This is because non-beheading is defined over the same domain as beheading.) If a pre-scientific English-speaking people had never encountered whales, then in their language:

  5. False: Whales are fish.

  6. True: Whales are not fish.

  7. False: Whales are non-fish.

  8. True: Whales are not non-fish.

The second approach is modeled after Russell’s account of definite descriptions: a sentence using a predicate includes the claim that the predicate is being used within its domain of definition, and thus all of the eight sentences exhibited above are false.

I don’t like the Russellian way, because it is difficult to see how to extend it naturally to cases where the predicate is applied to a variable in the scope of a quantifier. On the other hand, the approach of counting applications of a predicate outside its domain as false extends very straightforwardly:

  9. False: Every marine mammal is a fish.

  10. False: Every marine mammal is a non-fish.

This leads to a “very strict and nitpicky” way of taking language. I kind of like it.
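For concreteness, here is a toy model of the “false outside the domain of definition” reading (the class and the example strings are just my own illustrative choices); it reproduces the truth values of (1)–(4) above:

```python
# Toy model of partially defined predicates, on the reading where applying a
# predicate outside its domain of definition simply yields falsehood.

class PartialPredicate:
    def __init__(self, domain: set, extension: set):
        self.domain = domain        # cases the predicate is defined over
        self.extension = extension  # cases it is true of (a subset of the domain)

    def applies(self, x) -> bool:
        # Applying the predicate outside its domain of definition yields False.
        return x in self.domain and x in self.extension

    def complement(self) -> "PartialPredicate":
        # "non-F": defined over the same domain, with the complementary extension.
        return PartialPredicate(self.domain, self.domain - self.extension)

domain = {"deprive a 1-headed animal of its head", "deprive a 1-headed animal of no head"}
beheading = PartialPredicate(domain, {"deprive a 1-headed animal of its head"})
non_beheading = beheading.complement()

case = "cut one head off two-headed Dikefalos"   # outside the domain of definition
print(beheading.applies(case))                   # False -- (1)
print(not beheading.applies(case))               # True  -- (2)
print(non_beheading.applies(case))               # False -- (3)
print(not non_beheading.applies(case))           # True  -- (4)
```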

Sunday, July 9, 2023

Open futurism and many-worlds quantum mechanics

I’ve been thinking about some odd parallels between the many-worlds interpretation of quantum mechanics and open future views.

On both sets of views, in the case of genuinely chancy future events there is strictly no fact of the matter about what will turn out. On many-worlds, the wavefunction provides a big superposition of the options, but for no one option is it true that it will eventuate. The same is true for open future views, except that what we have instead of a superposition depends on the particular temporal logic chosen.

Yet, despite there being no fact about outcomes, on both sets of views one would like to be able to make probabilistic predictions about “the outcome”. For instance, one wants to say that if one tosses an indeterministic coin, it is moderately likely that the coin will land on heads and extremely unlikely that it will land on edge. In both cases, this is highly problematic, because on both views it is certain that it is not true that the coin will land on heads. So how can something that is certainly not going to happen be more likely than another event? In both cases, there is a literature trying to answer this problem (and I am not convinced by it).

Anyway, I wonder how far we can take the parallel. The wavefunction in the many-worlds interpretation is a superposition of many options about what the present is like, and is interpreted as a plurality of worlds in which different options are true. Why not do the same in the open-future case? Why not just say that there are now many worlds, including some where the coin will land on heads, some where the coin will land on tails, and some where it will land on edge? After all, if it is reasonable to interpret the superposition this way, why is it not reasonable to interpret the temporal logic this way?

There is, however, one crucial difference. The open futurist insists that reality will collapse: that once the coin lands, there will be a fact about which way it landed. On many-worlds, there is no collapse: there is never a fact about how the coin landed. Nonetheless, this could be accommodated in a many-worlds interpretation of an open-future view: we just suppose that once the coin lands, a lot of the worlds disappear.

So what if there is a parallel? Why does it matter?

Well, here are some things that we might say.

First, in both cases, there is an underlying metaphysics (a non-classical truth assignment to future facts, or a giant superposition), and then we need to interpret that underlying metaphysics. I wonder whether the following might not be true:

  1. A many-worlds interpretation of the underlying metaphysics is reasonable in the quantum case if and only if it is reasonable in the open-future case.

Suppose (1) is true. Most people think a many-worlds interpretation of open-future is absurd. But then why isn’t the many-worlds interpretation of quantum mechanics (or, more precisely, a quantum mechanics with exceptionlessly unitary evolution and all the facts supervening on the wavefunction) also absurd?

Second, it may well be that the open-futurist finds plausible the standard criticism of the many-worlds interpretation that it does not make sense of probabilistic predictions. If so, then they should probably find probabilistic predictions on open-future views equally problematic.