Thursday, October 31, 2019

The local five minute hypothesis, the Big Bang and creation

The local five minute hypothesis is that the earth, with everything on it, and the environment five light-minutes out from it, came into existence five minutes ago.

Let’s estimate the probability of getting something like a local five minute hypothesis by placing particles at random in the observable universe. Of course, in a continuous spacetime the probability of getting exactly the arrangement we have is zero or infinitesimal. But we only need to get things right to within a margin of error of a Planck distance for all practical purposes.

The volume of the observable universe is about 10^80 cubic meters. The Planck volume is about 10^−105 cubic meters. So, getting a single particle at random to within a Planck volume of where it actually is has a probability of about 10^−185.

But, if we’re doing our back-of-envelope calculation in a non-quantum setting (i.e., with no uncertainty principle), we also need to set the velocities of the particles. Let’s make our margin of error the equivalent of moving a Planck distance within ten minutes. So our margin of error for velocity in any direction will be about 10^−35 meters in 600 seconds, or about 10^−38 meters per second. Speeds range from 0 to the speed of light, or about 10^8 meters per second, so the probability of getting one component of the velocity right is about 10^−46, and the probability of getting all three components right is something like 10^−138. The probability of getting both the position and velocity of a particle right is then 10^−(185+138) = 10^−323. Yeah, that’s small. Also, there are about 100 different types of particles, and there are a few other determinables like spin, so let’s multiply by about 10^−3 to get 10^−326.

The total mass of planetary stuff within around five light minutes of earth—namely, Earth, Mars and Venus—is around 10^25 kilograms. There are no more than about 10^25 atoms, and hence about 10^27 particles, per kilogram. So, we have 10^52 particles to arrange within our volume.

We’re ready to finish the calculation. The probability of arranging this many particles with the right types and within our position and velocity margins of error is:

  • (10^−326)^(10^52) = 10^(−326 × 10^52) ≈ 10^(−10^55).

Notice, interestingly, that most of the 55 in that double exponent comes from the number of particles we are dealing with. In fact, our calculation shows that getting 10^N particles in the right configuration has, very roughly, a probability of around 10^(−10^(N+3)).
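The arithmetic above can be sanity-checked in a few lines, working entirely with base-10 exponents (a sketch using the post’s own round estimates; variable names are mine):

```python
import math

# All arithmetic is done on base-10 exponents: the probabilities
# themselves are far too small to represent as floats.

# Position: ~10^80 m^3 of observable universe, Planck volume ~10^-105 m^3,
# so one particle lands in the right Planck volume with probability ~10^-185.
log_p_position = -(80 + 105)                 # -185

# Velocity: margin ~10^-38 m/s out of a ~10^8 m/s range, three components.
log_p_velocity = 3 * (-38 - 8)               # -138

# Particle type, spin and other determinables: another factor of ~10^-3.
log_p_particle = log_p_position + log_p_velocity - 3   # -326

# Raising to the 10^52 particles multiplies the exponent by 10^52.
# Express the result as 10^-(10^x):
x = math.log10(-log_p_particle) + 52

print(log_p_position, log_p_velocity, log_p_particle)  # -185 -138 -326
print(round(x, 1))                                     # 54.5, i.e. ~10^-(10^55)
```

Rounding 54.5 up to 55 gives the figure in the text, and shifting the particle count from 10^52 to 10^N shifts x to roughly N + 2.5, hence the 10^(−10^(N+3)) rule of thumb.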

So what? Well, Roger Penrose has estimated the probability of a universe with an initial entropy like ours at 10^(−10^123). So, now we have two hypotheses:

  • A universe like ours came into existence with a Big Bang

  • The local five minute hypothesis.

If there is no intelligence behind the universe, and if probabilistic calculations are at all appropriate for things coming into existence ex nihilo, the above probability calculations seem about right, and the local five minute hypothesis wins by a vast margin: 10^(−10^55) to 10^(−10^123) or, roughly, 10^(10^123) to 1. And if probabilistic calculations are not appropriate, then we cannot compare the hypotheses probabilistically, and lots of scepticism also follows. Hence, if there is no intelligence behind the universe, scepticism about everything more than five minutes ago and more than five light minutes from us follows.

Wednesday, October 30, 2019

1+1=3 or 2+2=3

On numerical-sameness-without-identity views, two entities that share their matter count as one when we are counting objects.

Here is a curious consequence. Suppose I have a statue of Plato made of bronze with the nose broken off and lost. I make up a batch of playdough, sculpt a nose out of it and stick it on. The statue of Plato survives the restoration, and a new thing has been added, a nose. But now notice that I have three things, counting by sameness:

  • The statue of Plato

  • The lump of bronze

  • The lump of playdough.

Yet I only added one thing, the lump of playdough or the nose that is numerically the same (without being identical) as it. So, it seems, 1+1=3.

Now, it is perfectly normal to have cases where by adding one thing to another I create an extra thing. Thus, I could have a lump of bronze and a lump of playdough and they could come together to form a statue, with neither lump being a statue on its own. A new entity can be created by the conjoining of old entities. But that’s not what happens in the case of the statue of Plato. I haven’t created a new entity. The statue was already there at the outset. And I added one thing.

Maybe, though, what should be said is this: I did create a new thing, a lump of bronze-and-playdough. This thing didn’t exist before. It is now numerically the same as the statue of Plato, which isn’t new, but it is still itself a new thing. I am sceptical, however, whether the lump of bronze-and-playdough deserves a place in our ontology. We have unification qua statue, but qua lump it’s a mere heap.

Suppose we do allow, however, that I created a lump of bronze-and-playdough. Then we get another strange consequence. After the restoration, counting by sameness:

  • There are two things that I created: the nose and the lump of bronze-and-playdough

  • There are two things that I didn’t create: the statue of Plato and the lump of bronze.

But there are only three things. Which makes it sound like 2+2=3. That’s perhaps not quite fair, but it does seem strange.

Tuesday, October 29, 2019

Sameness without identity

Mike Rea’s numerical-sameness-without-identity solution to the problem of material constitution holds that the statue and the lump have numerical sameness but do not have identity. Rea explicitly says that numerical sameness implies sharing of all parts but not identity.

Does Rea here mean: sharing of all parts, proper or improper? It had better not be so. For improper parthood is transitive.

Proposition. If improper parthood is transitive and x and y share all their parts (proper and improper), then x = y.

Proof: Suppose that x and y share all their parts. Then since x is a part of x, x is a part of y; and since y is a part of y, y is a part of x. Moreover, if x ≠ y, then x is a proper part of y and y is a proper part of x. Hence, by transitivity, x would be a proper part of x, which is absurd; so we cannot have x ≠ y. □
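The key step can be formalized. Here is a Lean 4 sketch (the names are mine); note that the hypothesis it runs on is transitivity of proper parthood, which is what the proof’s final move invokes:

```lean
-- Proper parthood is defined from improper parthood as `part x y ∧ x ≠ y`.
-- If proper parthood is transitive, then two distinct things cannot be
-- parts of each other: otherwise x would be a proper part of itself.
theorem no_mutual_parts_of_ne {α : Type} (part : α → α → Prop)
    (htrans : ∀ x y z : α,
      (part x y ∧ x ≠ y) → (part y z ∧ y ≠ z) → (part x z ∧ x ≠ z))
    (x y : α) (hne : x ≠ y) (hxy : part x y) (hyx : part y x) : False :=
  -- x is a proper part of y, and y of x; transitivity makes x a proper
  -- part of x, whose second component `x ≠ x` is refuted by `rfl`.
  (htrans x y x ⟨hxy, hne⟩ ⟨hyx, fun h => hne h.symm⟩).2 rfl
```

Contrapositively: if x and y share all their parts, so that each is a part of the other, then x = y, which is the Proposition.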

So let’s assume charitably that Rea means the sharing of all proper parts. This is perhaps coherent, but it doesn’t allow Rea to preserve common sense in Tibbles/Tib cases. Suppose Tibbles the cat loses everything below the neck and becomes reduced to a head in a life support unit. Call the head “Head”. Then Head is a proper part of Tibbles. The two are not identical: the modal properties of heads and cats are different. (Cats can have normal tails; heads can’t.) This is precisely the kind of case where Rea’s sameness-without-identity mechanism should apply, so that Head and Tibbles are numerically the same without identity. Yet Tibbles has Head as a proper part, while Head does not have Head as a proper part. So Tibbles and Head do not share all their proper parts.

Here may be what Rea should say: if x and y are numerically the same, then any part of the one is numerically the same as a part of the other. This does, however, have the cost that the sharing-of-parts condition now cannot be understood by someone who doesn’t already understand sameness without identity.

Friday, October 25, 2019

The present king of Ruritania

Suppose I am a quack and I announce:

  1. These green pills cured the king of Ruritania of lung cancer.

I am lying, of course. The green pills never cured anyone of lung cancer.

But wait. To lie, I have to assert. To assert, there has to be a proposition that is being expressed. But (1) doesn’t express any proposition, because “Ruritania” is a non-referring name.

Maybe, then, (1) is not a lie, but something that is wrong for the same reason that a lie is wrong. For instance, on Jorge Garcia’s account, lying is wrong as it’s a betrayal of the trust solicited by the very same act. If so, then my pretend assertion of (1) might be wrong for exactly the same reason as a lie.

The point can also be made without relying on non-referring proper names. Suppose Jones has lied, cheated, stolen, plagiarized and defenestrated his friends, but reporting doesn’t make his character black enough for my purposes. So I say:

  2. Dr. Jones has lied, cheated, stolen, plagiarized, defenestrated his enemies, and garobulated his friends.

This doesn’t express a proposition. But it’s just as bad as a lie.

Thursday, October 24, 2019

Perdurance and particles

A perdurantist who believes that particles are fundamental will typically think that the truly fundamental physical entities are instantaneous particle-slices.

But particles are not spatially localized, unless we interpret quantum mechanics in a Bohmian way. They are fuzzily spread over space. So particle-slices have the weird property that they are precisely temporally located—by definition of a slice—but spatially fuzzily spread out. Of course, it is not too surprising if fundamental reality is strange, but maybe the strangeness here should make one suspicious.

There is a second problem. According to special relativity, there are infinitely many spacelike hyperplanes through spacetime at a given point z of spacetime, corresponding to the infinitely many inertial frames of reference. If particles are spatially localized, this isn’t a problem: all of these hyperplanes slice a particle that is located at z into the same slice-at-z. But if the particles are spatially fuzzy, we have different slices corresponding to different hyperplanes. Any one family of slices seems sufficient to ground the properties of the full particle, but there are many families, so we have grounding overdetermination of a sort that seems to be evidence against the hypothesis that the slices are fundamental. (Compare Schaffer’s tiling requirement on the fundamental objects.)

A perdurantist who thinks the fundamental physical entities are fields has a similar problem.

A supersubstantialist perdurantist, who thinks that the fundamental entities are points of spacetime, doesn’t run into this problem. But that’s a really, really radical view.

An “Aristotelian” perdurantist who thinks that particles (or macroscopic entities) are ontologically prior to their slices also doesn’t have this problem.

Wednesday, October 23, 2019

Book in Progress: Norms, Natures and God

I have begun work on a book with the working title Norms, Natures and God. It should be a book on how positing Aristotelian natures solves problems in ethics (normative and meta), epistemology, semantics, metaphysics and mind, but also on how, especially after Darwin, to be an intellectually satisfied Aristotelian one must be a theist. The central ideas for this were in my Wilde Lectures.

There is a GitHub repository for the project with a PDF that will slowly grow as I write (as of this post, it only has a table of contents). I welcome comments: the best way to submit them is to click on "Issues" and just open a bug report. :-)

The repository will disappear once the text is ready for submission to a publisher.

Perdurance and slices

One of the main problems with perdurance is thought to be that it makes intrinsic properties be primarily properties of slices, and only derivatively of the four-dimensional whole.

The most worrisome case of this problem has to do with mental properties. For if our slices have the mental properties primarily, and we only have them derivatively, then that leads to a sceptical problem (how do I know I am a whole and not a slice?) and besides violates the intuition that we have our mental properties primarily.

But someone who accepts a perdurantist ontology and accepts the idea that we are four-dimensional wholes does not have to say that intrinsic properties are primarily had by slices. For a property that involves a relation to one’s parts can still be intrinsic (having one’s parts is surely intrinsic!). Now instead of saying that, say, Bob has temporary property P at time t in virtue of his slice Bt at t having P, we can say that Bob has P in relation to Bt. This is very similar to how relationalist endurantists say that we have our temporary properties in relation to times, except that times are normally thought of as extrinsic to the object, while the slices are parts of the objects.

In fact, this helps save some intuitions of intrinsicness. For instance, it seems to be an intrinsic property of me that my heart is beating. But if t is now and At is my slice now, then At does not seem to intrinsically have the property of heart-beat. It seems that heart-beat is a dynamical property, dependent not just on the state of the object at one time but also at nearby times. Thus, if we want to attribute heart-beat to At primarily, then heart-beat will not be intrinsic, as it will depend on At as well as the slices At′ for t′ near t. But if we see my present heart-beat as a property of the four-dimensional worm, a property the worm has in relation to At (as well as the neighboring slices), then heart-beat can be an intrinsic property—and it can be had primarily by me, not my slices.

It is plausible that mental properties are dynamical as well: that one cannot tell just from the intrinsic properties of a three-dimensional slice whether thought is happening. (This is pretty much certain given materialism, but I think is plausible even on dualism.) So, again, mental properties aren’t going to be intrinsic properties of slices. But they can be primarily the intrinsic properties of four-dimensional persons, had in relation to their slices.

Tuesday, October 22, 2019

Persistence and internal times

Here are some desiderata for a view of the persistence of objects:

  1. Ordinary objects can change with respect to intrinsic properties.

  2. Ordinary objects are the primary bearers of some of the changeable intrinsic properties.

  3. Ordinary objects are literally present at multiple times.

Endurantism is usually allied with some sort of view on which temporary properties are had in relation to times, and hence the temporary properties are relational and not intrinsic. Perdurantism violates 2: it is the stages, not the ordinary objects, that are the primary bearers of the temporary intrinsics. And no primary bearer of a property can change with respect to it. Exdurantism violates 3: ordinary objects only exist at a single time.

Here is a view that yields all three desiderata. Objects have internal times, and these internal times are literally parts of the objects. Changeable intrinsic properties are relational to the internal times: an object is, say, straight at internal time t1 and bent at internal time t2.

Let’s go through the desiderata. The internal times are parts of the object, and a property obtaining in virtue of relations between one’s own parts can still be intrinsic. Shape, for instance, might be had in virtue of the spatial relationships between the parts of an object—and yet this does not rule out shape being intrinsic (indeed, for David Lewis it’s paradigmatically intrinsic). Similarly, consciousness properties in a split brain might be had relationally to a brain hemisphere, but are still intrinsic since brain hemispheres are parts of the patient. Thus we can have (1).

Moreover, while parts—namely, internal times—are used to account for change, the parts are not the primary bearers of the changeable intrinsic properties. The changeable intrinsic properties are taken to be relational between the ordinary object and the times, but that does nothing to rule out the possibility that some of these properties are primarily had by the object as a whole.

Ordinary objects can be literally present at multiple times. One can ensure this either in an endurantist way, so that the ordinary objects are multiply temporally located 3D objects, or in a four-dimensionalist way, so that the ordinary objects are 4D. Note that the endurantist version may require the ordinary object to have parts—namely, the internal times—that do not themselves endure but that only exist for an external instant. But there is no problem with an enduring object having a short-lived part.

There is another variant of the view. The internal times could be taken to be abstract objects instead of parts of the ordinary object. Arguably, a property that is had in virtue of a relation to an abstract object is not thereby objectionably extrinsic. If it were, then strong Platonists would all count as denying the existence of intrinsic properties.

Monday, October 21, 2019

The sexual, the secret and the sacred

Some ethical truths are intuitively obvious but it is hard to understand the reasons for them. For instance, sexual behavior should be, at least other things being equal, kept private. But why? While I certainly have this intuition, I have always found it deeply puzzling, especially since privacy is opposed to the value of knowledge and hence always requires a special justification.

But here is a line of thought that makes sense to me now. There is a natural connection between the sacred and the ritually hidden recognized across many religions. Think, for instance, of how the holiest prayers of the Tridentine Mass are said inaudibly by the priest, or the veiling of the Holy of Holies in the Temple of Jerusalem, or the mystery religions. The sacred is a kind of mysterium tremendum et fascinans, and ritual hiddenness expresses the mysteriousness of the sacred particularly aptly.

If sexuality is sacred—say, because of its connection with the generation of life, and given the sacredness of human life—then it is unsurprising if it is particularly appropriately engaged in in a context that involves ritual hiddenness.

Note that this is actually more of a ritual hiddenness than an actual secrecy. The fact of sex is not a secret in the case of a married couple, just as the content of the inaudible prayers of the Tridentine Mass is printed publicly in missals, but it is ritually hidden.

I wonder, too, if reflection on ritual hiddenness might not potentially help with the “problem of hiddenness”.

Wednesday, October 16, 2019

An argument that the moment of death is at most epistemically vague

Assume vagueness is not epistemic. This seems a safe statement:

  1. If it is vaguely true that the world contains severe pain, then definitely the world contains pain.

But now take the common philosophical view that the moment of death is vague, except in the case of instant annihilation and the like. The following story seems logically possible:

  2. Rover the dog definitely dies in severe pain, in the sense that it is definitely true that he is in severe pain for the last hours of his life all the way until death, which comes from his owner humanely putting him out of his misery. The moment of death is, however, vague. And definitely nothing other than Rover feels any pain that day, whether vaguely or definitely.

Suppose that t1 is a time when it is vague whether Rover is still alive or already dead. Then:

  3. Definitely, if Rover is alive at t1, he is in severe pain at t1. (By 2)

  4. Definitely, if Rover is not alive at t1, he is not in severe pain at t1. (Uncontroversial)

  5. It is vague whether Rover is alive at t1. (By 2)

  6. Therefore, it is vague whether Rover is in severe pain at t1. (By 3-5)

  7. Therefore, it is vague whether the world contains severe pain at t1. (By 2 and 6, as 2 says that Rover is definitely the only candidate for pain)

  8. Therefore, definitely the world contains pain at t1. (By 1 and 7)

  9. Therefore, definitely Rover is in pain at t1. (By 2 and 8, as before)

  10. Therefore, definitely Rover is alive at t1. (Contradiction to 5!)

So, we cannot accept story 2. Therefore, if principle 1 is true, it is not possible for something with a vague moment of death to definitely die in severe pain, with death definitely being the only respite.

In other words, it is impossible for vagueness in the moment of death and vagueness in the cessation of severe pain to align perfectly. In real life, of course, they probably don’t align perfectly: unconsciousness may precede death, and it may be vague whether it does so or not. But it still seems possible for them to align perfectly, and to do so in a case where the moment of death is vague—assuming, of course, that moments of death are the sort of thing that can be vague. (For a special case of this argument, assume functionalism. We can imagine a being of such a sort that the same functioning constitutes it as existent as constitutes it as conscious, and then vagueness in what counts as functioning will translate into perfectly correlated vagueness in the moment of death and the cessation of severe pain.)

The conclusion I’d like to draw from this argument is that moments of death are not the sort of thing that can be non-epistemically vague.

Note that 1 is not plausible on an epistemic account of vagueness. For the intuition behind 1 depends on the idea that vague cases are borderline cases, and a borderline case of severe pain will be a definite case of pain, just as a borderline case of extreme tallness will be a definite case of tallness. But if vagueness is epistemic, then vague cases aren't borderline cases: they are just cases we can't judge about. And there is nothing absurd about the idea that we might not be able to judge whether there is severe pain happening and not able to judge whether there is any pain happening either.

Fusions and organisms

Suppose you believe the following:

  1. For any physical objects, the xs, there is a physical object y with the following properties:
    a. each of the xs is a part of y;
    b. it is an essential property of y that it have the parts it does; and
    c. necessarily, if all the actual proper parts of y exist, then y exists as well.

For instance, on the standard version of mereological universalism, it seems we could just take y to be the fusion of the xs. And on some versions of monism, we could take y to be the cosmos.

But it seems (1) is false if organisms are physical objects and if particles survive ingestion. For suppose that there is exactly one x, Alice, who is a squirrel, and at t1 we find a y that satisfies (1). And now suppose that at t2 there comes into existence a nut whose simple parts are not already parts of y, and at t3 this nut has been eaten and fully digested by Alice. Suppose no parts of y have ceased to exist between t1 and t3. Then y exists at t3 by (c), and has Alice as a part of itself (by (a) and (b)), and the simple particles of the nut are parts of y by transitivity as they are parts of Alice. Hence y has gained parts, contrary to (b), a contradiction.

(Note that the argument can be run modally against a four-dimensionalist version of (1).)

The mereological universalist’s best bet may be to deny that fusions satisfy (c). Normally, we think that the only way for a fusion to perish is for one of its proper parts to perish. But there may be another way for a fusion to perish, namely by certain kinds of changes in the mereological structure of the fusion’s proper parts, and specifically by one of the fusion’s proper parts gaining a part that wasn’t already in the fusion.

Here is another problem for (1), though. Suppose that Alice the squirrel is the only physical object in the universe. Now consider a y satisfying (1)(a)–(b). Then y is distinct from Alice because y has different modal properties from Alice: Alice can survive annihilation of one of her claws while y cannot by (b). But this violates the Weak Supplementation mereological axiom, since all of y’s parts overlap Alice. So we cannot combine fusions as normally conceived of (since the normal conception of them includes classical mereology) with organisms.

A way out of both problems is to say that there are two different senses of parthood at issue: fusion-parthood and organic-parthood, and there is no transitivity across them. This is a serious ideological complication.

Tuesday, October 15, 2019

Oligonism
Monism holds there is only one (or at least one fundamental) thing in reality: the universe. Pluralism, as normally taken, holds there are many. An underexplored metaphysical view is oligonism: the view that there are (at least fundamentally) only a handful of objects in reality, but more than one.

One way to get oligonism is to take the universe of monism and add God while holding that God is not derivative from the universe. But that’s still a monism about created reality, and my interest here is going to be in oligonism about created reality (the non-theist reader can substitute “concrete reality”).

The most promising version of oligonism is one on which the correct physics of the world consists of a handful of fundamental fields (e.g., gravitational, electromagnetic, etc.) and these fundamental fields are the fundamental objects in reality.

Oligonism suffers from an inconvenient complication as compared to monism. The monist can at least say that we have derivative existence as parts of a fundamental whole. The field oligonist cannot, because there is no one fundamental whole that we are parts of. On field oligonism, what we need to say is that each of us is jointly constituted by the arrangement of a handful of fields: I exist in virtue of the gravitational, electromagnetic and other fields having the right sorts of concentrations here.

Maybe, though, one can have a one-many parthood relation: x is a part of y, z, w, ... even though x isn't a part of y, or of z, or of w, but only of all of them jointly. Then we could exist as parts of the gravitational, electromagnetic and other fields, without existing as parts of any one of them. A one-many parthood relation isn't crazy. Take an Aristotelian or van Inwagen view on which living things are the only complex objects. Now we could imagine two organisms, A and B, that each have a symbiotic relationship with a third object C but not with each other, so that we have two symbiotic wholes: AC and BC. Further suppose that only a part of C is involved in AC and a disjoint part of C is involved in BC. Then we could say that C is a part of AC and BC jointly, but isn't a part of either AC or of BC alone, nor is there a greater whole ABC that contains all of C.

Of course, I don't think oligonism is true. The main reason I don't think that is that I think we are fundamental.

Friday, October 11, 2019

Do inconsistent credences lead to Dutch Books?

It is said that if an agent has inconsistent credences, she is Dutch Bookable. Whether this is true depends on how the agent calculates expected utilities. After all, expected utilities normally are Lebesgue integrals over a probability measure, but the inconsistent agent’s credences are not a probability measure, so strictly speaking there is no such thing as a Lebesgue integral over them.

Let’s think how a Lebesgue integral is defined. If P is a probability measure and U is a measurable function on the sample space, then the expected value of U is defined as:

  1. E(U) = ∫_0^∞ P(U > y) dy − ∫_{−∞}^0 P(U < y) dy

where the latter two integrals are improper Riemann integrals and where P(U > y) is shorthand for P({ω : U(ω)>y}) and similarly for P(U < y).

Now suppose that P is not a probability measure, but an arbitrary function from the set of events to the real numbers. We can still define the expected value of U by means of (1) as long as the two Riemann integrals are defined and aren’t both ∞ or both −∞.

Now, here is an easy fact:

Proposition: Suppose that P is a function from a finite algebra of events to the non-negative real numbers such that P(∅)=0. Suppose that U is a measurable (with respect to the finite algebra) function such that (a) P(U > y)=0 for all y > 0 and (b) P(U < 0)>0. Then if E(U) is defined by (1), we have E(U)<0.

Proof: Since the algebra is finite and U is measurable, U takes on only finitely many values. If y0 is the largest of its negative values, then P(U < y) = P(U < 0) for any y with y0 < y < 0, and hence ∫_{−∞}^0 P(U < y) dy ≥ |y0| P(U < 0) > 0 by (b), while ∫_0^∞ P(U > y) dy = 0 by (a). Hence E(U) < 0. □

But then:

Corollary: If P is a function from a finite algebra of events on the sample space Ω to the non-negative real numbers with P(∅)=0 and P(Ω)>0, then an agent who maximizes expected utility with respect to the credence assignment P as computed via (1), and who starts with a baseline betting portfolio whose utility is zero no matter what happens, will never be Dutch Booked by a finite sequence of changes to her portfolio.

Proof: The agent starts off with a portfolio with a utility assignment U0 where P(U0 > y)=0 for all y > 0 and P(U0 < y)=0 for all y < 0, and hence one where E(U0)=0 by (1). If the agent is in a position where the expected utility of her current portfolio is non-negative, she will never accept a change to the portfolio that turns the portfolio’s expected utility negative, as that would violate expected utility maximization. By mathematical induction, no finite sequence of changes to her portfolio will turn her expected utility negative. But if a portfolio is a Dutch Book, then the associated utility function U is such that P(U < 0)=P(Ω)>0 and P(U > y)=0 for all y > 0. Hence by the Proposition, E(U)<0, and so a Dutch Book will not be accepted at any finite stage. □

Note that the Corollary does assume a very weak consistency in the credence assignment: negative credences are forbidden, impossible events get zero credence, and necessary events get non-zero credence.

Additionally, the Corollary does allow for the possibility of what one might call a relative Dutch Book, i.e., a change between portfolios that loses the agent money no matter what. The final portfolio won’t be a Dutch Book relative to the initial baseline portfolio, of course.

Note, however, that we don’t need consistency to get rid of relative Dutch Books. Adding the regularity assumption that P(A)>0 for all non-empty A and the monotonicity condition that if A ⊂ B then P(A)<P(B) is all we need to ensure the agent will never accept even a relative Dutch Book. For regularity plus monotonicity ensures that a relative Dutch Book always decreases expected utility as defined by (1). But these conditions are not enough to rule out all inconsistency. For instance, if in the case of the flip of a single coin I assign probability 1 to heads-or-tails, probability 0.8 to heads, probability 0.8 to tails, and probability 0 to the empty event, then my assignment is patently inconsistent, but satisfies all of the above assumptions and hence is neither absolutely nor relatively Dutch Bookable.
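That coin example can be checked directly. Here is a minimal sketch (function and variable names are mine); on a finite sample space both integrands in (1) are step functions, so the improper integrals reduce to finite sums over the utility values:

```python
def exp_utility(P, U, omega=('H', 'T')):
    """Expected utility per definition (1) on a finite sample space.

    P maps events (frozensets of points) to reals; it need not be additive.
    """
    ev = lambda pred: frozenset(w for w in omega if pred(w))
    pos = sorted({U(w) for w in omega if U(w) > 0})
    neg = sorted({U(w) for w in omega if U(w) < 0})
    total, lo = 0.0, 0.0
    for v in pos:                    # for y in (lo, v): {U > y} = {U >= v}
        total += (v - lo) * P[ev(lambda w: U(w) >= v)]
        lo = v
    hi = neg + [0.0]
    for j, u in enumerate(neg):      # for y in (u, hi[j+1]): {U < y} = {U <= u}
        total -= (hi[j + 1] - u) * P[ev(lambda w: U(w) <= u)]
    return total

# The inconsistent credences from the text: 0.8 + 0.8 != 1, yet the
# assignment is regular (positive on non-empty events) and monotone.
P = {frozenset(): 0.0,
     frozenset({'H'}): 0.8,
     frozenset({'T'}): 0.8,
     frozenset({'H', 'T'}): 1.0}

sure_loss = lambda w: -1.0                       # a Dutch Book: lose 1 either way
bet_on_heads = lambda w: 1.0 if w == 'H' else -1.0

print(exp_utility(P, sure_loss))     # -1.0: strictly worse than the zero baseline
print(exp_utility(P, bet_on_heads))  # 0.0: 0.8 - 0.8
```

The sure-loss portfolio gets negative expected utility under (1), so a maximizer keeps the zero-utility baseline rather than accept it, despite the patent inconsistency of P.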

How does all this cohere with the famous theorems about inconsistent credence assignments being Dutch Bookable? Simple: those theorems define expected utility for inconsistent credences differently. Specifically, they define expected utility as Σ_i U_i P(E_i), where the E_i partition the sample space and on E_i the utility has the constant value U_i. But that’s not the obvious and direct generalization of the Lebesgue integral!

I vaguely recall hearing something that suggests to me that this might be in the literature.

Also, I slept rather poorly, so I could be just plain mistaken in the formal stuff.

Thursday, October 10, 2019

Approximatable laws

Some people, most notably Robin Collins, have run teleological arguments from the discoverability of the laws of nature.

But I doubt that we know that the laws of nature are discoverable. After all, it seems we haven’t discovered the laws of physics yet.

But the laws of nature are, surely, approximatable: it is within our power to come up with approximations that work pretty well in limited, but often useful, domains. This feature of the laws of nature is hard to deny. At the same time, it seems to be a very anthropocentric feature, since both the ability to approximate and the usefulness are anthropocentric. The approximatability of the laws of nature thus suggests a universe whose laws are designed by someone who cares about us.

Objection: Only given approximatable laws is intelligence an advantage, so intelligent beings will only evolve in universes with approximatable laws. Hence, the approximatable laws can be explained in a multiverse by an anthropic principle.

Response: Approximatability is not a zero-one feature. It comes in degrees. I grant that approximatable laws are needed for intelligence to be an advantage. But they only need to be approximatable to the degree that was discovered by our prehistoric ancestors. There is no need for the further approximatability that was central to the scientific revolution. Thus an anthropic principle explanation only explains a part of the extent of approximatability.

Tuesday, October 8, 2019

Humean accounts of modality

Humean accounts of modality, like Sider’s, work as follows. We first take some privileged truths, including all the mathematical ones, and an appropriate collection of others (e.g., ones about natural kind membership or the fundamental truths of metaphysics). And then we stipulate that to be necessary is to follow from the collection of privileged truths, and that to be possible is to have a negation that isn’t necessary.
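In symbols (the notation is mine): writing $T$ for the set of privileged truths and $\models$ for the non-modal consequence relation the account appeals to, the stipulations are:

```latex
\Box p \;:=\; (T \models p)
\qquad\text{and}\qquad
\Diamond p \;:=\; \neg \Box \neg p \;\equiv\; (T \not\models \neg p).
```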

Here is a problem. We need to be able to say things like this:

  1. Necessarily it’s possible that 2+2=4.

For that to be the case,

  2. It’s possible that 2+2=4

has to follow from the privileged truths. But on the theory under consideration, (2) means:

  3. That 2 + 2 ≠ 4 does not follow from the privileged truths.

So, (3) has to follow from the privileged truths. Now, how could it do that? Suppose first that the privileged truths include only the mathematical ones. Then (3) has to be a mathematical truth: for only mathematical truths follow logically from mathematical truths. But this means that “the privileged truths”, i.e., “the mathematical truths”, has to have a mathematical description. For instance, there has to be a set or proper class of mathematical truths. But that “the mathematical truths” has a mathematical description is a direct violation of Tarski’s Indefinability of Truth theorem, which is a variant of Goedel’s First Incompleteness Theorem.

So we need more truths than the mathematical ones to be among the privileged ones, enough that (3) should follow from them. But it is unlikely that any of the privileged truths proposed by the proponents of Humean accounts of modality will do the job with respect to (3). Even the weaker claim:

  4. That 2 + 2 ≠ 4 does not follow from the mathematical truths

seems hard to get from the normally proposed privileged truths. (It’s not mathematical, it’s not natural kind membership, it’s not a fundamental truth of metaphysics, etc.)

Consider this. The notion of “follows from” in this context is a formal mathematical notion. (Otherwise, it’s an undefined modal term, rendering the account viciously circular.) So facts about what does or does not follow from some truths seem to be precisely mathematical truths. One natural way to make sense of (4) is to say that there is a privileged truth that says that some set T is the set of mathematical truths, and then suppose there is a mathematical truth that 2 + 2 ≠ 4 does not follow from T. But a set of mathematical truths violates Indefinability of Truth.

Perhaps, though, we can just add to the privileged truths some truths about what does and does not follow from the privileged truths. In particular, the privileged truths will contain, or it will easily follow from them, the truth that they are mutually consistent. But now the privileged truths become self-referential in a way that leads to contradiction. For instance:

  5. No x such that F(x) follows from the privileged truths.

will make sense for any F, and we can choose a predicate F such that it is provable that (5) is the only thing that satisfies F (cf. Goedel’s diagonal lemma). Now, if (5) follows from the privileged truths, then it also follows from the privileged truths that (5) doesn’t follow from the privileged truths, and hence that the privileged truths are inconsistent. Thus, from the fact that the privileged truths are consistent, which itself is a privileged truth or a consequence thereof, one can prove (5) doesn’t follow from the privileged truths, and hence that (5) is true, which is absurd.
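The diagonal step can be sketched in standard provability-logic notation (a sketch only; it assumes the follows-from relation for the privileged truths $T$ satisfies the usual derivability conditions):

```latex
% By the diagonal lemma, fix a sentence G (playing the role of (5)) with
T \vdash G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G \urcorner).
% If T \vdash G, then T \vdash \mathrm{Prov}_T(\ulcorner G \urcorner) by the
% derivability conditions, while the biconditional gives
% T \vdash \neg\mathrm{Prov}_T(\ulcorner G \urcorner); so T is inconsistent.
% Formalizing that reasoning inside T: if Con(T) is itself among the
% privileged truths (or follows from them), then
T \vdash \mathrm{Con}(T) \to \neg\mathrm{Prov}_T(\ulcorner G \urcorner),
\quad\text{hence}\quad
T \vdash \neg\mathrm{Prov}_T(\ulcorner G \urcorner),
\quad\text{hence}\quad
T \vdash G,
% contradicting the first step.
```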

Monday, October 7, 2019

How the law needs to be written in the heart

In ethics, we seek a theory of obligation whose predictions match our best intuitions.

Suppose that explorers on the moon find a booklet with pages of platinum that contains an elegant collection of moral precepts that match our best intuitions to an incredible degree, better than anything that has been seen before. When we apply the precepts to hard cases, we find solutions that, to people we think of as decent, seem just right, and the easy cases all work correctly. And every apparently right action either follows from the precepts, or turns out to be a sham on deeper reflection.

This would give us good reason to think the precepts of the booklet in fact do sum up obligations. But now imagine Euthyphro came along and gave us this metaethical theory:

  1. What makes an action right is that it follows from the content of this booklet.

Euthyphro would be wrong. For even though (1) gives a correct account of which actions are in fact right, the right action isn’t right because it’s written in the booklet. (Is it written in the booklet because it’s right? Probably: the best theory of the booklet’s composition would be that it was written by some ethical genius who wrote what was right because it was right.)

Why not? What’s wrong with (1)? It seems to me that (1) is just too extrinsic to us. There is no connection between the booklet and our actions, besides the fact that the actions required by the booklet are exactly the right ones.

What if instead the booklet were an intrinsic feature of human beings? What if ethics were literally written in the human heart, so that microscopic examination of a dissected human heart found miniature words spelling out precepts that we have very good reason to think sum up the theory of the right? Again, we should not go for a Euthyphro-style theory that equates the right with what is literally written in the heart. Yet on this theory the grounds of the right would be literally intrinsic to us—and they could be essential to us, if we wish: further examination could show that it is an essential feature of human DNA that it generates this inscription. This would give us reason to think that human beings were designed by an ethical genius, but not that the ground of the right is the writing in the heart.

The lesson is this, I think. We want the grounds of the right to be of the correct sort. Being metaphysically intrinsic to us is a necessary condition for this, but it is not sufficient. We want the grounds of the right to be “close to us”: closer than our physical hearts, as it were.

But we also don’t want the grounds of the right to be too close to us. We don’t want the right to be grounded in the actual content of our desires or beliefs. We are looking for grounds that exercise some sort of a dominion over us, but not an alien dominion.

The more I think about this, the more I see the human form—understood as an actual metaphysical component intrinsic and essential to the human being—as having the exactly right balance of standoffish dominion and closeness to provide these grounds. In other words, Natural Law provides the right metaethics.

And the line of thought I gave above can also be repeated for epistemological normativity. So we have reason to think the Natural Law provides the right metaepistemology as well.

Friday, October 4, 2019

A tension in some theistic Aristotelian thinkers

Here is a tension in the views of some theistic Aristotelian philosophers. On the one hand, we argue:

  1. That the mathematical elegance and discoverability of the laws of physics are evidence for the existence of God

but we also think:

  2. There are higher-level (e.g., biological and psychological) laws that do not reduce to the laws of physics.

These higher-level laws, among other things, govern the emergence of higher-level structures from lower-level ones and the control that the higher-level structures exert over the lower-level ones.

The higher-level laws are largely unknown except in the broadest outline. They are thus not discoverable in the way the laws of physics are claimed to be, and since no serious proposals are yet available as to their exact formulation, we have no evidence as to their elegance. But as evidence for the existence of God, the elegance and discoverability of a proper subset of the laws is much less impressive. In other words, (1) is really impressive if all the laws reduce to the laws of physics. But otherwise, (1) is rather less impressive. I’ve never seen this criticism.

I think, however, there is a way for the Aristotelian to still run a design argument.

Either all the laws reduce to the laws of physics or not.

If they all reduce to the laws of physics, pace Aristotelianism, we have a great elegance and discoverability design argument.

Suppose now that they don’t. Then there is, presumably, a great deal of complex connection between structural levels that is logically contingent. It would be logically possible for minds to arise out of the kinds of arrangements of physical materials we have in stones, but then the minds wouldn’t be able to operate very effectively in the world, at least without massively overriding the physics. Instead, minds arise in brains. The higher-level laws rarely if ever override the lower-level ones. Having higher-level laws that fit so harmoniously with the lower-level laws is very surprising a priori. Indeed, this harmony is so great as to be epistemically suspicious, suspicious enough that the need for such a harmony makes one worry that the higher-level laws are a mere fiction. But if they are a mere fiction, then we go back to the first option, namely reduction. Here, however, we are assuming that the higher-level stuff is irreducible. And now we have a great design argument from their harmony with the lower-level laws.

Wednesday, October 2, 2019

An Aristotelian account of proper parthood (for integral parts)

Here it is: x is a proper part of y iff x is informed by a form that informs y and x's being informed by that form is derivative from y's being informed by it.

Shape and parts

Alice is a two-dimensional object. Suppose Alice’s simple parts fill a round region of space. Then Alice is round, right?

Perhaps not! Imagine that Alice started out as an extended simple in the shape of a solid square and inside the space occupied by her there was an extended simple, Barbara, in the shape of a circle. (This requires there to be two things in the same place: that’s not a serious difficulty.) But now suppose that Alice metaphysically ingested Barbara, i.e., a parthood relation came into existence between Barbara and Alice, but without any other changes in Alice or Barbara.

Now Alice has one simple part, Barbara (or a descendant of Barbara, if objects “lose their identity” upon becoming parts—but for simplicity, I will just call that part Barbara), who is circular. So, Alice’s simple parts fill a circular region of space. But Alice is square: the total region occupied by her is a square. So, it is possible to have one’s simple parts fill a circular region of space without being circular.

It is tempting to say that Alice has two simple parts: a smaller circular one and a larger square one that encompasses the circular one. But that is mistaken. For where would the “larger square part” come from? Alice had no proper parts, being an extended simple, before ingesting Barbara, and the only part she acquired was Barbara.

Maybe the way to describe the story is this: Alice is square directly, in her own right. But she is circular in respect of her proper parts. Maybe Alice is the closest we can have to a square circle?

Here is another apparent possibility. Imagine that Alice started as an immaterial object with no shape. But she acquired a circular part, and came to be circular in respect of her proper parts. So, now, Alice is circular in respect of her proper parts, but has no shape directly, in her own right.

Once these distinctions have been made, we can ask this interesting question:

  • Do we human beings have shape directly or merely in respect of our proper parts?

If the answer is “merely in respect of our proper parts”, that would suggest a view on which we are both immaterial and material, a kind of Hegelian synthesis of materialism and simple dualism.