Friday, May 31, 2019

Gunk, etc.

If we think parts are explanatorily prior to wholes, then gunky objects—objects which have parts but no smallest parts—involve a vicious explanatory regress. But if one takes the Aristotelian view that wholes are prior to parts, then the regress involved in gunky objects doesn’t look vicious at all: the whole is prior to some parts, these parts are prior to others, and so on ad infinitum. It’s just like a forward causal regress: today’s state causes tomorrow, tomorrow’s causes the next day’s, and so on ad infinitum.

On the other hand, on the view that parts are explanatorily prior to wholes, upward compositional regresses are unproblematic: the head is a part of the cow, the cow is a part of the earth, the earth is a part of the solar system, the solar system is a part of the Orion arm, the Orion arm is a part of the Milky Way, the Milky Way is a part of the Local Group, and this could go on forever. The Aristotelian, on the other hand, has to halt upward regresses at substances, say, cows.

This suggests that nobody should accept an ontologically serious version of the Leibniz story on which composition goes infinitely far both downward and upward, and that it is fortunate that Leibniz doesn’t accept an ontologically serious version of that story, because only the monads and their inner states are to be taken ontologically seriously. But that's not quite right. For there is a third view, namely that parthood does not involve either direction of dependence: neither do parts depend on wholes nor do wholes depend on parts. I haven't met this view in practice, though.

Leibniz on infinite downward complexity

Leibniz famously thinks that ordinary material objects like trees and cats have parts, and these parts have parts, and so on ad infinitum. But he also thinks this is all made up of monads. Here is a tempting mental picture to have of this:

  • Monads, …, submicroscopic parts, microscopic parts, macroscopic parts, ordinary objects.

with the “…” indicating infinitely many steps.

This is not Leibniz’s picture. The quickest way to see that it’s not is that organic objects at each level immediately have primary governing monads. There isn’t an infinite sequence of steps between the cat and the cat’s primary monad. The cat’s primary monad is just that, the cat’s primary monad. The cat is made up of, say, cells. Each cell has a primary monad. Again, there isn’t an infinite sequence of steps between the cat and the primary monads of the cells: there might turn out to be just two steps.

In fact, although I haven’t come across texts of Leibniz that speak to this question, I suspect that the best way to take his view is to say that for each monad and each object partly constituted by that monad, the “compositional distance” between the monad and the object is finite. And there is a good mathematical reason for this: there are no infinite chains with two ends, in the sense that if one can get from one end of a chain to the other by a succession of immediate steps, the chain has only finitely many links.

If this is right, then the right way to express Leibniz’s infinite depth of complexity idea is not that there is infinite compositional distance between an ordinary object and its monads, but rather that there is no upper bound on the compositional distance between an ordinary object and its monads. For each ordinary object o and each natural number N, there is a monad m which is more than N compositional steps away from o.
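
The contrast is one of quantifier order. Writing d(m, o) for the compositional distance between a monad m and an object o (my notation, not Leibniz’s), the view can be put formally:

```latex
% Every monad is at finite compositional distance from the object it
% helps constitute (no two-ended infinite chains):
\forall o\, \forall m \quad d(m,o) < \infty
% ...and yet the distances are unbounded:
\forall o\, \forall N \in \mathbb{N}\ \exists m \quad d(m,o) > N
```

The second claim does not entail the negation of the first: unboundedness is compatible with every individual distance being finite.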

Fundamental mereology

It is plausible that genuine relations have to bottom out in fundamental relations. E.g., being a blood relative bottoms out in immediate blood relations, which are parenthood and childhood. It would be very odd indeed to say that a is b’s relative because a is c’s relative and c is b’s relative, and then a is c’s relative because a is d’s relative and d is c’s relative, and so on ad infinitum. Similarly, as I argued in my infinity book, following Rob Koons, causation has to bottom out in immediate causation.

If this is right, then proper parthood has to bottom out in what one might call immediate parthood. And this leads to an interesting question that has, to my knowledge, not been explored much: What is the immediate parthood structure of objects?

For instance, plausibly, the big toe is a part of the body because the big toe is a part of the foot which, in turn, is a part of the body. And the foot is a part of the body because the foot is a part of the leg which, in turn, is a part of the body. But where does it stop? What are the immediate parts of the body? The head, torso and the four limbs? Or perhaps the immediate parts are the skeletal system, the muscular system, the nervous system, the lymphatic system, and so on. If we take the body as a complex whole ontologically seriously, and we think that proper parthood bottoms out in immediate parthood, then there have to be answers to such questions. And similarly, there will then be the question of what the immediate parts of the head or the nervous system are.
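
On the picture just sketched, parthood is the transitive closure of immediate parthood. A toy model (with an invented and obviously incomplete anatomy) makes the structure explicit:

```python
# Toy model: parthood generated transitively from immediate parthood.
# The anatomy here is invented for illustration, not a proposed answer
# to the question of what the body's immediate parts really are.
immediate = {
    "body": {"head", "torso", "left leg", "right leg", "left arm", "right arm"},
    "left leg": {"left foot"},
    "left foot": {"left big toe"},
}

def parts(whole):
    """All proper parts of `whole`, generated transitively from immediate parts."""
    found = set()
    stack = [whole]
    while stack:
        for p in immediate.get(stack.pop(), set()):
            if p not in found:
                found.add(p)
                stack.append(p)
    return found

assert "left big toe" in parts("body")          # a part via foot and leg...
assert "left big toe" not in immediate["body"]  # ...but not an immediate part
```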

There is another, more reductionistic, way of thinking about parthood. The above came from the thought that parthood is generated transitively out of immediate parthood. But maybe there is a more complex grounding structure. Maybe particles are immediately parts of the body and immediately parts of the big toe. And then, say, a big toe is a part of the body not because it is a part of a bigger whole which is more immediately a part of the body, but rather a big toe is a part of the body because its immediate parts are all particles that are immediately parts of the body.

Prescinding from the view that relations need to bottom out somewhere, we should distinguish between fundamental parts and fundamental instances of parthood. One might have one without the other. Thus, one could have a story on which we are composed of immediate parts, which are composed of immediate parts, and so on ad infinitum. Then there would be fundamental instances of the parthood relation—they obtain between a thing and its immediate parts—but no fundamental parts. Or one could have a view with fundamental parts while denying that there are any fundamental instances of parthood.

In any case, there is clearly a lot of room for research in fundamental mereology here.

Thursday, May 30, 2019

Taste and cross-cultural encounters

After visiting the British Museum yesterday, I find it rather hard to take seriously the argument for the relativity of beauty from the diversity of taste. It seems clear that just as C. S. Lewis has argued for a moral core cutting across cultures, one can argue that there is an aesthetic core across cultures.

There is, however, an interesting apparent difference between the diversity of taste and the diversity of morals. I think a cross-cultural encounter involving a difference of taste regarding the best cultural artifacts—by each culture’s own standards—should typically lead to a broadening of taste. But a cross-cultural moral encounter should not typically lead to a broadening of morals. Very often, it should lead to a narrowing of morals: for instance, one culture learning from the other that slavery is wrong.

Why this difference? I think it may come from a difference in quantifiers.

As Aquinas already noted (in a somewhat different way), to be morally good, an action has to be good or neutral with respect to every relevant dimension of moral evaluation. If it is good with respect to courage and kindness and generosity, but it is bad with respect to justice (Robin Hood?), then the action is plain wrong. Thus as new dimensions of moral evaluation are discovered, as can happen in cross-cultural encounter, we get a narrowing of the actions that we classify as morally good.

On the other hand, for an item to be beautiful, it only needs to be beautiful with respect to some relevant dimensions of beauty. A musical performance is still beautiful on the whole even if the orchestra is dressed in dirty rags, and a painting can be beautiful even if it reeks of oil. Thus as we discover new dimensions of beauty, we get a broadening of the pieces that we classify as beautiful.
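
The quantifier difference can be sketched in code (a toy model with invented dimensions and scores): moral goodness is a universal quantification over dimensions, beauty an existential one.

```python
# Toy model of the quantifier difference. Dimensions and scores invented.
# Moral goodness: good or neutral on EVERY relevant dimension (universal).
# Beauty: beautiful on SOME relevant dimension (existential).

def morally_good(scores):
    """scores: dimension -> evaluation in {'good', 'neutral', 'bad'}."""
    return all(v in ("good", "neutral") for v in scores.values())

def beautiful(scores):
    """scores: dimension -> True if the item is beautiful on that dimension."""
    return any(scores.values())

robin_hood = {"courage": "good", "kindness": "good",
              "generosity": "good", "justice": "bad"}
performance = {"music": True, "orchestra_dress": False}

assert not morally_good(robin_hood)   # one bad dimension spoils the act
assert beautiful(performance)         # one beautiful dimension suffices
```

Adding a new key to a scores dictionary can only shrink the extension of `morally_good` and only grow the extension of `beautiful`, which is the narrowing/broadening asymmetry in miniature.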

Friday, May 24, 2019

A way forward on the normalizability problem for the Fine-Tuning Argument

The Fine-Tuning Argument claims that the life-permitting ranges of various parameters are so narrow that, absent theism, we should be surprised that the parameters fall into those ranges.

The normalizability objection is that if a parameter ξ can take any real value, then any finite life-permitting range of values of ξ counts as a “narrow range”, since every finite range is an infinitesimal portion of the full range from −∞ to ∞. Another way to put the problem is that there is no uniform probability distribution on the set of real numbers.

There is, however, a natural probability distribution on the set of real numbers that makes sense as a prior probability distribution. It is related to the Solomonoff priors, but rather different.

Start with a language L with a finite symbol set usable for describing mathematical objects. Proceed as follows. Randomly generate finite strings of symbols in L (say, by picking independently and uniformly randomly from the set of symbols in L plus an “end of string” symbol until you generate an end of string symbol). Conditionalize on the string constituting a unique description of a probability measure on the Lebesgue measurable subsets of the real numbers. If you do get a unique description of a probability measure, then choose a real number according to this distribution.

The result is a very natural probability measure PL (a countable weighted sum of probability measures on the same σ-algebra with weights adding to unity is a probability measure) on the Lebesgue measurable subsets of the real numbers.
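
The sampling procedure can be sketched with a deliberately tiny stand-in language (all names invented; the real proposal uses a rich mathematical language in which far more distributions are describable). Here the only strings that parse are of the form U[a,b] with a ≤ b, read as a uniform distribution on [a, b]:

```python
import random

# Toy sketch of the two-stage procedure: generate a random string,
# conditionalize on its parsing as a distribution, then sample from it.
SYMBOLS = list("01U[,]")  # tiny toy symbol set, not the real language L
END = "$"                 # end-of-string symbol

def random_string(rng):
    """Pick symbols independently and uniformly until END appears."""
    out = []
    while True:
        c = rng.choice(SYMBOLS + [END])
        if c == END:
            return "".join(out)
        out.append(c)

def parse(s):
    """Return ('uniform', a, b) if s uniquely describes a distribution, else None."""
    if s.startswith("U[") and s.endswith("]"):
        parts = s[2:-1].split(",")
        if len(parts) == 2:
            try:
                a, b = float(parts[0]), float(parts[1])
            except ValueError:
                return None
            if a <= b:
                return ("uniform", a, b)
    return None

def sample_real(rng, max_tries=10**6):
    """Conditionalize on parseability, then sample the described distribution."""
    for _ in range(max_tries):
        d = parse(random_string(rng))
        if d is not None:
            _, a, b = d
            return rng.uniform(a, b)
    raise RuntimeError("no parseable description generated")
```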

We can now in principle evaluate the fine-tuning argument using this measure.

The problem is that this measure is hard to work with.

Note that using this measure, it is false that all narrow ranges have very small probability. For instance, consider the intuitively extremely narrow range from 10^1000 to 10^1000. Supposing that the language is a fairly standard mathematical language for describing probability distributions, we can specify a uniform distribution on the 0-length interval from 10^1000 to 10^1000 as U[10^{1000}, 10^{1000}], which is 23 characters of LaTeX, plus an end of string. Using 95 ASCII characters, plus the end of string character, PL of this interval will be at least 96^−24 or something like 10^−48. Yet the size of the range is zero. In other words, intuitively narrow ranges around easily describable numbers, like 10^1000, get disproportionately high probability.
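
The arithmetic behind that lower bound can be checked directly:

```python
# The degenerate interval can be described in 23 LaTeX characters, plus
# one end-of-string symbol. With each symbol drawn uniformly from 96
# options (95 printable ASCII plus end-of-string), the chance of generating
# exactly this description bounds the measure of the interval from below.
description = "U[10^{1000}, 10^{1000}]"
assert len(description) == 23
lower_bound = (1 / 96) ** (len(description) + 1)
print(f"{lower_bound:.2e}")  # on the order of 10^-48
```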

But that is how it should be, as we learn from the fact that the exponent 2 in Newton’s law of gravitation had better have a non-zero prior, even though the interval from 2 to 2 has zero length.

Whether the Fine-Tuning Argument works with PL for a reasonable choice of L and for a particular life-permitting range of ξ is thus a hard question. But in any case, for a fixed language L where we can define a map between strings and distributions, we can now make perfectly rigorous sense of the probability of a particular range of possibilities for ξ. We have replaced a conceptual difficulty with a mathematical one. That’s progress.

Further, now that we see that there can be a reasonable fairly canonical probability on infinite sets, the intuitive answer to the normalizability problem—namely, “this range seems really narrow”—could constitute a reasonable judgment as to what answer would be returned by one’s own reasonable priors, even if these are not the same as the probabilities given above.

Oh, and this probability measure solves the tweaked problem of regularity, because it assigns non-zero probability to every describable event. I think this is even better than my modified Solomonoff distribution.

Improving on Solomonoff priors

Let’s say that we want prior probabilities for data that can be encoded as a countably infinite binary sequence. Generalized Solomonoff priors work as follows: We have a language L (in the original setting, it’ll be based on Turing machines) and we generate random descriptions in L in a canonical way (e.g., add an end-of-string symbol to L and randomly and independently generate symbols until you hit the end-of-string symbol, and then conditionalize on the string uniquely describing an infinite binary sequence). Typically the set of possible descriptions in L is countable and we get a nice well-defined probability measure on the space of all countably infinite binary sequences, which favors those sequences that are simpler in the sense of being capable of a simpler encoding.

Here is a serious problem with this method. Let N be the set of all binary sequences that cannot be uniquely described in L. Then the method assigns prior probability zero to N, even though most sequences are in N. In particular, this means that if we get an L-indescribable sequence—and most sequences generated by independent coin tosses will be like that—then no matter how much of it we observe, we will be almost sure of the false claim that the sequence is L-describable.

Here, I think, is a better solution. Use a language L that can give descriptions of subsets of the space Ω of countably infinite binary sequences. Now our (finitely additive) priors will be generated as follows. Choose a random string of symbols in L and conditionalize on the string giving a unique description of a subset. If the subset S happens to be measurable with respect to the standard (essentially Lebesgue) measure on infinite binary sequences (i.e., the coin toss measure), then randomly choose a point in S using a finitely additive extension of the standard measure to all subsets of S. If the subset S is not measurable, then randomly choose a point in S using any finitely additive measure that assigns probability zero to all singletons.

For a reasonable language L, the resulting measure gives a significant probability to an unknown binary sequence being indescribable. For Ω itself will typically be easily described, and so there will be a significant probability p that our random description of a subset will in fact describe all of Ω, and the probability that we have an indescribable sequence will be at least p.

It wouldn’t surprise me if this is in the literature.

Thursday, May 23, 2019

On a twist on too-many-thinkers arguments

One of the ways to clinch a too-many-thinkers argument (say, Merricks’ argument against perdurantism, or Olson’s argument for animalism) is to say that the view results in an odd sceptical worry: one doesn’t know which of the many thinkers one is. For instance, if both the animal and the person think, how can you know that you are the animal and not the person: it seems you should have credence 1/2 in each.

I like too-many-thinkers arguments. But I’ve been worried about this response to the sceptical clinching: When the animal and the person think words like “I am a person”, the word “I” refers to the person, even when used by the animal, and hence both think the truth. In other words, “I” means something like: the person colocated with the thinker/speaker.

But I think I have a good response to this response. It would be a weird limitation on our language if it did not allow speaker or thinker self-reference. Even if in fact “I” means the person colocated with the thinker/speaker, we should be able to stipulate another pronoun, “I*”, one that refers just to the thinker/speaker. And it would be absurd to think that one would not be able to justifiably assert “I* am a person.”

Wednesday, May 22, 2019

Functionalism and maximalism

It is widely held that consciousness is a maximal property—a property F such that, “roughly, … large parts of an F are not themselves F.” Naturalists have used maximality, for instance, to respond to Merricks’ worry that on naturalism, if Alice is conscious, so is Alice minus a finger, as they both have a brain sufficient for consciousness (see previous link). There are also the sceptical consequences, noted by Merricks, arising from thinking our temporal parts to be conscious.

But functionalists cannot hold to maximalism. For imagine a variant on the Chinese room experiment where the bored clerk processes Chinese characters with the essential help of exactly one stylus and one wax tablet. The functionalist is committed to the clerk plus the stylus and tablet—call that clerk-plus—being conscious, as long as the stylus and tablet are essential to the functioning of the system. But if clerk-plus is conscious, then by maximality the clerk is not: consciousness is a maximal property, and the clerk is a large part of clerk-plus. But it is absurd to think that the clerk turns into a zombie as soon as he starts to process Chinese characters.

Perhaps, though, instead of consciousness being maximal, the functionalist maximalist can say that maximally specific phenomenal types of consciousness—say, feeling such and such a sort of boredom B—are maximal. The clerk feels B, but clerk-plus is, say, riveted by reading the Romance of the Three Kingdoms. There is no violation of maximality with respect to the clerk’s feeling bored, because clerk-plus isn’t bored.

That could be the case. But it could also so happen that at some moment clerk-plus feels B as well. After all, the same feeling of boredom can be induced by different things. The Romance has slow bits. It could happen that clerk-plus is stuck in a slow bit, and for a moment clerk and clerk-plus lose sight of the details and are aware of nothing but their boredom—the qualitatively same boredom. And that violates maximality for specific types of consciousness.

If maximalism is needed for a naturalist theory of mind and if functionalism is our best naturalist theory of mind, then the best naturalist theory fails.

Monday, May 20, 2019

Presentism, gappy existence and self-causation

Yesterday, at the invitation of a student, I did a Marian pilgrimage to Walsingham. If you have a chance to go, go. It’s worth it for spiritual reasons. But here I want to reflect on a metaphysics of time question, related to the experience of participating in this venerable institution.

The Walsingham pilgrimage is an institution dating back to the middle ages. It was abolished by an unecumenical king in 1538, but then eventually re-established around the 19th century.

According to presentism, between the 16th and 19th centuries, it was true that the pilgrimage does not exist. Those who caused it to be re-established, thus, caused it to exist plain and simple. But it is very strange that one could cause to exist something that already once existed—and without any time travel or backwards causation. (Given time travel, one can make something and take it into the past. In making it, then, one caused something to exist that already existed. That’s just a part of the strangeness of time travel.)

One might try to get out of this puzzle by supposing that institutions like pilgrimages do not really exist, and that nothing that exists can have gappy existence. (As stated, corruptionist presentists who believe in a resurrection are out of luck. But they can say that when God is causing the re-existence of something, it’s not so strange.)

But the puzzle remains when we consider self-preservation.

Saturday, May 18, 2019


Plausibly—though there are some set-theoretic worries that require some care if the language is rich enough—for a fixed language, there are only countably many situations we can describe. Consequently, we only need to do Bayesian epistemology for countably many events. But this solves the problem of regularity for uncountable sample spaces. For even if there are uncountably many events, only countably many are describable and hence matter, and they form a field (i.e., are closed under finite unions and complements) and:

Proposition: For any countable field F of subsets of a set Ω, there is a countably additive probability measure P on the power set of Ω such that every event in F has non-zero probability.

Proof: Let the non-empty members of F be u1, u2, .... Let a1, a2, ... be any sequence of positive numbers adding up to 1 (e.g., an = 2^−n). For each n, choose one point xn ∈ un. Let P(A) = ∑n an An, where An is 1 if xn ∈ A and 0 otherwise.

Note that this proof uses the countable Axiom of Choice, but almost nobody is worried about that.
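
A toy instance of the construction behaves as the Proposition promises. Here Ω = [0,1), the u_n are a few rational intervals (a truncation, so the weights 1/2, 1/4, 1/8 sum to 7/8 rather than 1; a full instance would enumerate the whole countable field), and the arithmetic is exact:

```python
from fractions import Fraction

# Truncated toy instance of the Proposition's construction on Omega = [0,1).
us = [(Fraction(0), Fraction(1)),       # u_1 = [0, 1)
      (Fraction(0), Fraction(1, 2)),    # u_2 = [0, 1/2)
      (Fraction(1, 2), Fraction(1))]    # u_3 = [1/2, 1)
xs = [Fraction(1, 3), Fraction(1, 4), Fraction(2, 3)]  # witness x_n in u_n

def P(A):
    """P(A) = sum_n a_n * [x_n in A] with weights a_n = 2^-(n+1)."""
    return sum(Fraction(1, 2 ** (n + 1)) for n, x in enumerate(xs) if A(x))

def interval(a, b):
    return lambda x: a <= x < b

# Every listed u_n gets non-zero probability, and P is additive on disjoint sets.
assert all(P(interval(*u)) > 0 for u in us)
assert P(interval(0, 1)) == P(interval(0, Fraction(1, 2))) + P(interval(Fraction(1, 2), 1))
```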

Thursday, May 16, 2019

Analogies to ectopic pregnancy

The standard Catholic view of tubal pregnancy is that it is permissible to remove the tube with the child. The idea seems to be that the danger to the mother comes from the potential rupture of the tube, and hence removal of the tube is removal of that which poses the danger, and the death of the child is a non-intended side-effect, with the action justified by double effect. I’ve always been queasy about this reasoning, but I now have two related analogies that make me feel better about this.

Case 1: There are two astronauts on a spaceship, with no oxygen left in the air. The astronauts are wearing spacesuits with oxygen tanks. The oxygen tanks are sufficient for the astronauts to survive until they get home: 50% of the oxygen can be expected to be used up before getting home. However, one of the tanks is rigged by a malefactor with an explosive device such that if more than 20% of the oxygen is used, it will explode, killing both astronauts. The astronaut wearing that particular spacesuit is unconscious and cannot be consulted. It is not feasible to disarm the bomb or to swap tanks. The conscious astronaut removes the explosive tank from the other astronaut’s space suit and throws it into space, knowing that this will result in the unconscious astronaut dying from lack of oxygen. The intention, however, is to remove the item that will dangerously rupture if it is left in place. It is not the intention to kill the other astronaut. This is true even though it is the other astronaut’s breathing that would trigger the tank’s explosion.

The proximate source of the danger is the oxygen tank. But the more distant source is the breathing. It seems very plausible that it makes a moral difference whether the conscious astronaut shoots the unconscious astronaut to stop their breathing (wrong) or removes their tank to expel the danger (right action). This seems a legitimate case of double effect reasoning.

Case 2: Much as in Case 1, but (a) there is intense radiation outside the spaceship’s shielding, so that getting pushed into space even while wearing a spacesuit will be fatal, and (b) there is no way to separate the tank from the astronaut. Thus, the conscious astronaut picks up the explosive tank, and throws it far into space. The tank is connected to the unconscious astronaut, so the unconscious astronaut flies out with the tank, and is killed by radiation. The tank never explodes, because the oxygen doesn’t get depleted.

Again, this seems a perfectly legitimate case of double effect reasoning.

What about the alternative of removing the child from the tube, which orthodox Catholic ethicists tend to reject (unless done in the hope of reattaching the child in the correct place)? Well, the child is connected to the tube via a placenta. The placenta is to a large degree an organ of the child. As I understand it, removal of the child from the tube would require intentionally cutting the placenta, in a way that is fatal to the child. This directly fatal intervention seems akin to slicing the astronaut to remove them from the suit. This seems harder to justify.

Monday, May 13, 2019

A tweak to regularity

Let Gp be the law of gravitation that states that F = Gm1m2/r^p, for some real number p. There was a time when it was rational to believe G2. But here is a problem. When 0 < |p − 2| < 10^−100 (say), Gp is practically empirically indistinguishable from G2, in the sense that within the accuracy of our instruments it predicts exactly the same observations. Moreover, there are uncountably many values of p such that 0 < |p − 2| < 10^−100. This means that the prior probability for most (i.e., all but at most countably many) such values of p must have been 0. On the other hand, if the prior probability for G2 had been 0, then the posterior probability would have always stayed at 0 in our Bayesian updates (because the probability of our measurements conditionally on the denial of G2 never was 0, which it would have to have been to budge us from a zero prior).
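
The Bayesian point, that a zero prior can never be raised by conditioning on evidence that has positive probability under the alternatives, can be seen in a toy two-hypothesis update (numbers invented):

```python
# Toy Bayesian update with invented numbers. H2 = "the exponent is exactly 2";
# Hp = "the exponent is some particular alternative p". If P(Hp) = 0 and the
# evidence has positive probability given the denial of Hp, the posterior
# on Hp stays exactly 0 no matter what is observed.
prior = {"H2": 1.0, "Hp": 0.0}
likelihood = {"H2": 0.8, "Hp": 0.8}  # the evidence fits both equally well
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}
assert posterior == {"H2": 1.0, "Hp": 0.0}
```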

So, G2 is exceptional in the sense that it has a non-zero prior probability, whereas most hypotheses Gp have zero prior probability. This embodies a radical preference for a more elegant theory.

Let N be the set of values of p such that the rational prior probability P(Gp) is non-zero. Then N contains at most countably many values of p. I conjecture that N is the set of all the real numbers that can be specifically defined in the language of mathematics (e.g., 2, 3.8, e^π and the smallest real root of z^7 + 3z^6 + 2z^5 + 7πz^3 − z + 18).

If this is right, then Bayesian regularity—the thesis that all contingent hypotheses should have non-zero probability—should be replaced by the weaker thesis that all contingent expressible hypotheses should have non-zero probability.

Note that all this doesn’t mean that we are a priori certain that the law of gravitation involves a mathematically definable exponent. We might well assign a non-zero probability to the disjunction of Gp over all non-definable p. We might even assign a moderately large non-zero probability to this disjunction.

Punishment by loss of reputation

John Stuart Mill famously wrote:

We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow-creatures; if not by opinion, by the reproaches of his own conscience.

I have two concerns about the middle item, punishment “by the opinion of his fellow-creatures”: (1) standing and (2) due process.

1. Standing

Punishment requires the right kind of standing on the part of the punisher. Unless in some way you are under my authority or perhaps I am an aggrieved party, I do not have the standing to punish you. There are two ways of taking this worry.

First, one might take it that without standing it is literally impossible for me to punish you. It is certainly possible for me to treat you harshly, and my harshness can be a reaction to your wrongdoing, but perhaps it won’t be a punishment.

I am not completely sure about this, though. For suppose you have done something wrong and a vigilante without standing has imposed harsh treatment on you in reaction to this, a harsh treatment that would have counted as maxing out retribution if the vigilante had standing, and then you fall into the hands of an authority with the standing to punish. A case can be made that at that point it is inappropriate for the authority to impose further harsh treatment, and that the best explanation is that the vigilante has already punished you. But perhaps this case isn’t right. Our law does not, I think, work this way. A judge might take into account what you suffered at the hands of the vigilante and reduce your sentence, but it does not seem that the judge would be unjust in still giving you the full sentence that the law calls for (with the vigilante also being punished if caught). Moreover, the intuition that “you’ve suffered enough already” may apply even in cases where something bad happens to you as a non-punitive consequence of a crime, say if you’re a drunk driver and you crash into a wall causing yourself to be paralyzed from the neck down. So on the whole, I am dubious that it is possible to punish without standing.

The second worry about standing is that without standing, I have no right to impose the harsh treatment on you (barring special circumstances, such as your giving me permission). This is clear if in fact the previous worry about standing applies and the harsh treatment would not count as punishment—for in that case, the harsh treatment is unjustly applied, since the one relevant justification for it would be that it is a punishment, and it’s not. But even if the harsh treatment were to count as punishment, without standing an injustice has happened.

But perhaps third-parties do in fact have standing to punish. I can see two stories being told to defend this standing.

First, no man is an island, so if you wrong one person, perhaps you wrong all of society, and so third-parties have standing as aggrieved parties. I am doubtful, however, whether aggrieved parties as such do have standing to punish. My children do not have the right to punish each other for misdeeds committed against each other. Moreover, it seems implausible that there be a disjunctive story about the standing to punish, so that both authorities and aggrieved parties have standing. One might try to say that only aggrieved parties have standing to punish, and then say that authorities punish as representatives of the aggrieved community, but that seems mistaken. For authorities can also legitimately punish wrongs done against those that are not members of the aggrieved community. Parents can legitimately punish children for things that the children did against members of other families. (It is tempting to say that this is a punishment for the violation of family rules, which damages the peace of the family, but that approach does not seem pedagogically right.)

Moreover, in a Christian context, it is very dubious whether aggrieved parties have any right to punish on account of their grievance: to impose punishment on account of one’s own grievance seems to be the kind of behavior that the duty of forgiveness rules out and that is also ruled out by Romans 12:19. So a justification of punishment in terms of a standing that derives from being aggrieved is not available to Christians.

Second, perhaps random third-parties count as deputed by society to impose punishment by adverse opinion, even though they are not deputed to impose punishment by violent means. If so, then they have standing to punish on the grounds of deputed authority rather than on the grounds of being aggrieved. This fits much better with the anti-vengeance motif of the New Testament. Perhaps some evidence of such a deputation is that truth is a defense in defamation lawsuits.

I think an implicit deputation model is the best story about punishment by adverse third-party opinion. But I am still sceptical. One reason is this. Punishment by third-party opinion can be at least as harsh on the wrongdoer as a fine or even a moderate term of imprisonment. Yet we do not think courts have a duty to routinely significantly reduce punishments for significant crimes on the grounds that the person has already been punished by public opinion, or to increase punishments on the grounds that public opinion has been silent. Thus, adverse opinion does not seem to be a properly deputed part of the punishment.

2. Due process

Punishment requires procedural justice. But public opinion rarely follows best practices there. Even though punishments through adverse opinion can be as harsh on the accused as criminal penalties, the thorough examination of evidence, with a presentation of both sides by able legal representation and a factual examination by independent peers following a “beyond reasonable doubt” standard is rarely present in the case of punishment by public opinion. And even if there are no reasonable grounds for doubt about the wrongs committed, rarely is there a serious examination of evidence about mens rea or sanity.

About the only time that public opinion is able to follow our best practices is if the public opinion comes after a proper criminal trial and is entirely conditioned on its outcome. But that is rare, and anyway isn’t the case that Mill is thinking about.

Final remarks

The above does not mean, however, that public opinion needs to be silent on wrongs done. For there are other reasons to criticize someone’s conduct besides punishment, such as:

  • protecting vulnerable others

  • leading the perpetrator to change of behavior and/or heart

  • inspiring others to resist injustice.

But if I am right, it is crucial for the sake of justice that the adverse public opinion be motivated by such goods as these rather than by retribution. And there is always the danger of self-deceit and the need for prudent choice of means (public denunciation seems less likely to lead to positive change than private admonition).

Saturday, May 11, 2019

Feeling bad about harms to our friends

Suppose something bad happens to my friend, and while I am properly motivated in the right degree to alleviate the bad, I just don’t feel bad about it (nor do I feel good about it). Common sense says I am morally defective. But suppose, instead, something bad happens just to me, and I stoically (I am not making any claims about the Stoic movement by using this word, despite the etymology) bear up under it, without feeling bad, though being properly motivated to alleviate the harm. Common sense praises this rather than castigating it. Yet, aren’t friends supposed to be other selves?

So, we have a paradox generated by:

  1. The attitudes we should have towards our friends are very much like those we should have towards ourselves.

  2. It is wrong not to feel bad about harms to our friends even when we are properly motivated to fight those harms.

  3. It is not wrong not to feel bad about harms to ourselves when we are properly motivated to fight those harms.

As some terminological background, feeling bad about our friends’ losses is not exactly empathy. In empathy, we feel the other’s feelings as we see things from their point of view. So, feeling bad about harms to our friends will only be empathy if our friends are themselves feeling bad about these harms. There are at least two kinds of cases where we feel bad about harms to our friends when our friends themselves do not: (a) our friends are being stoical and (b) our friends are unaware of the harms (e.g., their reputation is being harmed by gossip we witness, or our friends are being harmed by acting viciously while thinking it’s virtuous). Moreover, even when our friends are feeling bad about the harms, our feeling bad about the harms will only be a case of empathy if we feel bad because they are feeling bad. If we feel bad because of the badness of the harms, that’s different.

In fact, we don’t actually have a good word in English for feeling bad on account of a friend’s being harmed. Sympathy is perhaps a bit closer than empathy, but it has connotations that aren’t quite right. Perhaps “compassion” in the OED’s obsolete sense 1 and sense 2a is close. The reason we don’t have a good word is that normally our friends themselves do feel bad about having been harmed, and our terminology fails to distinguish whether our feeling bad is an instance of sharing in their feeling or of emotionally sharing in the harm to them. (Think of how the “passion” in “compassion” could be either the other’s negative feeling or the underlying harm.) And I think we also don’t have a word for feeling bad on account of our own being harmed, our “self-compassion” (we do have “self-pity”, but that’s generally seen as bad), though we do have thicker words for particular species of the phenomenon, such as shame or grief. So I’ll just stick to the clunky “feeling bad on account of harm”.

When we really are dealing with empathy, i.e., when we feel bad for our friend because our friend feels bad about it, the paradox is easier to resolve. We can add a disjunct to (1) and say:

  4. The attitudes we should have towards our friends are very much like either those that we should have towards ourselves or those that our friends non-defectively have towards themselves.

This is a bit messy. I’m not happy with it. But it captures a lot of cases.

But what about the pure case of feeling bad for harms to a friend, not because the friend feels bad about them?—either because the friend doesn’t know about the harm, or the friend is being stoical, or our bad feeling is a direct reflection of the harms rather than an indirect one via the other’s feeling of the harms. (Of course there will also be the special case where the feeling is the harm, as perhaps in the case of pains.) I am not sure.

I actually feel a pull to saying that especially when our friend doesn’t feel bad about the harm, we should, on their behalf. If our friend nobly does not feel the insult, we should feel it for them. And if our friend is being unjustly maligned, we should not only work to rescue their reputation, but we should feel bad.

But I am still given pause by the plausibility of (1) (even as modified to (4)) and (3). One solution would be to say that we should feel bad about harms to ourselves, that we should not be stoical about them. But I don’t want to say that the stoical attitude is always wrong. If our friends are being stoical about something, we don’t always want to criticize them for it, even mentally. Still there are cases where our friends are rightly criticizable for a stoical attitude. One case is where they should be grieving for the loss of someone they love. A more extreme case is where they should be feeling guilt for vicious action—in that case, we wouldn’t even use the fairly positive word “stoical”, but we would call their attitude “unfeeling” or something like that. In those cases, at least, it does seem like they should feel bad for the harm, and we should likewise feel bad on their behalf whether or not they do. (And, yes, this feeling may be in the neighborhood of a patronizing feeling in the case where they are not feeling the guilt they should—but the neighborhood of patronization has some places that sometimes need to be occupied.)

Still, I doubt that it is ever wrong not to feel something. That would be like saying that it is wrong not to smell something. Emotions are perceptions of putative normative facts, I think. It can be defective not to smell an odor, either because one has lost one’s sense of smell or because one has failed to sniff when one should have. But the failure to smell an odor is not wrong, though it may be the consequence of doing something wrong, as when the repair person has neglected to sniff for a gas leak.

Instead, I think the thing to say is that there is a good in feeling bad about harms to a friend—or to ourselves. The good is the good of correct perception of the normative state of affairs. A good always generates reasons, and the good is to be pursued absent countervailing reasons. But there can be countervailing reasons. When I injure my shoulder, my pain is a correct perception of my body’s injured state. Nonetheless, because that pain is unpleasant (or whatever the right story is about why we rightly avoid pain), I take an ibuprofen. I have reason to feel the pain, namely because the pain is a correct way of seeing the world, but I also have reason not to feel the pain, namely because it hurts.

Similarly, if someone has insulted me, I have reason to feel bad, because feeling bad is a correct reflection of the normative state of affairs. But I also have reason not to feel bad, because feeling bad is unpleasant. So it can be reasonable not to feel bad. Loving my friend as myself does not require me to make greater sacrifices for my friend than I would make for myself, though it is sometimes supererogatory to do so (and sometimes foolish, as when the sacrifice is excessive given the goods gained). So if I don’t have an obligation to sacrifice my equanimity in order to feel bad for the insult to me, it seems that I don’t have an obligation to sacrifice it in order to feel bad for the insult to my friend. But that sounds wrong, doesn’t it?

So where does the asymmetry come from? Here is a suggestion. In typical cases where our friend feels bad for the harm, our feeling does not actually match the intensity of our friend’s, and this is not a defect in friendship. So the unpleasantness of feeling bad for oneself is worse than in the case of feeling bad for one’s friend. Thus, more equanimity is sacrificed for the sake of our feelings correctly reflecting reality when it is our own case, and hence the argument that if I don’t have an obligation to make the sacrifice for myself, I don’t have an obligation to make the sacrifice for my friend is fallacious, as the sacrifices are not the same. Furthermore, to be honest, there is a pleasure in feeling bad for a friend. The OED entry for “compassion” cites this psychological insight from a sermon by Mozley (1876): “Compassion … gives the person who feels it pleasure even in the very act of ministering to and succouring pain.” I haven’t read the rest of the sermon, but I think this is not any perverse wallowing or the like. The “compassion” is an exercise of the virtue of friendship, and there is an Aristotelian pleasure in exercising a virtue. And this is much more present when it is one’s friend one is serving. Thus, once again, the sacrifice tends to be less when one feels bad for one’s friend than when one feels bad for oneself, and hence the reason that one has to feel bad for one’s friend is less often outbalanced by the reason not to than in one’s own case.

Nonetheless, the reason to feel bad for one’s friend can be outbalanced by reasons to the contrary. Correct perceptual reflection of reality is not the only good to be pursued—not even the only good in the friendship.

Friday, May 10, 2019

Closure views of modality

Logical-closure views of modality have this form:

  1. There is a collection C of special truths.

  2. A proposition is necessary if and only if it is provable from C.

For instance, C could be truths directly grounded in the essences of things.

By Goedel Second Incompleteness considerations like those here, we can show that the only way a view of modality like this could work is if C includes at least one truth that provably entails an undecidable statement of arithmetic.

This is not a problem if C includes all mathematical truths, as it does on Sider’s view.


Suppose narrowly logical necessity LL is provability from some recursive consistent set of axioms and narrowly logical possibility ML is consistency with that set of axioms. Then Goedel’s Second Incompleteness Theorem implies the following weird anti-S5 axiom:

  • ∼LLMLp for every statement p.

In particular, the S5 axiom MLp → LLMLp holds only in the trivial case where MLp is false.

For suppose we have LLMLp. Then MLp has a proof. But MLp is equivalent to ∼LL∼p. However, we can show that ∼LL∼p implies the consistency of the axioms: for if the axioms are not consistent, then by explosion they prove ∼p, and hence LL∼p holds. Thus, if LLMLp, then ∼LL∼p can be proved, and hence consistency can be proved, contrary to Second Incompleteness.
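The reasoning can be compressed into a short derivation, writing Con for the arithmetized consistency statement of the axioms (this is just a restatement of the argument above):

```latex
\begin{align*}
M_L p &\leftrightarrow {\sim} L_L {\sim} p
  && \text{(consistency of $p$ with the axioms = unprovability of ${\sim}p$)}\\
{\sim}\mathrm{Con} &\rightarrow L_L {\sim} p
  && \text{(explosion: inconsistent axioms prove everything)}\\
M_L p &\rightarrow \mathrm{Con}
  && \text{(contraposing the previous line)}\\
L_L M_L p &\rightarrow L_L \mathrm{Con}
  && \text{(the above reasoning is formalizable from the axioms)}\\
{\sim} L_L \mathrm{Con} &
  && \text{(Second Incompleteness, given the axioms are consistent)}\\
{\sim} L_L M_L p &
  && \text{(anti-S5, from the two previous lines)}
\end{align*}
```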

The anti-S5 axiom is equivalent to the axiom:

  • MLLLp.

In particular, every absurdity—even 0≠0—could be necessary.

I wonder if there is any other modality satisfying anti-S5.

An infinite chain can't have two ends

Say that a chain C is a collection of nodes with the following properties:

  1. Each node is directly connected to at most two other nodes.

  2. If x is directly connected to y then y is directly connected to x (symmetry).

  3. C is globally connected in the sense that for any non-empty proper subset S of C, there is a node in S and a node outside of S that are directly connected to each other.

(This is a different sense of “chain” from the one in Zorn’s Lemma.)

Fun fact: Every infinite chain has at most one endpoint, where an endpoint is a node that is directly connected to only one other node.

I.e., one cannot join two nodes with an infinite chain.
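Here is a sketch of why the fun fact holds, using only conditions (1)–(3):

```latex
Let $e$ be an endpoint of a chain $C$. Set $x_0 = e$, let $x_1$ be its
unique neighbor, and let $x_{n+1}$ be the neighbor of $x_n$ other than
$x_{n-1}$, if there is one. An easy induction shows the $x_i$ are
distinct: a repetition would give some earlier node a third neighbor, or
give $e$ a second one, contradicting (1). If the sequence terminates at
$x_n$, then every node of $S = \{x_0, \dots, x_n\}$ has all of its
connections inside $S$, so by (3), $S$ cannot be a non-empty proper
subset of $C$; hence $C = S$ is finite. If the sequence never
terminates, the same argument applied to $S = \{x_0, x_1, x_2, \dots\}$
gives $C = S$, so $C$ is arranged like $\mathbb{N}$ and $e$ is its only
endpoint. Thus an infinite chain has at most one endpoint.
```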

Corollary: We cannot join two events by an infinite chain of instances of immediate causation.

I've occasionally wondered if there is a useful generalization of transitive closure to allow for infinite chains, and to my intuition the fact above suggests that there isn't.

An argument for animals in heaven

In quick outline, here’s a valid argument:

  1. There are plants in heaven.

  2. If there are plants in heaven, there are non-human animals in heaven.

  3. So, there are non-human animals in heaven.

Let me expand on the argument.

Humans in heaven (i.e., on the New Earth, after resurrection) will have both supernatural and natural fulfillment. The natural fulfillment of humans requires an appropriate environment. That environment requires plants. A heavenly city with no trees or grass or flowers just wouldn’t be heavenly for us. This is fitting as humans were made for a garden. The fall turned the garden into a field of hard labor for survival, but all will be restored, and so there will be a garden again.

But plants, of the sort that form the natural environment of humans, require an ecosystem that includes non-human animals. There need to be pollinators in the air and worms in the ground. And how eerily quiet a garden would be with no birds chirping, how unnatural for humans.

This does not mean that there will be a resurrection of animals. Just as a plant can be perfect without living forever, a non-rational animal can be perfect without living forever. One may, however, worry that we will form attachments to non-human animals and would be saddened by their death. There are three responses. First, perhaps some non-human organisms could live forever, namely particular ones which are important to humans: say, a bonsai or a companion dog. Second, perhaps we wouldn’t form these attachments, maybe because no animals would be tame. Third, it might be that we would all transcend time to the extent that (a) our memory would not fade and (b) we would all have the correct view of time, i.e., eternalism, so that we would be constantly aware that our beloved animal exists simpliciter, albeit in the past.

Thursday, May 9, 2019

Yet another bundle theory of objects

I will offer a bundle theory with one primitive symmetric relationship. Moreover, the primitive relationship is essential to pairs. I don’t like bundle theories, but this one seems to offer a nice and elegant solution to the bundling problem.

Here goes. The fundamental entities are tropes. The primitive symmetric relationship is partnership. As stated above, this is essential to pairs: if x and y are partners in one world, they are partners in all worlds in which both exist. If x and y are tropes that exist and are partners, then we say they are coinstantiated.

Say that two possible tropes, existing in worlds w1 and w2 respectively, are immediate partners provided that there is a possible world where they both exist and are partners. Then derivative partnership is defined to be the transitive closure of immediate partnership.
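The transitive-closure bookkeeping can be made concrete in a toy model (a sketch only; the trope names like `a1` and the helper `derivative_partners` are hypothetical illustrations, not anything in the theory itself):

```python
# Toy model of derivative partnership (all names hypothetical).
# Immediate partnership is a symmetric relation on possible tropes;
# derivative partnership is its transitive closure, computed here
# with a simple union-find.

immediate = [("a1", "b1"), ("b1", "b2"), ("b2", "c2")]

parent = {}

def find(x):
    """Root of x's closure class, with path halving."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

for x, y in immediate:
    union(x, y)

def derivative_partners(x, y):
    """True iff x and y are linked by a chain of immediate partnerships."""
    return find(x) == find(y)

print(derivative_partners("a1", "c2"))  # True: linked via b1 and b2
print(derivative_partners("a1", "d3"))  # False: d3 is an unrelated trope
```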

The bundles in any fixed world are in one-to-one correspondence with the maximal non-empty pluralities of pairwise-partnered tropes, and each bundle is said to have each of the tropes that makes up the corresponding plurality. We have an account of transworld identity: a bundle in w1 is transworld identical with a bundle in w2 just in case some trope in the first bundle is a derivative partner of some trope in the second bundle. (This is a four-dimensionalist version. If we want a three-dimensionalist one, then replace worlds throughout with world-time pairs instead.) So we have predication (or as good as a trope theorist is going to have) and identity. That seems enough for a reductive story about objects.

We can even have ersatz objects if we have the ability to form large transworld sets of possible tropes: just let an ersatz object be a maximal set of pairwise derivatively partnered tropes. An ersatz object then is said to ersatz-exist at a world w iff some trope that is a member of the ersatz object exists at w. We can then count objects by counting the ersatz objects.

This story is compatible with all our standard modal intuitions without any counterpart theoretic cheats.

Of course, the partnership relationship is mysterious. But it is essential to pairs, so at least it doesn’t introduce any contingent brute facts. And every story in the neighborhood has something mysterious about it.

There are two very serious problems, however:

  1. On this story we don’t really exist. All that really exist are the tropes.

  2. This story is incompatible with transubstantiation—as we would expect of a story on which there is no substance.

So what’s the point of this post? Well, I think it is nice to develop a really good version of an opposing theory, so as to be able to focus one’s critique on what really matters.

Wednesday, May 8, 2019

A ray of Newtonian particles

Imagine a Newtonian universe consisting of an infinite number of equal masses equidistantly arranged at rest along a ray pointing to the right. Each mass other than the first will experience a smaller gravitational force to the left and a greater (but still finite, as it turns out) gravitational force to the right. As a result, the whole ray of masses will shift to the right, while getting compressed, as the masses further out experience less of a disparity between the leftward and rightward forces. There is something intuitively bizarre about a whole collection of particles starting to move in one direction under the influence of their mutual gravitational forces. It sure looks like a violation of conservation of momentum. Not that such oddities should surprise us in infinitary Newtonian scenarios.
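A quick numerical sketch, with G, the masses, and the spacing all set to 1 (`net_force` is just an illustrative helper): the net rightward force on the nth mass is ζ(2) minus the nth partial sum of Σ 1/j², which is positive for every mass and shrinks as n grows.

```python
from math import pi

ZETA2 = pi ** 2 / 6  # sum of 1/j^2 over all j >= 1

def net_force(n):
    """Net rightward force on mass n (0-based, leftmost mass is 0),
    with G, the masses, and the spacing all set to 1: the pull from the
    infinitely many masses to the right minus the pull from the n masses
    to the left."""
    return ZETA2 - sum(1 / j ** 2 for j in range(1, n + 1))

forces = [net_force(n) for n in range(6)]
print([round(f, 4) for f in forces])
# [1.6449, 0.6449, 0.3949, 0.2838, 0.2213, 0.1813]

# Every mass is pushed rightward, but less so the further out it is:
# the ray both moves right and compresses.
assert all(f > 0 for f in forces)
assert all(a > b for a, b in zip(forces, forces[1:]))
```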

Wednesday, May 1, 2019

Wilde Lectures schedule

For the benefit of any readers who will be in Oxford this month, here is a schedule of my Wilde Lectures in Natural and Comparative Religion.

The Bayesian false belief pandemic

Suppose that a credence greater than 95% suffices to count as a belief, and that you are a rational agent who tossed ten fair coins but did not see the results. Then you have at least 638 false beliefs about coin toss outcomes.

To see this, for simplicity, suppose first that all the coins came up heads. Let Tn be the proposition that the nth coin is tails. Then any disjunction of five or more of the Tn has probability at least 96.875% (it fails only if all of the disjoined coins came up heads), and so you believe every such disjunction. Each such belief is false, because all the coins in fact came up heads. There are 638 (pairwise logically inequivalent) disjunctions of five or more of the Tn. So, you have at least 638 false beliefs here (even if we are counting up to logical equivalence).

Things are slightly more complicated if not all the coins come up heads, but exactly the same conclusion still holds: you have 638 false beliefs, each a disjunction of five or more false single-coin-outcome propositions.
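Both numbers are easy to verify (a quick check; `comb` is the binomial coefficient from Python's standard library):

```python
from math import comb

# Count the (pairwise logically inequivalent) disjunctions of five or
# more of the ten propositions T1, ..., T10: one per subset of size >= 5.
count = sum(comb(10, k) for k in range(5, 11))
print(count)  # 638

# A disjunction of exactly five of the Tn fails only when all five of
# those coins land heads, so its probability is 1 - 2**-5; disjunctions
# of more than five Tn are likelier still.
p5 = 1 - 2 ** -5
print(p5)  # 0.96875, above the 95% belief threshold
```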

But it seems that nothing went wrong in the coin toss situation: everything is as it should be. There is no evil present. So, it seems, reasonable false belief is not an evil.

I am not sure what to make of this conclusion, since it also seems to me that it is the telos of our beliefs to correctly represent reality, and a failure to do that seems an evil.

Perhaps the thing to say is this: the belief itself is bad, but having a bad belief isn’t always intrinsically bad for the agent? This seems strange, but I think it can happen.

Consider a rather different case. I want to trigger an alarm given the presence of radiation above a certain threshold. I have a radiation sensor that has practically no chance of being triggered when the radiation is below the threshold but has a 5% independent failure rate when the radiation is above the threshold. And a 5% false negative rate is not good enough for my application. So I build a device with five independent sensors, and have the alarm be triggered if any one sensor goes off. My false negative rate goes down to about 3 ⋅ 10⁻⁷. Suppose now four sensors are triggered and the fifth is not. The device is working correctly and triggers the alarm, even though one sensor has failed. The failure of the sensor is bad for the sensor but not bad for the device.
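The arithmetic, assuming the five sensor failures are independent:

```python
# Each sensor independently misses above-threshold radiation 5% of the
# time; the alarm fires if any one sensor triggers, so the device misses
# only when all five sensors miss at once.
single_miss = 0.05
device_miss = single_miss ** 5
print(device_miss)  # about 3.125e-07, i.e. roughly 3 * 10^-7
```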

Another move is to say that there is an evil present in the false belief case, but it’s tiny.

And yet another move is to deny that one should have a belief when the credence rises above a threshold.