Thursday, July 2, 2020

Does supererogation always deserve praise?

Suppose that Bob spent a month making a birthday cake for Alice that was only slightly better than what was available in the store, and Bob did not enjoy the process at all. One can fill out the case in such a way that what Bob did was permissible. Moreover, it was more burdensome to him than buying the slightly less good cake would have been, and it was better for Alice, so it looks like the action was supererogatory. Nonetheless, we wouldn’t praise this action: we would say that the action was insufficiently prudent. So, it seems that not every supererogatory action is praiseworthy.

Perhaps the problem is with my understanding of supererogation. If we add the necessary condition for supererogation that the action is on balance better than the relevant alternative, then we can avoid saying that Bob’s action is supererogatory, because it is not better on balance than the alternative. But I would rather avoid adding that a supererogatory action is on balance better than the alternative. For then it becomes mysterious how it can be permissible to do the alternative.

I am inclined to just bite the bullet and deny that supererogation always deserves praise.

Generalizing supererogation

My preferred way of understanding supererogation is that an action is supererogatory provided that it is permissible and more burdensome than some permissible alternative (see here for a defense). This suggests an interesting generalization. Let J denote an individual or a group (perhaps described relative to the agent). Then an action is J-supererogatory provided that it is permissible and more burdensome for J than some permissible alternative.

Then supererogatory actions are, in the new terminology, agent-supererogatory. On the other hand, we have a new category of actions, others-supererogatory. These actions are permissible but more burdensome to others than some permissible alternative. An action can be both agent-supererogatory and others-supererogatory. For instance, suppose that by sacrificing two arms I can save two people from losing two arms each, but by sacrificing one arm I can save one person from losing one arm. And suppose I have no special duties here, so it is permissible for me to make no sacrifice at all. Then, the action of sacrificing one arm is agent-supererogatory (it is more burdensome to me than the permissible alternative of no sacrifice) and others-supererogatory (it is more burdensome to others than the permissible alternative of sacrificing both arms would be).

Supererogation and determinism

  1. If at most one action is possible for one, that action is not supererogatory.

  2. If determinism is true, then there is never more than one action possible for one.

  3. So, if any action is supererogatory, determinism is false.

There is controversy over (2), but I don’t want to get into that in this post. What about (1)? Well, the standard story about supererogation is something like this: A supererogatory action is one that is better than, or perhaps more burdensome than, some permissible alternative. In any case, supererogatory actions are defined in contrast to a permissible alternative. But that permissible alternative has got to be possible for one in order to count as a genuine alternative. For instance, suppose I stay up all night with a sick friend. That’s better than going to sleep. But if there is loud music playing which would make it impossible for me to go to sleep and I am tied up so I can’t go elsewhere, then my staying up all night with the friend is not supererogatory.

Tuesday, June 30, 2020

Do promises sometimes make otherwise wrong actions permissible?

Consider a variant of my teenage Hitler case. You’re a hospital anesthetist and teenage Hitler is about to have an emergency appendectomy. The only anesthetic you have available is one that requires a neutralizer to take the patient out of anesthesia—without the neutralizer, the patient dies. You know (an oracle told you) that if teenage Hitler survives, he’ll kill millions. And you’re the only person in this town who knows how to apply anesthesia or the neutralizer.

You’re now asked to apply anesthesia. You have two options: apply or refuse. If you refuse, the surgeon will perform the appendectomy without anesthesia, causing excruciating pain to a (still) innocent teenager, who will still go on to kill millions. Nobody benefits from your refusal.

But if you apply anesthesia, you will put yourself in a very awkward moral position. Here is why. Once the surgery is over, standard practice will be to apply the neutralizer. But the Principle of Double Effect (PDE) will forbid you from applying the neutralizer. For applying the neutralizer is an action that has two effects: the good effect of saving teenage Hitler’s life and the evil effect of millions dying. PDE allows you to do actions that have a foreseen evil effect only when the evil effect is not disproportionate to the good effect. But here the evil effect is disproportionate. So, PDE forbids application of the neutralizer. Thus, if you know yourself to be a morally upright person, you also know that if you apply the anesthesia, you will later refuse to apply the neutralizer. But surely it is wrong to apply the anesthesia to an innocent teenager while expecting not to apply the neutralizer. For instance, it would be clearly wrong to apply the anesthesia if one were out of neutralizer.

So, it seems you need to refuse to apply anesthesia. But your reasons for the refusal will be very odd: you must refuse to apply anesthesia, because it would be morally wrong for you to neutralize the anesthesia, even though, compared to the scenario where the operation happens without anesthesia, no one is worse off and someone is better off in the scenario where you apply anesthesia and neutralize it. To make the puzzle even sharper, we can suppose that if teenage Hitler has the operation without anesthesia, he will blame you for the pain, and eventually add your ethnic group—which otherwise he would have no prejudice against—to his death lists. So your refusal to apply anesthesia not only causes pain to an innocent teenager but causes many deaths.

The logical structure here is this: If you do A, you will be forbidden from doing B. But you are not permitted to do A if you expect not to do B. And some are much better off and no one is worse off if you do both A and B than if you do neither.

Here is a much more moderate case that seems to have a similar structure. Bob credibly threatens to break all of Carl’s house windows unless Alice breaks one of Carl’s windows. It seems that it would be right for Alice to break the window, since any reasonable person would choose to have one window broken rather than all of them. But suppose instead Bob threatens to break all of Carl’s windows unless Alice promises to break one of Carl’s windows tomorrow. And Alice knows that by tomorrow Bob will be in jail. Alice knows that if she makes the promise, she would do wrong to keep it, for Carl’s presumed permission of one window being broken to save the other windows would not extend to the pointless window-breaking tomorrow. And one shouldn’t make a promise one is planning not to keep (bracketing extreme cases, of which this is not one). So Alice shouldn’t make the promise. But no one would be worse off if Alice made the promise and kept it.

I wonder if there isn’t a way out of both puzzles, namely to suppose that in some cases a promise makes permissible something that would not otherwise be permissible. Thus, it would normally be wrong to apply the neutralizer to teenage Hitler. But if you promised to do so (e.g., implicitly when you agreed to perform your ordinary medical duties at the hospital, or explicitly when you reassured his mom that you’ll bring him out of anesthesia), then it becomes permissible, despite the fact that many would die if you kept the promise. Similarly, if Alice promised Bob to break the window, it could become permissible to do so. Of course, we had better not say in general that promises make permissible things that would otherwise be impermissible.

The principle here could be roughly something like this:

  1. If it would be permissible for you to now intentionally ensure that a state of affairs F occurs at a later time t, then it is permissible for you to promise to bring about F at t and then to do so if no relevant difference in the circumstances occurs.

Consider how (1) applies to the teenage Hitler and window-breaking cases.

It would be permissible for you to set up a machine that would automatically neutralize Hitler’s anesthesia at the end of the operation, and then to administer anesthesia. Thus, it is now—i.e., prior to your administering the anesthesia—permissible for you to ensure that Hitler’s anesthesia will be neutralized. Hence, by (1) it is permissible for you to promise to neutralize the anesthesia and then to keep the promise, barring some relevant change in the circumstances.

Similarly, it would be permissible for you to throw a rock at Carl’s window from very far away (out in space, say) so that it would only reach the window tomorrow. So, by (1) it is permissible for you to promise to break the window tomorrow and then to keep the promise.

On the other hand, take the case where an evildoer asks you to promise to kill an innocent tomorrow or else she’ll kill ten today, and suppose that tomorrow the evildoer will be in jail and unable to check up on what you did. It would be wrong for you to now intentionally ensure the innocent dies tomorrow, so (1) does not apply and does not give you permission to make and keep the promise. (Some people will think it’s OK to make and break this promise. But no one thinks it’s OK to make and keep this promise.)

Principle (1) seems really ad hoc. But perhaps this impression is reduced when we think of promises as a way of projecting our activity forward in time. Principle (1) basically says that if it would be permissible to project our activity forward in time by making a robot—or by self-hypnosis—then we should be able to accomplish something similar by a promise.

The above is reminiscent of cases where you promise to ignore someone’s releasing you from a promise. For instance, Alice, a staunch promoter of environmental causes, lends Bob a large sum of money, on the condition of Bob making the following promise: Bob will give the money back in ten years, unless Alice’s ideals shift away from environmentalism in which case he will give it to the Sierra Fund, notwithstanding any pleas to the contrary from Alice. The current context—Alice’s requirements at borrowing time—becomes normative at the time for the promise to be kept, notwithstanding some feared changes.

I am far from confident of (1). But it would let one escape the unhappy position of saying that in cases with the above structure one is required to let the worst happen. I expect there are counterexamples to (1), too. But perhaps (1) is true ceteris paribus.

Sunday, June 28, 2020

Pluralism in public life

Consider this formulation of the central problem of a pluralist democracy:

  1. How to have a democracy where there is a broad plurality of sets of values?

Assuming realism about the correct set of values, this is roughly equivalent to:

  2. How to have a democracy where most people are wrong in different ways about the values?

But when we think about (1) and (2), we are led to thinking about the problem in different ways. Formulation (1) leads us to think the problem is with the state, which should somehow accommodate itself to the plurality of values. Formulation (2) points us, however, to the idea that the problem is with the people (including perhaps ourselves) who have the wrong set of values.

My own view is that there is a partial realism about values. Specifically, there is such a thing as the correct set of values. But there is a legitimate plurality of rankings of the values, though even there not everything goes—some rankings violate human nature. As a result, the problem is both with us, in that most of us have the wrong set of values and have some prioritizations that violate human nature, and with the state, which needs to accommodate a legitimate plurality of prioritizations.

Wednesday, June 24, 2020

Two attempts at deriving internal time from the causal order of modes

It would be nice to define the internal time of a substance in terms of the causal order of its accidents.

For each mode (i.e., accident or substantial form) α that a finite substance x has, there is the event cα of α’s being caused. Causal priority provides a strict partial ordering on the events cα.

Perhaps the simplest theory of the internal time of the substance x is that the moments of internal time just are the events cα and their order just is the causal priority order.

This has the consequence that internal time need not be totally ordered, since one can have cases where α ≠ β but there is no priority relation between cα and cβ. This consequence is welcome and unwelcome. It is welcome, as it allows one to give a nice account of bilocation involving the bifurcation of internal time. It is unwelcome, as intuitively time is linear. Let’s see if we can do something to reduce the unwelcome consequence.

Let’s suppose—as per causal finitism—that causal interactions are discrete. Then we can define a fundamental distance between the moments of internal time: d(cα, cβ) is the length of the longest unidirectional causal priority chain between cα and cβ. One might reasonably hypothesize that d(cα, cβ) is something of the order of magnitude of the temporal distance between cα and cβ in the rest frame of the substance in units of the order of Planck time. (Note that d is not a metric because of the unidirectionality constraint on the chains.)

This lets us have a second way of defining the internal time of a substance x. Let f be x’s substantial form. Then we can define “the start time” of a mode α as d(cf, cα): the length of the longest internal causal priority chain from cf to cα. Now likely some modes will have a simultaneous internal start time—they will have the same distance to cf.
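
To make this concrete, here is a minimal Python sketch, assuming the causal priority relation on the events cα is given as a finite directed acyclic graph (the toy graph and the longest_chain helper are hypothetical illustrations, not part of the theory itself):

    # Hypothetical toy model: nodes are mode-causing events; edges run from an
    # event to the events immediately causally posterior to it.
    edges = {
        "c_f": ["c_a", "c_b"],  # the substantial form causes two accidents
        "c_a": ["c_c"],
        "c_b": ["c_c"],         # which jointly cause a third accident
        "c_c": [],
    }

    def longest_chain(src, dst):
        """Length of the longest unidirectional causal priority chain from src
        to dst; None if dst is not causally posterior to (or equal to) src."""
        if src == dst:
            return 0
        lengths = [longest_chain(mid, dst) for mid in edges[src]]
        lengths = [n + 1 for n in lengths if n is not None]
        return max(lengths) if lengths else None

    # Internal start time of each mode: d(c_f, c_alpha).
    for event in edges:
        print(event, longest_chain("c_f", event))
    # c_f 0, c_a 1, c_b 1, c_c 2 -- c_a and c_b come out internally simultaneous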

For this to define an intuitively plausible time sequence, we need the substance to have lots of interconnections between its accidents. Ordinary substances do seem to have that.

And perhaps some accidents won’t have an internal start time—if God turns me blue right now, my blueness won’t have an internal start time. But nonetheless that blueness can be “attached” to my internal temporal sequence by noting that it will be close according to d to some of my near-future accidents. For that miraculous blueness will interact with some of my other accidents to produce new accidents that are properly in my middle age. For instance, it will interact with my memories of observations of things not turning blue to generate the accident of surprise.

Monday, June 22, 2020

Thomson's core memory paradox

This is a minor twist on the previous post.

Magnetic core memory (long obsolete!) stored bits in the magnetization of tiny little rings. It was easy to write data to core memory: there were coils around the ring that let you magnetize it in one of two directions, and one direction corresponded to 0 and the other to 1. But reading was harder. To read a memory bit, you wrote a bit to a location and sensed for an electromagnetic fluctuation. If there was a fluctuation, the bit you wrote changed the data in that location, and hence the stored bit was different from the bit you wrote; if there was no fluctuation, the bit you wrote was the same as the bit that was already there.

The problem is that half the time reading the data destroys the original bit of data. In those cases—or one might just do it all the time—you need to write back the original bit after reading.

Now, imagine an idealized core not subject to the usual physics limitations of how long it takes to read and write it. My particular system reads data by writing a 1 to the core, checking for a fluctuation to determine what the original datum was, and writing back that original datum.
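
For concreteness, here is a minimal Python sketch of that read procedure (the Core class and the read helper are hypothetical illustrations, and the idealized timing is ignored):

    class Core:
        """Toy one-bit core: a flux change is sensed exactly when the bit
        being written differs from the bit already stored."""
        def __init__(self, bit):
            self.bit = bit

        def write(self, bit):
            fluctuation = (bit != self.bit)
            self.bit = bit
            return fluctuation

    def read(core):
        # Write a 1; a fluctuation means the stored bit was a 0.
        original = 0 if core.write(1) else 1
        core.write(original)  # write back the (possibly destroyed) original
        return original

    core = Core(0)
    assert read(core) == 0 and core.bit == 0  # a read leaves the bit intact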

Let’s also suppose that the initial read process has a 30 second delay between the initial write of the 1 to the core and the writing back of the original bit. But the reading system gets better at what it’s doing (maybe the reading and writing is done by a superpigeon that gets faster and faster as it practices), and so each time it runs, it’s four times as fast.

Very well. Now suppose that before 10:00:00, the core has a 0 encoded in it. And read processes are triggered at 10:00:00, 10:00:45, 10:00:56.25, and so on. Thus, the nth read process (counting from n = 0) is triggered 60/4^n seconds before 10:01:00. Each process involves the writing of a 1 to the core at the beginning of the process and a writing back of the original value—which will always be a 0—at the end.

Intuitively:

  1. As long as the memory is idealized to avoid wear and tear, any possible number—finite or infinite—of read processes leaves the memory unaffected.

By (1), we conclude:

  2. After 10:01:00, the core encodes a 0.

But here’s how this looks from the point of view of the core. Prior to 10:00:00, a 0 is encoded in the core. Then at 10:00:00, a 1 is written to it. Then at 10:00:30, a 0 is written back. Then at 10:00:45, a 1 is written to it. Then at 10:00:52.5, a 0 is written back. And so on. In other words, from the point of view of the core, we have a Thomson’s Lamp.

This is already a problem. For we have an argument as to what the outcome of a Thomson’s Lamp process is, and we shouldn’t have one, since either outcome should be equally possible.

But let’s make the problem worse. There is a second piece of core memory. This piece of core has a reading system that involves writing a 0 to the core, checking for a fluctuation, and then writing back the original value. Once again, the reading system gets better with practice. And the second piece of core memory is initialized with a 1. So, it starts with 1, then 0 is written, then 1 is written back, and so on. Again, by premise (1):

  3. After the end of the reading processes, we have a 1 in the core.

But now we can synchronize the reading processes for the second core so that the first reading occurs at 9:59:30 and the subsequent readings are timed as follows. Prior to 9:59:30, a 1 is encoded in the core. At 9:59:30, a 0 is written to the core. At 10:00:00, a 1 is written back to the core, thereby completing the first read process. At 10:00:30, a 0 is written to the core. At 10:00:45, a 1 is written back, thereby completing a second read process. And so on.
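
The synchronization can be checked mechanically. Here is a minimal sketch (the truncation depth N and all the names are my own illustrative assumptions), with times given in seconds before 10:01:00, written as negative numbers:

    N = 6  # truncate the supertask after N reads

    # Core 1 (initialized to 0): its nth read writes a 1 at -60/4**n and
    # writes the 0 back at -30/4**n, so each read is four times as fast.
    core1 = sorted([(-60 / 4**n, 1) for n in range(N)] +
                   [(-30 / 4**n, 0) for n in range(N)])

    # Core 2 (initialized to 1): its first read writes a 0 at -90 (9:59:30)
    # and restores the 1 at -60 (10:00:00); every later write is timed to
    # coincide exactly with one of core 1's writes.
    core2 = [(-90.0, 0)] + core1

    # From 10:00:00 onward the two cores receive the same bits at the same
    # times, yet premise (1) assigns them different final values.
    assert [w for w in core2 if w[0] >= -60] == core1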

Notice that from around 10:00:01 until, but not including, 10:01:00, the two cores are always in the same state, and the same things are done to them: zeroes and ones are written to the cores at exactly the same times. But when, then, do the two cores end up in different final states? Does the first core somehow know that when, say, at 10:00:30, the zero is written into it, that zero is a restoration of the value that should be there, so that at the end of the whole process the core is supposed to have a zero in it?

Thursday, June 18, 2020

Another way to turn Thomson's Lamp into a real paradox

In Thomson’s Lamp, a lamp is (say) off at 10:00, and the switch is toggled at 10:30, 10:45, 10:52.5, and so on, and we are asked whether the lamp is on or off at 11:00, neither option being satisfactory.

As it stands, Thomson’s Lamp is a puzzle rather than a paradox. There does not seem to be any absurdity in the answer being “on” or the answer being “off”.

In Infinity, Causation, and Paradox I tried to generate a paradox from Thomson’s Lamp. But here is perhaps a better way. Start with this premise:

  1. Removing any number of interactions with a system, none of which changes the system, will not affect the system.

Now, consider these complex interactions with the lamp system:

  • Toggling the switch at 10:30 and at 10:45

  • Toggling the switch at 10:52.5 and at 10:56.25

Two successive togglings do nothing, so each of these is an interaction that does nothing. By 1, removing them all makes no difference. Now, we know that if we remove them all, the lamp will be off at 11:00, since its switch will not have been toggled even once since 10:00. So, we have established:

  2. The lamp will be off at 11:00.

But now consider these complex interactions:

  • Toggling the switch at 10:45 and at 10:52.5

  • Toggling the switch at 10:56.25 and at 10:58.125

Again, each of these is an interaction that makes no difference. So if we remove them all, by 1 that won’t change anything. But if we remove all these interactions, we have a lamp that is on at 10:31 (since we still have the 10:30 toggling) and then never has its switch toggled. Thus, we have shown:

  3. The lamp will be on at 11:00.

So, indeed, we now have a paradox.
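
The bookkeeping behind the two removals can be made explicit in a minimal sketch (the indexing and helper names are my own): let toggle n be the toggle occurring 30/2^n minutes before 11:00, so that toggle 0 is at 10:30, toggle 1 at 10:45, and so on.

    def partner_a(n):
        # Pairing A groups the toggles as (0,1), (2,3), (4,5), ...
        return n + 1 if n % 2 == 0 else n - 1

    def partner_b(n):
        # Pairing B groups them as (1,2), (3,4), ...; toggle 0 is unpaired.
        if n == 0:
            return None
        return n + 1 if n % 2 == 1 else n - 1

    # Pairing A absorbs every toggle, so removing all its do-nothing pairs
    # leaves no toggles: the lamp is off at 11:00. Pairing B absorbs every
    # toggle except 0, so removing its pairs leaves the 10:30 toggle: on.
    print([n for n in range(1000) if partner_a(n) is None])  # []
    print([n for n in range(1000) if partner_b(n) is None])  # [0]

Both pairings group the toggles into do-nothing pairs, yet one absorbs every toggle and the other leaves the 10:30 toggle standing.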

Tuesday, June 16, 2020

Presentism and ex nihilo nihil fit

Consider these three theses:

  1. There is at most one empty world, i.e., world where nothing exists.

  2. Presentism is true.

  3. Something can come from nothing.

By 3, the following should be possible: first there is nothing, and then there is something. But if something can come from nothing, a fortiori it is possible that nothing comes from nothing. Thus, by bivalence about the future, here are two metaphysical possibilities:

  4. There is nothing now, but later there will be something.

  5. There is nothing now, and there will never be anything.

By presentism, if there is nothing now, there is nothing. So, both 4 and 5 entail that the world is empty. But there is at most one empty world. So, 4 and 5 are true in the same world, which is absurd!

Thus, we should reject one of 1–3, or reject bivalence about the future.

Given the plausibility of bivalence as well as of 1, we have an argument that presentists should deny 3.

I myself deny 3, but since I’m not a presentist I deny it on other grounds.

Cats versus nothing

Suppose I insisted that the Big Bang happened due to a cat generating an extremely high energy hairball. You would think I’m crazy. But why is the cat theory any worse than a theory on which the Big Bang happened for no reason at all?

Granted, we haven’t ever seen such a high energy hairball coming from a cat. But we likewise haven’t seen something come from nothing.

Granted, we know something about the causal powers of cats, namely that they lack the power to originate high energy hairballs. But likewise we know about the causal power of nothing, namely that where there is nothing, there is no causal power.

However, this last response is too quick. For when we talk of the universe coming from nothing versus the universe coming from a cat, we are equivocating on “coming from”. When the atheist says the universe came from nothing, they don’t mean that nothing was something that originated the universe. Rather, they simply deny that there was something that originated the universe. Cats don’t have the power to generate universes, so universes don’t get generated by cats. Similarly, where there is nothing, there is no power to generate universes, so universes don’t get generated by nothing. But the atheist doesn’t say that the universe is generated by (a?) nothing—they simply deny that it was generated by something.

Thus, the problem with the universe coming from a cat is with the origination: cats just aren’t the sorts of things to originate universes.

I guess that’s right, but I still feel the pull of the thought that a cat comes closer to making it possible for a universe to come into being than nothingness does. After all, where there is a cat, there are some causal powers. And where there is nothing, there aren’t any.

Perhaps another way to make the argument go through is to say this: There is nothing less absurd about the universe appearing causelessly ex nihilo than there is about a cat causelessly ex nihilo gaining a universe-creating power.

Monday, June 15, 2020

Multidimensionality of game scoring

One obvious internal good of a game is victory. But victory generally isn’t everything, even when one restricts oneself to the internal goods. Score is another internal good: it is internally better to win by a larger amount—though a narrower victory (but not so narrow that it looks like it was just a fluke) is typically externally more enjoyable. Similarly, there can be the additional internal good—often created ad hoc—of winning without making use of some resource—winning a video game without killing any character, or climbing a route while using only one hand. But there are other internal goods that are not just modifications of victory. For instance, in role-playing games, being true to your character’s character is an internal good that can be in conflict with victory (this is important to the plot of the film The Gamers: Dorkness Rising). There is an honor-like internal good found in many games: for instance, in versions of cut-throat tennis with rotation, it makes sense to throw a game to prevent your doubles partner from winning—but it would feel dishonorable and like poor sportsmanship. Elegance and “form” are other internal goods found in many sports.

Enumerating these internal goods would be an endless task. Probably the better thing to do is to say that we have the normative power of creating a plurality of internal-value partial orderings between possible playthroughs and labeling them as we wish, often using terms that provide an analogy to some external value comparison: “more honorable than”, “more peaceable than”, “more victorious than”, “more elegant than”, etc.

Friday, June 12, 2020

The A-theory and a countably infinite fair lottery

Let’s suppose that the universe has a beginning and the tensed theory of propositions (which is accepted by most A-theorists) is true. Then consider for each n the proposition dn that n days have elapsed from the beginning of the universe. On a tensed theory of propositions, this proposition is contingent. Exactly one of the propositions dn is true. None of the propositions dn is more likely to be true than any other. So, it seems, we have a countably infinite fair lottery. But such lotteries are, arguably, impossible. See Chapter 4 of my infinity book. (E.g., it’s fun to note that on the tensed theory of time we should be incredibly surprised that it’s only 13 billion years since the beginning of the universe.)
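
One standard way to see the impossibility, as a sketch only (it assumes countable additivity; Chapter 4 gives more careful arguments): if the lottery were fair, there would be a single value c with P(dn) = c for every n, and then

    1 = P\Big(\bigvee_{n \ge 0} d_n\Big) = \sum_{n \ge 0} P(d_n) = \sum_{n \ge 0} c

and the right-hand side is 0 if c = 0 and infinite if c > 0, so no consistent fair assignment of probabilities exists.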

Since the universe does have a beginning (and even if it does not, we can still run the argument relative to some other event than the beginning of the universe), it seems we should reject the tensed theory of propositions.

Wednesday, June 10, 2020

My new mask

I got myself a 3M 6300 half-face mask with P100 filters. Unfortunately, these have an exhalation valve so that any germs one has are breathed out at other people. So I put some cotton cloth in the exhalation valve, so that others at least get the same benefit they would if I were wearing a single-layer cotton mask (or better, because the fit of the mask is superb--none of that air leaking around the nose that I had with my home-made masks), and I removed the exhalation-block valves at the P100 filters so a large portion of the outgoing air gets routed through them.

I look really weird in it. (My mom says it's World War I in pink.) But it's quite a bit more comfortable for breathing than my home-made fitted cloth mask (dual layer: microfiber washcloth plus cotton T-shirt) was, the straps are really comfy and spread the pressure really well, and the silicone lining fits really well.


I went to the grocery store. I was self-conscious and wondered about odd looks. Nothing, except for very friendly though slightly amused smiles from one older lady. The weird isn't so weird these days.

Update: I put the exhalation-block valves back to protect the P100 filters from my moisture, and printed a clip-on front filter holder for protecting others from my exhalations. I put a combination of shop-towel and cotton T-shirt in the exhalation filter. If the shop-towel I used is as good as the blue shop towels that have been tested, the result is N95-level protection for others from me and P100-level protection for me from others. I am planning to get another mask and look into ways of modifying it for sound projection (e.g., removing valves, using thinner home-made filters, but still keeping the fit which is incredibly good) so I can use it while teaching in the fall. Maybe I can make a filter holder that takes a microphone, even.

Tuesday, June 9, 2020

Hyperintensional vagueness

The typical examples of vagueness in the literature are ones where it is vague whether a subject has a property (e.g., vagueness) or whether a statement is true. But there is another kind of vagueness which we might call “hyperintensional vagueness”, which looks like it should be quite widespread. The easiest way to introduce this is in a supervaluationist context: a term has vagueness provided it has more than one precisification. But one possibility here is that all the precisifications of the term are intensionally the same. In that case, we can say that the term is merely hyperintensionally vague.

For instance, the English word “triangle” looks like it’s only hyperintensionally vague. It has two precisifications: a three-sided polygon and a three-angled polygon (the etymology favors the latter, but we cannot rely on etymology for semantics). Since necessarily all and only three-sided polygons are three-angled polygons, the two precisifications are intensionally the same.

Hyperintensional vagueness doesn’t affect first-order logic or even modal logic, so it doesn’t get talked about much. But it does seem to be an interesting phenomenon that is even harder to get rid of than extensional or even intensional vagueness. Consider the vagueness in “bachelor”: it is extensionally vague whether a man who had his marriage annulled or the Pope is a bachelor. But even after we settle all the intensional vagueness by giving precise truth conditions for “x is a bachelor” such as “x is a never validly married, marriageable man”, there will still be hyperintensionally differing precisifications of “bachelor” such as:

  • a marriageable man none of whose past marriages was valid

  • a marriageable man none of whose past valid statuses was a marriage

  • a human being none of whose past marriages was valid and who is a man.

This makes things even harder for epistemicists, who have to uphold a fact of the matter as to the hyperintensionally correct precisification. Moreover, at this point epistemicists cannot make use of the standard classical logic argument for epistemicism. For while that argument has much force against extensional vagueness, it has no force against hyperintensional vagueness. One could hold that there is no extensional or intensional vagueness but there is hyperintensional vagueness, but that sounds bad to me.

Skepticism and the Principle of Sufficient Reason now online

Rob Koons' and my article "Skepticism and the Principle of Sufficient Reason", forthcoming in Philosophical Studies, is now online.