Monday, August 31, 2020

Reinstall Microsoft Store on Windows 10

This is really just a note to self for future reference. I had uninstalled Microsoft Store very completely on my Win10 laptop, but then wanted it back. None of the solutions I found online worked out of the box. Finally, here is what worked (a variant of something I found online):
  1. Go to https://store.rg-adguard.net/
  2. Enter in the URL: https://www.microsoft.com/store/productId/9WZDNCRFJBMP and click on the checkmark
  3. Download the latest WindowsStore*.AppxBundle and WindowsStore*.BlockMap files
  4. Start powershell in admin mode
  5. cd [your download directory]
  6. get-item *WindowsStore*.appx* | Add-AppxPackage

Sunday, August 30, 2020

Another KN95 mask mod

As an experiment, I cut the ear loops of one of my KN95 masks and then joined them very snugly behind the head with rubber bands. I found that both comfort and fit improved greatly: I tested this while rock climbing. I expect this might work for surgical-style masks as well.

Important note: If you do this, strengthen the ear loop joints with Shoe Goo or some other goopy adhesive first, or they will likely not withstand the added pressure.

An optional improvement I made later is to add a clasp to the lower strap to make it easier to put on despite the loop being tight. Here are my 3D printing files. If you don't have a 3D printer, you could probably make a clasp out of a paperclip.

This is a lot of work for a disposable mask. But I reuse them.

Friday, August 28, 2020

The inhumanity problem for morality

When a state legislates, it often carves out very specific exceptions to the legislation. Sometimes, of course, one is worried that the exceptions are a sign that the legislators are pursuing special interests rather than the common good, but sometimes the exceptions are quite reasonable. For instance, you shouldn’t possess child pornography… except, say, if you are involved in the law enforcement process and need it as evidence to get the child pornographers. There is something ugly about carving out exceptions, but the point is to make society work well rather than make the laws elegant. Special-case clauses seem to be unavoidable in practice, given the messiness and complexity of human life. Elegant exceptionless legislation—with some important exceptions!—is apt to be inhuman.

I kind of wonder if an analogous thing might not be true in the case of morality, and for the same reason, the messiness and complexity of human life. Could it be that elegant exceptionless moral laws would necessarily have to be inhuman?

What solutions are available to this problem?

Well, we might just dig in our heels, either optimistically or pessimistically.

The optimistic version says: yes, we have elegant exceptionless moral laws, and they do work well for us. One way of running the optimistic variant is to make the moral laws leave a lot to human positive law. Thus, there are going to be exceptions to any prohibition of theft, but perhaps morality leaves the specification of this to the state. Or perhaps one could be really optimistic and have moral laws that do not leave a lot to positive law, but nonetheless they work. Act utilitarianism could be thought to provide this kind of solution, having a simple rule “Maximize utility!”, but its problem is that this rule is just wrong. Rule utilitarianism provides a nicer solution by having the elegant meta-rule “Do those things that fall under a utility-maximizing rule”, but I think the technical details here are insuperable.

The pessimistic variant says: yes, we have elegant exceptionless moral laws, and we’re stuck with that, even though it doesn’t work that great for us. That might be a better way to take act utilitarianism, but such pessimism is not a very attractive approach.

But what if we don’t want to dig in our heels? One could think that there are just brute (perhaps metaphysically necessary) facts about the moral rules, and many of these brute facts have specific exceptions: “Don’t lie, except to save a life or to prevent torture.” I think bruteness, and especially inelegant bruteness, is a last resort.

One might think that moral particularism is a solution: there are general elegant moral laws, but they all have unspecified exceptions. They say things like: “Don’t torture, other things being equal.” There is still a fact of the matter as to what to do in a particular situation, a fact that a virtuous agent may be able to discern, but these facts cannot be formulated in a general way, because any finite description of the particular situation will leave out factors that could in some other case trump the described considerations. There are exceptionless moral rules on such a view, but they are infinite in length. Unless some story is given as to where these infinite rules come from, this seems like it might be just an even worse version of the brute fact story.

Divine command theory, on the other hand, could provide a very nice solution to the problem, exactly analogous to the legislative solution. If God is the author of moral laws, he can legislate: “Thou shalt not kill, except in cases of types A, B and C.”

Natural law could also provide such a solution, at least given theism: God could select for instantiation a nature that has a complex teleology with various specific exceptions.

Where do I fall? I think I want to hold out for a two-level theistic natural law story. On one level, there is a simple, single and elegant moral rule embedded in our nature: “Love everything!” However, the content of that love is specified in a very complex way by our nature and by the circumstances (love needs to be appropriate to the specifics of the relationships). This specification is embedded in our nature by much more complex rules. And God chose this nature for instantiation because it works so well.

Making KN95 masks better

I have found (using a small single-blind test) that disposable masks have better audio quality for teaching: cloth muffles the voice, especially, I think, an already somewhat muddy male voice like mine. Surgical-style masks have a lot of leaks around the edges, so I went with cheap eBay KN95 masks.

I reuse them, "disinfected" by airing for seven days (as recommended for N95 masks by the N95's inventor), leaving them hanging on a little wooden rack with nails in it. (I had some fun with my CNC router on it as you can see.)

The cheap KN95s have some air leaking around the edges and top, and occasionally I've had an earloop break off. So, here are three mods I've made. You can do the first two without special equipment, but the third requires a 3D printer. (But if you're someone in my social circle at Baylor, I could 3D print some for you.)

First: add a bit of Shoe Goo under, around and over where the earloops meet the mask. This seems to greatly increase the strength of the connection. No broken off earloops since.

Second: I added some rubber bands joining the earloops. The main point was to make the fit more snug, reducing air leaking. Sadly, it puts more pressure on the ears. (There is probably a sweet spot in rubber band length where it reduces pressure on the ears, but I'm putting safety over comfort.)

Third: I replaced the flimsy metal nosepiece with a hefty 3D printed one. It took me five prototypes until I got the size and shape right, but then it worked great in teaching. It's 2mm thick, printed in PLA, and glued on with Shoe Goo. (The metal strips came off very easily from at least one of the brands of cheap KN95s.) No more fiddling with fitting the nosepiece, and no more feeling of air going up and out along the nose bridge, so I expect it increased the protection for others from exhalation. I don't normally need to wear glasses with a mask, but I tested with my sunglasses while walking home from class and found no fogging. I still fiddle with and adjust the mask, but a nice bonus is that I mainly need to touch the plastic strip, which is probably cleaner than the filtering surface.

I don't know that the strip increased the protection for me as significantly, because the KN95s already would seal around the face when breathing in. (There have been a lot of claims made that masks protect others more than they protect the wearer. I am somewhat skeptical of this in the case of KN95s and well-fitted cloth masks, because my experience is that when you breathe in, fitted masks pull to the face and seal much more tightly than when breathing out.)

My 3D printing files are here. Unless I have an identical twin I don't know about, you'll need to edit the OpenSCAD files and tweak the Bezier parameters to make them work for you. Mine I did mainly by trial and error with five prototypes, but when I made one for my son, I had him press a wire around the bridge of his nose, and then scanned the wire along with a ruler for size, and traced Beziers over the wire (if you're in my Baylor social circle, I can do this for you, on the basis of a good photo of a bent wire and a ruler or other calibrating object).
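
If you want to do the scaling by script rather than by eye, here is a minimal Python sketch of the calibration step (my own illustration, with made-up placeholder coordinates rather than actual measurements): given the pixel positions of two ruler marks a known distance apart, it computes a millimeters-per-pixel factor and applies it to points traced along the wire.

  # Hypothetical calibration sketch: scale wire-trace points using two ruler
  # marks a known distance apart in the same photo. Coordinates are made up.
  import math

  def mm_per_pixel(ruler_p1, ruler_p2, known_mm):
      """Scale factor from the pixel coordinates of two ruler marks known_mm apart."""
      dx = ruler_p2[0] - ruler_p1[0]
      dy = ruler_p2[1] - ruler_p1[1]
      return known_mm / math.hypot(dx, dy)

  def scale_points(points_px, scale):
      """Convert traced (x, y) pixel points to millimeters."""
      return [(x * scale, y * scale) for (x, y) in points_px]

  # Suppose the ruler marks 50 mm apart are 312 px apart in the scan.
  scale = mm_per_pixel((100, 40), (412, 40), 50.0)
  wire_trace_px = [(120, 300), (180, 260), (250, 245), (320, 260), (380, 300)]
  print(scale_points(wire_trace_px, scale))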

Thursday, August 27, 2020

The coincidence between the right and the beneficial

One of the earliest and most important discoveries in Western philosophy was:

  1. Doing the right thing is sometimes bad for you (in non-moral ways).

This precludes any easy reduction of morality to self-interest. But at the same time, the philosophers could see that:

  2. Doing the right thing is usually good for you (even in non-moral ways).

This leads to an interesting problem that has occasionally been discussed, but not as much as one might hope:

  3. What explains why acting morally well tends to be good for you (even in non-moral ways)?

Living in accordance with one’s conscience, while having that conscience be well-formed, tends to lead to a kind, attractive happiness that we can often see in people. There is that smile which reflects both a kindliness of nature and an inner joy. Why is there this harmony between the right and the beneficial?

If we were non-realists about morality, we might give an evolutionary explanation: our moral beliefs evolved to benefit us. But if we are realists about morality, then that only makes it puzzling: why is it that the true moral beliefs are the ones that tend to benefit us?

Divine command ethics has a plausible story grounded in God’s loving desire that we live under moral rules that are good for us. Natural law ethics has a different story: our natures are harmonious, and hence the many ends we have are mutually supportive. That story, of course, only shifts the problem to the more general question of the mutual support of our ends, and I suspect that this more general question cannot be answered without bringing in theism, either by positing that God is more likely to instantiate harmonious natures or by positing that, because created natures are a way of participating in the God whose inner life is a harmony of love, they tend to be (or maybe even all are) harmonious.

Tuesday, August 25, 2020

Sofas and uncaused events

Consider three worlds in which a sofa rises; in each of them, Alice and Bob have the same lifting powers, and nothing beyond Alice and Bob influences the sofa’s rise.

  • w1: the sofa is lifted by Alice and Bob, with neither of them having sufficient power to lift the sofa on their own

  • w2: the sofa rises without Alice or Bob exerting their lifting abilities

  • w3: the sofa rises with Alice exerting her lifting abilities.

If we think w1 and w2 are both possible, we should think that w3 is also possible. It would be too weird if eliminating both Alice and Bob’s exertions were compatible with the sofa rising, but somehow keeping Alice’s exertions precluded the sofa from rising.

But in w3, Alice can’t be the full cause of the sofa’s rising, since by herself she lacks the power to lift it. Yet she seems to have the same influence on it as in w1. So it seems she is a merely partial cause of the sofa’s rising.

However, Alice can’t be a merely partial cause of the sofa’s rising without being a part of a full cause of the sofa rising. But nothing else influences the sofa’s rise. So, there is no full cause, and yet Alice can’t be a merely partial cause.

Thus, w3 is impossible. But if w1 and w2 are possible, so is w3. So, w2 is impossible. So, uncaused events are impossible.

When can we have exact symmetries of hyperreal probabilities?

In many interesting cases, there is no way to define a regular hyperreal-valued probability that is invariant under symmetries, where “regular” means that every non-empty set has non-zero probability. For instance, there is no such measure for all subsets of the circle with respect to rotations: the best we can do is approximate invariance, where P(A)−P(rA) is infinitesimal for every rotation. On the other hand, I have recently shown that there is such a measure for infinite sequences of fair coin tosses where the symmetries are reversals at a set of locations.

So, here’s an interesting question: Given a space Ω and a group G of symmetries acting on Ω, under what exact conditions is there a hyperreal finitely-additive probability measure P defined for all subsets of Ω that satisfies the regularity condition P(A)>0 for all non-empty A and yet is fully (and not merely approximately) invariant under G, so that P(gA)=P(A) for all g ∈ G and A ⊆ Ω?

Theorem: Such a measure exists if and only if the action of G on Ω is locally finite. (Assuming the Axiom of Choice.)

The action of G on Ω is locally finite iff for every x ∈ Ω and every finitely-generated subgroup H of G, the orbit Hx = {hx : h ∈ H} of x under H is finite. In other words, we have such a measure provided that applying the symmetries to any point of the space only generates finitely many points.

This mathematical fact leads to a philosophical question: Is there anything philosophically interesting about those symmetries whose action is locally finite? But I’ve spent so much of the day thinking about the mathematical question that I am too tired to think very hard about the philosophical question.

Sketch of Proof of Theorem: If some subset A of Ω is equidecomposable with a proper subset A′, then a G-invariant measure P will assign equal measure to both A and A′, and hence will assign zero measure to the non-empty set A − A′, violating the regularity condition. So, if the requisite measure exists, no subset is equidecomposable with a proper subset of itself, which by a theorem of Scarparo implies that the action of G is locally finite.

Now for the converse. If we could show the result for all finitely generated groups G, then by taking an ultraproduct along an ultrafilter on the partially ordered set of all finitely generated subgroups of G we could show it for a general G.

So, suppose that G is finitely generated and the orbit of x under G is finite for all x ∈ Ω. A subset A of Ω is said to be G-invariant provided that gA = A for all g ∈ G. The orbit of x under G is always G-invariant, and hence every finite subset A of Ω is contained in a finite G-invariant subset, namely the union of the orbits of all the points in A.
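
Before continuing, here is a toy Python illustration of this finite stage of the construction. The example group is mine, not from the argument: negation acting on the integers, so that every orbit {x, −x} is finite. The code just exhibits the finite G-invariant closure of a finite set and the uniform measure P_A on it.

  # Toy example: the group generated by negation acts on the integers;
  # every orbit {x, -x} is finite, so the action is locally finite.
  from fractions import Fraction

  def orbit(x):
      """Orbit of x under the group generated by negation."""
      return {x, -x}

  def invariant_closure(finite_set):
      """Smallest G-invariant set containing finite_set: the union of orbits."""
      closure = set()
      for x in finite_set:
          closure |= orbit(x)
      return closure

  def uniform_measure(A):
      """P_A: the uniform probability measure on the finite G-invariant set A."""
      n = len(A)
      return lambda C: Fraction(len(A & C), n)

  A = invariant_closure({1, 2, 5})   # {-5, -2, -1, 1, 2, 5}
  P_A = uniform_measure(A)
  C = {1, 2, 3}
  gC = {-x for x in C}               # image of C under negation
  print(P_A(C), P_A(gC))             # both 1/3, since A is G-invariant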

Consider the set F of all finite G-invariant subsets of Ω. For A ∈ F, let P_A be the uniform measure on A. Let F* = {{B ∈ F : A ⊆ B} : A ∈ F}. This is a non-empty set with the finite intersection property. Let U be an ultrafilter extending F*. Let *R be the ultraproduct of the reals over F with respect to U, and let P(C) be the equivalence class of the function A ↦ P_A(A ∩ C) on F. Note that C ↦ P_A(A ∩ C) is G-invariant for any G-invariant set A, so P is G-invariant. Moreover, P(C) > 0 if C ≠ ∅. For let C′ be the orbit of some element of C. Then {B ∈ F : C′ ⊆ B} is in F*, and P_A(A ∩ C′) > 0 for all A such that C′ ⊆ A, so the set of all A such that P_A(A ∩ C′) > 0 is in U. It follows that P(C′) > 0. But C′ is the orbit of some element x of C, so every singleton subset of C′ has the same P-measure as {x} by the G-invariance of P. So P({x}) = P(C′)/|C′| > 0, and hence P(C) ≥ P({x}) > 0.

Monday, August 24, 2020

Invariance under independently chosen random transformations

Often, a probabilistic situation is invariant under some set of transformations, in the sense that the complete probabilistic facts about the situation are unchanged by the transformation. For instance, in my previous post I suggested that a sequence of fair coin flips should be invariant under the transformation of giving a pre-specified subset of the coins an extra turn-over at the end and I proved that we can have this invariance in a hyperreal model of the situation.

Now, a very plausible thesis is this:

Randomized Invariance: If a probabilistic situation S is invariant under each member of some set T of transformations, then it is also invariant under the process where one chooses a random member of T independently of S and applies that member to S.

For instance, in the coin flip case, I could choose a random reversing transformation as follows: I line up (physically or mentally) the infinite set of coins with an independent second infinite set of coins, flip the second set of coins, and wherever that flip results in heads, I reverse the corresponding coin in the first set.

By Randomized Invariance, doing this should not change any of the probabilities. But insisting on this case of Randomized Invariance forces us to abandon the idea that we should assign such things as an infinite sequence of heads a non-zero but infinitesimal probability. Here is why. Consider a countably infinite sequence of fair coins arranged equidistantly in a line going to the left and to the right. Fix a point r midway between two successive coins. Now, use the coins to the left of r to define the random reversing transformation for the coins to the right of r: if after all the coins are flipped, the nth coin to the left of r is heads, then I give an extra turn-over to the nth coin to the right of r.

According to Randomized Invariance, the probability that all the coins to the right of r will be tails after the random reversing transformations will be the same as the probability that they were all tails before it. Let p be that probability. Observe that after the transformations, the coins to the right of r are all tails if and only if before the transformations the nth coin to the right and the nth coin to the left showed the same thing (for we only get tails on the nth coin on the right at the end if we had tails there at the beginning and the nth coin on the left was tails, or if we had heads there at the beginning, but the heads on the nth coin to the left forced us to reverse it). Hence, p is also the probability that the corresponding coins to the left and right of r showed the same thing before the transformation.
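
Here is a small brute-force check of that equivalence (a finite stand-in of my own devising, with four coins on each side rather than infinitely many):

  # Verify: after turning over the nth right coin whenever the nth left coin
  # is heads, the right side is all tails exactly when the left and right
  # sides originally matched coin by coin.
  import itertools

  def reverse_right(left, right):
      """Turn over the nth right coin wherever the nth left coin is 'H'."""
      flip = {'H': 'T', 'T': 'H'}
      return tuple(flip[r] if l == 'H' else r for l, r in zip(left, right))

  n = 4
  for left in itertools.product('HT', repeat=n):
      for right in itertools.product('HT', repeat=n):
          all_tails_after = set(reverse_right(left, right)) == {'T'}
          matched_before = left == right
          assert all_tails_after == matched_before
  print("equivalence holds for all", 4 ** n, "pairs of length-4 sequences")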

Thus, we have shown that the probability that all the paired coins on the left and right equidistant to r are the same (i.e., we have a palindrome centered at r) is the same as the probability that we have only tails to the right of r. Now, apply the exact same argument with “right” and “left” reversed. We conclude that the probability that the coins on the right and left equidistant to r are always the same is the same as the probability that we have only tails to the left of r. Hence, the probability of all-tails to the left of r is the same as the probability of all-tails to the right of r.

And this argument does not depend on the choice of the midpoint r between two coins. But as we move r one coin to the right, the probability of all-tails to the right of r is multiplied by two (there is one less coin that needs to be tails) and the probability of all-tails to the left of r is multiplied by a half. And yet these numbers have to be equal as well by the above argument. Thus, 2p = p/2. The only way this can be true is if p = 0.

Therefore, Randomized Invariance, plus the thesis that all the non-random reversing transformations leave the probabilistic situation unchanged (a thesis made plausible by the fact that even with infinitesimal probabilities, we provably can have a model of the probabilities that is invariant under these transformations), shows that we must assign probability zero to all-tails, and that infinitesimal probabilities are mistaken.

This is, of course, a highly convoluted version of Timothy Williamson’s coin toss argument. The reason for the added complexity is to avoid any use of shift-based transformations that may be thought to beg the question against advocates of non-Archimedean probabilities. Instead, we simply use randomized reversal symmetry.

Hyperreal modeling of infinitely many coin flips

A lot of my work in philosophy of probability theory has been devoted to showing that one cannot use technical means to get rid of certain paradoxes of infinite situations. As such, most of the work has been negative. But here is a positive result. (Though admittedly it was arrived at in the service of a negative result which I hope to give in a future post.)

Consider the case of a (finite or infinite, countable or not) sequence of independent fair coin flips. Here is an invariance feature we would like to have for our coin flips. Suppose that ahead of time, I designate a (finite or infinite) set of locations in the infinite sequence. You then generate the sequence of independent fair coin flips, and I go through my pre-designated set of locations, and turn over each of the coins corresponding to those locations. (For instance, if you will make a sequence of four coin flips, and I predesignate the locations 1 and 3, and you get HTTH, then after my extra flipping step the sequence of coin flips becomes TTHH: I turned over the first and third coins.) The invariance feature we want is that no matter what set of locations I predesignate, it won’t affect the probabilistic facts about the sequence of independent fair coin flips.
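
In code, the reversal operation in the parenthetical example is just this (a trivial sketch of my own):

  def reverse_at(sequence, locations):
      """Turn over the coins at the given 1-based locations."""
      flip = {'H': 'T', 'T': 'H'}
      return ''.join(flip[c] if i + 1 in locations else c
                     for i, c in enumerate(sequence))

  print(reverse_at('HTTH', {1, 3}))  # 'TTHH', as in the example above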

This invariance feature is clearly present in finite cases. It is also present if “probabilistic facts” are understood according to classical countably-additive real-valued probability theory. But what if we have infinitely many coins, and we want to be able to do things like comparing the probability of all the coins being heads to all the even-numbered coins being heads, and say that the latter is more likely than the former, with both probabilities being infinitesimal? Can we still have our reversal-invariance property for all predesignated sets of locations?

There are analogous questions for other probabilistic situations. For instance, for a spinner, the analogous property is adding an extra predesignated rotation to the spinner once the spinner stops, and it is well-known that one cannot have such invariance in a context that gives us “enough” infinitesimal probabilities (e.g., see here for a strong and simple result).

But the answer is positive for the coin flip case: there is a hyperreal-valued probability defined for all subsets of the set of sequences (with fixed index set) of heads and tails that has the reversal-invariance property for every set of locations.

This follows from the following theorem.

Theorem: Assume the Axiom of Choice. Let G be a locally finite group (i.e., every finite subset generates a finite subgroup) and suppose that G acts on some set X. Then there is a hyperreal finitely additive probability measure P defined for all subsets of X such that P(gA)=P(A) for every A ⊆ X and g ∈ G and P(A)>0 for all non-empty A.

To apply this theorem to the coin-flip case, let G be the abelian group whose elements are sets of locations with the exclusive-or operation (i.e., A ⊕ B = (A − B)∪(B − A) is the set of all locations that are in exactly one of A and B). The identity is the empty set, and every element has order two (i.e., A ⊕ A = ∅). But for abelian groups, the condition that every finite subset generates a finite subgroup is equivalent to the condition that every element has finite order (i.e., some finite multiple of it is zero).
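
A quick illustration of why this group is locally finite (my own toy computation, with two made-up location sets): closing finitely many generators under exclusive-or only ever produces finitely many sets.

  def generated_subgroup(generators):
      """Close a collection of frozensets under symmetric difference (xor)."""
      subgroup = {frozenset()} | set(generators)   # start with the identity
      changed = True
      while changed:
          changed = False
          for a in list(subgroup):
              for b in list(subgroup):
                  c = a ^ b                        # A xor B
                  if c not in subgroup:
                      subgroup.add(c)
                      changed = True
      return subgroup

  A = frozenset({1, 3})
  B = frozenset({2, 3, 5})
  H = generated_subgroup([A, B])
  print(len(H))                # 4: the empty set, A, B, and A xor B = {1, 2, 5}
  print(A ^ A == frozenset())  # True: every element has order two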

Mathematical notes: The subgroup condition on G in the Theorem entails that every element of G has finite order, but is stronger than that in the non-abelian case (due to the non-trivial fact that there are infinite finitely generated torsion groups). In the special case where X = G, the condition that every element of G have finite order is necessary for the theorem. For if g has infinite order, let A = {g^n : n ≥ 0}, and note that gA is a proper subset of A, so the condition that non-empty sets get non-zero measure and finite additivity would imply that P(gA) < P(A), which would violate invariance. It is an interesting question whether the condition that every finite subset generates a finite subgroup is also necessary for the Theorem if X = G.

Proof of Theorem: Let F be the partially ordered set whose elements are pairs (H, V) where H is a finite subgroup of G and V is a finite algebra of subsets of X closed under the action of H, with the partial ordering (H_1, V_1) ≼ (H_2, V_2) if and only if H_1 ⊆ H_2 and V_1 ⊆ V_2.

Given (H, V) in F, let B_V be the basis of V, i.e., a subset of pairwise disjoint non-empty elements of V such that every element of V is a union of (finitely many) elements of B_V. For A ∈ B_V and g ∈ H, note that gA is a member of V since V is closed under the action of H. Thus, gA = B_1 ∪ ... ∪ B_n for distinct elements B_1, ..., B_n in B_V. I claim that n = 1. For suppose n ≥ 2. Then g^−1B_1 ⊆ A and g^−1B_2 ⊆ A, and yet both g^−1B_1 and g^−1B_2 are members of V by H-closure. But since A is a basis element it follows that g^−1B_1 = A = g^−1B_2, and hence B_1 = B_2, a contradiction. Thus, n = 1 and hence gA ∈ B_V. Moreover, if gA = gB then A = B, so each member g of H induces a bijection of B_V onto itself.

Now let P_(H,V) be the probability measure on V that assigns equal probability to each member of B_V. Since each member of H induces a bijection of B_V onto itself, it’s easy to see that P_(H,V) is an H-invariant probability measure on V. And, for convenience, if A ∉ V, write P_(H,V)(A) = 0.

Let F* = {{B ∈ F : A ≼ B} : A ∈ F}. This is a nonempty set with the finite intersection property (it is here that we will use the fact that every finite subset of G generates a finite subgroup). Hence it can be extended to an ultrafilter U. This ultrafilter will be fine: {B ∈ F : A ≼ B} ∈ U for every A ∈ F. Let *R be the ultraproduct of the reals R over F with respect to U, i.e., the set of functions from F to R modulo U-equivalence. Given a subset A of X, let P(A) be the equivalence class of (H, V) ↦ P_(H,V)(A).

It is now easy to verify that P has all the requisite properties of a finitely-additive hyperreal probability that is invariant under G and assigns non-zero probability to every non-empty set.

Friday, August 21, 2020

Complete Probabilistic Characterizations

Consider the concept of a complete probabilistic characterization (CPC) of an experiment. It’s a bit of a fuzzy concept, but we can get some idea about it. For instance, if I have a coin loaded in favor of heads, then saying that heads is more likely than tails is not a CPC. Minimally, the CPC will give exact numbers where the probabilities have exact numbers. But the CPC may go beyond giving numerical probabilities. For instance, if you toss infinitely many fair coins, the numerical probability that they are all heads is zero, as is the probability that all the even-numbered ones are heads. But intuitively it is more likely that the even-numbered ones are heads than that all of them are heads. If there is something to this intuition, the CPC will include the relevant information: it may do that by assigning different infinitesimal probabilities to the two events, or by giving conditional probabilities conditioned on various zero-probability events.

A deep question that has sometimes been discussed by philosophers of probability is what CPCs are like. Here are three prominent candidates:

  1. classical real-valued probabilities

  2. hyperreal probabilities assigning non-zero (but perhaps infinitesimal) probability to every possible event

  3. primitive conditional probabilities allowing conditioning on every possible event.

The argument against (1) and for (2) and (3) is that (1) doesn’t distinguish things that should be distinguished—like the heads case above. I want to offer an argument against (2) and (3), however.

Here is a plausible principle:

  4. If X and Y are measurements of two causally independent experiments, then the CPC of the pair (X, Y) is determined by the CPCs of X and Y together with the fact of independence.

If (4) is true, then a challenge for a defender of a particular candidate for CPC is to explain how the CPC of the pair is determined by the individual CPCs of the independent experiments.

In the case of (1), the challenge is easily met: the pair (X, Y) has as its probability measure the product of the probability measures for X and Y.

In the cases of (2) and (3), the challenge has yet to be met, and there is some reason to think it cannot be met. In this post, I will argue for this in the case of (2): the case of (3) follows from the details of the argument in the case of (2) plus the correspondence between Popper functions and hyperreal probabilities.

Consider the case where X and Y are uniformly distributed over the interval [0, 1]. By independence, we want the pair (X, Y) to have a hyperreal finitely additive probability measure P such that P(X ∈ A, Y ∈ B)=P(X ∈ A)P(Y ∈ B) for all events A and B. But it turns out that this requirement on P highly underdetermines P. In particular, it seems that for any positive real number r, we can find a hyperreal measure P such that P(X ∈ A, Y ∈ B)=P(X ∈ A)P(Y ∈ B) for all A and B, and such that P(X = Y)=rP(Y = 0). Hence, independence highly underdetermines what value P assigns to the diagonal X = Y as compared to the value it assigns to Y = 0.

Maybe some other conditions can be added that would determine the CPC of the pair. But I think we don’t know what these would be. As it stands, we don’t know how to determine the CPC of the pair in light of the CPC of the members of the pair, if CPCs are of type (2).

Wednesday, August 19, 2020

Product spaces for hyperreal and full conditional probabilities

I think the following is a consequence of a hyperreal variant of the Horn-Tarski extension theorem for measures on boolean algebras:

Claim: Suppose that for each i ∈ I, <Ω_i, F_i, P_i> is a finitely additive probability space with values in some field R* of hyperreals. Then, assuming the Axiom of Choice, there is a hyperreal-valued finitely additive probability space <Ω, 2^Ω, P>, where Ω = ∏_{i ∈ I} Ω_i, such that the Ω_i-valued random variables π_i given by the natural projections of Ω onto the Ω_i are independent and have the distributions given by the P_i.
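
To spell out the conclusion a little (this is my gloss on "independent and have the distributions given by the P_i", not an extra assumption), the requirement on P is the finite product condition: for distinct indices i_1, …, i_n in I and events A_k in F_{i_k},

  P\big(\pi_{i_1} \in A_1, \dots, \pi_{i_n} \in A_n\big) \;=\; \prod_{k=1}^{n} P_{i_k}(A_k).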

Note that the values of P might be in a hyperreal field larger than R*.

Given the Claim, and given the well-known correspondences between hyperreal-valued probabilities and full conditional real-valued probabilities, it follows that we can define meaningful product-space conditional real-valued probabilities.

It would be really nice if the product-space conditional probabilities were unique in the special case where F_i is the power set of Ω_i, or at least if they were close enough to uniqueness to define the same real-valued conditional probabilities.

For a particularly interesting case, consider the case where X and Y are generated by uniform throws of a dart at the interval [0, 1], and we have a regular finitely additive hyperreal-valued probability on [0, 1] (regular meaning that all non-empty sets have positive measure). Let Z be the point (X, Y) in the unit square.

Looking at how the proof of the Horn-Tarski extension theorem works, it seems to me that for any positive real number r, and any non-trivial line segment L along the x = y diagonal in the square [0, 1]^2, there is a product measure P satisfying the conditions of the Claim (where P_1 and P_2 are the uniform measures on [0, 1]) such that P(L) = rP(H), where H is the horizontal line segment {(x, 0) : x ∈ [0, 1]}. For instance, if L is the full diagonal, we would intuitively expect P(L) = √2·P(H), but in fact we can make P(L) = 100000P(H) or P(L) = P(H)/100000 if we like. It is clear that such a discrepancy will generate different conditional probabilities.

I haven’t checked all the details yet, so this could be all wrong.

But if it is right, here is a philosophical upshot. We would expect there to be a unique canonical product probability for independent random variables. However, if we insist on probabilities that are so fine-grained as to tell infinitesimal differences apart, then we do not at present have any such unique canonical product probability. If we are to have one, we need some condition going beyond independence.

This is part of a larger set of claims, namely that we do not at present have a clear notion of what “uniform probability” means once we make our probabilities more fine-grained than classical real-valued probability.

Putative Sketch of Proof of Claim: Embedding R* in a larger field if necessary, we may assume that R* is |2^Ω|-saturated. Define a product measure on the cylinder subsets of Ω as usual. The proof of the Horn-Tarski extension theorem for measures on boolean algebras looks to me like it works for |B|-saturated hyperreal-valued probability measures, where B is the boolean algebra, and this completes the proof of our claim.

Real dilemmas, alas

I’ve been trying to avoid holding there are real moral dilemmas—ones where one is genuinely morally required to do something and to abstain from it. But here is a problem:

  1. One is obligated to do what one believes to be obligatory.

  2. Some people believe that ϕing is obligatory and that refraining from ϕing is obligatory.

  3. So, some people are obligated to ϕ and not to ϕ.

Perhaps the most obvious case of (2) is killing in war. It seems to be not an uncommon view that (a) all killing is wrong, but (b) you should kill to defend the innocent in a just war.

Tuesday, August 18, 2020

Mistaken conscience and failure

Alice is a sniper tasked with stopping Bob the terrorist who is about to set up a bomb that will kill many. Let’s take it for granted that shooting Bob would have been permissible and even praiseworthy. Now, Alice takes all reasonable precautions but she misidentifies Carl the innocent as the terrorist and shoots Carl.

Among the infinitely many ways that we can describe Alice’s action, two are of particular moral relevance:

  1. Trying to shoot Bob the terrorist.

  2. Shooting an innocent person.

Alice is morally responsible, and even praiseworthy, for performing (1). She is not responsible for (2), since she did (2) unintentionally and in non-culpable ignorance (remember that she took all reasonable precautions).

Did Alice do a morally impermissible action? It sounds like (2) is impermissible, and Alice indisputably did it. But perhaps this is too quick. For it is not clear to me that Alice’s shooting an innocent person is an action. Suppose that while Alice was sleeping, an evil tinkerer set up a pressure-sensitive switch connected to a gun pointed at David the innocent, and Alice rolled over onto it. Then Alice shot David, but we cannot say that she did anything: shooting David wasn’t an action, but a mere event. And if she didn’t do anything, she didn’t do anything impermissible.

Now, Alice’s trying to shoot Bob the terrorist is identical with her shooting an innocent person. And since Alice’s trying to shoot Bob is an action, it follows that her shooting the innocent person is also an action. So it seems that Alice did do something impermissible.

But even this may not be quite right. For it may be that it is not quite right to say that (2) is impermissible. Rather, what are impermissible are actions that are non-accidental cases of shooting an innocent. And both the shooting of Carl and of David are accidental cases (and that of David isn’t even an action).

If this is right, then we can say that Alice did nothing wrong in either the case of Carl or of David.

Now, let’s switch to a harder case. Alice has a reasonable but false belief that she is pursuing a just war, but she is not. She shoots Ella the enemy combatant. Did Alice do anything morally wrong? It seems that she did: she shot Ella. But perhaps we can say something very similar to what we said above. There are two ways to describe Alice’s action:

  3. Trying to shoot enemy combatant Ella in pursuit of a just war

  4. Shooting enemy combatant Ella not in pursuit of a just war.

Action (3) is permissible, but unbeknownst to Alice was doomed to failure as the war was not just. Now, what is impermissible is non-accidentally shooting enemy combatants not in pursuit of a just war. But Alice did this accidentally: she reasonably thought it was a just war. So perhaps Alice is entirely off the hook for doing something morally wrong. Instead, she accidentally did something that it would have been wrong to do non-accidentally.

Let’s switch to an even harder case. Alice has a reasonable (given her flawed upbringing and culture) but false belief that in order to save lives in the pursuit of a just war it is permissible to shoot innocent non-combatants, and she shoots Fred the innocent non-combatant. Can we say that Alice didn’t do anything wrong, but merely accidentally did something that it would have been wrong to do non-accidentally? Perhaps. Perhaps we can describe Alice’s action in two ways:

  5. Trying to permissibly shoot the innocent non-combatant Fred to save lives

  6. Shooting the innocent non-combatant Fred to save lives.

Action (5) is permissible, but doomed to failure. And it is impermissible to non-accidentally do (6). But now it seems that we cannot make the move of saying that Alice only accidentally did (6). For she was trying to do (6), though she was trying to do more than just what is included in (6): she was trying to do (6) permissibly.

But perhaps there is a similar move possible to the one we made before. Perhaps what is impermissible is to do (6) as such, where the “as such” includes both non-accidentality and the assumption that no further relevant factors are involved. And Alice wasn’t trying to do (6) as such: she was trying to do (6) permissibly.

If so, this would give us a nice account of what happens in cases of honestly mistaken conscience. We are intending to do something permissibly, and we fail at this. Instead we accidentally end up performing only a part of our intention. That part would be something that it would be impermissible to attempt as such, but we didn’t attempt it as such: we intended it only qua permissible.

For this account to work, it has to be the case that if we are to act well, we should positively include permissibility among our intentions. Virtue may help here.

Monday, August 17, 2020

Physicalism and vice

  1. If physicalism is true, then vice is an instance of medium-to-long-term poor bodily function.

  2. Instances of medium-to-long-term poor bodily function are illnesses.

  3. Vice is not an illness.

  4. So, physicalism is not true.

The Non-Identity Theodicy (Scott Hill) SCP session

The Analytic Collective and SCP are having an inaugural online session on August 21 at 4-5:30 pm Eastern Time to discuss Scott Hill's fascinating paper "The Non-Identity Theodicy". I will be commenting on the paper.

Abstract: This paper defends a theodicy based on ideas discussed in the literature on the non-identity problem and the literature on origin essentialism. I then address a series of objections about the ethics of God's acts in my theodicy and about the metaphysics of origins on which my theodicy depends.

Join the Analytic Collective facebook group for a Zoom link.

This is a pre-read session. The paper is here. And my comments are here.

An important use for virtue

It’s obvious that virtues are morally instrumentally useful: possessing them makes it more probable that one will act morally well. Many of my friends think virtues are much more important than that.

Here is one thing that has occurred to me along those lines. Pretty much any action can be morally ruined by a bad intention. But in some specific cases, an action will be ruined by the lack of a specific kind of reason, intention or end. Here are some potential examples, not all of which will be plausible to everyone (the first two reasons should be plausible to everyone; the remaining ones will have narrower appeal):

  • The wrongness of BS and lying shows that it is (at least normally) wrong to make an assertion if the (believed) truth of the content is not among the reasons for making it.

  • It is wrong to intentionally kill someone except for the sake of a very small number of clearly delineated reasons (justice, defense of the innocent, etc.)

  • Since we are to love God with all our hearts, every action should be done at least in part for the sake of God.

  • If a married couple engages in sexual union for reasons that do not include their being married to each other, then their act is internally too much like an act of fornication.

  • It is sacrilegious to attend Mass without doing so at least in part for some religious reason.

But practically speaking, it is hard to include an explicit intention each time one engages in an act of a certain type, especially if the act is moderately frequent (as assertion is, and as killing in wartime can be).

Here virtue can come in: an act’s flowing from a virtue allows the act to inherit the intentions and reasons that are attached to the virtue, and virtue is a habit, so this mechanism is perfectly suited to attaching the right intention to each act of a given type.

Saturday, August 15, 2020

Fighting the flu

Some people, perhaps more towards the beginning of our pandemic than now, have said that we wouldn’t have a shutdown for seasonal influenza, and COVID-19 is not much worse. Our best mortality data shows that this argument is unsound: COVID-19 is much worse. But still I wonder if there isn’t something to the idea of turning the argument around to conclude that we should be doing more about the flu—and the common cold, while we’re at it—than we are.

The flu isn’t nearly as deadly as COVID-19, but it does kill many. It causes significant suffering to a much greater number than it kills, and it is very disruptive to the economy. The kinds of public health measures taken against COVID-19 have apparently been extremely effective against the flu, leading to a seven-fold decrease in flu-like symptoms in Australia around April of this year as compared to last year. Of course, for economic reasons, it would not be prudent to shut down businesses and schools to prevent the flu, especially since economic impact is one of the reasons for fighting the flu. And, in my sample of one, I continue to delight in the fact that since spring break, I haven’t had any flu or cold—five months without coughing is completely new to me, and wonderful!

But some of the measures taken against COVID-19 carry little economic cost, and yet might significantly decrease flu transmission. Specifically: voluntary individual social distancing and masks. Prior to the pandemic, comfortable personal space in the U.S. was said to be at least 1.5 feet for good non-romantic friends, four feet for strangers and three for co-workers and casual acquaintances. We could modify our etiquette to increase all these distances to six feet in those circumstances where it is not seriously inconvenient to do so. And we could also make it a part of our social etiquette that we wear good quality masks (which we could presumably make in large numbers at relatively low cost if we put more resources into it in the long term) when we are with those who aren’t very close to us, again when this is not seriously inconvenient.

Of course, there would be many circumstances where distancing and masking would be seriously inconvenient, and our etiquette could take those into account, just as it already allows for exceptions to personal space requirements on public transit and on crowded streets. And in cases where facial expressions are important, or when communicating with members of the Deaf community, one would need to take off one’s mask or use a mask with a window.

And there might well be some bonuses:

  • covering up a significant portion of the face could result in greater social equality for two reasons: (a) decreased lookism because of covering up of much of the face (one of my teens mentioned acne in this connection!) and (b) decreased barriers to social participation by those with serious social anxiety (for instance, I have noticed that I feel more comfortable in social interactions when covered up)
  • potential for avoidance of being the victim of street crime, in that non-accidental violation of one’s personal space would provide an earlier warning of bad intentions (with lots of false positives, of course) and allow earlier evasive and protective action.

It would require research to determine whether such partial measures would have sufficient effectiveness against the flu (and the common cold, which is still pretty unpleasant) to outweigh their inconvenience, at least when any bonuses are added.

Nonetheless, I am kind of thinking of unilaterally implementing some variant of these measures once the pandemic is over. The idea of being on an airplane or in a car with strangers and not wearing a mask—even if flu and common cold are all that one has to worry about—now seems rather weird or even repugnant to me. And I’ve wanted more personal space for a while—I can see myself continuing to step back from people not in my household when having conversations to ensure six feet of separation.

(And of course getting vaccinated for the flu goes without saying. I didn't even bother to write it because it's so obvious until my wife reminded me of how many people don't do it.)

Friday, August 14, 2020

Inclusive vs. proper parthood

Contemporary analytic philosophers seem to treat the “inclusive” concept of parthood, on which each object counts as an improper part of itself, as if it were more fundamental than the concept of proper parthood.

It seems to me that we should minimize the number of fundamental relations that all objects have to stand in. We are stuck with identity: every object is identical with itself. But anything beyond that we should avoid as much as we can.

Now, it is plausible that whatever parthood relation—inclusive parthood or proper parthood—is the more fundamental of the two is in fact a fundamental relation simpliciter. For it is unlikely that parthood can be defined in terms of something else. But if we should minimize the number of fundamental relations that all objects must stand in, then it is better to hold that proper parthood rather than inclusive parthood is a fundamental relation. For every object has to stand in inclusive parthood to itself. But it is quite possible to have objects that are not proper parts of anything else.

On this view, proper parthood will be a fundamental relation, and improper parthood is just the disjunction of proper parthood with identity.
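
In symbols (my formulation of the view just stated), writing < for proper parthood and ≤ for inclusive parthood:

  x \le y \;\iff\; (x < y) \lor (x = y).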

Thursday, August 13, 2020

Relativity and an argument for incompatibilism

A common argument for the incompatibility of freedom and determinism goes something like this (where premises 1, 2 and 3 are implicitly assumed to hold at all times):

  1. It is currently possible that I will do A only if the past and the laws are compatible with my future doing of A.

  2. If determinism is true, then the past and the laws are only compatible with my future doing of what I will in fact do.

  3. So, if determinism is true, the only things that it is currently possible that I will do are the things that I will in fact do.

  4. Freedom requires that at some time it be possible that I will do something other than what I will in fact do.

But given relativity theory, it is not clear what “the past” means in the above arguments, since past is always relative to some reference frame. There are at least four ways of reading (1):

  • Strongest: It is now possible for me to do A only if the events in the complement of my present closed future light-cone and the laws are compatible with my doing A.

  • Stronger: It is now possible for me to do A only if for every reference frame R, the past according to R and the laws are compatible with my doing A.

  • Weaker: It is now possible for me to do A only if for some reference frame R, the past according to R and the laws are compatible with my doing A.

  • Weakest: It is now possible for me to do A only if the events in my present open past light-cone and the laws are compatible with my doing A.

Now, generally we should prefer less strong premises. So we should avoid the Strongest and Stronger readings of (1). But I claim that the analogue of (2) is unjustified if we take the Weaker reading of (1). For suppose A would be a future action. Then the past open light-cone of A will be strictly larger than my current past open light-cone. Determinism tells us that A or its absence is nomically determined by the events in its past open light-cone. But that past open light-cone is strictly larger than my current past open light-cone. And it could be that some event E that is in A’s past open light-cone but not in my current past open light-cone makes a difference as to whether A happens. Then there will be a reference frame R such that this event E would be outside my current past according to R. Thus, A’s or its absence’s being determined by the events in its past open light-cone leaves open the possibility that some event E that isn’t in my current past according to R makes a difference as to whether A happens, and hence that A or its absence need not be determined by the events in my current past according to R.

So, for the argument (1)–(4) to work given relativity, it seems we need the Stronger or Strongest reading for (1).

Is there a better way to fix the argument relativistically? Maybe. I like the idea of replacing (1) with an atemporal formulation:

  5. Action A is only free if its non-occurrence is compatible with the laws and the subset of events in A’s causal history that are outside of my life.

Separation from God

The worst part of being in hell is separation from God. But Jesus did not become separated from God. So how could his suffering atone in place of our deserved punishment of eternity in hell?

Some theologians, perhaps of a kenotic sort, may hold that Jesus did become separated from God. But this is heterodox.

Here is perhaps a solution: separation from God in hell is the worst part of being in hell, but it’s not a punishment.

As it stands, this would contradict the Catechism of the Catholic Church which states: “The chief punishment of hell is eternal separation from God” (1035).

But perhaps we can distinguish two senses of punishment: retributive and non-retributive. Suppose that I am vain, and vanity leads to a fall, namely that I become a plagiarist. My plagiarism, then, is a kind of punishment for my vanity. It is fitting. It is just. But it is not a retribution for my vanity. Here is one feature of this kind of non-retributive punishment: its lack is not an injustice. Suppose I am vain and instead of this leading to further vice, people notice my ridiculous vanity and start laughing at me, which hurts my feelings badly. In this case, I am much better off than in the case where my vanity led me to plagiarism, since I did not become more vicious. But notice that even though becoming more vicious would have been quite fair, my not becoming more vicious isn’t itself unjust. For it is the omission of due retributive punishment that is unjust.

This distinction in hand, we might say that separation from God in hell is non-retributive punishment. Many authors in the 20th century have argued that hell is a kind of choice one makes rather than a retribution. But with the distinction, we can say that this is true of the separation from God: that is what the wicked have chosen, and it is just that they get it, but it is not retributive punishment. There is, however, retributive punishment in hell, the chief part of which is the pain of separation from God. This pain, however, Christ could be said to suffer, for while not himself actually separated from God, he could take on himself the pain of separation on behalf of others, through perfect empathy.

Another simple way to see a problem with infinitesimal probabilities


Suppose I independently randomly and uniformly choose X and Y between 0 and 1, not including 1 but possibly including 0. Now in the diagram above, let the blue event B be that the point (X, Y) lies on one of the two blue line segments, and let the red event R be that it lies on one of the two red line segments. (The red event is the graph of the fractional part of 2x; the blue event is the reflection of this in the line y = x.) As usual, a filled circle indicates a point included and an unfilled circle indicates a point not included; the purple point at (0, 0) is in both the red and blue events.

It seems that B is twice as likely as R. For, given any value of X—see the dotted line in the diagram—there are two possible values of Y that put one in B but only one possible value of Y that puts one in R.
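
To make the counting concrete, here is a small Python sketch of my own: for any given value of X, it lists the Y values that land the point in B and in R.

  def frac(t):
      """Fractional part, for t >= 0."""
      return t - int(t)

  def ys_in_R(x):
      """Y values with (x, Y) on the red event: the graph Y = frac(2x)."""
      return {frac(2 * x)}

  def ys_in_B(x):
      """Y values with (x, Y) on the blue event: the points with x = frac(2Y)."""
      return {x / 2, (x + 1) / 2}

  for x in [0.0, 0.3, 0.5, 0.77]:
      assert len(ys_in_B(x)) == 2 and len(ys_in_R(x)) == 1
  # Swapping the roles of X and Y (the picture is symmetric) reverses the counts.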

But of course the situation is completely symmetric between X and Y, and the above reasoning can be repeated with X and Y swapped to conclude that R is about twice as likely as B.

Hmm.

Of course, there is no paradox in classical probability theory where we just say that the red and blue events have zero probability, and twice zero equals zero.

But if we have any probability theory that distinguishes different events that are classically of zero-probability and says things like “it’s more likely that Y is 0.2 or 0.8 than that Y is 0.2” (say because both events have infinitesimal probability, with one of these infinitesimals being twice as big as the other), then the above reasoning should yield the absurd conclusion that B is more likely than R and R is more likely than B.

Technically, there is nothing new in the above. It just shows that when we have a probability theory that distinguishes classically zero-probability events, that probability theory will fail conglomerability. I.e., we have to reject the reasoning that just because conditionally on any value of X it’s twice as likely that we’re in B as in R, therefore it’s twice as likely that we’re in B as in R. We already knew that conglomerability reasoning had to be rejected in such probability theories. But I think this is a really vivid way of showing the point, as this instance of conglomerability reasoning seems super plausible. And I think the vividness of it makes it clear that the problem doesn’t depend on any kind of weird trickery with strange sets, and that no mere technical tweak (such as moving to qualitative or comparative probabilities) is likely to get us out of it.

Tuesday, August 11, 2020

Yet another variant of the Borel-Kolmogorov paradox

Suppose that a point is uniformly randomly chosen in the unit square. Then you learn that either the point lies on the diagonal y = x (red), or it lies on the horizontal line y = 1/2 (blue). What probability should you assign to its lying on the diagonal?

Answer 1: The diagonal has length √2 and the horizontal line has length 1. Thus, the total length of the lines where the point might be is √2 + 1, and the probability that it’s on the diagonal is √2/(√2 + 1) ≈ 0.59.

Answer 2: We can think of the uniform random choice of a point in the unit square as the choice of two independent coordinates, x and y. Suppose that x has been chosen. Then to be on the diagonal line, y has to equal x, while to be on the horizontal line, y has to equal 1/2. These two things are clearly equally likely, regardless of what x is, so the probability must be 1/2.

Both answers seem reasonable.

Suppose you are attracted to Answer 1 which gives 0.59. Then I can give you an argument for a third answer.

Answer 1b: Here is a way to uniformly choose a point in a square. I first uniformly choose a point in a rectangle whose height is twice its width, and then divide the y coordinate by a factor of two. The point chosen in the square lies on the diagonal (respectively, on the middle horizontal line) if and only if the point chosen in the rectangle lies on the diagonal (respectively, on the middle horizontal line) of the rectangle. But the length of the diagonal of the rectangle is √5, while the middle horizontal line has the same length 1 as in the square. So applying the reasoning behind Answer 1 to the rectangle case, the probability that the point is on the diagonal is √5/(√5 + 1) ≈ 0.69.
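
For the record, a quick numerical check of the two answers:

  from math import sqrt

  diag_square = sqrt(2)              # length of the unit square's diagonal
  diag_rect = sqrt(1 ** 2 + 2 ** 2)  # diagonal of the 1-by-2 rectangle
  answer_1 = diag_square / (diag_square + 1)
  answer_1b = diag_rect / (diag_rect + 1)
  print(round(answer_1, 2), round(answer_1b, 2))  # 0.59 0.69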

Thus, if you are attracted to the “geometrical” reasoning behind Answer 1, there are infinitely many other answers available, corresponding to the infinitely many ways of generating a point uniformly in a square by generating it in a rectangle and squashing or stretching.

This might push you to Answer 2, since the reasoning behind Answer 2 seems much more determinate. But there are variants of Answer 2. Here is another way to generate a point uniformly on the unit square. Rotate the unit square by 45 degrees clockwise around the origin to get a diamond whose extent along each of the x and y axes is √2. Now choose x with a symmetric triangular probability density between 0 and √2, choose y0 uniformly between −1 and 1, and then rescale y0 to make its range fit within the diamond at the chosen x. Parallel reasoning to that used in Answer 2 will now generate a different answer, indeed an answer making the diagonal more likely.

Note that while I put the paradox in terms of conditioning on a measure zero (the union of the two line segments), one can also put the paradox in terms of comparing probabilities if one likes to be able to compare zero probability sets.

Lesson: Either there are infinitely many different kinds of “uniform distributions of a point in a square”, or else we shouldn’t compare sets of zero measure.

Leaving space

Suppose that we are in an infinite Euclidean space, and that a rocket accelerates in such a way that in the first 30 minutes its speed doubles, in the next 15 minutes it doubles again, in the next 7.5 minutes it doubles again, and so on. Then in each of these intervals, the first 30 minutes, the next 15, the next 7.5, and so on, it travels roughly the same distance, since each interval is half as long as the previous one but the speed is at least twice as great. So by the end of the hour it will have traveled an infinite distance. So where will it be? (This is a less compelling version of a paradox Josh Rasmussen once sent me. But it’s this version that interests me in this post.)
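For concreteness, here is a tiny numerical sketch of the divergence (my own illustration, with a hypothetical initial speed of 1 distance unit per hour and a first interval of half an hour): each halved interval contributes at least as much distance as the one before it, so the running total grows without bound as the hour elapses.

    # Hypothetical units: initial speed 1, first interval 1/2 hour; speed doubles as each interval ends.
    speed, duration = 1.0, 0.5
    total, elapsed = 0.0, 0.0
    for i in range(20):
        total += speed * duration   # lower bound: use the speed at the start of the interval
        elapsed += duration
        speed, duration = speed * 2, duration / 2
        print(f"after {elapsed:.6f} h: at least {total:.1f} distance units")
    # each interval adds at least 0.5 units, so the distance diverges as elapsed approaches 1 hour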

The causal finitist solution is that the story is impossible, for the final state of the rocket depends on infinitely many accelerations, and nothing can causally depend on infinitely many things.

But there is another curious solution that I’ve never heard applied to questions like this: after an hour, the rocket will be nowhere. It will exist, but it won’t be spatially related to anything outside of itself.

Would there be a spatial relationship between the parts of the rocket? That depends on whether the internal relationships between the parts of the rocket are dependent on global space, or can be maintained in a kind of “internal space”. One possibility is that all of the rocket’s particles would lose their spatiality and exist aspatially. Another is that they would maintain spatial relationships with each other, without any spatial relationships to things outside of the rocket.

While I embrace the causal finitist solution, it seems to me that the aspatial solution is pretty good. A lot of people have the intuition that material objects cannot continue to exist without being in space. I don’t see why not. One might, of course, think that spatiality is definitive of materiality. But why couldn’t a material object then continue to exist after having lost its materiality?

Monday, August 10, 2020

Do "one thought too many" objections work?

Consider “one thought too many” objections in ethics, on which certain considerations that objectively favor an action are nonetheless a “thought too many”, and it is better to act without them. Examples given in the literature involve using consequentialist reasoning when saving one’s spouse, or visiting a sick friend because of duty.

I’ve been sympathetic to such objections in the past, but now I am inclined to think some of them wrongheaded. Let’s say that I set out to visit my sick friend for the “right reason”, namely friendship. But on my way to my friend, I might get a brilliant idea about how to get past an obstacle in a video game that I’ve been playing, and that might result in a temptation to go back home and try out the idea. Moreover, given my free will, there is a non-zero chance that I will wickedly succumb to that temptation, sacrificing friendship to a video game.

But had I also kept the thought of duty before my mind, the chance that I would have sacrificed friendship and duty to a video game would have been lower, since in general the more reasons I am impressed by in favor of a choice, the more likely I am to make that choice. In other words, the “one thought too many” makes my determination to visit my friend more resilient. Liable to temptation as we are, it is a good thing if we keep before our minds all the reasons that favor an action.

But what if I were morally perfect, and there were no chance of succumbing to temptation? Would there still be a benefit to having that allegedly unneeded “extra thought”? Typically, yes. For suppose that on my way to visit my friend, I find myself obtaining morally significant reasons to do something else, reasons that are strong but do not rise to the level of a duty. If the only reason to visit my friend were friendship, these new competing reasons might well be morally sufficient to outweigh the reason for the visit. But if I have the additional reason of duty in favor of visiting my friend, then these reasons may no longer outweigh. In other words, acting on all the reasons favoring the visit to my friend makes me ready to rationally respond to new reasons to the contrary.

I have argued that God is omnirational: that God acts on all the (uncanceled) reasons that favor his actions. The above considerations suggest that we should approximate this omnirationality.

There is still room for a “one thought too many” objection in some cases. Sometimes the extra reason is a bad reason. Then of course we have a thought too many. Sometimes, too, counting the extra reason involves double counting: “I have two reasons to keep my promise. First, I promised. Second, it’s my duty.” But in cases where the extra reason is a good reason, and it’s not canceled by some higher order reason, it’s a good thing to keep that extra reason around, to be more resilient in the face of temptation and to be better at weighing new reasons.

Friday, August 7, 2020

Virtues as pocket oracles

Consider three claims:

  1. Virtues when fully developed make it possible to see what is the right thing to do without conscious deliberation.

  2. Acting on fully developed virtues is the best way to act.

  3. Acting on a pocket oracle, which simply tells you in each case what is to be done, misses out on something important in our action.

All three claims sound pretty plausible, but there is a tension between them. To make the tension evident, ask this question:

  • What makes a fully developed virtue relevantly different from a pocket oracle?

Consider three possible answers to the question:

Answer 1: The virtue makes you understand why the right thing is right (e.g., because it is courageous, or loyal, etc.). The oracle just says what is right.

But: We can easily add that the oracle gives an explanation, and that you understand that explanation. Intuition (3) is still going to be there.

Answer 2: The virtues are in us while the oracle is external.

But: Suppose that due to a weird mutation your gut had the functions of the pocket oracle, and gave you literal gut feelings as to what the right thing to do is (and, if necessary, why).

Answer 3: The virtues are formed by one’s own hard work.

But: Perhaps I had to work really hard to get the oracle. Or maybe I designed the AI system it uses.

Maybe there is some other answer to the question. But I would prefer to say that there is a relevant similarity between the case where the virtue tells me what to do and when the oracle does (even the gut oracle), namely that in neither case did I consciously weigh the options myself to come up with the answer.

I would deny (1). There are some independent reasons for that.

First, in difficult cases the struggle is important. This struggle involves oneself being pulled multiple ways by the genuine goods favoring the different actions. It is important to acknowledge the competing goods, especially if they are weighty. If I am trying to decide whether to rescue the drowning friend of five years or the mere acquaintance, it is by being deliberationally pulled by the good of rescuing the mere acquaintance that I acknowledge their moral call on me.

Second, there is sometimes literal and complex calculation going on in decisions. There is a 25% chance of rescuing 74 people versus a 33% chance of rescuing 68 people. It is not a part of perfected human virtue to have us do arithmetic in our heads instantly and see whether (0.25)(74) or (0.33)(68) is bigger. Of course, most of the time the deliberation is not mathematical, but that only makes things harder. We are not gods, and our agential perfection does not involve divine timeless deliberation.

Third, there is a trope in some science fiction (Watts’ Blindsight is where I saw this first) that there are non-human beings that are highly intelligent but lack consciousness. The idea is that consciousness involves some kind of second order reflection which actually slows down an agent, and agents that lack this evolutionary complication might actually be better. It seems to me that the temporally extended and self-reflective experience of deliberation is actually quite important to us as human agents. We are not gods or these kinds of aliens.

"For the common good"

Aquinas thinks that for something to be a law, it must be “for the common good” (in addition to satisfying other conditions). Otherwise, the legislation (as we might still call it) is not really a law, and does not morally require obedience except to avoid chaos.

But suppose we have a cynical view of legislative activity, thinking that many cases of legislation are imposed not in order to further the common good but in order to get the legislators reelected. One may worry that even if such a piece of legislation happens to further the common good, it is not for the common good but for reelection, and hence is not a valid law on Aquinas’ criteria.

Here is a possible way out. We should limit our cynicism. Start by noting that there is a multiplicity of different motives here, perhaps importantly different ones: the legislator may think the law will be popular with their constituents; voting for the law may help the legislator form an alliance with other legislators that will help them get reelected; or the legislation may secure a large campaign contribution from an interested party. The last is the most crass, of course. But even so, it is reasonable to think that in most cases the legislator thinks that their getting reelected serves the common good. There may be some cases of serious corruption or pursuit of power where even this is gone, and in those cases we really should worry about the validity of the supposed law. But in many cases, even when there is corruption, I expect the legislators think it is good for their country that they be in office.

This solution reads “for the common good” broadly. The having of the legislation need not be aimed at the common good, but it is enough if the passing of the legislation—or maybe just the legislator’s voting in favor of it—is aimed at the common good. One may worry that this is overbroad: surely, one might think, the content of the legislation has to serve the common good.

But that would be too strict a criterion. The common good is the common good of the relevant political entity, say a country. But international negotiation can result in treaties where two countries each pass a coordinated piece of legislation such that: (a) the content of each piece of legislation harms the citizens of the country it is passed in and benefits the citizens of the other country; but (b) the benefits outweigh the harms in such a way that the coordinated deal is for the common good of each country. In this case, it is not the content of the legislation that serves the good of the people governed by it, but the fact of there being such legislation serves their good, by getting the other country to pass the coordinated legislation. And this seems like it could be a perfectly legitimate case of valid legislation, assuming the harms are not of a kind that are morally impermissible (e.g., the legislation invidiously harming a vulnerable group).

In fact, the case of the legislator voting for a piece of legislation in order to get reelected is not very different from such international negotiation. In each case, the legislation as such may not directly serve the common good, but its promotion is nonetheless thought to lead to the common good. So it is important to read the “for the common good” criterion broadly. But if we read it this broadly, then apart from really serious cases of corruption or power madness, we have good reason to think that most of the legislation we are under in democratic societies is “for the common good”, unless it is clearly immoral (further discussion here would require separate analysis of the two ways legislation can be immoral: by requiring immoral action from one or by being an immoral imposition that doesn’t require immoral action from one).

A value asymmetry in double effect reasoning

The Knobe effect is that people judge cases of good and bad foreseen effects differently with respect to intention: in cases of bad effects, they tend to attribute intention, but not so in cases of good effects.

Now, this is clearly a mistake about intention: there is no such asymmetry. However, I wonder if there isn’t a real asymmetry in the value of the actions. Simplify by considering actions that have exactly one unintended side-effect, which is either good or bad. My intuition says that an action’s having a foreseen bad side-effect, even when that side-effect is unintended and the action is justified by Double Effect, makes the action less valuable. But on the other hand, an action’s having a foreseen good side-effect, when that side-effect is unintended, doesn’t seem to make the action any better.

Let me try to think through this asymmetry intuition. I would be a worse person if I intended the bad side-effect. But I would be a better one if I intended the good side-effect. My not intending the good side-effect is a sign of vice in me (as is clear in the standard Knobe case, where the CEO’s indifference to the environmental benefits of his action is vicious). So not only does the presence of an unintended good side-effect not make the action better, it makes it worse. But so far there is no asymmetry: the not intending of the bad is good and the not intending of the good is bad. The presence of a good side-effect gives me an opportunity for virtue if I intend it and for vice if I fail to intend. The presence of a bad side-effect gives me an opportunity for vice if I intend it and for virtue if I fail to intend.

But maybe there still is an asymmetry. Here are two lines of thought that lead to an asymmetry. First, think about unforeseen, and even unforeseeable, effects. Let’s say that my writing this post causes an earthquake in ten years in Japan by a chaotic chain of events. I do feel that’s bad for me and bad for my action: it is unfortunate to be the cause of a bad, whether intentionally or not. But I don’t have a similar intuition on the good side. If my writing this post prevents an earthquake by a chaotic chain of events, I don’t feel like that’s good for me or my action. So perhaps that is all that is going on in my initial value asymmetry: there is a non-moral disvalue in an action whenever it unintentionally causes a bad effect, but no corresponding non-moral value when it unintentionally causes a good effect, and foresight is irrelevant. But my intuitions here are weak. Maybe there is nothing to the earthquake intuition.

Second, normally, when I perform an action that has an unintended bad side-effect, that is a defect of power in my action. I drop the bombs on the enemy headquarters, but I don’t have the power to prevent the innocents from being hit; I give my students a test, but I don’t have the power to prevent their being stressed. The action exhibits a defect of power and that makes it worse off, though not morally so. Symmetry here would say that when the action has an unintended good side-effect, then it exhibits positive power. But here exactly symmetry fails: for the power of an action qua action is exhibited precisely through its production of intended effects. The production of unintended effects does not redound to the power of the action qua action (though it may redound to its power qua event).

So, if I am right, an action is non-morally worse off, worse off as an exercise of power, for having an unintended bad effect, at least when that bad side-effect is unavoidable. What if it is avoidable, but I simply don’t care to avoid it? Then the action is morally worse off. Either way, it’s worse off. But this is asymmetric: an action isn’t better off as an exercise of power by having an unintended good effect, regardless of whether the good side-effect is avoidable or not, since power is exhibited by actions in fulfilling intentions.

Thursday, August 6, 2020

Humdrum cases of double effect reasoning

While the Principle of Double Effect is mostly discussed in the literature in connection with very bad effects, typically death, that trigger deontic concerns, lately philosophers (e.g., Masek) have been noting that double effect reasoning can be important in much more humdrum situations.

For instance, I cause suffering to my students in many ways: stress over assignments, awkwardness over small group discussions, boredom, etc. I hope that this suffering is normally of a sort that doesn’t trigger deontic concerns. If an evildoer told me that I must bore my students or else he’d kill someone, intentionally boring my students would be the right thing to do. However, under normal circumstances, it would be wicked of me to intend my students to be bored or stressed, but it is not wicked for me to adopt pedagogical techniques that, unfortunately, foreseeably result in unintended boredom or stress (reviewing material that some students know is apt to be boring to them; tests are unavoidably stressful to most).

Another interesting and fairly humdrum case is this. You are speaking to a large group, and you realize that some people in the audience will misunderstand a sentence you are about to say as asserting something false. However, the issue is not important, time is limited, and the misunderstanding is not egregious as the falsehood is not far from the truth. So you reasonably choose not to waste time over the sentence. But if you intended the misunderstanding, you would be lying or at least deceiving.

Wednesday, August 5, 2020

Label independence and lotteries

Suppose we have a countably infinite fair lottery, in John Norton’s sense of label independence: in other words, probabilities are not changed by any relabeling—i.e., any permutation—of tickets. In classical probability, it’s easy to generate a contradiction from the above assumptions, given the simple assumption that there is at least one set A of tickets that has a well-defined probability (i.e., that the probability that the winning ticket is from A is well-defined) and that has the property that both A and its complement are infinite. John Norton rejects classical probability in such cases, however.

So, here’s an interesting question: How weak are the probability theory assumptions we need to generate a contradiction from a label independent countably infinite lottery? Here is a collection that works:

  1. The tickets are numbered with the set N of natural numbers.

  2. If A and B are easily describable subsets of the tickets that differ by an easily describable permutation of N, then they are equally probable.

  3. For every easily describable set A of tickets, either A or its complement is (or both are) more likely than the empty set.

  4. If A and B are disjoint and each is more likely than the empty set, then A ∪ B is more likely than A or is more likely than B.

  5. Being at least as likely as is reflexive and transitive.

Here, Axioms 3 and 4 are my rather weak replacement for finite additivity (together with an implicit assumption that easily describable sets have a well-defined probability). Axiom 2 is a weak version of label independence, restricted to easily describable relabeling. Axiom 5 is obvious, and the noteworthy thing is that totality is not assumed.

What do I mean by “easily describable”? I shall assume that sets are “easily describable” provided that they can be described by a modulo 4 condition: i.e., by saying what value(s) the members of the set have to have modulo 4 (e.g., “the evens”, “the odds” and “the evens not divisible by four” are all “easily describable”). And I shall assume that a permutation of N is “easily describable” provided that it can be described by giving a formula fi(x) using integer addition, subtraction, multiplication and division for i = 0, 1, 2, 3 that specifies what happens to an input x that is equal to i modulo 4. (E.g., the permutation that swaps the evens and the odds is given by the formulas f2(x)=f0(x)=x + 1 and f3(x)=f1(x)=x − 1.)

Proof: Let A be the set of even numbers. By (3), A or N − A is more likely than the empty set. But A and N − A differ by an easily describable permutation (swap the evens with the odds). So, by (2) they are equally likely. So they are both more likely than the empty set. Let B be the subset of A consisting of the even numbers divisible by four and let C = A − B be the even numbers not divisible by 4. Then B and C differ by an easily describable permutation (leave the odd numbers unchanged; add two to the evens divisible by four; subtract two from the evens not divisible by four). Moreover, A and B differ by an easily (but less easily!) describable permutation. (Exercise!) So, A, B and C are all equally likely by (2). So they are all more likely than the empty set. So, A = B ∪ C is more likely than B or more likely than C (or both) by (4). But this contradicts the fact that A is equally likely as B and as C.
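As a quick sanity check on the two explicitly described permutations used in the proof, here is a minimal sketch (my own, in Python) verifying the claims on a finite initial segment of N; the cutoff is arbitrary and of course proves nothing about all of N:

    N = 10_000  # arbitrary finite cutoff

    def swap_evens_odds(x):
        # x + 1 if x is even, x - 1 if x is odd: swaps the evens with the odds
        return x + 1 if x % 2 == 0 else x - 1

    def swap_B_C(x):
        # leave odds fixed; add 2 to evens divisible by 4; subtract 2 from the other evens
        if x % 2 == 1:
            return x
        return x + 2 if x % 4 == 0 else x - 2

    evens = {x for x in range(N) if x % 2 == 0}
    odds = set(range(N)) - evens
    B = {x for x in range(N) if x % 4 == 0}   # evens divisible by four
    C = evens - B                             # evens not divisible by four

    assert {swap_evens_odds(x) for x in range(N)} == set(range(N))  # a bijection on the segment
    assert {swap_evens_odds(x) for x in evens} == odds              # maps A onto N - A
    assert {swap_B_C(x) for x in range(N)} == set(range(N))         # also a bijection on the segment
    assert {swap_B_C(x) for x in B} == C                            # maps B onto C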

Monday, August 3, 2020

Uncountably infinite fair lottery

A fair lottery is going to be held. There are uncountably infinitely many players and the prize is infinitely good. Specifically, countably infinitely many fair coins will be tossed, and corresponding to each infinite sequence of heads and tails there is a ticket that exactly one person has bought.

Along comes Truthful Alice. She offers you a deal: she’ll take your ticket and give you two tickets. Of course, you go for the deal since it doubles your chances of winning, and Alice gives the same deal to everyone else, and everyone else goes for it. Alice then has everyone’s tickets. She now proceeds as follows. If you had a ticket with the sequence X1X2X3..., she gives you the tickets HHHHHX1X2X3... and HHHHTX1X2X3.... And she keeps for herself all the tickets that start with something other than HHHH.

So, everyone has gone for the deal, and Alice has a 15/16 chance of winning (since that’s the chance that the coin sequence won’t start with HHHH). That’s paradoxical!
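Here is a minimal check of the combinatorial core of Alice’s trick (my own sketch, in Python, truncating tickets to finite strings of hypothetical lengths; the truncation obviously loses the feature that the players’ original tickets exhaust the whole space, so this only verifies the bookkeeping of the swap and the 15/16 figure, not the paradox itself):

    from itertools import product

    # hypothetical truncation: old tickets are length-3 strings, new tickets are length-8 strings
    old_tickets = [''.join(s) for s in product('HT', repeat=3)]
    new_tickets = ['HHHHH' + t for t in old_tickets] + ['HHHHT' + t for t in old_tickets]

    # every player gets two distinct tickets, and no ticket is handed out twice...
    assert len(new_tickets) == len(set(new_tickets)) == 2 * len(old_tickets)
    # ...yet every handed-out ticket starts with HHHH, an event of probability 1/16
    assert all(t.startswith('HHHH') for t in new_tickets)

    # Alice keeps every other ticket: 15/16 of them
    all_tickets = {''.join(s) for s in product('HT', repeat=8)}
    print(len(all_tickets - set(new_tickets)) / len(all_tickets))   # 0.9375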

This paradox suggests that there may be something wrong with the concept of a fair infinite lottery even when the number of tickets is uncountable.

Here is one way to soften the paradox. If you reason with classical probability theory, without any infinitesimals, you will agree that the deal offered you by Alice doubles your chances of winning, but you will also note that the chance it doubles is zero, and doubling zero is no increase. So if you reason with classical probability theory, you will be indifferent to Alice’s deal. There is still something strange in thinking that Alice is able to likely enrich herself at the expense of a bunch of people doing something they are rationally indifferent about. But it’s less surprising than if she can do so at the expense of people doing what they rationally ought.

There is another thought which I find myself attracted to. The very concept of a fair lottery breaks down in infinite cases. If the lottery were fair, exchanging a ticket for two tickets would be a good deal. But the lottery isn’t fair, because there are no infinite fair lotteries.

Should we vaccinate for COVID-19 ahead of Phase III trial results?

Warning: This is speculative back-of-the-envelope discussion of public policy outside of my fields of expertise.

For some time I’ve been thinking that perhaps, now that the Phase II safety trials of some coronavirus vaccines have been completed, we should just start vaccinating prior to completion of the Phase III trials. But of course, it’s not my field! Yesterday, however, I was pleased to note an article in Forbes by a biostatistician advocating the same thing.

While it’s not my field, I am a decision theorist, so here is a back-of-the-envelope utility calculation. The Oxford vaccine has had Phase I/II safety testing on 543 participants. Of course, there is about a 37% chance that, if there were an adverse effect afflicting one in 543 participants, it would be missed by a sample of that size (since (1 − 1/543)^543 ≈ 1/e ≈ 0.37). So, let’s suppose that there is a 40% chance of a lethal side-effect afflicting 1 in 600 users. Then, counting vaccine-related death, the disutility of giving the vaccine to someone is (0.4)(1/600) ≈ 0.0007 deaths.

Given that Phase I/II does reveal some evidence of effectiveness, albeit evidence not sufficient for knowledge, we might say that we now have about a 70% probability that the vaccine works, and let’s say that if it works, it works for 90% of users (of course, these numbers are made up). Moreover, by the best CDC estimate, each COVID-19 infection has a 0.0065 chance of resulting in death. So, counting only their life or death, the utility of giving the vaccine to someone who would otherwise have become infected is (0.7)(0.9)(0.0065) ≈ 0.004.

But here is the rub. We don’t actually know whether a given individual would be infected if they did not receive the experimental vaccine. Eventually there will probably be a fully tested vaccine and/or a treatment. So far, pretty much the worst major location in the US has been New York City, and only 25% of the city has been infected. The utility of giving the vaccine to one person in New York City at the right time would thus have been something like (0.25)(0.004) ≈ 0.001, again counting only their life or death.

So, the disutility is 0.0007 deaths and the utility is 0.001 lives, and these numbers are so close that the uncertainties in all the back-of-the-envelope estimates likely swamp the difference. So by the above numbers, giving out vaccines that have not been fully tested probably wouldn’t be a good idea, since as a matter of general policy we should keep to established testing protocols unless there is a compelling case to the contrary, and the case so far does not seem compelling.
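For what it’s worth, here is the same back-of-the-envelope calculation as a tiny script (the inputs are just the rough, made-up or approximate figures assumed above, nothing more):

    # rough figures from the discussion above
    p_lethal_effect = 0.4       # assumed chance there is a lethal side-effect at all
    lethal_rate = 1 / 600       # assumed rate of that side-effect if it exists
    p_vaccine_works = 0.7       # assumed probability the vaccine works at all
    efficacy = 0.9              # assumed fraction of users protected if it works
    ifr = 0.0065                # CDC best-estimate chance an infection results in death
    p_infected = 0.25           # rough fraction of New York City that was infected

    disutility = p_lethal_effect * lethal_rate                   # ~0.0007 deaths per vaccination
    utility = p_infected * p_vaccine_works * efficacy * ifr      # ~0.001 deaths averted per vaccination

    print(f"expected vaccine-caused deaths per person vaccinated: {disutility:.4f}")
    print(f"expected COVID deaths averted per person vaccinated:  {utility:.4f}")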

However, a number of things can affect the above calculations, with items 1–4 counting in favor of the not fully tested vaccines and items 5 and 6 against:

  1. very early data from some Phase III trials—e.g., the lack of deaths in the week following the initial injection—could be used to significantly lower the probability of lethal effect

  2. perhaps past data from other vaccine trials, and/or from medical theory, justifies one in thinking that the chance of a 1 in 600 lethal effect is much smaller than 40%; it’s not my field, so I have no idea if this is the case

  3. infection of others: the above only counts the benefit to the person being vaccinated; but if that person got COVID-19, they would on average have also infected about one other person; so there is a significant benefit to society from a successful vaccination

  4. some people are at higher risk for COVID-19 death; giving them the untested vaccine might make sense even if it doesn’t make sense for the average person; however, this is muddied by the fact that intuitively those in higher risk categories may also be at higher risk for vaccine complications

  5. behavior shifts: people who receive a vaccine are likely to take fewer precautions against infecting themselves and others, so if the vaccine doesn’t work, being vaccinated is likely to raise their chance of infection

  6. in many locations, the infection rate between now and whenever there is a fully tested successful vaccine or treatment may be rather lower than in New York City

At least the following is true: the option should be taken seriously, and careful calculations should be done.