Wednesday, January 31, 2024

Modality and the Axiom of Choice

Suppose that the set theory of our world is a Solovay model, where we don’t have the Axiom of Choice (AC), and where every subset of the reals is Lebesgue measurable. Now imagine that God picks out a line in space, and defines the Vitali equivalence relation for points on that line (where two points are equivalent if and only if the distance between them is a rational number). It is then surely within God’s power to create a particle of some unexemplified type T at exactly one point in every equivalence class. There is nothing incoherent about that! But if God did that, then there would surely be a set of the points containing a particle of type T. And that set would be a nonmeasurable Vitali set.
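The equivalence relation can be illustrated in a toy model (my own simplification, not part of the argument): take points of the form q + r√2 with q and r rational, so that the difference of two points is rational exactly when their √2-coefficients agree.

```python
from fractions import Fraction

# Toy model (an illustrative simplification): a point on the line is
# q + r*sqrt(2), stored as the pair (q, r) of exact rationals.

def vitali_equivalent(p1, p2):
    """Two points are Vitali-equivalent iff their distance is rational.
    (q1 - q2) + (r1 - r2)*sqrt(2) is rational iff r1 == r2."""
    return p1[1] == p2[1]

def representative(p):
    """Pick one canonical point per equivalence class by dropping the
    rational part. In this countable toy model the choice is definable;
    over all the reals, picking one point per class is where AC comes in."""
    return (Fraction(0), p[1])

a = (Fraction(1, 2), Fraction(3))  # 1/2 + 3*sqrt(2)
b = (Fraction(7), Fraction(3))     # 7   + 3*sqrt(2)
c = (Fraction(7), Fraction(4))     # 7   + 4*sqrt(2)
print(vitali_equivalent(a, b))  # True: they differ by the rational 13/2
print(vitali_equivalent(a, c))  # False: they differ by 13/2 + sqrt(2)
```

Since this toy model is countable and the representative is definable outright, it cannot yield a nonmeasurable set; the force of the thought experiment lies in God making the choice across all the uncountably many classes of reals at once.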

So what?

Well, prima facie, there are three possibilities about the existence of nonmeasurable sets:

  1. Necessarily, there are no nonmeasurable sets.

  2. Necessarily, there are nonmeasurable sets.

  3. It is contingent whether there are nonmeasurable sets.

My argument strongly suggests that if there are no nonmeasurable sets, it is nonetheless possible that there are nonmeasurable sets. Hence, (1) is ruled out.

So we have an argument for the disjunction of (2) and (3).

Now, I think a lot of people have the intuition that mathematical facts are necessary. If so, then (3) is ruled out. They will see this as an argument for (2).

I don’t see it that way myself: I am quite open to contingent mathematical truths.

More generally, the argument shows that:

  4. For any set of disjoint nonempty subsets of the reals, it is possible that there is a choice function.

Again, if the existence of pure sets is not a contingent matter, we conclude AC is true for all subsets of the reals.

An odd thought about ZFC

The axioms of ZFC set theory can be divided into (a) the positive axioms, which say that a set with certain properties exists, and (b) two negative axioms that deny the existence of certain sets (Extensionality: given any set, there is no other set with the same members; Regularity: no irregular sets).

The positive axioms divide further into two classes: (i) those that are obvious special cases of naive set theory’s Axiom of Comprehension, and (ii) the Axiom of Choice.

Here is an alternate intellectual history thought experiment. Suppose we never discovered the contradiction in naive set theory or anything like it, maybe because we had a psychological block against thinking about non-self-membered sets, applying Cantor’s Theorem to the universal set, etc. The Axiom of Choice would continue to have an intuitive plausibility, and the “mathematical need” for it, say in the case of the Hahn-Banach Theorem, would likely still arise. And so we would be pulled to adopt it.

This makes me think this. The other positive axioms of ZFC (i.e., the positive axioms of ZF) have an ad hoc feel to them. They are special cases of Comprehension, carefully chosen to both give enough applications of Comprehension and to avoid contradiction (we hope). I feel that much of the plausibility of the other positive axioms of ZFC comes from their being special cases of the highly intuitive—but incoherent—Axiom of Comprehension. And that’s a little suspicious.

Normally one thinks of the Axiom of Choice as the most suspicious of ZFC’s axioms. But here we have a source of suspicion for axioms of ZFC that does not affect Choice.

Well, maybe. Maybe an enemy of Choice could say that both Choice and Comprehension are the fruit of the poisonous tree of principles of plenitude.

Tuesday, January 30, 2024

Pressuring people to violate conscience

If you pressure someone to act against their deeply-set moral beliefs, then your pressure is an action which, if successful, results in:

  1. the person’s changing their deeply-set moral beliefs, or

  2. the person’s acting against their deeply-set moral beliefs.

Our experience of life shows that (2) is rather more likely than (1). People rarely change their deeply-set moral beliefs, but they act against them all too frequently.

But it is wrong to act against one’s moral beliefs. Moreover, acting against one’s moral beliefs is more likely to be culpable than other wrongdoings. For in other wrongdoings, there is always the possibility of being inculpable due to ignorance. But when one acts against one’s moral beliefs, that excuse isn’t available. There is still the possibility that one is insane or that fear of the pressure has taken away one’s free will, but it seems very plausible that most of the time when someone acts against their deeply-set moral beliefs, they are culpable.

Thus, if you pressure someone to act against their deeply-set moral beliefs, there is a very significant chance—bigger than 25%, it is reasonable to estimate—that if you succeed, you will do so by having gotten them to act culpably wrongly. But we should have learned from Socrates that there is nothing worse in life than culpable wrongdoing. Thus the pressure risks a greater than 25% chance of imposing a harm worse than death on the person being pressured.

There are times when it is permissible to impose on someone a 25% risk of death, but that requires very grave reasons indeed, and one should go to great lengths to avoid such an imposition if at all possible. One requires even graver reasons to pressure someone to go against their deeply-set moral beliefs, and one should go to greater lengths to avoid such an imposition.

Remark 1: Here is a kind of a case where it is easier to justify pressure. The harm in violating a mistaken conscience is two-fold: (i) doing wrong, and (ii) culpably so. But now suppose that in fact the person is objectively morally obligated to perform the action they are being pressured to. In fact, let’s suppose the following: the person has a particularly grave objective obligation to ϕ, but they mistakenly believe they have a mild or moderate obligation not to ϕ. Then we may imagine that if they ϕ, they culpably violate a moderate moral obligation, but if they refuse to ϕ, they inculpably violate a grave moral obligation. Which is better? Is it more destructive of one’s moral character to inculpably violate a grave obligation or to culpably violate a moderate one? This is not clear. So in a case like that, pressure is a lot easier to justify.

Conversely, where pressure is hardest to justify is where there is no objective moral duty for the person to perform the action they are being pressured to.

Remark 2: Does it make any difference whether the deeply-set moral beliefs are religious in nature or not? My initial thought is that it does not. In both cases, we have the grave harm of being pressured to wrongdoing, and likely culpable wrongdoing. But on reflection, there can be a difference. Our lives as persons revolve around significant interpersonal relationships. Damaging the deepest relationships between persons requires extremely strong justification. That is why, for instance, we do not (with some exceptions) require spouses to testify against each other in court. But in fact the deepest relationship in a person’s life is their relationship with God. And to go not only against morality but against what one takes to be the will of God imposes particularly nasty damage on that relationship. Thus when the person cognizes the action they are being pressured to take as not only wrong but contrary to the will of God, the harm that befalls them in doing the action is especially grave. Note that for this harm, it is not necessary that the action be contrary to the will of God—it is enough that the agent believes that it is.

I mean the argument in the previous paragraph to depend on the fact that the person really is in a relationship with God, and in particular that God really exists. I am not talking of the merely subjective harm of thinking that an imaginary relationship is harmed. The extent to which that argument can be extended to people whose religion is non-theistic takes thought. One might hope that these people are still having a relationship with God in and through their religion, and then a version of the point may well apply.

Sunday, January 28, 2024

Measure screen latency

Last weekend, I spent a while measuring screen latency (for gaming purposes) on our TV, using a Raspberry Pi, a photodiode and an oscilloscope. Instructions are here.

Friday, January 26, 2024

Counting with plural quantification

I’ve been playing with the question of what if anything we can say with plural quantification that we can’t say with, say, sets and classes.

Here’s an example. Plural quantification may let us make sense of cardinality comparisons that go further than standard methods. For instance, if our mathematical ontology consists only of sets, we can still define cardinality comparisons for pluralities of sets:

  1. Suppose the xx and the yy are pluralities of sets. Then |xx| ≤ |yy| iff there are zz that are an injective function from the xx to the yy.

What is an injective function from the xx to the yy? It is a plurality, the zz, such that each of the zz is an ordered pair of sets, and such that for any a among the xx there is a unique b among the yy such that (a,b) is among the zz, and for any b among the yy there is at most one a among the xx such that (a,b) is among the zz.
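In set-sized miniature (Python sets standing in for pluralities—a deliberate simplification, since pluralities are precisely not meant to be single objects), the definition can be checked mechanically:

```python
def is_injection(zz, xx, yy):
    """Check that the plurality zz of ordered pairs is an injective
    function from the xx to the yy, per the definition in the text."""
    zz = set(zz)
    # Every pair must go from the xx into the yy.
    if not all(a in xx and b in yy for (a, b) in zz):
        return False
    # Each a among the xx is paired with exactly one b among the yy.
    if not all(len({b for (a2, b) in zz if a2 == a}) == 1 for a in xx):
        return False
    # Each b among the yy has at most one a among the xx paired with it.
    if not all(len({a for (a, b2) in zz if b2 == b}) <= 1 for b in yy):
        return False
    return True

xx, yy = {1, 2}, {"p", "q", "r"}
print(is_injection({(1, "p"), (2, "q")}, xx, yy))  # True
print(is_injection({(1, "p"), (2, "p")}, xx, yy))  # False: not injective
print(is_injection({(1, "p")}, xx, yy))            # False: 2 is unmatched
```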

This lets us say stuff like:

  2. There are more sets than members of any set.

Or if our mathematical ontology includes sets and classes, we can compare the cardinalities of pluralities of classes using (1), as long as we can define an ordered pair of classes—which we can, e.g., by identifying the ordered pair of a and b with the class of all ordered pairs (i,x) where i = 0 and x ∈ a or where i = 1 and x ∈ b.
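The tagged-union pairing can be sketched concretely, with frozensets standing in for classes (a simplification, since proper classes are not set-sized objects):

```python
def class_pair(a, b):
    """Ordered pair of two 'classes' a and b as the class of tagged members:
    (0, x) for x in a together with (1, x) for x in b."""
    return frozenset({(0, x) for x in a} | {(1, x) for x in b})

p = class_pair({1, 2}, {2, 3})
q = class_pair({1, 2}, {2, 3})
r = class_pair({2, 3}, {1, 2})
print(p == q)  # True: same components in the same order
print(p == r)  # False: order matters
```

Each component is recoverable (a is the class of x with (0,x) in the pair), so two such pairs are equal exactly when their components are equal in order, which is all an ordered pair needs to do.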

This would then let us say (and prove using a variant of Cantor’s diagonal argument, assuming Comprehension for pluralities):

  3. There are more classes than sets.

The authority of game rules

I just set myself this task: Without moving my middle and index fingers, I would wiggle the ring finger of my right hand twenty times in ten seconds. I then fulfilled this task (holding the middle and index fingers still with my left hand). What was I doing by fulfilling my intention? I think I was playing and winning a game, albeit not a very fun one.

Here are four ways of describing a game fitting my intentions:

  1. Victory condition: Wiggle ring finger 20 times in 10 seconds. Rules: Don’t move middle and index fingers.

  2. Victory condition: Wiggle ring finger 20 times in 10 seconds without moving middle finger. Rules: Don’t move index finger.

  3. Victory condition: Wiggle ring finger 20 times in 10 seconds without moving index finger. Rules: Don’t move middle finger.

  4. Victory condition: Wiggle ring finger 20 times in 10 seconds without moving middle and index fingers. Rules: None.

Each of these generates the same gameplay: exactly the same things count as victory, since cheaters never win, and so you win only if you follow the rules. Ockham’s Razor suggests that they are all the same game. In other words, whether we put some constraints as rules or build them into the victory condition is just a matter of descriptive convenience.

If this is right, then here is a first attempt at a simple account of what we are doing when we play a game:

  5. To play a game is to try to achieve the game’s victory condition without breaking the rules.

This gives a neat, simple and reductive account of the authority of the rules: Their authority comes from the fact that according with them is a necessary condition for achieving an end that one has adopted. It is simply the authority of instrumental rationality.

Of course, all sorts of complications come up. One is that games sometimes have a score instead of a victory condition. In that case, what you are doing is aiming at higher scores rather than trying to achieve a specific victory condition, with some sort of an understanding of how breaking the rules affects the score (maybe it sets the score to zero, or maybe you get whatever score you had just before you broke the rules). This is tricky, and I explored these kinds of directional aiming in my ACPA talk last fall.

A bigger problem is this. My story started with a single player game. But things are more complicated in a two player game.

Problem 1: According to (5), if you plan to cheat, then you aren’t trying to achieve the victory condition without breaking the rules, and hence you aren’t playing the game. But to cheat at a game you have to play it! So in fact you never cheated!

Response: I think this is the right result for a single player game you play on your own. By planning to cheat in some particular way, I am simply changing what game I am playing—and I can do that mid-game if I so choose. For instance, speedrunners of video games sometimes set themselves rules for what kind of “cheating” they are allowed to do: Can one use emulation and save states? Can one use glitches? Can one use automation, and where? If they are playing solely on their own, none of that is really cheating, because it is allowed by the rules they set themselves: their goal is to complete the game faster within such-and-such parameters.

But if you are playing against another or there is an audience to the game, things are different. We could just say this: You are cheating in the sense that you are deceiving the audience and the other player into thinking you are playing the game. Or we could say that there are two senses of playing a game: the first is simply to try to achieve the victory condition without breaking the rules, and the second is an implicit or explicit agreement with other persons to be playing a game in the first sense (e.g., indicated by signing up for the game).

Problem 2: Suppose I plan to beat you at chess while following the rules. To that end I drive to your house. Then my driving to your house is an attempt to achieve chess victory without breaking the rules, and hence by (5) the driving appears to be a part of the gameplay.

Response: This seems to me to be a pretty serious problem for the account actually. There seems to be an intuitive distinction between an action according with the rules that promotes victory and a move in the game. Another example is taking a drink of water while playing chess and doing so with the intention of improving one’s mental functioning and hence chances at victory. In taking the drink, one isn’t playing.

Or is one? When I think about it, the distinction seems kind of arbitrary and without normative significance. Take an athlete who is training for the big game. We want to say that they aren’t playing yet. But notice that in most modern sports the training itself falls under rules—specifically, rules about performance enhancing drugs. We could easily say that the training is basically a part of the game, a part that is much more loosely regulated than the rest. Similarly, many sports regulate the breaks that players can take, and deciding how to apportion the allowed break time (e.g., to take a drink of water, to relax, to stretch, or to refrain from taking it in order to make the other player think one isn’t tired) seems like a part of the game. Or take bodybuilding. It seems quite a distortion of the game to think only of the time in front of the judges as the gameplay.

What about the drive? That sure doesn’t seem to be a part of the game. But is that so clear? First, in a number of settings not showing up yields a forfeit, a kind of loss. So you can lose by failing to drive! Second, you can choose how to drive—in a restful or a stressful way, for instance—based on how this will affect the more formal gameplay.

Note that in chess, thinking is clearly a part of the gameplay. But you can start to think however early you like! You can be planning your first move should you end up winning the toss and playing white, for instance, while driving. Or not. The decision whether to engage in such planning is a part of your competency as a player.

Problem 3: It seems possible to play games with small children without trying to win, hoping that they will win.

Response: Maybe in such a case, one is pretending to play the game the child is playing, while one is playing a different game, say one whose victory condition is: “My child will checkmate me after I have made it moderately difficult for them to do so.” This is deceitful (the child will typically be unhappy if they figure out they were allowed to win), and deceit is defeasibly wrong. Is there a defeater here? I have my doubts.

Problem 4: I can try to run to my office in five minutes as a game or in order to be in time for office hours. There is a difference between the two: only in the first case am I playing. But in both cases the definition of (5) is satisfied.

Response: Maybe not. In (5), we have an ambiguity: it is not clear whether the fact that the victory condition is the victory condition of a game is a part of the content of one’s intention. If it is, then the problem disappears: when I run in order to be on time, I am not aiming to get there in time as a game. But if we require the ludic component to be itself a part of the intention, then we lose some of the reductive appeal of (5) absent an account of games.

Should we require a thought of a game to be a part of the intention? Maybe. Suppose you find an odd game machine that dispenses twenty dollar bills if you tap a button 16 times in a second (quite an achievement). You try to do this just to get the money. Are you playing a game? I suspect not. For suppose a friend on a lark took your driver’s license and loaded it into the machine, and the only way to get it back is to tap the button 16 times in a second. You aren’t playing a game!

Perhaps we could say this. Instead of requiring the game part of (5) to be in the content of the intentions, we can require that something about the victory part be a part of the intentions. And perhaps what makes something a victory for you in the relevant sense is that in addition to whatever instrumental and intrinsic value it has, it is in part being pursued simply to achieve a goal one has set oneself as such. That’s what makes it not entirely serious. A game is a kind of whim: you just decided to pursue a goal, and off you go, because you decided to do so. (Of course, you might have good reason to have whims!)

Problem 5: Doesn’t the story violate the guise of the good thesis, since the victory need have no good in it apart from your setting it to yourself as an end, and hence your setting it to yourself as an end can’t be justified by its good?

Response: The case here is similar to one of my favorite family of edge cases for action theory, such as when you get a prize if you can induce electrical activity in the nerves from your brain to your arm, so you raise your arm. The raising of your arm has no value to you. But you have reason to set the rising of your arm as an end, since setting the raising of your arm as an end is a means to inducing the electrical activity in the nerves. In this case, the rising of the arm is worthless, either as an end or as a means, but aiming at it has value, since it causes the electrical activity in the nerves. (Depending on how one understands the response to Problem 4, it may be that one is then playing a simple little game with oneself.)

Similarly, if something is purely played as a game, the end has no value prior to its being set. But there are two values that you can aim at in setting victory-by-the-rules as your end: the value of achieving ends and the value of striving for achievable ends, to both of which the setting of achievable ends is a means. And maybe there is the good of play (which may or may not be subsumed under the values of striving for and achieving ends). So you have the guise of the good in an extended way: the end itself is not an independent good, but the adoption of the end is a good. That’s how it is in the electrical activity case.

Final remarks: The difficulties do not, I think, affect the basic account that the fundamental normative force of the rules of a game simply comes from instrumental rationality: following the rules is necessary for the achievement of your goal. And there is a secondary normative force in multiplayer games, coming from general moral rules about compacts and deceit.

But perhaps we shouldn’t even say that there is normative force in the rules. They simply yield necessary conditions for achievement of one’s end. Perhaps they have no more, but no less, normative force than the fact that I need to exert energy to get to my office.

Wednesday, January 24, 2024

What plurals are there?

Plural quantification is meant to be a logical way of avoiding some technical and/or conceptual difficulties with sets and second-order quantification. Instead of quantifying over one thing, one quantifies over pluralities. Thus, a theist might say: For all xs, God thinks of the xs in their interrelationship.

What plurals are there? Intuitively, for any finite list of objects, there is a plurality of precisely those objects. After all, we can easily have a sentence about any finite plurality of things we have names for: Alice, Bob and Carl like each other. But what further pluralities are there?

An expansive proposal is plural comprehension: the axiom schema that says that for any formula F with free variables that include y, for any values of the free variables other than y, there are xs such that y is one of the xs iff F. Unlike the comprehension schema in naive set theory, there does not seem to be any direct Russell-type paradox for plural comprehension, because the xs are not in general an object, but multiple objects.

But plural comprehension on its own does not seem to quite settle what plurals there are. Suppose we have a plurality of nonempty disjoint sets. We can for instance ask: Is there a plurality of objects that includes exactly one object from each of these sets? If (a) there is a set of these disjoint sets, and (b) the Axiom of Choice holds for sets, then the answer is affirmative by plural comprehension. But of course whether the Axiom of Choice holds for sets is itself not philosophically settled, and further not every plurality of sets is such that there is a set of the sets in the plurality.

Observations of this sort show that plural quantification is not as metaphysically innocent as it may seem. You might have hoped that there is no further metaphysical commitment in allowing for plural quantification than in singular quantification. But we can now have substantive questions about what pluralities there are even after we have fixed what singular objects there are, even if we assume plural comprehension. For instance, suppose we think that the objects are the physical objects of the world plus the elements of a model of ZF set theory with ur-elements and with the negation of the Axiom of Choice. We can know what all the objects are, and it may still not be decided what pluralities there are. For in the case of a set of disjoint nonempty sets that lacks a choice set, as far as I can tell, there still might be a “choice plurality” (a plurality that has exactly one object from each of the disjoint sets) or there might not be one. (And if you say, well, the Axiom of Choice is obviously true, I may try to come back with a similar issue regarding Choice for proper classes.)

Or I might make a similar point about the Continuum Hypothesis (CH). The following story seems quite coherent: every uncountable subset of the real numbers is in a bijection with the set of reals (i.e., CH is true), but there is an uncountable plurality of real numbers not in bijection with the plurality of reals—i.e., CH is true for sets but its analogue for pluralities is false. (It’s easy to define bijections of pluralities in terms of pluralities of pairs.) But it’s also coherent that CH is true and there is no such uncountable plurality of reals.

We might try to get out of this by insisting that, necessarily, the right set theory has to have a stronger version of the Schema of Separation that allows for formulas with free plural variables and for the plural-membership relation. But that’s conceding that the theory of pluralities is metaphysically non-innocent, because now what pluralities there are will constrain what objects there are!

So the question of what restrictions we put on plurals is a really substantive question.

Next note the following point. There seem to be two particularly simple and non-arbitrary answers to the Special Composition Question, which asks which pluralities compose a whole: nihilism (there are no non-trivial cases of composition) and universalism (every plurality composes a whole). But once we have realized that it is a substantive question what pluralities there are, it seems that fixing what objects there are and affirming universalism, even with mereological essentialism thrown in, doesn’t settle the question of what wholes there are. There is substantial metaphysics to be done to figure out what pluralities there are!

I say the above with a caution: there are various technicalities I am glossing over, and I wouldn't be surprised if some of them turned out to be really important.

Tuesday, January 23, 2024

Do I need to be aware of what I am intending if I am to be responsible?

I am going to argue that one doesn’t need to be conscious of intending to ϕ in order to be responsible for intending to ϕ.

The easiest version of the argument supposes time is discrete. Let t1 be the very first moment at which I have already intended to ϕ. My consciousness of that intending comes later, at some time t2: there is a time delay in our mental processing. So, at t1, I have already intended to ϕ. When I have intended to ϕ, I am responsible for ϕ. But now suppose that God annihilates me before t2. Then I never come to be aware that I intended to ϕ, but yet I was already responsible for it.

Here are three ways out:

  1. I am not yet responsible at t1, but only come to be responsible once I come to be aware of my intention, namely at t2.

  2. My awareness is simultaneous with the intention, and doesn’t come from the intention, but from the causal process preceding the intention. During that causal process I become more and more likely to intend to ϕ, and so my awareness is informed by this high probability.

  3. My awareness is a direct simultaneous seeing of the intention, partially constituted by the intention itself, so there is no time delay.

Unconscious aliens

Lately I’ve been starting my philosophy of mind course with Carolyn Gilman’s short story about unconscious but highly intelligent aliens.

We can imagine such aliens having thoughts, beliefs, concepts, representational and motivational states. After all, we have beliefs even when totally unconscious, and we have subconscious thoughts, concepts, as well as representational and motivational states.

I’ve wondered what unconscious aliens would think about our philosophical arguments about physicalism and consciousness. They might not have the concept of consciousness or of an experiential state, but they could have the concept of “that special mode of representing reality that humans have and we don’t”. And so now I ask myself: Would these aliens have any reason to think that consciousness-based arguments for dualism have any force? Would they have any reason to think that “special mode” is a non-physical mode?

Of course, the aliens might be convinced of dualism on the basis of intentionality arguments. But would something about humans give them additional evidence of dualism about humans?

The aliens shouldn’t be surprised to discover that humans when awake have some ways of processing inputs that they themselves don’t, nor should that give any evidence for dualism. Neither should the presence of some special “phenomenological” vocabulary in humans for describing such processing.

But I think what should give the aliens some evidence is the conviction that many humans have that their “experiences” lack physical properties, that they are categorically different from physical properties and things. If someone describes an object of sensory perception as lacking color, that gives one reason to think the object indeed lacks color. If someone describes the object of introspective perception as lacking charge or mass, that gives one reason to think the object indeed lacks charge or mass.

The aliens would need to then consider the fact that some people have the conviction and others do not, and try to figure out which ones are doing a better job learning from their introspection.

Monday, January 22, 2024

The hyperreals and the von Neumann–Morgenstern representation theorem

This is all largely well-known, but I wanted to write it down explicitly. The von Neumann–Morgenstern utility theorem says that if we have a total preorder (complete transitive relation) on outcomes in a mixture space (i.e., a space such that given members a and b and any t ∈ [0,1], there is a member (1−t)a + tb satisfying some obvious axioms) satisfying:

  • Independence: For any outcomes a, b and c and any t ∈ (0, 1], we have a ≾ b iff ta + (1−t)c ≾ tb + (1−t)c, and

  • Continuity: If a ≾ b ≾ c then there is a t ∈ [0,1] such that b ≈ (1−t)a + tc (where x ≈ y iff x ≾ y and y ≾ x)

the preorder can be represented by a real-valued utility function U that is a mixture space homomorphism (i.e., U((1−t)a+tb) = (1−t)U(a) + tU(b)) and such that U(a) ≤ U(b) if and only if a ≾ b.

Clearly continuity is a necessary condition for this to hold. But what if we are interested in hyperreal-valued utility functions and drop continuity?

Quick summary:

  • Without continuity, we have a hyperreal-valued representation, and

  • We can extend our preferences to recover continuity with respect to the hyperreal field.

More precisely, Hausner in 1971 showed that in a finite dimensional case (essentially the mixture space being generated by the mixing operation from a finite number of outcomes we can call “sure outcomes”) with independence but without continuity we can represent the total preorder by a finite-dimensional lexicographically-ordered vector-valued utility. In other words, the utilities are vectors (u0, ..., un−1) of real numbers where earlier entries trump later ones in comparison. Now, given an infinitesimal ϵ, any such vector can be represented as u0 + u1ϵ + ... + un−1ϵ^(n−1). So in the finite dimensional case, we can have a hyperreal-valued utility representation.
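The equivalence between the lexicographic vectors and the sums u0 + u1ϵ + ... + un−1ϵ^(n−1) rests on one fact: for a positive infinitesimal ϵ, the sign of such a sum is the sign of its first nonzero coefficient, which is exactly what lexicographic comparison looks at. A sketch (the function names are mine):

```python
import itertools

def lex_less(u, v):
    """Lexicographic comparison: earlier entries trump later ones.
    Python tuples already compare lexicographically."""
    return tuple(u) < tuple(v)

def hyperreal_sign(coeffs):
    """Sign of c0 + c1*eps + c2*eps^2 + ... for a positive infinitesimal
    eps: each power of eps is infinitely smaller than the previous one,
    so the first nonzero coefficient decides the sign."""
    for c in coeffs:
        if c != 0:
            return 1 if c > 0 else -1
    return 0

def hyperreal_less(u, v):
    """u < v in the hyperreals iff the first nonzero entry of v - u is > 0."""
    return hyperreal_sign([b - a for a, b in zip(u, v)]) > 0

samples = [(0, 5, -1), (0, 5, 0), (1, -9, 9)]
for u, v in itertools.product(samples, repeat=2):
    assert lex_less(u, v) == hyperreal_less(u, v)
print("lexicographic order matches the epsilon-polynomial order")
```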

What if we drop the finite-dimensionality requirement? Easy. Take an ultrafilter on the space of finitely generated mixture subspaces of our mixture space ordered by inclusion, and take an ultraproduct of the hyperreal-valued representations on each of these, and the result will be a hyperreal-valued utility representing our preorder on the full space.

(All this stuff may have been explicitly proved by Richter, but I don’t have easy access to his paper.)

Now, on to the claim that we can sort of recover continuity. More precisely, if we allow for probabilistic mixtures of our outcomes with weights in the hyperreal field F that U takes values in, then we can embed our mixture space M in an F-mixture space MF (which satisfies the axioms of a mixture space with respect to members of the larger field F), and extend our preference ordering ≾ to MF such that we have:

  • F-continuity: If a ≾ b ≾ c then there is a t ∈ F with 0 ≤ t ≤ 1 such that b ≈ (1−t)a + tc (where x ≈ y iff x ≾ y and y ≾ x).

In other words, if we allow for sufficiently fine-grained probabilistic mixtures, with hyperreal probabilities, we get back the intuitive content of continuity.

To see this, embed M as a convex subset of a real vector space V using an embedding theorem of Stone from the middle of the last century. Without loss of generality, suppose 0 ∈ M and U(0) = 0. Extend U to the cone CM = {ta : t ∈ [0, ∞), a ∈ M} generated by M by letting U(ta) = tU(a). Note that this is well-defined since U(0) = 0 and if ta = ub with 0 ≤ t < u, then b = (1−s) ⋅ 0 + s ⋅ a, where s = t/u, and so U(b) = sU(a). It is easy to see that the extension will be additive.

Next extend U to the linear subspace VM generated by CM (and hence by M) by letting U(a−b) = U(a) − U(b) for a and b in CM. This is well-defined because if a − b = c − d, then a + d = b + c, and so U(a) + U(d) = U(b) + U(c), and hence U(a) − U(b) = U(c) − U(d). Moreover, U is now a linear functional on VM.

If B is a basis of VM, then let VMF be an F-vector space with basis B, and extend U to an F-linear functional from VMF to F by letting U(t1a1+...+tnan) = t1U(a1) + ... + tnU(an), where the ai are in B and the ti are in F. Now let MF be the F-convex subset of VMF generated by M. This will be an F-mixture space (i.e., it will satisfy the axioms of a mixture space with the field F in place of the reals).

Finally, let a ≾ b iff U(a) ≤ U(b) for a and b in MF. Then if a ≾ b ≾ c, we have U(a) ≤ U(b) ≤ U(c). Let t between 0 and 1 in F be such that (1−t)U(a) + tU(c) = U(b). By F-linearity of U, we will then have U((1−t)a+tc) = U(b).
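The closing step solves (1−t)U(a) + tU(c) = U(b) for t = (U(b)−U(a))/(U(c)−U(a)); the point of working in the field F is that this division is always available, even when the utilities differ only infinitesimally. A sketch with Fraction as a stand-in ordered field (the helper name is mine):

```python
from fractions import Fraction

def mixing_weight(Ua, Ub, Uc):
    """Solve (1 - t)*Ua + t*Uc = Ub for t, assuming Ua <= Ub <= Uc.
    In the construction the arithmetic happens in the hyperreal field F;
    Fraction serves here as a stand-in ordered field for illustration."""
    if Ua == Uc:
        return Fraction(0)  # then Ua = Ub = Uc, and any t works
    return (Ub - Ua) / (Uc - Ua)

Ua, Ub, Uc = Fraction(1), Fraction(2), Fraction(5)
t = mixing_weight(Ua, Ub, Uc)
print(t)                            # 1/4
print((1 - t) * Ua + t * Uc == Ub)  # True: b is the t-mixture of a and c
```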

Friday, January 19, 2024

The impact of right action on virtue

One of Socrates’ great discoveries is that moral goodness is good for us.

Virtue ethicists think there are two ways that acting morally well is typically good for us:

  1. The action itself is a constituent of our well-being, and

  2. The action promotes our possession of virtue.

Now, (1) is not just typically present, but always: good actions are constituents of human well-being. But unless we can count on miracles, there is no guarantee that (2) is always there. We can easily imagine cases where if you don’t do something immoral, you will be captured and made to live among people whose vice will rub off on you to such a degree that you are likely to become more vicious than you would have been had you done that one immoral thing. But those cases involve highly unusual situations. You might think that (2) is true except in highly exceptional cases.

Are there more common cases where morally good action fails to promote virtue? Well, acting morally well sometimes puts one in a position of temptation. This is not at all uncommon. You take a pay cut to work for a charitable organization. But this results in financial pressures, and now you are tempted to embezzle from your employer. For the sake of justice you work as a judge. And now you may be offered bribes, or simply be tempted to pride because of your social position. You drive to the grocery store to buy a treat for your child, and along the way you are tempted into unsafe driving practices.

In a number of such cases, if you fall to the temptation, you become morally worse than you would have been had you omitted the initial morally good action. It is better to work for Morally Neutral Conglomerate, Inc. than for a charitable organization if you would be embezzling from the latter but not from the former. And we empirically know that people do fall to such temptations.

Thus we know there are ordinary cases where an instance of acting morally well has led to moral downfall.

But whether these cases are also counterexamples to the universality of (2) depends on how we read the “promotes” in (2). If we read it purely causally, then, yes, these are cases where doing the right thing was an important causal factor in someone’s moral downfall. But likely we should read (2) in a probabilistic tendency way. Perhaps we have something like this:

  3. The mathematical expectation of the level of virtue is higher upon doing the action than upon omitting it.

Again, in the highly exceptional cases this need not be true, unless you can expect a miracle. You may be in a position to be pretty confident that you will morally deteriorate unless you escape a corrupting environment but have no way to escape it except by doing something immoral.

But in typical ordinary cases, (3) seems pretty plausible. At least this is true: there are going to be few cases where the expected level of virtue is significantly lower upon doing the right action. For if that were the case, that would constitute a strong moral reason not to do the action, and hence except in a few cases where that strong moral reason gets overridden, the action won’t be right after all.
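The comparison in (3) can be made concrete with a toy expected-value computation. All the numbers below are hypothetical, chosen purely for illustration, and a numeric "virtue level" scale is of course a crude fiction:

```python
# Hypothetical numbers, purely illustrative.
p_fall = 0.05                 # assumed chance the good act leads to moral downfall
virtue_if_act_ok = 1.10       # assumed virtue level if you act well and resist temptation
virtue_if_act_fall = 0.60     # assumed virtue level if you act well but then fall
virtue_if_omit = 1.00         # assumed virtue level if you omit the action

# Claim (3): the expectation on acting exceeds the expectation on omitting.
expected_if_act = (1 - p_fall) * virtue_if_act_ok + p_fall * virtue_if_act_fall
expected_if_omit = virtue_if_omit

print(expected_if_act)                      # 1.075 (up to float rounding)
print(expected_if_act > expected_if_omit)   # True
```

On these made-up numbers, (3) holds despite the 5% risk of downfall; raising p_fall past 20% would reverse the inequality.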

All that said, I wonder how good our empirical data is that (3) is true in the case of most ordinary actions.

Wednesday, January 17, 2024

A violation of the Principle of Sufficient Reason?

I have a number of times over my career claimed that in the ordinary course of life, we don’t take seriously the hypothesis that something we can’t find an explanation for has no explanation.

Well, I recently had an opportunity to observe what happens to me psychologically when I can’t find an explanation.

A couple of days ago, my wife found a significant pool of water in the morning on the top surface of our clothes dryer. When I looked at it, it was like 250ml or more. If it were on the floor or on the washer, I would expect it was from a washer-related leak. If our clothes dryer had a water connection for steaming clothes, a leak would make sense (ChatGPT 3.5 suggested this hypothesis). If the quantity were lower, it could easily be from wet clothes put carelessly on top of the dryer or condensation. If there was wetness in the cabinets above the dryer, it would likely be a leak in one of the many containers of cleaning, photo-developing and other chemicals stored there. If the ceiling showed a discoloration above the dryer, it would be a leak from upstairs. If the liquid smelled, it might be urine from the cat sneaking in.

But none of these apply, to the point where my best four explanations are all hard to believe:

  a. a family member sleepwalking with a glass of water, wandering into the laundry room, spilling the water, and walking away,

  b. God doing a miracle just to impress on me that there are more things in heaven and earth than are dreamed of in my natural philosophy,

  c. a very precisely aimed horizontal leak from one of the faucets in the room, none of which are above the dryer (the next morning, there were slight leaks in two faucets in the room, but the leaks were a non-directional wetness rather than a jet aimed at a precise target), or

  d. a family member spilling water (from what?) on the dryer and forgetting all about it.

(A plumber called in for the faucet leaks could think of no explanation, except to note that there are many plumbing problems given our current Texas freeze.)

What is my psychology about this? I can’t get myself to believe any of (a)–(d), or even their disjunction. I find myself strongly pulled to just forget the event, to pretend to myself that the event was but a dream, and it now seems to me that that is one way in which we cope with unexplained events. But of course my wife remembers the event, and I can’t get myself to take seriously the idea that we both had the same dream (plus there was no waking up after it—after we cleaned up the spill, I launched into other activities rather than finding myself back in bed).

What about this option?

  e. The event has no explanation: it violates the Principle of Sufficient Reason.

I also can’t take (e) seriously. But do I take (e) less seriously than the options in (a)–(d)? Speaking of subjective feelings, I don’t think I feel much more incredulous about (e) than about (a)–(d).

So what do I really think? I guess:

  f. There is a mundane explanation and I am not smart enough to think of it.

Tuesday, January 16, 2024

Impossible duties and consequentialism

Intuitively, sometimes you’re obligated to do something you can’t do. For instance, you promised to visit a friend at 5 pm, and at 4:45 pm you are hiking a one-hour drive away. Or you did something bad, and now you owe the victim a sincere apology, but you’re a vicious person and not psychologically capable of rendering an apology that is sincere.

Consequentialist theories, however, have to limit their consideration to actions you can do, since otherwise everything we do is wrong. For whatever we do, there is an impossible action with even better consequences. You spend a day volunteering at a homeless shelter. That may sound good, but the consequences would have been better if instead you magically cured all cancer.

Thus, it seems:

  1. If consequentialism is true, you are only ever obligated to do something possible.

  2. Sometimes, you are obligated to do the impossible.

  3. So, consequentialism is false.

That said, I am not completely convinced of (2).

Thursday, January 11, 2024

A deontological asymmetry

Consider these two cases:

  1. You know that your freely killing one innocent person will lead to three innocent drowning people being saved.

  2. You know that saving three innocent drowning people will lead to your freely killing one innocent person.

It’s easy to imagine cases like (1). If compatibilism is true, it’s also pretty easy to imagine cases like (2)—we just suppose that your saving the innocent people produces a state of affairs where your psychology gradually changes in such a way that you kill one innocent person. If libertarianism and Molinism are true, we can also get (2): God can reveal to you the conditional of free will.

If libertarianism is true but Molinism is false, it’s harder to get (2), but we can still get it, or something very close to it. We can, for instance, imagine that if you rescue the three people, you will be kidnapped by someone who will offer temptations to kill an innocent person that are increasingly difficult to resist, and it can be very likely that one day you will give in.

Deontological ethics says that in (1) killing the innocent person is wrong.

Does it say that saving the three innocents is wrong in (2)? It might, but not obviously so. For the action is in itself good, and one might reasonably say that becoming a murderer is a consequence that is not disproportionate to saving the three lives. After all, imagine this variant:

  3. You know that saving three innocent drowning people will lead to a fourth person freely killing one innocent person.

Here it seems that it is at least permissible to save the three innocents. That someone will, through a weird chain of events, become a murderer if you save the three innocents does not make it wrong to save the three.

I am inclined to think that saving the three is permissible in (2). But if you disagree, change the three to thirty. Now it seems pretty clear to me that saving the drowning people is permissible in (2). But it is still wrong to kill an innocent person to save thirty.

Even on threshold deontology, it seems pretty plausible that the thresholds in (1) and (2) are different. If n is the smallest number such that it is permissible to save n drowning people at the expense of the side effect of your eventually killing one innocent, then it seems plausible that n is not big enough to make it permissible to kill one innocent to save n.

So, let’s suppose we have this asymmetry between (1) and (2), with the “three” replaced by some other number as needed (the same one in both statements), so that the action described in (1) is wrong but the one in (2) is permissible.

This then will be yet another counterexample to the project of consequentializing deontology: finding a utility assignment that yields conclusions equivalent to those of deontology. For the consequences of (1) and (2) are the same, even if one assigns a very big disutility to killing innocents.