Wednesday, November 30, 2022

Two versions of the guise of the good thesis

According to the guise of the good thesis, one always acts for the sake of an apparent good. There is a weaker and a stronger version of this:

  • Weak: Whenever you act, you act for at least one end that you perceive as good.

  • Strong: Whenever you act, you act for an end, and every end you act for you perceive as good.

For the strong version to have any plausibility, “good” must include cases of purely instrumental goodness.

I think there is still reason to be sceptical of the strong version.

Case 1: There is some device which does something useful when you trigger it. It is triggered by electrical activity. You strap it on to your arm and raise your arm, so that the electrical activity in your muscles triggers the device. Your raising your arm has the arm’s going up as an end, but you perceive that end not as good but as merely neutral. All you care about is the electrical activity in your muscles.

Case 2: Back when they were dating in high school, Bob promised to try his best to bake a nine-layer chocolate cake for Alice’s 40th birthday. Since then, Bob and Alice have had a falling out, and hate each other’s guts. Moreover, Alice and all her guests hate chocolate. But Alice doesn’t release Bob from his promise. Bob tries his best to bake the cake in order to fulfill his promise, and happens to succeed. In trying to bake the cake, Bob acted for the end of producing a cake. But producing the cake was worthless, since no one would eat it. The only value was in the trying, since that was the fulfillment of his promise.

In both cases, it is still true that the agent acts for a good end—the useful triggering of the device and the production of the cake. But in both cases it seems they are also acting for a worthless end. Thus the cases seem to fit with the weak but not the strong guise of the good thesis.

I was going to leave it at this. But then I thought of a way to save the strong guise of the good thesis. Success is valuable as such. When I try to do something, succeeding at it has value. So the arm going up or the cake being produced are valuable as necessary parts of the success of one’s action. So perhaps every end of your action is trivially good, because it is good for your action to succeed, and the end is a (constitutive, not causal) means to success.

This isn’t quite enough for a defense of the strong thesis. For even if the success is good, it does not follow that you perceive the success as good. You might subscribe to an axiological theory on which success is not good in general, but only success at something good.

But perhaps we can say this. We have a normative power to endow some neutral things with value by making them our ends. And in fact the only way to act for an end that does not have any independent value is by exercising that normative power. And exercising that normative power involves your seeing the thing you’re endowing with value as valuable. And maybe the only way to raise your arm or for Bob to bake the cake in the examples is by exercising the normative power, and doing so involves seeing the end as good. Maybe. This has some phenomenological plausibility and it would be nice if it were true, because the strong guise of the good thesis is pretty plausible to me.

If this story is right, it adds a nuance to the ideas here.

Tuesday, November 29, 2022

An odd poker variant

Suppose Alice can read your mind, and you are playing poker against a set of people not including Alice. You don’t care about winning, just about money. Alice has a deal for you that you can’t refuse.

  • If you win, she takes your winnings away.

  • If you lose, but you tried to win, she pays you double what you lost.

  • If you lose, but you didn’t try to win, she does nothing.

Clearly the prudent thing to do is to try to win. For if you don’t try to win, then you are guaranteed not to get any money. But if you do try, you won’t lose anything, and you might gain.

Here is the oddity: you are trying to win in order to get paid, but you only get paid if you don’t win. Thus, you are trying to achieve something, the achievement of which would undercut the end you are pursuing.

Is this possible? I think so. We just need to distinguish between pursuing victory for the sake of something else that follows from victory and pursuing victory for the sake of something that might follow from the pursuit of victory.

Nonoverriding morality

Some philosophers think that sometimes norms other than moral norms—e.g., prudential norms or norms of the meaningfulness of life—take precedence over moral norms and make permissible actions that are morally impermissible. Let F-norms be such norms.

A view where F-norms always override moral norms does not seem plausible. In the case of prudential norms or norms of meaningfulness, it would point to a fundamental selfishness in the normative constitution of the human being.

So the view has to be that sometimes F-norms take precedence over moral norms, but not always. There must thus be norms which are neither F-norms nor moral norms that decide whether F-norms or moral norms take precedence. We can call these “overall norms of combination”. And it is crucial to the view that the norms of combination themselves be neither F-norms nor moral norms.

But here is an oddity. Morality already combines F-considerations and first-order paradigmatically moral considerations. Consider two actions:

  1. Sacrifice a slight amount of F-considerations for a great deal of good for one’s children.

  2. Sacrifice an enormous amount of F-considerations for a slight good for one’s children.

Morality says that (1) is obligatory but (2) is permitted. Thus, morality already weighs F-considerations and paradigmatically moral concerns and provides a combination verdict. In other words, there already are moral norms of combination. So the view would be that there are moral norms of combination and overall norms of combination, both of which take into account exactly the same first-order considerations, but sometimes come to different conclusions because they weigh the very same first-order considerations differently (e.g., in the case where a moderate amount of F-considerations needs to be sacrificed for a moderate amount of good for one’s children).

This view violates Ockham’s razor: Why would we have moral norms of combination if the overall norms of combination always override them anyway?

Moreover, the view has the following difficulty: It seems that the best way to define a type of norm (prudential, meaningfulness, moral, etc.) is in terms of the types of consideration that the norm is based on. But if the overall norms of combination take into account the very same types of consideration as the moral norms of combination, then this way of distinguishing the types of norms is no longer available.

Maybe there is a view on which the overall norms of combination take into account not the first-order moral and F-considerations, but only the deliverances of the moral norms and the F-norms; but that seems needlessly complex.

Monday, November 28, 2022

Oppositional relationships

Here are three symmetric oppositional possibilities:

  1. Competition: x and y have shared knowledge that they are pursuing incompatible goals.

  2. Moral opposition: x and y have shared knowledge that they are pursuing incompatible goals and each takes the other’s pursuit to be morally wrong.

  3. Mutual enmity: x and y have shared knowledge that they each pursue the other’s ill-being for a reason other than the other’s well-being.

The reason for the qualification on reasons in 3 is that one might say that someone who punishes someone in the hope of their reform is pursuing their ill-being for the sake of their well-being. I don’t know if that is the right way to describe reformative punishment, but it’s safer to include the qualification in (3).

Note that cases of moral opposition are all cases of competition. Cases of mutual enmity are also cases of competition, except in rare cases, such as when a party suffers from depression or acedia that makes them not opposed to their own ill-being.

I suspect that most cases of mutual enmity are also cases of moral opposition, but I am less clear on this.

Both competition and moral opposition are compatible with mutual love, but mutual enmity is not compatible with either direction of love.

Additionally, there is a whole slew of less symmetric options.

I think loving one’s competitors could be good practice for loving one’s (then necessarily non-mutual) enemies.

Games and consequentialism

I’ve been thinking about who competitors, opponents and enemies are, and I am not very clear on it. But I think we can start with this:

  1. x and y are competitors provided that they knowingly pursue incompatible goals.

In the ideal case, competitors both rightly pursue the incompatible goals, and each knows that they are both so doing.

Given externalist consequentialism, where the right action is the one that would actually produce the best consequences, ideal competition will be extremely rare, since the only time the pursuit of each of two incompatible goals will be right is if there is an exact tie between the values of the goals, and that is extremely rare.

This has the odd result that on externalist consequentialism, in most sports and other games, at least one side is acting wrongly. For it is extremely rare that there is an exact tie between the values of one side winning and the value of the other side winning. (Some people enjoy victory more than others, or have somewhat more in the way of fans, etc.)

On internalist consequentialism, where the right action is defined by expected utilities, we would expect that if both sides are unbiased investigators, then in most games at least one side would take the expected utility of the other side’s winning to be higher. For if both sides are perfect investigators with the same evidence and perfect priors, then they will assign the same expected utilities, and so at least one side will take the other side’s victory to have higher expected utility, except in the rare case where the two expected utilities are equal. And if both sides assign expected utilities completely at random, but unbiasedly (i.e., are just as likely to assign a higher expected utility to the other side’s winning as to their own), then, bracketing the rare case where a side assigns equal expected utility to both victory options, any given side will have a probability of about a half of assigning higher expected utility to the other side’s victory, and so there will be about a 3/4 chance that at least one side will take the other side’s victory to have higher expected utility. Other cases of unbiased investigators will likely fall somewhere between the perfect case and the random case, and so we would expect that in most games, at least one side will be playing for an outcome that they think has lower expected utility.

Of course, in practice, the two sides are not unbiased. One might overestimate the value of one’s own side winning and underestimate the value of the other side winning. But that is likely to involve some epistemic vice.

So, the result is that either on externalist or internalist consequentialism, in most sports and other competitions, at least one side is acting morally wrongly or is acting in the light of an epistemic vice.

I conclude that consequentialism is wrong.

Precise lengths

As usual, write [a,b] for the interval of the real line from a to b including both a and b, (a,b) for the interval of the real line from a to b excluding a and b, and [a, b) and (a, b] respectively for the intervals that include a and exclude b and vice versa.

Suppose that you want to measure the size m(I) of an interval I, but you have the conviction that single points matter, so [a,b] is bigger than (a,b), and you want to use infinitesimals to model that difference. Thus, m([a,b]) will be infinitesimally bigger than m((a,b)).

Thus at least some intervals will have lengths that aren’t real numbers: their length will be a real number plus or minus a (non-zero) infinitesimal.

At the same time, intuitively, some intervals from a to b should have length exactly b − a, which is a real number (assuming a and b are real). Which ones? The choices are [a,b], (a,b), [a,b) and (a,b].

Let α be the non-zero infinitesimal length of a single point. Then [a,a] is a single point. Its length thus will be α, and not a − a = 0. So [a,b] can’t always have real-number length b − a. But maybe at least it can in the case where a < b? No. For suppose that m([a,b]) = b − a whenever a < b. Then m((a,b]) = b − a − α whenever a < b, since (a, b] is missing exactly one point of [a,b]. But then let c = (a+b)/2 be the midpoint of [a,b]. Then:

  1. m([a,b]) = m([a,c]) + m((c,b]) = (c − a) + (b − c − α) = b − a − α,

rather than b − a, as was claimed.

What about (a,b)? Can that always have real number length b − a if a < b? No. For if we had that, then we would absurdly have:

  2. m((a,b)) = m((a,c)) + α + m((c,b)) = (c − a) + α + (b − c) = b − a + α,

since (a,b) is equal to the disjoint union of (a,c), the point c and (c,b).

That leaves [a,b) and (a,b]. By symmetry, if one of them has length b − a, surely so does the other. And in fact Milovich gave me a proof that there is no contradiction in supposing that m([a,b)) = m((a,b]) = b − a.

Tuesday, November 22, 2022

Hyperreal expected value

I think I have a hyperreal solution, not entirely satisfactory, to three problems.

  1. The problem of how to value the St Petersburg game. The particular version of the problem that interests me is one from Russell and Isaacs, which says that any finite value is too small, but any infinite value violates strict dominance (since, no matter what, the payoff will be less than infinity).

  2. How to value gambles on a countably infinite fair lottery where the gamble is positive and asymptotically approaches zero at infinity. The problem is that any positive non-infinitesimal value is too big and any infinitesimal value violates strict dominance.

  3. How to evaluate expected utilities of gambles whose values are hyperreal, where the probabilities may be real or hyperreal, which I raise in Section 4.2 of my paper on accuracy in infinite domains.

The apparent solution works as follows. For any gamble with values in some real or hyperreal field V and any finitely-additive probability p with values in V, we generate a hyperreal expected value Ep, which satisfies these plausible axioms:

  4. Linearity: Ep(af+bg) = aEpf + bEpg for a and b in V

  5. Probability-match: Ep1A = p(A) for any event A, where 1A is 1 on A and 0 elsewhere

  6. Dominance: if f ≤ g everywhere, then Epf ≤ Epg, and if f < g everywhere, then Epf < Epg.

How does this get around the arguments I link to in (1) and (2) that seem to say that this can’t be done? The trick is this: the expected value has values in a hyperreal field W which will be larger than V, while (4)–(6) only hold for gambles with values in V. The idea is that we distinguish between what one might call primary values, which are particular goods in the world, and what one might call distribution values, which specify how much a random distribution of primary values is worth. We do not allow the distribution values themselves to be the values of a gamble. This has some downsides, but at least we can have (4)–(6) on all gambles.

How is this trick done?

I think like this. First, it looks like the Hahn-Banach dominated extension theorem holds for V2-valued V1-linear functionals on V1-vector spaces, where V1 ⊆ V2 are real or hyperreal fields, except that our extending functional may need to take values in a field of hyperreals even larger than V2. The crucial thing to note is that any subset of a real or hyperreal field has a supremum in a larger hyperreal field. Then, where the proof of the Hahn-Banach theorem uses infima and suprema, one moves to a larger hyperreal field to get them.

Now, embed V in a hyperreal field V2 that contains a supremum for every subset of V, and embed V2 in V3 which has a supremum for every subset of V2. Let Ω be our probability space.

Let X be the space of bounded V2-valued functions on Ω and let M ⊆ X be the subspace of simple functions (with respect to the algebra of sets that Ω is defined on). For f ∈ M, let ϕ(f) be the integral of f with respect to p, defined in the obvious way. The supremum over Ω (which takes values in V3) is then a seminorm on X dominating ϕ. Extend ϕ to a linear function ϕ on X dominated by that seminorm. Note that if f > 0 everywhere for f with values in V, then f > α > 0 everywhere for some α ∈ V2, and hence ϕ(−f) ≤ −α by seminorm domination, hence 0 < α ≤ ϕ(f). Letting Ep be ϕ restricted to the V-valued functions, our construction is complete.

I should check all the details at some point, but not today.

Monday, November 21, 2022

Dominance and countably infinite fair lotteries

Suppose we have a finitely-additive probability assignment p (perhaps real-valued, perhaps hyperreal-valued) for a countably infinite lottery with tickets 1, 2, ..., such that each ticket has infinitesimal probability (where zero counts as an infinitesimal). Now suppose we want to calculate the expected value or prevision EpU of any bounded wager U on the outcome of the lottery, where we think of the wager as assigning a value to each ticket, and the wager is bounded if there is a finite M such that |U(n)| < M for all n.

Here are plausible conditions on the expected value:

  1. Dominance: If U1 < U2 everywhere, then EpU1 < EpU2.

  2. Binary Wagers: If U is 0 outside A and c on A, then EpU = cP(A).

  3. Disjoint Additivity: If U1 and U2 are wagers supported on disjoint events (i.e., there is no n such
    that U1(n) and U2(n) are both non-zero), then Ep(U1+U2) = EpU1 + EpU2.

But we can’t. For suppose we have all three. Let U(n) = 1/(2n). Fix a positive integer m. Let U1(n) be 2 for n ≤ m + 1 and 0 otherwise. Let U2(n) be 1/(m+1) for n > m + 1 and 0 for n ≤ m + 1. Then by Binary Wagers and by the fact that each ticket has infinitesimal probability, EpU1 is an infinitesimal α (since the probability of any finite set will be infinitesimal). By Binary Wagers and Dominance, EpU2 ≤ 1/(m+1). Thus by Disjoint Additivity, Ep(U1+U2) ≤ α + 1/(m+1) < 1/m. But U < U1 + U2 everywhere, so by Dominance we have EpU < 1/m. Since 0 < U everywhere, by Dominance and Binary Wagers we have 0 < EpU.

Since m was arbitrary, EpU is thus a non-zero infinitesimal β. But then β < U(n) for all n, and so by Binary Wagers and Dominance, β < EpU, a contradiction.

I think we should reject Dominance.

Corruptionism and care about the soul

According to Catholic corruptionists, when I die, my soul will continue to exist, but I won’t; then at the Resurrection, I will come back into existence, receiving my soul back. In the interim, however, it is my soul, not I, who will enjoy heaven, struggle in purgatory or suffer in hell.

Of course, for any thing that enjoys heaven, struggles in purgatory or suffers in hell, I should care that it does so. But should I have that kind of special care that we have about things that happen to ourselves for what happens to the soul? I say not, or at most slightly. For suppose that it turned out on the correct metaphysics that my matter continues to exist after death. Should I care whether it burns, decays, or is dissected, with that special care with which we care about what happens to ourselves? Surely not, or at most slightly. Why not? Because the matter won’t be a part of me when this happens. (The “at most slightly” flags the fact that we can care about “dignitary harms”, such as nobody showing up at our funeral, or us being defamed, etc.)

But clearly heaven, purgatory and hell in the interim state are something we should care about.

Friday, November 18, 2022

Social choice principles and invariance under symmetries

A referee of a recent paper of mine commented that one of my results in decision theory didn’t actually depend on numerical probabilities and hence could extend to social choice principles. This made me realize that the same may be true of some other things I’ve done.

For instance, in the past I’ve proved theorems on qualitative probabilities. A qualitative probability is a relation ≼ on the subsets of some sample space Ω such that:

  1. ≼ is transitive and reflexive.

  2. ⌀ ≼ A

  3. if A ∩ C = B ∩ C = ⌀, then A ≼ B iff A ∪ C ≼ B ∪ C (additivity).

But we need not think of Ω as a space of possibilities and of ≼ as a probability comparison. We could instead think of Ω as a set of people who are candidates for getting some good thing, with A ≼ B meaning that it’s at least as good for the good thing to be distributed to the members of B as to the members of A. Axioms (1) and (2) are then obvious. And axiom (3) is an independence axiom: whether it is at least as good to give the good thing to the members of B as to the members of A doesn’t depend on whether we also give it to the members of a disjoint set C at the same time.

Of course, for a general social choice principle we need more than just a decision whether to give one and the same good to the members of some set. But we can still formalize those questions in terms of something pretty close to qualitative probabilities. For a general framework, suppose we have a population set X (a set of people, or places in spacetime, or some other sites of value) and a set of values V (this could be a set of types of good, or the set of real numbers representing values). We will suppose that V comes with a transitive and reflexive (i.e., preorder) preference relation ≤. Now let Ω = X × V. A value distribution is a function f from X to V, where f(x) = v means that x gets something of value v.

We want to generate a reflexive and transitive preference ordering ≼ on the set VX of value distributions.

Write f ≈ g when f ≼ g and g ≼ f, and f ≺ g when f ≼ g but not g ≼ f. Similarly for values v and w, write v < w if v ≤ w but not w ≤ v.

Here is a plausible axiom on value distributions:

  4. Sameness independence: if f1, f2, g1, g2 are value distributions and A ⊆ X is such that (a) f1 ≼ f2, (b) f1(x) = f2(x) and g1(x) = g2(x) if x ∉ A, and (c) f1(x) = g1(x) and f2(x) = g2(x) if x ∈ A, then g1 ≼ g2.

In other words, the mutual ranking between two value distributions does not depend on what the two distributions do to the people on whom the distributions agree. If it’s better to give $4 to Jones than to give $2 to Smith when Kowalski is getting $7, it’s still better to give $4 to Jones than to give $2 to Smith when Kowalski is getting $3. There is probably some other name in the literature for this property, but I know next to nothing about the social choice literature.

Finally, we want to have some sort of symmetries on the population. The most radical would be that the value distributions don’t care about permutations of people, but more moderate symmetries may be required. For this we need a group G of permutations acting on X.

  5. Strong G-invariance: if g ∈ G and f is a value distribution, then f ∘ g ≈ f.

Here, f ∘ g is the value distribution where site x gets f(g(x)).

Additionally, the following is plausible:

  6. Pareto: If f(x) ≤ g(x) for all x, with f(x) < g(x) for some x, then f ≺ g.

Theorem: Assume the Axiom of Choice. Suppose that ≤ on V is reflexive, transitive and non-trivial in the sense that there are values v and w in V with v < w. Then there exists a reflexive, transitive preference ordering ≼ on the value distributions satisfying (4)–(6) if and only if there is such an ordering that is total, if and only if G has locally finite action on X.

A group of symmetries G has locally finite action on a set X provided that for each finite subset H of G and each x ∈ X, applying finite combinations of members of H to x generates only a finite subset of X. (More precisely, if ⟨H⟩ is the subgroup generated by H, then ⟨H⟩x is finite.)

If X is finite, then local finiteness of action is trivial. If X is infinite, then it will be satisfied in some cases but not in others. For instance, it will be satisfied if G consists of the permutations that move only a finite number of members of X at a time. It will, on the other hand, fail if X is an infinite bunch of people regularly spaced along a line and G is the group of shifts.

The trick to the proof of the Theorem is to reduce preferences between distributions to comparisons of subsets of X × V and to reduce comparisons of subsets of X to preferences between binary distributions.

Proof of Theorem: Suppose that G has locally finite action. Define Ω = X × V. By Theorem 2 of my invariance of non-classical probabilities paper, there is a strongly G-invariant regular (i.e., ⌀ ≺ A if A is non-empty) qualitative probability ≼ on Ω. Given a value distribution f, let f* = {(x,v) : v ≤ f(x)} be a subset of Ω. Define f ≼ g iff f* ≼ g*.

Totality, reflexivity, transitivity and strong G-invariance for value distributions follow from the same conditions for subsets of Ω. Regularity of ≼ on the subsets of Ω together with additivity implies that if A ⊂ B then A ≺ B. The Pareto condition for ≼ on the value distributions follows since if f and g are such that f(x) ≤ g(x) for all x with strict inequality for some x, then f* ⊂ g*. Finally, the complicated sameness independence condition follows from additivity.

Now suppose there is a (not necessarily total) strongly G-invariant reflexive and transitive preference ordering ≼ on the value distributions satisfying (4)–(6). Given a subset A of X, define fA to be the value distribution that gives w to all the members of A and v to all the non-members, where v < w. Define A ≼ B iff fA ≼ fB. This will be a strongly G-invariant reflexive and transitive relation on the subsets of X. It will be regular by the Pareto condition. Finally, additivity follows from the sameness independence condition. Local finiteness of action of G then follows from Theorem 2 of my paper. ⋄

Note that while it is natural to think of X as just a set of people or of locations, inspired by Kenny Easwaran one can also think of it as a set Q × Ω, where Ω is a probability space and Q is a population, so that f(x,ω) represents the value x gets at location ω. In that case, G might be defined by symmetries of the population and/or symmetries of the probability space. In such a setting, we might want a weaker Pareto principle that supposes additionally that f(x,ω) < g(x,ω) for some x and all ω. With that weaker Pareto principle, the proof that the existence of a G-invariant preference of the right sort on the distributions implies local finiteness of action does not work. However, I think we can still prove local finiteness of action in that case if the symmetries in G act only on the population (i.e., for all x and ω there is a y such that g(x,ω) = (y,ω)). In that case, given a subset A of the population Q, we define fA to be the distribution that gives w to all the persons in A with certainty (i.e., everywhere on Ω) and gives v to everyone else, and the rest of the proof should go through, but I haven’t checked the details.

Thursday, November 17, 2022

Cerebrums and rattles

Animalists think humans are animals. Suppose I am an animalist and I think that I go with my cerebrum in cerebrum-transplant cases. That may seem weird. But suppose we make an equal opportunity claim here: all animals that have cerebra go with their cerebra. If your dog Rover’s cerebrum is transplanted into a robotic body, then the cerebrumless thing is not Rover. Rather, Rover inhabits a robotic body or that body comes to be a part of Rover, depending on views about prostheses. And the same is true for any animal that has a cerebrum.

It initially seems weird to say that some animals can survive reduced to a cerebrum and others cannot. But it’s not that weird when we add that the ones that can’t survive reduced to a cerebrum are animals that don’t have a cerebrum.

The person who thinks survival reduced to a cerebrum is implausible for an animal might, however, say that this is what’s odd about it. An animal reduced to a cerebrum lacks internal life-support organs (heart, lungs, etc.). It is odd to think that some animals can survive without internal life support and others cannot.

But compare this: Some animals can partly exist in spatial locations where they have no living cells, and others cannot. The outer parts of my hairs are parts of me, but there are no living cells there. If my hair is in a room, then I am partly in that room, even if no living cells of mine are in the room. But on the other hand, there are some animals (at least the unicellular ones, but maybe also some soft invertebrates) that can only exist where they have a living cell.

One might object that the spatial case and the temporal case are different, because in the spatial case we are talking of partial presence and in the temporal case of full presence. But a four-dimensionalist will disagree. To exist at a time is to be partly present at that time. So to a four-dimensionalist the analogy is pretty strict.

Finally, compare this. Suppose Snaky is a rattlesnake stretched out along a line in space. Now suppose we simultaneously annihilate everything in Snaky. Now, “simultaneously” is presumably defined with respect to some reference frame F1. Let z be a spacetime point in Snaky’s rattle located just prior (according to F1) to Snaky’s destruction. Then Snaky is partly present at z. But with a bit of thought, we can see that there is another reference frame F2 in which the only parts of Snaky simultaneous with z are parts of the rattle: in F2, all the non-rattle parts of Snaky have already been annihilated, but the rattle has not. Then in F2 the following is true: there is a time at which Snaky exists but no part of Snaky outside the rattle exists. Hence Snaky can exist as just a rattle, albeit for a very, very short period of time.

Hence even a snake can exist without its life-support organs, but only for a short period of time.

Monday, November 14, 2022

Reducing goods to reasons?

In my previous post I cast doubt on reducing moral reasons to goods.

What about the other direction? Can we reduce goods to reasons?

The simplest story would be that goods reduce to reasons to promote them.

But there seem to be goods that give no one a reason to promote them. Consider the good fact that there exist (in the eternalist sense: existed, exist now, will exist, or exist timelessly) agents. No agent can promote the fact that there exist agents: that good fact is part of the agent’s thrownness, to put it in Heideggerese.

Maybe, though, this isn’t quite right. If Alice is an agent, then Alice’s existence is a good, but the fact that some agent or other exists isn’t a good as such. I’m not sure. It seems like a world with agents is better for the existence of agency, and not just better for the particular agents it has. Adding another agent to the world seems a lesser value contribution than just ensuring that there is agency at all. But I could be wrong about that.

Another family of goods, though, are necessary goods. That God exists is good, but it is necessarily true. That various mathematical theorems are beautiful is necessarily true. Yet no one has reason to promote a necessary truth.

But perhaps we could have a subtler story on which goods reduce not just to reasons to promote them, but to reasons to “stand for them” (taken as the opposite of “standing against them”), where promotion is one way of “standing for” a good, but there are others, such as celebration. It does not make sense to promote the existence of God, the existence of agents, or the Pythagorean theorem, but celebrating these goods makes sense.

However, while it might be the case that something is good just in case an agent should “stand for it”, it does not seem right to think that it is good to the extent that an agent should “stand for it”. For the degree to which an agent should stand for a good is determined not just by the magnitude of the good, but also by the agent’s relationship to the good. I should celebrate my children’s accomplishments more than strangers’.

Perhaps, though, we can modify the story in terms of goods-for-x, and say that G is good-for-x to the extent that x should stand for G. But that doesn’t seem right, either. I should stand for justice for all, and not merely to the degree that justice-for-all is good-for-me. Moreover, there are goods that are good for non-agents, while a non-agent does not have a reason to do anything.

I love reductions. But alas it looks to me like reasons and goods are not reducible in either direction.

The 2018 Belgium vs Brazil World Cup game

In 2018, the Belgians beat the Brazilians 2-1 in the World Cup soccer quarterfinals. There are about 18 times as many Brazilians as Belgians in the world. This raises a number of puzzles in value theory, if for simplicity we ignore everyone in the world but Belgians and Brazilians.

An order of magnitude more people wanted the Brazilians to win, and getting what one wants is good. An order of magnitude more people would have felt significant and appropriate pleasure had the Brazilians won, and an appropriate pleasure is good. And given both wishful thinking and reasonable general presumptions about there being more talent available in a larger population base, we can suppose that a lot more people expected the Brazilians to win, and it’s good if what one thinks is the case is in fact the case.

You might think that the good of the many outweighs the good of the few, and Belgians are few. But, clearly, the above facts gave very little moral reason to the Belgian players to lose. One might respond that the above facts gave lots of reason to the Belgians to lose, but these reasons were outweighed by the great value of victory to the Belgian players, or perhaps the significant intrinsic value of playing a sport as well as one can. Maybe, but if so then just multiply both countries’ populations by a factor of ten or a hundred, in which case the difference between the goods (desire satisfaction, pleasure and truth of belief) is equally multiplied, but still makes little or no moral difference to what the Belgian players should do.

Or consider this from the point of view of the Brazilian players. Imagine you are one of them. Should the good of Brazil—around two hundred million people caring about the game—be a crushing weight on your shoulders, imbuing everything you do in practice and in the game with a great significance? No! It’s still “just a game”, even if the value of the good is spread through two hundred million people. It would be weird to think that it is a minor peccadillo for a Belgian to slack off in practice but a grave sin for a Brazilian to do so, because the Brazilian’s slacking hurts an order of magnitude more people.

That said, I do think that the larger population of Brazil imbues the Brazilians’ games and practices with some not insignificant additional moral weight compared to the Belgians’. It would be odd if the pleasure, desire satisfaction and expectations of so many counted for nothing. But on the other hand, it should make no significant difference to the Belgians whether they are playing Greece or Brazil: the Belgians shouldn’t practice less against the Greeks on the grounds that an order of magnitude fewer people will be saddened when the Greeks lose than when the Brazilians do.

However, these considerations seem to me to depend to some degree on which decisions one is making. If Daniel is on the soccer team and deciding how hard to work, it makes little difference whether he is on the Belgian or the Brazilian team. But suppose instead that Daniel has two talents: he could become an excellent nurse or a top soccer player. As a nurse, he would help relieve the suffering of a number of patients. As a soccer player, in addition to the intrinsic goods of the sport, he would contribute to his fellow citizens’ pleasure and desire satisfaction. In this decision, it seems that the number of fellow citizens does matter. The number of people Daniel can help as a nurse is not very dependent on the total population, but the number of people that his soccer skills can delight varies linearly with the total population, and if the latter number is large enough, it seems that it would be quite reasonable for Daniel to opt to be a soccer player. So we could have a case where if Daniel is Belgian he should become a nurse, but if Brazilian, a soccer player (unless Brazil has a significantly greater need for nurses than Belgium, that is). But once on the team, it doesn’t seem to matter much.

The map from axiology to moral reasons is quite complex, contextual, and heavily agent-centered. The hope of reducing moral reasons to axiology is very slim indeed.

Friday, November 11, 2022

Species flourishing

As an Aristotelian who believes in individual forms, I’m puzzled about cases of species-level flourishing that don’t seem reducible to individual flourishing. On a biological level, consider how some species (e.g., social insects, slime molds) have individuals who do not reproduce. Nonetheless it is important to the flourishing of the species that the species include some individuals that do reproduce.

We might handle this kind of case by attributing to the other individuals their contribution to the reproduction of the species. But I think this doesn’t solve the problem. Consider a non-biological case. There are things that are achievements of the human species, such as having reached the moon, having achieved a four-minute mile, or having proved the Poincaré conjecture. It seems a stretch to try to individualize these goods by saying that we all contributed to them. (After all, many of us weren’t even alive in 1969.)

I think a good move for an Aristotelian who believes in individual forms is to say that “No man or bee is an island.” There is an external flourishing in virtue of the species at large: it is a part of my flourishing that humans landed on the moon. Think of how members of a social group are rightly proud of the achievements of some famous fellow-members: we Poles are proud of having produced Copernicus, Russians of having launched humans into space, and Americans of having landed on the moon.

However, there is still a puzzle. If it is a part of every human’s good that “I am a member of a species that landed on the moon”, does that mean the good is multiplied the more humans there are, because there are more instances of this external flourishing? I think not. External flourishing is tricky this way. The goods don’t always aggregate summatively between people in the case of external flourishing. If external flourishing were aggregated summatively, then it would have been better if Russia rather than Poland had produced Copernicus, because there are more Russians than Poles, and so there would have been more people with the external good of “being a citizen of a country that produced Copernicus.” But that’s a mistake: it is a good that each Pole has, but the good doesn’t multiply with the number of Poles. Similarly, if Belgium is facing off against Brazil for the World Cup, it is not the case that it would be way better if the Brazilians won, just because there are a lot more Brazilians who would have the external good of “being a fellow citizen with the winners of the World Cup.”

More on the interpersonal Satan's Apple

Let me take another look at the interpersonal moral Satan’s Apple, but start with a finite case.

Consider a situation where a finite number N of people independently make a choice between A and B, and some disastrous outcome happens if the number of people choosing B hits a threshold M. Suppose further that if you fix whether the disaster happens, then it is better for you to choose B than A, but the disastrous outcome outweighs all the benefits from all the possible choices of B.

For instance, maybe B is feeding an apple to a hungry child, and A is refraining from doing so, but there is an evil dictator who likes children to be miserable, and once enough children are not hungry, he will throw all the children in jail.

Intuitively, you should do some sort of expected utility calculation based on your best estimate of the probability p that among the N − 1 people other than you, exactly M − 1 will choose B. For if fewer or more than M − 1 of them choose B, your choice will make no difference to whether the disaster happens, and you should choose B. If F is the difference between the utilities of B and A, e.g., the utility of feeding the apple to the hungry child (assumed to be fairly positive), and D is the utility of the disaster (very negative), then you need to see whether pD + F is positive or negative or zero. Modulo some concerns about attitudes to risk, if pD + F is positive, you should choose B (feed the child), and if it’s negative, you shouldn’t.

If you have a uniform distribution over the possible number of people other than you choosing B, the probability that this number is M − 1 will be 1/N (since the number of people other than you choosing B is one of 0, 1, ..., N − 1). Now, we assumed that the benefits of B are such that they don’t outweigh the disaster even if everyone chooses B, so D + NF < 0. Therefore (1/N)D + F < 0, and so in the uniform distribution case you shouldn’t choose B.

But you might not have a uniform distribution. You might, for instance, have a reasonable estimate that each other person will choose B with probability p, while the threshold is M ≈ qN for some fixed ratio q between 0 and 1. If q is not close to p, then facts about the binomial distribution show that the probability that exactly M − 1 other people choose B goes approximately exponentially to zero as N increases. Assuming that the badness of the disaster is linear or at most polynomial in the number of agents, if the number of agents is large enough, choosing B will be a good thing. Of course, you might have the unlucky situation that q (the ratio of the threshold to the number of people) and p (the probability of an agent choosing B) are approximately equal, in which case even for large N, the risk that you’re near the threshold will be too high to allow you to choose B.

But now back to infinity. In the interpersonal moral Satan’s Apple, we have infinitely many agents choosing between A and B. But now instead of the threshold being a finite number, the threshold is an infinite cardinality (one can also make a version where it’s a co-cardinality). And this threshold has the property that other people’s choices can never be such that your choice will put things above the threshold—either the threshold has already been met without your choice, or your choice can’t make it hit the threshold. In the finite case, it depended on the numbers involved whether you should choose A or B. But the exact same reasoning as in the finite case, now without any statistical inputs being needed, shows that you should choose B. For your choice literally cannot make any difference to whether the disaster happens, no matter what other people choose.

In my previous post, I suggested that the interpersonal moral Satan’s Apple was a reason to embrace causal finitism: to deny that an outcome (say, the disaster) can causally depend on infinitely many inputs (the agents’ choices). But the finite cases make me less confident. In the case where N is large, and our best estimate of the probability of another agent choosing B is a value p not close to the threshold ratio q, it still seems counterintuitive that you should morally choose B, and so should everyone else, even though that yields the disaster.

But I think in the finite case one can remove the counterintuitiveness. For there are mixed strategies that, if adopted by everyone, are better than everyone choosing A or everyone choosing B. The mixed strategy will involve choosing some number 0 < pbest < q (where q is the threshold ratio at which the disaster happens), with everyone choosing B with probability pbest and A with probability 1 − pbest, where pbest is carefully optimized to allow as many people as possible to feed hungry children without a significant risk of disaster. The exact value of pbest will depend on the exact utilities involved, but will be close to q if the number of agents is large, as long as the badness of the disaster doesn’t grow exponentially. Now our statistical reasoning shows that when your best estimate of the probability of other people choosing B is not close to the threshold ratio q, you should just straight out choose B. And the worry I had is that everyone doing that results in the disaster. But it does not seem problematic that in a case where your data shows that people’s behavior is not close to optimal, i.e., their behavior propensities do not match pbest, you need to act in a way that doesn’t universalize very nicely. This is no more paradoxical than the fact that when there are criminals, we need to have a police force, even though ideally we wouldn’t have one.

But in the infinite case, no matter what strategy other people adopt, whether pure or mixed, choosing B is better.

Thursday, November 10, 2022

The interpersonal Satan's Apple

Consider a moral interpersonal version of Satan’s Apple: infinitely many people independently choose whether to give a yummy apple to a (different) hungry child, and if infinitely many choose to do so, some calamity happens to everyone, a calamity outweighing the hunger the child suffers. You’re one of the potential apple-givers and you’re not hungry yourself. The disaster strikes if and only if infinitely many people other than you give an apple. Your giving an apple makes no difference whatsoever. So it seems like you should give the apple to the child. After all, you relieve one child’s hunger, and that’s good whether or not the calamity happens.

Now, we deontologists are used to situations where a disaster happens because one did the right thing. That’s because consequences are not the only thing that counts morally, we say. But in the moral interpersonal Satan’s Apple, there seems to be no deontology in play. It seems weird to imagine that disaster could strike because everyone did what was consequentialistically right.

One way out is causal finitism: Satan’s Apple is impossible, because the disaster would have infinitely many causes.

More on discounting small probabilities

In yesterday’s post, I argued that there is something problematic about the idea of discounting small probabilities, given that in a large enough lottery every possibility has a small probability. I then offered a way of making sense of the idea by “trimming” the utility function at the top and bottom.

This morning, however, I noticed that one can also take the idea of discounting small probabilities more literally and still get the exact same results as by trimming utility functions. Specifically, given a probability function P and a probability discount threshold ϵ, we form a credence function Pϵ by letting Pϵ(A) = P(A) if ϵ ≤ P(A) ≤ 1 − ϵ, Pϵ(A) = 0 if P(A) < ϵ, and Pϵ(A) = 1 if P(A) > 1 − ϵ. This discounts close-to-zero probabilities to zero and raises close-to-one probabilities to one. (We shouldn’t forget the second transformation, or things won't work well.)

Of course, Pϵ is not in general a probability, but it does satisfy the Zero, Non-Negativity, Normalization and Monotonicity axioms, and we can now use the level-set integral (LSI) to calculate utilities with Pϵ.

If Uϵ is the “trimmed” utility function from my previous post, then LSIPϵ(U) = E(U2ϵ), so the two approaches are equivalent.

One can also do the same thing within Buchak’s REU theory, since that theory is equivalent to applying LSI with a probability transformed by a monotonic map of [0,1] to [0,1] keeping endpoints fixed, which is exactly what I did when moving from P to Pϵ.

Wednesday, November 9, 2022

How to discount small probabilities

A very intuitive solution to a variety of problems in infinite decision theory is that “for possibilities that have very small probabilities of occurring, we should discount those probabilities down to zero” when making decisions (Monton).

Suppose throughout this post that ϵ > 0 counts as our threshold of “very small probabilities”. No doubt ϵ < 1/100.

In this post I want to offer a precise and friendly amendment to the solution of neglecting small probabilities. But first, here is why we need an amendment. Consider a game where an integer K is randomly chosen between −1 and N for some large fixed positive N, so large that 1/(2+N) < ϵ, and you get K dollars. The game is clearly worth playing. But if you discount “possibilities that have very small probabilities”, you are left with nothing: every possibility has a very small probability!

Perhaps this is uncharitable. Maybe the idea is not that we discount to zero all possibilities with small probabilities, but that we discount such possibilities until the total discount hits the threshold ϵ. But while this sounds like a charitable interpretation of the suggestion, it leaves the theory radically underdetermined. For which possibilities do we discount? In my lottery case, do we start by discounting the possibilities at the low end (−1, 0, 1, ...) until we have hit the threshold? Or do we start at the high end (N, N − 1, N − 2, ...), or somewhere in the middle?

Here is my friendly proposal. Let U be the utility function we want to evaluate the value of. Let T be the smallest value such that P(U>T) ≤ ϵ/2. (This exists: T = inf {λ : P(U>λ) ≤ ϵ/2}.) Let t be the largest value such that P(U<t) ≤ ϵ/2 (i.e., t = sup {λ : P(U<λ) ≤ ϵ/2}). Take U and replace any values bigger than T with T and any values smaller than t with t, and call the resulting utility function Uϵ. We now replace U with Uϵ in our expected value calculations. (In the lottery example, we will be trimming from both ends at the same time.)

The result is a precise theory (given the mysterious threshold ϵ). It doesn’t neglect all possibilities with small probabilities, but rather it trims low-probability outliers. The trimming procedure respects the fact that often utility functions are defined up to positive affine transformations.

Moreover, the trimming procedure can yield an answer to what I think is the biggest objection to small-probability discounting, namely that in a long enough run—and everyone should think there is a non-negligible chance of eternal life—even small probabilities can add up. If you are regularly offered the same small chance of a gigantic benefit during an eternal future, and you turn it down each time because the chance is negligible, you’re almost surely missing out on an infinite amount of value. But we can apply the trimming procedure at the level of choice of policies rather than of individual decisions. Then if small chances are offered often enough, they won’t all be trimmed away.

Tuesday, November 8, 2022

A principle about infinite sequences of decisions

There are many paradoxes of infinite sequences of decisions where the sequence of individual decisions that maximize expected utility is unfortunate. Perhaps the most vivid is Satan’s Apple, where a delicious apple is sliced into infinitely many pieces, and Eve chooses which pieces to eat. But if she greedily takes infinitely many, she is kicked out of paradise, an outcome so bad that the whole apple does not outweigh it. For any set of pieces Eve eats, another piece is only a plus. So she eats them all, and is damned.

Here is a plausible principle:

  1. If at each time you are choosing between a finite number of betting portfolios fixed in advance, with the betting portfolio in each decision being tied to a set of events wholly independent of all the later or earlier events or decisions, with the overall outcome being just the sum or aggregation of the outcomes of the betting portfolios, and with the utility of each portfolio well-defined given your information, then you should at each time maximize utility.

In Satan’s Apple, for instance, the overall outcome is not just the sum of the outcomes of the individual decisions to eat or not to eat, and so Satan’s Apple is not a counterexample to (1). In fact, few of the paradoxes of infinite sequences of decisions are counterexamples to (1).

However, my unbounded expected utility maximization paradox is.

I don’t know if there is something particularly significant about a paradox violating (1). I think there is, but I can’t quite put my finger on it. On the other hand, (1) is such a complex principle that it may just seem ad hoc.

Wednesday, November 2, 2022

Must we accept free stuff?

Suppose someone offers you, at no cost whatsoever, something of specified positive value. However small that value, it seems irrational to refuse it.

But what if someone offers you a random amount of positive value for free? Strict dominance principles say it’s irrational to refuse it. But I am not completely sure.

Imagine a lottery where some positive integer n is picked at random, with all numbers equally likely, and if n is picked, then you get 1/n units of value. Should you play this lottery for free?

The expected value of the lottery is zero with respect to any finitely-additive real-valued probability measure that fits the description (i.e., assigns equal probability to each number). And for any positive number x, the probability that you will get less than x is one. It’s not clear to me that it’s worth going for this.

If you like infinitesimals, you might say that the expected value of the lottery is infinitesimal and the probability of getting less than some positive number x is 1 − α for an infinitesimal α. That makes it sound like a better deal, but it’s not all that clear.

Of course, infinite fair lotteries are dubious. So I don’t set much store by this example.

Two different ways of non-instrumentally pursuing a good

Suppose Alice is blind to the intrinsic value of friendship while Bob can see it. Bob tells Alice that friendship is intrinsically valuable. Alice justifiedly trusts Bob in moral matters, and so Alice concludes that friendship has intrinsic value, even though she can’t “see” it. Alice and Bob then both pursue friendship for its own sake.

But there is a difference: Bob pursues friendship because of the particular ineffable “thick” kind of value that friendship has. Alice doesn’t know what “thick” kind of value friendship has, but on the basis of Bob’s testimony she knows that it has some such value or other, and that it is a great and significant value. As long as Alice knows what kinds of actions friendship requires, she can pursue friendship without that knowledge, though it’s probably more difficult for her, perhaps in the way that it is more difficult for a tone-deaf person to play the piano: in principle the tone-deaf person could learn what kinds of finger movements result in aesthetically valuable music without grasping that aesthetic value.

The Aristotelian tradition makes the grasp of the particular thick kind of value involved in a virtuous activity a part of the full possession of that virtue. On that view, Alice cannot have the full virtue of friendship. There is something she is missing out on, just as the tone-deaf pianist is missing out on something. But she is not, I think, less praiseworthy than Bob. In fact, Alice’s pursuit of friendship involves the exercise of a virtue which Bob’s does not: the virtue of faith, as exhibited in Alice’s trust in Bob’s testimony about the value of friendship.

Tuesday, November 1, 2022

Pursuing a thing for its own sake

Suppose you pursue truth for its own sake. As we learn from Aristotle, it does not follow that you don’t pursue truth for the sake of something else. For the most valuable things are both intrinsically and instrumentally valuable, and so they are typically pursued both for their own sake and for the sake of something else.

What if you pursue something, but not for the sake of something else? Does it follow that you pursue the thing for its own sake? Maybe, but it’s not as clear as it might seem. Imagine that you eat fiber for the sake of preventing colon cancer. Then you hear about a study that says that fiber doesn’t prevent colon cancer. But you continue to eat fiber, out of a kind of volitional inertia, without any reason to do so. Then you are pursuing the consumption of fiber, but not for the sake of anything else. Yet merely losing the instrumental reason for eating fiber doesn’t give you a non-instrumental reason. Rather, you are now eating fiber irrationally, for no reason.

Perhaps it is impossible to do something for no reason. But even if it is impossible to do something for no reason, it is incorrect to define pursuing something for its own sake as pursuing it not for the sake of something else. For that you pursue something for its own sake states something positive about your pursuit, while that you don’t pursue it for the sake of anything else states something negative about your pursuit. There is a kind of valuing of the thing for its own sake that is needed to pursue the thing for its own sake.

It is tempting to say that you pursue a thing for its own sake provided that you pursue it because of the intrinsic value you take it to have. But that, too, is incorrect. For suppose that a rich benefactor tells you that they will give you a ton of money if you gain something of intrinsic value today. You know that truth is valuable for its own sake, so you find out something. In doing so, you find out the truth because the truth is intrinsically valuable. But your pursuit of that truth is entirely instrumental, despite your reason being the intrinsic value.

Hence, to pursue a thing for its own sake is not the same as to pursue it because it has intrinsic value. Nor is it to pursue it not for the sake of something else.

I suspect that pursuing a thing for its own sake is a primitive concept.

Human worth and materialism

  1. A typical human being has much more intrinsic value than any 80 kg arrangement of atoms.

  2. If materialism is true, a typical human being is an 80 kg arrangement of atoms.

  3. So, materialism is not true.