Saturday, December 17, 2022

The right can be derived from the good

There is a way to connect the right and wrong with the good and bad:

  1. An action is right (respectively, wrong) if and only if it is noninstrumentally good (respectively, bad) to do it.

This is compatible with there being cases where it is bad for one to do the right thing. Thus, refraining from stealing the money that one would need to sign up for a class on virtue is right and noninstrumentally good, but if the class is really effective then stealing the money might be instrumentally good for one, though noninstrumentally bad.

I think (1) is something that everyone should accept. Even consequentialists can and should accept (1) (though utilitarian consequentialists have too shallow an axiology to make (1) true). But natural law theorists might add a further claim to (1): the left-hand-side is true because the right-hand-side is true.

The title of this post contradicts the title of another recent post, but the contents do not.

Variation in priors and community epistemic goods

Here is a hypothesis:

  • It is epistemically better for the human community if human beings do not all have the same (ur-) priors.

This could well be true because differences in priors lead to a variety of lines of investigation, a greater need for effort in convincing others, and less danger of the community as a whole getting stuck in a local epistemic optimum. If this hypothesis is true, then we would have an interesting story about why it would be good for our community if a range of priors were rationally permissible.

Of course, that it would be good for the community if some norm of individual rationality obtained does not prove that the norm obtains.

Moreover, note that it is very plausible that what range of variation of priors is good for the community depends on the species of rational animal we are talking about. Rational apes like us are likely more epistemically cooperative than rational sharks would be, and so rational sharks would benefit less from variation of priors, since for them the good of the community would be closer to just the sum of the individual goods.

But does epistemic rationality care about what is good for the community?

I think it does. I have been trying to defend a natural law account of rationality on which just as our moral norms are given by what is natural for the will, our epistemic norms are given by what is natural for our intellect. And just as our will is the will of a particular kind of deliberative animal, so too our intellect is the intellect of a particular kind of investigative animal. And we expect a correlation between what a social animal’s nature impels it to do and what is good for the social animal’s community. Thus, we expect a degree of harmony between the norms of epistemic rationality—which on my view are imposed by the nature of the animal—and the good of the community.

At the same time, the harmony need not be perfect. Just as there may be times when the good of the community and the good of the individual conflict in respect of non-epistemic flourishing, there may be such conflict in epistemic flourishing.

I am grateful to Anna Judd for pointing me to a possible connection between permissivism and natural law epistemology.

Friday, December 16, 2022

Panteleology: A few preliminary notes

Panteleology holds that teleology is ubiquitous. Every substance aims at some end.

The main objection to panteleology is the same as that to panpsychism: the incredulous stare. I think a part of the puzzlement comes from the thought that things that are neither biological nor artifactual “just do what they do”, and there is no such thing as failure. But this seems to me to be a mistake. Imagine a miracle where a rock fails to fall down, despite being unsupported and in a gravitational field. It seems very natural to say that in that case the rock failed to do what rocks should do! So it may be that away from the biological and artifactual realm (organisms and stuff made by organisms) failure takes a miracle, but the logical possibility of such a miracle makes it not implausible to think that there really is a directedness.

That said, I think the quantum realm provides room for saying that things don’t “just do what they do”. If an electron is in a mixed spin up/down state, it seems right to think about it as having a directedness at a pure spin-up state and a directedness at a pure spin-down state, and only one of these directednesses will succeed.

Panteleology seems to be exactly what we would expect in a world created by God. Everything should glorify God.

Panteleology is also entailed by a panpsychism that follows Leibniz in including the ubiquity of “appetitions” and not just perceptions. And it seems to me that if we think through the kinds of reasons people have for panpsychism, these reasons extend to appetitions—just as a discontinuity in perception is mysterious, a discontinuity in action-driving is mysterious.

Wednesday, December 14, 2022

The right cannot be derived from the good

Consider the following thesis that Kantians, utilitarians, and New Natural Law thinkers will all agree on:

  1. All facts about rightness and wrongness can be derived from descriptive facts, facts about non-rightness value, and a small number of fundamental abstract moral principles.

The restriction to non-rightness good and bad is to avoid triviality. By “rightness value” here, I mean only the value that an action or character has in virtue of its being right or wrong to the extent that it is.

I don’t have a good definition of “abstract moral principle”, but I want them to be highly general principles about moral agency such as “Choose the greater over the lesser good”, “Do not will the evil”, etc.

I think (1) is false.

Consider this:

  2. It is not wrong for the government to forcibly and non-punitively take 20% of your lifetime income, but it is wrong for the government to forcibly and non-punitively take one of your kidneys.

I don’t think we can derive (2) in accordance with the strictures in (1). If a kidney were a lot more valuable than 20% of lifetime income, we would have some hope of deriving (2) from descriptive facts, non-rightness value facts, and abstract moral principles, for we might have some abstract moral principle prohibiting the government from forcibly and non-punitively taking something above some value. But a kidney is not a lot more valuable than 20% of lifetime income. Indeed, if it would cost you 20% of your lifetime income to prevent the destruction of one of your kidneys, it need not be unreasonable for you to refuse to pay. Indeed, it seems that either 20% of lifetime income is incommensurable with a kidney, or in some cases it is more valuable than a kidney.

If loss of a kidney were to impact one’s autonomy significantly more than loss of 20% of one’s lifetime income, then again there would be some hope for a derivation of (2). But whether loss of a kidney has a greater impact on autonomy than loss of 20% of income will differ from person to person.

One might suppose that among the small number of fundamental abstract moral principles one will have some principles about respect for bodily integrity. I doubt it, though. Respect for bodily integrity is an immensely complex area of ethics, and it is very unlikely that it can be encapsulated in a small number of abstract moral principles. Respect for bodily integrity differs in very complex ways depending on the body part and the nature of the relationship between the agent and the patient.

I think counterexamples to (1) can be multiplied.

I should note that the above argument fails against divine command theories. Divine command theorists will say that facts about rightness and wrongness are identified with descriptive facts about what God commands, and these facts can be very rich and hence include enough data to determine (2). For the argument against (1) to work, the “descriptive facts” have to be more like the facts of natural science than like facts about divine commands.

Web-based tool for adding timed text and a timer to a video

When making the record video for my Guinness application, I wanted to include a time display in the video, and Guinness also required a running count display. I ended up writing a Python script using OpenCV2 to generate a video of the time and lap count, and overlaid it with the main video in Adobe Premiere Rush.

Since then, I have written a web-based tool for generating a WebP animation of a timer and text synchronized to a set of times. The timer can be in seconds or tenths of a second, and you can specify a list of text messages and the times to display them (or to hide them). You can then overlay it on a video in Premiere Rush or Pro. There is alpha support, so you can have a transparent or translucent background if you like, and a bunch of fonts to choose from (including the geeky-looking Hershey font that I used in my Python script).

The code uses webpxmux.js, though it was a little bit tricky because in-browser Javascript may not have enough memory to store all the uncompressed images that webpxmux.js needs to generate an animation. So instead I encode each frame to WebP using webpxmux.js, extract the compressed ALPH and VP8 chunks from the WebP file, and store only the compressed chunks, writing them all at the end. (It would be even better from the memory point of view to write the chunks one by one rather than storing them in memory, but a WebP file has a filesize in its header, and that’s not known until all the compressed chunks have been generated. One could get around this limitation by generating the video twice, but that would be twice as slow.)
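For the curious, here is a minimal sketch of the chunk-walking step (in Python rather than JavaScript, my own illustration and not the tool’s actual code; the filename is hypothetical). A WebP file is a RIFF container, so the compressed chunks can be pulled out with a few lines:

    import struct

    def webp_chunks(data):
        # RIFF layout: "RIFF" + file size + "WEBP", then chunks of the form
        # fourcc + little-endian uint32 size + payload, padded to even length.
        assert data[:4] == b"RIFF" and data[8:12] == b"WEBP"
        pos = 12
        while pos + 8 <= len(data):
            fourcc = data[pos:pos + 4]
            (size,) = struct.unpack("<I", data[pos + 4:pos + 8])
            yield fourcc, data[pos + 8:pos + 8 + size]
            pos += 8 + size + (size & 1)

    # Keep only the compressed alpha and image data of one encoded frame.
    with open("frame.webp", "rb") as f:
        kept = [(cc, body) for cc, body in webp_chunks(f.read())
                if cc in (b"ALPH", b"VP8 ")]

(Note that the VP8 fourcc has a trailing space.)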

Monday, December 12, 2022

More on non-moral and moral norms

People often talk of moral norms as overriding. The paradigm kind of case seems to be like this:

  1. You are N-forbidden to ϕ but morally required to ϕ,

where “N” is some norm like that of prudence or etiquette. In this case, the moral requirement of ϕing overrides the N-prohibition on ϕing. Thus, you might be rude to make a point of justice or sacrifice your life for the sake of justice.

But if there are cases like (1), there will surely also be cases where the moral considerations in favor of ϕing do not rise to the level of a requirement, but are sufficient to override the N-prohibition. In those cases, presumably:

  2. You are N-forbidden to ϕ but morally permitted to ϕ.

Cases of supererogation look like that: you are morally permitted to do something contrary to prudential norms, but not required to do so.

So far so good. Moral norms can override non-moral norms in two ways: by creating a moral requirement contrary to the non-moral norms or by creating a moral permission contrary to the non-moral norms.

But now consider this. What happens if the moral considerations are at an even lower level, a level insufficient to override the N-prohibition? (E.g., what if to save someone’s finger you would need to sacrifice your arm?) Then, it seems:

  3. You are N-forbidden to ϕ and not morally permitted to ϕ.

But this would be quite interesting. It would imply that in the absence of sufficient moral considerations in favor of ϕing, an N-prohibition would automatically generate a moral prohibition. But this means that the real normative upshot in all three cases is given by morality, and the N-norms aren’t actually doing any independent normative work. This suggests strongly that on such a picture, we should take the N-norms to be simply a species of moral norms.

However, there is another story possible. Perhaps in the case where the moral considerations are at too low a level to override the N-prohibition, we can still have moral permission to ϕ, but that permission no longer overrides the N-prohibition. On this story, there are two kinds of cases, in both of which we have moral permission, but in one case the moral permission comes along with sufficiently strong moral considerations to override the N-prohibition, while in the other it does not. On this story, moral requirement always overrides non-moral reasons; but whether moral considerations override non-moral considerations depends on the relative strengths of the two sets of considerations.

Still, consider this. The judgment whether moral considerations override the non-moral ones seems to be an eminently moral judgment. It is the person with moral virtue who is best suited to figuring out whether such overriding happens. But what happens if morality says that the moral considerations do not override the N-prohibition? Is that not a case of morality giving its endorsement to the N-prohibition, so that the N-prohibition would rise to the level of a moral prohibition as well? But if so, then that pushes us back to the previous story where it is reasonable to take N-considerations to be subsumed into moral considerations.

I don’t want to say that all norms are moral norms. But it may well be that all norms governing the functioning of the will are moral norms.

Thursday, December 8, 2022

Utilitarianism, egoism and promises

Suppose Alice and Bob are perfect utilitarians or perfect amoral egoists in any combination. They are about to play a game where they raise a left hand or a right hand in a separate booth, and if they both raise the same hand, they both get something good. Otherwise, nobody gets that good. Nobody sees what they’re doing in the game: the game is fully automated. And they both have full shared knowledge of the above.

They confer before the game and promise to one another to raise the right hand. They go into their separate rooms. And what happens next?

Take first the case where they are both perfect amoral egoists. Amoral egoists don’t care about promises. So the fact that an amoral egoist promised to raise the right hand is no evidence at all that they will raise the right hand, unless there is something in it for them. But is there anything in it for them? Well, if Bob raises his right hand, then there is something in it for Alice to raise her right hand. But note that this conditional is true regardless of whether they’ve made any promises to each other, and it is equally true that if Bob raises his left hand, then there is something in it for Alice to raise her left hand.

The promise is simply irrelevant here. It is true that in normal circumstances, it makes sense for egoists to keep promises in order to fool people into thinking that they have morality. But I’ve assumed full shared knowledge of each other’s tendencies here, and so no such considerations apply here.

It is true that if Alice expects Bob to expect her to keep her promise, then Alice will expect Bob to raise his right hand, and hence she should raise her right hand. But since she’s known to be an amoral egoist, there is no reason for Bob to expect Alice to keep her promise. And the same vice versa.

What if they are utilitarians? It makes no difference. Since in this case both always get the same outcome, there is no difference between utilitarians and amoral egoists.

This means that in cases like this, with full transparency of behavioral tendencies, utilitarians and amoral egoists will do well to brainwash or hypnotize themselves into promise-keeping.

In ordinary life, this problem doesn’t arise as much: as long as at least one person is more typical, and hence takes promises to have reason-giving force, or public opinion is around to enforce promise-keeping, the issue doesn’t come up. But I think there is a lesson here and in the previous post: in many an ordinary practice, the utilitarian is free-riding on the non-utilitarians.

Utilitarianism and communication

Alice and Bob are both perfect Bayesian epistemic agents and subjectively perfect utilitarians (i.e., they always do what by their lights maximizes expected utility). Bob is going to Megara. He comes to a crossroads, from which two different paths lead to Megara. On exactly one of these paths there is a man-eating lion and on the other there is nothing special. Alice knows which path has the lion. The above is all shared knowledge for Alice and Bob.

Suppose the lion is on the left path. What should Alice do? Well, if she can, she should bring it about that Bob takes the right path, because doing so would clearly maximize utility. How can she do that? An obvious suggestion: Engage in a conventional behavior indicating where the lion is, such as pointing left and roaring, or saying “Hail well-met traveler, lest you be eaten, I advise you to avoid the leftward leonine path.”

But I’ve been trying really hard to figure out how it is that such a conventional behavior would indicate to Bob that the lion is on the left path.

If Alice were a typical human being, she would have a habit of using established social conventions to tell the truth about things, except perhaps in exceptional cases (such as the murderer at the door), and so her use of the conventional lion-indicating behavior would correlate with the presence of lions, and would provide Bob with evidence of the presence of lions. But Alice is not a typical human being. She is a subjectively perfect utilitarian. Social convention has no normative force for Alice (or Bob, for that matter). Only utility does.

Similarly, if Bob were a typical human being, he would have a habit of forming his beliefs on the basis of testimony interpreted via established social conventions absent reason to think one is being misinformed, and so Alice’s engaging in conventional left-path lion-indicating behavior would lead Bob to think there is a lion on the left, and hence to go on the right. And while it would still be true that social convention has no normative force for Alice, Alice would have reason to think that Bob follows convention, and for the sake of maximizing utility would suit her behavior to his. But Bob is a perfect Bayesian. He doesn’t form beliefs out of habit. He updates on evidence. And given that Alice is not a typical human being, but a subjectively perfect utilitarian, it is unclear to me why her engaging in the conventional left-path lion-indicating behavior is more evidence for the lion being on the left than for the lion being on the right. For Bob knows that convention carries no normative force for Alice.

Here is a brief way to put it. For Alice and Bob, convention carries no weight except as a predictor of the behavior of convention-bound people, i.e., people who are not subjectively perfect utilitarians. It is shared knowledge between Alice and Bob that neither is convention-bound. So convention is irrelevant to the problem at hand, the problem of getting Bob to avoid the lion. But there is no solution to the problem absent convention or some other tool unavailable to the utilitarian (a natural law theorist might claim that mimicry and pointing are natural indicators).
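To put the evidential point numerically (my own illustration, not from the argument above): if Alice’s left-indicating behavior is no likelier when the lion is on the left than when it is on the right, Bob’s posterior is just his prior.

    def posterior_left(prior, lik_left, lik_right):
        # lik_left / lik_right: probability of Alice's "lion on the left"
        # signal given that the lion is in fact on the left / on the right.
        return prior * lik_left / (prior * lik_left + (1 - prior) * lik_right)

    print(posterior_left(0.5, 0.7, 0.7))  # 0.5: an uninformative signal moves nothing
    print(posterior_left(0.5, 0.9, 0.1))  # 0.9: a convention-bound truth-teller informs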

If the above argument is correct—and I am far from confident of that, since it makes my head spin—then we have an argument that in order for communication to be possible, at least one of the agents must be convention-bound. One way to be convention-bound is to think, in a way utilitarians don’t, that convention provides non-consequentialist reasons. Another way is to be an akratic utilitarian, addicted to following convention. Now, the possibility of communication is essential for the utility of the kinds of social animals that we are. Thus we have an argument that at least some subjective utilitarians will have to become convention-bound, either by getting themselves to believe that convention has normative force or by being akratic.

This is not a refutation of utilitarianism. Utilitarians, following Parfit, are willing to admit that there could be utility maximization reasons to cease to be utilitarian. But it is, nonetheless, really interesting if something as fundamental as communication provides such a reason.

I put this as an issue about communication. But maybe it’s really an issue not about communication but about coordination. Perhaps the literature on repeated games could help in some way.

Wednesday, December 7, 2022

Transfer of endurance

There are empirical indications that various skills and maybe even virtues are pretty domain specific. It seems that being good at reasoning about one thing need not make one good at reasoning about another, even if the reasoning is formally equivalent.

I do have a piece of anecdotal data, though. I’ve been doing some endurance-ish sports. Nothing nearly like a marathon, but things like swimming 2-3 km, or climbing for an hour, typically (but not always) competing against myself.

And I have noticed some transfer of skills and maybe even of the virtue of patience both between the various sports and between the sports and other repetitive activities, such as grading. There is a distinctive feeling I have when I am half-way through something, where I am fairly confident I can finish it, and a kind of relaxation past the half-way point where I become more patient, and time seems to flow “better”. For instance, I can compare how tired I feel half-way through a long set of climbs and how tired I feel half-way through a 2 km swim, and the comparison can give me some strength. Similar positive thinking can happen while grading, things like “I can do it” or “There isn’t all that much left.” Though there are also differences between the sports and the grading, because in grading the quality of the work matters a lot more, and since I am not racing against myself, there is no point in a burst of speed at the end if I find myself with an excess of energy. Pacing is also much less important for grading.

I have no idea if anything like this transfer works for other people.

Tuesday, December 6, 2022

Dividing up reasons

One might think that reasons for action are exhaustively and exclusively divided into the moral and the prudential. Here is a problem with this. Suppose that you have a spinner divided into red and green areas. If you spin it and it lands on red, something nice happens to you; if it lands on green, something nice happens to a deserving stranger. You clearly have reason to spin the spinner. But, assuming the division of reasons, your reason for spinning it is neither moral nor prudential.

So what should we say? One possibility is to say that there are only reasons of one type, say the moral. I find that attractive. Then benefits to yourself also give you moral reason to act, and so you simply have a moral reason to spin the spinner. Another possibility is to say that in addition to moral and prudential reasons there is some third class of “mixed” or “combination” reasons.

Objection: The chance p of the spinner landing on red is a prudential reason and the chance 1 − p of its landing on green is a moral reason. So you have two reasons, one moral and one prudential.

Response: That may be right in the simple case. But now imagine that the “red” set is a saturated nonmeasurable subset of the spinner edge, and the “green” set is also such. A saturated nonmeasurable subset has no reasonable probability assignment, not even a non-trivial range of probabilities like from 1/3 to 1/2 (at best we can assign it the full range from 0 to 1). Now the reason-giving strength of a chancy outcome is proportionate to the probability. But in the saturated nonmeasurable case, there is no probability, and hence no meaningful strength for the red-based reason or for the green-based reason. But there is a meaningful strength for the red-or-green moral-cum-prudential reason. The red-or-green-based reason hence does not reduce to two separate reasons, one moral and one prudential.

Now, one might have technical worries about saturated nonmeasurable sets figuring in decisions. I do. (E.g., see the Axiom of Choice chapter in my infinity book.) But now instead of supposing saturated nonmeasurable sets, suppose a case where an agent subjectively has literally no idea whether some event E will happen—has no probability assignment for E whatsoever, not even a ranged one (except for the full range from 0 to 1). The spinner landing on a set believed to be saturated nonmeasurable might be an example of such a case, but the case could be more humdrum—it’s just a case of extreme agnosticism. And now suppose that the agent is told that if they so opt, then they will get something nice on E and a deserving stranger will get something nice otherwise. As before, the agent clearly has a reason to opt, but there is no meaningful strength for a separate prudential reason based on E or a separate moral reason based on not-E.

Final remark: The argument applies to any exclusive and exhaustive division of reasons into “simple” (i.e., non-combination) types.

Monday, December 5, 2022

Greek mathematics

I think it is sometimes said that it is anachronistic to attribute to the ancient Greeks the discovery that the square root of two is irrational, because what they discovered was a properly geometrical fact, that the side and diagonal of a square are incommensurable, rather than a fact about real numbers.

It is correct to say that the Greeks discovered an incommensurability fact. But it is, I think, worth noting that this incommensurability fact is not really a geometric fact: it is a geometric-cum-arithmetical fact. Here is why. The claim that two line segments are commensurable says that there are positive integers m and n such that m copies of the first segment have the same length as n copies of the second. This claim is essentially arithmetical in that it quantifies over positive integers.
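In symbols (my own formalization, writing |s| for the length of a segment s):

  Comm(s1, s2) ↔ ∃m ∃n (m, n ∈ ℤ⁺ ∧ m·|s1| = n·|s2|).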

And because pure (Tarskian) geometry is decidable, while the theory of the positive integers is not decidable, the positive integers are not definable in terms of pure geometry, so we cannot eliminate the quantification over positive integers. In fact, it is known that the rational numbers are not definable in terms of pure geometry either, so neither the incommensurability formulation nor the irrationality formulation is a purely geometric claim.

I think. All this decidability and definability stuff confuses me often.

Saturday, December 3, 2022

A new world record

[April, 2023 update: This record has been accepted by Guinness. I wonder how long it will last.]

And now for something not very philosophical. Today, in front of two witnesses and two timekeepers, and with the help of Levi Durham doing an amazing feat of belaying me for an hour, I beat the Guinness World Record for greatest vertical distance climbed in one hour on an indoor climbing wall. The previous record was 928.5 m and I did 1013.7 m (with about half a minute to spare). On Baylor's climbing wall, this involved 67 climbs divided into sets of 10 (the last set had 7), with about a minute of rest between sets (the clock kept on running during the rest).



Technical notes:
  • The top of the wall is 15.13 meters vertically from the ground (as measured by a geology grad student), on a 3.5-degree slab.
  • I trained for about three months, not very heavily. In training I did two unofficial full-length practice runs, and in each I beat the previous record: in the first one I got 947.1 meters and in the second I got 1004.5, so I was pretty confident I could beat the 928.5 meters on the official attempt (though I was still pretty nervous). I also trained by doing a small number of approximately 1/2 or 1/3 sized practices (maybe three or so), and more regular shorter runs (1-10 climbs) at a fast pace. 
  • The route was a standard 5.7 grade for most of my training (including when I unofficially beat the records), with Rock management kindly agreeing to keep the route up for several months for me. For the final attempt, we added holds to make the finish at the top of the wall, and changed three other holds to easier ones. (Guinness has no route grade requirements.) 
  • A Kindle Fire running a pre-release version of my Giant Stopwatch app provided unofficial timing for the audience to see and for my pacing. I had to modify the app to have a periodic beep to meet Guinness's requirements of an audible stop signal.
  • I climbed in sets of 10. The planned pace was 8:18 per set and a 44-45 second rest between sets (the clock runs during rests), averaging 49.8 seconds per climb including descent. I was always ahead of pace, and I occasionally took a mini break at the mid-point time if I was too far ahead.
  • On the ground there was a sheet of paper with the start and end times of each break printed in large letters (calculated by this script; a sketch of the idea appears after these notes), as well as the mid-point time for each set of 10 to keep me better on pace. 
  • I wore moderately worn (one small hole) and comfortable 5.10 Anasazi shoes, a Camp USA Energy harness, shorts and a T-shirt. (I have not received any sponsorship.) My belayer used a tube-style device and wore belay gloves.
  • In the morning I stress-baked pumpkin muffins for myself and the volunteers. I had the muffins, water and loose chalk on a table for use during breaks.
  • About half-way through, I ducked into the storage area inside the rock and changed to a dry shirt. 
  • Most of my practice was with an auto-belay, and at a shorter distance per climb (and hence greater number of climbs needed) since the auto-belay makes it impossible to get to the top of the wall. The auto-belay is also spring loaded so it effectively decreases body weight (by 7 lbs at the bottom according to my measurement). Then a couple of weeks ago the auto-belay was closed by management due to a maintenance issue, and I had a break in training until the Wednesday before the official attempt when I trained with a manual belay. 
  • Since Guinness requires video proof in addition to human witnesses, in the interests of redundancy, I had three cameras pointed at the attempt. The best footage (above) is from a Sony A7R2 with a zoom lens at 16mm, producing 1080P at 59.94 fps. Video was processed with Adobe Premiere Rush. The processing consisted of trimming the start and end, and adding a timing video track I generated with a Python OpenCV2 script, synchronized with single-frame precision at the 1:00:00 point with the footage of Giant Stopwatch (barely visible under the table towards the end of the video; early in the video, glare hides it). For the unofficial version I link above, I accelerated the middle climbs 10X in Premiere Rush.
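For illustration, here is a minimal sketch of the kind of break-schedule computation the script did (my reconstruction from the numbers above, not the actual script): 498 s (8:18) of climbing per set of 10, with a 45 second rest after each of the six full sets.

    CLIMB_SET = 498  # planned 8:18 of climbing per set of 10
    REST = 45        # planned rest after each full set (clock keeps running)

    def fmt(t):
        return f"{t // 60}:{t % 60:02d}"

    t = 0
    for s in range(6):  # six full sets of 10; the seventh set has 7 climbs
        mid = t + CLIMB_SET // 2
        end = t + CLIMB_SET
        print(f"set {s + 1}: mid-point {fmt(mid)}, break {fmt(end)}-{fmt(end + REST)}")
        t = end + REST
    print(f"final set starts at {fmt(t)}")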

Friday, December 2, 2022

Moderately pacifist war

I’ve been wondering whether it is possible for a country to count as pacifist and yet wage a defensive war. I think the answer is positive, as long as one has a moderate pacifism that is opposed to lethal violence but not to all violence. I think that a prohibition of all violence is untenable. It seems obvious that if you see someone about to shoot an innocent person, and you can give the shooter a shove to make them miss, you presumptively should.

Here’s what could be done by a moderately pacifist country.

First, we have “officially” non-lethal weapons: tasers, gas, etc. Some of these might violate current international law, but it seems that a pacifist country could modify its commitment to some accords.

Second, “lethal” weapons can be used less than lethally. For instance, with modern medicine, abdominal gunshot wounds are only 10% fatal, yet they are no doubt very effective at stopping an attacker. While it may seem weird to imagine a pacifist shooting someone in the stomach, when the chance of survival is 90%, it does not seem unreasonable to say that the pacifist could be aiming to stop the attacker non-lethally. After all, tasers sometimes kill, too. They do so less than 0.25% of the time, but that’s a difference of degree rather than of principle.

Third, we might subdivide moderate pacifists based on whether they prohibit all violence that foreseeably leads to death or just violence that intentionally leads to death. If it is only intentionally lethal violence that is forbidden, then quite a bit of modern warfare can stand. If the enemy is attacking with tanks or planes, one can intentionally destroy the tank or plane as a weapon, while only foreseeing, without intending, the death of the crew. (I don’t know how far one can take this line without sophistry. Can one drop a bomb on an infantry unit intending to smash up their rifles without intending to kill the soldiers?) Similarly, one can bomb enemy weapons factories.

Whether such a limited way of waging war could be successful probably depends on the case. If one combined the non-lethal (or not intentionally lethal) means with technological and numerical superiority, it wouldn’t be surprising to me if one could win.

Thursday, December 1, 2022

Against a moderate pacifism

Imagine a moderate pacifist who rejects lethal self-defense, but allows non-lethal self-defense when appropriate, say by use of tasers.

Now, imagine that one person is attacking you and nine other innocents, with the intent of killing the ten of you, and you can stop them with a taser. Surely you should, and surely the moderate pacifist will say that this is an appropriate use case for the taser.

Very well. Now consider this on a national level. Suppose there are a million enemy soldiers ordered to commit genocide against ten million, and you have two ways to stop them:

  1. Tase the million soldiers.

  2. Kill the general.

If you can tase one person to stop the murder of ten, then (1) should be permissible if it’s the only option. But tasers occasionally kill people. We don’t know how often. Apparently it’s less than 1 in 400 uses. Suppose it’s 1 in 4000. Then option (1) results in 250 enemy deaths.

So maybe our choice is between tasing a million, thereby non-intentionally killing 250 soldiers, and intentionally killing one general. It seems to me that (2) is morally preferable, even though our moderate pacifist has to allow (1) and forbid (2).

Note that a version of this argument goes through even if the moderate pacifist backs up and says that tasers are too lethal. For suppose instead of tasers we have drones that destroy the dominant hand of an enemy soldier while guaranteeing survival (with science fictional medical technology). It’s clearly right to release such a drone on a soldier who is about to kill ten innocents. But now compare:

  3. Destroy the dominant hand of a million soldiers.

  4. Kill the general.

I think (4) is still morally preferable to causing the kind of disruption to the lives of a million people that plan (3) would involve.

These may seem to be consequentialist arguments. I don't think so. I don't have the same intuitions if we replace the general by the general's innocent child in (2) and (4), even if killing the child were to stop the war (e.g., by making the general afraid that their other children would be murdered).

Normative powers

A normative power is a power to change a normative condition. Raz says the change is not produced “causally” but “normatively”.

Here is a picture on which this is correct. We exercise a normative power by exercising a natural power in such a context that the successful exercise of the natural power is partly constitutive of a normative fact. For instance, we utter a promise, thereby exercising a natural power to engage in a certain kind of speech act, and our exercise of that speech act is partly constitutive of, rather than causal of, the state of affairs of our being obligated to carry out the promised action.

There are two versions of the above model. On one version, there is an underlying fundamental conditional normative fact C, such as that if I have promised something then I should do it, and my exercise of normative power supplies the antecedent A of that conditional, and then the normative consequent of C comes to be grounded in C and A. On another version, there are some natural acts that are directly constitutive of a normative state of affairs, not merely by supplying the antecedent of a conditional normative fact. I think the first version of the model is the more plausible in paradigmatic cases.

But why not allow for a causal model? Why not suppose that a normative power is a causal power to make an irreducible normative property come to be instantiated in someone? Thus, my power to promise is the power to cause myself to be obligated to do what I have promised.

I think the difficulty with a causal model is the fact that in paradigm cases of normative power, there is a natural power that is being exercised, and we have the intuition that the exercise of the natural power is necessary and sufficient for the normative effect. But on a causal model, why couldn’t I cause a promissory-type obligation without promising, simply causing the relevant property of being obligated to come to be instantiated in me? And why couldn’t I engage in the speech act while yet remaining normatively unbound, because my normative power wasn’t exercised in parallel with the natural power?

Maybe the answer to both questions is that I could, but only metaphysically and not causally. In other words, it could be that the laws of nature, or of human nature, make it impossible for me to exercise one of the powers without the other, just as I cannot wiggle my ring finger without wiggling my middle finger as well. On this view, if there is a God, he could cause me to acquire promissory-type obligations without my promising, and he could let me engage in the natural act of promising while blocking the exercise of normative power and leaving me normatively unbound. This doesn’t seem particularly problematic.

Perhaps the real problem for a lot of people with a causal view of normative powers is that it tends to lead to a violation of supervenience. For if it is metaphysically possible to have the exercise of the normative power without the exercise of the natural power, or vice versa, then it seems we don’t have supervenience of the normative on the non-normative. But supervenience does not seem to me to be inescapable.

Wednesday, November 30, 2022

Two versions of the guise of the good thesis

According to the guise of the good thesis, one always acts for the sake of an apparent good. There is a weaker and a stronger version of this:

  • Weak: Whenever you act, you act for an end that you perceive is good.

  • Strong: Whenever you act, you act for an end, and every end you act for you perceive as good.

For the strong version to have any plausibility, “good” must include cases of purely instrumental goodness.

I think there is still reason to be sceptical of the strong version.

Case 1: There is some device which does something useful when you trigger it. It is triggered by electrical activity. You strap it on to your arm, and raise your arm, so that the electrical activity in your muscles triggers the device. Your raising your arm has the arm going up as an end, but that end is not perceived as good, but merely neutral. All you care about is the electrical activity in your muscles.

Case 2: Back when they were dating in high school, Bob promised to try his best to bake a nine-layer chocolate cake for Alice’s 40th birthday. Since then, Bob and Alice have had a falling out, and hate each other’s guts. Moreover, Alice and all her guests hate chocolate. But Alice doesn’t release Bob from his promise. Bob tries his best to bake the cake in order to fulfill his promise, and happens to succeed. In trying to bake the cake, Bob acted for the end of producing a cake. But producing the cake was worthless, since no one would eat it. The only value was in the trying, since that was the fulfillment of his promise.

In both cases, it is still true that the agent acts for a good end—the useful triggering of the device and the production of the cake. But in both cases it seems they are also acting for a worthless end. Thus the cases seem to fit with the weak but not the strong guise of the good thesis.

I was going to leave it at this. But then I thought of a way to save the strong guise of the good thesis. Success is valuable as such. When I try to do something, succeeding at it has value. So the arm going up or the cake being produced are valuable as necessary parts of the success of one’s action. So perhaps every end of your action is trivially good, because it is good for your action to succeed, and the end is a (constitutive, not causal) means to success.

This isn’t quite enough for a defense of the strong thesis. For even if the success is good, it does not follow that you perceive the success as good. You might subscribe to an axiological theory on which success is not good in general, but only success at something good.

But perhaps we can say this. We have a normative power to endow some neutral things with value by making them our ends. And in fact the only way to act for an end that does not have any independent value is by exercising that normative power. And exercising that normative power involves your seeing the thing you’re endowing with value as valuable. And maybe the only way to raise your arm or for Bob to bake the cake in the examples is by exercising the normative power, and doing so involves seeing the end as good. Maybe. This has some phenomenological plausibility and it would be nice if it were true, because the strong guise of the good thesis is pretty plausible to me.

If this story is right, it adds a nuance to the ideas here.

Tuesday, November 29, 2022

An odd poker variant

Suppose Alice can read your mind, and you are playing poker against a set of people not including Alice. You don’t care about winning, just about money. Alice has a deal for you that you can’t refuse.

  • If you win, she takes your winnings away.

  • If you lose, but you tried to win, she pays you double what you lost.

  • If you lose, but you didn’t try to win, she does nothing.

Clearly the prudent thing to do is to try to win. For if you don’t try to win, then you are guaranteed not to get any money. But if you do try, you won’t lose anything, and you might gain.

Here is the oddity: you are trying to win in order to get paid, but you only get paid if you don’t win. Thus, you are trying to achieve something, the achievement of which would undercut the end you are pursuing.

Is this possible? I think so. We just need to distinguish between pursuing victory for the sake of something else that follows from victory and pursuing victory for the sake of something that might follow from the pursuit of victory.

Nonoverriding morality

Some philosophers think that sometimes norms other than moral norms—e.g., prudential norms or norms of the meaningfulness of life—take precedence over moral norms and make permissible actions that are morally impermissible. Let F-norms be such norms.

A view where F-norms always override moral norms does not seem plausible. In the case of prudential norms or norms of meaningfulness, it would point to a fundamental selfishness in the normative constitution of the human being.

So the view has to be that sometimes F-norms take precedence over moral norms, but not always. There must thus be norms which are neither F-norms nor moral norms that decide whether F-norms or moral norms take precedence. We can call these “overall norms of combination”. And it is crucial to the view that the norms of combination themselves be neither F-norms nor moral norms.

But here is an oddity. Morality already combines F-considerations and first order paradigmatically moral considerations. Consider two actions:

  1. Sacrifice a slight amount of F-considerations for a great deal of good for one’s children.

  2. Sacrifice an enormous amount of F-considerations for a slight good for one’s children.

Morality says that (1) is obligatory but (2) is permitted. Thus, morality already weighs F and paradigmatically moral concerns and provides a combination verdict. In other words, there already are moral norms of combination. So the view would be that there are moral norms of combination and overall norms of combination, both of which take into account exactly the same first order considerations, but sometimes come to different conclusions because they weigh the very same first order considerations differently (e.g., in the case where a moderate amount of F-considerations needs to be sacrificed for a moderate amount of good for one’s children).

This view violates Ockham’s razor: Why would we have moral norms of combination if the overall norms of combination always override them anyway?

Moreover, the view has the following difficulty: It seems that the best way to define a type of norm (prudential, meaningfulness, moral, etc.) is in terms of the types of consideration that the norm is based on. But if the overall norms of combination take into account the very same types of consideration as the moral norms of combination, then this way of distinguishing the types of norms is no longer available.

Maybe there is a view on which the overall ones take into account not the first-order moral and F-considerations, but only the deliverances of the moral and F-norms of combination, but that seems needlessly complex.

Monday, November 28, 2022

Oppositional relationships

Here are three symmetric oppositional possibilities:

  1. Competition: x and y have shared knowledge that they are pursuing incompatible goals.

  2. Moral opposition: x and y have shared knowledge that they are pursuing incompatible goals and each takes the other’s pursuit to be morally wrong.

  3. Mutual enmity: x and y have shared knowledge that they each pursue the other’s ill-being for a reason other than the other’s well-being.

The reason for the qualification on reasons in 3 is that one might say that someone who punishes someone in the hope of their reform is pursuing their ill-being for the sake of their well-being. I don’t know if that is the right way to describe reformative punishment, but it’s safer to include the qualification in (3).

Note that cases of moral opposition are all cases of competition. Cases of mutual enmity are also cases of competition, except in rare cases, such as when a party suffers from depression or acedia which makes them not be opposed to their own ill-being.

I suspect that most cases of mutual enmity are also cases of moral opposition, but I am less clear on this.

Both competition and moral opposition are compatible with mutual love, but mutual enmity is not compatible with either direction of love.

Additionally, there is a whole slew of less symmetric options.

I think loving one’s competitors could be good practice for loving one’s (then necessarily non-mutual) enemies.

Games and consequentialism

I’ve been thinking about who competitors, opponents and enemies are, and I am not very clear on it. But I think we can start with this:

  1. x and y are competitors provided that they knowingly pursue incompatible goals.

In the ideal case, competitors both rightly pursue the incompatible goals, and each knows that they are both so doing.

Given externalist consequentialism, where the right action is the one that actually would produce better consequences, ideal competition will be extremely rare, since the only time the pursuit of each of two incompatible goals will be right is if there is an exact tie between the values of the goals, and that is extremely rare.

This has the odd result that on externalist consequentialism, in most sports and other games, at least one side is acting wrongly. For it is extremely rare that there is an exact tie between the values of one side winning and the value of the other side winning. (Some people enjoy victory more than others, or have somewhat more in the way of fans, etc.)

On internalist consequentialism, where the right action is defined by expected utilities, we would expect that if both sides are unbiased investigators, in most of the games at least one side would take the expected utility of the other side’s winning to be higher. For if both sides are perfect investigators with the same evidence and perfect priors, then they will assign the same expected utilities, and so at least one side will take the other’s victory to have higher expected utility, except in the rare case where the two expected utilities are equal. And if both sides assign expected utilities completely at random, but unbiasedly (i.e., are just as likely to assign a higher expected utility to the other side winning as to themselves), then bracketing the rare case where a side assigns equal expected utility to both victory options, any given side will have a probability of about a half of assigning higher expected utility to the other side’s victory, and so there will be about a 3/4 chance that at least one side will take the other side’s victory to have higher expected utility. And other cases of unbiased investigators will likely fall somewhere between the perfect case and the random case, and so we would expect that in most games, at least one side will be playing for an outcome that they think has lower expected utility.
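A quick Monte Carlo check of the random-assignment case (my own illustration): each side independently assigns the higher expected utility to the other side’s victory with probability 1/2, so at least one side does so about 3/4 of the time.

    import random

    trials = 10 ** 6
    # Each disjunct is one side's independent coin-flip assignment.
    hits = sum(1 for _ in range(trials)
               if random.random() < 0.5 or random.random() < 0.5)
    print(hits / trials)  # approximately 0.75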

Of course, in practice, the two sides are not unbiased. One might overestimate the value of oneself winning and underestimate the value of the other winning. But that is likely to involve some epistemic vice.

So, the result is that either on externalist or internalist consequentialism, in most sports and other competitions, at least one side is acting morally wrongly or is acting in the light of an epistemic vice.

I conclude that consequentialism is wrong.

Precise lengths

As usual, write [a,b] for the interval of the real line from a to b including both a and b, (a,b) for the interval of the real line from a to b excluding a and b, and [a, b) and (a, b] respectively for the intervals that include a and exclude b and vice versa.

Suppose that you want to measure the size m(I) of an interval I, but you have the conviction that single points matter, so [a,b] is bigger than (a,b), and you want to use infinitesimals to model that difference. Thus, m([a,b]) will be infinitesimally bigger than m((a,b)).

Thus at least some intervals will have lengths that aren’t real numbers: their length will be a real number plus or minus a (non-zero) infinitesimal.

At the same time, intuitively, some intervals from a to b should have length exactly b − a, which is a real number (assuming a and b are real). Which ones? The choices are [a,b], (a,b), [a, b) and (a, b].

Let α be the non-zero infinitesimal length of a single point. Then [a,a] is a single point. Its length thus will be α, and not a − a = 0. So [a,b] can’t always have real-number length b − a. But maybe at least it can in the case where a < b? No. For suppose that m([a,b]) = b − a whenever a < b. Then m((a,b]) = b − a − α whenever a < b, since (a, b] is missing exactly one point of [a,b]. But then let c = (a+b)/2 be the midpoint of [a,b]. Then:

  1. m([a,b]) = m([a,c]) + m((c,b]) = (c − a) + (b − c − α) = b − a − α,

rather than b − a as was claimed.

What about (a,b)? Can that always have real number length b − a if a < b? No. For if we had that, then we would absurdly have:

  2. m((a,b)) = m((a,c)) + α + m((c,b)) = c − a + α + b − c = b − a + α,

since (a,b) is equal to the disjoint union of (a,c), the point c and (c,b).

That leaves [a, b) and (a, b]. By symmetry if one has length b − a, surely so does the other. And in fact Milovich gave me a proof that there is no contradiction in supposing that m([a,b)) = m((a,b]) = b − a.
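Here is a toy bookkeeping check of the computations above (my own sketch): represent a length r + kα as the pair (r, k), with α the infinitesimal length of a point, and take m([a,b)) = b − a as the consistent assignment.

    from fractions import Fraction as F

    def add(*lengths):
        return (sum(l[0] for l in lengths), sum(l[1] for l in lengths))

    def m_co(a, b):    # m([a,b)) = b - a, with no infinitesimal part
        return (b - a, 0)

    POINT = (F(0), 1)  # a single point has length alpha, i.e. (0, 1)

    a, b, c = F(0), F(1), F(1, 2)
    # [a,b] = [a,c) u [c,b) u {b}, so m([a,b]) = (b - a) + alpha:
    print(add(m_co(a, c), m_co(c, b), POINT))  # (Fraction(1, 1), 1)
    # (a,b) = [a,b) minus {a}, so m((a,b)) = (b - a) - alpha:
    print(add(m_co(a, b), (F(0), -1)))         # (Fraction(1, 1), -1)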

Tuesday, November 22, 2022

Hyperreal expected value

I think I have a hyperreal solution, not entirely satisfactory, to three problems.

  1. The problem of how to value the St Petersburg gamble. The particular version that interests me is one from Russell and Isaacs which says that any finite value is too small, but any infinite value violates strict dominance (since, no matter what, the payoff will be less than infinity).

  2. How to value gambles on a countably infinite fair lottery where the gamble is positive and asymptotically approaches zero at infinity. The problem is that any positive non-infinitesimal value is too big and any infinitesimal value violates strict dominance.

  3. How to evaluate expected utilities of gambles whose values are hyperreal, where the probabilities may be real or hyperreal, which I raise in Section 4.2 of my paper on accuracy in infinite domains.

The apparent solution works as follows. For any gamble with values in some real or hyperreal field V and any finitely-additive probability p with values in V, we generate a hyperreal expected value Ep, which satisfies these plausible axioms:

  4. Linearity: Ep(af+bg) = aEpf + bEpg for a and b in V

  5. Probability-match: Ep1A = p(A) for any event A, where 1A is 1 on A and 0 elsewhere

  6. Dominance: if f ≤ g everywhere, then Epf ≤ Epg, and if f < g everywhere, then Epf < Epg.

How does this get around the arguments I link to in (1) and (2) that seem to say that this can’t be done? The trick is this: the expected value has values in a hyperreal field W which will be larger than V, while (4)–(6) only hold for gambles with values in V. The idea is that we distinguish between what one might call primary values, which are particular goods in the world, and what one might call distribution values, which specify how much a random distribution of primary values is worth. We do not allow the distribution values themselves to be the values of a gamble. This has some downsides, but at least we can have (4)–(6) on all gambles.

How is this trick done?

I think like this. First, it looks like the Hahn-Banach dominated extension theorem holds for V2-valued V1-linear functionals on V1-vector spaces, where V1 ⊆ V2 are real or hyperreal fields, except that our extending functional may need to take values in a field of hyperreals even larger than V2. The crucial thing to note is that any subset of a real or hyperreal field has a supremum in a larger hyperreal field. Then where the proof of the Hahn-Banach theorem uses infima and suprema, you move to a larger hyperreal field to get them.

Now, embed V in a hyperreal field V2 that contains a supremum for every subset of V, and embed V2 in V3 which has a supremum for every subset of V2. Let Ω be our probability space.

Let X be the space of bounded V2-valued functions on Ω and let M ⊆ X be the subspace of simple functions (with respect to the algebra of sets that Ω is defined on). For f ∈ M, let ϕ(f) be the integral of f with respect to p, defined in the obvious way. The supremum on V2 (which has values in V3) is then a seminorm dominating ϕ. Extend ϕ to a V-linear function ϕ on X dominated by the seminorm. Note that if f > 0 everywhere for f with values in V, then f > α > 0 everywhere for some α ∈ V2, and hence ϕ(−f) ≤  − α by seminorm domination, hence 0 < α ≤ ϕ(f). Letting Ep be ϕ restricted to the V-valued functions, our construction is complete.
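For a simple f, taking finitely many values, “the obvious way” is presumably the finite sum ϕ(f) = ∑v v·p(f = v), with v ranging over the values of f; that is the quantity the extension must preserve.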

I should check all the details at some point, but not today.

Monday, November 21, 2022

Dominance and countably infinite fair lotteries

Suppose we have a finitely-additive probability assignment p (perhaps real, perhaps hyperreal) for a countably infinite lottery with tickets 1, 2, ... in such a way that each ticket has infinitesimal probability (where zero counts as an infinitesimal). Now suppose we want to calculate the expected value or prevision EpU of any bounded wager U on the outcome of the lottery, where we think of the wager as assigning a value to each ticket, and the wager is bounded if there is a finite M such that |U(n)| < M for all n.

Here are plausible conditions on the expected value:

  1. Dominance: If U1 < U2 everywhere, then EpU1 < EpU2.

  2. Binary Wagers: If U is 0 outside A and c on A, then EpU = cP(A).

  3. Disjoint Additivity: If U1 and U2 are wagers supported on disjoint events (i.e., there is no n such
    that U1(n) and U2(n) are both non-zero), then Ep(U1+U2) = EpU1 + EpU2.

But we can’t. For suppose we have it. Let U(n) = 1/(2n). Fix a positive integer m. Let U1(n) be 2 for n ≤ m + 1 and 0 otherwise. Let U2(n) be 1/(m+1) for n > m + 1 and 0 for n ≤ m + 1. Then by Binary Wagers and by the fact that each ticket has infinitesimal probability, EpU1 is an infinitesimal α (since the probability of any finite set will be infinitesimal). By Binary Wagers and Dominance, EpU2 ≤ 1/(m+1). Thus by Disjoint Additivity, Ep(U1+U2) ≤ α + 1/(m+1) < 1/m. But U < U1 + U2 everywhere, so by Dominance we have EpU < 1/m. Since 0 < U everywhere, by Dominance and Binary Wagers we have 0 < EpU.

Since m was arbitrary, EpU < 1/m for every positive integer m, and so EpU is a non-zero infinitesimal β. But then β < U(n) for all n, and so by Binary Wagers and Dominance, β < EpU, a contradiction.

I think we should reject Dominance.

Corruptionism and care about the soul

According to Catholic corruptionists, when I die, my soul will continue to exist, but I won’t; then at the Resurrection, I will come back into existence, receiving my soul back. In the interim, however, it is my soul, not I, who will enjoy heaven, struggle in purgatory or suffer in hell.

Of course, for any thing that enjoys heaven, struggles in purgatory or suffers in hell, I should care that it does so. But should I have that kind of special care that we have about things that happen to ourselves for what happens to the soul? I say not, or at most slightly. For suppose that it turned out on the correct metaphysics that my matter continues to exist after death. Should I care whether it burns, decays, or is dissected, with that special care with which we care about what happens to ourselves? Surely not, or at most slightly. Why not? Because the matter won’t be a part of me when this happens. (The “at most slightly” flags the fact that we can care about “dignitary harms”, such as nobody showing up at our funeral, or us being defamed, etc.)

But clearly what happens in heaven, purgatory or hell in the interim state is something we should care about in that special way.

Friday, November 18, 2022

Social choice principles and invariance under symmetries

A referee of a recent paper of mine commented that one of my results in decision theory didn’t actually depend on numerical probabilities and hence could extend to social choice principles. This made me realize that the same may be true of some other things I’ve done.

For instance, in the past I’ve proved theorems on qualitative probabilities. A qualitative probability is a relation ≼ on the subsets of some sample space Ω such that:

  1. ≼ is transitive and reflexive.

  2. ⌀ ≼ A

  3. if A ∩ C = B ∩ C = ⌀, then A ≼ B iff A ∪ C ≼ B ∪ C (additivity).

But we need not think of Ω as a space of possibilities and of ≼ as a probability comparison. We could instead think of it as a set of people who are candidates for getting some good thing, with A ≼ B meaning that it’s at least as good for the good thing to be distributed to the members of B as to the members of A. Axioms (1) and (2) are then obvious. And axiom (3) is an independence axiom: whether it is at least as good to give the good thing to the members of B as to the members of A doesn’t depend on whether we also give it to the members of a disjoint set C at the same time.
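
For a finite Ω one can check directly that comparison by cardinality (the natural “counting” ordering on a finite population) satisfies (1)–(3). Here is a brute-force Python verification, a toy illustration of my own:

    from itertools import combinations

    Omega = frozenset(range(4))
    subsets = [frozenset(c) for r in range(5) for c in combinations(Omega, r)]

    def leq(A, B):
        """Candidate qualitative probability: compare by number of elements."""
        return len(A) <= len(B)

    # (1) reflexive and transitive; (2) the empty set is at the bottom;
    # (3) additivity: for C disjoint from A and B, A ≼ B iff A ∪ C ≼ B ∪ C.
    assert all(leq(A, A) for A in subsets)
    assert all(leq(frozenset(), A) for A in subsets)
    assert all(not (leq(A, B) and leq(B, C)) or leq(A, C)
               for A in subsets for B in subsets for C in subsets)
    assert all(leq(A, B) == leq(A | C, B | C)
               for A in subsets for B in subsets for C in subsets
               if not (A & C) and not (B & C))
    print("cardinality comparison is a qualitative probability")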

Of course, for a general social choice principle we need more than just a decision whether to give one and the same good to the members of some set. But we can still formalize those questions in terms of something pretty close to qualitative probabilities. For a general framework, suppose we are given a population set X (a set of people, or places in spacetime, or some other sites of value) and a set of values V (this could be a set of types of good, or the set of real numbers representing values). We will suppose that V comes with a transitive and reflexive (i.e., preorder) preference relation ≤. Now let Ω = X × V. A value distribution is a function f from X to V, where f(x) = v means that x gets something of value v.

We want to generate a reflexive and transitive preference ordering ≼ on the set of value distributions (the functions from X to V).

Write f ≈ g when f ≼ g and g ≼ f, and f ≺ g when f ≼ g but not g ≼ f. Similarly for values v and w, write v < w if v ≤ w but not w ≤ v.

Here is a plausible axiom on value distributions:

  4. Sameness independence: if f1, f2, g1, g2 are value distributions and A ⊆ X is such that (a) f1(x) = f2(x) and g1(x) = g2(x) if x ∉ A, and (b) f1(x) = g1(x) and f2(x) = g2(x) if x ∈ A, then f1 ≼ f2 if and only if g1 ≼ g2.

In other words, the mutual ranking between two value distributions does not depend on what the two distributions do to the people on whom the distributions agree. If it’s better to give $4 to Jones than to give $2 to Smith when Kowalski is getting $7, it’s still better to give $4 to Jones than to give $2 to Smith when Kowalski is getting $3. There is probably some other name in the literature for this property, but I know next to nothing about the social choice literature.

Finally, we want to have some sort of symmetries on the population. The most radical would be that the value distributions don’t care about permutations of people, but more moderate symmetries may be required. For this we need a group G of permutations acting on X.

  5. Strong G-invariance: if g ∈ G and f is a value distribution, then f ∘ g ≈ f.

Here, f ∘ g is the value distribution where site x gets f(g(x)).

Additionally, the following is plausible:

  6. Pareto: If f(x) ≤ g(x) for all x with f(x) < g(x) for some x, then f ≺ g.

Theorem: Assume the Axiom of Choice. Suppose ≤ on V is reflexive, transitive and non-trivial in the sense that there are two values v and w in V with v < w. Then the following are equivalent: (a) there exists a reflexive, transitive preference ordering on the value distributions satisfying (4)–(6); (b) there is such an ordering that is moreover total; (c) G has locally finite action on X.

A group of symmetries G has locally finite action on a set X provided that for each finite subset H of G and each x ∈ X, applying finite combinations of members of H to x generates only a finite subset of X. (More precisely, if ⟨H⟩ is the subgroup generated by H, then ⟨H⟩x is finite.)

If X is finite, then local finiteness of action is trivial. If X is infinite, then it will be satisfied in some cases but not others. For instance, it will be satisfied if G consists of the permutations that each move only a finite number of members of X. It will on the other hand fail if X is an infinite bunch of people regularly spaced in a line and G is the group of shifts.
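
To make local finiteness of action concrete, here is a minimal Python sketch (the names are mine) that computes the orbit ⟨H⟩x of a point under finitely many permutation generators by closing under the generators and their inverses; the action is locally finite just in case every such orbit is finite:

    def orbit(x, generators):
        """Orbit of x under the group generated by the given permutations,
        each represented as a dict; unmentioned points are taken as fixed."""
        inverses = [{v: k for k, v in g.items()} for g in generators]
        seen = {x}
        frontier = [x]
        while frontier:
            y = frontier.pop()
            for g in generators + inverses:
                z = g.get(y, y)
                if z not in seen:
                    seen.add(z)
                    frontier.append(z)
        return seen

    # A permutation moving only finitely many points gives finite orbits:
    print(orbit(0, [{0: 1, 1: 2, 2: 0}]))  # {0, 1, 2}
    # A shift on an infinite line has infinite orbits (and cannot even be
    # written as a finite dict), so shifts are not a locally finite action.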

The trick to the proof of the Theorem is to reduce preferences between distributions to comparisons of subsets of X × V and to reduce comparisons of subsets of X to preferences between binary distributions.

Proof of Theorem: Suppose that G has locally finite action. Define Ω = X × V, with G acting on Ω by g(x,v) = (g(x),v). By Theorem 2 of my invariance of non-classical probabilities paper, there is a strongly G-invariant regular (i.e., ⌀ ≺ A if A is non-empty) qualitative probability ≼ on Ω. Given a value distribution f, let f* = {(x,v) : v ≤ f(x)} be a subset of Ω. Define f ≼ g iff f* ≼ g*.

Totality, reflexivity, transitivity and strong G-invariance for value distributions follow from the same conditions for subsets of Ω. Regularity of ≼ on the subsets of Ω together with additivity implies that if A ⊂ B then A ≺ B. The Pareto condition for ≼ on the value distributions follows since if f and g are such that f(x) ≤ g(x) for all x with strict inequality for some x, then f* ⊂ g*. Finally, the complicated sameness independence condition follows from additivity.
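
To see the f* trick concretely, here is a toy Python illustration of my own, with comparison of cardinalities standing in for the qualitative probability (adequate when Ω is finite and G is trivial):

    # Toy model: X = three people, V = values 0..3 with the usual order.
    X = ["Jones", "Smith", "Kowalski"]
    V = range(4)

    def star(f):
        """f* = {(x, v) : v <= f(x)}, a subset of Omega = X x V."""
        return {(x, v) for x in X for v in V if v <= f[x]}

    def weakly_preferred(f, g):
        """f ≼ g iff f* ≼ g*; here the qualitative probability on subsets
        of Omega is just comparison of sizes."""
        return len(star(f)) <= len(star(g))

    f = {"Jones": 2, "Smith": 1, "Kowalski": 1}
    g = {"Jones": 2, "Smith": 2, "Kowalski": 1}  # Pareto-improves on f
    print(weakly_preferred(f, g), weakly_preferred(g, f))  # True False: f ≺ g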

Now suppose there is a (not necessarily total) strongly G-invariant reflexive and transitive preference ordering ≼ on the value distributions satisfying (4)–(6). Given a subset A of X, define fA to be the value distribution that gives w to all the members of A and v to all the non-members, where v < w. Define A ≼ B iff fA ≼ fB. This will be a strongly G-invariant reflexive and transitive relation on the subsets of X. It will be regular by the Pareto condition. Finally, additivity follows from the sameness independence condition. Local finiteness of action of G then follows from Theorem 2 of my paper. ⋄

Note that while it is natural to think of X as just a set of people or of locations, inspired by Kenny Easwaran one can also think of it as a set Q × Ω where Ω is a probability space and Q is a population, so that f(x,ω) represents the value x gets at state ω. In that case, G might be defined by symmetries of the population and/or symmetries of the probability space. In such a setting, we might want a weaker Pareto principle that supposes additionally that f(x,ω) < g(x,ω) for some x and all ω. With that weaker Pareto principle, the proof that the existence of a G-invariant preference of the right sort on the distributions implies local finiteness of action does not work. However, I think we can still prove local finiteness of action in that case if the symmetries in G act only on the population (i.e., for all x and ω there is a y such that g(x,ω) = (y,ω)). In that case, given a subset A of the population Q, we define fA to be the distribution that gives w to all the persons in A with certainty (i.e., everywhere on Ω) and gives v to everyone else, and the rest of the proof should go through, but I haven’t checked the details.

Thursday, November 17, 2022

Cerebrums and rattles

Animalists think humans are animals. Suppose I am an animalist and I think that I go with my cerebrum in cerebrum-transplant cases. That may seem weird. But suppose we make an equal opportunity claim here: all animals that have cerebra go with their cerebra. If your dog Rover’s cerebrum is transplanted into a robotic body, then the cerebrumless thing is not Rover. Rather, Rover inhabits a robotic body or that body comes to be a part of Rover, depending on views about prostheses. And the same is true for any animal that has a cerebrum.

It initially seems weird to say that some animals can survive reduced to a cerebrum and others cannot. But it’s not that weird when we add that the ones that can’t survive reduced to a cerebrum are animals that don’t have a cerebrum.

The person who thinks survival reduced to a cerebrum is implausible for an animal might, however, say that this is what’s odd about it: an animal reduced to a cerebrum lacks internal life support organs (heart, lungs, etc.). It is odd to think that some animals can survive without internal life support and others cannot.

But compare this: Some animals can partly exist in spatial locations where they have no living cells, and others cannot. The outer parts of my hairs are parts of me, but there are no living cells there. If my hair is in a room, then I am partly in that room, even if no living cells of mine are in the room. But on the other hand, there are some animals (at least the unicellular ones, but maybe also some soft invertebrates) that can only exist where they have a living cell.

One might object that the spatial case and the temporal case are different, because in the spatial case we are talking of partial presence and in the temporal case of full presence. But a four-dimensionalist will disagree. To exist at a time is to be partly present at that time. So to a four-dimensionalist the analogy is pretty strict.

Finally, compare this. Suppose Snaky is a rattlesnake stretched along a line in space. Now suppose we simultaneously annihilate everything in Snaky. Now, “simultaneously” is presumably defined with respect to some reference frame F1. Let z be a point in Snaky’s rattle located just prior (according to F1) to Snaky’s destruction. Then Snaky is partly present at z. But with a bit of thought, we can see that there is another reference frame F2 in which the only parts of Snaky simultaneous with z are parts of the rattle: in F2, all the non-rattle parts of Snaky have already been annihilated, but the rattle has not. Then in F2 the following is true: there is a time at which Snaky exists but nothing outside of Snaky’s rattle exists. Hence Snaky can exist as just a rattle, albeit for a very, very short period of time.
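
The “bit of thought” is just the relativity of simultaneity. As a quick sketch under the usual special-relativistic conventions: put Snaky along the x-axis of F1 with the rattle at the far (large-x) end, and let every point of Snaky be annihilated at t = 0 in F1. A frame F2 moving with velocity v along the x-axis assigns these annihilation events the times

    t'(x) = \gamma \left( t - \frac{vx}{c^2} \right) \Big|_{t=0} = -\frac{\gamma v x}{c^2},

which for v < 0 is increasing in x: in F2 the points nearer the head are annihilated before the points of the rattle, and at the F2-times in between only the rattle remains.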

Hence even a snake can exist without its life-support organs, but only for a short period of time.

Monday, November 14, 2022

Reducing goods to reasons?

In my previous post I cast doubt on reducing moral reasons to goods.

What about the other direction? Can we reduce goods to reasons?

The simplest story would be that goods reduce to reasons to promote them.

But there seem to be goods that give no one a reason to promote them. Consider the good fact that there exist (in the eternalist sense: existed, exist now, will exist, or exist timelessly) agents. No agent can promote the fact that there exist agents: that good fact is part of the agent’s thrownness, to put it in Heideggerese.

Maybe, though, this isn’t quite right. If Alice is an agent, then Alice’s existence is a good, but the fact that some agent or other exists isn’t a good as such. I’m not sure. It seems like a world with agents is better for the existence of agency, and not just better for the particular agents it has. Adding another agent to the world seems a lesser value contribution than just ensuring that there is agency at all. But I could be wrong about that.

Another family of goods, though, consists of the necessary goods. That God exists is good, but it is necessarily true. That various mathematical theorems are beautiful is necessarily true. Yet no one has reason to promote a necessary truth.

But perhaps we could have a subtler story on which goods reduce not just to reasons to promote them, but to reasons to “stand for them” (taken as the opposite of “standing against them”), where promotion is one way of “standing for” a good, but there are others, such as celebration. It does not make sense to promote the existence of God, the existence of agents, or the Pythagorean theorem, but celebrating these goods makes sense.

However, while it might be the case that something is good just in case an agent should “stand for it”, it does not seem right to think that it is good to the extent that an agent should “stand for it”. For the degree to which an agent should stand for a good is determined not just by the magnitude of the good, but also by the agent’s relationship to the good. I should celebrate my children’s accomplishments more than strangers’.

Perhaps, though, we can modify the story in terms of goods-for-x, and say that G is good-for-x to the extent that x should stand for G. But that doesn’t seem right, either. I should stand for justice for all, and not merely to the degree that justice-for-all is good-for-me. Moreover, there are goods that are good for non-agents, while a non-agent does not have a reason to do anything.

I love reductions. But alas it looks to me like reasons and goods are not reducible in either direction.

The 2018 Belgium vs Brazil World Cup game

In 2018, the Belgians beat the Brazilians 2-1 in the World Cup soccer quarterfinals. There are about 18 times as many Brazilians as Belgians in the world. This raises a number of puzzles in value theory, if for simplicity we ignore everyone but Belgians and Brazilians in the world.

An order of magnitude more people wanted the Brazilians to win, and getting what one wants is good. An order of magnitude more people would have felt significant and appropriate pleasure had the Brazilians won, and an appropriate pleasure is good. And given both wishful thinking and reasonable general presumptions about there being more talent available in a larger population base, we can suppose that a lot more people expected the Brazilians to win, and it’s good if what one thinks is the case is in fact the case.

You might think that the good of the many outweighs the good of the few, and Belgians are few. But, clearly, the above facts gave very little moral reason to the Belgian players to lose. One might respond that the above facts gave lots of reason to the Belgians to lose, but these reasons were outweighed by the great value of victory to the Belgian players, or perhaps the significant intrinsic value of playing a sport as well as one can. Maybe, but if so then just multiply both countries’ populations by a factor of ten or a hundred, in which case the difference between the goods (desire satisfaction, pleasure and truth of belief) is equally multiplied, but still makes little or no moral difference to what the Belgian players should do.

Or consider this from the point of view of the Brazilian players. Imagine you are one of them. Should the good of Brazil—around two hundred million people caring about the game—be a crushing weight on your shoulders, imbuing everything you do in practice and in the game with a great significance? No! It’s still “just a game”, even if the value of the good is spread through two hundred million people. It would be weird to think that it is a minor peccadillo for a Belgian to slack off in practice but a grave sin for a Brazilian to do so, because the Brazilian’s slacking hurts an order of magnitude more people.

That said, I do think that the larger population of Brazil imbues the Brazilians’ games and practices with some not insignificant moral weight beyond that of the Belgians’. It would be odd if the pleasure, desire satisfaction and expectations of so many counted for nothing. But on the other hand, it should make no significant difference to the Belgians whether they are playing Greece or Brazil: the Belgians shouldn’t practice less against the Greeks on the grounds that an order of magnitude fewer people will be saddened when the Greeks lose than when the Brazilians do.

However, these considerations seem to me to depend to some degree on which decisions one is making. If Daniel is on the soccer team and deciding how hard to work, it makes little difference whether he is on the Belgian or Brazilian team. But suppose instead that Daniel has two talents: he could become an excellent nurse or a top soccer player. As a nurse, he would help relieve the suffering of a number of patients. As a soccer player, in addition to the intrinsic goods of the sport, he would contribute to his fellow citizens’ pleasure and desire satisfaction. In this decision, it seems that the number of fellow citizens does matter. The number of people Daniel can help as a nurse is not very dependent on the total population, but the number of people that his soccer skills can delight varies linearly with the total population, and if the latter number is large enough, it seems that it would be quite reasonable for Daniel to opt to be a soccer player. So we could have a case where if Daniel is Belgian he should become a nurse but if Brazilian then a soccer player (unless Brazil has a significantly greater need for nurses than Belgium, that is). But once on the team, it doesn’t seem to matter much.

The map from axiology to moral reasons is quite complex, contextual, and heavily agent-centered. The hope of reducing moral reasons to axiology is very slim indeed.

Friday, November 11, 2022

Species flourishing

As an Aristotelian who believes in individual forms, I’m puzzled about cases of species-level flourishing that don’t seem reducible to individual flourishing. On a biological level, consider how some species (e.g., social insects, slime molds) have individuals who do not reproduce. Nonetheless it is important to the flourishing of the species that the species include some individuals that do reproduce.

We might handle this kind of case by attributing to other individuals their contribution to the reproduction of the species. But I think this doesn’t solve the problem. Consider a non-biological case. There are things that are achievements of the human species, such as having reached the moon, having achieved a four-minute mile, or having proved the Poincaré conjecture. It seems a stretch to try to individualize these goods by saying that we all contributed to them. (After all, many of us weren’t even alive in 1969.)

I think a good move for an Aristotelian who believes in individual forms is to say that “No man or bee is an island.” There is an external flourishing in virtue of the species at large: it is a part of my flourishing that humans landed on the moon. Think of how members of a social group are rightly proud of the achievements of some famous fellow-members: we Poles are proud of having produced Copernicus, Russians of having launched humans into space, and Americans of having landed on the moon.

However, there is still a puzzle. If it is a part of every human’s good that “I am a member of a species that landed on the moon”, does that mean the good is multiplied the more humans there are, because there are more instances of this external flourishing? I think not. External flourishing is tricky this way. The goods don’t always aggregate summatively between people in the case of external flourishing. If external flourishing were aggregated summatively, then it would have been better if Russia rather than Poland produced Copernicus, because there are more Russians than Poles, and so there would have been more people with the external good of “being a citizen of a country that produced Copernicus.” But that’s a mistake: it is a good that each Pole has, but the good doesn’t multiply with the number of Poles. Similarly, if Belgium is facing off against Brazil for the World Cup, it is not the case that it would be way better if the Brazilians won, just because there are a lot more Brazilians who would have the external good of “being a fellow citizen with the winners of the World Cup.”

More on the interpersonal Satan's Apple

Let me take another look at the interpersonal moral Satan’s Apple, but start with a finite case.

Consider a situation where a finite number N of people independently make a choice between A and B and some disastrous outcome happens if the number of people choosing B hits a threshold M. Suppose further that if you fix whether the disaster happens, then it is better for you to choose B than A, but the disastrous outcome outweighs all the benefits from all the possible choices of B.

For instance, maybe B is feeding an apple to a hungry child, and A is refraining from doing so, but there is an evil dictator who likes children to be miserable, and once enough children are not hungry, he will throw all the children in jail.

Intuitively, you should do some sort of expected utility calculation based on your best estimate of the probability p that among the N − 1 people other than you, exactly M − 1 will choose B. For if fewer or more than M − 1 of them choose B, your choice will make no difference to the disaster, and you should choose B. If F is the difference between the utilities of B and A, e.g., the utility of feeding the apple to the hungry child (assumed to be fairly positive), and D is the utility of the disaster (very negative), then you need to see whether pD + F is positive or negative or zero. Modulo some concerns about attitudes to risk, if pD + F is positive, you should choose B (feed the child) and if it’s negative, you shouldn’t.

If you have a uniform distribution over the possible number of people other than you choosing B, the probability that this number is M − 1 will be 1/N (since the number of people other than you choosing B is one of 0, 1, ..., N − 1). Now, we assumed that the benefits of B are such that they don’t outweigh the disaster even if everyone chooses B, so D + NF < 0. Therefore (1/N)D + F < 0, and so in the uniform distribution case you shouldn’t choose B.

But you might not have a uniform distribution. You might, for instance, have a reasonable estimate that a proportion p of other people will choose B while the threshold is M ≈ qN for some fixed ratio q between 0 and 1. If q is not close to p, then facts about the binomial distribution show that the probability that M − 1 other people choose B goes approximately exponentially to zero as N increases. Assuming that the badness of the disaster is linear or at most polynomial in the number of agents, if the number of agents is large enough, choosing B will be a good thing. Of course, you might have the unlucky situation that q (the ratio of threshold to number of people) and p (the probability of an agent choosing B) are approximately equal, in which case even for large N, the risk that you’re near the threshold will be too high to allow you to choose B.
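
Here is a rough Python illustration of the calculation (all the numbers are hypothetical): the pivotality probability is the binomial probability that exactly M − 1 of the other N − 1 agents choose B, and the sign of (pivotality probability)·D + F settles the choice:

    from math import comb

    def pivot_prob(N, M, p):
        """Probability that exactly M - 1 of the other N - 1 agents choose B."""
        n, k = N - 1, M - 1
        return comb(n, k) * p**k * (1 - p)**(n - k)

    N = 1000               # number of agents
    q = 0.5                # threshold ratio: disaster once qN agents choose B
    M = round(q * N)
    F = 1.0                # benefit of feeding one child
    D = -10.0 * N          # disaster, scaling linearly with the number of agents

    for p in (0.3, 0.5):   # estimated chance that another agent chooses B
        print(p, pivot_prob(N, M, p) * D + F)
    # p far from q: pivotality is negligible, value is about +1 (choose B);
    # p near q: the disaster term dominates and the value is negative.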

But now back to infinity. In the interpersonal moral Satan’s Apple, we have infinitely many agents choosing between A and B. But now instead of the threshold being a finite number, the threshold is an infinite cardinality (one can also make a version where it’s a co-cardinality). And this threshold has the property that other people’s choices can never be such that your choice will put things above the threshold—either the threshold has already been met without your choice, or your choice can’t make it hit the threshold. In the finite case, whether you should choose A or B depended on the numbers involved. But exactly the same reasoning as in the finite case, now without any statistical inputs being needed, shows that you should choose B. For your choice literally cannot make any difference to whether the disaster happens, no matter what other people choose.

In my previous post, I suggested that the interpersonal moral Satan’s Apple was a reason to embrace causal finitism: to deny that an outcome (say, the disaster) can causally depend on infinitely many inputs (the agents’ choices). But the finite cases make me less confident. In the case where N is large, and our best estimate of the probability of another agent choosing B is a value p not close to the threshold ratio q, it still seems counterintuitive that you should morally choose B, and so should everyone else, even though that yields the disaster.

But I think in the finite case one can remove the counterintuitiveness. For there are mixed strategies that, if adopted by everyone, are better than everyone choosing A or everyone choosing B. The mixed strategy will involve choosing some number 0 < pbest < q (where q is the threshold ratio at which the disaster happens), with everyone choosing B with probability pbest and A with probability 1 − pbest, where pbest is carefully optimized to allow as many people as possible to feed hungry children without a significant risk of disaster. The exact value of pbest will depend on the exact utilities involved, but will be close to q if the number of agents is large, as long as the badness of the disaster doesn’t scale exponentially. Now our statistical reasoning shows that when your best estimate of the probability of other people choosing B is not close to the threshold ratio q, you should just straight out choose B. And the worry I had is that everyone doing that results in the disaster. But it does not seem problematic that in a case where your data shows that people’s behavior is not close to optimal, i.e., their behavior propensities do not match pbest, you need to act in a way that doesn’t universalize very nicely. This is no more paradoxical than the fact that when there are criminals, we need to have a police force, even though ideally we wouldn’t have one.
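
A quick way to see this numerically is to grid-search the common mixing probability; in the Python sketch below (my own, with hypothetical utilities) the expected total utility N·p·F + P(Bin(N,p) ≥ M)·D is maximized at a value of p close to the threshold ratio q but safely below it:

    from math import comb

    def disaster_prob(N, M, p):
        """P(at least M of N agents choose B) when all mix with probability p."""
        return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(M, N + 1))

    def expected_total(N, M, p, F, D):
        """Expected total utility if everyone chooses B with probability p."""
        return N * p * F + disaster_prob(N, M, p) * D

    N, M, F, D = 1000, 500, 1.0, -10000.0     # threshold ratio q = 0.5

    p_best = max((i / 100 for i in range(50)),  # grid over 0 <= p < 0.5
                 key=lambda p: expected_total(N, M, p, F, D))
    print(p_best, expected_total(N, M, p_best, F, D))  # about 0.45 here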

But in the infinite case, no matter what strategy other people adopt, whether pure or mixed, choosing B is better.