Showing posts with label consequentialism. Show all posts

Tuesday, January 16, 2024

Impossible duties and consequentialism

Intuitively, sometimes you’re obligated to do something you can’t do. For instance, you promised to visit a friend at 5 pm, and at 4:45 pm you are hiking a one-hour drive away. Or you did something bad, and now you owe the victim a sincere apology, but you’re a vicious person and not psychologically capable of rendering an apology that is sincere.

Consequentialist theories, however, have to limit their consideration to actions you can do, since otherwise everything we do is wrong. For whatever we do, there is an impossible action with even better consequences. You spend a day volunteering at a homeless shelter. That may sound good, but the consequences would have been better if instead you magically cured all cancer.

Thus, it seems:

  1. If consequentialism is true, you are only ever obligated to do something possible.

  2. Sometimes, you are obligated to do the impossible.

  3. So, consequentialism is false.

That said, I am not completely convinced of (2).

Thursday, January 11, 2024

A deontological asymmetry

Consider these two cases:

  1. You know that your freely killing one innocent person will lead to three innocent drowning people being saved.

  2. You know that saving three innocent drowning people will lead to your freely killing one innocent person.

It’s easy to imagine cases like (1). If compatibilism is true, it’s also pretty easy to imagine cases like (2)—we just suppose that your saving the innocent people produces a state of affairs where your psychology gradually changes in such a way that you kill one innocent person. If libertarianism and Molinism are true, we can also get (2): God can reveal to you the relevant conditional of free will.

If libertarianism is true but Molinism is false, it’s harder to get (2), but we can still get it, or something very close to it. We can, for instance, imagine that if you rescue the three people, you will be kidnapped by someone who will offer increasingly difficult-to-resist temptations to kill an innocent person, and it can be very likely that one day you will give in.

Deontological ethics says that in (1) killing the innocent person is wrong.

Does it say that saving the three innocents is wrong in (2)? It might, but not obviously so. For the action is in itself good, and one might reasonably say that becoming a murderer is a consequence that is not disproportionate to saving the three lives. After all, imagine this variant:

  3. You know that saving three innocent drowning people will lead to a fourth person freely killing one innocent person.

Here it seems that it is at least permissible to save the three innocents. That someone will through a weird chain of events become a murderer if you save the three innocents does not make it wrong to save the three.

I am inclined to think that saving the three is permissible in (2). But if you disagree, change the three to thirty. Now it seems pretty clear to me that saving the drowning people is permissible in (2). But it is still wrong to kill an innocent person to save thirty.

Even on threshold deontology, it seems pretty plausible that the thresholds in (1) and (2) are different. If n is the smallest number such that it is permissible to save n drowning people, at the expense of a side-effect of your eventually killing one innocent, then it seems plausible that n is not big enough to make it permissible to kill one innocent to save n.

So, let’s suppose we have this asymmetry between (1) and (2), with the “three” replaced by some other number as needed (the same one in both statements), so that the action described in (1) is wrong but the one in (2) is permissible.

This then will be yet another counterexample to the project of consequentializing deontology: of finding a utility assignment that renders conclusions equivalent to those of deontology. For the consequences of (1) and (2) are the same, even if one assigns a very big disutility to killing innocents.

Monday, November 28, 2022

Games and consequentialism

I’ve been thinking about who competitors, opponents and enemies are, and I am not very clear on it. But I think we can start with this:

  1. x and y are competitors provided that they knowingly pursue incompatible goals.

In the ideal case, competitors both rightly pursue the incompatible goals, and each knows that they are both so doing.

Given externalist consequentialism, where the right action is the one that actually would produce better consequences, ideal competition will be extremely rare, since the only time the pursuit of each of two incompatible goals will be right is if there is an exact tie between the values of the goals, and that is extremely rare.

This has the odd result that on externalist consequentialism, in most sports and other games, at least one side is acting wrongly. For it is extremely rare that there is an exact tie between the values of one side winning and the value of the other side winning. (Some people enjoy victory more than others, or have somewhat more in the way of fans, etc.)

On internalist consequentialism, where the right action is defined by expected utilities, we would expect that if both sides are unbiased investigators, then in most games at least one side would take the expected utility of the other side’s winning to be higher. For if both sides are perfect investigators with the same evidence and perfect priors, then they will assign the same expected utilities, and so at least one side will take the other’s victory to have higher expected utility, except in the rare case where the two expected utilities are equal. And if both sides assign expected utilities completely at random, but unbiasedly (i.e., each is just as likely to assign a higher expected utility to the other side’s winning as to its own), then, bracketing the rare case where a side assigns equal expected utility to both victory options, any given side will have a probability of about a half of assigning higher expected utility to the other side’s victory, and so there will be about a 3/4 chance that at least one side will take the other side’s victory to have higher expected utility. Other cases of unbiased investigators will likely fall somewhere between the perfect case and the random case, and so we would expect that in most games, at least one side will be playing for an outcome that they take to have lower expected utility.
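The 3/4 figure in the random-assignment case can be checked with a quick Monte Carlo sketch (this is my own illustration, not from the post; the uniform draws are an assumption standing in for any unbiased random assignment):

```python
import random

def at_least_one_prefers_other(trials=100_000, seed=0):
    """Estimate the chance that at least one of two sides assigns
    higher expected utility to the other side's victory, when each
    side draws its two utilities independently and unbiasedly."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Each side independently draws utilities for "I win" and
        # "the other side wins"; ties occur with probability zero.
        a_mine, a_other = rng.random(), rng.random()
        b_mine, b_other = rng.random(), rng.random()
        if a_other > a_mine or b_other > b_mine:
            hits += 1
    return hits / trials

print(at_least_one_prefers_other())  # ≈ 0.75
```

Each side independently prefers the other’s victory with probability 1/2, so the chance that at least one does is 1 − (1/2)² = 3/4, matching the simulation.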

Of course, in practice, the two sides are not unbiased. One might overestimate the value of oneself winning and underestimate the value of the other winning. But that is likely to involve some epistemic vice.

So, the result is that either on externalist or internalist consequentialism, in most sports and other competitions, at least one side is acting morally wrongly or is acting in the light of an epistemic vice.

I conclude that consequentialism is wrong.

Friday, November 11, 2022

More on the interpersonal Satan's Apple

Let me take another look at the interpersonal moral Satan’s Apple, but start with a finite case.

Consider a situation where a finite number N of people independently make a choice between A and B, and some disastrous outcome happens if the number of people choosing B hits a threshold M. Suppose further that if you fix whether the disaster happens, then it is better for you to choose B than A, but the disastrous outcome outweighs all the benefits from all the possible choices of B.

For instance, maybe B is feeding an apple to a hungry child, and A is refraining from doing so, but there is an evil dictator who likes children to be miserable, and once enough children are not hungry, he will throw all the children in jail.

Intuitively, you should do some sort of expected utility calculation based on your best estimate of the probability p that among the N − 1 people other than you, M − 1 will choose B. For if fewer or more than M − 1 of them choose B, your choice will make no difference, and you should choose B. If F is the difference between the utilities of B and A, e.g., the utility of feeding the apple to the hungry child (assumed to be fairly positive), and D is the utility of the disaster (very negative), then you need to see if pD + F is positive or negative or zero. Modulo some concerns about attitudes to risk, if pD + F is positive, you should choose B (feed the child) and if it’s negative, you shouldn’t.

If you have a uniform distribution over the possible number of people other than you choosing B, the probability that this number is M − 1 will be 1/N (since the number of people other than you choosing B is one of 0, 1, ..., N − 1). Now, we assumed that the benefits of B are such that they don’t outweigh the disaster even if everyone chooses B, so D + NF < 0. Therefore (1/N)D + F < 0, and so in the uniform distribution case you shouldn’t choose B.
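The uniform-distribution case can be put in a few lines of code (a sketch with toy numbers of my own choosing; the post fixes only the constraint D + NF < 0):

```python
def should_choose_B(p, D, F):
    """Choose B (feed the child) iff p*D + F > 0, where p is the
    probability that exactly M-1 of the other N-1 people choose B."""
    return p * D + F > 0

N = 100            # number of agents (assumed toy value)
F = 1.0            # utility of feeding one child (assumed)
D = -2.0 * N       # disaster outweighs all B-benefits: D + N*F < 0
p_uniform = 1 / N  # uniform distribution over 0..N-1 others choosing B
print(should_choose_B(p_uniform, D, F))  # False: (1/N)*D + F = -1 < 0
```

With these numbers, (1/N)D + F = −2 + 1 = −1 < 0, so on the uniform distribution you shouldn’t choose B, as the post says.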

But you might not have a uniform distribution. You might, for instance, have a reasonable estimate that a proportion p of other people will choose B while the threshold is M ≈ qN for some fixed ratio q between 0 and 1. If q is not close to p, then facts about the binomial distribution show that the probability that M − 1 other people choose B goes approximately exponentially to zero as N increases. Assuming that the badness of the disaster is linear or at most polynomial in the number of agents, if the number of agents is large enough, choosing B will be a good thing. Of course, you might have the unlucky situation that q (the ratio of threshold to number of people) and p (the probability of an agent choosing B) are approximately equal, in which case even for large N, the risk that you’re near the threshold will be too high to allow you to choose B.
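The binomial point can be illustrated numerically (my own sketch; p, q and the values of N are assumed toy numbers): when the threshold ratio q is not close to the per-agent probability p, the chance that exactly M − 1 of the other N − 1 people choose B collapses rapidly as N grows.

```python
from math import comb

def prob_exactly(k, n, p):
    """P(exactly k of n independent agents choose B, each with prob p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p, q = 0.3, 0.6  # assumed: per-agent probability far from threshold ratio
for N in (50, 200, 800):
    M = round(q * N)
    # probability that exactly M-1 of the other N-1 agents choose B
    print(N, prob_exactly(M - 1, N - 1, p))
```

The printed probabilities shrink roughly exponentially in N, which is why, for large N and q far from p, the pD term is negligible and choosing B comes out best.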

But now back to infinity. In the interpersonal moral Satan’s Apple, we have infinitely many agents choosing between A and B. But now instead of the threshold being a finite number, the threshold is an infinite cardinality (one can also make a version where it’s a co-cardinality). And this threshold has the property that other people’s choices can never be such that your choice will put things above the threshold—either the threshold has already been met without your choice, or your choice can’t make it hit the threshold. In the finite case, whether you should choose A or B depended on the numbers involved. But the same reasoning as in the finite case, now without any statistical inputs being needed, shows that here you should choose B. For your choice literally cannot make any difference to whether the disaster happens, no matter what other people choose.

In my previous post, I suggested that the interpersonal moral Satan’s Apple was a reason to embrace causal finitism: to deny that an outcome (say, the disaster) can causally depend on infinitely many inputs (the agents’ choices). But the finite cases make me less confident. In the case where N is large, and our best estimate of the probability of another agent choosing B is a value p not close to the threshold ratio q, it still seems counterintuitive that you should morally choose B, and so should everyone else, even though that yields the disaster.

But I think in the finite case one can remove the counterintuitiveness. For there are mixed strategies that, if adopted by everyone, are better than everyone choosing A or everyone choosing B. The mixed strategy involves choosing some number 0 < pbest < q (where q is the threshold ratio at which the disaster happens), with everyone choosing B with probability pbest and A with probability 1 − pbest, where pbest is carefully optimized to allow as many people as possible to feed hungry children without a significant risk of disaster. The exact value of pbest will depend on the exact utilities involved, but will be close to q if the number of agents is large, as long as the disaster doesn’t scale exponentially. Now our statistical reasoning shows that when your best estimate of the probability of other people choosing B is not close to the threshold ratio q, you should just straight out choose B. And the worry I had is that everyone doing that results in the disaster. But it does not seem problematic that in a case where your data shows that people’s behavior is not close to optimal, i.e., their behavior propensities do not match pbest, you need to act in a way that doesn’t universalize very nicely. This is no more paradoxical than the fact that when there are criminals, we need to have a police force, even though ideally we wouldn’t have one.
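A toy optimization of the symmetric mixed strategy can make this concrete (my own sketch, with assumed numbers; the post only requires that the disaster outweigh all the benefits): if everyone chooses B with probability r, the expected utility is N·r·F plus the disaster probability times D, and the best r sits below the threshold ratio q.

```python
from math import comb

def tail_prob(n, r, m):
    """P(Binomial(n, r) >= m): probability the threshold is hit."""
    return sum(comb(n, k) * r**k * (1 - r)**(n - k) for k in range(m, n + 1))

N, M = 100, 60           # assumed toy values; threshold ratio q = 0.6
F, D = 1.0, -2.0 * N     # disaster outweighs all benefits: D + N*F < 0
best_r = max((i / 1000 for i in range(1000)),
             key=lambda r: N * r * F + tail_prob(N, r, M) * D)
print(best_r)  # noticeably below q = 0.6
```

With these numbers the optimum keeps a safety margin below q: pushing r up to q would make the disaster probability about 1/2, wiping out the gains from the extra apples.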

But in the infinite case, no matter what strategy other people adopt, whether pure or mixed, choosing B is better.

Thursday, November 10, 2022

The interpersonal Satan's Apple

Consider a moral interpersonal version of Satan’s Apple: infinitely many people independently choose whether to give a yummy apple to a (different) hungry child, and if infinitely many choose to do so, some calamity happens to everyone, a calamity that outweighs all the good done by relieving the children’s hunger. You’re one of the potential apple-givers and you’re not hungry yourself. The disaster strikes if and only if infinitely many people other than you give an apple. Your giving an apple makes no difference whatsoever. So it seems like you should give the apple to the child. After all, you relieve one child’s hunger, and that’s good whether or not the calamity happens.

Now, we deontologists are used to situations where a disaster happens because one did the right thing. That’s because consequences are not the only thing that counts morally, we say. But in the moral interpersonal Satan’s Apple, there seems to be no deontology in play. It seems weird to imagine that disaster could strike because everyone did what was consequentialistically right.

One way out is causal finitism: Satan’s Apple is impossible, because the disaster would have infinitely many causes.

Friday, October 14, 2022

Another thought on consequentializing deontology

One strategy for accounting for deontology while allowing the tools of decision theory to be used is to set such a high disvalue on violations of deontic constraints that we end up having to obey the constraints.

I think this leads to a very implausible consequence. Suppose you shouldn’t violate a deontic constraint to save a million lives. But now imagine you’re in a situation where you need to ϕ to save ten thousand lives, and suppose that the non-deontic-consequence badness of ϕing is negligible as compared to ten thousand lives. Further, you think it’s pretty likely that there is no deontic constraint against ϕing, but you’ve heard that a small number of morally sensitive people think there is. You conclude that there is a 1% chance that there is a deontic constraint against ϕing. If we account for the fact that you shouldn’t violate a deontic constraint to save a million lives by setting a disvalue on violation of deontic constraints greater than the disvalue of a million deaths, then a 1% risk of violating a deontic constraint is worse than ten thousand deaths, and so you shouldn’t ϕ because of the 1% risk of violating a deontic constraint. But this is surely the wrong result. One understands a person of principle refusing to do something that clearly violates a deontic constraint to save lots of lives. But to refuse to do something that has a 99% chance of not violating a deontic constraint to save lots of lives, solely because of that 1% chance of deontic violation, is very implausible.
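The arithmetic driving the argument is simple enough to write out (a sketch in toy units of my own, with one death as one unit of disutility):

```python
# If a deontic violation is assigned a disvalue V worse than a million
# deaths, then a mere 1% risk of violation already outweighs the ten
# thousand lives saved by phi-ing.
V = 1_000_001        # disvalue of a violation, just above a million deaths
lives_saved = 10_000
risk = 0.01          # credence that phi-ing violates a deontic constraint
print(risk * V > lives_saved)  # True
```

So on the big-disvalue view, expected-utility reasoning forbids ϕing despite the 99% chance that no constraint is violated, which is the implausible result.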

While I think this argument is basically correct, it is also puzzling. Why is it that it is so morally awful to knowingly violate a deontic constraint, but a small risk of violation can be tolerated? My guess is it has to do with where deontic constraints come from: they come from the fact that in certain prohibited actions one is setting one’s will against a basic good, like the life of the innocent. In cases where violation is very likely, one simply is setting one’s will against the good. But when it is unlikely, one simply is not.

Objection: The above argument assumes that the disvalue of deaths varies linearly in the number of deaths and that expected utility maximization is the way to go.

Response: Vary the case. Imagine that there is a ticking bomb that has a 99% chance of being defective and a 1% chance of being functional. If it’s functional, then when the timer goes off a million people die. And now suppose that the only way to disarm the bomb is to do something that has a 1% chance of violating a deontic constraint, with the two chances (functionality of the bomb and violation of constraint) being independent. It seems plausible that you should take the 1% risk of violating a deontic constraint to avoid a 1% chance of a million people dying.

Wednesday, April 6, 2022

Consequentialism and probability

Classic utilitarianism holds that the right thing to do is what actually maximizes utility. But:

  1. If the best science says that drug A is better for the patient than drug B, then a doctor does the right thing by prescribing drug A, even if due to unknowable idiosyncrasies of the patient, drug B is actually better for the patient.

  2. Unless generalized Molinism is true, in indeterministic situations there is often no fact of the matter of what would really have happened had you acted otherwise than you did.

  3. In typical cases what maximizes utility is saying what is true, but the right thing to do is to say what one actually thinks, even if that is not the truth.

These suggest that perhaps the right thing to do is the one that is more likely to maximize utility. But that’s mistaken, too. In the following case getting coffee from the machine is more likely to maximize utility.

  4. You know that one of the three coffee machines in the breakroom has been wired to a bomb by a terrorist, but don’t know which one, and you get your morning coffee fix by using one of the three machines at random.

Clearly that is the wrong thing to do, even though there is a 2/3 probability that this coffee machine is just fine and utility is maximized (we suppose) by your drinking coffee.

This, in turn, suggests that the right thing to do is what has the highest expected utility.

But this, too, has a counterexample:

  5. The inquisitor tortures heretics while confident that this maximizes their and others’ chance of getting into heaven.

Whatever we may wish to say about the inquisitor’s culpability, it is clear that he is not doing the right thing.

Perhaps, though, we can say that the inquisitor’s credences are irrational given his evidence, and the expected utilities in determining what is right and wrong need to be calculated according to the credences of the ideal agent who has the same evidence.

This also doesn’t work. First, it could be that a particular inquisitor’s evidence does yield the credences that they actually have—perhaps they have formed their relevant beliefs on the basis of the most reliable testimony they could find, and they were just really epistemically unlucky. Second, suppose that you know that all the coffee machines with serial numbers whose last digit is the same as the quadrillionth digit of π have been rigged to explode. You’ve looked at the coffee machine’s serial number’s last digit, but of course you have no idea what the quadrillionth digit of π is. In fact, the two digits are different. You did the wrong thing by using the coffee machine, even though the ideal agent’s expected utilities given your evidence would say that you did the right thing—for the ideal agent would know a priori what the quadrillionth digit of π is.

So it seems that there really isn’t a good thing for the consequentialist to say about this stuff.

The classic consequentialist might try to dig in their heels and distinguish the right from the praiseworthy, and the wrong from the blameworthy. Perhaps maximizing expected utility is praiseworthy, but an action is right if and only if it actually maximizes utility. But this still has problems with (2), and it still gets the inquisitor wrong, because it implies that the inquisitor is praiseworthy, which is also absurd.

The more I think about it, the more I think that if I were a consequentialist I might want to bite the bullet on the inquisitor cases and say that either the inquisitor is acting rightly or is praiseworthy. But as the non-consequentialist that I am, I think this is a horrible conclusion.

Thursday, March 31, 2022

Deontology and the Spanish Inquisition

  1. If a person acts in a way that would be right if their relevant non-moral beliefs were correct, they are not subject to moral criticism for their action.

  2. If consequentialism or threshold deontology is correct, then inquisitors who tortured heretics for the good of the heretics’ souls acted in ways that would be right if the inquisitors’ relevant non-moral beliefs were correct.

  3. The torture of heretics is subject to moral criticism.

  4. So, neither consequentialism nor threshold deontology is correct.

Let me expand on 2. The inquisitors had the non-moral beliefs that heretics were bound for eternal misery, and that torturing the heretics had a significant chance of turning them to a path leading to eternal bliss and generally increasing the number of people receiving eternal bliss and avoiding eternal misery. If these non-moral beliefs were correct, then the inquisitors would have been acting in a way that maximizes good consequences, and hence that would have been right if consequentialism is true. The same is true on threshold deontology. For while a threshold deontologist has deontic constraints on such things as torturing people for their beliefs, these constraints disappear once the stakes are high enough. And the stakes here are infinitely high: eternal bliss and eternal misery. Infinitely high had better be high enough!

Another way to put the argument is this: If consequentialism or threshold deontology is correct, then the only criticism we can make of the inquisitors is for their non-moral beliefs. And yet surely we should do more than that!

If we are to condemn the inquisitors on moral grounds, we need genuine absolute deontic prohibitions.

Monday, January 24, 2022

Against divine desire theories of duty

On the divine desire version of divine command theory, the right thing to do is what God wants us to do.

But what if God’s desires conflict? God doesn’t want us to commit murder. But suppose a truthful evildoer tells me that if I don’t murder one innocent person, then a thousand persons will each be given a choice to murder an innocent person or die. Knowing humanity, I can be quite confident that of that thousand people, a significant number, maybe as many as fifty or more, will opt to murder rather than be murdered. Thus, if I commit murder, God’s desire that there be no murder will be violated by one murder. If I don’t commit murder, God’s desire that there be no murder will be violated by about fifty or more murders. It seems that in this case murder fulfills God’s desires better. And yet murder is wrong.

(Some Christians these days have consequentialist inclinations and may want to accept the conclusion that in this case murder is acceptable. I will assume in this post that they are wrong.)

Perhaps we can say this: Desires should be divided into instrumental and non-instrumental ones, and it is only non-instrumental divine desires that define moral obligations. The fact that by murdering an innocent person I prevent fifty or so murders only gives God an instrumental desire for me to murder that innocent.

But this line of thought is risky. For suppose that God’s reasons for wanting the Israelites to refrain from pork were instrumental. What God really wanted was for the Israelites to have a cultural distinctiveness from other peoples, and refraining from pork served to produce that. On the view that instrumental desires do not produce obligations, it follows that the Israelites had no obligation to refrain from pork, which is wrong.

Perhaps, though, another move is possible. Maybe we should say that in the scenario I gave earlier God knows that his desires will be better served by my committing murder, but he does not want me to do so, whether instrumentally or not. For we need not suppose that whenever a rational being desires y and sees that x is instrumental to y then the rational being desires x. This does indeed get us out of the initial problem.

But we still have a bit of a puzzle. For suppose that someone you love has multiple desires and they cannot all be satisfied. Among that person’s desires, there will be desires concerning what you do and desires concerning other matters. Is it the case that in your love for them, their desires concerning what you do should automatically take precedence over their other desires? No! Suppose Alice and Bob love each other. Now imagine that Bob would really like a certain expensive item that he cannot afford to buy for himself, but that Alice, who is wealthier, can buy for him with only a minor hardship to her. We can now imagine that Bob’s desire that Alice spend no money is weaker than his desire for the expensive item. In that case, surely, given her love for Bob, Alice has good reason to buy the gift for Bob, and it is false that Bob’s desire concerning what Alice does (namely, his desire that she not spend money) takes precedence over Bob’s stronger desire concerning other matters (namely, his desire for the item). It would be a loving thing for Alice, thus, to transgress Bob’s desire that she not spend money.

But presumably God’s desire that I not commit murder is weaker than God’s desire that fifty other people not commit murder. Thus, it seems that committing the murder would exhibit love of God—assuming that God’s desires are all that is at issue, and there are no moral obligations independent of God’s desires. Hence, there is a tension between love for God and obedience to God on the divine desire version of divine command theory. And that’s a tension we should avoid.

Wednesday, September 8, 2021

Reasons from the value of true belief

Two soccer teams are facing off, with a billion fans watching on TV. Brazil has a score of 2 and Belgium has a score of 0, and there are 15 minutes remaining. The fans nearly unanimously think Brazil will win. Suddenly, there is a giant lightning strike, and all electrical devices near the stadium fail, taking the game off the air. Coincidentally, during the glitch, Brazil’s two best players get red cards, and now Belgium has a very real chance to win if they try hard.

But the captain of the Brazilian team yells out this argument to the Belgians: “If you win, you will make a billion fans have a false belief. A false belief is bad, and when you multiply the badness by billion, the result is very bad. So, don’t win!”
Great hilarity ensues among the Belgians and they proceed to trounce the Brazilians.

The Belgians are right to laugh: the consideration that the belief of a billion fans will be falsified by their effort carries little to no moral weight.

Why? Is it that false belief carries little to no disvalue? No. For suppose that now the game is over. At this point, the broadcast teams have a pretty strong moral reason to try to get back on the air in order to inform the billion fans that they were mistaken about the result of the game.

In other words, we have a much stronger reason to shift people’s beliefs to match reality than to shift reality to match people’s beliefs. Yet in both cases the relevant effect on the good and bad in the world can be the same: there is less of the bad of false beliefs and more of the good of true beliefs. An immediate consequence of this is that consequentialism about moral reasons is false: the weight of moral reasons depends on more than the value of the consequences.

It is often said that belief has a mind-to-world direction of fit. It is interesting that this not only has repercussions for the agent’s own epistemic life, but for the moral life of other parties. We have much more reason to help others to true belief by affecting their beliefs than by affecting the truth and falsity of the content of the beliefs.

Do the Belgians have any moral reason to lose, in light of the fact that losing will make the fans have correct belief? I am inclined to think so: producing a better state of affairs is always worthwhile. But the force of the reason is exceedingly small. (Nor do the numbers matter: the reason’s force would remain exceedingly small even if there were trillions of fans because Earth soccer was famous throughout the galaxy.)

There is a connection between the good and the right, but it is quite complex indeed.

Friday, May 22, 2020

Lying to save lives

I’m imagining a conversation between Alice, who thinks it is permissible to lie to Nazis to protect innocents, and a Nazi. Alice has just lied to the Nazi to protect innocents hiding in her house. The Nazi then asks her: “Do you think it is permissible to lie to protect innocents from people like me?” If Alice says “Yes”, the Nazi will discount her statement, search her house and find innocents. So, she has to say “No.” But then the Nazi goes on to ask: “Why not? Isn’t life more important than truth? And I know that you think me an unjust aggressor (no, don’t deny it, I know you know it, but I’m not going to get you just for that).” And now Alice has to either cave and say that she does think it permissible to lie to unjust aggressors, in which case the game is up, and the innocents will die, or she has to exercise her philosophical mind to find the best arguments she can for a moral conclusion that she believes to be perverse. The latter seems really bad.

Or imagine that Alice thinks that the only way she will convince the Nazi that she is telling the truth in her initial lie is by adding lies about how much she appreciates the Nazi’s fearless work against Jews. That also seems really wrong to me.

Or imagine that Alice’s non-Nazi friend Bob can’t keep secrets and asks her if she is hiding any Jews. Moreover, Alice knows that Bob knows that Alice fearlessly does what she thinks is right. And so Bob will conclude that Alice is hiding Jews unless he thinks Alice believes Jews deserve death. And if Bob comes to believe that Alice is hiding Jews, the game will be up through no fault of Bob’s, since Bob can’t keep secrets. Now it looks like the only way Alice can keep the innocents she is hiding safe is by advocating genocide to Bob.

It is very intuitive that a Nazi at the door doesn’t deserve the truth about who is living in that house. And yet it seems like everyone deserves the truth about what is right and wrong. At the same time, it is difficult to limit a permission of lying to the former kinds of cases. There is a slippery slope here, with two stable positions: an absolutist prohibition on lying and a consequentialist calculus. An in-between position will be difficult to specify and defend.

Wednesday, December 19, 2018

Reducing the right to the good

Here is a simple reductive account of right and wrong that now seems to me to be obviously correct:

  1. An action is right if and only if it is non-instrumentally wholly good; it is wrong if and only if it is non-instrumentally at least partly bad.

Think, after all, how easily we move between saying that someone acted badly and that someone acted wrongly.

If (1) is a correct reduction, then we can reduce facts about right and wrong to facts about the value of particular kinds of things, namely actions.

By the way, if we accept (1), then consequentialism is equivalent to the following thesis:

  2. An action is non-instrumentally good if and only if it is on balance (instrumentally and non-instrumentally) best.

But it is quite strange to think that there be an entity that is non-instrumentally good if and only if it is on balance best.

Monday, October 29, 2018

Preventing suffering

Theodicies according to which sufferings make possible greater moral goods are often subjected to this objection: If so, why should we prevent sufferings?

I am nowhere near having a full answer to the question. But I think this is related to a question everyone, and not just the theist, needs to face up to. For everyone should accept Socrates’ great insight that moral excellence is much more important than avoiding suffering, and yet we should often prevent suffering that we think is apt to lead to the more important goods. I don’t know why. That’s right now one of the mysteries of the moral life for me. But it is as it is.

Famously, persons with disabilities tend to report higher life satisfaction than persons without disabilities. But we all know that accepting this data should not keep us from working to prevent disability-causing car accidents. While higher life satisfaction is not the same as moral excellence, the example is still instructive. Our reasons to prevent disability-causing car accidents do not require us to refute the empirical data suggesting that persons with disabilities lead more satisfying lives. I do not know why exactly we still have on balance reason to prevent such accidents, but it is clear to me that we do.

Mother Teresa thought that the West is suffering from a deep poverty of relationships, with both God and neighbor. Plausibly she was right. We probably are not in a position to know that affluence is a significant cause of this deep poverty, but we can be open to the real epistemic possibility that it is, and we can acknowledge the deep truth that the riches of relationship are far more important than physical goods, without this sapping our efforts to improve the material lot of the needy.

Or suppose you are witnessing Alice torturing Bob, and an oracle informs you that in ten years they will be reconciled, with Bob beautifully forgiving Alice and Alice deeply repenting, with the goods of the reconciliation being greater than the bads in the torture. I think you should still stop Alice.

A quick corollary of the above cases is that consequentialism is false. But there is a deep paradox here that cuts more deeply than consequentialism. I do not know how to resolve it.

Here are some stories, none of which are fully satisfying to me in their present state of development.

Perhaps it is better if humans have a special focus on the relief of suffering and improvement of material well-being of the patient. An opposite focus might lead to an unhealthy condescension.

Perhaps it has something to do with our embodied natures that a special focus on the bodily good of the other is a particularly fitting way for humans to express love for one another. While letting another suffer in the hope of greater on-balance happiness might be better for the patient, it could well be worse for the agent and the relationship. Maybe we should think of what Catholics call the “corporal works of mercy” as a kind of kiss, or maybe even something like a sacrament.

Perhaps there is something about respect for the autonomy of the other. Maybe others’ physical good is also our business while moral development is more their own business.

I think there is more. But the point I want to make is just that this is not a special question for theism and theodicy. It is a paradox that all morally sensitive people should see both sides of.

Coming back to theodicy, note that the above speculative considerations may not apply to God as the agent. (God cannot but condescend, being infinitely above us. God is not embodied, except in respect of the Incarnation. And we have no autonomy rights against God, as God is closer to us than we are to ourselves.)

Monday, April 16, 2018

The Repugnant Conclusion and Strong AI

Derek Parfit’s Repugnant Conclusion says that, on standard utilitarian assumptions, if n is sufficiently large, then n lives of some minimal level of flourishing will be better than any fixed-size society of individuals who greatly flourish.

I’ve been thinking about the interesting things that you can get if you combine the Repugnant Conclusion argument with strong Artificial Intelligence.

Assume utilitarianism first.

Given strong Artificial Intelligence, it should be possible to make a computer system that achieves some minimal level of human-like flourishing. Once that is achieved, economies of scale become possible, and I expect it should be possible to replicate that system a vast number of times, and to do so much more cheaply per copy than the cost of supporting a single human being. Note that the replication can be done both synchronically and diachronically: we should optimize the hardware and software in such a way as both to make many instances of the hardware and to run as many flourishing lives per day as possible. Once the program is written, since an exact copy is being run for each instance with the same inputs, we can ensure equal happiness for all.

If strong AI is possible, generating such minimally flourishing AI and making a vast number of replicates seems a more promising way to increase utility than fighting disease and poverty among humans. Indeed, it would likely be more efficient to decrease the number of humans to the minimum needed to serve the great number of duplicates. At that point, the morally best thing for humans to do will be to optimize the hardware to allow us to build more computers running the happy-ish software and to run each life in as short an amount of external time as possible, and to work to increase the amount of flourishing in the software.

Now note an interesting difference from the traditional Repugnant Conclusion. It seems not unlikely that if strong AI is achieved, we will be able to repeatably, safely and cheaply achieve in software not just the minimal levels of human-like flourishing, but high levels of human-like flourishing, even of forms of flourishing other than the pleasure or desire fulfillment that classical utilitarian theories talk about. We could make a piece of software that quickly and cheaply enjoys the life of a classical music aficionado, enjoying the best examples of human classical music culture, and that has no hankering for anything more. And if compatibilism is true (and it is likely that it is true if strong AI is true), then we could make a piece of software that reliably engages in acts of great moral heroism in its simulated world. We lose a bit of value from the fact that these acts only affect a simulated world, but we gain by being able to ensure that no immoral activity mars the value. If we are not certain of the correct axiology, we could hedge our bets by making a software life that is quite flourishing on any plausible axiology: say one that combines pleasure, desire satisfaction, enjoyment of the arts and virtuous activity. And then just run vast numbers of copies of that life per day.

It is plausible that, unless there is some deep spiritual component to human flourishing (of a sort that is unlikely to be there given the materialism that seems needed for strong AI to be possible), we will not only be able to more efficiently increase the sum good by running lots of copies of a happy life than by improving human life, but we will be able to more efficiently improve on the average good.

But one thing is unchanged. The conclusion is still repugnant. A picture of our highest moral imperative being the servicing of a single computer program run on as many machines as possible, repeatedly and as quickly as possible, is repugnant.

A tempting objection is to say that multiple copies of the same life count as just one. That’s easily fixed: a well-controlled amount of algorithmic variation can be introduced into lives.

Observe, too, that the above line of thought is much more practical than the original Repugnant Conclusion. The original Repugnant Conclusion is highly theoretical, in that it is difficult to imagine putting into place the kind of society that is described in it without a significant risk of utility-destroying revolution. But right now rich philanthropists could switch their resources from benefiting the human race to working to develop a happy AI (I hesitate to write this sentence, with a slight fear that someone might actually make that switch—but the likelihood of my blog having such an effect seems small). One might respond to the Repugnant Conclusion that all ethical theories give implausible answers in some hypothetical cases. But the case here is not hypothetical.

We can take the above, just as the original Repugnant Conclusion, to be a reductio ad absurdum against utilitarianism. But it seems to be more than that. Any plausible ethics has to have a consequentialist component, even if pursuit of the consequences is restricted by deontic considerations. So on many competing ethical theories, there will still be a pull to the conclusion, given the vast amount of total value, and the respectable amount of average (and median) value achieved in the repugnant proposal. And one won’t be able to resist the pull by denying the picture of value that underwrites utilitarianism, because as noted above, “deeper” values can be achieved in software, given strong AI.

I can think of three plausible ways out of the strong AI version of the Repugnant Conclusion:

  1. The correct axiology lays great stress on the value of deep differences between lives, deeper than can be reliably and safely achieved through algorithmic variation (if there is too much variation, we risk producing misery).

  2. There is a deontic restriction prohibiting the production of software-based persons, perhaps because it is wrong for us to have such a total influence over the life of another person or because it is wrong for us to produce persons by any process other than natural reproduction.

  3. Strong AI is impossible.

I am inclined to think all three are true. :-)

Monday, October 9, 2017

Preventing someone from murdering Hitler

You are a secret opponent of the Nazi regime, and you happen to see Schmidt sneaking up on Hitler with an axe and murderous intent. You know what’s happening: Schmidt believes that Hitler has been committing adultery with Mrs. Schmidt, and is going to murder Hitler. Should you warn Hitler’s guards?

  1. Intuition: No! If Hitler stays alive, millions will die.

  2. Objection: You would be intending Schmidt to kill Hitler, a killing that you know would be a murder, and you are morally speaking an accomplice. And it is wrong to intend an evil to prevent more evil.

There is a subtlety here. Perhaps you think: “It is permissible to kill an evil tyrant like Hitler, and so Schmidt is doing the right thing, but for the wrong reasons. So by not warning the guards, I am not intending Schmidt to commit a murder, but only a killing that is objectively morally right, albeit I foresee that Schmidt will commit it for the wrong reasons.” I think this reasoning is flawed—I don’t think one can say that Schmidt is doing anything morally permissible, even if the same physical actions would be morally permissible if they had another motive. But if you’re impressed by the reasoning, tweak the case a little. All this is happening before Hitler has done any of the evil tyrannical deeds that would justify killing him. However, you foresee with certainty that if Hitler is not stopped, he will do them. So Schmidt’s killing would be wrong, even if Schmidt were doing it to prevent millions of deaths.

What’s behind (2) is the thought that Double Effect forbids you to intend an evil, even if it’s for the purpose of preventing a greater evil.

But here is the fascinating thing. Double Effect forbids you from warning the guards. The action of warning the guards is an action that has two effects: (i) prevention of a murder, and (ii) the foreseen deaths of millions. Double Effect has a proportionality condition: it is only permissible to do an action with a good and a bad effect when the bad effect is proportionate to the good effect. But millions of deaths are not proportionate to the prevention of one murder. So Double Effect forbids you from warning the guards.

Now it seems that we have a conflict between Double Effect and Double Effect. On the one hand, Double Effect seems to say that you may not warn the guards, because doing so will cause millions of deaths. On the other hand, it seems to say that you may not refrain from warning the guards in order to save millions because in so doing you are intending Schmidt to kill Hitler.

I know of three ways out of this conflict.

Resolution 1: Double Effect applies only to commissions and not omissions. It is permissible to omit warning the guards in order that Schmidt may have a free hand to kill Hitler, even though it would not be permissible to help Schmidt by any positive act. One may intend the killing of Hitler in the context of one’s omission but not in the context of one’s commission.

Resolution 2: This is a case of Triple Effect or, equivalently, of a defeater-defeater. You have some reason not to warn the guards. Maybe it’s just the general moral reason that you have not to invoke the stern apparatus of Nazi law, or the very minor reason not to bother straining one’s voice. There is a defeater for that reason, namely that warning the guards will prevent a murder. And there is a defeater-defeater: preventing that murder will lead to the deaths of millions. Thus, the defeater to your initial relatively minor moral reason not to warn the guards—viz., that if you don’t, a murder will be committed—is defeated, and so you can just go with the initial moral reason. On this story, the initial Objection to the Intuition is wrong-headed, because it is not your intention to save millions—that is just a defeater to a defeater.

Resolution 3: Your intention is simply to refrain from acting in ways that have a disproportionately bad effect. We should simply not perform such actions. You aren’t refraining as a means to the prevention of the disproportionately bad effect, as the initial Objection claimed. Rather, you are refraining as a means to prevent yourself from contributing to a disproportionately bad effect, namely to prevent yourself from defending the life of the man who will kill millions.

Evaluation:

While Resolution 1 is in some ways attractive, it requires an explanation why intentions for evils are permissible in the context of omissions but not of commissions.

I used to really like something like Resolution 2. But now it seems forced to me, because it claims that your primary intention in the omission can be something so very minor—perhaps as minor as not straining one’s voice in some versions of the story. That just doesn’t seem psychologically realistic, and it seems to trivialize the goods and evils involved if one is focused on something minor. I still think the Triple Effect reasoning has much to be said for it, but only in those cases where there is a significant good at stake in the initial intention.

I find myself now pulled to Resolution 3. The worry is that Resolution 3 pulls one towards the consequentialist justification of the initial intuition. But I think Resolution 3 is distinguishable from consequentialism, both logically and psychologically. Logically: the intention is not to contribute to an overwhelmingly bad outcome. Psychologically: one can refrain from warning the guards even if one wouldn’t raise a finger to help Schmidt. Resolution 3 suggests that there is an asymmetry between commission and omission, but it locates that asymmetry more plausibly than Resolution 1 did. Resolution 1 claimed that it was permissible to intend evils in the context of omissions. That is implausible for the same reason why it is impermissible to intend evils in the context of commissions: the will of someone who intends evil is a corrupt will. But Resolution 3 is an intuitively plausible non-consequentialist principle about avoiding being a contributor to evil.

In fact, if one so wishes, one can use Resolution 3 to fix the problem with Resolution 2. The initial intention becomes: Don’t be a contributor to evil. Defeater: If you don’t warn, a murder will happen. Defeater-defeater: But millions will die. Now the initial intention is very much non-trivial.

Tuesday, September 13, 2016

How a blog radically changes the world forever

On any given day, one in 30,000 Americans will conceive a child. So, roughly, there is a one in 60,000 chance that someone you (I'll just assume you're in the US for convenience) are interacting with will be conceiving a child later that day. Any interaction you have with a person who will be conceiving a child later that day is likely to affect the exact time of conception, and it seems very likely that varying the time of conception will vary the genetic identity of the child conceived. However, there might be some "resetting" mechanisms throughout the day, mediated by the way our days are governed by times of meetings and so on, and so not every interaction will change the time of conception. So let's say that one in four interactions with someone who will be conceiving a child later that day will vary who will be conceived (or whether anyone will be). That means that one in 240,000 interactions we have with people affects who will be conceived on that day.

Once one has affected who will be conceived that day, as long as the human race survives long enough, eventually just about everyone's genetic identity will be affected by one's actions. For, obviously, that conceived individual's own children's genetic identity will be affected. But that individual will interact with others, affecting the romantic decisions or at least times of conception of others, for instance. It seems quite safe to suppose that that individual's interactions over a lifetime will affect the genetic identity of ten individuals. Given an interconnected world like we have, it seems reasonable to suppose that in 20 generations, almost everyone's genetic identity will be affected (maybe there will be some isolated communities that won't be affected--but I think this is unlikely).

Counting a generation as 30 years, a blog that has 240,000 hits per year, running over a single year, will affect the genetic identity of almost everyone in 600 years. And this, in turn, will affect all vastly morally significant things where individuals matter: the starting of wars, the inventing of medical treatments, etc.

It is very likely, then, that the long-term effects of such a blog in terms of reshaping the world population vastly exceed whatever good and ill the blog does to the readers in the way proper to blogs. After all, one more or one less warmongering dictator and we have millions of people killed or not killed. So the kinds of considerations one brings to bear on the question whether to have a blog--how will it affect my readers, etc.--are swamped by the real variation in consequences. (Assuming Judgment Day is still hundreds of years away.)
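
The post's back-of-envelope arithmetic can be sketched in a few lines of Python; every number here (the 1-in-60,000 chance, the 1-in-4 fraction, the 240,000 hits) is the post's own assumption, not measured data:

```python
# Back-of-envelope sketch of the post's arithmetic.
# Every number here is the post's assumption, not measured data.
p_conceiving_today = 1 / 60_000   # chance a random contact conceives later today
p_shift_given_contact = 1 / 4     # assumed fraction of interactions that shift conception timing
p_identity_change = p_conceiving_today * p_shift_given_contact  # 1 in 240,000

hits_per_year = 240_000           # the hypothetical blog traffic from the post
expected_identity_changes = hits_per_year * p_identity_change
print(expected_identity_changes)  # about one identity-affecting interaction per year
```

On these figures, a year of such a blog is expected to alter roughly one conception, which the cascade argument then amplifies across generations.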

Not to be paralyzed in our actions, we need to bracket such great unknowns, even though we know they are there and that they matter more than the knowns on the basis of which we make our decisions!

Monday, October 27, 2014

Yet another infinite population problem

There are infinitely many people in existence, unable to communicate with one another. An angel makes it known to all that if, and only if, infinitely many of them make some minor sacrifice, he will give them all a great benefit far outweighing the sacrifice. (Maybe the minor sacrifice is the payment of a dollar and the great benefit is eternal bliss for all of them.) You are one of the people.

It seems you can reason: We are making our decisions independently. Either infinitely many people other than me make the sacrifice or not. If they do, then there is no gain for anyone to my making it—we get the benefit anyway, and I unnecessarily make the sacrifice. If they don't, then there is no gain for anyone to my making it—we don't get the benefit even if I do, so why should I make the sacrifice?

If consequentialism is right, this reasoning seems exactly right. Yet one had better hope that it's not the case that everyone reasons like this.

The case reminds me of both the Newcomb paradox—though without the need for prediction—and the Prisoner's Dilemma. Like in the case of the Prisoner's Dilemma, it sounds like the problem is with selfishness and freeriding. But perhaps unlike in the case of the Prisoner's Dilemma, the problem really isn't about selfishness.

For suppose that the infinitely many people each occupy a different room of Hilbert's Hotel (numbered 1,2,3,...). Instead of being asked to make a sacrifice oneself, however, one is asked to agree to the imposition of a small inconvenience on the person in the next room. It seems quite unselfish to reason: My decision doesn't affect anyone else's (I so suppose—so the inconveniences are only imposed after all the decisions have been made). Either infinitely many people other than me will agree or not. If so, then we get the benefit, and it is pointless to impose the inconvenience on my neighbor. If not, then we don't get the benefit, and it is pointless to add to this loss the inconvenience to my neighbor.

Perhaps, though, the right way to think is this: If I agree—either in the original or the modified case—then my action partly constitutes a good collective (though not joint) action. If I don't agree, then my action runs a risk of partly constituting a bad collective (though not joint) action. And I have good reason to be on the side of the angels. But the paradoxicality doesn't evaporate.

I suspect this case, or one very close to it, is in the literature.

Friday, January 31, 2014

Consequentialism and doing what is very likely wrong

Consider a version of consequentialism on which the right thing to do is the one that has the best consequences. Now suppose you're captured by an eccentric evil dictator who always tells the truth. She informs you there are ten innocent prisoners and there is a game you can play.

  • If you refuse to play, the prisoners will all be released.
  • If you play, the number of hairs on your head will be quickly counted by a machine, and if that number is divisible by 50, all the prisoners will be tortured to death. If that number is not divisible by 50, they will be released and one of them will be given a tasty and nutritious muffin as well, which muffin will otherwise go to waste.

Now it is very probable that the number of hairs on your head is not divisible by 50. And if it's not divisible by 50, then by the above consequentialism, you should play the game—saving ten lives and providing one with a muffin is a better consequence than saving ten lives. So if you subscribe to the above consequentialism, you will think that very likely playing is right and refusing to play is wrong. But still you clearly shouldn't play—the risk is too high (and you can just put that in expected utility terms: a 1/50 probability of 10 being tortured to death is much worse than a 49/50 probability of an extra muffin for somebody). So it seems that you should do what is very likely wrong.
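
The parenthetical expected-utility point can be made concrete in a short sketch; the utility figures below are invented, and only their signs and rough magnitudes matter:

```python
# Invented utilities, with refusing (all ten prisoners released) as the 0 baseline.
u_muffin = 1                 # assumed value of one extra muffin
u_ten_tortured = -1_000_000  # assumed disvalue of ten torture-deaths (any hugely negative value works)
p_divisible_by_50 = 1 / 50   # rough chance the hair count is divisible by 50

eu_play = (1 - p_divisible_by_50) * u_muffin + p_divisible_by_50 * u_ten_tortured
eu_refuse = 0.0
assert eu_play < eu_refuse  # so expected-utility reasoning says: don't play
```

So the expected-utility version of consequentialism forbids playing even though playing is, by its own lights, very probably the right act.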

So the consequentialist had better not say that the right thing to do is the one that has the best consequences. She would do better to say that the right thing to do is the one that has the best expected consequences. But I think that is a significant concession to make. The claim that you should act so as to produce the best consequences has a very pleasing simplicity to it. In its simplicity, it is a lovely philosophical theory (even though it leads to morally abhorrent conclusions). But once we say that you should maximize expected utility, we lose that elegant simplicity. We wonder why maximize expected utility instead of doing something more risk averse.

But even putting risk to one side, we should wonder why expected utility matters so much morally speaking. The best story about why expected utility matters has to do with long-run consequences and the law of large numbers. But that story, first, tells us nothing about intrinsically one-shot situations. And, second, that justification of expected utility maximization is essentially a rule-utilitarian style of argument—it is the policy, not the particular act, that is being evaluated. Thus, anyone impressed by this line of thought should rather be a rule consequentialist than an act consequentialist. And rule consequentialism has really serious theoretical problems.

Thursday, August 15, 2013

Endangerment and harm

Suppose I deliberately endanger you, but the danger doesn't befall you. Then there is a sense in which I do you no harm, but there is also a sense in which imposing the danger on you was a harm to you. You have a claim against me for my endangerment of you.

But one can also endanger people who never exist. Suppose, for instance, that I give you a drug that has a high probability of physically harming your future children if you have any (let's say I assign a certain moderate probability to your having children), but you never actually have any children. There I might be harming you in some way, but I haven't harmed them, since they never exist to be harmed. One can tweak the case so there are no parents to be harmed. Maybe I expect intelligent life to evolve on some planet with moderate probability, and I set up a device to harm some intelligent beings on that planet once they evolve, but no life evolves there.

There are thus two probabilities in endangerment: the probability that there will be a potential victim at all, and the conditional probability that a potential victim will be harmed given that there is one. The overall probability of harm is the product of these two probabilities.

It is a very interesting question whether there is a significant moral difference between a case where

  • I deliberately cause a probability 1/4 of harm to a person I know for sure to exist
versus a case where
  • I deliberately cause a probability 1/2 of (same as above) harm to a person I assign probability 1/2 to the existence of (i.e., I deliberately cause it to be the case that if that person exists, she has chance 1/2 of suffering that harm)
when we suppose that in the first case the danger did not in fact befall the person while in the second case the person did not in fact exist.
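
As a quick check, the product rule above assigns the two bulleted cases the same unconditional probability of harm (a minimal sketch using only the post's own numbers):

```python
# P(harm) = P(victim exists) * P(harm | victim exists)
case_known_victim = 1.0 * (1 / 4)          # victim certainly exists, 1/4 chance of harm
case_uncertain_victim = (1 / 2) * (1 / 2)  # 1/2 chance of a victim, then 1/2 chance of harm
assert case_known_victim == case_uncertain_victim == 0.25
```

So any moral difference between the cases cannot lie in the overall risk imposed; it would have to lie elsewhere.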

The consequentialist intuitions that we all have to some degree pull one to saying that there is no difference. On the other hand, in the first case there is a person whom I have failed to love and respect in the way that she deserves, while there is no such failure of love and respect in the second case. In fact, if one has a picture of morality as essentially involving interpersonal relations, it is difficult to see how any wrongdoing has happened in the second case if in fact the person never comes to exist.

A theist might be able to maintain both something like the consequentialist intuition and the idea that moral failures are primarily failures of interpersonal relations. There is a deep and mysterious message in Scripture expressed by the Psalmist saying to God: "Against you, you alone, have I sinned" (Ps 51.4). The Psalm heading connects this with David's sin against Uriah, which makes this message particularly puzzling, since it seems clear that David sinned against both Uriah and God. But suppose we take really seriously the idea that all positive attributes are acts of participation in God. Uriah's dignity, then, is an act of participation in God's dignity, and its value is entirely derivative from God's infinite dignity. In some sense, then, David's wrongdoing against Uriah really just is a wrongdoing against God. Now suppose that David had been mistaken, and there never had been a Uriah. (Maybe Bathsheba was an unmarried woman who created a myth of an Uriah in order to protect herself from unwanted advances.) The wrongdoing against God's dignity would have been just the same. The wrongdoing against Uriah wouldn't have been there, but that wrongdoing's "culpatory force" was entirely derivative from the culpatory force of the wrongdoing against God, since Uriah's dignity was an act of participation in God's dignity. If we have something like this picture, then we really can say that all moral failures are primarily failures of interpersonal relations and yet hold the two cases, the one where there is an endangered victim and the one where there turns out not to be one, to be morally on par. For all respect and love is ultimately and implicitly for God, though perhaps God qua participated in or participable in by a creature.

Wednesday, July 27, 2011

Infinite promises

Suppose I promise x that I will do whatever I promise y, and I promised y that I will do whatever I promise x.  I then promise x to bring ice cream to the party, but fail to bring it.  In so doing, I have violated my promise to x to bring ice cream to the party.  My violating my promise to x to bring ice cream violated my promise to y to do whatever I promise x.  My violating my promise to y to do whatever I promise x then violated my promise to x to do whatever I promised to y.  And so on.

It looks like by the simple neglect of bringing ice cream to the party, I have violated three promises in infinitely many ways.

But this action doesn't seem to be infinitely wrong, or if it is infinitely wrong, it is such because of the offense against God implicit in the promise-breaking, and not because of the infinite sequence of violations.

But why isn't it infinitely wrong (at least bracketing the theological significance)?

Is it because it's just one action?  No: for a single action can be infinitely wrong, as when someone utters a spell to make infinitely many people miserable while believing that the spell will be efficacious (it doesn't matter whether the spell is efficacious and whether there are infinitely many people).

Is it because only a finite number of promises are broken?  No: for a single promise can be broken infinitely often (given an infinite future, or a future dense interval of events if that's possible) with the demerit adding up.  (Imagine that I promise never to do something, and then I do it daily for eternity.)

Maybe one will bite the bullet and say that the action is infinitely wrong.  What's the harm in saying that?  Answer: incorrect moral priorities.  Keeping oneself from infinitely wrong actions is a much higher priority than keeping oneself from finitely wrong actions.  But it doesn't seem that one should greatly, if at all, prioritize being the sort of person who brings ice cream to parties in the above circumstances over, say, refraining from finitely but seriously hurting people's feelings.

Puzzling, isn't it?

The above generated a puzzle by infinite reflection.  But one can generate puzzling cases without such reflection.  Suppose x loves y, and I harm y.  I therefore also harm x, since as we learn from Aristotle, Aquinas and Nozick, the interests of the beloved are interests of the lover.  Now suppose infinitely many people love y.  (If a simultaneous infinity is impossible, assume eternalism and imagine an infinite future sequence of people who love y.  Or just suppose I falsely believe that infinitely many people love y.)  It seems that by imposing a minor harm on y, I impose a minor (perhaps very minor) harm on each of infinitely many people, and thereby an infinite harm.  Now, suppose that I have a choice whether to impose a minor harm on y, who is loved by infinitely many persons, or a major harm on z, who is loved by only finitely many.  As long as the major harm is only finitely greater than the minor harm, it seems that it is infinitely worse to impose the minor harm on y than the major harm on z.  But that surely is mistaken (and isn't it particularly bad to harm those who have fewer friends?).

One might try to bring God in.  Everyone is loved by God, and God is infinite, and so the major harm to z goes against the interests of God, and God's interests count infinitely (not that God is worse off "internally"), so the major harm to z multiplied by the importance of God's interests will outweigh the minor harm to y, even if one takes into account the infinitely many people who love y, since divine infinity trumps all other infinities.  But this neglects the fact that God also loves all the infinitely many people who love y, and hence the harm to the infinitely many lovers of y also gets multiplied by a divine infinity.

Nor is infinity needed to generate the puzzle.  Suppose that N people love y and only ten people love z, and my choice is whether to impose one hour of pain on y or fifty years of pain on z.  No matter how little the badness of y's suffering to each of y's lovers, it seems that if you make N large enough, it will overshadow the disvalue of the fifty years of pain to z.
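
On a simple additive aggregation the threshold N is easy to compute; both magnitudes below (pain measured in hours, and the per-lover share) are invented purely for illustration:

```python
# Invented magnitudes, purely to illustrate the additive worry.
pain_z = 50 * 365 * 24   # fifty years of z's pain, measured in hours
per_lover_share = 0.001  # assumed disvalue to each of y's lovers of y's one hour of pain

# Each lover bears one thousandth of an hour-equivalent, so a thousand lovers
# per hour of z's pain tip the additive balance.
n_threshold = pain_z * 1000
assert n_threshold * per_lover_share >= pain_z
print(n_threshold)  # 438,000,000 lovers already suffice to outweigh z's fifty years
```

However small the per-lover share is made, a merely finite N recovers the same verdict, which is what makes simple additivity look suspect here.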

I think the right answer to all this is that wrongs, benefits and harms can't be arithmetized in a very general way.  There is, perhaps, pervasive incommensurability, so that the harms to y's lovers are incommensurable with the harms to y.

But I don't know that incommensurability is the whole story.  It is a benefit to one if a non-evil project one identifies with is successful.  Now imagine two sport teams, one that has a million fans and the other of which has a thousand.  Is it really the case that members of the less popular team have a strong moral reason to bring it about that the other team wins because of the benefit to the greater number of fans, even if it is a moral reason overridden by their duties of integrity and special duties to their fans?  (Likewise, is it really the case that the interests of Americans qua Americans morally count for about ten times as much as the interests of Canadians qua Canadians?)

Yet some harms and benefits do arithmetize fairly well.  It does seem about equally bad to impose two hours of suffering on ten people as one hour of suffering on twenty.

So whatever function combines values and disvalues is very complicated, and depends on the kind of values and disvalues being combined.  The only way a simple additivity can be assured is if we close our eyes to the vast universe of types of values, say restricting ourselves to pleasure and suffering as hedonistic utilitarians do.