
Friday, June 20, 2025

Punishment, reward and theistic natural law

I’ve always found punishment and (to a lesser extent) reward puzzling. Why is it that when someone does something wrong there is moral reason to impose a harsh treatment on them, and why is it that when someone does something right (and especially something supererogatory) there is moral reason to do something nice for them?

Of course, it’s easy to explain why it’s good for our species that there be a practice of reward and punishment: such a practice in obvious ways helps to maintain a cooperative society. But what makes it morally appropriate to impose a sacrifice on an individual for the good of the species in this way, whether the sacrifice falls on the person receiving the punishment or on the person giving the reward, when the reward has a cost?

Punishment and reward thus fit into a schema where we would like to be able to make use of this argument form:

  1. It would be good (respectively, bad) for humans if moral fact F did (did not) obtain.

  2. Thus, probably, moral fact F does obtain.

(The argument form is better on the parenthetical negative version.) It would be bad for humans if we did not have distinctive moral reasons to reward and punish, since our cooperative society would be more liable to fall apart due to cheating, freeriding and neglect of others. So we have such moral reasons.

As I have said on a number of occasions, we want a metaethics on which this is a good argument. Rule-utilitarianism is such a metaethics. So is Adams’ divine command theory with a loving God. And so is theistic natural law, where God chooses which natures to exemplify because of the good features in these natures. I want to say something about this last option in our case, and why it is superior to the others.

Human nature encodes what is right and wrong for us to do. Thus, it can encode that it is right for us to punish and reward. An answer as to why it’s right for us to reward and punish, then, is that God wanted to make cooperative creatures, and chose a nature of cooperative creatures that have moral reasons to punish and reward, since that improves the cooperation.

But there is a way that the theistic natural law solution stands out from the others: it can incorporate Boethius’ insight that it is intrinsically bad for one to get away unpunished with wrongdoing. For our nature not only encodes what is right and wrong for us to do, but also what is good or bad for us. And so it can encode that it is bad for us to get away unpunished. It is good for us that it be bad for us to get away unpunished, since its being bad for us to get away unpunished means that we have additional reason to avoid wrongdoing—if we do wrong, we either get punished or we get away unpunished, and both options are bad for us.

The rule-utilitarian and divine-command options only explain what is right and wrong, not what is good and bad, and so they don’t give us Boethius’ insight.

Monday, May 8, 2023

Glitches in the moral law?

Human law is a blunt instrument. We often replace the thing that we actually care about by a proxy for it, because it makes the law easier to formulate, follow and/or enforce. Thus, to get a driver’s license, you need to pass a multiple choice test about the rules of the road. Nobody actually cares whether you can pass the test: what we care about is whether you know the rules of the road. But the law requires passing a test, not knowledge.

When a thing is replaced by (sometimes we say “operationalized by”) a proxy in law, sometimes the law can be practically “exploited”, i.e., it is possible to literally follow the law while defeating its purpose. Someone with good test-taking skills might be able to pass a driving rules test with minimal knowledge (I definitely had a feeling like that in regard to the test I took).

A multiple-choice test is not a terrible proxy for knowledge, but not great. Night is a very good proxy for times of significant natural darkness, but eclipses show it’s not a perfect proxy. In both cases, a law based on the proxy can be exploited and will in more or less rare cases have unfortunate consequences.

But whether a law can be practically exploited or not, pretty much any law involving a proxy will have unfortunate or even ridiculous consequences in far-out scenarios. For instance, suppose some jurisdiction defines chronological age as the difference in years between today’s date and the date of birth, and then has some legal right that kicks in at age 18. Then if a six-month-old travels to another stellar system at close to the speed of light and returns as a toddler after 18 years have elapsed on earth, they will have the legal rights accruing to an 18-year-old. The difference in years between today’s date and the date of birth is only a proxy for chronological age, but it is a practically nearly perfect proxy, as long as we don’t have near-light-speed travel.
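
Here is a quick sketch (in Python) of the time-dilation arithmetic behind this example; the 99.9%-of-light-speed cruising speed is just an assumed figure for illustration:

    import math

    v_fraction_of_c = 0.999        # assumed cruising speed, as a fraction of the speed of light
    earth_years_elapsed = 18.0     # years between departure and return, in the earth frame
    age_at_departure = 0.5         # six months old at departure

    gamma = 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)    # Lorentz factor, about 22.4
    traveler_years_elapsed = earth_years_elapsed / gamma   # proper time experienced by the traveler

    legal_age = age_at_departure + earth_years_elapsed         # the proxy: difference of dates
    biological_age = age_at_departure + traveler_years_elapsed

    print(f"legal (proxy) age: {legal_age:.1f} years")       # 18.5: the legal right kicks in
    print(f"biological age:    {biological_age:.1f} years")  # about 1.3: still a toddler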

If a law involves a proxy that does not match the reality we care about in too common or too easy to engineer circumstances, then that’s a problem. On the other hand, if the mismatch happens only in circumstances that the lawmaker knows for sure won’t actually happen, that’s not an imperfection in the law.

Now suppose that God is the lawmaker. By the above observations, it does not reflect badly on a lawmaker if a law involves a proxy that fails only in circumstances that the lawmaker knows for sure won’t happen. More generally, it does not reflect badly on a lawmaker if a law has unfortunate or ridiculous consequences in cases that the lawmaker knows for sure won’t happen. Our experience with human law suggests that such cases are difficult to avoid without making the law unwieldy. And while there is no great difficulty for God in making an unwieldy law, such a law would be hard for us to follow.

In a context where a law is instituted by God (whether by command, or by desire, or by the choice of a nature for a created person), we thus should not be surprised if the law “glitches out” in far-out scenarios. Such “glitches” are no more an imperfection than it is an imperfection of a helicopter that it can’t fly on the moon. This should put a significant limitation on the use of counterexamples in ethics (and likely epistemology) in contexts where we are allowing for the possibility of divinely instituted normativity (say, divine command or theistic natural law).

One way that this “glitching” can be manifested is this. The moral law does not present itself to us as just a random sequence of rules. Rather, it is an organized body, with more or less vague reasons for the rules. For instance, “Do not murder” and “Do not torture” may come under a heading like “Human life is sacred.” (Compare how US federal law has “titles” like “Title 17: Copyright” and “Title 52: Voting and Elections”, and presumably there are vague value-laden principles that go with each title, such as promoting progress with copyright and giving voice to people with voting.) In far-out scenarios, the rules may end up conflicting with their reasons. Thus, to many people “Do not murder” would not seem a good way to respect the sacredness of human life in far-out cases where murdering an innocent person is the only way to save the human race from extinction. But suppose that God in instituting the law on murder knew for sure that there would never occur a situation where the only way to save the human race from extinction is murder. Then there would be no imperfection in making the moral law be “Do not murder.” Indeed, this would arguably be a better law than “Do not murder unless the extinction of humanity is at stake”, because the latter law is needlessly complex if the extinction of humanity will never be at stake in a potential murder.

Thus the theistic deontologist faced with the question of whether it would be right to murder if that were the only way to save the human race can say this: The law prohibits murder even in this case. But if this case had had a chance of happening, then God would likely have made a different law. Thus, there are two ways of interpreting the counterfactual question of what would happen if we were in this far-out situation. We can either keep fixed the moral law, and say that the murder would be wrong, or we can keep fixed God’s love of human life, and say that in that case God would likely have made a different law and so the murder wouldn’t be wrong.

We should, thus, avoid counterexamples in ethics that involve situations that we don’t expect to happen, unless our target is an ethical theory (Kantianism?) that can’t make the above move.

But what about counterexamples in ethics that involve rare situations that do not make a big overall difference (unlike the case of the extinction of the human race)? We might think that for the sake of making the moral law more usable by the limited beings governed by it, God could have good reason for making laws that in some situations conflict with the reasons for the laws, as long as these situations are not of great importance to the human species. (The case of murdering to prevent the extinction of the human race would be of great importance even if it were extremely rare!)

If this is right (and I rather wish it weren’t), then the method of counterexamples is even more limited.

Tuesday, February 26, 2019

Lying and consequences

Suppose Alice never lies, while Bob lies to save innocent lives.

Consider circumstances where Alice and Bob know that getting Carl to believe a proposition p would save an innocent life, and suppose that Alice and Bob know whether p is true.

In some cases of this sort, Bob is likely to do better with respect to innocent lives:

  1. p is false and Carl doesn’t know Alice and Bob’s character.

  2. p is false and Carl doesn’t know that Alice and Bob know that getting Carl to believe p would save an innocent life.

For in cases 1 and 2, Bob is likely to succeed in getting Carl to believe p, while Alice is not.

But in one family of cases, Alice is likely to do better:

  3. p is true and Carl knows Alice and Bob’s character and knows that they believe that getting Carl to believe p would save an innocent life.

For in these cases, Carl wouldn’t be likely to believe Bob with regard to p, as he would know that Bob would affirm p whether p was true or false, as Bob is the sort of person who lies to save innocent lives, while Carl would surely believe Alice.

Are cases of type (1) and (2) more or less common than cases of type (3)?

I suppose standard cases where an aggressor at the door is asking whether a prospective victim is in the house may fall under category (1) when the aggressor knows that they are known to be an aggressor and will fall under category (2) when the aggressor doesn’t know that they are known to be an aggressor (Korsgaard discusses this case in a paper on Kant on lying).

On the other hand, category (3) includes some death penalty cases where (a) the life of the accused depends on some true testimony being believed and (b) the testifier is someone likely to think the accused to be innocent independently of the testimony (say, because the accused is a friend). For in such a case, Bob would just give the testimony whether it’s true or false, while Alice would only give it if it were true (or at least she thought it was), and so Bob’s testimony carries no weight while Alice’s does.

Category (3) also includes some cases where an aggressor at the door knows the character of their interlocutor in the house, and knows that they are known to be an aggressor, and where the prospective victim is not in the house, but a search of the house would reveal other prospective victims. For instance, suppose a Gestapo officer is asking whether there are Jews in the house, which there aren’t, but there are Roma refugees in the house. The Gestapo officer may know that Bob would say there aren’t any Jews even if there were, and so he searches the house and finds the Roma if Bob is at the door; but he believes Alice, and doesn’t search, and the Roma survive.

Roughly, the question of whether Alice’s or Bob’s character is better by consequentialist lights comes down to the question of whether it is more useful, with respect to innocent life, to be more believable and always honest (Alice) or to be less believable and able to lie (Bob).

Tuesday, November 27, 2018

Evil, omniscience, and other matters

If God exists, there are many evils that God doesn’t prevent, even though it seems that we would have been obligated to prevent them if we could.

A sceptical theist move is that God knows something about the situations that we don’t. For instance, it may seem to us that the evil is pointless, but God sees it as interwoven with greater goods.

An interesting response to this is that even if we knew about the greater goods, we would be obligated to prevent the evil. Say, Carl sees Alice about to torture Bob, and Carl somehow knows (maybe God told him) that one day Alice will repent of the evil in response to a beautiful offer of forgiveness from Bob. Then I am inclined to think Carl should still prevent Alice from torturing Bob, even if repentance and forgiveness are goods so great that it would have been better for both Alice and Bob if the torture happened.

Here is an interesting sceptical theist response to this response. Normally, we don’t know the future well enough to know that great goods would arise from our permitting an evil. Because of this, our moral obligations to prevent grave evils have a bias in them towards what is causally closer to us. Moreover, this bias in the obligations, although it is explained by the fact that normally we don’t know the future very well, is present even in the exceptional cases where we do know the future sufficiently well, as in the Carl, Alice and Bob case.

This move requires an ethical system where a moral rule that applies in all circumstances can be explained by its usefulness in normal circumstances. Rule utilitarianism is of course such an ethical system. Divine command theory is as well: God can be motivated to issue an exceptionless rule because of the fact that normally the rule is a good one and it might not be good for us to be trying to figure out whether a case at hand is an exception to the rule (this is something I learned from Steve Evans). And St. Thomas Aquinas in his argument against nonmarital sex holds that natural law is also like that (he argues that typically nonmarital sex is bad for the offspring, and concludes that it is wrong even in the exceptional cases where it’s not bad for the offspring, because, as he says, laws are made with regard to the typical case).

Historically, this approach tends to be used to derive or explain deontic prohibitions (e.g., Aquinas’ prohibition on nonmarital sex). But the move from typical beneficiality of a rule to its holding always does not require that the rule be a deontic prohibition. A rule that weights nearer causal consequences more heavily could just as easily be justified in such a way, even if the rule did not amount to a deontic prohibition.

Similarly, one might use typical facts about our relationships with those closer to us—that we know what is good for them better than for strangers, that they are more likely to accept our help, that the material benefits of our help enhance the relationship—to explain why helping those closer to us should be more heavily weighted in our moral calculus than helping strangers, even in those cases where the typical facts do not obtain. Once again, this isn’t a deontic case.

One might even have such typical-case-justified rules in prudential reasoning (perhaps a bias towards the nearer future is not irrational after all) and maybe even in theoretical reasoning (perhaps we shouldn’t be perfect Bayesian agents after all, because that’s not in our nature, given that normally Bayesian reasoning is too hard for us).

Friday, December 1, 2017

Laws of nature and moral rules

There is a lot to be said for the Mill-Ramsey-Lewis (MRL) account of laws as the axioms of a system that optimizes a balance of informativeness and simplicity. But there are really serious problems. The deepest is that the MRL regularities seem to systematize but not explain.

Similarly, there is a lot to be said for rule utilitarianism, but it also suffers from really serious problems. The deepest is probably that the fact that something is beneficial under normal circumstances just does not seem to be a compelling moral reason to do it in circumstances where it is harmful.

The MRL account of laws and rule utilitarianism are similar and a number of the problems facing them are structurally similar. Most deeply, the MRL laws don’t move things physically and rule utilitarian rules don’t move us morally. But there are also structurally similar technical problems, such as the account of simplicity, the way in which simplicity is to be balanced with informativeness or beneficiality, the apparent influence of future facts on present laws or moral truths, etc.

It is interesting that many of the problems of both accounts can be solved by bringing in theism. For instance, one can get a theistic MRL account of laws by saying that laws are the divinely willed axioms of a system that optimizes a divinely defined balance of informativeness and simplicity. And one can get a theistic rule utilitarian account by saying that laws are the divinely commanded rules that optimize a divinely defined balance of beneficiality and simplicity.

(I myself would prefer not to go for something quite so simple on the moral side: I’d prefer to insert our natures to mediate between God and our duties.)

Monday, January 26, 2015

Act and rule utilitarianism

Rule utilitarianism holds that one should act according to those rules, or those usable rules, that if adopted universally would produce the highest utility. Act utilitarianism holds that one should do that act which produces the highest utility. There is an obvious worry that rule utilitarianism collapses into act utilitarianism. After all, wouldn't utility be maximized if everyone adopted the rule of performing that act which produces the highest utility? If so, then the rule utilitarian will have one rule, that of maximizing the utility in each act, and the two theories will be the same.

A standard answer to the collapse worry is either to focus on the fact that some rules are not humanly usable or to distinguish between adopting and following a rule. The rule of maximizing utility is so difficult to follow (both for epistemic reasons and because it's onerous) that even if everyone adopted it, it still wouldn't be universally followed.

Interestingly, though, in cases with infinitely many agents the two theories can differ even if we assume the agents would follow whatever rule they adopted.

Here's such a case. You are one of countably infinitely many agents, numbered 1, 2, 3, ..., and there is one special subject, Jane. (Jane may or may not be among the infinitely many agents—it doesn't matter.) Each of the infinitely many agents has the opportunity to independently decide whether to costlessly press a button. What happens to Jane depends on who, if anyone, presses the button:

  • If a finite number n of people press the button, then Jane gets n+1 units of utility.
  • If an infinite number of people press the button, then Jane gets a little bit of utility from each button press: specifically, she gets 2^(-k)/10 units of utility from person number k, if that person presses the button.

So, if infinitely many people press the button, Jane gets at most (1/2+1/4+1/8+...)/10=1/10 units of utility. If finitely many people press the button, Jane gets at least 1 unit of utility (if that finite number is zero), and possibly quite a lot more. So she's much better off if finitely many people press.
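
As a quick check of these payoffs, here is a rough sketch in Python (the infinite case is truncated at an assumed 1000 agents, which is plenty to approximate the geometric series):

    def utility_finite(n):
        """Jane's utility if exactly n people press, n finite."""
        return n + 1

    def utility_all_press(num_agents=1000):
        """Approximate Jane's utility if everyone (numbered 1, 2, 3, ...) presses."""
        return sum(2 ** (-k) / 10 for k in range(1, num_agents + 1))

    print(utility_finite(0))     # 1 unit even if nobody presses
    print(utility_finite(9))     # 10 units if nine people press
    print(utility_all_press())   # approximately 0.1, i.e. 1/10 of a unit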

Now suppose all of the agents are act utilitarians. Then each reasons:

My decision is independent of all the other decisions. If infinitely many other people press the button, then my pressing the button contributes 2^(-k)/10 units of utility to Jane, where k is my number, and costs nothing, so I should press. If only finitely many other people press the button, then my pressing the button contributes a full unit of utility to Jane and costs nothing, so I should press. In any case, I should press.
And so if everyone follows the rule of doing that individual act that maximizes utility, Jane ends up with one tenth of a unit of utility, an unsatisfactory result.

So from the point of view of act utilitarianism, in this scenario there is a clear answer as to what each person should do, and it's a rather unfortunate answer—it leads to a poor result for Jane.

Now assume rule utilitarianism, and let's suppose that we are dealing with perfect agents who can adopt any rule, no matter how complex, and who would follow any rule, no matter how difficult it is. Despite these stipulations, rule utilitarianism does not recommend that everyone maximize utility in this scenario. For if everyone maximizes utility, only a tenth of a unit is produced, and there are much better rules than that. For instance, the rule that one should press the button if and only if one's number is less than ten will produce ten units of utility if universally adopted and followed (nine presses, hence 9+1 units). And the rule that one should press the button if and only if one's number is less than 10^100 will produce even more utility.
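
A small sketch of how such threshold rules compare (the sample thresholds are arbitrary): under the rule "press the button if and only if your number is less than N", agents 1 through N-1 press, so Jane gets N units, and that grows without bound as N grows.

    def utility_of_threshold_rule(N):
        """Jane's utility if everyone follows: press iff your number is less than N."""
        presses = N - 1        # agents numbered 1, ..., N-1 press; a finite number
        return presses + 1     # finite case: n presses yield n + 1 units

    for N in (10, 100, 10 ** 6):
        print(N, utility_of_threshold_rule(N))   # 10, 100, 1000000: no maximum in sight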

In fact, it's easy to see that in our idealized case, rule utilitarianism fails to yield a verdict as to what we should do, as there is no optimal rule. We want to ensure that only finitely many people press the button, but as long as we keep to that, the more the better. So far from collapsing into the act utilitarian verdict, rule utilitarianism fails to yield a verdict.

A reasonable modification of rule utilitarianism, however, may allow for satisficing in cases where there is no optimal rule. Such a version of rule utilitarianism will presumably tell us that it's permissible to adopt the rule of pressing the button if and only if one's number is less than 10^100. This version of rule utilitarianism also does not collapse into act utilitarianism, since the act utilitarian verdict, namely that one should unconditionally press the button, fails to satisfice, as it yields only 1/10 units of utility.

What about less idealized versions of rule utilitarianism, ones with more realistic assumptions about agents? Interestingly, those versions may collapse into act utilitarianism. Here's why. Given realistic assumptions about agents, we can expect that no matter what rule is given, there is some small independent chance that any given agent will press the button even if the rule says not to, just because the agent has made a mistake or is feeling malicious or has forgotten the rule. No matter how small that chance is, the result is that in any realistic version of the scenario we can expect that infinitely many people will press the button. And given that infinitely many other people will press the button, if only by mistake, the act utilitarian advice to press the button oneself is exactly right.
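
A rough numerical illustration of this point (the one-in-a-thousand error rate is just an assumed figure): among the first N agents, we expect roughly eps * N mistaken presses, and that count grows without bound as N does, so in the infinite population we should expect infinitely many presses.

    import random

    random.seed(0)
    eps = 0.001                      # assumed small, independent chance of pressing by mistake
    for N in (10_000, 100_000, 1_000_000):
        mistaken = sum(random.random() < eps for _ in range(N))
        print(N, mistaken)           # grows roughly like eps * N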

So, interestingly, in our infinitary case the more realistic versions of rule utilitarianism end up giving the same advice as act utilitarianism, while an idealized version ends up failing to yield a verdict, unless supplemented with a permission to satisfice.

But in any case, no version of rule utilitarianism generally collapses into act utilitarianism if such infinitary cases are possible. For there are standard finitary cases where realistic versions of rule utilitarianism fail to collapse, and now we see that there are infinitary ones where idealized versions fail to collapse. And so no version generally collapses, if cases like this are possible.

Of course, the big question here is whether such cases are possible. My Causal Finitism (the view that nothing can have infinitely many items in its causal history) says they're not, and I think oddities such as the one above give further evidence for Causal Finitism.