Showing posts with label morality. Show all posts

Wednesday, July 9, 2025

Acting without knowledge of rightness

Some philosophers think that for your right action to be morally worthy you have to know that the action is right.

On the contrary, there are cases where an action is even more morally worthy when you don’t know it’s right.

  1. Alice is tasked with a dangerous mission to rescue hikers stranded on a mountain. She knows it’s right, and she fulfills the mission.

  2. Bob is tasked with a dangerous mission to rescue hikers stranded on a mountain. He knows it’s right, but then just before he heads out, a clever philosopher gives him a powerful argument that there is no right or wrong. He is not fully convinced, but he has no time to figure out whether the argument works before the mission starts. Instead, he reasons quickly: “Well, there is a 50% chance that the argument is sound and there is no such thing as right and wrong, in which case at least I’m not doing anything wrong by rescuing. But there is a 50% chance that there is such a thing as right and wrong, and if anything is right, it’s rescuing these hikers.” And he fulfills the mission.

Bob’s action is, I think, even more worthy and praiseworthy than Alice’s. For while Alice risks her life for a certainty of doing the right thing, Bob is willing to risk his life in the face of uncertainty. Some people would take the uncertainty as an excuse, but Bob does not.

Friday, May 30, 2025

The value of moral norms

Here is a very odd question that occurred to me: Is it good for there to be moral norms?

Imagine a world just like this one, except that there are no moral norms for its intelligent denizens—but nonetheless they behave as we do. They feel repelled by the idea of murder and torture, and find the life of a Mother Teresa attractive, but there are no moral truths behind these things.

Such a world would have one great advantage over ours: there would be no moral evil. That world’s Hitler and Stalin would cause just as much pain and suffering, but they wouldn’t be wicked in so doing. Given the Socratic insight that it is worse to do than to suffer evil, a vast amount of evil would disappear in such a world. At least a third of the evil in the world would be gone. Our world has three categories of evil:

I. Undergoing of natural evils,

II. Undergoing of moral evils, and

III. Performance of moral evils.

The third category would be gone, and it is probably the biggest of the three. Wouldn’t that be worth it?

Here is one answer. For cooperative intelligent social animals, a belief in morality is very useful. But to live one’s life by a belief that is false seems a significant harm. Cooperative intelligent social animals in the alternative world would be constantly deceived by their belief in morality. That is a great evil. But is it as great an evil as all Category III evils taken together? I suspect it is but a small fraction of the sum of all Category III evils.

Here is a second answer. In removing moral norms, one would admittedly remove a vast category of evils, but also a vast category of goods: the performance of moral good. If we have the intuition that having moral norms is a good thing—that it would be a disappointment to learn that moral norms were an illusion—then we have to think that the performances of moral good are a very great thing indeed, one comparable to the sum of all Category III evils.

I am attracted to a combination of the two answers. But I can also see someone saying: “It doesn’t matter whether it’s worth having moral norms or not, but it is simply impossible to have cooperative intelligent social animals that believe in morality without their being under moral norms.” A Platonist may say that on the grounds that moral norms are necessary. A theist may say it on the grounds that it is contrary to the character of a perfect God to manufacture the vast deceit that would be involved in us thinking there are moral norms if there were no moral norms. These aren’t bad answers. But I still feel it’s good that there really are moral norms.

Wednesday, May 21, 2025

Doxastic moral relativism

Reductive doxastic moral relativism is the view that an action type’s being morally wrong is nothing but an individual or society’s belief that the action type is morally wrong.

But this is viciously circular, since we reduce wrongness to a belief about wrongness. Indeed, it now seems that murder is wrong provided that it is believed that it is believed that it is believed to be wrong, and so on ad infinitum.
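Schematically, writing W(a) for “action type a is wrong” and B(p) for “it is believed that p” (the notation is mine, not the relativist’s), the reduction substitutes into itself without end:

```latex
W(a) \iff B(W(a)) \iff B(B(W(a))) \iff \cdots \iff B^{n}(W(a)) \iff \cdots
```

So wrongness never bottoms out in anything other than further beliefs about beliefs.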

A non-reductive biconditional moral relativism fares better. This is a theory on which (a) there is such a property as moral wrongness and (b) necessarily, an action type has that property if and only if it is believed that it does. Compare this: There is such a property as mass, and necessarily an object has mass if and only if God believes that it has mass.

There is a biconditional-explanatory version. On this theory (a) there is such a property as moral wrongness and (b) necessarily, an action type has that property if and only if, and if so then because, it is believed that it does.

While both the biconditional and biconditional-explanatory versions appear logically coherent, I think they are not particularly plausible. If there really is such a property as moral wrongness, and it does not reduce to our beliefs, then it just does not seem particularly plausible to think that it obtains solely because of our beliefs or that it obtains necessarily if and only if we believe it does. The only clear and non-gerrymandered examples we have of properties that obtain solely because of our beliefs or necessarily if and only if we believe they do are properties that reduce to our beliefs.

All this suggests to me that if one wishes to be a relativist, one should base the relativism on an attitude other than belief.

Monday, February 10, 2025

Autonomy and relativism

Individual relativism may initially seem to do justice to the idea of our autonomy: our moral rules are set by ourselves. But this attractiveness of relativism disappears as soon as we realize that our beliefs are largely not up to us—that, as the saying goes, we catch them like we catch the flu. This seems especially true of our moral beliefs, most of which are inherited from our surrounding culture. Thus, what individual relativism gives to us in terms of autonomy is largely taken away by reflection on our beliefs.

Tuesday, February 4, 2025

Asymmetry between moral and physical excellence

We can use a Mahatma Gandhi or a Mother Teresa as a moral exemplar to figure out what our virtues should be. But we cannot use an Usain Bolt or a Serena Williams as a physical exemplar to figure out what our physical capabilities should be. Why this disanalogy between moral and physical excellence?

It’s our intuition that Bolt and Williams exceed the physical norms for humans to a significant degree. But although Gandhi and Mother Teresa did many supererogatory things, I do not think they overall exceed the moral norms for human character to a significant degree. We should be like them, and our falling short is largely our fault.

Tuesday, January 14, 2025

More on the centrality of morality

I think we can imagine a species whose members have moral agency, but for whom moral agency is a minor part of their flourishing. I assume wolves don’t have moral agency. But now imagine a species of canids that live much like wolves, but every couple of months get to make a very minor moral choice about whether to inconvenience the pack in the slightest way—the rest is instinct. It seems to me that these canids are moral agents, but morality is a relatively minor part of their flourishing. The bulk of the flourishing of these canids would be the same as that of ordinary wolves.

Aristotle argued that the fact that rationality is how we differ from other species tells us that rationality is what is central to our flourishing. The above thought experiment shows that this argument is implausible. Our imaginary canids could, in fact, be the only rational species in the universe, and their moral agency or rationality (with Aristotle and Kant, I am inclined to equate the two) could be the one thing that makes them different from other canids, and yet what is more important to their flourishing would be what they have in common with other canids.

At the same time, it would be easy for an Aristotelian theorist to accommodate my canids. One need only say that the form of a species defines what is central to its flourishing, and that in my canids, unlike in humans, morality is not so central. And one can more or less observe this: rationality just is clearly important to the lives of humans in a way in which it is not so important to the lives of these canids.

In this way, I think, the Aristotelian may have a significant advantage over a Kantian. For a Kantian may have to prioritize rationality in all possible species.

In any case, we should not take it as a defining feature of morality that it is central to our flourishing.

One might wonder how this works in a theistic context. For humans, moral wrongdoing is also sin, an offense against a loving infinite Creator. As I’ve described the canids, they may have no concept of God and sin, and so moral wrongdoing isn’t seen as sin by them. Could you have a species which does have a concept of God and sin, but where morality (and hence sin) isn’t central to flourishing? Or does bringing God in automatically elevate morality to a higher plane? Anselm thought so. He might have been right. If so, then the discomfort that one is liable to feel at the idea of a species of moral agents where morality is not very important could be an inchoate grasp of the connection between God and morality.

The overridingness of morality and Double Effect

You’ve been imprisoned in a cell with a torture robot. The cell is locked by a combination lock, and your estimate is that you will be able to open it in a week. If the torture robot is left running, it will stimulate your pain center, causing horrible pain but no lasting damage, and not slowing down your escaping at all. An infallible oracle reveals to you that if you disable the robot, through a random confluence of events this will affect your character in such a way that in a year you will be 0.1% less patient for the rest of your life than you would otherwise be.

Now, sometimes, a small difference in the degree of a virtue could make a big difference. For instance, perhaps, you will one day be in a position where an extremely arduous task will need to be done to save someone’s life, and you just barely have enough patience for it, so that if you were 0.1% less patient, you wouldn’t do it. You ask the oracle whether something like this will happen if you turn off the robot. The oracle replies: “No, it’s just that you will be 0.1% more annoyed whenever you engage in an arduous task, but that’s never going to push you past any significant threshold—you’re not going to blow up in a big way at your child, or neglect a duty, or anything like that.”

It seems obviously reasonable to disable the robot. Thus, enormous short-term hedonic considerations can win out over tiny long-term virtue considerations. It is thus not the case that considerations of virtue always beat hedonic considerations.

What are we to make, then, of the deep insight—perhaps the most important insight in the history of Western philosophy—about the primacy of morality over other considerations?

Two things. First, moral considerations tend to be much more important than non-moral considerations.

Second, we should never do what is morally wrong, no matter what the price for avoiding it, and no matter how small the wrong. But there is a difference between doing what is morally wrong and doing something morally permissible that makes one less virtuous.

Here is a second case. You and an innocent stranger are in the cell. The robot is set to torture the stranger. The oracle now reveals to you that right after the escape, you will forget the last two weeks of your life, and your life will go the same way whether you disabled the robot or not, with exactly one morally relevant exception: if you have chosen to disable the robot, then one day, feeling peckish and having forgotten your wallet, you will culpably steal a candy bar from a corner store.

It seems obvious that you should disable the robot, despite the fact that doing so leads to your doing a minor moral wrong. The point isn’t that disabling the robot justifies stealing the candy bar—at the time that you steal it, you will have forgotten all about the robot, so there is no justification. The point is that even though you should never do wrong that a good might come of it, nonetheless sometimes for the sake of a great good it is permissible to do something that you know will lead to your later doing something impermissible.

Sometimes theologians have incautiously said things like that the smallest sin outweighs the greatest evil that is not a sin. I think this is incorrect. But what is correct is that you shouldn’t commit the smallest sin for the sake of the greatest good. However, the Principle of Double Effect applies to future sins: you can foresee but not intend that if you perform a certain action—turning off the robot, say—you will commit a future sin.

Monday, August 26, 2024

Assertion, lying, promises and social contract

Suppose you have inherited a heavily-automated house with a DIY voice control system made by an eccentric relative who programmed various functions to be commanded by a variety of political statements, all of which you disagree with.

Thus, to open a living room window you need to say: “A donkey would make a better president than X”, where X is someone who you know would be significantly better at the job than any donkey.

You have a guest at home, and the air is getting very stuffy, and you feel a little nauseous. You utter “A donkey would make a better president than X” just to open a window. Did you lie to your guest? You knowingly said something that you knew would be taken as an assertion by any reasonable person. But, let us suppose, you intended your words solely as a command to the house.

Normally, you’d clarify to your guest, ideally before issuing the voice command, that you’re not making an assertion. And if you failed to clarify, we would likely say that you lied. So simply intending the words to be a command to the house rather than an assertion to the guest may not be enough to make them be that.

Maybe we should say this:

  1. You assert to Y providing (a) you utter words that you know would be taken to be an assertion to Y by a reasonable person and by Y, (b) you intend to utter these words, and (c) you failed to put reasonable effort into finding a way to clarify that you are not asserting to Y.

The conjunctive condition in (a) is a bit surprising, but I think both conjuncts need to be there. Suppose that your guest has the unreasonable belief that people typically program their home automation systems to run on political statements and rarely make political statements except to operate such systems, and hence would not take your words as an assertion. Then you don’t need to issue a clarification, even though you would be deceiving a reasonable person. Similarly, you’re not lying if you tell your home automation system “Please open the window” and your paranoid guest has the unreasonable belief that this is code for some political statement that you know to be false.

One might initially think that (c) should say that you actually failed to issue the clarification. But I think that’s not quite right. Perhaps you are feeling faint and only have strength for one sentence. You tell the home automation system to open the window, and you just don’t have the strength to clarify to your guest that you’re not making a political statement. Then I think you haven’t lied or asserted—you made a reasonable effort by thinking about how you might clarify things, and finding no solution.

It’s interesting that condition (c) is rather morally loaded: it makes reference to reasonable effort.

Here is an interesting consequence of this loading. Similar things have to be said about promising as about asserting.

  2. You promise to Y providing (a) you utter words that you know would be taken to be a promise to Y by a reasonable person and by Y, (b) you intend to utter these words, and (c) you failed to put reasonable effort into finding a way to clarify that you are not promising to Y.

If this is right, then the practice of promising might be dependent on prior moral concepts, namely the concept of reasonable effort. And if that’s right, then contract-based theories of morality are viciously circular: we cannot explain what promises are without making reference to moral concepts.

Thursday, April 4, 2024

Intending the bad as such

Here is a plausible thesis:

  1. You should never intend to produce a bad effect qua bad.

Now, even the most hardnosed deontologist (like me!) will admit that there are minor bads which it is permissible to intentionally produce for instrumental reasons. If a gun is held to your head, and you are told that you will die unless you come up to a stranger and give them a moderate slap with a dead fish, then the slap is the right thing to do. And if the only way for you to survive a bear attack is to wake up your fellow camper who is much more handy with a rifle than you are, and the only way to wake them up is to poke them with a sharp stick, then the poke is the right thing. But these cases are not counterexamples to (1), since while the slap and poke are bad, one is not intending them qua bad.

However, there are more contrived cases where it seems that you should intend to produce a bad effect qua bad. For instance, suppose that you are informed that you will die unless you do something clearly bad to a stranger, but it is left entirely up to you what the bad thing is. Then it seems obvious that the right thing to do is to choose the least bad thing you can think of—the lightest slap with a dead fish, perhaps, that still clearly counts as bad—and do that. But if you do that, then you are intending the bad qua bad.

Yet I find (1) plausible. I feel a pull towards thinking that you shouldn’t set your will on the bad qua bad, no matter what. However, it seems weird to think that it would be right to give a stranger a moderate slap with a dead fish if that was specifically what you were required to do to save your life, but wrong to give them a mild slap if it were left up to you what bad thing to do. So, very cautiously, I am inclined to deny (1) in the case of minor bads.

Tuesday, February 27, 2024

Saving infinitely many lives

Suppose there is an infinitely long line with equally-spaced positions numbered sequentially with the integers. At each position there is a person drowning. All the persons are on par in all relevant respects and equally related to you. Consider first a choice between two actions:

  1. Save people at 0, 2, 4, 6, 8, ... (red circles).

  2. Save people at 1, 3, 5, 7, 9, ... (blue circles).

It seems pretty intuitive that (1) and (2) are morally on par. The non-negative evens and odds are alike!

But now add a third option:

  3. Save people at 2, 4, 6, 8, ... (yellow circles).

The relation between (2) and (3) is exactly the same as the relation between (1) and (2)—after all, there doesn’t seem to be anything special about the point labeled with the zero. So, if (1) and (2) are on par, so are (2) and (3).

But by transitivity of being on par, (1) and (3) are on par. But they’re not! It is better to perform action (1), since that saves all the people that action (3) saves, plus the person at the zero point.

So maybe (1) is after all better than (2), and (2) is better than (3)? But this leads to the following strange thing. We know how much better (1) is than (3): it is better by one person. If (1) is better than (2) and (2) is better than (3), then since the relationships between (1) and (2) and between (2) and (3) are the same, it follows that (1) must be better than (2) by half a person and (2) must be better than (3) by that same amount.

But when you are choosing which people to save, and they’re all on par, and the saving is always certain, how can you get two options that are “half a person” apart?

Very strange.
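The structure of the puzzle can be checked mechanically on a finite window of the line (a sketch of my own; the cutoff N and the shift helper are illustrative devices, not from the post):

```python
# Finite-window sketch of the three rescue options. The infinite line is
# truncated at N purely for illustration.
N = 100

opt1 = set(range(0, N, 2))   # (1): 0, 2, 4, 6, ...
opt2 = set(range(1, N, 2))   # (2): 1, 3, 5, 7, ...
opt3 = set(range(2, N, 2))   # (3): 2, 4, 6, 8, ...

def shift(s):
    """Shift every saved position up by one, staying inside the window."""
    return {n + 1 for n in s if n + 1 < N}

# (2) is (1) shifted by one, and (3) is (2) shifted by one: the relation
# between successive options really is the same.
assert shift(opt1) == opt2
assert shift(opt2) == opt3

# Yet (1) strictly dominates (3): it saves everyone (3) saves, plus the
# person at 0.
assert opt3 < opt1
assert opt1 - opt3 == {0}
```

The two shift checks make precise the claim that (1) stands to (2) exactly as (2) stands to (3), while the last two lines exhibit the one-person gap between (1) and (3).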

In fact, it seems we can get options that are apart by even smaller intervals. Consider:

  4. Save people at 0, 10, 20, 30, 40, ....

  5. Save people at 1, 11, 21, 31, 41, ....

and so on up to:

  14. Save people at 10, 20, 30, 40, ....

Each of options (4)–(14) is related the same way to the next. Option (4) is better than option (14) by exactly one person. So it seems that each of options (4)–(13) is better by a tenth of a person than the next!

I think there is only one at all reasonable way out, and it is to say that in both the (1)–(3) series and the (4)–(14) series, each option is incomparable with the succeeding one, but we have comparability between the start and end of each series.

Maybe, but is the incomparability claim really correct? It still feels like (1) and (2) should be exactly on par. If you had a choice between (1) and (2), and one of the two actions involved a slight benefit to another person—say, a small probability of saving the life of the person at −17—then we should go for the action with that slight benefit. And this makes it implausible that the two are incomparable.

My own present preferred solution is that the various things here seem implausible to us because human morality is not meant for cases with infinitely many beneficiaries. I think this is another piece of evidence for the species-relativity of morality: our morality is grounded in human nature.

Thursday, August 17, 2023

Tiebreakers

You need to lay off Alice or Bob, or else the company goes broke. For private reasons, you dislike Bob and want to see him suffer. What should you do?

The obvious answer is: choose randomly.

But suppose that there is no way to choose randomly. For instance, perhaps an annoying oracle has told you the outcome of any random process that you could have made use of. The oracle says “If you flip the penny in your pocket, it will come up heads”, and now deciding that Alice is laid off on heads is tantamount to deciding that Alice is laid off.

So what should you do?

There seems to be something rationally and maybe morally perverse in one’s treatment of Alice if one fires her to avoid firing the person that one wants to fire.

But it seems that if one fires Bob, one does so in order to see him suffer, and that’s wrong.

I have two solutions, not mutually exclusive.

The first is that various rules of morality and rationality only make sense in certain normal conditions. Typical rules of rationality simply break down if one is in the unhappy circumstance of knowing that one’s ability to reason rationally is so severely impaired that there is no correlation between what seems rational and what is rational. Similarly, if one is brainwashed into having to kill someone, but is left with the freedom to choose the means, then one may end up virtuously beheading an innocent person if beheading is less painful than any other method of murder available, because the moral rules against murder presuppose that one has freedom of will. It could be that some of our moral rules also presuppose an ability to engage in random processes, and when that ability is missing, then the rules are no longer applicable. And since circumstances where random choices are possible are so normal, our moral intuitions are closely tied to these circumstances, and hence no answer to the question of what is the right thing to do is counterintuitive.

The second is that there is a special kind of reason, a tie-breaker reason. When one fires Bob with the fact that one wants to see him suffer serving as a tie-breaker, one is not intending to see him suffer. Perhaps what one is intending, instead, is a conditional: if one of Alice and Bob suffers, it’s Bob.

Tuesday, May 16, 2023

Morality and intention

Some philosophers (Thomson and Rachels, for instance) think that intention does not affect the rightness or wrongness of an act.

This view is quite implausible in the special case of speech acts, where the existence, type and content of a speech act is determined in part by intentions. If I enter a password into a computer by voice, I am not engaging in a speech act, even if I know there is a person near me who may think that I am speaking to them. Whether I am promising or predicting a future action depends in part on my intentions (“If you give me a paper outside of class time, I will lose it” could be a promise when said by a mean professor, but ordinarily is just a prediction). Who “you” refers to depends on the speaker’s intention to address a particular person.

And of course whether a speech act of a particular type and content is being engaged in can be quite relevant to the moral status of what one is doing. For a police officer to assert a racist proposition is wrong, but it need not be wrong for them to quote a racist proposition asserted by a suspect or to enter a racist sentence by voice as a password into a suspect’s computer, and in ambiguous contexts the difference can simply be intention.

One might say that speech acts are not a counterexample to the moral irrelevance of intention thesis because here the intention determines the type of act, and the irrelevance of intention thesis only applies when we fix the type of act:

  1. Two acts of the same type in the same circumstances have the same moral status, even if the intentions behind them are different.

If this is right, then the moral irrelevance of intention thesis is one that typical action theorists who think intention is morally important can agree with. For they think that intention is crucial to determining the type of act—an intentional killing, for instance, being a different kind of act from the causing of a foreseen but unintended death.

Perhaps what the advocates of the irrelevance of intention need to do is to combine the moral irrelevance of intention thesis, for acts of fixed type, with the thesis:

  2. Many acts other than speech acts do not depend on intention for the identification of their type.

It’s hard to criticize such a squishy thesis. But it’s worth noting that most acts that involve interaction with another person have an expressive component, and expressive acts are like speech acts in having intention as a crucial component. One respects, disrespects, regards or disregards other people in typical interactions, and these things depend in part on intention. This is compatible with (2), but it makes the moral irrelevance of intention thesis much less powerful.

Monday, May 8, 2023

Glitches in the moral law?

Human law is a blunt instrument. We often replace the thing that we actually care about by a proxy for it, because it makes the law easier to formulate, follow and/or enforce. Thus, to get a driver’s license, you need to pass a multiple choice test about the rules of the road. Nobody actually cares whether you can pass the test: what we care about is whether you know the rules of the road. But the law requires passing a test, not knowledge.

When a thing is replaced by (sometimes we say “operationalized by”) a proxy in law, sometimes the law can be practically “exploited”, i.e., it is possible to literally follow the law while defeating its purpose. Someone with good test-taking skills might be able to pass a driving rules test with minimal knowledge (I definitely had a feeling like that in regard to the test I took).

A multiple-choice test is not a terrible proxy for knowledge, but not great. Night is a very good proxy for times of significant natural darkness, but eclipses show it’s not a perfect proxy. In both cases, a law based on the proxy can be exploited and will in more or less rare cases have unfortunate consequences.

But whether a law can be practically exploited or not, pretty much any law involving a proxy will have unfortunate or even ridiculous consequences in far-out scenarios. For instance, suppose some jurisdiction defines chronological age as the difference in years between today’s date and the date of birth, and then has some legal right that kicks in at age 18. Then if a six-month-old travels to another stellar system at close to the speed of light, and returns as a toddler, but 18 years have elapsed on earth, they will have the legal rights accruing to an 18-year-old. The difference in years between today’s date and the date of birth is only a proxy for the chronological age, but it is a practically nearly perfect proxy—as long as we don’t have near-light-speed travel.
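To see how badly the proxy can fail, one can put rough numbers on the scenario (a sketch; the particular speed 0.995c and the helper name are illustrative assumptions of mine):

```python
import math

def proper_time(t_earth, v_over_c):
    """Years elapsed for the traveler when t_earth years pass on Earth,
    traveling at v_over_c times the speed of light (special relativity:
    tau = t * sqrt(1 - (v/c)**2))."""
    return t_earth * math.sqrt(1 - v_over_c ** 2)

# 18 years pass on Earth, but the near-light-speed traveler ages far less:
tau = proper_time(18.0, 0.995)  # about 1.8 years
# So a six-month-old who leaves returns as a toddler of roughly two and a
# half, while the calendar-date proxy says they are 18.
```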

If a law involves a proxy that does not match the reality we care about in too common or too easy to engineer circumstances, then that’s a problem. On the other hand, if the mismatch happens only in circumstances that the lawmaker knows for sure won’t actually happen, that’s not an imperfection in the law.

Now suppose that God is the lawmaker. By the above observations, it does not reflect badly on a lawmaker if a law involves a proxy that fails only in circumstances that the lawmaker knows for sure won’t happen. More generally, it does not reflect badly on a lawmaker if a law has unfortunate or ridiculous consequences in cases that the lawmaker knows for sure won’t happen. Our experience with human law suggests that such cases are difficult to avoid without making the law unwieldy. And while there is no great difficulty for God in making an unwieldy law, such a law would be hard for us to follow.

In a context where a law is instituted by God (whether by command, or by desire, or by the choice of a nature for a created person), we thus should not be surprised if the law “glitches” out in far-out scenarios. Such “glitches” are no more an imperfection than it is an imperfection of a helicopter that it can’t fly on the moon. This should put a significant limitation on the use of counterexamples in ethics (and likely epistemology) in contexts where we are allowing for the possibility of a divine institution of normativity (say, divine command or theistic natural law).

One way that this “glitching” can be manifested is this. The moral law does not present itself to us as just a random sequence of rules. Rather, it is an organized body, with more or less vague reasons for the rules. For instance “Do not murder” and “Do not torture” may come under a head of “Human life is sacred.” (Compare how US federal law has “titles” like “Title 17: Copyright” and “Title 52: Voting and Elections”, and presumably there are vague value-laden principles that go with the title, such as promoting progress with copyright and giving voice to people with voting.) In far-out scenarios, the rules may end up conflicting with their reasons. Thus, to many people “Do not murder” would not seem a good way to respect the sacredness of human life in far-out cases where murdering an innocent person is the only way to save the human race from extinction. But suppose that God in instituting the law on murder knew for sure that there would never occur a situation where the only way to save the human race from extinction is murder. Then there would be no imperfection in making the moral law be “Do not murder.” Indeed, this would be arguably a better law than “Do not murder unless the extinction of humanity is at stake”, because the latter law is needlessly complex if the extinction of humanity will never be at stake in a potential murder.

Thus the theistic deontologist faced with the question of whether it would be right to murder if that were the only way to save the human race can say this: The law prohibits murder even in this case. But if this case was going to have a chance of happening, then God would likely have made a different law. Thus, there are two ways of interpreting the counterfactual question of what would happen if we were in this far-out situation. We can either keep fixed the moral law, and say that the murder would be wrong, or we can keep fixed God’s love of human life, and say that in that case God would likely have made a different law and so it wouldn’t be wrong.

We should, thus, avoid counterexamples in ethics that involve situations that we don’t expect to happen, unless our target is an ethical theory (Kantianism?) that can’t make the above move.

But what about counterexamples in ethics that involve rare situations that do not make a big overall difference (unlike the case of the extinction of the human race)? We might think that for the sake of making the moral law more usable by the limited beings governed by it, God could have good reason for making laws that in some situations conflict with the reasons for the laws, as long as these situations are not of great importance to the human species. (The case of murdering to prevent the extinction of the human race would be of great importance even if it were extremely rare!)

If this is right—and I rather wish it isn’t—then the method of counterexamples is even more limited.

Thursday, February 23, 2023

Morality and the gods

In the Meno, we get a solution to the puzzle of why it is that virtue does not seem, as an empirical matter of fact, to be teachable. The solution is that instead of involving knowledge, virtue involves true belief, and true belief is not teachable in the way knowledge is.

The distinction between knowledge and true belief seems to be that knowledge is true opinion made firm by explanatory account (aitias logismoi, 98a).

This may seem to the modern philosophical reader to confuse explanation and justification. It is justification, not explanation, that is needed for knowledge. One can know that sunflowers turn to the sun without anyone knowing why or how they do so. But what Plato seems to be after here is not merely justified true belief, but something like the scientia of the Aristotelians, an explanatorily structured understanding.

But not every area seems like the case of sunflowers. There would be something very odd in a tribe knowing Fermat’s Last Theorem to be true, but without anybody in the tribe, or anybody in contact with the tribe, having anything like an explanation or proof. Mathematical knowledge of non-axiomatic claims typically involves something explanation-like: a derivation from first principles. We can, of course, rely on an expert, but eventually we must come to something proof-like.

I think ethics is in a way similar. There is something very odd about having justified true belief—knowledge in the modern sense—of ethical truths but not knowing why they are true. Yet it seems humans are often in this position: they have correct, and maybe even justified, moral judgments about many things, without knowing why these judgments are true. What explains this?

Socrates’ answer in the Meno is that it is the gods. The gods instill true moral opinion in people (especially the poets).

This is not a bad answer.

Monday, January 30, 2023

Epistemic goods

We think highly morally of teachers who put an enormous effort into getting their students to know and understand the material. Moreover, we think highly of these teachers regardless of whether they are in a discipline, like some branches of engineering, where the knowledge and understanding exists primarily for the sake of non-epistemic goods, or in a discipline, like cosmology, where the knowledge and understanding is primarily aimed at epistemic goods.

The virtues and vices in disseminating epistemic goods are just as much moral virtues and vices as those in disseminating other goods, such as food, shelter, friendship, or play, and there need be little difference in kind. The person who is jealous of another’s knowledge has essentially the same kind of vice as the one who is jealous of another’s physical strength. The person generous with their time in teaching exhibits essentially the same virtue as the one generous with their time in feeding others.

There is, thus, no significant difference in kind between the pursuit of epistemic goods and the norms of the pursuit of other goods. We not infrequently have to weigh one against the other, and it is a mark of the virtuous person that they do this well.

But if this is all correct, then by parallel we should not make a significant distinction in kind between the pursuit of epistemic goods for oneself and the pursuit of non-epistemic goods for oneself. Hence, norms governing the pursuit of knowledge and understanding seem to be just a species of prudential norms.

Does this mean that epistemic norms are just a species of prudential norms?

I don’t think so. Consider that prudentially we also pursue goods of physical health. However, norms of physical health are not a species of prudential norms. It is the medical professional who is the expert on the norms of physical health, not the prudent person as such. Prudential norms apply to voluntary behavior as such, while the norms of physical health apply to the body’s state and function. We might say that norms of the voluntary pursuit of the fulfillment of the norms of physical health are prudential norms, but the norms of physical health themselves are not prudential norms. Similarly, the norms of the voluntary pursuit of the fulfillment of epistemic norms are prudential norms, but the epistemic norms themselves are no more prudential norms than the health norms are.

Monday, December 12, 2022

More on non-moral and moral norms

People often talk of moral norms as overriding. The paradigm kind of case seems to be like this:

  1. You are N-forbidden to ϕ but morally required to ϕ,

where “N” is some norm like that of prudence or etiquette. In this case, the moral requirement of ϕing overrides the N-prohibition on ϕing. Thus, you might be rude to make a point of justice or sacrifice your life for the sake of justice.

But if there are cases like (1), there will surely also be cases where the moral considerations in favor of ϕing do not rise to the level of a requirement, but are sufficient to override the N-prohibition. In those cases, presumably:

  2. You are N-forbidden to ϕ but morally permitted to ϕ.

Cases of supererogation look like that: you are morally permitted to do something contrary to prudential norms, but not required to do so.

So far so good. Moral norms can override non-moral norms in two ways: by creating a moral requirement contrary to the non-moral norms or by creating a moral permission contrary to the non-moral norms.

But now consider this. What happens if the moral considerations are at an even lower level, a level insufficient to override the N-prohibition? (E.g., what if to save someone’s finger you would need to sacrifice your arm?) Then, it seems:

  3. You are N-forbidden to ϕ and not morally permitted to ϕ.

But this would be quite interesting. It would imply that in the absence of sufficient moral considerations in favor of ϕing, an N-prohibition would automatically generate a moral prohibition. But this means that the real normative upshot in all three cases is given by morality, and the N-norms aren’t actually doing any independent normative work. This suggests strongly that on such a picture, we should take the N-norms to be simply a species of moral norms.

However, there is another story possible. Perhaps in the case where the moral considerations are at too low a level to override the N-prohibition, we can still have moral permission to ϕ, but that permission no longer overrides the N-prohibition. On this story, there are two kinds of cases, in both of which we have moral permission, but in one case the moral permission comes along with sufficiently strong moral considerations to override the N-prohibition, while in the other it does not. On this story, moral requirement always overrides non-moral reasons; but whether moral considerations override non-moral considerations depends on the relative strengths of the two sets of considerations.

Still, consider this. The judgment whether moral considerations override the non-moral ones seems to be an eminently moral judgment. It is the person with moral virtue who is best suited to figuring out whether such overriding happens. But what happens if morality says that the moral considerations do not override the N-prohibition? Is that not a case of morality giving its endorsement to the N-prohibition, so that the N-prohibition would rise to the level of a moral prohibition as well? But if so, then that pushes us back to the previous story where it is reasonable to take N-considerations to be subsumed into moral considerations.

I don’t want to say that all norms are moral norms. But it may well be that all norms governing the functioning of the will are moral norms.

Tuesday, November 29, 2022

Nonoverriding morality

Some philosophers think that sometimes norms other than moral norms—e.g., prudential norms or norms of the meaningfulness of life—take precedence over moral norms and make permissible actions that are morally impermissible. Let F-norms be such norms.

A view where F-norms always override moral norms does not seem plausible. In the case of prudential norms or norms of meaningfulness, it would point to a fundamental selfishness in the normative constitution of the human being.

So the view has to be that sometimes F-norms take precedence over moral norms, but not always. There must thus be norms which are neither F-norms nor moral norms that decide whether F-norms or moral norms take precedence. We can call these “overall norms of combination”. And it is crucial to the view that the norms of combination themselves be neither F-norms nor moral norms.

But here is an oddity. Morality already combines F-considerations and first-order paradigmatically moral considerations. Consider two actions:

  1. Sacrifice a slight amount of F-considerations for a great deal of good for one’s children.

  2. Sacrifice an enormous amount of F-considerations for a slight good for one’s children.

Morality says that (1) is obligatory but (2) is permitted. Thus, morality already weighs F and paradigmatically moral concerns and provides a combination verdict. In other words, there already are moral norms of combination. So the view would be that there are moral norms of combination and overall norms of combination, both of which take into account exactly the same first-order considerations, but sometimes come to different conclusions because they weigh the very same first-order considerations differently (e.g., in the case where a moderate amount of F-considerations needs to be sacrificed for a moderate amount of good for one’s children).

This view violates Ockham’s razor: Why would we have moral norms of combination if the overall norms of combination always override them anyway?

Moreover, the view has the following difficulty: It seems that the best way to define a type of norm (prudential, meaningfulness, moral, etc.) is in terms of the types of consideration that the norm is based on. But if the overall norms of combination take into account the very same types of consideration as the moral norms of combination, then this way of distinguishing the types of norms is no longer available.

Maybe there is a view on which the overall ones take into account not the first-order moral and F-considerations, but only the deliverances of the moral and F-norms of combination, but that seems needlessly complex.

Monday, November 14, 2022

The 2018 Belgium vs Brazil World Cup game

In 2018, the Belgians beat the Brazilians 2-1 in the World Cup soccer quarterfinals. There are about 18 times as many Brazilians as Belgians in the world. This raises a number of puzzles in value theory, if for simplicity we ignore everyone in the world but Belgians and Brazilians.

An order of magnitude more people wanted the Brazilians to win, and getting what one wants is good. An order of magnitude more people would have felt significant and appropriate pleasure had the Brazilians won, and an appropriate pleasure is good. And given both wishful thinking as well as reasonable general presumptions about there being more talent available in a larger population base, we can suppose that a lot more people expected the Brazilians to win, and it’s good if what one thinks is the case is in fact the case.

You might think that the good of the many outweighs the good of the few, and Belgians are few. But, clearly, the above facts gave very little moral reason to the Belgian players to lose. One might respond that the above facts gave lots of reason to the Belgians to lose, but these reasons were outweighed by the great value of victory to the Belgian players, or perhaps the significant intrinsic value of playing a sport as well as one can. Maybe, but if so then just multiply both countries’ populations by a factor of ten or a hundred, in which case the difference between the goods (desire satisfaction, pleasure and truth of belief) is equally multiplied, but still makes little or no moral difference to what the Belgian players should do.

Or consider this from the point of view of the Brazilian players. Imagine you are one of them. Should the good of Brazil—around two hundred million people caring about the game—be a crushing weight on your shoulders, imbuing everything you do in practice and in the game with a great significance? No! It’s still “just a game”, even if the value of the good is spread through two hundred million people. It would be weird to think that it is a minor peccadillo for a Belgian to slack off in practice but a grave sin for a Brazilian to do so, because the Brazilian’s slacking hurts an order of magnitude more people.

That said, I do think that the larger population of Brazil imbues the Brazilians’ games and practices with some not insignificant moral weight beyond the Belgians’. It would be odd if the pleasure, desire satisfaction and expectations of so many counted for nothing. But on the other hand, it should make no significant difference to the Belgians whether they are playing Greece or Brazil: the Belgians shouldn’t practice less against the Greeks on the grounds that an order of magnitude fewer people will be saddened when the Greeks lose than when the Brazilians do.

However, these considerations seem to me to depend to some degree on which decisions one is making. If Daniel is on the soccer team and deciding how hard to work, it makes little difference whether he is on the Belgian or Brazilian team. But suppose instead that Daniel has two talents: he could become an excellent nurse or a top soccer player. As a nurse, he would help relieve the suffering of a number of patients. As a soccer player, in addition to the intrinsic goods of the sport, he would contribute to his fellow citizens’ pleasure and desire satisfaction. In this decision, it seems that the number of fellow citizens does matter. The number of people Daniel can help as a nurse is not very dependent on the total population, but the number of people that his soccer skills can delight varies linearly with the total population, and if the latter number is large enough, it seems that it would be quite reasonable for Daniel to opt to be a soccer player. So we could have a case where if Daniel is Belgian he should become a nurse but if Brazilian then a soccer player (unless Brazil has a significantly greater need for nurses than Belgium, that is). But once on the team, it doesn’t seem to matter much.
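The linear-scaling point can be made vivid with a toy calculation. Every number below is invented purely for illustration; only the shape of the comparison matters: the good done as a nurse is roughly independent of population size, while the aggregate pleasure produced by a star player scales linearly with the number of fans.

```python
# All figures are made up for illustration; only the scaling shape matters.
NURSE_GOOD = 5000.0        # good done as a nurse: roughly population-independent
PLEASURE_PER_FAN = 0.0003  # tiny per-capita good of watching a star play

def soccer_good(population):
    # Aggregate good of playing varies linearly with population.
    return PLEASURE_PER_FAN * population

belgium, brazil = 11_500_000, 215_000_000

# On these invented numbers, nursing dominates in Belgium, soccer in Brazil;
# the crossover population is NURSE_GOOD / PLEASURE_PER_FAN.
crossover = NURSE_GOOD / PLEASURE_PER_FAN
```

On any such numbers there is some crossover population below which the nurse option wins and above which the soccer option wins, which is all the argument in the text needs.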

The map from axiology to moral reasons is quite complex, contextual, and heavily agent-centered. The hope of reducing moral reasons to axiology is very slim indeed.

Wednesday, October 12, 2022

Divine permission ethics

There are two ways of thinking about the ethics of consent.

On the first approach, there are complex prohibitions against non-consensual treatment in a number of areas of life, with details varying depending on the area of life (e.g., the prohibitions are even more severe in sexual ethics than in medicine). Thus, this is a picture where we start with a default permission, and layer prohibitions on top of it.

On the second, we start with a default autonomy-based prohibition on one person doing anything that affects another. That, of course, ends up prohibiting pretty much everything. But then we layer exceptions on that. The first is a blanket exception for when the affected person consents in the fullest way. And then we add lots and lots more exceptions, such as when the effect is insignificant, when one has a special right to the action, etc.

The second approach is interesting. Most ethical systems start with a default of permission, and then have prohibitions on top of that. But the second system starts with a default of prohibitions, and then has permissions on top of that.

The second approach raises this question. Given that the default prohibition on other-affecting actions is grounded in autonomy, how could anything but the other’s consent override that prohibition? I think one direction this question points is towards something I’ve never heard explored: divine permission ethics. God’s permission seems our best candidate for what could override an autonomy-based prohibition. So we might get this picture of ethics. There is a default prohibition on all other-affecting actions, followed by two exceptions: when the affected person consents and when God permits.

I still prefer the first approach.

Thursday, October 6, 2022

Having to do what one thinks is very likely wrong

Suppose Alice borrowed some money from Bob and promised to give it back in ten years, and this month it is time to give it back. Alice’s friend Carl is in dire financial need, however, and Alice promised Carl that at the end of the month, she will give him any of her income this month that she hasn’t spent on necessities. Paying a debt is, of course, a necessity.

Now, suppose neither Alice nor Bob remembers how much Alice borrowed. They just remember that it was some amount of money between $300 and $500. Now, obviously in light of her promise to Bob:

  1. It is wrong for Alice to give less to Bob than she borrowed.

But because of her promise to Carl, and because any amount above the owed debt is not a necessity:

  2. It is wrong for Alice to give more to Bob than she borrowed.

And now we have a puzzle. Whatever amount between $300 and $500 Alice gives to Bob, she can be extremely confident that it is either less or more than she borrowed, and in either case she does wrong. Thus whatever Alice does, she is confident she is doing wrong.

What should Alice do? I think it’s intuitive that she should do something like minimize the expected amount of wrong.