
Monday, October 30, 2023

Types of reasons

There are two ways of drawing a distinction between moral and epistemic reasons:

  1. What kind of value grounds the reasons (epistemic or moral).

  2. What kind of thing the reasons are reasons for (e.g., beliefs vs. actions).

If we take option (1), then there will be epistemic reasons not merely for beliefs, but for actions. Thus, the scientist will have epistemic reasons for doing a particularly informative experiment and the teacher may have epistemic reasons for engaging the students in a certain didactically beneficial group activity—i.e., in both cases, epistemic goods (to self and/or others) justify the action.

I like option (2). Moral reasons are reasons for action, while epistemic reasons are reasons for having a belief or credence or the like.

Here are some reasons for not drawing the distinction between moral and epistemic reasons for action in terms of the kind of value, as in (1).

First, we would morally admire someone who sacrificed a well-paying and easy career option to become a science teacher at an inner city school in order to pass the gift of knowledge to students. In other words, our admiration for someone who at significant personal cost promotes an epistemic value (by otherwise morally upstanding means) is moral.

Second, if we distinguish moral and epistemic reasons for action, consider conflicts. We would have to say that a scientist may have moral reasons to come home on time to feed her hungry children, and epistemic reasons to complete an experiment that cannot be done at another time. But now whether it is right to come home on time or to complete the experiment depends on the details. If the information gained from the experiment is unimportant while the experiment will take hours, and the kids are very hungry, coming home on time is right. But if the children are only very slightly hungry, and the experiment would only protract this hunger by a few minutes, while being extremely illuminating, staying a few minutes may well be the right thing to do.

Right in what way? Well, I think once again the kind of praise that we would bestow on the scientist who balances their epistemic goals and their children’s needs well is moral praise. But then moral praise does not always align with what option (1) classifies as moral reasons for action. For we would not morally praise the scientist who neglects a short but extremely illuminating observation in order to make their children dinner a few minutes earlier. Such a scientist would have an insufficient love of epistemic goods. The scientist who hits the right balance is morally praiseworthy. Yet it is very odd to think that one is morally praiseworthy for subordinating moral reasons to non-moral ones!

If you’re not yet convinced by this case, consider one where the moral and non-moral goods are to the same person. A parent is explaining some very interesting matter of science to a child. The child would rather eat a few minutes earlier. If there really is a moral/epistemic reason distinction in actions, then the parent’s reasons for explaining are epistemic and the reasons for feeding are moral. But it could be morally praiseworthy to finish out the explanation.

Third, there are multiple kinds of non-epistemic good: health, virtue, appreciation, friendship, etc. The heterogeneity among them does not appear to be significantly less than that between all of them taken together and the epistemic goods. It seems that if we are cutting nature at the joints, there is no reason to posit a particularly significant cut between the epistemic and non-epistemic goods. Instead, we should simply suppose that there is a variety of types of good, such as maybe health, virtue, beauty, friendship and understanding (and almost certainly others). All of these are alike in being goods, and different from each other as to the fundamental kind of good. To give the honorific “moral” to all of the ones on this list other than understanding seems quite arbitrary.

On the other hand, the distinction as to the type of thing that the reasons are reasons for does seem quite significant. Reasons for action and reasons for belief are quite different things because we respond, or fail to respond, to them quite differently: by willing and by believing, respectively.

It is interesting to ask: if the will has moral reasons and the intellect has epistemic reasons, are there other faculties that have other kinds of reasons? Maybe. We can think of a reason R for ϕing in a faculty F as something that has a dual role:

  1. it tends to contribute causally to ϕing within F, and

  2. its presence (and causal contribution?) partially grounds ϕing counting as an instance of proper activity of F.

(Thus, reasons are causes-cum-justifiers.)

Are there things like that for faculties F other than will and intellect? Yes! The presence of a certain bacterium or virus may be a reason for the immune system to react in a certain way. Humans thus have moral, epistemic and immune reasons, distinguished respectively by being reasons for the will, the intellect and the immune system. And there are doubtless many more (e.g., I expect there are reasons for all our sensory systems’ identifications of stimuli).

Some of these reasons are tied to specific types of goods. Thus, epistemic reasons are tied to epistemic goods, and immune reasons are tied to health goods. But moral reasons are different, in that action has a universality about it: any type of good—including epistemic and health goods—can ground a moral reason. And both epistemic and moral reasons tend to differ from immune reasons in that immune reasons are not intellectually processed in the normal course of immune functioning, while epistemic and moral reasons are intellectually processed in normal use.

Wednesday, April 12, 2017

Types of normativity

It is widely thought that our actions are governed by multiple types of normativity, including the moral, the prudential and the epistemic, and that each type of normativity comes along with its own store of reasons and its own ought. Moreover, some actions—mental ones—can simultaneously fall under all three types of normativity.

Let’s explore this hypothesis. If we make this distinction between types of normativity, we will presumably say that morality is the realm of other-concerned reasons and prudence is the realm of self-concerned reasons. Suppose that at the cost of an hour of torture, you can save me from a minor inconvenience. Then (a) you have a moral reason to save me from the inconvenience and (b) you have a prudential reason not to save me.

It seems clear that you ought not to save me from the inconvenience. But what is this ought? It isn’t moral, since you have no moral reasons not to save me. Moreover, what explains the existence of this ought seems to be prudential reasons. So it seems to be a prudential ought.

But actually it’s not so clear that this is a prudential ought. For a further part of the explanation of why you ought not save me is that the moral reasons in favor of saving me from a minor inconvenience are so very weak. So this is an ought that is explained by the presence of prudential reasons and the weakness of the opposed moral reasons. That doesn’t sound like an ought belonging to prudential normativity. It seems to be a fourth kind of ought—an overall ought.

But perhaps moving to a fourth kind of ought was too quick. Consider that it would be wrongheaded in this case to say that you morally ought to save me, even though all the relevant moral reasons favor saving me, and even though, if these were all the reasons you had, i.e., if there were no cost to saving me from the inconvenience, it would be the case that you morally ought to save me. (Or so I think. Add background assumptions about our relationship as needed to make it true if you’re not sure.) So whether you morally ought to save me depends on what non-moral reasons you have. So maybe we can say that in the original case the ought really is a prudential ought, even though its existence depends on the weakness of the opposed moral reasons.

This, however, is probably not the way to go. For it leads to a great multiplication of types of ought. Consider a situation where you have moral and prudential reasons in favor of some action A, but epistemic reasons to the contrary. We can suppose that the situation is such that the moral reasons by themselves are insufficient to make it be the case that you ought to perform A, and the prudential reasons by themselves are insufficient, but when combined they become sufficiently strong in contrast with the epistemic reasons to generate an ought. The ought which they generate, then, is neither moral nor prudential. Unless we’ve admitted the overall ought as a fourth kind, it seems we have to say that the moral and prudential reasons generate a moral-and-prudential ought. And then we immediately get two other kinds of ought in other cases: a moral-and-epistemic ought and a prudential-and-epistemic ought. So now we have six types of ought.

And the types multiply. Suppose you learn, by consulting an expert, that an action has no cost and that there are either moral or prudential considerations in favor of the action, but not both. You ought to do the action. But what kind of ought is that? It’s some kind of seventh ought, a disjunctive moral-exclusive-or-prudential kind. Furthermore, there will be graded versions: a mostly-moral-but-slightly-epistemic ought, a slightly-moral-but-mostly-epistemic ought, and so on. And what if this happens? An expert tells you, correctly or not, that she has discovered a fourth kind of reason, beyond the moral, prudential and epistemic, and that some action A has no cost but is overwhelmingly favored by the fourth kind of reason. If you trust the expert, you ought to perform the action. But what is the ought here? Is it an "unknown-type" ought?

It is not plausible to think that oughts divide in any fundamental way into all these many kinds, corresponding to different kinds of normativity.

Rather, it seems, we should just say that there is a single type of ought, an overall ought. If we still want to maintain that there are different kinds of reasons, we should say that there is variation in which kinds of reasons, and in what proportions, explain that overall ought.

But the kinds of reasons are subject to the same line of thought. You learn that some action benefits you or a stranger, but you don’t know which. Is this a moral or a prudential reason to do the action? I suppose one could say: you have a prudential reason to do the action in light of the fact that it has a chance of benefiting you, and a moral reason to do it in light of the fact that it has a chance of benefiting a stranger. But the reason-giving force of the fact that the action benefits you or a stranger is different from the reason-giving force of the facts that it has a chance of benefiting you and a chance of benefiting the stranger.

Here’s a technical example of this. Suppose you have no evidence at all whether the action benefits you or the stranger, but it must be one or the other, to the point that no meaningful probability can be assigned to either hypothesis. (Maybe a dart is thrown at a target, and you are benefited if it hits a saturated non-measurable subset and a stranger is benefited otherwise.) That you have no meaningful probability that the action benefits you is a reason whose prudential reason-giving force is quite unclear. That you have no meaningful probability that the action benefits a stranger is a reason whose moral reason-giving force is quite unclear. But the disjunctive fact, that the action benefits you or the stranger, is a quite clear reason.
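To make the measure-theoretic point explicit (the following is just my gloss on the standard notion of a saturated non-measurable set, not part of the example above): let $(\Omega, \mathcal{L}, \lambda)$ be the target equipped with normalized Lebesgue measure, and let $S \subseteq \Omega$ be saturated non-measurable, so that

\[ \lambda_*(S) = 0 \quad\text{and}\quad \lambda^*(S) = 1, \]

where $\lambda_*$ and $\lambda^*$ are the inner and outer measures. Every measurable subset of $S$, and every measurable subset of $\Omega \setminus S$, then has measure zero, so an extension of $\lambda$ could consistently assign $S$ any value in $[0,1]$. Neither "the dart lands in $S$" (you are benefited) nor its negation gets a meaningful probability, while their disjunction has probability $1$.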

All this makes me think that reasons do not divide into discrete boxes like the moral, the prudential and the epistemic.

Monday, November 7, 2016

The direction of fit for belief

It’s non-instrumentally good for me to believe truly and it’s non-instrumentally bad for me to believe falsely. Suppose, then, that I believe p. Does that give you non-instrumental reason to make p true?

Saying “Yes” is counterintuitive. And it destroys the direction-of-fit asymmetry between beliefs and desires.

But it’s hard to say “No”, given that surely if something is non-instrumentally good for me, you thereby have non-instrumental reason to provide it.

Here is a potential solution. We sometimes have desires that we do not want other people to take into account in their decision-making. For instance, a parent might want a child to become a mathematician, but would nonetheless be committed to having the child decide on their life-direction independently of the parent’s desires. In such a case, the parent’s desire that the child become a mathematician might provide the child with a first-order reason to become a mathematician, but this reason might be largely or completely excluded by the parent’s higher-order commitment. And we can explain why it is good to have such an exclusion: if a parent couldn’t have such an exclusion, she’d either have to exercise great self-control over her desires or have to hide them from her children.

Perhaps we similarly have a blanket higher-order reason that excludes promoting p on the grounds that someone believes p. And we can explain why it is good to have such an exclusion: it decreases the degree of conflict of interest between epistemic and pragmatic reasons. For instance, without such an exclusion, I’d have pragmatic reason to avoid pessimistic conclusions, because as soon as I came to them, I and others would have reason to make them true.

Since this suggests that exclusionary reasons are more common than I previously thought, it weakens some of my omnirationality arguments.

Wednesday, April 14, 2010

Epistemic and moral reasons: An Aristotelian view

I've previously argued that epistemic reasons are a kind of moral reason. But thanks to conversations with a patient colleague, I'm starting to see the plausibility of the standard view that they are different things. The only problem is that this is leading me to the view that epistemic reasons, when distinguishable from moral ones, are only reasons in an analogical sense.

The following argument moves me:

  1. Moral reasons always concern something up to the will.
  2. Epistemic reasons do not always concern something up to the will.
  3. Therefore, some epistemic reasons are not moral reasons.

What was stopping me previously from paying attention to arguments like this was that I had a very hard time seeing how a reason could fail to be a reason for the will. But consider the following statement:

  4. That the muscle received an electrical impulse from a nerve was a reason for the muscle to contract.

This statement seems to make sense. Moreover, "reason" here does not simply mean "cause". One way to see that is that causation is factive: A causes B only if both A and B occur. But (4) is compatible with the muscle not contracting. Rather, (4) states a teleological connection: it was the proper function of the muscle to contract upon receipt of the electrical impulse from a nerve.

So, the suggestion is that epistemic reasons, insofar as they do not concern something up to the will, are simply teleological connections that specify what it is normal to believe or not believe (or take some other attitude towards) in the given circumstances.

At the same time, teleological connections within the human being also give rise to reasons for the will when that which is concerned in the connections is voluntary. Thus, suppose it is normal for human beings to breathe 12 breaths per minute at rest. You're overexcited, but at rest. If you are capable of controlling your breathing rate, the normalcy fact gives you a reason to breathe at 12 breaths per minute, and this is a reason for the will, not just a reason for the lungs. For it is good to function properly, and we always have a reason to will a willable good. Similarly, sometimes to believe or cease to believe a proposition is under partial or complete voluntary control, and in those cases the teleological connections that constitute epistemic reasons give rise to reasons for the will.

Still, there is something weird about talking about reasons for muscles or lungs. These are "reasons" in an analogical sense. And to the extent that epistemic reasons are exactly the same sort of non-voluntary thing, they too are reasons in an analogical sense.

This has the interesting consequence that, just as empirical data about how humans function gives evidence for claims about how humans ought to function (e.g., if we find out that the heart usually pumps blood, this gives us evidence for the claim that the heart should pump blood), so too empirical data about how humans think gives evidence for epistemically normative claims. Of course, this evidence is defeasible in both cases.

Sunday, February 21, 2010

Children and God

This builds on a comment I made to yesterday's post, which was inspired by a remark of my wife's.

If God does not exist, then, in normal cases[note 1], the biological parents of a child are the persons directly and fully responsible for the child's existence. Thus, if God does not exist, then the parents, collectively, have the sort of role that, on traditional Christian views on which God directly creates the human being by creating the human being's soul, God has. But for the human parents to see themselves as having this God-like role distorts the parent-child relationship. There is thus moral reason for parents to believe in a deity who directly creates each human being.

If one thinks—as I think one should—that the fact that one has moral reason to believe p is itself evidence for p (it is much more likely that one has moral reason to believe a truth than to believe a falsehood), it follows that the above considerations not only give a moral reason to believe in such a deity, but an epistemic reason as well.
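In Bayesian terms (a sketch of my own, just unpacking the parenthetical likelihood claim): let $R$ be the proposition that one has moral reason to believe $p$. The parenthetical claim is that $P(R \mid p) > P(R \mid \neg p)$. Then, for $0 < P(p) < 1$, Bayes’ theorem gives

\[ P(p \mid R) = \frac{P(R \mid p)\,P(p)}{P(R \mid p)\,P(p) + P(R \mid \neg p)\,P(\neg p)} > P(p), \]

so learning $R$ raises the probability of $p$: the moral reason to believe is thereby (defeasible) evidence for the truth of what it is a reason to believe.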