Monday, January 2, 2017

Humean views of rationality and the pursuit of money

Consider a Humean package view of rationality where:

  1. The end of practical rationality is desire satisfaction.

  2. All the rational motivational drive in our decisions comes from our desires.

  3. There are no rational imperatives to have desires.

Now suppose that you learn that some costless action will further one or more of your desires, but you have no idea which desire or desires will be furthered by that action. (If we want to have some ideality constraints on which desires make action rational—say, only desires that would survive idealized psychotherapy—then we can suppose that you also know that the desire or desires furthered by the action will satisfy those constraints. I will ignore this wrinkle.)

Any theory of rationality that holds it to be rational to pursue one’s desires should hold it both rational and possible to take that costless action. In the abstract, a case where you know that some desire will be furthered but have no idea which one seems a strange edge case. But actually there is nothing all that strange about this. When money is offered to us, sometimes we have a clear picture of what the money would allow us to do. But sometimes we don’t: we just know that the money will help further some end or other. (Of course, in some people, the pursuit of money may have a non-instrumental dimension, but that’s vicious and surely unnecessary.)

So now let’s go back to the costless action that furthers one or more of your desires and the desire theory of rational motivation. How can this theory accommodate this action?

Option 1: Particular desires. You pick some desire of yours—let’s say, a desire to read a good book—and you think to yourself: “There is a non-zero probability that the action furthers my desire to read a good book.” Then the desire to read a good book, in the usual means-to-end way, motivates you to do the costless action.

That, of course, could work. And in fact, in the case of money we do sometimes proceed by imagining something that we could buy. However, thinking that what motivates one is just the non-zero probability of furthering a particular desire gets things wrong for two reasons. The first is that we could imagine the case being enriched as follows: you learn that the desire that will be furthered by the action is none of the desires that would come to mind if you spent less than a minute thinking about the case, and yet you need to make your decision within a minute. The inability to think of a particular desire that even might be furthered by the action does not affect the rational possibility of taking the costless action.

The second is that this approach gets the strength of motivation wrong. You have many desires, and the desire to read a good book is only one among many. The probability that the desire to read a good book would be furthered by the costless action might well be tiny, especially if you received the further information that it is only one of your desires that is furthered by the action. Such a small probability of a benefit could still motivate you to take a costless action, but it may not work for similar cases where there is a modest cost. For instance, we can suppose you learn that:

  1. The benefit is roughly equal to reading a good book as measured by desire-satisfaction.

  2. The cost is roughly equal to a tenth of the benefit of reading a good book as measured by desire-satisfaction.

  3. You have a hundred desires and the one furthered is but one of them.

Well, then, the action is clearly worth it by (1) and (2). But it’s not worth doing the action simply on a one percent chance that it will lead to reading a good book, since the cost is ten percent of the benefit of reading a good book.
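The arithmetic here can be made explicit with a small sketch (the numbers come from the case above; the “desire-satisfaction” units are of course stipulative):

```python
# Expected-utility arithmetic for the enriched case above.
# Units: benefit of reading a good book = 1.0 desire-satisfaction unit.

benefit = 1.0          # (1): benefit of the unknown satisfied desire
cost = 0.1 * benefit   # (2): cost is a tenth of that benefit
n_desires = 100        # (3): one of a hundred desires is furthered

# If you know only that *some* desire will be furthered, the expected
# net payoff is the full benefit minus the cost:
ev_some_desire = benefit - cost               # 0.9 > 0: clearly worth it

# If motivation must run through one particular desire (say, reading a
# good book), that desire has only a 1/100 chance of being the one:
ev_particular = benefit / n_desires - cost    # 0.01 - 0.1 < 0: not worth it

print(ev_some_desire, ev_particular)
```

So an agent motivated only by a particular desire should decline the action, while the action is clearly worth taking given everything the agent knows.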

One might try to remedy the second problem by mentally going through a larger number of desires so as to increase the probability that some one of the desires will be fulfilled. But we still have the first objection—there may not be enough time to do this—and surely it is implausible that one would have to go through such mental lists of desires in order to get the motivation.

Option 2: A higher-order desire to have satisfied desires. Suppose you have a higher-order desire H to satisfy lower-order desires. Then while you don’t know which lower-order desire is furthered by the action, you do know that this higher-order desire is furthered by it.

This approach seems to lead to an unfortunate double-counting. When you sit down to read a good book, do you really get two benefits, one of reading the book and the other of furthering the higher-order desire to have satisfied lower-order desires? If not, the approach is problematic. But if so, then it gets the rational strength of motivation wrong. For suppose that you are choosing between two actions. Action A will lead to your reading a good book. Action B will lead to the fulfillment of an unknown desire other than reading a good book, a desire you nonetheless know to have the same weight. On the higher-order solution, it seems you have a double motivation for action A, namely H and the desire to read a good book, but only a single motivation for action B, namely H, and hence you should have a twice as strong rational motivation for A. But that’s surely not rational!

Maybe, though, you can get out of the double-counting in some way, by having some story about desire-overlap, so that H and the desire to read a good book don’t add up to a double desire. I suspect that this may undercut the force of the story, by making H not be a real desire.

But there is a second and more serious problem with the story. Suppose that Jim has all the usual lower-order desires but lacks H. If rational motivation comes from desires, then Jim will not be rationally motivated to the action. (Maybe he will have some accidental non-rational motivation for the action.) But surely not going for a costless action that he knows will fulfill some desire of his would be a rational failing, assuming that it’s rational to fulfill one’s desires. Hence there will have to be a rational imperative to have H among one’s desires, contrary to the third part of the Humean package we are exploring.

Now I suppose we could drop the third part of the Humean picture, and hold that rationality requires some desires like H. But I think this makes the rest of the picture less plausible. If rationality requires one to have certain desires, it could just as well require one directly to fulfill certain ends, thereby undercutting the second part of the Humean picture.

Finally, I should note that not all non-Humeans should rejoice at this argument. For similar considerations may apply against some other views. For instance, some Natural Law views that tie motivation very tightly to basic goods may have this problem.


Heath White said...

On any Humean view, we get some desires by reasoning. E.g. I want X, I can only get X by getting Y, so I want Y. That is some kind of inference.

There should be similar kinds of inference from “Money is good for many things I want” to “I want money”. Also from “I want X; I want Y; Z will help me achieve either X or Y, though I don’t know which; so I want Z”.

Maybe not everyone goes through these inferences and so not everyone has the derived desire.

As far as motivation, I think we could say that the rational motivation provided by the derived desires depends on (grounded in, etc.) the motivation provided by the original desires. That should handle the double-counting worry.

Alexander R Pruss said...

But we don't want to have a premise that actually includes a disjunction of all our desires, since that's too long a premise for us practically to think through.
And if instead we just quantify existentially, then the reasoning doesn't actually connect with any desire. Someone who has no desire but mistakenly thinks she does could still reason that way and so one could have practical reasoning in the absence of desires.

Alexander R Pruss said...

The hypothesis that you have no desires is really weird--maybe on Humean grounds you don't then count as an agent. So suppose instead a nearby hypothesis. You have only two desires, for drink and food, but you are honestly mistaken and think you have other desires. An expert then tells you that a costless action will fulfill a desire OTHER than for food or drink. You believe them, so you do the action. This is surely possible, but the resulting action is not motivated by any desire you have, since it is motivated neither by the desire for food nor by the desire for drink, and these are the only desires you have.

Heath White said...

That's a good point. I think I'm persuaded. The resulting picture is that desires can be rationally derived from non-desires.

Alexander R Pruss said...

So, if it's established that the Humean has to accept that some rational motivation is possible without being derived from desire, does she have a good reason to deny that this could happen in the case of moral beliefs?

Heath White said...

I am a little worried about "rational motivation." It is ambiguous between "reason / justification" and "causal oomph". In the latter case, it seems to me the Humean should agree that anything can cause anything. So I think we are asking about the structure of justifications that a Humean should acknowledge.

Well, contrary to my last comment, maybe there is an out. The Humean *could* sort of bite the bullet and say,

"Normal developed people have a higher-order desire to satisfy their desires. If that higher-order desire exists, then there is a desire-involving inference from that desire and 'I have some desire that money will satisfy, though I'm not thinking which right now' to 'I want money'. If for some strange reason a person does not have that higher-order desire, and isn't thinking of a particular desire money will satisfy, there is no reason to want money."

This seems odd but maybe it is consistent.

But if we think it is wrong, does it follow that moral beliefs could play the same function? Well, one way we might get the higher-order desire is by deriving it from a principle of desire-satisfaction, which is what the Humean system embodies: I *know* I have various desires, and I have a principle (I guess) of satisfying them, so I *want* to have satisfied desires. But then it seems that whatever other principles we can believe, maybe we can derive desires from them, too.

If the Humean doesn't want to admit this then I think they have to take the odd line I suggested above.

This is good enough to write up.

Alexander R Pruss said...

There is also a technical problem with formulating the higher-order desire. One can't put it in the standard "x desires that p" format. For how would it look? "x desires that all of x's desires are satisfied"? But a normal person knows that in this life all her desires won't be satisfied no matter what she does. (She *might* think there is an afterlife in which they might be satisfied, but in any case money is unlikely to help one get there.)

This technical problem can be overcome by allowing for non-propositional desires, but it is a cost of the theory that one needs to do that.

Here's a Christmasy way of putting the original problem. You get a wrapped gift from a friend and have no idea what desire of yours it will satisfy. You know that the friend is so thoughtful that the gift will be (a) great and (b) surprising. There is no point trying to figure out which desire of yours it will satisfy--it is very unlikely to be any that you think of. So why bother unwrapping?

Heath White said...

Maybe the technical problem can be overcome by moving the quantifiers around. Not: I have a desire that all my desires be satisfied; but: For each desire that I have, I desire that it be satisfied. I'm not sure about this since I'm not sure you can want the fulfillment of a desire of whose existence you are unaware or of which you are not particularly thinking.

If that will not work, then maybe we can appeal to "partial fulfillment" of a desire. For some desires, partial fulfillment is better than nothing; e.g. if I want a million dollars then half a million is still pretty good. (Other desires won't work this way: if I want a heart transplant, half a transplant is no good.) The desire that all my desires be fulfilled would be along this line: something is better than nothing, at least in the case of the fulfillment of non-derived desires.

Alexander R Pruss said...

If for each desire I have, I desire that it be satisfied, then when I start with n desires, this creates another n desires (the desire that desire 1 be satisfied, the desire that desire 2 be satisfied, etc.). But this just gets back to the original problem: I don't know which two of the 2n desires are satisfied by the action.

Partial fulfillment could work, but it's pretty complicated, because we have to measure the degree of fulfillment. I desire that my children flourish and I desire to eat cake. Having the former satisfied counts for more in respect of partial fulfillment than having the latter satisfied. Yet formally with respect to the desire that all my desires be fulfilled, the two are on par.

IanS said...

In the days of empire, colonists liked to complain that the “natives” were “lazy”. By this they meant that the natives worked until they had enough money to buy what they immediately wanted, then stopped. So maybe Heath White’s “odd line” is not so odd. The natives may have been short-sighted, but they acted rationally to satisfy their desires as far as they could see.

"...especially if you received the further information that it is only one of your desires that is furthered by the action." How could you or anyone else know this, except in contrived cases? I suspect that if you demand consistency with tricky conditions like this, you will be driven, von Neumann-Morgenstern style, to expected utility maximization. Desires would be reduced to utilities on states of the world. This no doubt counts as Humean. The problem is that it is not credible as a model of human action.

A naive approach to the Christmas present: You believe that your friend’s gift will be a pleasant surprise. You like pleasant surprises. So you unwrap it. This may seem to beg the question, but it need not. Maybe your friend’s past gifts were pleasant surprises, so you make the induction “Whenever I have unwrapped his gifts, I received a pleasant surprise. So it’s probably worth unwrapping this gift.” Note that you don’t have to care, or even remember, what his past gifts were, just that they were pleasant surprises. The induction is essentially “unwrap gift, feel good”. Of course, this does not fit an “x desires that p” format, if p is required to be a proposition about the world – the “pleasant surprise” involves a mental state. Maybe this is an argument against a particular formalisation of Humeanism, not against Humeanism itself.