Friday, September 24, 2010

Getting below the hood on belief and desire

For the past couple of days I have been thinking hard about a bunch of puzzling but common kinds of mental states:

  1. I think it's more likely that there are fewer rather than more universes.
  2. I don't ever want to have physical pain.
  3. Don Juan wants every woman.
Cases (2) and (3) sound like ordinary propositional desires: I desire that I never have physical pain and Don Juan desires that he have every woman. But they are not ordinary propositional desires as far as I can tell. For, obviously, (2) motivates me to ensure that I take an aspirin when I have a headache. However, taking an aspirin when I have a headache is not an effective means to the end of never having any physical pain, because some physical pain is unavoidable, so even if I prevent this one, the proposition <I never have any physical pain> will still be false. Likewise, (3) motivates Don Juan to seduce Elvira, but seducing Elvira is not an effective means to having every woman, since the end of having every woman is unattainable (and a good thing, that).

Case (1) could be a case of ordinary propositional belief: either a belief about objective or about subjective probabilities. However, I don't mean it this way. I mean (1) to entail that I think it more likely that there are 17 universes than that there are 177.

Mark Murphy suggested to me that in cases like (2) and (3), what we have is a disposition or tendency to form a desire. Thus, (3) says that when Don Juan meets Elvira, he is disposed to form the desire to have Elvira. I think this unduly complicates things, especially in the case of (2). If I am about to undergo a dental procedure, I do not need to posit a new desire to explain my motivation that the dental procedure be painless: (2) is enough. Moreover, a dispositional reading predicts that if I find out I must have a pain in a month, either at noon or at 1 pm (these being the two slots available in the dentist's schedule), I will desire not to have a pain at noon and desire not to have a pain at 1 pm, and these desires will conflict. But it does not appear correct to think of my decision whether to have the pain at noon or at 1 pm as a resolution between two conflicting desires. Rather, it seems correct to say that in regard to (2), the choice of time makes no difference (unless there are additional facts, like that I feel pain more keenly at lunch time). Finally, dispositions can be frustrated. If so, I might have (2) and yet fail to develop a desire not to have a pain at t. Nonetheless, surely, a pain at t would frustrate the mental state reported by (2).

I likewise do not think (1) simply expresses a disposition to assign a credence in a particular way.

Another option is to suppose that (1)-(3) summarize an infinite number of mental states, maybe like this:

  4. For all n, I assign credence 2^-n to the proposition that there are exactly n universes.
  5. For all t, I desire not to have a pain at t.
  6. For all x, if x is a woman, Don Juan wants x.
I think there is something to this suggestion, but it errs in four ways. First, there are only finitely many numbers and times that my mental states directly refer to. Second, there are many women Don Juan has never heard of, for whom he cannot have formed a de re desire. Third, there is a modal problem: if a non-actual woman were to be actual, Don Juan would be motivated by (3) to seduce her, but none of the desires reported by (6) would motivate him to do so. Fourth, (4) is obviously much too precise as compared with (1).
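
Incidentally, read as 2^-n, the credences in (4) are at least jointly coherent: assuming n ranges over the positive integers, the geometric series sums to one,

$$\sum_{n=1}^{\infty} 2^{-n} = \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = 1,$$

so the complaint against (4) is its arbitrary precision, not any probabilistic incoherence.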

Let's start over with a different tack. Consider the List Model of belief and desire. According to the List Model, the mind contains something like a list of propositions, some of which have a credence written beside them and some of which have a utility written beside them. The above considerations show that unless the list is infinite, and maybe even if it is infinite, the List Model is not adequate for modeling all our mental states.
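
To fix ideas, here is a minimal sketch of the List Model in code (Python, with all names my own illustration, not anyone's official formulation): the mind is just a finite table pairing propositions with credences and utilities.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ListModelMind:
    """List Model: a table of propositions, each optionally
    annotated with a credence and/or a utility."""
    credences: Dict[str, float] = field(default_factory=dict)  # proposition -> credence
    utilities: Dict[str, float] = field(default_factory=dict)  # proposition -> utility

    def credence(self, p: str) -> Optional[float]:
        # If p is not on the list, the model has nothing to say about it.
        return self.credences.get(p)

mind = ListModelMind()
mind.credences["There are exactly 17 universes"] = 0.001
mind.utilities["I have a pain at noon today"] = -7.0
# States like (1)-(3) would need infinitely many rows in such a table.
```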

Here is a reason to deny the List Model given a brain view of the mind. Our brains are efficient. They don't need to store a separate desire not to have a pain at t for every different t. Surely, the brain compresses data.

Here is another idea, which I got from Trent Dougherty. Plausibly, our minds keep beliefs in bins. For instance, Trent says he's got a bin of mathematical claims told to him by me. All of these have basically the same credence. Now, I think that when Trent gets some evidence that reduces his belief in my reliability, he does not go through a whole bunch of items on his mental list, erase the credence there, and write in a new one. He simply changes the credence for the bin as a whole.
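
Here is a toy rendering of the binning idea (the data structure is my gloss on Trent's suggestion, not his): each belief points at its bin, the credence lives on the bin, and so a single write re-weights every member at once.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Bin:
    label: str
    credence: float  # one credence shared by every belief in the bin

@dataclass
class BinnedBelief:
    proposition: str
    bin: Bin  # a belief's credence is read off its bin, not stored per item

pruss_math = Bin("mathematical claims Pruss told me", credence=0.95)
beliefs: List[BinnedBelief] = [
    BinnedBelief("claim A", pruss_math),
    BinnedBelief("claim B", pruss_math),
]

# Evidence against Pruss's reliability: a single update, not one per belief.
pruss_math.credence = 0.7
assert all(b.bin.credence == 0.7 for b in beliefs)
```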

The data compression and binning considerations by themselves show that the List Model is inadequate. They show that beliefs are not fundamental. Of course, naturalists already thought that: there are, they thought, non-mental states that constitute beliefs. However, when we combine the compression and binning considerations with cases like (1)-(3), we may be led to the interesting view that beliefs and desires are not only not fundamental simpliciter, but not even mentally fundamental: there are more basic mental states that constitute beliefs and desires. More generally, I suspect that no propositional attitudes are fundamental.

Here is the direction in which I am exploring this suggestion. Introduce two kinds of mental states: doxins and orektins. These encode constraints, respectively, on assignments of credence and utility (or pro-attitude or value) to propositions. But they are not propositional attitudes themselves. One way to represent a doxin or an orektin is with a jussive sentence like "The credences/utilities shall be such that...". They are mental states because they have logical complexity. Claims about credences/utilities then reduce to claims about doxins and orektins.

The simplest kind of doxin or orektin constrains a single proposition to have a particular credence or utility. This doxin or orektin, at least when not contradicted by any other doxin or orektin, then makes it be true that the proposition has that credence or utility (it is not just a tendency to form a credence or utility; the jussive "The utility of p shall be +7" is more like a performative than an imperative).
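
For concreteness, one could type this as follows (a sketch under my own formalization, with propositions crudely represented as strings): a doxin is modeled as a constraint on whole credence functions, not as a (proposition, credence) pair, and the simplest doxin just pins one proposition to one value.

```python
from typing import Callable, Dict

# Crude stand-ins: propositions as strings, assignments as dicts.
CredenceFunction = Dict[str, float]
UtilityFunction = Dict[str, float]

# A doxin/orektin is not itself a propositional attitude: it is a
# constraint on whether a whole assignment is as it "shall be".
Doxin = Callable[[CredenceFunction], bool]
Orektin = Callable[[UtilityFunction], bool]

def pin(p: str, value: float) -> Doxin:
    """The simplest doxin: 'The credence of p shall be value.'"""
    return lambda cr: cr.get(p) == value

simple = pin("I will take an aspirin", 0.9)
assert simple({"I will take an aspirin": 0.9})
assert not simple({"I will take an aspirin": 0.2})
```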

But there are more complex doxins and orektins. For instance, (1)-(3) correspond respectively to:

  7. My doxin: "The credence of <There are n universes> shall be strictly monotone decreasing in n."
  8. My orektin: "The utility of <I have a physical pain at t> shall be negative."
  9. Don Juan's orektin: "The utility of <I have woman x> shall be positive for every woman x."
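
In the same toy notation as above (the indexing of propositions by n and by names is purely my illustration), (7) and (9) become constraints that mention no particular n or woman, which is exactly what lets them cover cases the agent has never considered:

```python
from typing import Dict

# (7): the credence of <There are n universes> shall be strictly
# monotone decreasing in n (given here as a map from n to credence).
def universes_doxin(cr_by_n: Dict[int, float]) -> bool:
    ns = sorted(cr_by_n)
    return all(cr_by_n[a] > cr_by_n[b] for a, b in zip(ns, ns[1:]))

# (9): the utility of <I have woman x> shall be positive for every woman x.
def don_juan_orektin(util_by_woman: Dict[str, float]) -> bool:
    return all(u > 0 for u in util_by_woman.values())

assert universes_doxin({n: 2.0 ** -n for n in range(1, 6)})
assert don_juan_orektin({"Elvira": 5.0, "Anna": 3.0})
```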

There are various neat technical definitions one can make. I have many details to work out. We need a notion of a credence- and/or utility-function fitting with a set of doxins and/or orektins (and fit may come in internal and external varieties). We need the notion of a set of doxins or orektins determining a particular credence or utility value (basically: every function that fits the set of doxins or orektins assigns that value). We can then say that an agent is committed to a credence or utility value for p if some set of doxins or orektins that she has determines it. We can say that she accepts or has the credence or utility value if the determination is obvious enough, maybe. There is tricky stuff here.
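
Here is one toy way the fit and determination definitions could go (again my own sketch; the finite 'candidates' list stands proxy for the space of all credence functions, which a real account would quantify over rather than enumerate):

```python
from typing import Callable, Dict, Iterable, List, Optional

CredenceFunction = Dict[str, float]
Doxin = Callable[[CredenceFunction], bool]

def fits(cr: CredenceFunction, doxins: List[Doxin]) -> bool:
    """A credence function fits a set of doxins iff it satisfies them all."""
    return all(d(cr) for d in doxins)

def determined_value(p: str, doxins: List[Doxin],
                     candidates: Iterable[CredenceFunction]) -> Optional[float]:
    """The doxins determine a credence for p iff every fitting
    candidate agrees on p; return that value, else None."""
    values = {cr[p] for cr in candidates if p in cr and fits(cr, doxins)}
    return values.pop() if len(values) == 1 else None

# Commitment: some set of the agent's doxins determines the value.
doxins = [lambda cr: cr.get("p") == 0.9]
candidates = [{"p": 0.9}, {"p": 0.5}, {"p": 0.9, "q": 0.1}]
assert determined_value("p", doxins, candidates) == 0.9
```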

Three interesting fruits of this. The first is that we have a neat solution to the Pierre problem. Recall that Pierre believes that Londres est belle, while thinking that London is ugly. Well, here is what we say about this: Pierre has two doxins:

  1. "The credence of the proposition expressed by 'Londres est belle' shall be high"
  2. "The credence of the proposition expressed by 'London is beautiful' shall be low".
While the proposition is the same in both cases, the doxins are different. The first doxin (externally) commits Pierre to the proposition that London is beautiful having high credence, while the second (externally) commits him to the proposition having low credence. Similarly, Pierre has two orektins:
  1. "The utility of the proposition expressed by 'Je suis en Londres' shall be high"
  2. "The utility of the proposition expressed by 'I am in London' shall be low.

A second fruit of this may be an account of vague beliefs and desires. Maybe my belief that Jones is bald can be represented by some vague doxin like:

  1. "The credence of Jones has at least n hairs is low for high n and high for low n."

A third fruit is that it may help explain how the Christian can believe everything taught by the Church. For the Christian can have the doxin:

  1. "The credence of anything taught by the Church is high."
And so in some sense he believes even doctrines he hasn't heard.
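
In the toy notation, the quantification over unheard doctrines is what does the work (the high-credence threshold is my own placeholder):

```python
from typing import Dict, Set

def church_doxin(cr: Dict[str, float], taught: Set[str]) -> bool:
    """(15): the credence of anything taught by the Church shall be
    high, whether or not the believer has ever heard it stated."""
    return all(cr.get(p, 0.0) > 0.9 for p in taught)

# A doctrine absent from cr makes the constraint unmet, so the
# believer is committed to a high credence in it all along.
```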

There is a ton of detail work needed.

2 comments:

Alexander R Pruss said...

Hypothesis: Rationality concerns doxins and orektins directly. In particular, questions of justification primarily concern doxins.

Heath White said...

It seems to me that one might have different uses for desires and beliefs. For example, engineering (how is the mind put together?) versus justification/explanation (what made that an intelligible action?). Doxins/orektins might be important for the former, but the List Model might be important for the latter.

Consider by way of analogy how we think of logics. You can think of them as axioms and inference rules (analogous to the doxins) or you can think of them as sets of theorems (analogous to Lists). What you want to do with a logic makes a difference to how you think of it.