Monday, September 29, 2014

Two kinds of desire strength

Suppose I am designing a simple vacuuming robot not unlike a Roomba, but a little more intelligent. I might set up the robot to have multiple drives or "desires", including the drive to maintain well-charged batteries and the drive to maintain a clean floor. The robot, then, will use its external and internal sensors to obtain some relevant pieces of information: how much dirt remains on the floor, how low its battery charge is, and how far it is from its charging station. I now imagine that the processor uses the dirt-remaining value to calculate how much it "wants" to continue vacuuming, and the battery-charge reading and the distance from the charging station to calculate how much it "wants" to recharge. These two want-values, together with any others, then go to a decision subroutine, whose specification is as follows:

  1. When one want-value is much greater than the sum of all the others, go for that one.
  2. When (1) does not apply, choose randomly among the wants, with choice probabilities proportional to the want-values.
(Why not simply go for the strongest desire? Maybe because some randomization helps prevent systematic errors, such as areas distant from the charger never getting cleaned.)
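
For concreteness, here is a minimal sketch of what such a decision subroutine might look like, written in Python. The want names, the function name decide, and the dominance_factor threshold used to cash out "much greater" are my own illustrative assumptions, not part of the specification above.

```python
import random

def decide(wants, dominance_factor=2.0):
    """Pick an action from a dict of want-values, e.g. {"clean": 2.2, "recharge": 4.0}.

    Rule 1: if one want-value is much greater than the sum of all the others
    (here taken, purely for illustration, to mean at least dominance_factor
    times that sum), go for that one.
    Rule 2: otherwise, choose randomly, with probabilities proportional to
    the want-values.
    """
    for action, value in wants.items():
        others = sum(v for a, v in wants.items() if a != action)
        if value >= dominance_factor * others:
            return action  # Rule 1: a single dominant want
    # Rule 2: weighted random choice, proportional to the want-values
    actions = list(wants)
    return random.choices(actions, weights=[wants[a] for a in actions], k=1)[0]
```

The weighted choice in Rule 2 is what supplies the randomization just mentioned: even a weaker want occasionally wins, so no region of the house is systematically neglected.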

Suppose now that the robot suffers from a hardware or software failure that, in high-temperature conditions, makes the decision subroutine count the floor-cleaning want at double weight. Thus the robot cleans the floors more when it's hot in the house, even when it is short of battery charge.
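
Continuing the sketch above, the defect might look like this in code. The 30 °C threshold for "hot" is purely illustrative; the doubling of the cleaning want is the failure just described.

```python
def decide_buggy(wants, temperature_c, dominance_factor=2.0):
    """Like decide(), except that in high-temperature conditions the
    floor-cleaning want is counted at double weight."""
    effective = dict(wants)
    if temperature_c > 30 and "clean" in effective:  # illustrative threshold for "hot"
        effective["clean"] *= 2
    return decide(effective, dominance_factor)
```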

Suppose it's a hot day, and the robot's sensor calculations give the floor-cleaning and battery-recharge wants respective values of 2.2 and 4.0. Then in one perfectly intelligible sense the battery-recharge want is almost twice as strong as the floor-cleaning want. But in this state the robot will, more often than not, continue to clean the floor, and in that sense the floor-cleaning want is somewhat stronger than the battery-recharge want.
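
To make the arithmetic behind that claim explicit, using the proportional-choice rule from the specification above:

```python
wants = {"clean": 2.2, "recharge": 4.0}  # nominal want-values on the hot day

# The defect doubles the cleaning want, so the effective values are 4.4 and 4.0.
# Neither value dominates the other, so the proportional rule applies:
p_clean = 4.4 / (4.4 + 4.0)  # ~0.524: the robot keeps cleaning slightly more often than not
```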

We can and should distinguish between the nominal desire strengths, which are 2.2 and 4.0, and the effective desire strengths, which are 4.4 and 4.0, due to the buggy way the decision procedure handles the cleaning want when the temperature is high. We might also, in a more theory-laden way, call the desire strengths as they feed into the decision subroutine the "content strengths" and the desire strengths as they drive the decision the "motivational strengths."

In fact, what I said about nominal and effective strengths can be generalized to nominal and effective desires full stop. After all, we can imagine a bug in the decision procedure where, under some conditions, the memory location holding the cleaning-want value is overwritten with the contents of the memory location holding the present temperature. In positive-temperature conditions, this can result in an effective desire to clean the floors in the complete absence of a nominal desire for that, and in negative-temperature conditions it can create an effective desire not to clean the floors, even though there is a nominal desire to clean them.
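
Here is a crude, purely illustrative sketch of such an overwrite bug, assuming a temperature reading (say, in Celsius) that can go negative; the function name and the particular values are my own.

```python
def effective_wants_with_overwrite_bug(nominal_wants, temperature_c):
    """Hypothetical memory bug: the slot holding the cleaning-want value is
    clobbered by the current temperature reading, so the effective cleaning
    'desire' no longer tracks the nominal one at all."""
    effective = dict(nominal_wants)
    effective["clean"] = temperature_c
    return effective

# No nominal desire to clean, yet a warm room produces an effective one out of nowhere:
effective_wants_with_overwrite_bug({"clean": 0.0, "recharge": 4.0}, 25.0)   # clean -> 25.0
# A sub-zero reading yields a negative effective value: in effect, a desire not to clean:
effective_wants_with_overwrite_bug({"clean": 2.2, "recharge": 4.0}, -5.0)   # clean -> -5.0
```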

Surely our own decisions are subject to a similar distinction. Even if in fact the nominal and effective strengths of our desires are always equal—a very implausible hypothesis, especially in light of the apparent ubiquity of akrasia—the two could come apart.

By definition, one does tend to act on the effective desires and the effective desire strengths. But surely it is the nominal desires and nominal desire strengths that bear more on how one should act by one's own lights. When the two come apart, that is a malfunction, a failure of rationality.

If one wants to connect this post with an earlier one, the distinction I am making here is a distinction between two kinds of degrees of preference on the content side. So if that post is correct, we really have a three-fold distinction: the conscious intensity, the content (or nominal) strength, and the motivational (or effective) strength.

I suspect that when we think through this, some Humean theses about action and morality become much less plausible.
