Showing posts with label preference. Show all posts

Friday, October 1, 2021

A simple moral preference circle with infinities

Here is a simple moral preferability circle. Suppose there are infinitely many human strangers numbered ..., −3, −2, −1, 0, 1, 2, 3, ..., all of whom, in addition to two cats, are about to drown. Consider these options:

A. Save the strangers numbered 0, 1, 2, ....

B. Save the strangers numbered −1, −2, −3, ... and one cat.

C. Save the strangers numbered 1, 2, 3, ... and both cats.

Option B beats Option A: If we had to choose between strangers 0, 1, 2, ... and strangers −1, −2, −3, ..., we should clearly be indifferent. Toss in the cat, and now it looks like we have a reason to save the second set of strangers.

Option C beats Option B: If we had to choose between strangers −1, −2, −3, ... and strangers 1, 2, 3, ..., we should be indifferent. But now observe that in Option C one more cat is saved, and it sure looks like we should go for C.

Option A beats Option C: Option A replaces the two cats with stranger 0, and surely it’s better to save one human over two cats.
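As a quick sanity check, the cycle can be encoded in a few lines of Python. The comparison rule below is my reconstruction of the three intuitions used above (it is not from the post, and the labels are mine): (i) two infinite sets of strangers are morally on a par; (ii) a tie between stranger sets is broken by the number of cats saved; (iii) saving strictly more humans outweighs saving any number of cats.

```python
# Cats saved by each option (A: none, B: one, C: both).
cats = {"A": 0, "B": 1, "C": 2}

# A's stranger set (0, 1, 2, ...) strictly contains C's (1, 2, 3, ...).
strict_superset = {("A", "C")}

def beats(x, y):
    if (x, y) in strict_superset:   # rule (iii): strictly more humans wins
        return True
    if (y, x) in strict_superset:
        return False
    return cats[x] > cats[y]        # rules (i) + (ii): cats break the tie

# The relation cycles: B > A, C > B, A > C.
assert beats("B", "A") and beats("C", "B") and beats("A", "C")
```

Each comparison is exactly the pairwise argument given above; the assert simply confirms that the three together form a cycle.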

If you don’t think we have moral reasons to save cats, replace saving the cats from drowning with saving two human strangers from ten minutes of pain.

I am now toying with an intuitively very appealing solution to problems like the above: we have no moral rules in such outlandish cases. I think this can be said on either natural law or divine command theory. On natural law, it is unsurprising if our nature does not provide guidance in situations where we are far from our natural environment. On divine command theory, why would God bother giving us commands that apply to situations so far from ones we are going to be in?

Monday, July 10, 2017

Permissibility of the natural

The usual way to argue that an action is permissible is to argue that the arguments against the action’s permissibility fail. But it would be really nice to be able to give a more positive argument for an action’s permissibility. Sometimes one can do so by showing that the action is obligatory, but (a) that doesn’t help with the permissibility of non-obligatory actions, and (b) often an argument for the obligatoriness of a positive action presupposes the action’s permissibility (e.g., the obligation to kill a dog that is attacking one’s child when no other means of defense is available presupposes the general permissibility of killing dogs with good reason).

Here is a place where Natural Law (NL) can provide something quite useful, namely this principle:

  1. If A is a natural action, then normally A is permissible.

This principle could, for instance, be used to generate intuitively compelling positive arguments for such controversial theses as:

  2. It is normally permissible to eat animals.

  3. It is normally permissible for us to reproduce.

  4. It is normally permissible for us to prefer those more closely related to us.

In addition to Natural Lawyers, theists in general might have reason to endorse (1), on the grounds that our nature comes from God.

Of course, there is always going to be a difficulty in determining whether the antecedent of (1) is true.

Non-theistic non-NL theories are unlikely to endorse (1) except as a rule of thumb. And it will be an interesting explanatory question on those theories why then (1) is true even as a rule of thumb.

Thursday, October 27, 2016

Three strengths of desire

Plausibly, having satisfied desires contributes to my well-being and having unsatisfied desires contributes to my ill-being, at least in the case of rational desires. But there are infinitely many things that I’d like to know and only finitely many that I do know, and my desire here is rational. So my desire and knowledge state contributes infinite misery to me. But it does not. So something’s gone wrong.

That’s too quick. Maybe the things that I know are things that I more strongly desire to know than the things that I don’t know, to such a degree that the contribution to my well-being from the finite number of things I know outweighs the contribution to my ill-being from the infinite number of things I don’t know. In my case, I think this objection holds, since I take myself to know the central truths of the Christian faith, and I take that to make me know things that I most want to know: who I am, what I should do, what the point of my life is, etc. And this may well outweigh the infinitely many things that I don’t know.

Yes, but I can tweak the argument. Consider some area of my knowledge. Perhaps my knowledge of noncommutative geometry. There is way more that I don’t know than that I know, and I can’t say that the things that I do know are ones that I desire so much more strongly to know than the ones I don’t know so as to balance them out. But I don’t think I am made more miserable by my desire and knowledge state with respect to noncommutative geometry. If I neither knew anything nor cared to know anything about noncommutative geometry, I wouldn’t be any better off.

Thinking about this suggests there are three different strengths in a desire:

  1. Sp: preferential strength, determined by which things one is inclined to choose over which.

  2. Sh: happiness strength, determined by how happy having the desire fulfilled makes one.

  3. Sm: misery strength, determined by how miserable having the desire unfulfilled makes one.

It is natural to hypothesize that (a) the contribution to well-being is Sh when the desire is fulfilled and −Sm when it is unfulfilled, and (b) in a rational agent, Sp = Sh + Sm. As a result of (b), one can have the same preferential strength, but differently divided between the happiness and misery strengths. For instance, there may be a degree of pain such that the preferential strength of my desire not to have that pain equals the preferential strength of my desire to know whether the Goldbach Conjecture is true. I would be indifferent between avoiding the pain and learning whether the Goldbach Conjecture is true. But they are differently divided: in the pain case Sm >> Sh and in the Goldbach case Sm << Sh.
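The hypothesis can be made concrete with invented numbers (the values below are purely illustrative, not anything argued for here): two desires with the same preferential strength Sp but opposite Sh/Sm splits make very different contributions to well-being when unfulfilled.

```python
# Invented illustrative strengths for two desires with equal Sp = Sh + Sm.
avoid_pain = {"Sh": 1, "Sm": 9}   # pain case: Sm >> Sh
goldbach   = {"Sh": 9, "Sm": 1}   # Goldbach case: Sh >> Sm

def Sp(d):
    return d["Sh"] + d["Sm"]      # hypothesis (b)

def wellbeing(d, fulfilled):
    # hypothesis (a): +Sh if fulfilled, -Sm if unfulfilled
    return d["Sh"] if fulfilled else -d["Sm"]

assert Sp(avoid_pain) == Sp(goldbach) == 10   # same preferential strength
assert wellbeing(avoid_pain, False) == -9     # unavoided pain: large loss
assert wellbeing(goldbach, False) == -1       # unknown Goldbach: small loss
```

On these numbers I would trade either fulfillment for the other, yet only the unfulfilled pain-avoidance desire makes me notably miserable.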

There might be some desires where Sm = 0. In those cases we think “It would be nice…” For instance, I might have a desire that some celebrity be my friend. Here, Sm = 0: I am in no way made miserable by having that desire be unfulfilled, although the desire might have significant preferential strength—there might be significant goods I would be willing to trade for that friendship. On the other hand, when I desire that a colleague be my friend, quite likely Sm >> 0: I would pine if the friendship weren’t there.

(We might think a hedonist has a story about all this: Sh measures how pleasant it is to have the desire fulfilled and Sm measures how painful the unfulfilled desire is. But that story is mistaken. For instance, consider my desire that people not say bad things behind my back in such a way that I never find out. Here, Sm >> 0, but there is no pain in having the desire unfulfilled, since when it’s unfulfilled I don’t know about it.)

Thursday, May 14, 2015

Preference structures had by no possible agent

Say that a preference structure is a total, transitive and reflexive relation (i.e., a total preorder) on centered worlds--i.e., world-agent pairs <w,x>. Then there is a preference structure had by no possible agent. This is in fact just an easy adaptation of the proof of Cantor's Theorem.

Let c be my own centered world <@,Pruss>. We now define a preference structure Q as follows. If agent x at world w, where <w,x> is not the same as <@,Pruss>, prefers her own centered world <w,x> to c, then we say that c is Q-preferable to <w,x>; otherwise, we say that <w,x> is Q-preferable to c. Then we say that all the centered worlds that according to the preceding are Q-preferable to c are Q-equivalent and all the centered worlds we said to be less Q-preferable than c are also Q-equivalent. Thus, Q ranks centered worlds into three classes: those less good than c, those better than c and finally c itself.

But now note that no possible agent has Q as her preference structure. First of all, I at the actual world do not have Q as my preference structure--that's empirically obvious, in that the centered worlds do not fall into three equipreferability classes for me. And if <w,x> is different from <@,Pruss>, then x's preference-order at w (if any) between c and <w,x> differs from what Q says about the order.
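The diagonal step can be checked on a finite analogue. In the sketch below (my own toy model, not from the post), there are finitely many centered worlds, each agent's preference structure is an arbitrary ranking, and the diagonal structure Q is built exactly as above; the assertion verifies that Q disagrees with every agent's structure at a world other than c (the agent at c itself is the case handled "empirically" in the post).

```python
import random
random.seed(0)

N = 6
c = 0  # the distinguished centered world
# Arbitrary total preorders: each structure maps world -> rank (lower = preferred).
structures = [{w: random.randint(0, 3) for w in range(N)} for _ in range(N)]

# Diagonal structure Q, encoded as ranks in three classes:
# if the agent at w prefers w to c, Q puts c above w; otherwise w above c.
Q = {}
for w in range(N):
    if w == c:
        Q[w] = 1
    elif structures[w][w] < structures[w][c]:
        Q[w] = 2   # Q: c preferred to w
    else:
        Q[w] = 0   # Q: w preferred to c

def same_order(r1, r2, a, b):
    sign = lambda r: (r[a] < r[b]) - (r[a] > r[b])
    return sign(r1) == sign(r2)

# Q disagrees with the agent at each w != c on the pair (w, c),
# no matter what the structures are.
assert all(not same_order(Q, structures[w], w, c) for w in range(N) if w != c)
```

Exactly as in Cantor's theorem, the disagreement is built in by construction, so the assertion holds whatever rankings are drawn.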

So what? Well, I think this provides a slight bit of evidence for the idea that agents choose under the guise of the good.

Wednesday, April 24, 2013

An interesting preference structure

Sam invites me to a home-sewn costume party. While I'd love to come, I would much rather not spend the time to sew a costume. Sam offers to do it for me. I know that it would take many hours for him to do it, and I would feel bad having him put this effort in when I could do it myself.

This generates a circular preference structure if we restrict to pairwise comparisons, assuming in each case that the third option is not available:

  • Not coming to the party beats sewing a costume.
  • Sam's sewing a costume for me beats not coming to the party.
  • My sewing a costume for me beats Sam's sewing a costume for me.

But if all three options are available, then I think I am stuck sewing a costume for me. For I just can't let Sam do the work for me simply because it's a lot of trouble for me, assuming I can do the work myself. Initially my choices were between sewing for myself and missing the party, and I preferred missing the party. But Sam's offering of a third option forced me to switch.
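The structure is easy to exhibit directly (the option labels are mine; the final rule about the three-way case is the post's judgment, not derivable from the pairwise relation):

```python
# The three pairwise preferences from the post, as a relation.
beats = {
    ("skip_party", "sew_myself"),   # not coming beats sewing it myself
    ("sam_sews",   "skip_party"),   # Sam's sewing beats not coming
    ("sew_myself", "sam_sews"),     # my sewing beats Sam's sewing
}

def pairwise_choice(a, b):
    return a if (a, b) in beats else b

# With only two options available, pairwise preference decides:
assert pairwise_choice("skip_party", "sew_myself") == "skip_party"

# With all three on the table, every option is beaten by some other,
# so no choice can be read off the pairwise relation alone:
options = ["skip_party", "sew_myself", "sam_sews"]
unbeaten = [o for o in options if not any((p, o) in beats for p in options)]
assert unbeaten == []
```

That the three-way choice lands on sewing the costume myself is extra information about the situation (I can't let Sam do work I could do), not something the cyclic relation itself delivers.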

This kind of thing is a way for Sam to manipulate my behavior if I am a nice guy who doesn't want to put Sam to the trouble. In the case at hand, this means that Sam probably should not make me the offer to sew the costume, since by offering, he brings it about that I will go to the trouble myself. In cases where it is important that I go to the party, this manipulation may be perfectly fine—I used it in an important case several years ago.