Showing posts with label decisions. Show all posts

Monday, January 23, 2017

Prosthetic decision-making

Let’s idealize the decision process into two stages:

  1. Intellectual: Figure out the degrees to which various options promote things that one values (or desires, judges to be valuable, etc.).

  2. Volitive: On the basis of this data, will one option.

On an idealized version of the soft-determinist picture, the volitive stage can be very simple: one wills the option that one figured out in step 1 to best promote what one values. We may need a tie-breaking procedure, but typically that won’t be invoked.
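This idealized soft-determinist volitive stage is mechanical enough to sketch in code. The sketch below is a toy illustration, not anyone's actual psychology; the option names, scores, and the random tie-breaking procedure are all invented:

```python
import random

def volitive_stage(value_scores, rng=random.Random(0)):
    """Idealized soft-determinist volition: will the option that best
    promotes what one values; break exact ties by an arbitrary procedure."""
    best = max(value_scores.values())
    tied = [opt for opt, score in value_scores.items() if score == best]
    return tied[0] if len(tied) == 1 else rng.choice(tied)

# The intellectual stage has already scored the options (made-up numbers):
scores = {"keep promise": 0.9, "break promise": 0.2}
print(volitive_stage(scores))  # -> keep promise
```

The point of the sketch is how little happens here: all the interesting work went into producing `value_scores`.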

On a libertarian picture, the volitive stage is where all the deep stuff happens. The intellect has delivered its judgment, but now the will must choose. On the best version of the libertarian picture, typically the intellect’s judgment includes a multiplicity of incommensurable options, rather than a single option that best promotes what one values.

On the (idealized) soft-determinist picture, it seems one could replace the mental structures (“the volitive faculty”) that implement the volitive stage by a prosthetic device (say, a brain implant) that follows the simple procedure without too much loss to the person. The actions of a person with a prosthetic volitive faculty would be determined by her values in much the same way as they are in a person with a normal volitive faculty. What is important is the generation of input to the volitive stage—the volitive stage is completely straightforward (except when there are ties).

On the libertarian picture, however, replacing the volitive faculty by a prosthesis would utterly destroy one as a responsible agent. For it is here, in the volition, that all the action happens.

What about replacing the intellectual faculty by a prosthesis? Well, since the point of the intellectual stage is to figure out something, it seems that the point of the intellectual stage would be respected if one replaced it by an automated process that is at least as accurate as the actual process. Something else would be lost, but the main point would remain. (Compare: Something would be lost if one replaced a limb by a prosthetic that functioned as well as the limb, but the main point would remain.)

So, now, we can imagine replacing both faculties by prostheses. There is definite loss to the agent, but on the soft-determinist picture, there isn’t a loss of what is central to the agent. On the libertarian picture, there is a loss of what is central to the agent as soon as the volitive faculty is replaced by a prosthesis.

The upshot of this is this: On the soft-determinist picture, making decisions isn’t what is central to one as an agent. Rather, it is the formation of values and desires that is central, a formation that (in idealized cases) precedes the decision process. On the libertarian picture, making decisions—and especially the volitive stage of this process—is central to one as an agent.

Monday, October 24, 2016

Two senses of "decide"

Suppose:

  1. Alice sacrifices her life to protect her innocent comrades.

  2. Bob decides that if he ever has the opportunity to sacrifice his life to protect his innocent comrades, he’ll do it.

We praise Alice. But as for Bob, while we commend his moral judgment, we think that he is not yet in the crucible of character. Bob’s resolve has not yet been tested. And it’s not just that it hasn’t been tested. Alice’s decision not only reveals but also constitutes her as a courageous individual. Bob’s decision falls short both in the revealing and in the constituting department (it’s not his fault, of course, that the opportunity hasn’t come up).

Now compare Alice and Bob to Carl:

  1. Carl knows that tomorrow he’ll have the opportunity to sacrifice his life to protect his innocent comrades, and he decides he will make the sacrifice.

Carl is more like Bob than like Alice. It’s true that Carl’s decision is unconditional while Bob’s is conditional. But even though Carl’s decision is unconditional, it’s not final. Carl knows (at least on the most obvious way of spelling out the story) that he will have another opportunity to decide come tomorrow, just as Bob will still have to make a final decision once the opportunity comes up.

I am not sure how much Bob and Carl actually count as deciding. They are figuring out what would or will (respectively) be the thing to do. They are making a prediction (hypothetical or future-oriented) about their action. They may even be trying by an act of will to form their character so as to determine that they would or will make the sacrifice. But if they know how human beings function, they know that their attempt is very unlikely to be successful: they would or will still have a real choice to make. And in the end it probably wouldn’t surprise us too much if, put to the test, Bob and Carl failed to make the sacrifice.

Alice did something decisive. Bob and Carl have yet to do so. There is an important sense in which only Alice decided to sacrifice her life.

The above were cases of laudable action. But what about the negative side? We could suppose that David steals from his employer; Erin decides that she will steal if she has the opportunity; and Frank knows he’ll have the opportunity to steal and decides he’ll take it.

I think we’ll blame Erin and Frank much more than we’ll praise Bob and Carl (this is an empirical prediction—feel free to test it). But I think that’s wrong. Erin and Frank haven’t yet gone into the relevant crucible of character, just as Bob and Carl haven’t. Bob and Carl may be praiseworthy for their present state; Erin and Frank may be blameworthy for theirs. But the praise and the blame shouldn’t go quite as far as in the case of Alice and David, respectively. (Of course, any one of the six people might for some other reason, say ignorance, fail to be blameworthy or praiseworthy.)

This is closely connected to my previous post.

Friday, August 21, 2015

Intra- and inter-choice comparisons of value

Start with this thought:

  1. If I have on-balance stronger reasons to do A than to do B, and I am choosing between A and B, then it is better that I do A than that I do B.
But notice that the following is false:
  2. If in decision X, I choose A over C, and in decision Y, I choose B over D, and I had on-balance stronger reasons to do A than I did to do B, then decision X was better.
To see that (2) is false, suppose that in decision X, you are choosing between your friend's life and your convenience, while in decision Y, you are choosing between your friend's life and my own life. Your reasons to choose your friend's life over your convenience are much stronger (indeed, they typically give rise to a duty) than your reasons to choose your friend's life over your own life. Nonetheless, to save your friend's life at the cost of your own life is a better thing than to save your friend's life at the cost of your own convenience.

There is a whiff of paradoxicality here. But it's just a whiff. If you chose your convenience over your friend's life you'd be a terrible person. So in a case like that described in (2), choosing B (e.g., your friend's life over your life) is a better thing than choosing A (e.g., your friend's life over your convenience), while choosing C (e.g., your convenience) is worse than choosing D (e.g., your own life).

In other words, when you choose A over B, the on-balance strength of reasons for A doesn't correlate--even typically--with the value of your deciding for A. Rather, the on-balance strength of reasons for A correlates (at least roughly and typically) with the value of your deciding for A minus the value of your deciding for B. This is quite clear.
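To see this with toy numbers (all invented for illustration), let the value of a decision be the value of the option chosen, and let the on-balance strength of reasons track the difference in value between the options:

```python
# Toy values (invented) for the two decisions in the text.
# Decision X: friend's life (A) vs. your convenience (C).
# Decision Y: friend's life (B) vs. your own life (D).
value = {
    "A": 100,    # save your friend at the cost of mere convenience
    "C": -1000,  # let your friend die for convenience: terrible
    "B": 150,    # save your friend at the cost of your own life: heroic
    "D": 0,      # save yourself instead: permissible, not bad
}

def reason_strength(chosen, alternative):
    # Strength of on-balance reasons tracks the value of choosing one
    # option *minus* the value of choosing the other.
    return value[chosen] - value[alternative]

print(reason_strength("A", "C"))  # 1100: much stronger reasons in decision X
print(reason_strength("B", "D"))  # 150: weaker reasons in decision Y
print(value["B"] > value["A"])    # True: yet choosing B is the better deed
```

The reasons in decision X are far stronger, yet the value of the choice made in decision Y is higher, just as the text claims.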

This helps to resolve the paradox of why it is that doing the supererogatory is better than doing the obligatory, even though in a case where an option is obligatory the reasons are stronger than the reasons for supererogation. For omitting the supererogatory is much less bad than omitting the obligatory.

We may even be able to use some of the above to make some progress on the Kantian paradox that a good action by a person with a neutral character is better than a good action by a person with a good character. Observe that it is worse for a good person to do something bad than for a neutral person to do the same thing, since the good person does two bad things: she does the bad thing in itself and she fights her good personality. Thus, even though the good person has more on-balance reason to do the good thing, her action is not thereby guaranteed to have greater value than the good action of the neutral person, because the strength of reasons correlates not with the value of the action but with the value of the action minus the value of the alternative.

Tuesday, July 17, 2012

Why does brainwashing take away responsibility?

Everybody agrees that brainwashing can remove responsibility for the resulting actions. But how does it do that?

In some cases, brainwashing removes decisions--you just act as an automaton without making any decisions. Bracket those cases of brainwashing as not to my purpose. The cases of interest are ones where decisions are still made, but they are made inevitable by the complex of beliefs, desires, habits, values, etc.--the character, for short--implanted by the brainwasher. Of these cases, some will still not be useful for my purposes, namely those where the implanted character is so distorted that decisions coming from the character are not responsible simply by reason of insanity.

The interesting case, for discussion of compatibilism, is where the character is the sort of character that could also result from an ordinary life, and if it resulted from that ordinary life, decisions flowing from that character would be ones that the agent is responsible for.

So now our question is: Why is it that when this character results from the brainwasher's activity, the agent is not responsible for the decisions flowing from it, even though if the character were to have developed naturally, the agent would have been responsible?

I want to propose a simple explanation: In the paradigmatic case when the character (or, more precisely, its relevant features) results from the brainwasher's activity, the agent is not responsible for the character (that this is true is uncontroversial; but my point is not just that this is true, but that it is the answer to the question). Decisions that inevitably flow from a character that one is not responsible for, in external circumstances that we may also suppose one is not responsible for, are decisions that one is not responsible for. When the character results from an ordinary life, one is responsible for the character. But when the character results from brainwashing, typically one is not (the case where one freely volunteered to be brainwashed in this way is a nice test case--in that case, one does have at least some responsibility).

But now we see, just as in yesterday's post, that incompatibilism follows. For what makes us responsible for a character or circumstances are decisions that we are responsible for and that lead in an appropriate way to having that character. If we are only responsible for a decision that inevitably flows from a character in some external circumstances when we are responsible for the character or at least for the external circumstances, then the first responsible decision we make cannot be one that is made inevitable by character and external circumstance.

The way to challenge this argument is to offer alternate explanations of why it is that when character comes from brainwashing one is not responsible for actions that inevitably flow from that character given the external circumstances. My proposal was that the answer is that one isn't responsible for the character in that case. An alternate proposal is that it is the inevitability that takes away responsibility. This alternative certainly cannot be accepted by the compatibilist.

Friday, May 13, 2011

The Miner Puzzle

Kolodny and MacFarlane give a neat puzzle.

The setup: ten miners are trapped in a shaft—A or B, although we do not know which—and threatened by rising waters. We can block one shaft or neither, but not both. If we block the correct shaft, everyone lives. If we block the wrong shaft, everyone dies. If we do nothing, only one miner dies. (Charlow)
The puzzle is that the following seem to be true:
  1. The miners are in A or the miners are in B.
  2. If miners are in A, we should block A.
  3. If miners are in B, we should block B.
  4. It is not the case that we should block A.
  5. It is not the case that we should block B.
But assuming modus ponens for (2) and (3), the above claims are contradictory.

Some propose dropping modus ponens. But there is a much better solution. Claims (2)-(5) incompletely identify the relevant action types. Action types should be identified, in part, by the reasons and intentions for them. Should Jones insert a knife into Smith's heart? The question insufficiently specifies the act. Inserting a knife into Smith's heart could be life-saving cardiac surgery or murder. The intentions and reasons matter. To decide what should be done, we need to expand the action descriptions. Here are some possible expanded descriptions:

  6. block A because this has probability 1/2 of killing the miners in B.
  7. block A because this has probability 1/2 of saving the miners in A.
  8. block B because this has probability 1/2 of killing the miners in A.
  9. block B because this has probability 1/2 of saving the miners in B.
  10. block neither because that will save nine.
  11. block neither because that will kill one.
  12. block A because that will save ten.
  13. block B because that will save ten.
I am a reasons-externalist and I take "because" to be factive. Reasons-internalists will want to replace the "because" claims in (6)-(13) with "because you think".

The description "block A" is ambiguous between actions (6), (7) and (12). Once we disambiguate as above, we can say:

  14. You shouldn't do (6) or (8).
  15. You shouldn't do (7) or (9) if you can do (10) or (12) or (13).
  16. If you can do (12), you should do (12).
  17. If you can do (13), you should do (13).
  18. If you can't do (12) or (13), you should do (10).
But what can you do? If you believe the miners are in A, you can do (12). In that case, you should do (12). If you believe the miners are in B, you can do (13), and so you should. If you have no idea where the miners are, you can't do (12), because it is not possible for you to act because of a reason that isn't available to you. For the same reason, you can't do (13), and so you should do (10).
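The availability reasoning in this paragraph can be summed up as a small decision procedure. This is only an illustrative sketch, with the action labels taken from the numbered list above:

```python
def available_and_required(belief):
    """Given where one believes the miners are ('A', 'B', or None for
    no idea), return the elaborated action that is both available and
    required. A 'because'-reason is available only to someone who
    believes its premise."""
    if belief == "A":
        return "(12): block A because that will save ten"
    if belief == "B":
        return "(13): block B because that will save ten"
    # Neither factive reason is available, so one should do (10):
    return "(10): block neither because that will save nine"

print(available_and_required(None))  # -> (10): block neither because that will save nine
```

The case the paradox trades on is the last branch: with no idea where the miners are, (12) and (13) are simply not doable.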

Can we affirm any conditionals such as (2) or (3)? Not if "should" implies "can". For presumably the way to expand out the "should block" in (2) is not along the lines of (7) but along the lines of (12). And if "should" implies "can", then it is false that if the miners are in A, you should (block A because that will save ten), since you cannot in this case block A because that will save ten, as you are unable to act on that reason.

But suppose you deny that "should" implies "can". Then you can consistently say that:

  19. If the miners are in A, you should (block A because that will save ten),
even though the action in the consequent is impossible for you. And then by (18), you can say:
  20. Even if the miners are in A, you should (block neither because that will save nine),
since although the action in (19) would be the better one, it is not possible for you. And you are not culpable for failing to do the better action because you have an excellent exculpating excuse: you can't do it.

So we have different stories to tell depending on whether "should" implies "can", but they do not practically differ. Both stories agree that in the event that the miners are in A, you should block neither. The second version of the story also says that you should do something else, thereby placing you in a dilemma, but since that something else is impossible, you have a perfectly fine excuse for acting as you do.

But in any case, there is no real paradox.

So where do I stand with regard to (1)-(5)? Well, we need some rigorous disambiguation of the "should block". Here is one proposal. The statement "x should A" has the truth conditions "There is a relevant elaboration A* of A such that x should A*", where an elaboration of an action type is a narrower action type. If "should" implies "can", then (2) is false, because the only relevant elaboration of "block A" on which the consequent of (2) would be true is (12), and (12) is not doable in the situation as described; the same goes for (3). And likewise (4) and (5) are both true, because there is no elaboration of "block A" or of "block B" that we should do.

If "should" does not imply "can", then (2) and (3) are true. But by the same token one of (4) and (5)—the one corresponding to where the miners are—is false.

Moreover, in either case, we can add:

  21. You should block neither A nor B.
This may seem to contradict the claim, made on the view that "should" does not imply "can", that one of (4) and (5) is false. But "You should block A" is compatible with "You should block neither A nor B", at least if "should" does not imply "can".

I think Kolodny and MacFarlane would classify my answer as a subjectivist one, since I deny (2) and (3). Their main argument against the subjectivist is this scenario. Suppose the miners are in fact in shaft A. Then we can imagine this dialog. You say you should leave both shafts open because that will save nine. An adviser says: "No, you ought to block shaft A. Doing so will save all ten of the miners." The adviser is disagreeing with you. But how could she be disagreeing with you if your claim that you should leave both shafts open is true?

But on my above story, there is a straightforward way in which the adviser is wrong. How could one elaborate "block shaft A"? If the adviser is suggesting (6) or (7) as the action, then the adviser is giving poor advice, since (6) is wicked, and you shouldn't do (7) when you can do (10), so the adviser would be wrong to advise (7). The only thing the adviser could be reasonably advising would be (12). But unless you believe that the miners are in shaft A, action type (12) is not available to you. So the adviser is advising you to do something you can't do. I suppose there are occasions for remarks such as: "Well the thing you should do is to pay back the money you stole right away. It's really unfortunate that you gambled it all away." But such remarks are unhelpful and are not really advice (except maybe for future occasions). Moreover, in such cases the adviser should not be said to disagree with the claim that the agent should do the best of the courses of action that are in fact open to her.

Monday, July 6, 2009

Pascal's wager and infinity

(Cross-posted to prosblogion).

Some people, I think, are still under the impression that the infinities in Pascal's wager create trouble. Thus, there is the argument that even if you don't believe now, you might come to believe later, and hence the expected payoff for not believing now is also infinite (discounting hell), just like the payoff for believing now. Or there is the argument that you might believe now and end up in hell, so the payoff for believing now is undefined: infinity minus infinity.

But there are mathematically rigorous ways of modeling these infinities, such as Non-Standard Analysis (NSA) or Conway's surreal numbers. The basic idea is that we extend the field of real numbers to a larger ordered field with all of the same arithmetical operations, where the larger field contains numbers that are bigger than any standard real number (positive infinity), numbers that are bigger than zero and smaller than any positive standard real number (positive infinitesimals), etc. One works with the larger field by exactly the same rules as one works with reals. This is all perfectly rigorous.

Let's do an example of how it works. Suppose I am choosing between Christianity, Islam and Atheism. Let C, I and A be the claims that the respective view is true. Let's simplify by supposing I have three options: BC (believe and practice Christianity), BI (believe and practice Islam) and NR (no religious belief or practice).

Now I think about the payoff matrix. It's going to be something like this, where the columns depend on what is true and the rows on what I do:

        C            I            A
  BC    0.9X-0.1Y    0.7X-0.3Y    -a
  BI    0.6X-0.4Y    0.9X-0.1Y    -b
  NR    0.4X-0.6Y    0.4X-0.6Y     c
Here, X is the payoff of heaven and -Y is the payoff of hell, and X and Y are positive infinities. I assume that the Christian and Islamic heavens are equally nice, and that the Christian and Islamic hells are equally unpleasant. The lowercase letters a, b and c indicate finite positive numbers. How did I come up with the table? Well, I made it up. But not completely arbitrarily. For instance, BC/C (I will use that symbolism to indicate the value in the C column of the BC row) is 0.9X-0.1Y. I was thinking: if Christianity is true, and you believe and practice it, there is a 90% chance you'll go to heaven and a 10% chance you'll go to hell. On the other hand, BC/I is 0.7X-0.3Y, because Islam expressly accepts the possibility of salvation for Christians (at least as long as they're not ex-Muslims, I think), but presumably the likelihood is lower than for a Muslim. BI/C is 0.6X-0.4Y, because while there are well developed Christian theological views on which a Muslim can be saved, these views are probably not an integral part of the tradition, so the BI/C expected payoff is lower than the BC/I one. The C and I columns of the table should also include some finite summands, but those aren't going to matter. A lot of the numbers can be tweaked in various ways, and I've taken somewhat more "liberal" (in the etymological sense) numbers--thus, some might say that the payoff of NR/C is 0.1X-0.9Y, etc.

What should one do, now? Well, it all depends on the epistemic probabilities of C, I and A. Let's suppose that they are: 0.1, 0.1 and 0.8, and calculate the payoffs of the three actions.

The expected payoff of BC is EBC = 0.1 (0.9X - 0.1Y) + 0.1 (0.7X - 0.3Y) + 0.8 (-a) = 0.16X - 0.04Y - 0.8a.

The expected payoff of BI is EBI = 0.15X - 0.05Y - 0.8b.

The expected payoff of NR is ENR = 0.08X - 0.12Y + 0.8c.

Now, let's compare these. EBC - EBI = 0.01X + 0.01Y + 0.8(b-a). Since X and Y are positive infinities, and b and a are finite, EBC - EBI > 0. So, EBC > EBI. EBI - ENR = 0.07X + 0.07Y - 0.8(b+c). Again, since X and Y are infinite and b and c are finite, EBI - ENR > 0 and so EBI > ENR. Just to be sure, we can also check EBC - ENR = 0.08X + 0.08Y - 0.8(a+c) > 0, so EBC > ENR.

Therefore, our rank ordering is: EBC > EBI > ENR. It's most prudent to become Christian, less prudent to become a Muslim and less prudent yet to have no religion. There are infinities all over the place in the calculations, but we can rigorously compare them.
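These comparisons can be mechanized. Here is a minimal sketch in which each payoff is represented by its coefficient triple (p, q, r) in p·X + q·Y + r; the comparison rule and the placeholder finite values a, b, c are my own simplifications for illustration, not a full development of NSA or surreal arithmetic:

```python
# Each value is p*X + q*Y + r, with X, Y positive infinities, r finite.
def greater(u, v, eps=1e-12):
    """Is p1*X + q1*Y + r1 > p2*X + q2*Y + r2?"""
    dp, dq, dr = u[0] - v[0], u[1] - v[1], u[2] - v[2]
    if abs(dp) < eps and abs(dq) < eps:
        return dr > 0   # infinite parts cancel: the finite part decides
    if dp > -eps and dq > -eps:
        return True     # gains (or ties) on both infinite counts
    if dp < eps and dq < eps:
        return False
    raise ValueError("sign depends on the relative sizes of X and Y")

def expected(rows, probs):
    # Probability-weighted sum of coefficient triples.
    return tuple(sum(pr * row[i] for pr, row in zip(probs, rows)) for i in range(3))

a = b = c = 1.0          # placeholder finite summands (made up)
probs = [0.1, 0.1, 0.8]  # P(C), P(I), P(A)
EBC = expected([(0.9, -0.1, 0), (0.7, -0.3, 0), (0, 0, -a)], probs)
EBI = expected([(0.6, -0.4, 0), (0.9, -0.1, 0), (0, 0, -b)], probs)
ENR = expected([(0.4, -0.6, 0), (0.4, -0.6, 0), (0, 0, c)], probs)

print(greater(EBC, EBI), greater(EBI, ENR))  # True True
```

Since every difference computed in the post has X and Y coefficients of the same sign, the comparison never needs to weigh X against Y, which is why this crude representation suffices here.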

Crucial to Christianity being favored over Islam was the fact that BC/I was bigger than BI/C: that Islam is more accepting of salvation for Christians than Christianity is of salvation for Muslims. If BC/I and BI/C were the same, then we'd have a tie between the infinities in EBC and EBI, and we'd have to decide based on comparisons between finite numbers like a, b and c (and finite summands in the other columns that I omitted for simplicity)--how much trouble it is to be a Christian versus being a Muslim, etc. However, in real life, I think the probabilities of Christianity and Islam aren't going to be the same (recall that above I assumed both were 0.1), because there are better apologetic arguments for Christianity and against Islam, and so even if BC/I and BI/C are the same, one will get the result that one should become Christian.

It is an interesting result that Pascal's wager considerations favor more exclusivist religions over more inclusivist ones--the inclusivist ones lower the risk of believing something else, while the exclusivist ones increase it.

It's easy to extend the table to include deities who send everybody to hell unless they are atheists, etc. But the probabilities of such deities are very low. There is significant evidence of the truth of Christianity and some evidence of the truth of Islam in the apologetic arguments for the two religions, but the evidence for such deities is very, very weak. We can add another column to the table, but as long as its probability is small (e.g., 0.001), it won't matter much.

Wednesday, May 21, 2008

Rational decision theory

Suppose that I assign a non-zero probability to some religion, and this religion tells me that a certain action decreases the probability of an infinitely valuable outcome (e.g., eternity in heaven, or avoiding an eternity in hell). If there is no non-zero probability hypothesis on which the action increases the probability of an infinitely valuable outcome (or decreases the probability of an infinitely disvaluable outcome, but I shall count the avoiding of an infinitely disvaluable outcome itself an infinitely valuable outcome for simplicity), it is plain that in prudential rationality I ought to avoid the action.

Suppose that a number of religions have non-zero probability. Then if A is any action such that at least one of the religions claims that A increases the probability of an infinitely valuable outcome (IVO), and none of the religions claim that A decreases this probability, and, further, there are no non-religious hypotheses of non-zero probability that would make A decrease the probability of an IVO, again I ought to refrain from doing A. Now, sometimes there will be a genuine conflict between religions, where one religion tells me that some action increases the likelihood of an IVO and another tells me that it decreases it. In that case, I need to get my hands dirty with probabilistic calculations. I need to compare the IVOs (not all IVOs are equal; eternity in heaven with daily apple pie is not quite as good as eternity in heaven with daily apple pie à la mode); I need to compare the degree to which according to the respective religions A contributes to the likelihood of the respective IVOs, and finally I need to compare the probabilities I assign to the respective religions. These computations involve comparisons of infinities, but that's not at all a big deal—I just use an appropriate non-standard arithmetical model of infinite values.

Except in the rare cases where things balance out precisely, say when there are only two religions of non-zero probability, and they have equal probability, and one says that A increases the chance of an IVO by 0.2 and the other says that it decreases it by 0.2, and the IVO is the same, or in the somewhat less rare, but still rare, cases where no religion of non-zero probability says anything relevant about A, these religious considerations will trump all other self-interested considerations. After all, only religious claims involve IVOs, and any change in the likelihood of an IVO trumps any change in the likelihood of something of finite value.[note 1]
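The trumping claim amounts to a lexicographic comparison: first compare the probability-weighted coefficients of infinite value, and only on a tie let finite value decide. A minimal sketch with invented numbers:

```python
# A prospect is a pair (infinite-value coefficient, finite value).
# Python tuple comparison is lexicographic, which is exactly the
# trumping rule: any IVO difference outweighs any finite difference.
refrain = (0.001, -50.0)   # tiny gain in the chance of an IVO, big finite cost
act     = (0.0, 1000.0)    # no effect on any IVO, large finite gain

print(max([refrain, act]))  # -> (0.001, -50.0), i.e. refrain wins
```

Even a 0.001 shift in the likelihood of an IVO dominates an arbitrarily large finite gain, which is what generates the restrictive way of life described below.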

Unless the number of religions to which I assign non-zero probability is very small (say, 0, 1 or 2), or there is a lot of similarity between the religions I assign non-zero probability to, taking these considerations into account will lead to a rather unappealing way of life. There will be a lot of restrictions on one's actions, since typically any action that at least one of the religions forbids will be forbidden to one, as it will be relatively rare that an action forbidden by one religion is positively required by another.

I think this is a reductio of rational decision theory, whether of a self-interested variety or of a utilitarian stripe. (After all, in utilitarian expected value calculations, I will need to take into account any IVOs for me or for others that have non-zero probability.)