Showing posts with label Pascal's Wager.

Thursday, March 31, 2022

Pascal's Wager for humans at death's door (i.e., all of us)

Much of the contemporary analytic discussion of Pascal’s Wager has focused on technical questions about how to express Pascal’s Wager formally in a decision-theoretic framework and what to do with it once that is done. And that’s interesting and important stuff. But a remark one of my undergrads made today has made me think about the Wager more existentially (and hence in a way closer to Pascal, I guess). Suppose our worry about the Wager is that we’re giving up the certainty of a comfortable future secular life for a very unlikely future supernatural happiness, so that our risk aversion makes us reject the Wager. My student noted that in this case things will look different if we reflect on the fact that we are all facing the certainty of death. We are all doomed to face that hideous evil.

Let me expand on this thought. Suppose that I am certain to die in an hour. I can spend that hour repenting of my grave sins and going to Mass or I can play video games. Let’s suppose that the chance of Christianity being right is pretty small. But I am facing death. Things are desperate. If I don’t repent, I am pretty much guaranteed to lose my comfortable existence forever in an hour, whether by losing my existence forever if there is no God or by losing my comfort forever if Christianity is right. There is one desperate hope, and the cost of that is adding an hour’s loss of ultimately unimportant pleasures to the infinite loss I am already facing. It sure seems rational to go for it.

Now for most of us, death is several decades away. But what’s the difference between an hour and several decades in the face of eternity?

I think there are two existential ways of thinking that are behind this line of thought. First, that life is very short and death is at hand. Second, given our yearning for eternity, a life without eternal happiness is of little value, and so more or less earthly pleasure is of but small significance.

Not everyone thinks in these ways. But I think we should. We are all facing the hideous danger of eternally losing our happiness—if Christianity is right, because of hell, and if naturalism is right, because death is the end. That danger is at hand: we are all about to die in the blink of an eye. Desperate times call for desperate measures. So we should follow Pascal’s advice: pray, live the Christian life, etc.

The above may not compel if the probability of Christianity is too small. But I don’t think a reasonable person who examines the evidence will think it’s that small.

Monday, March 28, 2022

Pascal's Wager and the beatific vision

To resolve the many gods and evil god objections to Pascal’s Wager, we need a way of comparing different infinite positive and negative outcomes. Technically, this is easy: we can represent these outcomes as an infinite quantity in some system like the hyperreals or vector-valued utilities, and then multiply these by probabilities, and add. The real difficulty is philosophical: how do we make probability-weighted comparisons of these infinite utilities? How does, say, a 30% chance of a Christian heaven compare to a 20% chance of a Muslim heaven? How does, say, a 30% chance of a Christian heaven compare to avoiding a 5% chance of a hell from an evil god?

I want to make a suggestion that might help get us started. On Christian orthodoxy, heavenly bliss is primarily constituted by the beatific vision—an intimate union with God where God himself comes to be directly present to consciousness, perhaps in something like the way that the qualia of ordinary acts of perception are often thought to be directly present to consciousness. How nice such an intimate union with a divine being is depends on how good the divine being is. For instance, plausibly, such a union with the kind of being who loves us enough to become incarnate and die for our sins is much better than such a union with a deity who wouldn’t or even couldn’t do that.

Gods that have morally objectionable conditions on how to get to heaven are presumably not going to be all that wonderful to spend an infinite time with—even a small chance of a beatific vision of a perfectly good God would beat a large chance of an afterlife with such a god. (Of course, some people think the Christian God’s conditions are morally objectionable.)

There is an important sense in which the beatific vision is intensively infinitely good—i.e., even a day of the beatific vision has infinite value—because the good of the beatific vision is constituted by the presence of an infinite God. Because of this, afterlives that feature something like the beatific vision may completely trump afterlives that do not. This may help with evil god worries, in that it is plausible that any suffering we can undergo will intensively be only finitely bad. If B is the value of the beatific vision and H is the (negative) value of hell, then pB + qH will be infinitely positive as long as p > 0, since B's infinity is of a higher order than H's.
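
One crude way to carry out such probability-weighted comparisons, sketched here only as an illustration (the three-component vector representation and the particular magnitudes are my assumptions), is with vector-valued utilities ordered lexicographically:

```python
from fractions import Fraction

# Toy vector-valued utilities, compared lexicographically:
# (second-order infinity, first-order infinity, finite part).
def expected(outcomes):
    """Probability-weighted sum of vector utilities."""
    total = [Fraction(0)] * 3
    for p, u in outcomes:
        for i, x in enumerate(u):
            total[i] += Fraction(p) * x
    return tuple(total)

B = (1, 0, 0)    # beatific vision: intensively infinite, higher order
H = (0, -1, 0)   # hell: extensively infinite suffering, lower-order infinity

p, q = Fraction(1, 100), Fraction(99, 100)   # p > 0, however small
ev = expected([(p, B), (q, H)])
assert ev > (0, 0, 0)   # pB + qH is lexicographically positive
```

Because tuples compare lexicographically, any positive weight on the higher-order infinity decides the comparison, no matter how large the lower-order negative term is.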

I am not saying that taking the beatific vision into account solves all the difficulties with Pascal’s wager. But it moves us forward.

Monday, April 12, 2021

Pascal's wager and decision theory

From time to time I find myself musing whether Pascal’s Wager doesn’t simply completely destroy ordinary probabilistic decision theory. Consider an ordinary decision, such as whether to walk or bike to work. There are various perfectly ordinary considerations in favor of one or the other. Biking is faster and more fun, but walking is safer and provides more opportunity for thought.

But in addition to all these, there are considerations having to do with one’s eternal destiny. It is hard to deny that there is a positive probability that we will have an eternal afterlife and that our daily choices will affect whether this afterlife is happy or miserable. But even tiny differences in the probability of eternal happiness infinitely swamp all the ordinary considerations in the decision whether to walk or bike. If the opportunity for more leisurely reflection afforded by walking even slightly increases one’s chance at eternal happiness, that infinite contribution to expected utility completely overcomes all the ordinary considerations. But on the other hand, biking would allow one to arrive at work earlier, and thereby take on a larger share of work burdens, which would lead to growth in virtue, and increase chances of eternal happiness. So in the end, it seems, many of our ordinary everyday decisions end up turning into exercises in balancing tiny differences in the probability of eternal joy, as these swamp all the other ordinary considerations. And that seems wrong.
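
A toy lexicographic comparison (the numbers are invented for illustration) shows the swamping:

```python
# Each option is a pair: (edge in probability of eternal happiness,
# ordinary finite utility). Tuples compare lexicographically, so any
# non-zero first component decides the comparison.
walk = (1e-9, -10.0)   # slightly better odds at eternity, less fun
bike = (0.0, 10.0)     # more fun, no edge on eternity
best = max(walk, bike)
assert best == walk    # the 1e-9 edge swamps the finite difference
```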

One move here is to say that the question of how ordinary approximately morally neutral decisions affect the afterlife is one that we have so little information on that we should bracket the infinities, and just focus on the finite stuff we know about. But on the other hand, does that make sense? After all, perhaps we should put all our mental energies into figuring out this stuff that we have so little information on, as the infinities in the utilities swamp everything else?

Tuesday, December 4, 2018

Pascal's Wager at the social level

There is a discussion among political theorists on whether religious liberty should be taken as special, or just another aspect of some standard liberty like personal autonomy.

Here’s an interesting line of thought. If God exists, then religious liberty is extremely objectively important, indeed infinitely important. Now maybe a secular state should not presuppose that God exists. There are strong philosophical arguments on both sides, and while I think the ones on the side of theism are conclusive, that is a controversial claim. However, on the basis of the arguments, it seems that even a secular state should think that it is a very serious possibility that God exists, with a probability around 1/2. But if there is a probability around 1/2 that religious liberty is infinitely important, then religious liberty is special.

Tuesday, May 16, 2017

Pascal's Wager and the bird-in-the-hand principle

My thinking about the St Petersburg Paradox has forced me to reject this Archimedean axiom (not the one in the famous representation theorem):

  1. For any finite utility U and non-zero probability ϵ > 0, there is a finite utility V such that a gamble that offers a probability ϵ of getting V is always better than a certainty of U.
Roughly speaking, one must reject (1) on pain of being subject to a two-player Dutch Book. But rejecting (1) is equivalent to affirming:
  2. There is a finite utility U and a non-zero probability ϵ > 0, such that no gamble that offers a probability ϵ of getting some finite benefit is better than certainty of U.
With some plausible additional assumptions (namely, transitivity, and that the same non-zero probability of a greater good is better than a non-zero probability of a lesser one), we get this bird-in-the-hand principle:
  3. There is a finite utility U and a non-zero probability ϵ > 0, such that for all finite utilities V, the certainty of U is better than a probability ϵ of V.
Now, Pascal's Wager, as it is frequently presented, says that:
  4. Any finite price is worth paying for any non-zero probability of any infinite payoff.
By itself, this doesn't directly violate the bird-in-the-hand principle, since in (3), I said that V was finite. But (4) is implausible given (3). Consider, for instance, this argument. By (3), there is a finite utility U and a non-zero probability ϵ > 0 such that U is better than an ϵ chance at N days of bliss for every finite N. A plausible limiting case argument suggests that then U is at least as good as an ϵ chance at an infinite number of days of bliss, contrary to (4)--moreover, then U+1 will be better than an ϵ chance at an infinite number of days of bliss. Furthermore, in light of the fact that standard representation theorem approaches to maximizing expected utility don't apply to infinite payoffs, the natural way to argue for (4) is to work with large finite payoffs and apply domination (Pascal hints at that: he gives the example of a gamble where you can gain "three lifetimes" and says that eternal life is better)--but along the way one will violate the bird-in-the-hand principle.
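
The St Petersburg gamble that forces the rejection of (1) can be checked numerically: each additional possible flip adds one unit to the expectation, so the truncated expected value outgrows any finite utility V.

```python
# Truncated expected value of the St Petersburg gamble: a fair coin is
# flipped until it lands heads; landing on flip k pays 2**k. Each term
# contributes (1/2**k) * 2**k = 1, so the expectation exceeds any bound.
def truncated_ev(max_flips):
    return sum((1 / 2**k) * 2**k for k in range(1, max_flips + 1))

print(truncated_ev(10))    # 10.0
print(truncated_ev(1000))  # 1000.0
```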

This doesn't, however, destroy Pascal's Wager. But it does render the situation more messy. If the probability ϵ of the truth of Christianity is too small relative to the utility U lost by becoming a Christian, then the bird-in-the-hand principle will prohibit the Pascalian gamble. But maybe one can argue that little if anything is lost by becoming a Christian even if Christianity is false--the Christian life has great internal rewards--and the evidence for Christianity makes the probability of the truth of Christianity not be so small that the bird-in-the-hand principle would apply. However, people's judgments as to what ϵ and U satisfy (2) will differ.

Pleasantly, too, the bird-in-the-hand principle gives an out from Pascal's Mugger.

Wednesday, August 26, 2015

Let's not model deontic constraints with infinite disutilities

It's natural to model deontic constraints in decision theory by assigning infinite disutility to forbidden actions. This temptation should be resisted. There are too many deontic theories with non-zero probability, and since an infinite disutility multiplied by a non-zero number is still infinite, we would have to take all these deontic theories extremely seriously. And that would lead to constant weighing of infinities against each other and/or an unduly restricted life that must obey prohibitions from fairly crazy (but not so crazy as to have zero probability) theories.

Thursday, January 23, 2014

Pascal's Wager rescued

In its classical formulation, Pascal's Wager contends that we have something like the following payoff matrix:

              God exists    No God
Believe       +∞            a
Don't believe −b            c
where a,b,c are finite. Alan Hajek, however, observes that it is incorrect to say that if you don't choose to believe, then the payoff is finite. For even if you don't now choose to believe, there is a non-zero chance that you will later come to believe, so the expected payoff whether you choose to believe or not is +∞.

Hajek's criticism has the following unhappy upshot. Suppose that there is a lottery ticket that costs a dollar and has a 9/10 chance of getting you an infinite payoff. That's a really good deal intuitively: you should rush out and buy the ticket. But the analogue to Hajek's criticism will say that since there is a non-zero chance that you will obtain the ticket without buying it—maybe a friend will give it to you as a gift—the expected payoff is +∞ whether you buy or don't buy. So there is no point to buying. So Hajek's criticism leads to something counterintuitive here, though that won't surprise Hajek. The point of this post is to develop a rigorous principled response to Hajek's criticism embodying the intuition that you should go for the higher probability of an infinite outcome over a lower probability of it.

A gamble is a random variable on a probability space. We will consider gambles that take their values in R*=R∪{−∞,+∞}, where R is the real numbers. Say that gambles X and Y are disjoint provided that at no point in the probability space are they both non-zero. We will consider an ordering ≤ on gambles, where X ≤ Y means that Y is at least as good a deal as X. Write X < Y if X ≤ Y but not Y ≤ X. Then we can say Y is a strictly better deal than X. Say that gambles X and Y are probabilistically equivalent provided that for any (Borel measurable) set of values A, P(X∈A)=P(Y∈A). Here are some very reasonable axioms:

  1. ≤ is a partial preorder, i.e., transitive and reflexive.
  2. If X and Y are real valued and have finite expected values, then X ≤ Y if and only if E(X)≤E(Y).
  3. If X and Y are defined on the same probability space and X(ω)≤Y(ω) for every point ω, then X ≤ Y.
  4. If X and Y are disjoint, and so are W and Z, and if X ≤ W and Y ≤ Z, then X+Y ≤ W+Z. If further X < W, then X+Y < W+Z.
  5. If X and Y are probabilistically equivalent, then X ≤ Y and Y ≤ X.
For any random variable X, let X* be the random variable that has the same value as X where X is finite and has value zero where X is infinite (positively or negatively).

The point of the above axioms is to avoid having to take expected values where there are infinite payoffs in view.

Theorem. Assume Axioms 1-5. Suppose that X and Y are gambles with the following properties:

  6. P(X=+∞)<P(Y=+∞)
  7. P(X=−∞)≥P(Y=−∞)
  8. X* and Y* have finite expected values
Then: X<Y.

It follows that in the lottery case, as long as the probability of getting a winning ticket without buying is smaller than the probability of getting a winning ticket when buying, you should buy. Likewise, if choosing to believe has a greater probability of the infinite payoff than not choosing to believe, and has no greater probability of a negative infinite payoff, and all the finite outcomes are bounded, you should choose to believe.
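
The comparison criterion the Theorem licenses can be sketched as code; the dict representation of a gamble and the particular numbers are assumptions for illustration:

```python
import math

def better(X, Y):
    """True if Y strictly beats X per the Theorem's conditions: a
    strictly higher probability of +inf and no higher probability of
    -inf (finite parts assumed to have finite expected values)."""
    p_pos = lambda G: G.get(math.inf, 0.0)
    p_neg = lambda G: G.get(-math.inf, 0.0)
    return p_pos(X) < p_pos(Y) and p_neg(X) >= p_neg(Y)

# Hajek-style lottery: buying gives a 9/10 chance at the infinite
# payoff for a dollar; not buying leaves a small gift chance at it.
buy      = {math.inf: 0.9, -1.0: 0.1}
dont_buy = {math.inf: 0.001, 0.0: 0.999}
print(better(dont_buy, buy))   # True: buying is strictly better
```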

Proof of Theorem: Say that an event E is continuous provided that for any 0 ≤ x ≤ P(E), there is an event F ⊆ E with P(F)=x. By Axiom 5, without loss of generality {X∈A} and {Y∈A} are continuous for any (Borel measurable) A. (Proof: If necessary, enrich the probability space that X is defined on to introduce a random variable U uniformly distributed on [0,1] and independent of X. The enrichment will not change any gamble orderings by Axiom 5. Then if 0 ≤ x ≤ P(X∈A), just choose a∈[0,1] such that aP(X∈A)=x and let F={X∈A & U≤a}. Ditto for Y.)

Now, given an event A and a random variable X, let AX be the random variable equal to X on A and equal to zero outside of A. Let A={X=−∞} and B={Y=−∞}. Define the random variables X1 and Y1 on [0,1] with uniform distribution by X1(x)=−∞ if x ≤ P(A) and X1(x)=0 otherwise, and Y1(x)=−∞ if x ≤ P(B) and Y1(x)=0 otherwise. Since P(A)≥P(B) by (7), it follows that X1(x)≤Y1(x) everywhere and so X1 ≤ Y1 by Axiom 3. But AX and BY are probabilistically equivalent to X1 and Y1 respectively, so by Axiom 5 we have AX ≤ BY. If we can show that AcX < BcY then the conclusion of our Theorem will follow from the second part of Axiom 4.

Let X2=AcX and Y2=BcY. Then P(X2=+∞)<P(Y2=+∞), X2* and Y2* have finite expected values, and X2 and Y2 never have the value −∞. We must show that X2 < Y2. Let C={X2=+∞}. By the continuity established above, let D be a subset of {Y2=+∞} with P(D)=P(C). Then CX2 and DY2 are probabilistically equivalent, so CX2 ≤ DY2 by Axiom 5. Let X3=CcX2 and Y3=DcY2. Observe that X3 is everywhere finite. Furthermore P(Y3=+∞)=P(Y2=+∞)−P(X2=+∞)>0.

Choose a finite N sufficiently large that NP(Y3=+∞)>E(X3)−E(Y3*) (the finiteness of the right hand side follows from our integrability assumptions). Let Y4 be a random variable that agrees with Y3 everywhere where Y3 is finite, but equals N where Y3 is infinite. Then E(Y4)=NP(Y3=+∞)+E(Y3*)>E(X3). Thus, Y4>X3 by Axiom 2. But Y3 is greater than or equal to Y4 everywhere, so Y4 ≤ Y3 by Axiom 3. By Axiom 1 it follows that Y3>X3. But DY2 ≥ CX2, and X2=CX2+X3 and Y2=DY2+Y3, so by Axiom 4 we have Y2>X2, which was what we wanted to prove.

Tuesday, January 21, 2014

Utility and the infinite multiverse

If we live in an infinite universe, then when we look at total values and disvalues, total utilities, we will always run into infinities. There will be infinitely many persons, of whom infinitely many will provide instances of flourishing, after all. Now one might say: "So what? Our individual actions only affect a finite portion of that infinite sea of value and disvalue."

But this may be mistaken. For if there are infinitely many persons, presumably there are infinitely many persons who have a rational and morally upright generally benevolent desire. A generally benevolent desire is a distributive desire for each person to flourish. It is not just a desire that the proposition <Everyone flourishes> be true, but a desire in regard to each person, that that person flourish, though the desire may be put in general terms because of course we can't expect people to know who the existent persons are.

Now, if you have a rational and morally upright desire, then you are better off to the extent that this desire is satisfied (some people will think this is true even with "and morally upright" omitted). Thus, if you have a rational and morally upright general benevolence, then even if some men are islands, you are not. Whenever someone comes to be better off, you come to be better off, and whenever someone comes to be worse off, you come to be worse off. So if infinitely many people have a rational and morally upright general benevolence, whenever I directly do something good or bad to you, I thereby benefit or harm infinitely many people. And no matter how small the benefit or harm to each of these generally benevolent people, it surely adds up to infinity.

St. Anselm thought that our sins were infinitely bad as they were offenses against an infinite God. If we live in a multiverse, those of our sins that harm people also harm infinitely many people.

One might object that the generally benevolent person will only be infinitesimally benefitted or harmed by a finite harm to one person in the infinite sea of persons in the multiverse. That may be true of some very weakly benevolent people. But there will also be infinitely many generally benevolent people whose general benevolence will be sufficiently strong that the benefit or harm will be non-infinitesimal. After all, one can imagine a person who, if faced with a choice between gaining a dollar herself and a stranger she knows nothing about gaining a hundred dollars, would always prefer the latter option. Such a person counts benefits and harms to other people at at least 1/100th of what such benefits and harms to herself would count as. And so if I deprive anybody of a hundred dollars, each such generally benevolent person will, in effect, be harmed to a degree equal to a one-dollar deprivation. As long as there are infinitely many generally benevolent people with at least that 1:100 preference ratio, the argument will yield that a non-infinitesimal harm to anybody results in an infinite harm. And plausibly there would in fact be infinitely many people with a 1:1 preference ratio, or maybe even a 2:1 preference ratio (they would rather that others benefit than that they themselves do).
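
The arithmetic of the example can be spelled out; the agent counts below are illustrative stand-ins for "infinitely many":

```python
# The 1:100 preference ratio from the example: a $100 loss to a stranger
# registers as a $1-equivalent harm to each such benevolent person, so
# the total harm grows without bound as the number of such persons does.
loss_to_stranger = 100
per_agent_harm = loss_to_stranger / 100   # dollar-equivalent harm each
totals = [per_agent_harm * n for n in (10, 10**3, 10**6)]
print(totals)   # partial sums diverge as the agent count grows
```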

So we cannot avoid dealing with infinite utilities if there are infinitely many persons. For each of our nontrivial actions will affect infinitely many persons, since infinitely many persons will have rational and morally upright desires that bear on the action.

Moreover, even denying the existence of an infinite multiverse, or of an infinite universe, won't get us off the hook. For even if we don't think such an infinitary hypothesis is true, we surely assign non-zero epistemic probability to it. The arguments against the hypothesis may be strong but are not so strong as to make us assign zero or infinitesimal probability to it. And a non-zero non-infinitesimal probability of an infinite good still has infinite expected utility.

Interestingly, too, as long as overall people flourish across an infinite multiverse, each such non-infinitesimally generally benevolent person will seem to be infinitely well off. Such are the blessings of benevolence in an overall good universe.

The above argument will be undercut if we think that one only benefits from the fulfillment of a desire when one is aware of that fulfillment. But that view is mistaken. An author who wrote a good book is well off for being liked even if she does not know that she is liked.

Monday, December 16, 2013

Pascal's Wager in a social context

One of our graduate students, Matt Wilson, suggested an analogy between Pascal's Wager and the question about whether to promote or fight theistic beliefs in a social context (and he let me cite this here).

This made me think. (I don't know what of the following would be endorsed by Wilson.) The main objections to Pascal's Wager are:

  1. Difficulties in dealing with infinite utilities. That's merely technical (I say).
  2. Many gods.
  3. Practical difficulties in convincing oneself to sincerely believe what one has no evidence for.
  4. The lack of epistemic integrity in believing without evidence.
  5. Would God reward someone who believes on such mercenary grounds?
  6. The argument just seems too mercenary!

Do these hold in the social context, where I am trying to decide whether to promote theism among others? If theistic belief non-infinitesimally increases the chance of other people getting infinite benefits, without any corresponding increase in the probability of infinite harms, then that should yield very good moral reason to promote theistic belief. Indeed, given utilitarianism, it seems to yield a duty to promote theism.

But suppose that instead of asking what I should do to get myself to believe, the question is what I should try to get others to believe. Then there are straightforward answers to the analogue of (3): I can offer arguments for and refute arguments against theism, and help promote a culture in which theistic belief is normative. How far I can do this is, of course, dependent on my particular skills and social position, but most of us can do at least a little, either to help others to come to believe or at least to maintain their belief.

Moreover, objection (4) works differently. For the Wager now isn't an argument for believing theism, but an argument for increasing the number of people who believe. Still, there is force to an analogue to (4). It seems that there is a lack of integrity in promoting a belief that one does not hold. One is withholding evidence from others and presenting what one takes to be a slanted position (for if one thought that the balance of the evidence favored theism, then one wouldn't need any such Wager). So (4) has significant force, maybe even more force than in the individual case. Though of course if utilitarianism is true, that force disappears.

Objections (5) and (6) disappear completely, though. For there need be nothing mercenary about the believers any more, and the promoter of theistic beliefs is being unselfish rather than mercenary. The social Pascal's Wager is very much a morally-based argument.

Objections (1) and (2) may not be changed very much. Though note that in the social context there is a hedging-of-the-bets strategy available for (2). Instead of promoting a particular brand of theism, one might instead fight atheism, leaving it to others to figure out which kind of theist they want to be. Hopefully at least some theists get the brand of theism right—while surely no atheist does.

I think the integrity objection is the most serious one. But that one largely disappears when instead of considering the argument for promoting theism, one considers the argument against promoting atheism. For while it could well be a lack of moral integrity to promote one-sided arguments, there is no lack of integrity in refraining from promoting one's beliefs when one thinks the promotion of these beliefs is too risky. For instance, suppose I am 99.9999% sure that my new nuclear reactor design is safe. But 99.9999% is just not good enough for a nuclear reactor design! I therefore might choose not to promote my belief about the safety of the design, even with the 99.9999% qualifier, because politicians and reporters who aren't good at reasoning about expected utilities might erroneously conclude not just that it's probably safe (which it probably is), but that it should be implemented. And the harms of that would be too great. Prudence might well require me to be silent about evidence in cases where the risks are asymmetrical, as in the nuclear reactor case where the harm of people coming to believe that it's safe when it's unsafe so greatly outweighs the harm of people coming to believe that it's unsafe when it's safe. And the case of theism is quite parallel.

Thus, consistent utilitarian atheists will promote theism. (Yes, I think that's a reductio of utilitarianism!) But even apart from utilitarianism, no atheist should promote atheism.

Friday, December 31, 2010

A stupid way to invest

Here's a fun little puzzle for introducing some issues in decision theory. You want to invest a sum of money that is very large for you (maybe it represents all your present savings, and you are unlikely to save that amount again), but not large enough to perceptibly affect the market. A reliable financial advisor suggests you diversifiedly invest in n different stocks, s1,...,sn, putting xi dollars in si. You think to yourself: "That's a lot of trouble. Here is a simpler solution that has the same expected monetary value, and is less work. I will choose a random number j between 1 and n, such that the probability of choosing j=i is proportional to xi (i.e., P(j=i)=xi/(x1+...+xn)). Then I will put all my money in sj." It's easy to check that this method does have the same expected value as the diversified strategy. But it's obvious that this is a stupid way to invest. The puzzle is: Why is this stupid?
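
The claim that the randomized strategy has the same expected monetary value can be checked by simulation, along with what the expectation hides; the two-outcome return model below is a toy assumption:

```python
import random

# Monte Carlo sketch: each stock independently halves or gains 60% with
# equal probability (toy model). Both strategies have the same expected
# value, but the all-in-one-random-stock strategy has far higher variance.
def simulate(n_trials=20_000, seed=0):
    rng = random.Random(seed)
    weights = [0.25, 0.25, 0.25, 0.25]   # the advisor's diversified split
    diversified, lump = [], []
    for _ in range(n_trials):
        factors = [rng.choice([0.5, 1.6]) for _ in weights]
        diversified.append(sum(w * f for w, f in zip(weights, factors)))
        # pick stock j with probability proportional to its allocation
        j = rng.choices(range(len(weights)), weights=weights)[0]
        lump.append(factors[j])
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return mean(diversified), mean(lump), var(diversified), var(lump)

m_div, m_lump, v_div, v_lump = simulate()
# The means agree (about 1.05); the lump strategy's variance is roughly
# four times larger, since diversifying over 4 stocks divides it by 4.
```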

Well, one standard answer is this. This is stupid because utility is not proportional to dollar amount. If the sum of money is large for you, then the disutility of losing everything is greater than the utility of doubling your investment. If that doesn't satisfy, then the second standard answer is that this is an argument for why we ought to be risk averse.

Maybe these answers are good. I don't have an argument that they're not. But there is another thought that from time to time I wonder about. We're talking of what is for you a very large sum of money. Now, the justification for expected-utility maximization is that in the long run it pays. But here we are dealing with what is most likely a one-time decision. So maybe the fact that in the long run it pays to use the simpler randomized investment strategy is irrelevant. If you expected to make such investments often, the simpler strategy would, indeed, be the better one—and would eventually result in a diversified portfolio. But for a one-time decision, things may be quite different. If so, this is interesting—it endangers Pascal's Wager, for instance.

Friday, May 14, 2010

Newcomb's Paradox and Pascal's Wager

Let Egalitarian Universalism (EU) be the doctrine that God exists and gives everyone infinite happiness, and that the quantity of this happiness is the same for everyone. The traditional formulation of Pascal's Wager obviously does not work in the case of the God of EU. What is surprising, however, is that one can make Pascal's Wager work even given the God of EU if one thinks that Bayesian decision theory, and hence one-boxing, is the right way to go in the case of Newcomb's Paradox with a not quite perfect predictor.

Here is how the trick works. Suppose that the only two epistemically available options are EU and atheism, and I need to decide whether or not to believe in God. Given Bayesian decision theory, I should choose whether to believe based on the conditional expected utilities. I need to calculate:

  1. U1=rP(EU|believe) + aP(atheism|believe)
  2. U2=rP(EU|~believe) + bP(atheism|~believe)
where r is the infinite positive reward that EU guarantees everybody, and a and b are the finite goods or bads of this life available if atheism is true. If U1 is greater than U2, then I should believe.

We'll need to use our favorite form of non-standard analysis for handling infinities. Observe that

  3. P(believe|EU)>P(believe|~believe EU),
since a God would be moderately likely to want people to believe in him, and hence it is somewhat more likely that there would be theistic belief if God existed than if atheism were true (and I assumed that atheism and EU are the only options). But then by Bayes' Theorem it follows from (3) that:
  4. P(EU|believe)>P(EU|~believe).
Let c=P(EU|believe)-P(EU|~believe). By (4), c is a positive number. Then:
  5. U1−U2=rc + something finite.
Since r is infinite and positive, it follows that U1−U2>0, and hence U1>U2, so I should believe in EU.
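
Here is a sketch of the computation with a two-component surrogate for the hyperreal arithmetic; the conditional probabilities and the finite payoffs a and b are invented for illustration:

```python
from fractions import Fraction as F

# Utilities are pairs (units of infinity, finite part), compared
# lexicographically; r is one unit of infinity with no finite part.
P_EU_believe, P_EU_unbelieve = F(6, 10), F(5, 10)   # assumed values
a, b = F(3), F(5)                                    # assumed finite goods
r = (F(1), F(0))

def utility(p_eu, finite_payoff):
    # rP(EU|choice) + finite_payoff*P(atheism|choice), componentwise
    return (r[0] * p_eu, r[1] * p_eu + finite_payoff * (1 - p_eu))

U1 = utility(P_EU_believe, a)      # choose to believe
U2 = utility(P_EU_unbelieve, b)    # choose not to believe
diff = (U1[0] - U2[0], U1[1] - U2[1])
assert diff[0] == F(1, 10)   # c = P(EU|believe) - P(EU|~believe) > 0
assert diff > (0, 0)         # U1 - U2 = rc + something finite > 0
```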

The argument works on non-egalitarian universalism, too, as long as we don't think God gives an infinitely greater reward to those who don't believe in him.

However, universalism is false and one-boxing is mistaken.

Friday, July 10, 2009

From self-interest to morality

On a familiar Hobbesian picture (whether it was that of Hobbes, I know not), a sovereign is needed to enforce the laws in order for moral behavior to become rational, where rationality is equated with self-interest, and once there is a sovereign, it is rational to strictly adopt morality. Gauthier, instead, thinks we can get by with the fact that if we fail to strictly commit ourselves to the moral code, we will likely lose out—we'll get caught.

I do not know that either picture is sufficient to show that it is rational to become moral. For, it seems, a smart person with the executive virtues might, instead of adopting morality, adopt almost-morality, such as a disposition to act morally unless one has a better than 99.9% chance of gaining at least twenty million dollars without getting caught. We can imagine the almost-moral financier who goes along, as morally as everybody else, cooperating with others, obeying traffic laws, punctiliously handling her clients' moneys—as long as less than $20 million is at stake or as long as the chance of getting caught is 0.1% or higher. It seems that from a self-interest perspective, she might do better than by simply adopting morality, though on the other hand Gauthier might point to the psychic costs of monitoring for the possibility of getting $20 million with a chance of getting caught under 0.1%. Then again, the wishful thinking might add some spice to the person's life. And maybe the person has a pretty good antecedent chance of eventually being able to work the swindle. So, I think, on Gauthier-like and thumbnail-Hobbes-like considerations, it might sometimes only be rational to adopt almost-morality.

But there is a better way to argue for adopting morality. Say that a view is "serious" provided that there is some evidence for it. On all serious non-religious views, all life's payoffs are finite. On some serious religious views, adopting morality increases the chance of an infinite positive payoff, and on some of these it also infinitely increases the size of a possible infinite positive payoff (e.g., by moving one from one level in heaven to another, thereby resulting in greater bliss for eternity). On some serious religious views (there is an overlap between these and the former), adopting morality decreases the chance of an infinitely negative payoff, and on some of these it also infinitely decreases the size of a possible infinitely negative payoff (e.g., by moving one down to a lower circle of hell). On some serious religious views, the effect of adopting morality on infinite payoffs is inscrutable. On some serious religious views, there either are no infinite payoffs (e.g., religious views that have no afterlife) or the infinite payoffs are only finitely affected by whether one adopts morality (e.g., reincarnationist views on which everyone eventually achieves the same level of bliss, so that how one lives only affects how many lives it takes to do that).

But on no serious religious view does adopting morality decrease the chance of an infinitely positive payoff, increase the chance of an infinitely negative payoff, infinitely decrease an infinitely positive payoff, or make infinitely worse an infinitely negative payoff. Now put the above together, using some coherent way of handling infinities mathematically. Assume that at least one serious religious view on which adopting morality increases the probability of a positive infinity, or infinitely increases the size of a positive infinity, or decreases the probability of a negative infinity, or infinitely decreases the size of a negative infinity, has non-zero probability. Assume also that the non-serious views cancel out or are probabilistically overwhelmed by the serious ones. We then get the conclusion that self-interest requires that we adopt morality, rather than almost-morality or any other alternative.
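The swamping step here can be illustrated numerically. Here is a toy sketch (not from the post, and with made-up numbers throughout): values have the form a·w + b, where w is an infinite unit, compared lexicographically, so that even a tiny gain in the probability of an infinite payoff outweighs any finite sum such as the almost-moral financier's $20 million.

```python
# Toy model of infinite vs. finite payoffs: a value is a*w + b, where w
# is an infinite unit. Comparison is lexicographic: the coefficient of
# w dominates, and the finite part only breaks ties.
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Val:
    inf: float  # coefficient of the infinite unit w
    fin: float  # ordinary finite part

def add(u, v): return Val(u.inf + v.inf, u.fin + v.fin)
def scale(p, v): return Val(p * v.inf, p * v.fin)

heaven = Val(1.0, 0.0)        # payoff +w
swindle_gain = Val(0.0, 2e7)  # $20 million, merely finite

# Hypothetical numbers: adopting morality raises the chance of the
# infinite payoff by a mere 0.001 relative to almost-morality.
moral = scale(0.101, heaven)                        # 0.101*w
almost = add(scale(0.100, heaven), swindle_gain)    # 0.100*w + $20M

print(moral > almost)  # True: the infinite term swamps the $20 million
```

The particular probabilities (0.101 vs. 0.100) are arbitrary; any positive difference in the coefficient of w yields the same verdict, no matter how large the finite gain on the other side.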

I do want to consider one objection. According to orthodox Christianity, salvation is a fruit of God's grace rather than something we achieve by our own willed effort. Now, one might argue from this fact that it is not the case that I decrease the chance of God giving me the grace of conversion when I adopt the way of life of the pimp over the way of life of a philanthropist. If so, then whether I adopt morality or not will not affect the chances of infinite (whether positive or negative) payoffs. That's fine. But there is no Christian view on which it is the case that we in fact increase the probability of a positive payoff by adopting the way of life of the pimp. Granted, God loves the pimp, but God also loves the philanthropist. The probabilities that God will offer such-and-such a grace to a person are, on these grace-based views, inscrutable. One might worry that the philanthropist is more prone to self-righteousness than the pimp. But just as, according to Christian doctrine, God loves the exploiter, so too does God love the self-righteous. (Of course he hates the exploiting and the self-righteousness, both for the effect on victims, and for the effect on the vicious person.)

But that objection is only relevant if the above-described Christian view is the only one with non-zero probability. (There are some complicated theological and probabilistic questions about some of the arguments in the previous paragraph—it might turn out to be compatible with a grace-based view of salvation that morality, being itself a fruit of grace, increases the chance of salvation, or prepares the way for the acceptance of grace. Also, once one has received grace, by acting seriously immorally, one rejects grace. While God might offer it again, perhaps we cannot count on it.) And if that is the case, then one has other rational reasons to be moral—reasons internal to that Christian view, such as that by being moral, one acts lovingly towards the God who died for one's sins, and lives more fully as a member of the body of Christ. It does not matter for the argument whether a religious view on which morality improves the chance of an infinite payoff is true. All one needs is non-zero epistemic probability.

A more serious objection concerns the content of that morality. But among the serious religious views there will, first, be agreement that one ought to be moral, so that striving to figure out what is moral, and striving to do it, will be prudent; and, second, there will be agreement on various, though not all, aspects of what being moral entails. Where the views disagree, it will be more prudent to choose the safer route (thus, if one serious view says that contraception is immoral, and no serious view says that contraception is morally required, then one shouldn't contracept).

Monday, July 6, 2009

Pascal's wager and infinity

(Cross-posted to prosblogion).

Some people, I think, are still under the impression that the infinities in Pascal's wager create trouble. Thus, there is the argument that even if you don't believe now, you might come to believe later, and hence the expected payoff for not believing now is also infinite (discounting hell), just as is the payoff for believing now. Or there is the argument that you might believe now and end up in hell, so the payoff for believing now is undefined: infinity minus infinity.

But there are mathematically rigorous ways of modeling these infinities, such as Non-Standard Analysis (NSA) or Conway's surreal numbers. The basic idea is that we extend the field of real numbers to a larger ordered field with all of the same arithmetical operations, where the larger field contains numbers that are bigger than any standard real number (positive infinity), numbers that are bigger than zero and smaller than any positive standard real number (positive infinitesimals), etc. One works with the larger field by exactly the same rules as one works with reals. This is all perfectly rigorous.
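As a minimal illustration (not from the post, and only a small fragment of a full field such as the hyperreals or surreals), one can represent numbers of the form a·w + b + c/w, with w a fixed positive infinite element, and order them lexicographically; this already exhibits infinite numbers and positive infinitesimals behaving as described:

```python
# Minimal sketch of an ordered extension of the reals: a number is
# a*w + b + c/w, where w is a fixed positive infinite element.
# Lexicographic ordering on (a, b, c) gives the right comparisons.
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Ext:
    a: float = 0.0  # coefficient of w (infinite part)
    b: float = 0.0  # standard real part
    c: float = 0.0  # coefficient of 1/w (infinitesimal part)

w = Ext(a=1.0)    # an infinite number
eps = Ext(c=1.0)  # a positive infinitesimal, 1/w

print(w > Ext(b=1e100))             # True: w exceeds every standard real
print(Ext() < eps < Ext(b=1e-100))  # True: eps lies strictly between 0
                                    # and every positive standard real
```

A full construction (NSA or Conway's) also supplies multiplication and division; the point here is only that the ordering is perfectly well-defined.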

Let's do an example of how it works. Suppose I am choosing between Christianity, Islam and Atheism. Let C, I and A be the claims that the respective view is true. Let's simplify by supposing I have three options: BC (believe and practice Christianity), BI (believe and practice Islam) and NR (no religious belief or practice).

Now I think about the payoff matrix. It's going to be something like this, where the columns depend on what is true and the rows on what I do:

        C           I           A
BC      0.9X-0.1Y   0.7X-0.3Y   -a
BI      0.6X-0.4Y   0.9X-0.1Y   -b
NR      0.4X-0.6Y   0.4X-0.6Y    c
Here, X is the payoff of heaven and -Y is the payoff of hell, where X and Y are positive infinities. I assume that the Christian and Islamic heavens are equally nice, and that the Christian and Islamic hells are equally unpleasant. The lowercase letters a, b and c indicate finite positive numbers. How did I come up with the table? Well, I made it up. But not completely arbitrarily. For instance, BC/C (I will use that symbolism to indicate the value in the C column of the BC row) is 0.9X-0.1Y. I was thinking: if Christianity is true, and you believe and practice it, there is a 90% chance you'll go to heaven and a 10% chance you'll go to hell. On the other hand, BC/I is 0.7X-0.3Y, because Islam expressly accepts the possibility of salvation for Christians (at least as long as they're not ex-Muslims, I think), but presumably the likelihood is lower than for a Muslim. BI/C is 0.6X-0.4Y, because while there are well-developed Christian theological views on which a Muslim can be saved, these views are probably not an integral part of the tradition, so the BI/C expected payoff is lower than the BC/I one. The C and I columns of the table should also include some finite summands, but those aren't going to matter. A lot of the numbers can be tweaked in various ways, and I've taken somewhat more "liberal" (in the etymological sense) numbers—thus, some might say that the payoff of NR/C should be 0.1X-0.9Y, etc.

What should one do, now? Well, it all depends on the epistemic probabilities of C, I and A. Let's suppose that they are: 0.1, 0.1 and 0.8, and calculate the payoffs of the three actions.

The expected payoff of BC is EBC = 0.1 (0.9X - 0.1Y) + 0.1 (0.7X - 0.3Y) + 0.8 (-a) = 0.16X - 0.04Y - 0.8a.

The expected payoff of BI is EBI = 0.15X - 0.05Y - 0.8b.

The expected payoff of NR is ENR = 0.08X - 0.12Y + 0.8c.

Now, let's compare these. EBC - EBI = 0.01X + 0.01Y + 0.8(b-a). Since X and Y are positive infinities, and a and b are finite, EBC - EBI > 0. So, EBC > EBI. Similarly, EBI - ENR = 0.07X + 0.07Y - 0.8(b+c) > 0, and so EBI > ENR. Just to be sure, we can also check that EBC - ENR = 0.08X + 0.08Y - 0.8(a+c) > 0, so EBC > ENR.

Therefore, our rank ordering is: EBC > EBI > ENR. It's most prudent to become Christian, less prudent to become a Muslim and less prudent yet to have no religion. There are infinities all over the place in the calculations, but we can rigorously compare them.
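The whole calculation can be carried out mechanically. Here is a sketch of it (not from the post), assuming for simplicity that X and Y are the same infinite unit w—the post leaves their relative size open—and using sample finite values for a, b and c:

```python
# The payoff-table calculation, with values of the form a*w + b
# (w an infinite unit, lexicographic comparison). Simplifying
# assumption: the heaven payoff X and hell magnitude Y both equal w.
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Hyper:
    inf: float  # coefficient of w
    fin: float  # finite part

    def __add__(self, o): return Hyper(self.inf + o.inf, self.fin + o.fin)
    def __sub__(self, o): return Hyper(self.inf - o.inf, self.fin - o.fin)
    def __rmul__(self, p): return Hyper(p * self.inf, p * self.fin)

X = Hyper(1.0, 0.0)  # heaven
Y = Hyper(1.0, 0.0)  # hell magnitude (the payoff is -Y)
a = Hyper(0.0, 5.0)  # sample finite costs/gains; the exact values
b = Hyper(0.0, 5.0)  # don't matter, as the text notes
c = Hyper(0.0, 5.0)

# Expected payoffs with P(C) = P(I) = 0.1 and P(A) = 0.8:
EBC = 0.1 * (0.9 * X - 0.1 * Y) + 0.1 * (0.7 * X - 0.3 * Y) - 0.8 * a
EBI = 0.1 * (0.6 * X - 0.4 * Y) + 0.1 * (0.9 * X - 0.1 * Y) - 0.8 * b
ENR = 0.1 * (0.4 * X - 0.6 * Y) + 0.1 * (0.4 * X - 0.6 * Y) + 0.8 * c

print(EBC > EBI > ENR)  # True: the ranking derived in the text
```

The infinite coefficients come out as 0.12, 0.10 and -0.04 respectively, so the ranking EBC > EBI > ENR is settled before the finite parts are even consulted.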

Crucial to Christianity being favored over Islam was the fact that BC/I was bigger than BI/C: that Islam is more accepting of salvation for Christians than Christianity is of salvation for Muslims. If BC/I and BI/C were the same, then we'd have a tie between the infinities in EBC and EBI, and we'd have to decide based on comparisons between finite numbers like a, b and c (and finite summands in the other columns that I omitted for simplicity)--how much trouble it is to be a Christian versus being a Muslim, etc. However, in real life, I think the probabilities of Christianity and Islam aren't going to be the same (recall that above I assumed both were 0.1), because there are better apologetic arguments for Christianity and against Islam, and so even if BC/I and BI/C are the same, one will get the result that one should become Christian.

It is an interesting result that Pascal's wager considerations favor more exclusivist religions over more inclusivist ones: an inclusivist religion lowers the risk of believing one of its rivals, while an exclusivist religion raises that risk.

It's easy to extend the table to include deities who send everybody to hell unless they are atheists, etc. But the probabilities of such deities are very low. There is significant evidence of the truth of Christianity, and some evidence of the truth of Islam, in the apologetic arguments for the two religions, but the evidence for such deities is very, very weak. We can add another column to the table for them, but as long as its probability is small (e.g., 0.001), it won't matter much.

Monday, June 29, 2009

Pascal's wager and decision theory

I think Pascal's wager could be seen as a way of destroying most of standard decision theory in the case of many agents. The reason for this is that just about any significant choice one makes will have the property that according to some religious views, that choice affects the probabilities of getting an infinite payoff, and unless the agent has a way of assigning zero epistemic probability to that religion, these infinitary considerations will swamp all the finite considerations. Thus, one wonders to oneself: "Should I self-flagellate?" There is an obvious answer: "No, because it hurts." But because there are religious views according to which such self-flagellation helps attain an infinite payoff, then unless one assigns zero probability to these views, the infinitary considerations swamp the finitary considerations coming from the fact that it hurts. One ends up having to compare the increased probability that one will get an infinite payoff if one self-flagellates on religious views that are pro-flagellation with the decreased probability of an infinite payoff on anti-flagellation religions, and the apparently relevant consideration that it hurts just drops out by the wayside (unless the infinitary considerations end up being perfectly balanced).

One might think one can dismiss the infinitary considerations because of problems with weighing infinities. But those can be solved fairly easily by adopting an appropriate version of non-standard arithmetic.

Maybe what this is, though, is not so much a reductio of standard decision theory, as a way of showing that practical rationality requires that one assign non-zero probability to at most one religious view (or maybe one moderately narrow family of closely-related religious views). Dogmatic atheists and dogmatic religionists would like this conclusion. And I am a dogmatic religionist, after all. :-)

Wednesday, May 21, 2008

Rational decision theory

Suppose that I assign a non-zero probability to some religion, and this religion tells me that a certain action decreases the probability of an infinitely valuable outcome (e.g., eternity in heaven, or avoiding an eternity in hell). If there is no non-zero probability hypothesis on which the action increases the probability of an infinitely valuable outcome (or decreases the probability of an infinitely disvaluable outcome—for simplicity, I shall count the avoiding of an infinitely disvaluable outcome as itself an infinitely valuable outcome), it is plain that in prudential rationality I ought to avoid the action.

Suppose that a number of religions have non-zero probability. Then if A is any action such that at least one of the religions claims that A increases the probability of an infinitely valuable outcome (IVO), and none of the religions claim that A decreases this probability, and, further, there are no non-religious hypotheses of non-zero probability that would make A decrease the probability of an IVO, again I ought to refrain from doing A. Now, sometimes there will be a genuine conflict between religions, where one religion tells me that some action increases the likelihood of an IVO and another tells me that it decreases it. In that case, I need to get my hands dirty with probabilistic calculations. I need to compare the IVOs (not all IVOs are equal; eternity in heaven with daily apple pie is not quite as good as eternity in heaven with daily apple pie à la mode); I need to compare the degree to which according to the respective religions A contributes to the likelihood of the respective IVOs, and finally I need to compare the probabilities I assign to the respective religions. These computations involve comparisons of infinities, but that's not at all a big deal—I just use an appropriate non-standard arithmetical model of infinite values.
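The conflict case reduces to a simple comparison. Here is a toy sketch (not from the post; all numbers hypothetical) for two conflicting religions whose IVOs are assumed equally good, so that only the probabilities need weighing:

```python
# Toy version of the "genuine conflict" computation: two religions
# disagree about whether action A helps or hurts the chance of an IVO.
# Hypothetical numbers throughout. The IVOs are assumed equally good,
# so the net effect of A is a single coefficient of the infinite unit w.
p1, d1 = 0.2, 0.2  # religion 1: probability 0.2, says A raises IVO chance by 0.2
p2, d2 = 0.1, 0.2  # religion 2: probability 0.1, says A lowers IVO chance by 0.2

net = p1 * d1 - p2 * d2  # net coefficient of w from doing A
hurt = -10.0             # finite disvalue of A (say, A is self-flagellation)

# A is prudent iff the infinite coefficient is positive; the finite pain
# only gets a vote if the infinite considerations balance exactly.
do_A = net > 0 or (net == 0 and hurt > 0)
print(do_A)  # True here: 0.02*w swamps any finite pain
```

This is just the point made in the self-flagellation example in the post above it: unless the infinitary considerations are perfectly balanced, the finite consideration "it hurts" drops out of the decision entirely.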

Except in the rare cases where things balance out precisely—say, when there are only two religions of non-zero probability, they have equal probability, one says that A increases the chance of an IVO by 0.2 while the other says that it decreases it by 0.2, and the IVO is the same—or in the somewhat less rare, but still rare, cases where no religion of non-zero probability says anything relevant about A, these religious considerations will trump all other self-interested considerations. After all, only religious claims involve IVOs, and any change in the likelihood of an IVO trumps any change in the likelihood of something of finite value.[note 1]

Unless the number of religions that I assign non-zero probability to is very small (say, 0, 1 or 2), or there is a lot of similarity between the religions I assign non-zero probability to, taking these considerations into account will lead to a rather unappealing way of life, since there will be many restrictions on one's actions: typically, any action that at least one of the religions forbids will be forbidden to one, because it will be relatively rare that an action forbidden by one religion is positively required by another.

I think this is a reductio of rational decision theory, whether of a self-interested variety or of a utilitarian stripe. (After all, in utilitarian expected value calculations, I will need to take into account any IVOs for me or for others that have non-zero probability.)