My thinking about the St Petersburg Paradox has forced me to reject this Archimedean axiom (not the one in the famous representation theorem):
1. For any finite utility U and non-zero probability ϵ > 0, there is a finite utility V such that a gamble that offers a probability ϵ of getting V is always better than a certainty of U.
Instead, I accept a bird-in-the-hand principle along these lines:
2. There is a finite utility U and a non-zero probability ϵ > 0, such that no gamble that offers a probability ϵ of getting some finite benefit is better than certainty of U.
3. There is a finite utility U and a non-zero probability ϵ > 0, such that for all finite utilities V, the certainty of U is better than a probability ϵ of V.
Correspondingly, I deny:
4. Any finite price is worth paying for any non-zero probability of any infinite payoff.
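As a side note, here is a minimal simulation sketch (mine, not part of the original post) of the St Petersburg game that motivates all this: the game's expected payoff is infinite, so an expected-utility maximizer who accepts (1) should be willing to pay any finite price to play, yet typical outcomes stay small.

```python
import random

def st_petersburg_payoff():
    # One play: the pot starts at 1 and doubles for every tail before the first head.
    payoff = 1
    while random.random() < 0.5:
        payoff *= 2
    return payoff

for n in (10**3, 10**5, 10**6):
    draws = sorted(st_petersburg_payoff() for _ in range(n))
    mean = sum(draws) / n
    print(f"n = {n:>7}: sample mean ~ {mean:6.1f}, median = {draws[n // 2]}")
```

The theoretical expectation diverges (each term of the payoff series contributes 1/2), but the sample mean grows only roughly logarithmically in the number of plays and the median stays tiny, which is the intuition (2) and (3) try to respect.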
This doesn't, however, destroy Pascal's Wager. But it does render the situation messier. If the probability ϵ of the truth of Christianity is too small relative to the utility U lost by becoming a Christian, then the bird-in-the-hand principle will prohibit the Pascalian gamble. But maybe one can argue that little if anything is lost by becoming a Christian even if Christianity is false--the Christian life has great internal rewards--and that the evidence for Christianity keeps the probability of its truth from being so small that the bird-in-the-hand principle would apply. However, people's judgments as to which ϵ and U satisfy (2) will differ.
Pleasantly, too, the bird-in-the-hand principle gives an out from Pascal's Mugger.
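To make the role of the parameters vivid, here is a toy decision rule of my own (the thresholds eps0 and u0 are hypothetical stand-ins for the ϵ and U in (2) and (3), not values anyone in this discussion has endorsed) showing how such a principle blocks Pascal's Mugger while leaving ordinary bets to expected-utility reasoning.

```python
def bird_in_hand_choice(p_payoff, payoff, cost, eps0=1e-9, u0=100.0):
    # Toy rule: if the probability of the big payoff is below eps0 and the sure
    # cost exceeds u0, refuse the gamble no matter how large the payoff is;
    # otherwise fall back to an ordinary expected-value comparison.
    if p_payoff < eps0 and cost > u0:
        return "refuse"
    return "accept" if p_payoff * payoff > cost else "refuse"

# Pascal's Mugger: astronomical payoff, negligible probability, real sure cost.
print(bird_in_hand_choice(p_payoff=1e-100, payoff=1e120, cost=1000.0))  # refuse
# An ordinary bet that the rule leaves to expected value.
print(bird_in_hand_choice(p_payoff=0.4, payoff=300.0, cost=100.0))      # accept
```

Everything turns on where eps0 and u0 come from, which is exactly the question about the parameters in (3) raised below.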
Alex,
I think a potential difficulty is that there are only finitely many numeric probabilistic assignments that are not comprehension-transcendent for humans. It might still be that for every finite utility U and every non-zero probability ϵ > 0 that isn't beyond comprehension, there is a finite utility V such that a gamble that offers a probability ϵ of getting V is always better than a certainty of U, and that does not seem to yield the paradox (though there are alternative arguments against that).
Perhaps, a way around that is to say the math is only a way of speaking and modeling it, but the principle still holds as long as humans do assign arbitrarily small nonzero probabilities. However, if - say - humans have only finitely many distinct mental states, this does not hold, either. So, in one way or another, this requires that humans have infinitely many distinct mental states, and moreover, that humans assign infinitely many arbitrarily small nonzero probabilities - or at least that humans potentially do that, but that "potentially" is difficult to make precise, as metaphysical possibility of said assignments would not be enough I think.
As others have said, no one is Archimedean in practice, and for good practical reasons. But it could be argued (e.g., by Robert Martin in the Stanford Encyclopedia of Philosophy entry on the St. Petersburg paradox) that invoking practical constraints to resolve paradoxes is changing the subject.
If you invoke practical constraints, where does that leave the normative status of decision theory? It seems you would have to build the constraints into the theory. For example, the theory should say how the constraints lead to a particular ϵ and U in (3). This seems non-trivial.
I don't think this is a matter of practical constraints but of rationality. Consider that my bilking argument against (1) is very much like the Dutch Book arguments people use to establish conclusions about rationality.
If I'm right, it's not impractical to accept 100 years of torture for the sake of a one in a googolplex chance of eternal bliss. It's irrational.
That said, there is the question of where the parameters in (3) come from. They aren't going to come from purely formal constraints. This means that rationality is not based on purely formal constraints.
I think rational choice is based not only on the agent's preferences (or, if you like, what's good for them), but also on their cognitive capabilities.
For example, let's say there is a bet on whether a certain number n is prime. The number is not beyond human comprehension in every sense (I think there are numbers we understand in some ways but not in others), but it's not yet known to human civilization whether it's a prime. The number was picked among odd numbers from the largest known prime up to a number greater than n^10 by a new supercomputer (say, it's a new quantum computer and the bet is part of Google's publicity campaign, showing off its unprecedented capabilities, and they're willing to spend a few million dollars on this campaign), by a procedure that it's rational to consider random (more details can be given if needed).
Alice is a normal human being with access to no special equipment, her income is average for an American, and she may:
1. Bet that the number n is a prime. If it is, she gets $100. If it's not, she has to pay $10.
2. Bet that the number n is not a prime. If it's not, she gets $50000. If it is, she has to pay $10.
3. Decline to bet.
The computer's capabilities allow it to determine whether n is a prime, with an extremely low probability of error (we can give an approximate number if needed).
It's rational for Alice to bet that n is not a prime. But if Alice had different cognitive capabilities and she could determine quickly and without cost whether n is a prime (with a probability of error less than, say, 1 in 10^50), then the rational bet would depend on whether she determines it's a prime (assuming she still cares about money).
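For what it's worth, a back-of-the-envelope calculation (my own sketch; it assumes n is drawn above the largest known prime, a Mersenne prime with exponent 82,589,933 at the time of writing, and uses the standard 1/ln n prime-density heuristic) shows how lopsided the expected values are for someone with only Alice's information:

```python
import math

# n is an odd number above the largest known prime, so ln(n) is enormous.
ln_n = 82_589_933 * math.log(2)   # assumed lower bound on ln(n)
p_prime = 2 / ln_n                # density ~ 1/ln n, doubled since only odd numbers are drawn

ev_bet_prime     = 100 * p_prime - 10 * (1 - p_prime)
ev_bet_not_prime = 50_000 * (1 - p_prime) - 10 * p_prime

print(f"P(n is prime) ~ {p_prime:.1e}")                        # around 3.5e-08
print(f"EV of betting 'prime':     {ev_bet_prime:,.2f} dollars")      # about -10
print(f"EV of betting 'not prime': {ev_bet_not_prime:,.2f} dollars")  # about 50,000
```

With different cognitive capabilities, Alice's evidence (and hence p_prime) would change, and so would the rational bet, even with the very same preferences.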
The scenario is a rough sketch, and it could be improved in order to make it natural that her capabilities change but not her preferences (not in a way that affects the bet, to be more precise), that the company is willing to offer the money, etc., but the idea is that rational choice depends on cognitive abilities in addition to preferences. In fact, even a wager against something that is logically necessary would be rational for limited agents (e.g., a wager on whether a certain formula is a theorem of a first-order predicate calculus; say, a formula that on a standard interpretation says that some axioms imply that n is prime, where it turns out that this is a theorem).
That seems right. But the irrationality of accepting a googolplex years of torture for the sake of a one in a googolplex chance of eternal bliss doesn't seem to depend on cognitive abilities very much. Of course, one has to understand what a googolplex is, what torture is, etc. But as long as one satisfies these preconditions, the judgment seems right.
I think your example involves a number of subtleties that one should look at carefully. Maybe it shows there is something to rational choice beyond preferences and cognitive abilities. But I'm not sure, because of those subtleties and the effect they might have. For example:
1. Can we actually assign a probability of 1/googolplex?
The issue is that even if we understand a googolplex on some level (though I think there are different levels of understanding), if we can assign a probability of 1/googolplex, then it seems to me we can also assign 1/n for each n greater than 1 and less than a googolplex. But that would seem to require that we can actually have more than a googolplex different mental states - not just in a sense of metaphysical possibility; it would have to be at least nomologically possible, it seems to me.
Is that true?
I don't know. If the universe is continuous, then maybe so. But what if this is not so, and it's discrete?
I don't know whether it's discrete, but if it is discrete, then it may well be that the number of mental states available to us is less than 1 googolplex.
2. Similarly, if the number of mental states we can have is finite, then it seems probable that we can only assign finite utilities, so the eternal bliss would be also finite.
3. Even if potentially we can assign infinite utility, I don't know that eternal bliss has infinite utility, based on our preferences.
On that note, if you leave aside googolplexes, would it be rational to accept 10000 years of torture for the sake of a certain (1 in 1) chance of eternal bliss?
If not, perhaps on our preference structure, avoiding some bad things is more valuable than getting any positive things. Granted, eternal bliss also involves implicitly the avoidance of a negative thing - namely, death -, but the negative of 10000 years of torture - if it's horrible enough, at least - might outweigh it. It's not so easy to say.
I think in order to deal with some of those subtleties, we'd need to consider a number of parallel cases. For example, we can remove the issue of positive vs. negative by comparing not torture with bliss, but torture with more torture. But still, there are further difficulties if one assumes that we can assign all of those distinct probabilities, because in that case sorites-like scenarios create a number of additional complications, and we can't properly reject the scenarios on the grounds that our language is not precise enough (or on similar grounds), once we have already assumed that we can actually assign probabilities like 1/googolplex.
That aside, there is the issue of the rationality of the preference structure of an agent. Perhaps there is more to rational choice as long as there is such a thing as an irrational preference structure (even if it's a consistent one). But that, I think, is a different issue, and even in that case, it may be that rational choice depends on whether the preferences are rational, on what they actually are if rational, and on the cognitive abilities of the agent.
I don't think we need to be able to assign the intermediate probabilities. The assignment can be linguistic in nature, and we do have a simple definition of a googolplex.
I do think this requires saying that credences are not something like intensities of belief. They are more fine-grained than our intensities are.
This is relevant: http://alexanderpruss.blogspot.com/2010/09/getting-below-hood-on-belief-and-desire.html?m=1
That would seem to require rejecting not only that they are intensities of belief, but also that they are the intensities of some mental state or other - whatever that is.
In my view, assignments of - say - 1/1000, or 1/100, etc., are just approximations to what we actually assign, which would indeed be similar to intensities of belief, or something in the vicinity - or rather, belief would be a type of very high assignment.
But alright, let's assume we're justified in rejecting the above (i.e., intensities of belief or anything else).
If the assignment is merely linguistic, there is the question of how that tracks rational choices, at least as long as the number of assignments we can make is finite.
For example, we have a definition of a googolplex, and of many other numbers, in base 10, and we can assign probabilities like 1/10, 1/100, 1/googolplex, as well as others like 1/2, 1/3, etc. However, we cannot assign all of the probabilities in between; we assign those that we have a simple definition of, and we have a certain order.
Now, let's say that in another human civilization, they use a base-8 system. They have simple definitions of other numbers, so their assignments will be different from ours. But it's not at all clear - at least, we shouldn't believe it without considerably more evidence - that they assign probabilities in ways psychologically different from the way we do, that as a result their rational choices are different from ours, etc., let alone that they (or we) can assign smaller probabilities (depending on which numbers in their system have simpler definitions, or more complicated ones).
Maybe there is a way around that, but it's not trivial; argumentation would be required.
Granting that, as you say, it's only linguistic and we have no problem assigning 1/googolplex, I think there are other issues.
Do you think it would be rational to accept 1 googolplex years of torture in order to get a certainty of eternal bliss after that?
P.S: Thanks for the link. I'll think about that more carefully.
Alex,
I've been thinking about linguistic assignments of probability. As I see it, that's not what probabilistic assignments are. Here's an argument:
In addition to the base-8 example, here's a perhaps more direct difficulty: there are cultures without words for numbers. They only have words for "small amount" and "large amount", or at most "one", "few", and "many". If probabilistic assignments are linguistic, people in those cultures just don't make them. But that implies that the correct theory of rational choice for them does not involve probabilities. In particular, the bird-in-the-hand principle is false for members of the Piraha tribe (at least as long as they weren't taught numbers by someone else; but nearly all weren't, and none was until, say, a century ago).
But does the correct theory of rational choice for humans depend on language?
Preferences and cognitive ability are variable among humans, but this would require that even the basic theory be variable - and massively so: it would assign no probabilities in cultures without numbers, very different ones in cultures with other bases, etc.
On the basis of that and other factors, in my assessment, intuitive probabilistic assignments - whatever they are - are not linguistic.
In re: doxins and orektins. I'll have to think about them a lot more, but one thing I can tell is that the theory is not true if assignments of credence are linguistic, because in that case, the Piraha would have no doxins or orektins even though we do, and that seems clearly not to be the case: cognitive variation among humans may be big, but not to the point that the structure of the mind is so radically different that even the basic mental states are radically different.
I do not think that all credence assignments are linguistic in nature. The small child who has no probabilistic language can have an imprecise credence in some proposition p.
To be honest, I have no idea how our credences work. But it seems clear that our decision-theoretic behavior makes language-based distinctions that are finer grained than "intensities" of mental states are likely to be. Sam offers you a lottery ticket for $1 for one of two lotteries whose prize is ten million dollars. In the first lottery, there are a million tickets and in the second there are a million and ten tickets. It is extremely unlikely that our intensities of mental states distinguish between 1/1000000 and 1/1000010. But it is also clear that we should rationally decide to go for the first lottery, assuming we grasp all the relevant mathematical and empirical facts. So either (a) our credences don't supervene on "intensities", or (b) our rational decision behavior doesn't track our credences. I prefer (a).
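The arithmetic behind that judgment is easy to spell out (a sketch using the prize and ticket counts in the example):

```python
prize = 10_000_000
ev_first  = prize / 1_000_000   # 10.00 dollars expected value per 1-dollar ticket
ev_second = prize / 1_000_010   # about 9.9999 dollars expected value per 1-dollar ticket
print(ev_first, ev_second, ev_first - ev_second)
```

The difference is roughly a hundredth of a cent, far below anything a felt "intensity" plausibly resolves, yet the rational ranking of the two tickets is clear.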
I think that if it is the case that our mental states do not distinguish between 1/1000000 and 1/1000010, probably the distinction we're making is in part between "more probable" and "less probable", and in part something else, but "more probable" and "less probable" is enough to make a rational choice given that the prize is the same. However, in that case we're probably not actually assigning 1/1000000 or 1/1000010. Rather, those numbers are a model of our intuitive probabilistic assessments, one that in this particular case does not match them, though it may well approximate them well enough for the purposes we may have (though that depends on the purposes).
That aside, I'm not sure it is extremely unlikely that our intensities of mental states distinguish between 1/1000000 and 1/1000010. Why do you think so?
Perhaps, we should try to narrow down what we mean by "intensities". There might be some miscommunication going on.
Still, let's grant for the sake of the argument that our credences don't supervene on intensities (whatever those are). In other words, let's assume (a).
Even so, I think the problem for the linguistic approach remains, because not only small children but normal adults with no probabilistic language and not even numerical language can have credences for many, many things. However, if we can assign linguistic credences like 1/googolplex, those would seem to be credences of a very different kind.
If our rational decision behavior tracked those linguistic credences, then our decision behavior would be tracking completely different things: the credences of numberless people (and perhaps our credences when we're not thinking explicitly about probabilities), and then the numeric and linguistic credences (also called "credences", but apparently a very different object), and mixing them up in the same process (e.g., to apply Bayes, even if intuitively). That seems very unlikely to me, either as a psychological hypothesis or as an epistemic one.
So, assuming that we can assign 1/googolplex, etc., but that that assignment is linguistic, it seems to me that even granting (a), our linguistic credences are probably not tracked by our decision-making process, which would instead track something else (whatever the non-linguistic credences are). I guess it could be argued that even if, psychologically, our decision-making process does not track our linguistic credences, our rational decision-making process ought to track them. But that would need some argumentation, and I find it intuitively very improbable. Why would it be rational to be tracking such different things?
That aside, I do suspect that having mathematical and probabilistic language might allow us to introduce finer-grained distinctions in our decision-making process, and as a result increase the number of credences we can assign, but the credences themselves (even if associated with some number in mathematical language) are something else, and remain non-linguistic.