Friday, September 21, 2018

Lottery cases and Bayesianism

Here’s a curious thing. The ideal Bayesian reasons about all contingent cases just as she reasons about lottery cases. If the reasoning doesn’t yield knowledge in lottery cases (i.e., if the ideal Bayesian can’t know that she won’t win the lottery), it doesn’t yield knowledge in any contingent case. So, if the ideal Bayesian doesn’t know in lottery cases, she doesn’t know in any contingent case. But surely she knows some contingent truths. So she knows in lottery cases, I say.

Wednesday, September 19, 2018

Gettier and lottery cases

I am going to give some cases that support the following thesis:

  1. If you can know that you won’t win the lottery, then in typical Gettier cases you are in a position to know, in the sense that there is a line of statistical reasoning such that if you engage in it, then you know.

There are two conclusions you could draw. You might think, as I do, that you know you won’t win the lottery (assuming that in fact you won’t win). In that case, (1) will offer something rather cool: it will tell you that if you reason in a statistical sort of way, you can get knowledge where instead you would have had mere justified true belief. If knowledge is valuable, that will be a good reason to reason in that statistical way rather than the more usual way.

Or you might think the consequent of (1) is absurd, and conclude that you don’t know you won’t win the lottery.

Start with the colorful lottery case. In this case, there are 1000 lottery tickets, each in a sealed envelope. Of these, 990 are printed on white paper, and they are all losing tickets. The remaining 10 tickets are printed on red paper, and 9 of them are winning tickets. You’re handed a sealed envelope. So, you have a 99.1% probability of losing. I will assume that that’s good enough for knowledge. (If not, tweak the numbers. More on that issue later.) Suppose you come to believe you will lose on the basis of this argument:

  2. You have a ticket.

  3. If you have a ticket, it’s very likely that it’s a white losing ticket or a red losing ticket.

  4. So, very likely it’s a losing ticket.

  5. So (ampliatively) it’s a losing ticket.
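
For concreteness, here is a quick sanity check of the numbers in Python. The ticket counts are from the story above; the conditional-on-red figure at the end is not part of the argument, but it shows why the red losing ticket will matter in Objection 2 below.

```python
from fractions import Fraction

# Ticket counts from the colorful lottery case:
# 990 white losing tickets, 1 red losing ticket, 9 red winning tickets.
tickets = [("white", "lose")] * 990 + [("red", "lose")] + [("red", "win")] * 9

# Probability that a uniformly drawn ticket loses.
p_lose = Fraction(sum(1 for _, o in tickets if o == "lose"), len(tickets))
print(p_lose, float(p_lose))  # 991/1000 0.991

# For contrast: the probability of losing conditional on the ticket being red.
red = [(c, o) for c, o in tickets if c == "red"]
p_lose_given_red = Fraction(sum(1 for _, o in red if o == "lose"), len(red))
print(p_lose_given_red)  # 1/10
```

With these numbers, the unconditional argument clears a 99% threshold even though the red tickets, taken on their own, are mostly winners.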

Suppose, further, that in fact you’re right—it is a losing ticket. Then, assuming you know in lottery cases, your belief that you will lose is knowledge. And I further submit that it doesn’t matter for knowledge whether your ticket is actually printed on white or red paper. All that matters for knowledge is that it is in fact a losing ticket. The reason it doesn’t matter is that the color of the tickets is just noise in the story. Clearly, if your ticket is white, you know you’ll lose. But you also know it if it’s red. For regardless of which losing ticket you have, there is always some property (perhaps a disjunctive one) that your losing ticket and all nine winning tickets share. That that property is redness doesn’t seem to matter at all.

So, I take it that if you can know that you won’t win the lottery, then in the colorful lottery case you know you won’t win when in fact you have a losing ticket—even if that losing ticket is red.

Now let’s move to Gettier cases. Take a standard Gettier case where you see what looks like a sheep in the field, and you come to believe that it is a sheep, and so you conclude that there is a sheep in the field. But in fact what you see isn’t a sheep but a dog standing in front of a sheep. So, you have a justified true belief that there is a sheep, but you don’t know it. But suppose that instead of reasoning through the claim that you see a sheep, you reason as follows:

  6. You seem to see a sheep.

  7. If you seem to see a sheep, it’s very likely that you see a sheep in the field or there is an unseen sheep in the field.

  8. So, very likely there is a sheep in the field.

  9. So (ampliatively) there is a sheep in the field.

This seems to be perfectly good reasoning. It would clearly be good reasoning if the consequent of (7) were just “you see a sheep in the field”; adding the “unseen sheep” disjunct only makes the reasoning better. Moreover, this reasoning is exactly parallel to the colorful lottery case. So just as in the colorful lottery case you know that you have a losing ticket regardless of whether it is a white ticket or a red losing ticket, likewise in this case you know that there is a sheep in the field regardless of whether you see a sheep in the field or there is an unseen sheep in the field.
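
To see the parallel numerically, here is a minimal Monte Carlo sketch of (6)–(9). The misperception and unseen-sheep rates are illustrative assumptions of mine, not anything fixed by the case; all the sketch shows is how the two disjuncts of (7) pool into the “very likely” of (8).

```python
import random

random.seed(0)

# Illustrative assumptions: how often seeming to see a sheep involves really
# seeing one, and how often a lookalike (e.g., a dog) stands in front of an
# unseen sheep.
P_REALLY_SEE_SHEEP = 0.95     # assumed
P_UNSEEN_SHEEP_BEHIND = 0.50  # assumed

trials = 100_000
sheep_present = 0
for _ in range(trials):
    if random.random() < P_REALLY_SEE_SHEEP:
        sheep_present += 1  # you see a sheep, so there is one in the field
    elif random.random() < P_UNSEEN_SHEEP_BEHIND:
        sheep_present += 1  # a dog is in view, but a sheep is hidden behind it

# Estimated probability of "there is a sheep in the field" given
# "you seem to see a sheep": the disjunction in (7).
print(sheep_present / trials)  # about 0.975 under these assumptions
```

Either disjunct makes the conclusion true, which is why the reasoning survives the Gettier twist.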

So, it seems that although reasoning via the claim that what you see is a sheep would not lead to knowledge—Gettier is right about that—you had an alternate statistical form of reasoning (6)–(9) that would have given you knowledge.

If knowledge is valuable—something I doubt—that’s reason to prefer such statistical forms of reasoning.

Objection 1: In lottery cases you only know when the probabilities are overwhelming, while in ordinary perceptual knowledge cases you know with much smaller probabilities. Thus, perhaps, you only know in lottery cases when the chance of winning is something like one in a million, but in perceptual cases you can know even if there is a 1% chance of the observations going wrong. If so, then standard Gettier cases may not be parallel to those colorful lottery cases where there is knowledge, because the “very likely” in (3) will have to be much stronger for knowledge than in (7).

Response: If you can know in perceptual cases at, say, 99% probability but need a much higher probability in lottery cases, you have the absurd consequence that you can say things like: “I think it’s a lot more likely that I lost the lottery than that I see a sheep. But I know that I see a sheep and I don’t know that I lost the lottery.” Now, this sort of reasoning will not appeal to everyone. But I think it is one of the main intuitions behind the thought that you know in lottery cases at all. And my argument was predicated on knowing in lottery cases.

Moreover, even if there is the sort of difference that the objection claims, we still get the very interesting result that in some Gettier cases—say, ones that involve highly reliable perception—we can gain knowledge if we switch to statistical reasoning.

Objection 2: You don’t know you will lose in the colorful lottery case when in fact you have a red losing ticket, but you do know when in fact you have a white ticket.

Response: If that case doesn’t convince you, consider this variant. A doctor has a treatment for a condition that occasionally, though rarely, goes away on its own. The treatment is highly effective: most of the time it fixes the condition. The doctor reasons:

  10. You will get the treatment.

  11. If you will get the treatment, very likely either you will recover because of it or you will recover spontaneously.

  12. So, very likely you will recover.

  13. So (ampliatively) you will recover.

And this reasoning seems to yield knowledge—at least if we admit knowledge of chancy future things, as friends of knowledge in lottery cases must—as long as you will in fact recover; and it will yield knowledge regardless of whether you recover because of the treatment or spontaneously.
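
Plugging made-up numbers into the disjunction in (11) shows how the doctor’s “very likely” comes out. The effectiveness and spontaneous-recovery rates below are my assumptions, not part of the case.

```python
# Illustrative assumptions: the treatment is highly effective and
# spontaneous recovery is relatively rare, as in the story above.
p_treatment_works = 0.95  # assumed
p_spontaneous = 0.05      # assumed chance of recovering without the treatment working

# Premise (11)'s disjunction: recover because of the treatment,
# or recover spontaneously.
p_recover = p_treatment_works + (1 - p_treatment_works) * p_spontaneous
print(p_recover)  # 0.9525 under these assumptions
```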

Tuesday, November 21, 2017

Perfect rationality and omniscience

  1. A perfectly rational agent that is not omniscient can find itself in lottery situations, i.e., situations where it is clear that there are many options, exactly one of which can be true, with each option having approximately the same epistemic probability as any other.

  2. A perfectly rational agent must believe anything there is overwhelming evidence for.

  3. A perfectly rational agent must have consistent beliefs.

  4. In lottery situations, there is overwhelming evidence for each of a set of inconsistent claims, namely for the claims that one of options 1,2,3,… is the case, but that option 1 is not the case, that option 2 is not the case, that option 3 is not the case, etc.

  5. So, in lottery situations, a perfectly rational agent has inconsistent beliefs. (2,4)

  6. So, a perfectly rational agent is never in a lottery situation. (3,5)

  7. So, a perfectly rational agent is omniscient. (1,6)
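
A small sketch makes premise 4 concrete. The lottery size and the “Lockean” belief threshold are illustrative assumptions; the point is only that every member of an inconsistent set of claims can clear the threshold.

```python
# Illustrative assumptions: a 1000-option lottery situation and an agent
# that believes whatever has epistemic probability above 0.99.
N = 1000
THRESHOLD = 0.99

beliefs = [("some option among 1..N obtains", 1.0)]  # probability 1
p_not_i = (N - 1) / N  # 0.999: overwhelming evidence against each option
if p_not_i > THRESHOLD:
    beliefs += [(f"option {i} does not obtain", p_not_i) for i in range(1, N + 1)]

# Each belief clears the threshold, yet jointly the beliefs rule out all N
# options while also insisting that one of them obtains.
print(len(beliefs), "beliefs, each with probability above", THRESHOLD)
```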

The standard thing people like to say about arguments like this is that they are a reductio of the conjunction of the premises 2 through 4. But I think it might be interesting to take it as a straightforward argument for the conclusion 7. Maybe one cannot separate out procedural epistemic perfection (perfect rationality) from substantive epistemic perfection (omniscience).

That said, I am inclined to deny 3.

It’s worth noting that this yields another variant on an argument against open theism. For even though I am inclined to think that inconsistency in beliefs is not an imperfection of rationality, it is surely an imperfection simpliciter, and hence a perfect being will not have inconsistent beliefs.

Thursday, May 3, 2012

Curley's crooked lottery

Every week, Curley sells a thousand tickets for a lottery at ten dollars each, with a prize of fifty thousand dollars, entering the names in a ledger numbered from one to a thousand. He then goes to random.org to choose a number between 1 and 1000, and that's the winner's number. Next he instructs his secretary, Moe, to type up letters to the thousand entrants, each of which expresses Curley's regrets that the entrant did not win the lottery. Curley never bothers to look up in the ledger who the winner is, but he knows that Moe does. Normally Moe then brings the thousand letters for Curley to sign, with the winner's letter—which of course also regretfully informs the entrant that he or she did not win—on top, and Curley knows that.

Every week, thus, Curley rakes in ten thousand dollars less administrative costs by lying to one person—the "winner". The winner never comes to claim the prize, and so all is financially well for Curley.

This week, however, as Moe brings the letters to Curley, he trips in full view of Curley and the letters get all mixed up. Curley still signs the thousand letters. Each letter that Curley signs is very likely to be true.

It seems that in this week's lottery, Curley has managed to avoid lying. He does not assert to anybody anything that he disbelieves. He does, of course, sign the letter misinforming Patricia Hammerford, the winner, that she is not the winner. But while he is signing it, he believes it is very likely true, indeed has probability 0.999, that she is not the winner. Of course, there may be a moral problem with saying something that one thinks is very likely true but which one does not outright believe. But that does not seem to be a very large moral problem. It's not lying.

Here's one thing you could say. Each individual letter that Curley signs this week involves his asserting something that he does not believe, though he does take it to be probable. In itself, each letter is not a large moral problem. But in aggregate, especially as the signing of the letters is all a part of a single action plan, we have a large moral problem.

This could be. But I also think one might have the intuition that what Curley is doing this week is morally on par with the lying he engaged in during the previous weeks. And I am not sure the above aggregate story yields that.

Here's the start of a solution I like: Curley intends to assert to the winner that he or she is not the winner. He fulfills this plan by asserting a parallel claim to each entrant. The following seems true:

  1. Fulfilling the intention to assert to the winner that he or she is not the winner is morally on par with lying.

But I am having a difficult time formulating an appropriately general form of this principle.

An alternative approach is to say that one is lying whenever one asserts something that one does not believe, even if one does not disbelieve it either. Thus, Curley is lying, even though he believes that what he is asserting is probably true. A problem with this is that it makes Curley count as lying a thousand times this week, whereas last week he lied only once. Maybe the thousand lies are small (because he thinks he is probably telling the truth in each case), but they add up to an equivalent of the big lie from the previous week. But I am dubious of such moral arithmetic.

Thursday, February 23, 2012

Infinite lotteries and infinitesimal probabilities

The argument in this post is based on a construction by Dubins (see Example 2.1 here) that I've switched into an infinitesimal case.

Suppose you can have an infinite lottery with ticket numbers 1,2,3,... and each ticket has infinitesimal probability (perhaps the same one for each). Then really weird stuff can happen. Say I toss a fair coin, but don't show you the result. Instead, you know for sure that I will do this:

  • If the coin was tails, I run an infinite lottery with ticket numbers 1,2,3,... and with each ticket having infinitesimal probability
  • If the coin was heads, I run an infinite lottery with the same ticket numbers, but now the probability of ticket n is 2^(−n).
And you know for sure that I will then announce the result of the lottery.

Here's the oddity. No matter what my announcement, you will end up all but certain—i.e., assigning a probability infinitesimally short of 1—that the coin was heads. Here's why. Suppose I announce ticket n. Now, P(n|heads) = 2^(−n) but P(n|tails) is infinitesimal. Plugging these facts into Bayes' theorem, and assuming that your prior probability for heads was 1/2 (actually, all that's needed is that it be neither zero nor infinitesimal), your posterior probability P(heads|n) ends up equal to 1−a where a is infinitesimal.
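
Python has no genuine infinitesimals, but a sketch with a tiny exact rational standing in for the infinitesimal shows the behavior of Bayes' theorem here. The choice of eps below is arbitrary; what matters is only that it sits far below the 2^(−n) values in play.

```python
from fractions import Fraction

def posterior_heads(n, eps, prior_heads=Fraction(1, 2)):
    """P(heads | ticket n) via Bayes' theorem, with eps standing in for
    the infinitesimal probability of each ticket under tails."""
    p_n_heads = Fraction(1, 2) ** n  # P(n | heads) = 2^(-n)
    p_n_tails = eps                  # P(n | tails): the "infinitesimal"
    num = p_n_heads * prior_heads
    return num / (num + p_n_tails * (1 - prior_heads))

# A tiny rational stand-in. (A true infinitesimal would be below 2^(-n)
# for every n; a fixed rational is only below it for moderate n.)
eps = Fraction(1, 10) ** 100

for n in (1, 10, 50):
    shortfall = 1 - posterior_heads(n, eps)  # the "a" in 1 - a above
    print(n, float(shortfall))  # tiny for every announced ticket n
```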

So I can rationally force you to be all but certain that it was heads, simply by telling you the result of my lottery experiment. And by reversing the arrangement, I could force you to be all but certain that it was tails. Thus there is something pathological about the infinite lottery with infinitesimal probabilities.

This is, to me, yet another of the somewhat unhappy results that show that probability theory has a quite limited sphere of epistemological application.