I am going to give some cases that support this thesis.

(1) If you can know that you won’t win the lottery, then in typical Gettier cases you are in a *position* to know, in the sense that there is a line of statistical reasoning such that if you engage in it, then you know.

There are two conclusions you could draw. You might think, as I do, that you know you won’t win the lottery (assuming that in fact you won’t win). In that case, (1) will offer something rather cool: it will tell you that if you reason in a statistical sort of way, you can get knowledge where instead you would have had mere justified true belief. If knowledge is valuable, that will be a good reason to reason in that statistical way rather than the more usual way.

Or you might think the consequent of (1) is absurd, and conclude that you don’t know you won’t win the lottery.

Start with the colorful lottery case. In this case, there are 1000 lottery tickets, each in a sealed envelope. Of these, 990 are printed on white paper, and they are all losing tickets. The remaining 10 tickets are printed on red paper, and 9 of them are winning tickets. You’re handed a sealed envelope. So, you have a 99.1% probability of losing. I will assume that that’s good enough for knowledge. (If not, tweak the numbers. More on that issue later.) Suppose you come to believe you will lose on the basis of this argument:

(2) You have a ticket.

(3) If you have a ticket, it’s very likely that it’s a white losing ticket or a red losing ticket.

(4) So, very likely it’s a losing ticket.

(5) So (ampliatively) it’s a losing ticket.

Suppose, further, that in fact you’re right—it is a losing ticket. Then, assuming you can know in lottery cases, your belief that you will lose is knowledge. And I further submit that it doesn’t matter for knowledge whether your ticket is actually printed on white or red paper. All that matters for knowledge is that it is in fact a losing ticket. The reason it doesn’t matter is that the color of the tickets is just noise in the story. Clearly, if your ticket is white, you know you’ll lose. But you also know it if it’s red. For regardless of which losing ticket you have, there is always some property (perhaps a disjunctive one) that your losing ticket and all nine winning tickets share. That this property happens to be redness doesn’t seem to matter at all.

So, I take it that if you can know that you won’t win the lottery, then in the colorful lottery case you know you won’t win when in fact you have a losing ticket—even if that losing ticket is red.
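Since knowledge here is supposed to ride on the numbers, it may help to check the arithmetic of the colorful lottery case. A minimal sketch in Python, using the ticket counts from the story:

```python
# Colorful lottery: 1000 tickets in sealed envelopes.
# 990 white tickets, all losing; 10 red tickets, 9 winning and 1 losing.
white_losing = 990
red_losing = 1
red_winning = 9
total = white_losing + red_losing + red_winning  # 1000

# Unconditional probability of losing, as in the story.
p_lose = (white_losing + red_losing) / total
print(f"P(lose) = {p_lose:.1%}")  # 99.1%

# Conditional on holding a red ticket, the statistics reverse.
p_lose_given_red = red_losing / (red_losing + red_winning)
print(f"P(lose | red) = {p_lose_given_red:.0%}")  # 10%
```

The conditional figure is what gives the red-ticket variant its bite: given that your ticket is red, the statistics no longer favor losing, even though your actual reasoning never conditioned on color.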

Now let’s move to Gettier cases. Take a standard Gettier case where you see what looks like a sheep in the field, and you come to believe that it *is* a sheep, and so you conclude that there is a sheep in the field. But in fact what you see isn’t a sheep but a dog standing in front of a sheep. So, you have a justified true belief that there is a sheep, but you don’t know it. But suppose that instead of reasoning through the claim that you see a sheep, you reason as follows:

(6) You seem to see a sheep.

(7) If you seem to see a sheep, it’s very likely that you see a sheep in the field or there is an unseen sheep in the field.

(8) So, very likely there is a sheep in the field.

(9) So (ampliatively) there is a sheep in the field.

This seems to be perfectly good reasoning. It would clearly be good reasoning if the consequent of (7) were simply “you see a sheep in the field”. But adding the “unseen sheep” disjunct only makes the reasoning better. Moreover, this reasoning is exactly parallel to the colorful lottery case. So just as in the colorful lottery case you know that you have a losing ticket regardless of whether that losing ticket is white or red, likewise in this case you know that there is a sheep in the field regardless of whether you see a sheep in the field or there is an unseen sheep in the field.

So, it seems that although reasoning via the claim that what you see is a sheep would not lead to knowledge—Gettier is right about that—you had an alternate statistical form of reasoning (6)–(9) that would have given you knowledge.

If knowledge is valuable—something I doubt—that’s reason to prefer such statistical forms of reasoning.
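The claim that adding the “unseen sheep” disjunct only strengthens the reasoning is, probabilistically, just the fact that P(A ∨ B) ≥ P(A): weakening the consequent cannot lower its probability. A toy simulation illustrates this; all the specific probabilities below are illustrative assumptions, not taken from the cases above:

```python
import random

random.seed(0)
N = 100_000

# Hypothetical world model (made-up numbers): when you seem to see a sheep,
# 95% of the time you really see one; independently, 20% of the time there
# is also an unseen sheep in the field.
see = [random.random() < 0.95 for _ in range(N)]
unseen = [random.random() < 0.20 for _ in range(N)]

p_see = sum(see) / N                                  # P(you see a sheep)
p_disj = sum(s or u for s, u in zip(see, unseen)) / N  # P(see one OR unseen one)

print(p_see, p_disj)
```

On the same sample, every world where the first disjunct holds is a world where the disjunction holds, so the disjunctive consequent of (7) is at least as probable as the plain one, whatever numbers you plug in.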

**Objection 1:** In lottery cases you only know when the probabilities are overwhelming while in ordinary perceptual knowledge cases you know with much smaller probabilities. Thus, perhaps, you only know in lottery cases when the chance of winning is something like one in a million, but in perceptual cases you can know even if there is a 1% chance of the observations going wrong. If so, then standard Gettier cases may not be parallel to those (colorized) lottery cases where there is knowledge, because the “very likely” in (3) will have to be much stronger for knowledge than in (7).

**Response:** If you can know in perceptual cases at, say, 99% probability but need a much higher probability in lottery cases, you have the absurd consequence that you can say things like: “I think it’s a lot more likely that I lost the lottery than that I see a sheep. But I know that I see a sheep and I don’t know that I lost the lottery.” Now, this sort of reasoning will not appeal to everyone. But I think it is one of the main intuitions behind the thought that you know in lottery cases at all. And my argument was predicated on knowing in lottery cases.

Moreover, even if there is the sort of difference that the objection claims, we still get the very interesting result that in *some* Gettier cases—say, ones that involve highly reliable perception—we can gain knowledge if we switch to statistical reasoning.

**Objection 2:** You *don’t* know you will lose in the colorful lottery case when in fact you have a red losing ticket, but you do know when in fact you have a white one.

**Response:** If that case doesn’t convince you, consider this variant. A doctor has a treatment for a condition that occasionally, though rarely, goes away on its own. The treatment is highly effective: most of the time it fixes the condition. The doctor reasons:

You will get the treatment.

If you will get the treatment, very likely either you will recover because of it or you will recover spontaneously.

So, very likely you will recover.

So (ampliatively) you will recover.

And this reasoning seems to yield knowledge—at least if we admit knowledge of chancy future things, as friends of knowledge in lottery cases must—as long as you will in fact recover; and it will yield knowledge regardless of whether you recover because of the treatment or spontaneously.

## 9 comments:

When you see the fluffy white dog and mistake it for a sheep, your justification for believing that there is a sheep is clearly the wrong sort. Hiding that fact in some statistics cannot make it any better. There is no such mistake in the lottery case. There is just the statistics. That is why you can know that you will not win the lottery, but not know that there is a sheep, even though they are statistically similar cases.

In the lottery case, I see the envelope and I mistake it for one that holds a white ticket. Seems like an exact parallel. In both cases, I don't draw my conclusion from the mistake. I draw my conclusion solely from the statistics.

Not quite exact:

You see a dog and mistake it for a sheep; still, you know there is a sheep via statistics.

You see an envelope containing a red ticket and mistake it for an envelope containing a white ticket; still, you know there is a losing ticket via statistics.

Maybe that difference is not important when reasoning from the statistics; but, I suspect that such reasoning is invalid:

There is some property in common between anything and any group of things (if only the disjunctive property of being one of either that thing or else that group), as you observe. But therefore you might "know" all sorts of things, by making mistakes. Consequently such reasoning is invalid (such "knowledge" is too easy).

Such knowledge is made easier, but it's still non-trivial: you still need justification, truth and belief.

(S) A sheep-or-pebble-like object is not usually a sheep.

Clearly (S) is true (such an object is usually a pebble).

Let us suppose that Bob sees a sheep-or-pebble-like object.

By (S) Bob *knows* that it is not a sheep (if it is not a sheep). So, Bob sees what looks like a sheep, and believes that it is not a sheep, the justification for his belief being (S); and it seems to me that you profess that

(i) if it is a fake sheep, then Bob *knew* that it was not a sheep, and

(ii) such knowledge is non-trivial.

It seems to me that I must have misunderstood you, somehow ...

What strikes me, on re-reading my last comment, is that (i) is false because Bob is clearly *mad* to have his belief for such a reason as (S).

And that is because (S) flies in the face of natural kinds.

And that is the big difference, with the lottery, I think.

Regarding (ii): maybe it is not trivial because it is not knowledge ((i) is false), because knowledge requires rationality, or something.

In the Gettier case, there are natural kinds, so the reasoning is not mad.

But, I wonder how different your (6)--(9) is, from the Gettier case ...

In the Gettier case, you see what you take to be a sheep, that is, you see what seems to be a sheep, and so you believe that it is a sheep. Implicit justification for that step is naturally going to be statistical. Implicitly, you will have something like (6)–(9), but, instead of (7),

(7*) If you seem to see a sheep, it’s very likely that you see a sheep.

And yet in the Gettier case you do not have knowledge.

The only difference that I can see is that your (7) is a bit like my “mad” case, in that your (7) flies in the face of natural kinds. You say that adding the “unseen sheep” disjunct only makes the reasoning better, but I think that it might make it a lot worse (via my “bad company” argument).

That is (I do tend to ramble, sorry): were you to reason via (6)–(9), then you would have got very lucky, because there are lots of other similar ways to reason. Either you do a lot of that, and are irrational, or else you got very, very lucky. Either way, you would not get knowledge.
