
Saturday, January 21, 2023

Knowing you will soon have enough evidence to know

Suppose I am just the slightest bit short of the evidence needed for belief that I have some condition C. I consider taking a test for C that has a zero false negative rate and a middling false positive rate—neither close to zero nor close to one. On reasonable numerical interpretations of the previous two sentences:

  1. I have enough evidence to believe that the test would come out positive.

  2. If the test comes out positive, it will be another piece of evidence for the hypothesis that I have C, and it will push me over the edge to belief that I have C.

To see that (1) is true, note that the test is certain to come out positive if I have C and has a significant probability of coming out positive even if I don’t have C. Hence, the probability of a positive test result will be significantly higher than the probability that I have C. But I am just the slightest bit short of the evidence needed for belief that I have C, so the evidence that the test would be positive (let’s suppose a deterministic setting, so we have no worries about the sense of the subjunctive conditional here) is sufficient for belief.

To see that (2) is true, note that given that the false negative rate is zero and the false positive rate is not close to one, a positive result provides non-negligible evidence for C; and since I was only the slightest bit short of the evidence needed for belief, that non-negligible boost is enough to push me over the edge.

If I am rational, my beliefs will follow the evidence. So if I am rational, in a situation like the above, I will take myself to have a way of bringing it about that I believe, and do so rationally, that I have C. Moreover, this way of bringing it about that I believe that I have C will itself be perfectly rational if the test is free. For of course it’s rational to accept free information. So I will be in a position where I am rationally able to bring it about that I rationally believe C, while not yet believing it.

In fact, the same thing can be said about knowledge, assuming there is knowledge in lottery situations. For suppose that I am just the slightest bit short of the evidence needed for knowledge that I have C. Then I can set up the story such that:

  3. I have enough evidence to know that the test would come out positive,

and:

  4. If the test comes out positive, I will have enough evidence to know that I have C.

In other words, oddly enough, just prior to getting the test results I can reasonably say:

  5. I don’t yet have enough evidence to know that I have C, but I know that in a moment I will.

This sounds like:

  6. I don’t know that I have C but I know that I will know.

But (6) is absurd: if I know that I will know something, then I am in a position to know that the matter is so, since that I will know p entails that p is true (assuming that p doesn’t concern an open future). However, there is no similar absurdity in (5). I may know that I will have enough evidence to know C, but that’s not the same as knowing that I will know C or even be in a position to know C. For it is possible to have enough evidence to know something without being in a position to know it (namely, when the thing isn’t true or when one is Gettiered).

Still, there is something odd about (5). It’s a bit like the line:

  7. After we have impartially reviewed the evidence, we will execute him.

Appendix: Suppose the threshold for belief or knowledge is r, where r < 1. Suppose that the false-positive rate for the test is 1/2 and the false-negative rate is zero. If E is a positive test result, then P(C|E) = P(C)P(E|C)/P(E) = P(C)/P(E) = 2P(C)/(1+P(C)). It follows by a bit of algebra that if my prior P(C) is more than r/(2−r), then P(C|E) is above the threshold r. Since r < 1, we have r/(2−r) < r, and so the story (either in the belief or knowledge form) works for the non-empty range of priors strictly between r/(2−r) and r.
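Here is a quick numerical check of the appendix in Python (a minimal sketch; the specific threshold r and prior are illustrative choices of mine, not fixed by the argument):

```python
# Check the appendix numbers: false-negative rate 0, false-positive rate 1/2.
# The threshold r and the prior p are illustrative assumptions.
r = 0.9                          # threshold for belief/knowledge
lower = r / (2 - r)              # lower end of the workable priors, here about 0.818
p = 0.85                         # a prior strictly between r/(2 - r) and r

p_pos = p * 1 + (1 - p) * 0.5    # P(E): certain positive given C, half the time otherwise
p_c_given_pos = p / p_pos        # Bayes: P(C|E) = P(C)P(E|C)/P(E) = 2P(C)/(1 + P(C))

print(lower < p < r)             # True: the prior falls just short of the threshold
print(p_pos > r)                 # True: so a positive result is itself believable (claim 1)
print(p_c_given_pos > r)         # True: and it pushes C over the threshold (claim 2)
```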

Tuesday, October 25, 2022

Learning from what you know to be false

Here’s an odd phenomenon. Someone tells you something. You know it’s false, but their telling it to you raises its probability.

For instance, suppose that at the beginning of a science class you are teaching your students about significant figures, and you ask a student to tell you the mass of a textbook in kilograms. They put it on a scale calibrated in pounds, look up on the internet that a pound is exactly 0.45359237 kg, and report that the mass of the object is 1.496854821 kg.

Now, you know that the classroom scale is not accurate to ten significant figures. The chance that the student’s measurement was right to ten significant figures is tiny. You know that the student’s statement is wrong, assuming that it is in fact wrong.

Nonetheless, even though you know the statement is wrong, it raises the probability that the textbook’s mass is 1.496854821 kg (to ten significant figures). For while most of the digits are garbage, the first couple are likely close. Before you heard the student’s statement, you might have estimated the mass as somewhere between one and two kilograms. Now you estimate it as between 1.45 and 1.55 kg, say. That raises the probability that the mass is in fact, to ten significant figures, 1.496854821 kg by about a factor of ten.
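A back-of-the-envelope way to see the factor of ten (the uniform before-and-after ranges are my simplifying assumptions):

```python
# Chance that a specific ten-significant-figure value (1.496854821 kg) is correct,
# modelling the estimate as uniform over each range. The ranges are assumptions.
bin_width = 1e-9                 # one ten-significant-figure step near 1.5 kg
range_before = 1.0               # before: mass uniform between 1 and 2 kg
range_after = 0.1                # after: mass uniform between 1.45 and 1.55 kg

p_before = bin_width / range_before
p_after = bin_width / range_after

print(round(p_after / p_before, 3))   # 10.0 -- the known-to-be-false report raised it tenfold
```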

So, you know that what the student says is false, but your credence in the content has just gone up by a factor of ten.

Of course, some people will want to turn this story into an argument that you don’t know that the student’s statement is wrong. My preference is just to take this case as another example of why knowledge is an unhelpful category.

Thursday, October 6, 2022

Having to do what one thinks is very likely wrong

Suppose Alice borrowed some money from Bob and promised to give it back in ten years, and this month it is time to give it back. Alice’s friend Carl is in dire financial need, however, and Alice promised Carl that at the end of the month, she will give him any of her income this month that she hasn’t spent on necessities. Paying a debt is, of course, a necessity.

Now, suppose neither Alice nor Bob remembers how much Alice borrowed. They just remember that it was some amount of money between $300 and $500. Now, obviously in light of her promise to Bob:

  1. It is wrong for Alice to give less to Bob than she borrowed.

But because of her promise to Carl, and because any amount above the owed debt is not a necessity:

  2. It is wrong for Alice to give more to Bob than she borrowed.

And now we have a puzzle. Whatever amount Alice gives to Bob, she can be extremely confident that it is either less or more than what she borrowed, and in either case she does wrong. Thus whatever Alice does, she is confident she is doing wrong.

What should Alice do? I think it’s intuitive that she should do something like minimize the expected amount of wrong.
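Here is a minimal sketch of that idea, under assumptions the post does not fix: the debt is uniformly distributed over $300–$500, and the amount of wrong is proportional to how far the payment misses the true debt (shortchanging Bob on one side, Carl on the other).

```python
# Minimize expected "amount of wrong", modelling wrongness as |payment - debt|.
# Both the uniform prior and the wrongness measure are illustrative assumptions.
import numpy as np

debts = np.linspace(300, 500, 2001)       # discretized uniform prior over the true debt
payments = np.linspace(300, 500, 201)     # candidate amounts Alice could pay Bob

expected_wrong = [np.mean(np.abs(x - debts)) for x in payments]
best = payments[int(np.argmin(expected_wrong))]

print(best)   # 400.0 -- under these assumptions the median of the prior is optimal
```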

Thursday, May 21, 2020

Double Effect and the death penalty

Each of the ten million densely populated planets in Empress Alice’s vast intergalactic empire has an average of one person on death row who has exhausted all appeals. Empress Alice’s justice system is a really good one, but she knows it to be fallible like all justice systems, and her statistics show there is a one in a million chance that someone sentenced to death who has exhausted all appeals is nonetheless innocent. So Alice knows that of the ten million people on death row, at least one is innocent (assuming independence, the probability that all are guilty is 0.999999^10,000,000 ≈ 0.000045). (If we think, as I do, that under ordinary circumstances the death penalty is unjustified, we may suppose that the empire is suffering from extraordinary circumstances such that roughly one case per planet of the death penalty is justified.)
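The arithmetic checks out (a one-liner, using the post’s own independence assumption):

```python
# Probability that all ten million death-row convicts are guilty, given independent
# one-in-a-million chances of a wrongful conviction, plus the expected number of innocents.
p_all_guilty = (1 - 1e-6) ** 10_000_000
print(p_all_guilty)                 # about 4.54e-05, i.e. 0.000045
print(10_000_000 * 1e-6)            # 10.0: the expected number of innocents on death row
```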

Every year, there is a Day of Justice. On that day, the Empress issues the order that all who are on death row and have exhausted appeals are to be executed.

So, the Empress intentionally kills ten million people. That by itself sounds terrible, but we have to remember that she has ten million planets, each with billions of people, in her empire. Alice is a morally sensitive person, and she is weighed down by unspeakable grief over what justice requires of her, but being an Empress she must do justice.

But what is worse, the Empress knows that at least one (and probably several more) of the ten million people she intentionally kills is innocent. And yet it seems wrong to intentionally kill those who are innocent.

Now, it seems that I’ve just committed a serious slip in reasoning. I’ve moved from the claim that Alice intentionally kills the ten million people to the claim that each was intentionally killed. Let’s say that Bob is one of the handful of innocents. Then Alice does not intentionally kill Bob, because she does not know anything about Bob specifically. Well, but that can be remedied. We may suppose that for a month prior to Justice Day, the Empress spends all her waking hours looking at the photo of every person she is to have executed, and praying a quick and specific prayer for them. At some point in the month, she did look at Bob’s photo and prayed: “God, have mercy on Bob and give comfort to his victims and his family.” We may even suppose that Alice has a photographic memory and when she issued her order, she saw all ten million people before her mind’s eye. That shouldn’t make any moral difference to the justification of the executions, though it adds to Alice’s imperial burden.

Perhaps the thing to say is this: Alice did a wrong unknowingly. A few of the people she had executed should not have been executed, but since she did not know who they were, she did not do wrong in intentionally killing them. But the worrying thing is that Alice also did know that she was doing a wrong. She knew that one of the people was innocent.

But maybe here I am sliding back and forth between two actions: the overarching “Execute them all” action, and the specific actions of killing Bob, Carl, and all the other people whose faces bring tears to the tips of Alice’s tentacles. The overarching action is not wrong, but it is known to include a wrong component. The specific actions include some that are wrong, but they are not known to be such.

But we (and Alice) are not home free yet. For Alice’s overarching action clearly is an action that she foresees to result in the deaths of innocents. Thus, its justification requires something like the Principle of Double Effect. Now one of the conditions in the Principle of Double Effect is that none of the means be evil. But killing Bob (and Carl and all the others) is indeed a means to executing “all who have been sentenced to death and have exhausted their appeals”. So among the means, there are some that are bad. And the Empress knows this. She just doesn’t know which ones.

Can we get Alice off the hook by saying that she is intending only the deaths of the guilty? But how is she planning to kill the guilty, if not by means of “executing them all”? And she who intends the end intends the means, so if she intends to kill the guilty by “executing them all”, she must be intending to execute them all.

This seems to be a serious problem for Double Effect.

One possible solution is this. Alice really is only intending the deaths of the guilty. And the means that she intends to this end are: Kill the guilty Bob, kill the guilty Carl, and so on for roughly ten million others. Each of these means is legitimate. But she also knows that some of the means will fail. For since Bob is not guilty, killing the guilty Bob will fail. It is weird to have an action that is overall successful even though some of the means to it fail. But that can still happen: think of cases where there is a multiply redundant safety procedure, which is overall successful even though some of the means in it fail.

Wednesday, January 30, 2019

Justification and units of assertion

It’s clear to me that each of two assertions could individually meet the evidential bar for assertibility, but that their conjunction, being typically less probable than either conjunct, might not. But then there is something very strange about the idea that one could justifiably assert “S1. S2.” but not “S1 and S2.” After all, is there really a difference in what one is saying when one inserts a period and when one inserts an “and”?
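A toy illustration of the opening point (the 0.9 assertibility threshold and the independence of the two statements are assumptions for the sake of the example):

```python
# Each statement clears the (assumed) assertibility bar, but the conjunction does not.
threshold = 0.9               # assumed evidential bar for justified assertion
p_s1, p_s2 = 0.93, 0.94       # illustrative probabilities of S1 and of S2

p_both = p_s1 * p_s2          # assuming independence; in general at most min(p_s1, p_s2)

print(p_s1 >= threshold, p_s2 >= threshold)   # True True
print(p_both >= threshold)                    # False: 0.8742 is below the bar
```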

Perhaps the thing to say is that the units of assertion are in practice not single sentences, but larger units. How large? Well, not whole books. Plainly, as the preface paradox notes, one can be justified in producing a book while thinking there is an error somewhere in it (as long as one does not know where the error lies). I think not whole articles, either. Again, we expect to be mistaken somewhere in a complex article. Perhaps the unit of assertion is something more of the order of a paragraph or less, but more than a sentence.

If so, then in typical cases “S1. S2.” will be a single unit of assertion, and to be justified in asserting the unit, one needs to be justified in the conjunction. This gives us a pretty precise definition of a unit of assertion: a unit of assertion is an assertoric locution that is lengthwise maximal with respect to needing to be justified.

What determines the unit of assertion in practice is probably a mix of content, context, intonation, length of pauses, etc. For instance, a topic switch is apt to end a unit of assertion, and how long the pause between the sentences in “S1. S2.” is may sometimes make a difference to whether the two sentences form a single unit of assertion.

Surely people have written on this.

Tuesday, November 21, 2017

Perfect rationality and omniscience

  1. A perfectly rational agent who is not omniscient can find itself in lottery situations, i.e., situations where it is clear that there are many options, exactly one of which can be true, with each option having approximately the same epistemic probability as any other.

  2. A perfectly rational agent must believe anything there is overwhelming evidence for.

  3. A perfectly rational agent must have consistent beliefs.

  4. In lottery situations, there is overwhelming evidence for each of a set of inconsistent claims: the claim that one of options 1, 2, 3, … is the case, together with the claims that option 1 is not the case, that option 2 is not the case, that option 3 is not the case, etc.

  5. So, in lottery situations, a perfectly rational agent has inconsistent beliefs. (2,4)

  6. So, a perfectly rational agent is never in a lottery situation. (3,5)

  7. So, a perfectly rational agent is omniscient. (1,6)

The standard thing people like to say about arguments like this is that they are a reductio of the conjunction of the premises 2 through 4. But I think it might be interesting to take it as a straightforward argument for the conclusion 7. Maybe one cannot separate out procedural epistemic perfection (perfect rationality) from substantive epistemic perfection (omniscience).

That said, I am inclined to deny 3.

It’s worth noting that this yields another variant on an argument against open theism. For even though I am inclined to think that inconsistency in beliefs is not an imperfection of rationality, it is surely an imperfection simpliciter, and hence a perfect being will not have inconsistent beliefs.

Wednesday, February 10, 2010

Asserting a conjunction

One might think that to assert a conjunction is the same as asserting the conjuncts. However, the lottery paradox shows that this is false. I can relatively unproblematically say: "One of x1,...,xN will win. x1 won't win. ... xN won't win." But if I said "One of x1,...,xN will win and x1 won't win and ... and xN won't win", then I would have said something I know to be necessarily false.
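In numbers (the lottery size, and any assertibility threshold one cares to apply, are assumptions for illustration):

```python
# With N tickets, each individual claim is extremely probable, yet the conjunction
# of "one of them will win" with every "x_i won't win" is necessarily false.
N = 1_000_000                        # assumed number of tickets

p_one_wins = 1.0                     # stipulated: exactly one ticket will win
p_xi_loses = 1 - 1 / N               # 0.999999 for each individual ticket
p_conjunction = 0.0                  # jointly they entail that no ticket wins and one does

print(p_one_wins, p_xi_loses, p_conjunction)
```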