Here’s a curious thing. The ideal Bayesian reasons about all contingent cases just as she reasons about lottery cases. So if that reasoning doesn’t yield knowledge in lottery cases (i.e., if the ideal Bayesian can’t know that she won’t win the lottery), it doesn’t yield knowledge in any contingent case at all. Since the ideal Bayesian surely does know some contingent things, she knows in lottery cases, I say.
I grant that you knew that you would not win, given that you did not win; and yet it is a correct use of "know" to say that you did not know that you would not win, because the whole point of a lottery is that you might win.
And in exactly that sense of "know", why should the ideal Bayesian know contingent matters (or indeed, matters that are not known not to be contingent)? She can still know stuff in your sense.
Why pin the sense of "know" down to one or the other?
Is that how we use "know" in English? Philosophical disagreement is evidence that we do not.
Is there a difference between "mere" statistical probabilities and subjective Bayesian probabilities? It seems that "mere" probabilities (I'll take lotteries to be the prime example) aren't the sort of things appropriately tied to our epistemic faculties in a knowledge-producing way. Of course, uncertainty can be modeled like "mere" statistical probabilities (that seems to be the point of Bayesianism): car thefts in the area I parked in happen at such-and-such a percentage, my perceptual faculties sometimes give false deliverances but they're probably reliable, etc. I'm suggesting that when the statistical analogy of Bayesianism gets applied to "mere" statistics, the disanalogies begin to surface.
What are the disanalogies? That's hard to say. Lotteries are designed to preclude epistemic access. But knowledge of other contingent things doesn't seem to be this way; perceivable things interact with my perceptual faculties, human behaviors are intelligible and so (to some limited degree) predictable, and we causally influence the world in order to bring about certain items of our knowledge ("I parked my car over there."). But car thieves design their thefts to preclude others' epistemic access. Still, car thefts are in principle knowable in ways that a lottery outcome (prior to the outcome - but are car thefts knowable prior to the theft?) is not. Maybe it has something to do with ordinary events being tied to the causal order of the world. But so are lotteries (though they're tied to things like random number generators or ping pong balls being blown about by jets of air). Maybe it's that "mere" statistics are unintelligible in a certain sense - there's no reason why one number should come up rather than another (though there is a reason why one number comes up - the RNG spits out a number after a certain interval, a hand reaches in and grabs a numbered ping pong ball), while there is a reason why the car-theft rate is what it is, or why my perceptual faculties misled me. But statistics make generalities intelligible even when the individual outcomes seem unintelligible (i.e., the reason this outcome happens is that it happens x% of the time).
I'm starting to talk myself out of the original intuition, but I'd appreciate your thoughts. (And sorry if this is Bayesianism 101)
Burke:
I think from the Bayesian point of view, the main difference between lotteries and the ordinary cases is that in the lottery cases there is a lot of work going into ensuring that all the outcomes have equal probability, so we have a better handle on the probabilities involved.
The Bayesian will think of perceptions as akin to lotteries. We could run something like a lottery based on misperception. We randomly choose an apple or an orange and show it to Smith, an ordinary person. If Smith judges correctly whether it's an apple or an orange, you lose the lottery. If Smith makes a mistake, you win the lottery.
Suppose Smith announces it's an apple. I take it that you can know (without looking yourself at the fruit) that there is an apple there, that Smith announced correctly (because ordinary people generally do), etc. But there is some chance that Smith's announcement is wrong. From the Bayesian point of view, it's hard to see a relevant distinction between this and a lottery case.
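To put toy numbers on it (Smith's 98% accuracy and the fifty-fifty apple/orange prior below are just illustrative assumptions, not anything fixed by the case):

```python
# Sketch: the "misperception lottery" vs. an ordinary lottery.
# Smith's 0.98 accuracy and the 50/50 apple/orange prior are made up
# for illustration.

def posterior_apple(prior_apple: float = 0.5, accuracy: float = 0.98) -> float:
    """P(apple | Smith announces 'apple'), by Bayes' rule."""
    p_announce_apple = (accuracy * prior_apple
                        + (1 - accuracy) * (1 - prior_apple))
    return accuracy * prior_apple / p_announce_apple

p_fruit = posterior_apple()   # 0.98
p_lose = 1 - 1 / 1000         # 0.999 for a 1-in-1000 lottery

print(f"P(apple | announcement):      {p_fruit:.3f}")
print(f"P(losing a 1-in-1000 ticket): {p_lose:.3f}")
# Both are simply high credences short of 1, which is why the Bayesian
# struggles to see a relevant distinction between the two cases.
```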
Thanks! The lottery and perception analogy is helpful.
David pointed this out the other day: it seems odd that we have a better handle on the probabilities involved in lotteries than in many ordinary cases - yet we (or at least those who see asymmetry between ordinary and lottery cases) often want to attribute knowledge to the ordinary and not the lottery cases! (In cases of similar odds.)
The thing is that last "similar odds": we really have no idea what the odds are in perception cases.
There are some observational frequencies, for ranges of subjects, but nothing for our own personal case, only our own assessments of our own faculties. And that matters because when we see an apple and say, to someone else, that we know that it is an apple, we are letting them know that they can think and act as though it is an apple. We seem to be providing them with some sort of guarantee. And if so, then we do that personally, not as one of a range of subjects (even if the person we are talking to sees us as one of a range of subjects).
The thing about the ideal Bayesian is that although they deal with probabilities, those numbers come from statistics. They are messy things. They are quite unlike the probabilities of the lottery. They are a bit more like one's own perceptions, and the more intelligent the ideal Bayesian the more so. The statistician has to add their own experience into the interpretation of the bare numbers. It is a messy business, quite unlike the pictures philosophers paint, I think.
Lotteries are nice to think about, for that very reason. Or consider the toss of a fair coin. It is 50/50 whether it will land with the head side facing up. But it is not quite 50/50. Suppose we have one that is very slightly more likely to land heads-up. Suppose you know that it is. Do you believe that it will land heads-up? I do not; I believe about as much as not that it will land heads-up. Now we have a very bent coin that is twice as likely to land heads-up as tails-up (a 2/3 chance). Do you believe that it will land heads-up? Let us say that you do.
Now I ask you: "You believe that it will, but do you know that it will?" You know what I am asking: I am not asking if you are 100% certain, I am asking if "you know that it will" land heads-up. I would not say that I knew so much, because I take claims to know to be like the giving of a guarantee. I would guarantee that it was not going to land on its edge. The thing about the very intelligent ideal Bayesian statistician advising a politician is that she will want to be very careful about what she claims to know. She might not even want to say that she knows that it will not land on its edge!
We would say, correctly, that she does know that, of course; but in general there is something to be said for taking an expert's word on what they know.
Incidentally, I take my "I believe about as much as not" from the idea of beliefs as the causes of free acts: when we work out what people's beliefs (usually tacit beliefs) are from how they act, and especially how they bet, we find that there are degrees of belief. (Insofar as belief is an all-or-nothing affair, I do not believe that the fair coin will land heads-up, and I do not believe that it will not.)
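A toy sketch of that last idea, reading degrees of belief off the bets an agent will accept (the stakes below are invented for illustration):

```python
# Sketch: inferring a degree of belief from betting behaviour.
# If an agent will pay at most `price` for a bet that returns `payout`
# when p is true (and nothing otherwise), her implied credence in p
# is price / payout. All numbers below are made up.

def implied_credence(price: float, payout: float) -> float:
    return price / payout

# Fair coin: willing to pay at most 50 cents for a $1 bet on heads.
print(implied_credence(0.50, 1.00))   # 0.5 -- "about as much as not"

# Bent coin (twice as likely to land heads-up): at most ~67 cents.
print(implied_credence(0.67, 1.00))   # ~0.67 -- a degree of belief,
                                      # not an all-or-nothing one
```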
I agree about the messiness of the probabilities involved in perceptions. But that only makes it *harder* to know in perceptual cases.
Bayesianism and knowledge make an awkward mix – if your credences are all that is relevant, what do you gain by saying that sufficiently high credence amounts to knowledge? On the face of it, a thorough-going Bayesian would have no use for such knowledge.
In pure subjective Bayesianism, credences need not be justified; they need only be consistent (coherent, in the sense of obeying the probability axioms). Granted, a sensible Bayesian would have reasons for her credences, and in particular, reasons for high credences. But would those reasons necessarily be strong enough to serve as the justification required for knowledge?
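For concreteness, here is a minimal sketch of what mere consistency rules out: credences that violate the probability axioms can be Dutch-booked (the rain proposition and the stakes are invented for illustration):

```python
# Sketch: incoherent credences invite a Dutch book.
# Suppose the agent's credences are P(rain) = 0.6 and P(no rain) = 0.6,
# which sum to 1.2 and so violate the probability axioms.

credence_rain = 0.6
credence_no_rain = 0.6

# A bookie sells her both $1 bets at her own fair prices:
total_price = credence_rain + credence_no_rain   # she pays $1.20
winning_payout = 1.0                             # exactly one bet pays out

profit = total_price - winning_payout
print(f"Bookie's guaranteed profit: ${profit:.2f}")  # $0.20, rain or shine
```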