## Saturday, October 20, 2007

### Circles of justification

This is a fun little riddle, coming from a discussion with Dan Johnson. At t2 Mary believes q because she believes p. At t1 (t1<t2), she had come to believe p because she had believed q. No new evidence came in after t1 for p. Yet her beliefs that p and q are both justified and, indeed, knowledge. How could this be?

One solution: p and q are mathematical theorems. At t0 (t0<t1), Mary saw a proof of q. At t1, she saw that p easily follows from q. Between t1 and t2, Mary forgot all about q, the proof of q, and the fact that she derived p from q. She continued to know p, since we continue to know mathematical theorems whose proofs we once knew, even if we no longer remember those proofs. At t2, Mary realized that q easily follows from p, and came to believe q. Since she knew p, she now has knowledge of q.

Comments: This appears to involve a circularity in the order of justification, but only if we confuse the contents of beliefs with believings (or types of belief with belief tokens). Mary has three relevant believings: (1) her believing that q, from t0 until some time after t1; (2) her believing that p, starting at t1; and (3) her new believing that q, starting at t2. Here, (1) has independent justification; the justification of (2) depends on the justification of (1); the justification of (3) depends on the justification of (2). There is no real circularity.

Lydia McGrew said...

I take it that you are holding that in 3, at t2, Mary's belief that p is justified occurrently by her belief in some proposition like, "I remember having seen an excellent proof for that." And in fact, she _has_ seen an excellent proof for p--a proof that runs through q.

Now suppose that she were reminded at t2 that the proof to which she is referring ran through q, but she were not able to remember that she had at t0 and presumably at t1 good independent reason for q. Would this not undermine her reason for q? And if, on the other hand, she did remember that she had good independent reason at t0 and t1 for q, and this were a sufficiently good memory, then the derivation of q from p at t2 would not add to her legitimate confidence in q.

The most interesting question here, it seems to me, is whether at t2 when Mary tacitly is relying on "I had a good proof for p at t1," is she really also tacitly relying on "I had a good _independent_ proof for p at t1"? Doesn't she really need to be believing this? It seems plausible that she is and does need to be, or otherwise she wouldn't go on to use p to derive q. But in that case, she has a false crucial premise at t2 and therefore does not have knowledge of q at t2.

Alexander R Pruss said...

Here's what I'm thinking. We know a lot of things where we no longer remember how we learned them or where the justification for them came from. Nonetheless we continue to know them. It's kind of like knowledge by testimony, but here the testimony is by one's past self, as it were.

When pressed about these things, I think we'll say something like: "I know it, but I don't remember how I learned it." So I don't think there is in general any tacit reliance on any particular historical claim about how we got to know the proposition. Maybe at times the thought is a bit more developed: "I believe p. But p is not the sort of thing that I'd believe unless I had good reason to believe."

Note that if p entails q and q entails p, the argument:

p
Therefore, q
Therefore, p
Therefore, q

is a valid argument from p to q. There is no logical problem in going around in a circle like that--it's just inefficient. And I think this is what's going on in the example of mine (inspired by Dan Johnson's ideas).
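As a quick sanity check (my own illustration, not part of the original exchange), a few lines of Python confirm by truth table that when p and q entail each other, every step of the circular chain is truth-preserving:

```python
from itertools import product

# Truth-table check: if p entails q and q entails p, then each step of the
# chain "p, therefore q, therefore p, therefore q" is truth-preserving.
rows_checked = 0
for p, q in product([False, True], repeat=2):
    if ((not p) or q) and ((not q) or p) and p:  # p->q, q->p, and p all hold
        assert q          # step 1: q follows
        assert p          # step 2: p follows
        assert q          # step 3: q follows
        rows_checked += 1

assert rows_checked == 1  # only the row p=True, q=True satisfies all premises
```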

Lydia McGrew said...

"I believe p. But p is not the sort of thing that I'd believe unless I had good reason to believe."

That's fine as far as it goes. It could be an inductive argument. What intrigues me about your example is the question of whether, in the state in which you infer q from p, you are not also tacitly assuming that you have good independent reason to believe p--independent, that is, of q.

I have a feeling that if we could find an example where p and q are not mathematical propositions and the knowledge in question is never deductive, the urgency of the circular issue might be more evident. But I'm having trouble dreaming one up.

Mike said...

Hi Alex,

A quick solution. Let the prior probabilities Pr(E) = Pr(B) = .4, so neither is worthy of belief. It is possible that Pr(B/E) = Pr(E/B) = .7, so both are worthy of belief. Let the agent calculate Pr(B/E) and subsequently calculate Pr(E/B). Let the agent learn nothing new between the calculations. He then is justified in believing B based on E and justified in believing E based on B. And there is no circularity that I can see.
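Mike's numbers are jointly consistent. Here is a minimal sketch showing one joint distribution over B and E that realizes them (the four joint probabilities are my own choice, one of many that fit):

```python
from fractions import Fraction as F

# One joint distribution over (B, E) realizing Mike's numbers:
# Pr(B) = Pr(E) = 0.4 and Pr(B/E) = Pr(E/B) = 0.7.
joint = {
    (True, True): F(28, 100),   # B and E
    (True, False): F(12, 100),  # B and not E
    (False, True): F(12, 100),  # E and not B
    (False, False): F(48, 100), # neither
}

pr_B = sum(p for (b, e), p in joint.items() if b)
pr_E = sum(p for (b, e), p in joint.items() if e)
pr_B_and_E = joint[(True, True)]

assert sum(joint.values()) == 1
assert pr_B == pr_E == F(2, 5)         # priors of 0.4
assert pr_B_and_E / pr_E == F(7, 10)   # Pr(B/E) = 0.7
assert pr_B_and_E / pr_B == F(7, 10)   # Pr(E/B) = 0.7
```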

Alexander R Pruss said...

Mike:

I am afraid I cannot follow your example. The agent first calculates P(B|E)=.7. But this does not justify belief in B unless belief in E is justified. The agent then calculates P(E|B)=.7. But this does not justify belief in E unless belief in B is justified.

In fact, I think the form of justification you suggest would justify everything. For if your reasoning works, it works a fortiori if we replace .7 with 1. But now take the case where P(E|B)=P(B|E)=1. We could use this form of reasoning to justify any belief which has a non-zero prior. Let E be such a belief. Then set B=E. Then P(E|B)=P(B|E)=1. So by the argument you give, the agent is justified in believing E.

Alexander R Pruss said...

Lydia:

There are non-mathematical cases where this is very interesting. One of our grad students is working on these, and I don't know if he's ready to share them.

Mike said...

Here's a concrete example. Given a fair die, Pr(I rolled even) = 1/2. Pr(I rolled even/2 v 4) = 1. Pr(2 v 4) = 1/3. Pr(2 v 4/I rolled even) = 2/3. Suppose I learn that I rolled even. I am justified in believing that I rolled 2 v 4. Now I might forget the basis for my belief that I rolled even, or I might not. It does not matter, since it is still true that the justified belief that I rolled 2 v 4 guarantees that the belief that I rolled even has a probability of at least 2/3. So they justify each other.
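The dice numbers can be verified by brute force over the six equiprobable outcomes (a check of my own, using exact fractions to avoid rounding):

```python
from fractions import Fraction as F

outcomes = range(1, 7)  # fair die: each outcome has probability 1/6

def pr(event):
    """Probability of an event (a predicate on outcomes)."""
    return F(sum(1 for o in outcomes if event(o)), 6)

def pr_given(event, cond):
    """Conditional probability Pr(event / cond) by counting outcomes."""
    return F(sum(1 for o in outcomes if event(o) and cond(o)),
             sum(1 for o in outcomes if cond(o)))

even = lambda o: o % 2 == 0
two_or_four = lambda o: o in (2, 4)

assert pr(even) == F(1, 2)                     # Pr(I rolled even) = 1/2
assert pr(two_or_four) == F(1, 3)              # Pr(2 v 4) = 1/3
assert pr_given(even, two_or_four) == 1        # Pr(even / 2 v 4) = 1
assert pr_given(two_or_four, even) == F(2, 3)  # Pr(2 v 4 / even) = 2/3
```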

Alexander R Pruss said...

Mike:

Yes, that works, so one can have this quasi-circularity even with probabilities less than one.

Vlastimil Vohánka said...

Alex, Lydia, Mike,

A radical skeptic would object: I challenge the claim that we know that P when we remember that we saw a proof for P.

More generally, the skeptical problem (I call it the problem of discursivity) is this: how can we know that some proposition P is true when (1) we do not think (grasp) the whole justification (proof, demonstration) of P simultaneously (as a whole, at a single time), and (2) remembrance (memory) is doubtful?

The intuitive idea behind the problem is this: sometimes we know that some proposition P is true (or probable) by conceiving the logical relations among many other (true or probable) propositions which entail (or make probable) P. In other words, sometimes we know P by thinking through the entailment of P by many other propositions, some of which are self-evident, very plausible, etc.

However, at least sometimes we are not able to think the whole argument simultaneously. At any moment we think only a part of the whole. E.g., suppose P is entailed by Q, R, S, T, and U, and suppose that at a given moment we conceive and think the entailment of P by T and U. At that moment we do not conceive and think the entailment of T by Q, R, and S; so we do not know T, and so not P either. If at every moment we conceive and think only a part of the given whole, then our knowledge of P seems to be threatened. At least, it seems that it is not CLEAR AND DISTINCT knowledge.

Someone can say: my remembrance of a justification (or of my grasping a justification) is sufficient. However, the sceptic, notoriously floating in the mind of the philosopher, objects: the veracity of a remembrance (memory) is doubtful, especially if we concede that some remembrances are incorrect or inaccurate. (Even the help of written notes is problematic. The sceptic says: maybe your notes are unreliable; maybe your interpretation of the notes is incorrect.) These difficulties seem to obstruct any certain, extensive, non-trivial system of knowledge.

A note: I admit my dependence on memory (or written notes) in formulating this text. However, this does not show by itself that the epistemological resources I use here are veracious.

The problem of discursivity, as I call it, is relatively unusual in philosophy. Still, it is mentioned or entertained, at least to a certain degree, by Aristotle, Sextus Empiricus, Descartes, Locke, Brentano, Husserl, Reid, Russell, Wittgenstein, Ayer, N. Malcolm, J. Patocka, Chisholm, and Plantinga. Barry Smith relevantly notes in this context:

"The fulfilled apprehension of an entire theory, however, and therefore also of an entire domain of scientific objects, is ruled out by factual constraints on consciousness. Our properly scientific knowledge is always partial and incomplete, as contrasted with that direct knowledge of objects which is vouchsafed to us through inner and outer perception. Scientific knowledge is indeed a cognitive possession that survives even when the relevant objects are not themselves present to the cognising subject. And as Dallas Willard points out in his remarkably sophisticated study of this aspect of Husserl's logic, the absence of the relevant objects is 'of necessity the normal case in scientifically organized research and knowledge' (Willard 1984, p.12). This partiality, too, may be made the object of its own kind of theoretical investigation, an investigation of the various different ways in which our cognitive acts may fall short of the ideal of theory or of knowledge in the strict and proper sense. And indeed Husserl's framework provides us with the means not only for investigating the structures of a science as a deductively closed collection of fulfilled cognitions and validations in specie, but also for coming to an understanding of the nature and status of the various definitions, algorithms and other auxiliary devices which enable the scientist to economise on cognitive fulfilments in more or less justified ways."
http://ontology.buffalo.edu/smith//articles/lfo.html

About two years ago, I inquired into Descartes, and I found that he proposes three attempts to solve the problem.

(A) In his Regulae, Descartes gives some rules which should enable one to grasp the whole proof simultaneously. The rules seem banal and too abstractly formulated, but, as I read somewhere, Descartes treats some of them more concretely in his scientific texts.

(B) Use written notes. As I have said already, this attempt is problematic for the reasons mentioned above. And here is another one: as Bill Vallicella wrote on his blog, "I can write down the steps in an argument, and in this way 'store' the argument. But in order to generate knowledge of the argument's conclusion, I have to read through the steps and retain in short-term memory the premises and subconclusions. So I must rely on my memory. But how do I know that my memory can be trusted beyond the present moment?"

(C) Prove the existence of God, and also that the falsity of our remembrances is, at least under the relevant conditions, incompatible with the existence of God, who underwrites our tendency to take them as veracious. My comment is that Descartes' proof of God's existence is problematic, and that the theistic proof of the reliability of our remembrances would itself have to be understood simultaneously.

Lydia McGrew said...

Mike and Alex,

I've been toying with examples like this. I don't like the "they justify each other" language. I think it is confusing. But I think now that I was also wrong earlier to suggest that you must believe you had independent evidence for the one (that this was a necessary premise). Rather, I think the correct way to look at it is that, since the probabilities of the two propositions are mutually correlated, the vague memory that you have a good reason for believing one of them is evidence that *supports both of them*. Neither is what I call a "conduit" of evidence in that case to the other, and neither should be regarded as a premise for the other.

I have a paper on mutual support and foundationalism with an interesting appendix up on my web page. Don't have time to put in the URL now.

When the two entail each other they shd. be regarded as the "same node" of the evidence tree and affected by the outside evidence (in this case, your memory that you have a reason for one of them) directly and jointly. This is the kind of situation that our appendix calls a "trivial tangle." When they are merely highly correlated but not entailing, then neither screens the impact of the vague-memory-of-support evidence from the other. Hence, properly speaking, that vague memory is evidence for both with neither acting as a conduit-premise for it to the other. Hence there isn't even an appearance of circularity.

I think from the quick look so far that this applies to Mike's dice example but will have to think about it more.

The crucial point is that your vague idea that you have a good reason for believing one of them really tells you nothing about the directionality of that evidence.

Alexander R Pruss said...

Lydia:

A reason I don't want to treat both as occupying the same node is that the implications between p and q might be highly non-trivial. If we model any two propositions each of which can be used to prove the other as a single node, then all of mathematics is a single node. And then we lose a lot.

Moreover, the implications between p and q might not be logical. Suppose I learn p. Then I gather non-trivial evidence that supports the material conditional that if p, then q. (This evidence might be empirical, might include testimony, etc.) I then conclude q. Next, I forget p. But then I gather a completely different set of non-trivial evidence that supports the material conditional that if q, then p. (Maybe this includes the testimony of a different set of experts.) So I conclude that p.

Here, I don't think one can analyze this by putting p and q as a single node.

I think the type/token distinction for beliefs is what we want. In fact, justification seems to be primarily a property not of belief types but of belief tokens. And as regards tokens, there is no circularity in any of these cases.

Lydia McGrew said...

If they do entail each other, then in the example as you originally give it you have to be aware of that entailment at both times or else the example doesn't go through anyway. So there's no problem w.r.t. non-triviality in taking them to occupy the same node. That is, even if the entailment is non-trivial, you have access to it at all the times in question.

If they do not entail each other, then they occupy different nodes, but there is no screening the second time around. I expect we cd. treat them as occupying different nodes, too, if there was entailment but you did not know the entailment relation and knew merely that they were highly correlated. But then too, as far as I can see, screening would not hold in the second case so the vague evidence that one was true would be evidence for both without that evidence having to "pass through" the other.

Remember that if A is evidence (even non-deductive evidence) for B, then automatically B is evidence for A, by the positive relevance criterion definition of "evidence for." In other words, the probability of each is higher given the other, which just means that they will both rise to some extent (though of course not necessarily to the same extent) at the same time.

In this case your vague realization that you "have some reason for believing B" tells you nothing about whether that reason is screened from B by A or not, or anything about directionality at all. So that vague knowledge is just evidence that raises the probability of both of them without routing through one to the other. (Think of it as having lines going to both nodes without either node standing between the evidence and the other node.) This _should_ cover your example of getting different sets of evidence for the conditionals at different times, though I'll give it more thought.

Here's the link for that paper:

http://www.lydiamcgrew.com/ErkenntnisMutualSupportrevised.pdf

It's highly relevant to the technical aspects of this discussion and may be to your student's work as well. Should be out soon in Erkenntnis, if they stop diddling around.

Alexander R Pruss said...

Lydia,

Actually, I think you need to be aware of only one entailment at a time. At t1, Mary saw that p follows from q. She need not have seen that q follows from p, except in the sense in which any mathematical truth follows from any other.

By t2, "Mary forgot all about q, the proof of q, and the fact that she derived p from q." I can add to that: she also forgot how p can be proved from q, except in the sense in which any mathematical truth follows from any other.

Or is it the "except" here that you're referring to?

Anyway, thanks a lot for the link. It should be quite helpful.

Lydia McGrew said...

I was assuming that the subject had access to both entailments at both times. If you knock that out, I think you can regard them as different nodes. But in that case, it seems to me correct to say that in the second inferential step there is not screening of the vague evidence for p from q. Here I'm using "screening" to refer to screening both by p and by ~p. Correct me if I'm wrong, but it seems to me that if p entails q but *for all you know* q does not entail p, then ~p leaves open the possibility that q is true. If it is not the case that both p and ~p screen the vague evidence in question, then you can have evidential force from that evidence that has impact on q without its "passing through" p.

(All this "passing through" and "conduit" stuff is discussed in the Erkenntnis paper. It all has to do with screening.)

This seems to me to be true whether we're talking about empirical or complex mathematical truths where (in the latter case) the double entailment is not known. If all you know at the second point is that you have some reason to think p true, this tells you nothing about whether or not that evidence, if you knew it specifically and not just by second-level description (as "some evidence") is directly pertinent to q or is routed to q through p. Even if you have some specific set of reasons at this point that supports the conditional "if p, then q," that still doesn't mean that p is acting as a conduit of your vague recollection of having reason to believe p to q. It's still entirely possible that whatever the specific evidence is for p (which you can't now remember) is evidence directly pertinent to q.

(For example, you might know at t2 that a theory predicts a certain piece of data. And you might vaguely realize that you have some reason to believe the theory. But that still doesn't tell you whether that reason, described specifically, is observational evidence about the piece of data or is independent evidence for the theory.)

Lydia McGrew said...

Addendum: Here's one way of seeing why vague talk of "justifying each other at different times" is problematic. If we don't say something more careful and detailed than that about direction of inference, how do we make a principled objection to the following: At t2, as given in the original puzzle, Mary infers q from p. This involves a rise in the probability of q from what it was in between t1 and t2, at let's say t1.5, where we have said that she forgot she had independent reason to believe q. But at t3, Mary is given once again the proof for q that she had at t0 and t1, and she raises the probability of q _again_, thus "double-dipping" on the evidential impact of that proof (or, in an empirical case, empirical evidence) for q, though she doesn't realize she is doing so.

The problem here arises from treating the knowledge of p at t2 as though it is independent of q.

One could argue that when we forget evidence there's always such a risk of double-dipping, even when we're just dealing with one claim. You could have a reason for q, forget it, have only a vague sense that you had a reason for q, then be given the evidence again and update again, thus double-dipping. But while this is a possibility (and a reason to be careful about vague feelings that we have a reason for believing things), the risk is even greater if we give ourselves the impression that we have some line of evidence that comes "through" a separate claim--evidence for q by way of independent evidence for some other proposition. Moreover, we need to say why updating twice in this way is epistemologically incorrect. Otherwise Mary could do it even if, at t3, she _realizes_ that this is the same argument she had before. She could say, "Yes, but at t2 I was inferring q from p, but now I'm inferring q from this proof." Which would obviously be incorrect.