Showing posts with label Gettier.

Tuesday, January 21, 2025

Competent language use without knowledge

I can competently use a word without knowing what the word means. Just imagine some Gettier case, such as that my English teacher tried to teach me a falsehood about what “lynx” means, but due to themselves misremembering what the word means, they taught me the correct meaning. Justified true belief is clearly enough for competent use.

But if I then use “lynx”, even though I don’t know what the word means, I do know what I mean by it. Could one manufacture a case where I competently use a word but don’t even know what I mean by it?

Maybe. Suppose I am a student and a philosophy professor convinces me that I am so confused that I don’t know what I mean when I use the word “supervenience” in a paper. I stop using the word. But then someone comments on an old online post of mine from the same period as the paper, in which post I used “supervenience”. The commenter praises how insightfully I have grasped the essence of the concept. This commenter uses a false name, that of an eminent philosopher. I come to believe on the supposed authority of this person that I meant by “supervenience” what I in fact did mean by it, and I resume using it. But the authority is false. It seems that now I am using the word without knowing what I mean by it. And I could be entirely competent.

Tuesday, January 19, 2021

Sheep in sheep's clothing

Suppose you know the following facts. In County X, about 40% of sheep wear sheep costumes. There is also the occasional trickster who puts a sheep costume on a dog, but that’s really rare: so rare that 99.9% of animals that look like sheep are sheep, most of them being ordinary sheep but a large minority being sheep dressed up as sheep.

You know you’re in County X, and you come across a field with an animal that looks like a sheep. There are three possibilities:

  1. It’s an ordinary sheep. Probability: 59.94%

  2. It’s a sheep in sheep costume. Probability: 39.96%

  3. It’s some other animal in sheep costume. Probability: 0.10%.

You’re justified in believing that (1) or (2) is the case, i.e., that the animal is a sheep. And if it turns out that you’re right, then I take it you know that it’s a sheep. You know this regardless of whether it’s an ordinary sheep or a sheep in sheep costume.

But now consider County Y which is much more like the real world. You know that in County Y, only about 0.1% of sheep wear sheep costumes. And there is the occasional trickster who puts a sheep costume on a dog. In County Y, once again, 99.9% of animals that look like sheep are sheep, and 99.9% of those are ordinary sheep without sheep’s costumes.

Now you know you’re in County Y and you come across an animal that looks like a sheep. You have three possibilities again, but with different probabilities:

  1. It’s an ordinary sheep. Probability: 99.80%

  2. It’s a sheep in sheep costume. Probability: 0.10%.

  3. It’s some other animal in sheep costume. Probability: 0.10%.

In any case, the probability that it’s a sheep of some sort is 99.9%. It seems to me that just as in County X, in County Y you know that what you’re facing is a sheep regardless of whether it’s an ordinary sheep or a sheep in sheep costume.
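As a quick check on the arithmetic, here is a minimal sketch that recovers the three-way breakdown for each county from the stated parameters (the function name is mine, for illustration):

```python
# Breakdown for an animal that looks like a sheep: (1) ordinary sheep,
# (2) sheep in a sheep costume, (3) some other animal in a costume.
def looks_like_sheep_breakdown(p_sheep_given_looks, p_costume_given_sheep):
    """p_sheep_given_looks: fraction of sheep-looking animals that are sheep.
    p_costume_given_sheep: fraction of those sheep that wear costumes."""
    ordinary = p_sheep_given_looks * (1 - p_costume_given_sheep)
    costumed = p_sheep_given_looks * p_costume_given_sheep
    other = 1 - p_sheep_given_looks
    return ordinary, costumed, other

# County X: 99.9% of sheep-looking animals are sheep; 40% of sheep wear costumes.
print(tuple(round(x, 6) for x in looks_like_sheep_breakdown(0.999, 0.40)))
# County Y: 99.9% are sheep; only 0.1% of sheep wear costumes.
print(tuple(round(x, 6) for x in looks_like_sheep_breakdown(0.999, 0.001)))
```

In both counties the chance of facing a sheep of some sort is 99.9%; only the split between ordinary and costumed sheep changes.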

But if what you’re facing is a sheep dressed up as a sheep, then you are in something very much like a standard Gettier case. So in some standard Gettier cases, if you reason probabilistically, it is possible to know.

Thursday, January 14, 2021

Probabilistic reasoning and disjunctive Gettier cases

A disjunctive Gettier case looks like this. You have a justified belief in p, you have no reason to believe q, and you justifiedly believe the disjunction p or q. But it turns out that p is false and q is true. Then you have a justified true belief in p or q, but that belief doesn’t seem to be knowledge.

Some philosophers, like myself, accept Lottery Knowledge: we think that in a sufficiently large lottery with sufficiently few winning tickets, for any ticket n that in fact won’t win, one knows that n won’t win on the probabilistic grounds that it is very unlikely to win.

Interestingly, assuming Lottery Knowledge, in at least some disjunctive Gettier cases one has knowledge of the disjunction. For suppose that 99.8% is a sufficient probability for knowledge in lottery cases. Consider a lottery with 1000 tickets, numbered 1–1000, and one winner. I will then have a justified belief that the winning ticket is among tickets 1 through 998 (inclusive). Let this be p. Suppose that unbeknownst to me, p is false and the winning ticket is number 999. Let q be the proposition that the winning ticket is number 999.

Then I have the structure of a disjunctive Gettier case: I have a justified belief in p, I have no reason to believe q, and I justifiedly believe p or q.

Now given Lottery Knowledge, I know that ticket 1000 doesn’t win. But p or q is equivalent to the claim that ticket 1000 doesn’t win, so I know p or q.

Thus, given Lottery Knowledge, I can have a case with the structure of a disjunctive Gettier case and yet know.

Note that usually one thinks in disjunctive Gettier cases that one’s belief in the true disjunction is inferred from one’s belief in the false disjunct p. But that’s not actually how I would think about such a lottery. My credence in the false disjunct p is 0.998. But my credence in the disjunction is higher: it’s 0.999. So I didn’t actually derive the disjunction from the disjunct.
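The two credences can be checked directly; a small sketch of the arithmetic:

```python
# 1000 tickets, one winner, drawn uniformly. Let p be "the winner is among
# tickets 1-998" and q be "the winner is ticket 999". The disjunction p-or-q
# is equivalent to "ticket 1000 does not win".
tickets = 1000
cred_p = 998 / tickets            # credence in the false disjunct p
cred_q = 1 / tickets              # credence in q
cred_disjunction = 999 / tickets  # p and q are mutually exclusive, so credences add
print(cred_p, cred_disjunction)   # 0.998 0.999
```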

So, someone who thinks probabilistically can have knowledge in at least some disjunctive Gettier cases.

Even more interestingly, the point seems to carry over to more typical Gettier cases that are not probabilistic in nature. Consider, for instance, the standard disjunctive Gettier case. I have good evidence that Jones owns a Ford. You have no idea where Brown is. But since I accept that Jones owns a Ford, I accept that Jones owns a Ford or Brown is in Barcelona. It turns out that Jones doesn’t own a Ford, but Brown is in Barcelona. So I have a justified true belief that Jones owns a Ford or Brown is in Barcelona, but it’s not knowledge.

However, if I think about things probabilistically, my belief in the disjunction is not simply derived from my belief that Jones owns a Ford. For my credence in the disjunction is higher than my credence that Jones owns a Ford: after all, no matter how unlikely it is that Brown is in Barcelona, it is still more likely that Jones owns a Ford or Brown is in Barcelona than that Jones owns a Ford.

So it seems that I have a good inference that Jones owns a Ford or Brown is in Barcelona from the high probability of the disjunction. Of course, a good deal of the probability of the disjunction comes from the probability of the false disjunct. However, that doesn’t rule out knowledge if there is Lottery Knowledge: after all, a good deal of the probability of the disjunction in our lottery case could have been seen as coming from the false disjunct that the winning number is between 1 and 998.

Perhaps the difference is this. In the lottery case, there were alternate paths to the high probability of the true disjunction. As I told the story, it seemed like most of the probability that the winning ticket was either from 1 to 998 (p) or equal to 999 (q) came from the first disjunct. But the disjunction is equivalent to many other similar disjunctions, such as that the ticket is in the set {2, 3, ..., 999} or is equal to 1, and in the case of the latter disjunction, the high probability disjunct is true. But in the Ford/Barcelona case, there doesn’t seem to be an alternate path to the high probability of the disjunction that doesn’t depend on the high probability of the false disjunct.

But it’s not clear to me that this difference makes for a difference between knowledge and lack of knowledge.

And it’s not clear that one can’t rework the Ford/Barcelona case to make it more like the lottery case. Let’s consider one way to fill out the story about how my mistake in thinking Jones owns a Ford came about. I saw Jones driving a Ford F-150 a few minutes past midnight yesterday, and I knew that he owned that Ford because I drove him to the car dealership when he bought it five years ago. Unbeknownst to me, Jones sold the Ford yesterday and bought a Mazda. Now, it is standard practice that when people buy cars, they eventually sell them: few people keep owning the same car for life.

So, my belief that Jones owned a Ford came from my knowledge that Jones owned a Ford early in the morning yesterday and my false belief that he didn’t sell it later yesterday or today. But now we are in the realm of a lottery case. For from my point of view, the day on which Jones sells the car is something random. It’s unlikely that that day was yesterday, because there are so many other days on which he could sell the car: tomorrow, the day after tomorrow, and so on, as well as the low probability option of his never selling it.

Now consider this giant exclusive disjunction, which I know to be true in light of my knowledge that Jones hadn’t yet sold the Ford as of early morning yesterday.

  1. Jones sold the Ford yesterday and Brown is not in Barcelona, or Jones sold the Ford today and Brown is not in Barcelona, or Jones is now selling the Ford and Brown is not in Barcelona, or Jones will sell the Ford later today and Brown is not in Barcelona, or Jones will sell the Ford tomorrow and Brown is not in Barcelona, or … (ad infinitum), or Jones will never sell the Ford and Brown is not in Barcelona, or Brown is in Barcelona.

Each disjunct in (1) is of low probability, but I know some disjunct is true. This is now very much like a lottery case. Its being a lottery case means that I should—assuming the probabilities are good enough—be able to know that one of the disjuncts other than the first two is true. But if I can know that one of the disjuncts other than the first two is true, then I should be able to know—again, assuming the probabilities are good enough—that Jones hasn’t sold the Ford yet or Brown is in Barcelona. And if I can know that, then there should be no problem about my knowing that Jones owns a Ford or Brown is in Barcelona.
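One way to see why each of the first few disjuncts is individually improbable is to model the sale date as roughly geometric. The per-day hazard below is my own illustrative assumption, not anything from the story; it is a toy sketch, not a claim about real car sales:

```python
# Toy model (illustrative assumption): each day, independently, Jones sells
# the car with small probability h, so the day of sale is geometrically
# distributed across future days.
def prob_sold_within(days, h):
    return 1 - (1 - h) ** days

h = 0.001  # hypothetical per-day chance of a sale
print(round(prob_sold_within(2, h), 6))     # "sold yesterday or today" is unlikely
print(round(prob_sold_within(2000, h), 3))  # but over many days a sale becomes likely
```

So the disjunction is true with high probability even though every dated disjunct, taken alone, is a long shot—which is exactly the lottery-like structure.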

So, it’s looking like I can have knowledge in typical disjunctive Gettier cases if I reason probabilistically.

Thursday, September 27, 2018

Learning without change in beliefs

There are books of weird mathematical things (e.g., functions with strange properties) to draw on for the sake of generating counterexamples to claims. This post is in the spirit of an entry in one of these books, but it’s philosophy, not mathematics.

Surprising fact: You can learn something without gaining or losing any beliefs.

For suppose proposition q in fact follows from proposition p, and at t1 you have an intellectual experience as of seeing q to follow from p. On the basis of that experience you form the justified and true belief that q follows from p. This belief would be knowledge, but alas the intellectual experience came from a chemical imbalance in the brain rather than from your mastery of logic. So you don’t know that q follows from p.

Years later, you consider q and p again, and you once again have an experience of q following from p. This time, however, the experience does come from your mastery of logic. This time you see, and not just think you see, that q follows from p. Your belief is now overdetermined: there is a Gettiered path to it and a new non-Gettiered path to it. The new path makes the belief be knowledge. But to gain knowledge is to learn.

But this gain of knowledge need not be accompanied by the loss of any beliefs. For instance, the new experience of q following from p doesn’t yield a belief that your previous experience was flawed. Nor need there be any gain of beliefs. For while you might form the second order belief that you see q following from p, you need not. You might just see that q follows from p, and form merely the belief that q follows from p, without forming any belief about your inner state. After all, this is surely more the rule than the exception in the case of sensory perception. When I see my colleague in the hallway, I will often form the belief that she is in the hallway rather than the self-regarding belief that I see her in the hallway. (Indeed, likely, small children and most non-human animals never form the “I see” belief.) And surely this phenomenon is not confined to the case of sensory perception. At least, it is possible to have intellectual perceptions where we form only the first-order belief, and no self-regarding second-order belief.

So, it is possible to learn something without gaining or losing beliefs.

In fact, plausibly, the original flawed experience could have been so clear that we were fully certain that q follows from p. In that case, the new experience not only need not change any of our beliefs, but need not even change our credences. The credence was 1 before, and it can’t go up from there.

OK, so we have a counterexample. Can we learn anything from it?

Well, here are two things. One might use the story to buttress the idea that even knowledge of important matters—after all, the relation between q and p might be important—is of little value. For it seems of very little value to gain knowledge when it doesn’t change how one thinks about anything. One might also use it to argue that either understanding doesn’t require knowledge or that understanding doesn’t have much value. For if understanding does require knowledge, then one could set up a story where by learning that q follows from p one gains understanding—without that learning resulting in any change in how one thinks about things. Such a change seems of little worth, and hence the understanding gained is of little worth.

Wednesday, September 19, 2018

Gettier and lottery cases

I am going to give some cases that support this thesis.

  1. If you can know that you won’t win the lottery, then in typical Gettier cases you are in a position to know, in the sense that there is a line of statistical reasoning such that if you engage in it, then you know.

There are two conclusions you could draw. You might think, as I do, that you know you won’t win the lottery (assuming that in fact you won’t win). In that case, (1) will offer something rather cool: it will tell you that if you reason in a statistical sort of way, you can get knowledge where instead you would have had mere justified true belief. If knowledge is valuable, that will be a good reason to reason in that statistical way rather than the more usual way.

Or you might think the consequent of (1) is absurd, and conclude that you don’t know you won’t win the lottery.

Start with the colorful lottery case. In this case, there are 1000 lottery tickets, each in a sealed envelope. Of these, 990 are printed on white paper, and they are all losing tickets. The remaining 10 tickets are printed on red paper, and 9 of them are winning tickets. You’re handed a sealed envelope. So, you have a 99.1% probability of losing. I will assume that that’s good enough for knowledge. (If not, tweak the numbers. More on that issue later.) Suppose you come to believe you will lose on the basis of this argument:

  2. You have a ticket.

  3. If you have a ticket, it’s very likely that it’s a white losing ticket or a red losing ticket.

  4. So, very likely it’s a losing ticket.

  5. So (ampliatively) it’s a losing ticket.

Suppose, further, that in fact you’re right—it is a losing ticket. Then assuming you know in lottery cases, your belief that you will lose is knowledge. And I further submit that it doesn’t matter for knowledge whether your ticket is actually printed on white or red paper. All that matters for knowledge is that it is in fact a losing ticket. The reason it doesn’t matter is that the color of the tickets is just noise in the story. Clearly if your ticket is white, you knew you’d lose. But you also know it if it’s red. For regardless of which losing ticket you have, there is always some property (perhaps a disjunctive one) such that your losing ticket and all the nine winning tickets have that. That that property is redness doesn’t seem to matter at all.

So, I take it that if you can know that you won’t win the lottery, then in the colorful lottery case you know you won’t win when in fact you have a losing ticket—even if that losing ticket is red.
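The 99.1% figure comes from simply counting the losing tickets; a quick sketch:

```python
# Colorful lottery: 990 white tickets (all losing) plus 10 red tickets,
# 9 of which are winners; so exactly one red ticket is a losing ticket.
white_losing = 990
red_total = 10
red_winning = 9
total = white_losing + red_total                   # 1000 tickets in all
losing = white_losing + (red_total - red_winning)  # 991 losing tickets
print(losing / total)  # 0.991
```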

Now let’s move to Gettier cases. Take a standard Gettier case where you see what looks like a sheep in the field, and you come to believe that it is a sheep, and so you conclude that there is a sheep in the field. But in fact what you see isn’t a sheep but a dog standing in front of a sheep. So, you have a justified true belief that there is a sheep, but you don’t know it. But suppose that instead of reasoning through the claim that you see a sheep, you reason as follows:

  6. You seem to see a sheep.

  7. If you seem to see a sheep, it’s very likely that you see a sheep in the field or there is an unseen sheep in the field.

  8. So, very likely there is a sheep in the field.

  9. So (ampliatively) there is a sheep in the field.

This seems to be perfectly good reasoning. It clearly would be good reasoning if you have “you see a sheep in the field” in the consequent of (7). But adding the “unseen sheep” disjunct only makes the reasoning better. Moreover, this reasoning is exactly parallel to the colorful lottery case. So just as in the colorful lottery case you know that you have a losing ticket regardless of whether you have a white ticket or a red losing ticket, likewise in this case, you know that there is a sheep in the field regardless of whether you see a sheep in the field or there is an unseen sheep in the field.

So, it seems that although reasoning via the claim that what you see is a sheep would not lead to knowledge—Gettier is right about that—you had an alternate statistical form of reasoning (6)–(9) that would have given you knowledge.

If knowledge is valuable—something I doubt—that’s reason to prefer such statistical forms of reasoning.

Objection 1: In lottery cases you only know when the probabilities are overwhelming while in ordinary perceptual knowledge cases you know with much smaller probabilities. Thus, perhaps, you only know in lottery cases when the chance of winning is something like one in a million, but in perceptual cases you can know even if there is a 1% chance of the observations going wrong. If so, then standard Gettier cases may not be parallel to those (colorized) lottery cases where there is knowledge, because the “very likely” in (3) will have to be much stronger for knowledge than in (7).

Response: If you can know in perceptual cases at, say, 99% probability but need a much higher probability in lottery cases, you have the absurd consequence that you can say things like: “I think it’s a lot more likely that I lost the lottery than that I see a sheep. But I know that I see a sheep and I don’t know that I lost the lottery.” Now, this sort of reasoning will not appeal to everyone. But I think it is one of the main intuitions behind the thought that you know in lottery cases at all. And my argument was predicated on knowing in lottery cases.

Moreover, even if there is the sort of difference that the objection claims, we still get the very interesting result that in some Gettier cases—say, ones that involve highly reliable perception—we can gain knowledge if we switch to statistical reasoning.

Objection 2: You don’t know you will lose in the colorful lottery case when in fact you have a red losing ticket but you do know when in fact you have a white ticket.

Response: If that case doesn’t convince you, consider this variant. A doctor has a treatment for a condition that on some, but relatively rare, occasions goes away on its own. The treatment is highly effective. Most of the time it fixes the condition. The doctor reasons:

  10. You will get the treatment.

  11. If you will get the treatment, very likely either you will recover because of it or you will recover spontaneously.

  12. So, very likely you will recover.

  13. So (ampliatively) you will recover.

And this reasoning seems to yield knowledge—at least if we admit knowledge of chancy future things as friends of knowledge in lottery cases must—as long as you will in fact recover, and will yield knowledge regardless of whether you recover because of the treatment or spontaneously.

Friday, September 14, 2018

A puzzle about knowledge in lottery cases

I am one of those philosophers who think that it is correct to say that I know I won’t win the lottery—assuming of course I won’t. Here is a puzzle about the view, though.

For reasons of exposition, I will formulate it in terms of dice and not lotteries.

The following is pretty uncontroversial:

  1. If a single die is rolled, I don’t know that it won’t be a six.

And those of us who think we know we won’t win the lottery will tend to accept:

  2. If ten dice are rolled, I know that they won’t all be sixes.

So, as I add more dice to the setup, somewhere I cross a line from not knowing that they won’t all be sixes to knowing. It won’t matter for my puzzle whether the line is sharp or vague, nor where it lies. (I am inclined to think it may already lie at two dice but at the latest at three.)
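The probabilities behind the two claims are easy to tabulate: with n fair dice, the chance that they are not all sixes is 1 − (1/6)^n. A minimal sketch:

```python
# Probability that n fair dice, rolled independently, are not all sixes.
def prob_not_all_sixes(n):
    return 1 - (1 / 6) ** n

for n in (1, 2, 3, 10):
    print(n, round(prob_not_all_sixes(n), 8))
```

At one die the probability is about 0.833; by three dice it exceeds 0.995, and at ten dice it is within about 2 × 10⁻⁸ of certainty. Wherever the line for knowledge lies, it is crossed somewhere along this sequence.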

Let N be the proposition that not all the dice are sixes.

Now, suppose that ten fair dice get rolled, and you announce to me the results of the rolls in some fixed order, say left to right: “Six. Six. Six. Six. Six. Six. Six. Six. Six. And five.”

When you have announced the first nine sixes, I don’t know N to be true. For at that point, N is true if and only if the remaining die is six, and by (1) I don’t know of a single die that it won’t be a six.

Here is what puzzles me. I want to know if in this scenario I knew N in the first place, prior to any announcements or rolls, as (2) says.

Here is a reason to doubt that I knew N in the first place. Vary the case by supposing I wasn’t paying attention, so even after the ninth announcement, I haven’t noticed that you have been saying “Six” over and over. If I don’t know in the original scenario where I was paying attention, I think I don’t know in this case, either. For knowledge shouldn’t be a matter of accident. My being lucky enough not to pay attention, while it better positioned me with regard to the credence in N (which remained very high, instead of creeping down as the announcements were made), shouldn’t have resulted in knowledge.

But if I don’t know after the ninth unheard announcement, surely I also don’t know before any of the unheard announcements. For unheard announcements shouldn’t make any difference. But by the same token, in the original scenario, I don’t know N prior to any of the announcements. For it shouldn’t make any difference to whether I know at t0 whether I will be paying attention. When I am not paying attention, I have a justified true belief that N is true, but I am Gettiered. Further, there is no relevant epistemic difference between my position before the dice were rolled and my position after the rolls but before the announcements started. If I don’t know N at the latter point, I don’t know N at the beginning.

So it seems that contrary to (2) I don’t know N in the first place.

Yet I am still strongly pulled to thinking that normally I would know that the dice won’t all be sixes. This suggests that whether I will know that the dice won’t all be sixes depends not only on whether it is true, but on what the pattern of the dice will in fact be. If there will be nine sixes and one non-six, then I don’t know N. But if it will be a more “random-looking” pattern, then I do know N. This makes me uncomfortable. It seems wrong to think the actual future pattern matters. Maybe it does. Anyway, all this raises an interesting question: What do Gettier cases look like in lottery situations?

I see four moves possible here:

A. Reject the move from not knowing in the case where you hear the nine announcements to not knowing in the case where you failed to hear the nine announcements.

B. Say you don’t know in lottery cases.

C. Embrace the discomfort and allow that in lottery cases whether I know I won’t win depends on how different the winning number is from mine.

D. Reject the concept of knowledge as having a useful epistemological role.

Of these, move B, unless combined with D, is the least plausible to me.

Thursday, September 8, 2016

Intuitive moral knowledge

People intuitively know that stealing is wrong. Maybe stealing is wrong because it violates the social institution of property which is reasonably and appropriately instituted by each community. Maybe stealing is a violation of the natural relation that an agent has to an object upon mixing her labor with it. Maybe stealing violates a divine command. But people's intuitive knowledge that stealing is wrong does not come from their knowledge of such reasons for the wrongness of stealing. So how is it knowledge?

It's not like when the child knows Pythagoras' Theorem to be true but can't prove it. For she knows the theorem to be true because she gets her belief from the testimony of other people who can prove it. But that's not how the knowledge that stealing is wrong works. People can intuitively know that stealing is wrong without their belief having come directly or indirectly from some brilliant philosopher who came up with a good argument for its wrongness.

Perhaps there is some evolutionary story. Communities where there was a widespread belief that stealing is wrong survived and reproduced while those without the belief perished, and there was no knowledge at all at the back of the belief formation. Perhaps, however, it came to be knowledge, because this evolutionary process was sensitive to moral truth. But it is dubious that this evolutionary process was sensitive to moral truth as such. It was sensitive to the non-moral needs of the community, and sometimes this led to moral truth and sometimes to moral falsehood (as, for instance, when it led to the conviction that it is right to enslave members of other communities). So if this is the story where the belief came from, it’s not a story about knowledge. At best, the intuitive conviction that stealing is wrong, on this story, is a justified true belief, but it’s Gettiered.

This, I think, is an interesting puzzle. There is, presumably, a very good reason why stealing is wrong, but the intuitions that we have do not seem to have the right connection to that reason.

Unless, of course, we did ultimately get the knowledge from someone who has a very good argument for the wrongness of stealing. As I noted, it is very implausible that we got it from a human being who had such an argument. But maybe we got it from a Creator who did.

Friday, October 10, 2008

More on conjunctive characterization

In an earlier post, I had argued that there is something generally fishy about conjunctive characterizations of non-stipulative concepts, such as defining a murder as an act that is both a killing and morally wrong. Natural concepts just don't have conjunctive analyses, and so one can typically find counterexamples to conjunctive analyses simply by looking at cases where the conjuncts are coincidentally satisfied (e.g., something might be a killing but be morally wrong for a reason independent of its being a killing, such as because it is also an instance of promise-breaking, and this does not make it a murder). In a post on prosblogion today, I use this fact to refute a particular argument against Molinism.

What I want to offer here is two hypotheses about why conjunctive definitions sound so plausible to us. The first hypothesis, stated in the prosblogion post, is that conjunctive claims often carry an implicature of relevant connection between conjuncts. If someone tells me that he went to the store and bought a pound of butter, I assume that he bought the pound of butter at that store (order matters here: "I bought a pound of butter and went to the store" carries an implicature that the pound of butter was not bought at that store; in that case, the connection is a negative one). If I am told that Fred intended to meet George and Fred did meet George, I tend to assume that Fred intentionally met George, though that does not strictly follow.

The second hypothesis is that our minds are designed for finding connections. When we read a set of statements, or see a bunch of evidence, we tacitly assume a relation between them. It is not a matter of implicature, because the phenomenon is more than just a linguistic one. I see an open garbage can, and I smell a stink. I assume that what I see and what I smell are the same thing. Often this is justified. But this habit of mentally inserting connections can be pernicious. When we see a bunch of conjoined statements, it is our natural reaction to imagine a situation where they are co-satisfied in a related way. It was the genius of Gettier to show us that in philosophically evaluating a conjunctive characterization we need to look at cases of unrelated co-satisfaction.

At the same time, it may be that some conjunctive characterizations are close to the truth—to get the truth, all we need to add is that the conditions are satisfied in relevantly related ways. It could be that knowledge will be justified true belief once we understand that it must be justified, true and believed in relevantly related ways. (I wonder if what counts as relevantly related might be contextual.) Of course this is not satisfying to the philosopher—we want to make explicit the relevant relation, but perhaps this just cannot be done.

Thursday, April 10, 2008

Conjunctive analyses

Sometimes we try to analyze a concept as a conjunction of two or more concepts. Thus, we might say that x knows p provided p is true and x justifiably believes p. Frequently, such proposed analyses founder on counterexamples—Gettier examples in this case.

I want to highlight one kind of failure. Sometimes analyzing x's being an F in terms of x's being a G and x's being an H, fails because to be an F, not only does x have to be a G and an H, but x's Gness and Hness have to be appropriately connected. While Gness and Hness are ingredients in Fness, their interconnection matters, just as one doesn't simply specify an organic compound by listing the number of atoms of each type in the compound, but one must also specify their interconnection.

I suspect this kind of connection-failure of conjunctive definitions is common. One way to see what is wrong with the justified true belief analysis of knowledge is to note that there has to be a connection between the justification and the truth and the belief. Specifying what the connection has to be like is hard (that is my understatement of the week).

Here's another case of the same sort. Suppose we say that an action is a murder provided it is a killing and morally wrong. Then we have a counterexample. Igor, who used to be a KGB assassin, has turned over a new leaf. As part of his turning over a new leaf, he has promised his wife that, no matter what, he will never kill again. Maybe in ordinary cases that promise would be inappropriate. But given Igor's life history, it is quite appropriate. Now, Tatyana has just mugged Igor and is about to stab him to death so as not to leave any witnesses. Igor picks up a rock and kills her in self-defense. What he has done was a killing and it was morally wrong—it was the breaking of a promise. But it wasn't a murder because the connection between the fact that the action was a killing and the fact that the action was morally wrong wasn't of the right sort. (One might try to say that it was a killing and immoral, but wasn't immoral qua killing.)

When we hear a conjunctive analysis being given in philosophy, I think it's time to look for a connection-counterexample, a case where each conjunct is satisfied, but the satisfaction of the conjuncts lacks the right kind of interconnection. Sometimes, I think, one can intuitively tell that a proposed analysis is unsatisfactory for lack of such interconnection even without coming up with a counterexample. Here is a case in point. Consider the notion of "causal necessitation". A natural-sounding definition is this: an event E causally necessitates an event F provided that (i) it is nomically necessary that if E holds, then F holds; and (ii) E causes F. But even if it turns out that this is a correct characterization—that necessarily E causally necessitates F if and only if (i) and (ii) hold—I don't think it's a good definition. For it misses out the fact that one wants a connection between the necessitating and the causing—the co-presence of the two factors shouldn't be merely coincidental. But it's really hard to come up with an uncontroversial case where we have a difference between the two. (Interestingly, it may be possible to do so if Molinism is true.)

We are rightly suspicious of disjunctive analyses. I think we should have a similar, though weaker, suspicion of conjunctive ones.

There is a structural connection between the points in this post and Aristotle's Metaphysics H6. The point is also similar to Geach's discussion of the good. We cannot define a "good basketball player" as someone who is (i) good and (ii) a basketball player.