Sunday, October 14, 2018

A simple reductive theory of consciousness

I think it is possible for one mind to have multiple spheres of consciousness. One kind of case is diachronic: there need be no unity of consciousness between my awareness at t1 and my awareness at t2. Split-brain patients provide a synchronic example. (I suppose in both cases one can question whether there is really only one mind, but I’ll assume so.)

What if, then, it turned out that we do not actually have any unconscious mental states? Perhaps what I call “unconscious mental states” are actually conscious states that exist in a sphere of consciousness other than the one connected to my linguistic productions. Maybe it is the sphere of consciousness connected to my linguistic productions that I identify as the “conscious I”, but both spheres are equally mine.

An advantage of such a view would be that we could then accept the following simple reductive account of consciousness:

  • A conscious state just is a mental state.

Of course, this is only a partial reduction: the conscious is reduced to the mental. I am happy with that, as I doubt that the mental can be reduced to the non-mental. But it would be really cool if the mystery of the conscious could be reduced.

However, the above story still doesn’t fully solve the problem of consciousness. For it replaces the puzzle as to what makes some of my mental states conscious and others unconscious with the puzzle of what makes a plurality of mental states co-conscious, i.e., part of the same sphere of consciousness. Perhaps, though, this problem is more tractable than the problem of what makes a state conscious was?

Friday, October 12, 2018

Scepticism about culpability

I rarely take myself to know that someone is culpable for some particular wrongdoing. There are three main groups of exceptions:

  1. my own wrongdoings, so many of which I know by introspection to be culpable

  2. cases where others give me insight into their culpability through their testimony, their expressions of repentance, etc.

  3. cases where divine revelation affirms or implies culpability (e.g., Adam and David).

In type 2 cases, I am also not all that confident, because unless I know a lot about the person, I will worry that they are being unfair to themselves.

I am amazed that a number of people have great confidence that various infamous malefactors are culpable for their grave injustices. Maybe they are, but it seems easier to believe in culpability in the case of more minor offenses than greater ones. For the greater the offense, the further the departure from rationality, and hence the more reason there is to worry about something like temporary or permanent insanity or just crazy beliefs.

I don’t doubt that most people culpably do many bad things, and even that most people on some occasion culpably do something really bad. But I am sceptical of my ability to know which of the really bad things people do they are culpable for.

The difficulty with all this is how it intersects with the penal system. Is there maybe a shallower kind of culpability that is easier to determine and that is sufficient for punishment? I don’t know.

Being mistaken about what you believe

Consider:

  1. I don’t believe (1).

Add that I am opinionated on what I believe:

  2. For each proposition p, I either believe that I believe p or believe that I do not believe p.

Finally, add:

  3. My beliefs are closed under entailment.

Now I either believe (1) or not. If I do not believe (1), then I don’t believe that I don’t believe (1): for that I don’t believe (1) is just what (1) says, so by closure believing it would amount to believing (1). But then, by (2), I do believe that I do believe (1). Hence in this case:

  4. I am mistaken about what I do or do not believe.

Now suppose I do believe (1). Then I believe that I don’t believe (1), by closure and by what (1) says. So, (4) is still true.

Thus, we have an argument that if I am opinionated on what I believe and my beliefs are closed under entailment, then I am mistaken as to what I believe.
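In symbols (my own schematic rendering, writing B for my belief operator and L for the proposition expressed by (1), so that L just is the proposition that I do not believe L):

```latex
\begin{align*}
&\textbf{Case } \neg B(L)\text{:}\quad B(\neg B(L)) \Rightarrow B(L) \text{ by closure (3), since } \neg B(L) \text{ is } L;\\
&\qquad \text{so } \neg B(\neg B(L)), \text{ and by opinionation (2), } B(B(L)). \text{ But } B(L) \text{ is false here, giving (4).}\\[2pt]
&\textbf{Case } B(L)\text{:}\quad \text{by closure (3), } B(\neg B(L)). \text{ But } \neg B(L) \text{ is false here, giving (4).}
\end{align*}
```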

(Again, we need some way of getting God out of this paradox. Maybe the fact that God’s knowledge is non-discursive helps.)

Wednesday, October 10, 2018

Socratic perfection is impossible

Socrates thought it was important that if you didn't know something, you knew you didn't know it. And he thought that it was important to know what followed from what. Say that an agent is Socratically perfect provided that (a) for every proposition p that she doesn't know, she knows that she doesn't know p, and (b) her knowledge is closed under entailment.

Suppose Sally is Socratically perfect and consider:

  1. Sally doesn’t know the proposition expressed by (1).

If Sally knows the proposition expressed by (1), then (1) is true, and so Sally doesn’t know the proposition expressed by (1). Contradiction!

If Sally doesn’t know the proposition expressed by (1), then she knows that she doesn’t know it. But that she doesn’t know the proposition expressed by (1) just is the proposition expressed by (1). So Sally knows the proposition expressed by (1), contrary to our assumption that she doesn’t. Contradiction!

So it seems it is impossible to have a Socratically perfect agent.

(Technical note: A careful reader will notice that I never used closure of Sally’s knowledge. That’s because (1) involves dubious self-reference, and to handle that rigorously, one needs to use Goedel’s diagonal lemma, and once one does that, the modified argument will use closure.)
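For the curious, here is roughly how that rigorous version might go, with K a knowledge predicate and ⌜g⌝ the Goedel number of g (my reconstruction of the argument the note gestures at, not part of the original post):

```latex
% Diagonal lemma: there is a sentence g with \vdash\; g \leftrightarrow \neg K(\ulcorner g\urcorner).
\begin{align*}
&1.\ \text{If } K(\ulcorner g\urcorner), \text{ then } g \text{ by factivity, so } \neg K(\ulcorner g\urcorner)\text{: contradiction.}\\
&2.\ \text{So } \neg K(\ulcorner g\urcorner); \text{ by Socratic condition (a), } K(\ulcorner \neg K(\ulcorner g\urcorner)\urcorner).\\
&3.\ \text{Since } \vdash \neg K(\ulcorner g\urcorner) \rightarrow g, \text{ closure (b) gives } K(\ulcorner g\urcorner)\text{: contradiction.}
\end{align*}
```

Closure enters exactly at the last step, as the note says.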

But what about God? After all, God is Socratically perfect, since he knows all truths. Well, in the case of God, knowledge is equivalent to truth, so (1)-type sentences just are liar sentences, and so the problem above just is the liar paradox. Alternately, maybe the above argument works for discursive knowledge, while God’s knowledge is non-discursive.

Tuesday, October 9, 2018

Epistemic scores and consistency

Scoring rules measure the distance between a credence and the truth value, where true=1 and false=0. You want this distance to be as low as possible.

Here’s a fun paradox. Consider this sentence:

  1. At t1, my credence for (1) is less than 0.1.

(If you want more rigor, use Goedel’s diagonalization lemma to remove the self-reference.) It’s now a moment before t1, and I am trying to figure out what credence I should assign to (1) at t1. If I assign a credence less than 0.1, then (1) will be true, and the epistemic distance between the credence and 1 will be large on any reasonable scoring rule. So, I should assign a credence greater than or equal to 0.1. In that case, (1) will be false, and I want to minimize the epistemic distance between the credence and 0. I do that by letting the credence be exactly 0.1.
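To make the optimization concrete, suppose—purely as an illustration, since the argument needs only some reasonable scoring rule—that the score is the Brier score (c − v)², where c is the credence assigned to (1) at t1 and v is (1)’s truth value:

```latex
\mathrm{score}(c) \;=\;
\begin{cases}
(c-1)^2 > 0.81 & \text{if } c < 0.1 \text{ (then (1) is true, so } v = 1\text{)},\\[2pt]
c^2 \ge 0.01 & \text{if } c \ge 0.1 \text{ (then (1) is false, so } v = 0\text{)},
\end{cases}
```

so the score is minimized, at 0.01, exactly by c = 0.1.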

So, I should set my credence to be exactly 0.1 to optimize epistemic score. Suppose, however, that at t1 I will remember with near-certainty that I was setting my credence to 0.1. Thus, at t1 I will be in a position to know with near-certainty that my credence for (1) is not less than 0.1, and hence I will have evidence showing with near-certainty that (1) is false. And yet my credence for (1) will be 0.1. Thus, my credal state at t1 will be probabilistically inconsistent.

Hence, there are times when optimizing epistemic score leads to inconsistency.

There are, of course, theorems on the books to the effect that optimizing epistemic score requires consistency. But the theorems do not apply to cases where the truth of the matter depends on your credence, as in (1).

Monday, October 8, 2018

Evidentialism, and self-defeating and self-guaranteeing beliefs

Consider this modified version of William James’ mountaineer case: The mountaineer’s survival depends on his jumping over a crevasse, and the mountaineer knows that he will succeed in jumping over the crevasse if he believes he will succeed, but doesn’t know that he will succeed as he doesn’t know whether he will come to believe that he will succeed.

James used his version of the case to argue that pragmatic reasons can legitimately override lack of epistemic reasons.

But what is interesting to me in my variant is the way it provides a counterexample to evidentialism. Evidentialists say that you epistemically should form your beliefs only on the basis of evidence. But notice that although the belief that he will succeed at the jump needs to be formed in the absence of evidence for its truth, as soon as it is formed, the belief itself becomes its own evidence to the point that it turns into knowledge. The belief is self-guaranteeing. So there seems to be nothing to criticize epistemically about the formation of the belief, even though the formation is independent of evidence. In fact, it seems, there is a good epistemic reason to believe, since by believing the mountaineer increases the stock of his knowledge.

Moreover, we can even make the case be one where the evidence on balance points against the proposition. Perhaps the mountaineer has attempted, in safer circumstances, to get himself to believe that he can make such a jump, and seven times out of ten he has failed at both self-induction of belief, and also at the jump. But in the remaining three times out of ten, he succeeded at both. So, then, the mountaineer has non-conclusive evidence that he won’t manage to believe that he will succeed (and that he won’t succeed). If he comes to believe that he will succeed, he comes to believe this against the evidence—but, still, in so doing, he increases his stock of knowledge, since the belief, once believed, is self-guaranteeing.

(This phenomenon of self-guaranteeing belief reminds me of things that Kierkegaard says about faith, where faith itself is a miracle that hence is evidence for its truth.)

Interestingly, we might also be able to construct cases of well-evidenced but self-defeating beliefs. Consider a jeweler who has noticed that she is successful at cutting a diamond if and only if she believes she will be unsuccessful. Her theory is that belief in her success makes her insufficiently careful. Over time, she has learned to suspend judgment in her success, and hence to be successful. But now she reflects on her history, and she finds herself with evidence that she will be successful in cutting the next diamond. Yet if she believes on this evidence, this will render her overconfident, and hence render the belief false!

This is related to the examples in this paper on lying.

So perhaps what the evidentialist needs to say is that you epistemically may believe p if and only if the evidence says that if you believe p, p is true?

Friday, October 5, 2018

"The" natural numbers

Benacerraf famously pointed out that there are infinitely many isomorphic mathematical structures that could equally well be the referent of “the natural numbers”. Mathematicians are generally not bothered by this underdetermination of the concept of “the natural numbers”, precisely because the different structures are isomorphic.

What is more worrying are the infinitely many elementarily inequivalent mathematical structures that, it seems, could count as “the natural numbers”. (This becomes particularly worrying given that we’ve learned from Goedel that these structures give rise to different notions of provability.)

(I suppose this is a kind of instance of the Kripke-Wittgenstein puzzles.)

In response, here is a start of a story. Those claims about the natural numbers that differ in truth value between models are vague. We can then understand this vagueness epistemically or in some more beefy way.

An attractive second step is to understand it epistemically, and then say that God connects us up with his preferred class of equivalent models of the naturals.

Color-remapping

I’ve often wondered what would happen if one wore color-remapping glasses. Would the brain adapt, so that everything would soon start looking normal, much as it does when you wear vision-inverting prisms? Well, it turns out that there is an interesting two-subject study (the subjects being the investigators) using LCD glasses linked to a camera that performed a color-space rotation. They found interesting results, but over six days there was no adaptation: stop signs still looked blue, the sky still looked green, and broccoli was red. This is not conclusive, since six days might just not be long enough. I would love to see the results of a longer study.

The philosophical relevance, of course, is to inverted-spectrum thought experiments.

Thursday, October 4, 2018

Panrepresentationalism

Panpsychists, as the term is commonly understood, think everything is conscious. An attractive but underexplored view is that everything nonderivatively represents. This was Leibniz's view, I suspect. One can add to this a reduction of experience to representation, but one does not have to.

Tuesday, October 2, 2018

When God doesn't act for some reason

Here’s an argument for a thesis that pushes one closer to omnirationality.

  1. God is responsible for all contingent facts about his will.

  2. No one is responsible for anything that isn’t an action (perhaps internal) done for a reason or the result of such an action.

  3. If God doesn’t act on a reason R that he could have acted on, that’s a contingent fact about his will.

  4. So, if God doesn’t act on a reason R, then either (a) God couldn’t have acted on R, or (b) God’s not acting on R itself has a reason S behind it.

Thursday, September 27, 2018

Learning without change in beliefs

There are books of weird mathematical things (e.g., functions with strange properties) to draw on for the sake of generating counterexamples to claims. This post is in the spirit of an entry in one of these books, but it’s philosophy, not mathematics.

Surprising fact: You can learn something without gaining or losing any beliefs.

For suppose proposition q in fact follows from proposition p, and at t1 you have an intellectual experience as of seeing q to follow from p. On the basis of that experience you form the justified and true belief that q follows from p. This belief would be knowledge, but alas the intellectual experience came from a chemical imbalance in the brain rather than from one’s mastery of logic. So you don’t know that q follows from p.

Years later, you consider q and p again, and you once again have an experience of q following from p. This time, however, the experience does come from your mastery of logic. This time you see, and not just think you see, that q follows from p. Your belief is now overdetermined: there is a Gettiered path to it and a new non-Gettiered path to it. The new path makes the belief be knowledge. But to gain knowledge is to learn.

But this gain of knowledge need not be accompanied by the loss of any beliefs. For instance, the new experience of q following from p doesn’t yield a belief that your previous experience was flawed. Nor need there be any gain of beliefs. For while you might form the second order belief that you see q following from p, you need not. You might just see that q follows from p, and form merely the belief that q follows from p, without forming any belief about your inner state. After all, this is surely more the rule than the exception in the case of sensory perception. When I see my colleague in the hallway, I will often form the belief that she is in the hallway rather than the self-regarding belief that I see her in the hallway. (Indeed, likely, small children and most non-human animals never form the “I see” belief.) And surely this phenomenon is not confined to the case of sensory perception. At least, it is possible to have intellectual perceptions where we form only the first-order belief, and no self-regarding second-order belief.

So, it is possible to learn something without gaining or losing beliefs.

In fact, plausibly, the original flawed experience could have been so clear that we were fully certain that q follows from p. In that case, the new experience not only need not change any of our beliefs, but need not even change our credences. The credence was 1 before, and it can’t go up from there.

OK, so we have a counterexample. Can we learn anything from it?

Well, here are two things. One might use the story to buttress the idea that even knowledge of important matters—after all, the relation between q and p might be important—is of little value. For it seems of very little value to gain knowledge when it doesn’t change how one thinks about anything. One might also use it to argue that either understanding doesn’t require knowledge or that understanding doesn’t have much value. For if understanding does require knowledge, then one could set up a story where by learning that q follows from p one gains understanding—without that learning resulting in any change in how one thinks about things. Such a change seems of little worth, and hence the understanding gained is of little worth.

Tuesday, September 25, 2018

Faith and belief

Christians are called to have faith in Jesus Christ.

The Old Testament, however, is big on not putting our faith in anything other than God.

Thus, someone who has faith in Jesus Christ but does not believe that Jesus Christ is God risks violating a central principle of the Old Testament.

Moreover, faith in Jesus requires submission to Jesus. But Jesus wants his followers to obey the central principles of the Old Testament.

Thus, for someone aware of these observations, it is not possible to have faith in Jesus Christ without believing that he is God. This is a serious problem for accounts of faith that claim that a Christian need not have any doctrinal beliefs.

Friday, September 21, 2018

Lottery cases and Bayesianism

Here’s a curious thing. The ideal Bayesian reasons about all contingent cases just as she reasons about lottery cases. If the reasoning doesn’t yield knowledge in lottery cases (i.e., if the ideal Bayesian can’t know that she won’t win the lottery), it doesn’t yield knowledge in any contingent cases. So, if the ideal Bayesian doesn’t know in lottery cases, she doesn’t know in any contingent cases. But surely she does know some contingent things. So she knows in lottery cases, I say.

Wednesday, September 19, 2018

Gettier and lottery cases

I am going to give some cases that support this thesis.

  1. If you can know that you won’t win the lottery, then in typical Gettier cases you are in a position to know, in the sense that there is a line of statistical reasoning such that if you engage in it, then you know.

There are two conclusions you could draw. You might think, as I do, that you know you won’t win the lottery (assuming that in fact you won’t win). In that case, (1) will offer something rather cool: it will tell you that if you reason in a statistical sort of way, you can get knowledge where instead you would have had mere justified true belief. If knowledge is valuable, that will be a good reason to reason in that statistical way rather than the more usual way.

Or you might think the consequent of (1) is absurd, and conclude that you don’t know you won’t win the lottery.

Start with the colorful lottery case. In this case, there are 1000 lottery tickets, each in a sealed envelope. Of these, 990 are printed on white paper, and they are all losing tickets. The remaining 10 tickets are printed on red paper, and 9 of them are winning tickets. You’re handed a sealed envelope. So, you have a 99.1% probability of losing. I will assume that that’s good enough for knowledge. (If not, tweak the numbers. More on that issue later.) Suppose you come to believe you will lose on the basis of this argument:

  2. You have a ticket.

  3. If you have a ticket, it’s very likely that it’s a white losing ticket or a red losing ticket.

  4. So, very likely it’s a losing ticket.

  5. So (ampliatively) it’s a losing ticket.

Suppose, further, that in fact you’re right—it is a losing ticket. Then assuming you know in lottery cases, your belief that you will lose is knowledge. And I further submit that it doesn’t matter for knowledge whether your ticket is actually printed on white or red paper. All that matters for knowledge is that it is in fact a losing ticket. The reason it doesn’t matter is that the color of the tickets is just noise in the story. Clearly if your ticket is white, you knew you’d lose. But you also know it if it’s red. For regardless of which losing ticket you have, there is always some property (perhaps a disjunctive one) such that your losing ticket and all the nine winning tickets have it. That that property is redness doesn’t seem to matter at all.

So, I take it that if you can know that you won’t win the lottery, then in the colorful lottery case you know you won’t win when in fact you have a losing ticket—even if that losing ticket is red.

Now let’s move to Gettier cases. Take a standard Gettier case where you see what looks like a sheep in the field, and you come to believe that it is a sheep, and so you conclude that there is a sheep in the field. But in fact what you see isn’t a sheep but a dog standing in front of a sheep. So, you have a justified true belief that there is a sheep, but you don’t know it. But suppose that instead of reasoning through the claim that you see a sheep, you reason as follows:

  6. You seem to see a sheep.

  7. If you seem to see a sheep, it’s very likely that you see a sheep in the field or there is an unseen sheep in the field.

  8. So, very likely there is a sheep in the field.

  9. So (ampliatively) there is a sheep in the field.

This seems to be perfectly good reasoning. It clearly would be good reasoning if the consequent of (7) were just “you see a sheep in the field”. But adding the “unseen sheep” disjunct only makes the reasoning better. Moreover, this reasoning is exactly parallel to the colorful lottery case. So just as in the colorful lottery case you know that you have a losing ticket regardless of whether you have a white ticket or a red losing ticket, likewise in this case, you know that there is a sheep in the field regardless of whether you see a sheep in the field or there is an unseen sheep in the field.

So, it seems that although reasoning via the claim that what you see is a sheep would not lead to knowledge—Gettier is right about that—you had an alternate statistical form of reasoning (6)–(9) that would have given you knowledge.

If knowledge is valuable—something I doubt—that’s reason to prefer such statistical forms of reasoning.

Objection 1: In lottery cases you only know when the probabilities are overwhelming while in ordinary perceptual knowledge cases you know with much smaller probabilities. Thus, perhaps, you only know in lottery cases when the chance of winning is something like one in a million, but in perceptual cases you can know even if there is a 1% chance of the observations going wrong. If so, then standard Gettier cases may not be parallel to those (colorized) lottery cases where there is knowledge, because the “very likely” in (3) will have to be much stronger for knowledge than in (7).

Response: If you can know in perceptual cases at, say, 99% probability but need a much higher probability in lottery cases, you have the absurd consequence that you can say things like: “I think it’s a lot more likely that I lost the lottery than that I see a sheep. But I know that I see a sheep and I don’t know that I lost the lottery.” Now, this sort of reasoning will not appeal to everyone. But I think it is one of the main intuitions behind the thought that you know in lottery cases at all. And my argument was predicated on knowing in lottery cases.

Moreover, even if there is the sort of difference that the objection claims, we still get the very interesting result that in some Gettier cases—say, ones that involve highly reliable perception—we can gain knowledge if we switch to statistical reasoning.

Objection 2: You don’t know you will lose in the colorful lottery case when in fact you have a red losing ticket but you do know when in fact you have a white ticket.

Response: If that case doesn’t convince you, consider this variant. A doctor has a treatment for a condition that on some, but relatively rare, occasions goes away on its own. The treatment is highly effective. Most of the time it fixes the condition. The doctor reasons:

  10. You will get the treatment.

  11. If you will get the treatment, very likely either you will recover because of it or you will recover spontaneously.

  12. So, very likely you will recover.

  13. So (ampliatively) you will recover.

And this reasoning seems to yield knowledge—at least if we admit knowledge of chancy future things, as friends of knowledge in lottery cases must—as long as you will in fact recover; and it will yield knowledge regardless of whether you recover because of the treatment or spontaneously.

Monday, September 17, 2018

Non-propositional conveyance

One sometimes hears claims like:

  1. There are things that can be conveyed through X (poetry, novels, film, art, music, etc.) that cannot be conveyed propositionally.

But what kind of a thing are those things? Facts? Not quite. For while some of the “things that can be conveyed … that cannot be conveyed propositionally” are in fact real and true, some are not. Leni Riefenstahl’s Triumph of the Will and Fritz Lang’s M are both good candidates for conveying “things … that cannot be conveyed propositionally”. But Triumph in doing so conveys falsehoods about the Nazi Party while M conveys truths about the human condition. But facts just are. So, the “things” are not just facts.

What I said about Triumph and M is very natural. But if we take it literally, the “things” must then be the sorts of things that can be true or false. But the primary bearers of truth are propositions. So when we dig deeper, (1) is undermined. For surely we don’t want to say that Triumph and M convey propositions that cannot be conveyed propositionally.

Perhaps, though, this was too quick. While I did talk of truth and falsehood initially, perhaps I could have talked of obtaining and not obtaining. If I did that, then maybe the “things” would have turned out to be states of affairs (technically, of the abstract Plantinga sort, not of the Armstrong sort). But I think there is good reason to prefer propositions to states of affairs here. First, it is dubious whether there are impossible states of affairs. But not only can X convey things that aren’t so, it can also convey things that couldn’t be so. A novel or film might convey ethical stuff that not only is wrong, but couldn’t be right. Second, what is conveyed is very fine-grained, and it seems unlikely to me that states of affairs are fine-grained enough. The right candidate seems to be not only propositions, but Fregean propositions.

But (1) still seems to be getting at something true. I think (1) is confusing “propositionally” with “by means of literalistic fact-stating affirmative sentences”. Indeed:

  2. There are things that can be conveyed through X (poetry, novels, film, art, music, etc.) that cannot be conveyed by means of literalistic fact-stating affirmative sentences.

(Note the importance of the word “conveyed”. If we had “expressed”, that might be false, because for any of the “things”, we could stipulate a zero-place predicate, say “xyzzies”, and then express it with “It xyzzies.” But while that sentence manages to express the proposition, it doesn’t convey it.)

Friday, September 14, 2018

A puzzle about knowledge in lottery cases

I am one of those philosophers who think that it is correct to say that I know I won’t win the lottery—assuming of course I won’t. Here is a puzzle about the view, though.

For reasons of exposition, I will formulate it in terms of dice and not lotteries.

The following is pretty uncontroversial:

  1. If a single die is rolled, I don’t know that it won’t be a six.

And those of us who think we know we won’t win the lottery will tend to accept:

  2. If ten dice are rolled, I know that they won’t all be sixes.

So, as I add more dice to the setup, somewhere I cross a line from not knowing that they won’t all be sixes to knowing. It won’t matter for my puzzle whether the line is sharp or vague, nor where it lies. (I am inclined to think it may already lie at two dice but at the latest at three.)

Let N be the proposition that not all the dice are sixes.

Now, suppose that ten fair dice get rolled, and you announce to me the results of the rolls in some fixed order, say left to right: “Six. Six. Six. Six. Six. Six. Six. Six. Six. And five.”

When you have announced the first nine sixes, I don’t know N to be true. For at that point, N is true if and only if the remaining die is not a six, and by (1) I don’t know of a single die that it won’t be a six.
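For concreteness, here is how my rational credence in N evolves as the sixes are announced (a quick sketch assuming ten fair, independent dice; the function is my own illustration):

```python
# P(N | first k announced dice are sixes), where N = "not all ten dice
# are sixes" and the dice are fair and independent.
def prob_N_given_sixes(k: int, total_dice: int = 10) -> float:
    remaining = total_dice - k
    return 1 - (1 / 6) ** remaining

for k in (0, 5, 9):
    print(f"{k} sixes announced: P(N) = {prob_N_given_sixes(k):.9f}")

# 0 sixes announced: P(N) = 0.999999983
# 5 sixes announced: P(N) = 0.999871399
# 9 sixes announced: P(N) = 0.833333333  <- now just the claim that a
# single die isn't a six, which by (1) I don't know.
```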

Here is what puzzles me. I want to know if in this scenario I knew N in the first place, prior to any announcements or rolls, as (2) says.

Here is a reason to doubt that I knew N in the first place. Vary the case by supposing I wasn’t paying attention, so even after the ninth announcement, I haven’t noticed that you have been saying “Six” over and over. If I don’t know in the original scenario where I was paying attention, I think I don’t know in this case, either. For knowledge shouldn’t be a matter of accident. My being lucky enough not to pay attention, while it better positioned me with regard to the credence in N (which remained very high, instead of creeping down as the announcements were made), shouldn’t have resulted in knowledge.

But if I don’t know after the ninth unheard announcement, surely I also don’t know before any of the unheard announcements. For unheard announcements shouldn’t make any difference. But by the same token, in the original scenario, I don’t know N prior to any of the announcements. For it shouldn’t make any difference to whether I know at t0 whether I will be paying attention. When I am not paying attention, I have a justified true belief that N is true, but I am Gettiered. Further, there is no relevant epistemic difference between me before the die rolls and me between the die rolls and the start of the announcements. If I don’t know N at the latter point, I don’t know N at the beginning.

So it seems that contrary to (2) I don’t know N in the first place.

Yet I am still strongly pulled to thinking that normally I would know that the dice won’t all be sixes. This suggests that whether I will know that the dice won’t all be sixes depends not only on whether it is true, but on what the pattern of the dice will in fact be. If there will be nine sixes and one non-six, then I don’t know N. But if it will be a more “random-looking” pattern, then I do know N. This makes me uncomfortable. It seems wrong to think the actual future pattern matters. Maybe it does. Anyway, all this raises an interesting question: What do Gettier cases look like in lottery situations?

I see four moves possible here:

A. Reject the move from not knowing in the case where you hear the nine announcements to not knowing in the case where you failed to hear the nine announcements.

B. Say you don’t know in lottery cases.

C. Embrace the discomfort and allow that in lottery cases whether I know I won’t win depends on how different the winning number is from mine.

D. Reject the concept of knowledge as having a useful epistemological role.

Of these, move B, unless combined with D, is the least plausible to me.

The value of knowledge

Here’s a curious phenomenon. Suppose I have enough justification for p that if p is in fact true, then I know p, but suppose also that my credence for p is less than 1.

Now consider some proposition q that is statistically independent of p and unlikely to be true. Finally consider the conjunctive proposition r that p is true and q is false.

If I were to learn for sure that r is true, I would gain credence for p, but it wouldn’t change whether I know whether p is true.

If I were to learn for sure that r is false, my credence for p would go down. How much it would go down depends on how unlikely q is. Fact: If P(q)=(1−P(p))/P(p), where P is the prior probability, then if I learn that r is false, my credence for p goes to 1/2.
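Here is the computation behind that fact (writing x = P(p) and y = P(q), and using the independence of p and q):

```latex
P(p \mid \lnot r) \;=\; \frac{P(p \wedge \lnot r)}{P(\lnot r)}
  \;=\; \frac{P(p \wedge q)}{1 - P(p \wedge \lnot q)}
  \;=\; \frac{xy}{1 - x(1-y)},
```

and setting this equal to 1/2 yields xy = 1 − x, i.e., y = (1−x)/x. For instance, x = 0.9 requires y = 1/9, and then P(p | ¬r) = 0.1/0.2 = 1/2.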

OK, so here’s where we are. For just about any proposition p that I justifiedly take myself to know, but that I assign a credence less than 1 to, I can find a proposition r with the property that learning that r is true increases my credence in p and that learning that r is false lowers my credence in p to 1/2.

So what? Well, suppose that the only thing I value epistemically is knowing whether p is true. Then if I am in the above-described position, and if someone offers to tell me whether r is true, I should refuse to listen. Here is why. Either p is true or it is not true. If p is true, then my belief in p is knowledge. In that case, I gain nothing by learning that r is true. But learning that r is false would cost me my knowledge, by reducing my credence in p to 1/2. Suppose p is false. Then my belief in p isn’t knowledge. In the above setup, if p is false, so is r. Learning that r is false, however, doesn’t give me knowledge whether p is true. It gives me credence 1/2, which is neither good enough to know p to be true nor good enough to know p to be false. So if p is false, I gain nothing knowledge-wise.

So, if all I care about epistemically is knowing the truth about some matter, sometimes I should refuse relevant information on the basis of epistemic goals (Lara Buchak argues in her work on faith that sometimes I should refuse relevant information on the basis of non-epistemic goals; that’s a different matter).

I think this is not a very good conclusion. I shouldn’t refuse relevant information on the basis of epistemic goals. Consequently, by the above argument, knowing the truth about some matter shouldn’t be my sole epistemic goal.

Indeed, it should also be my goal to avoid thinking I know something that is in fact false. If I add that to my goals, the conclusion that I should refuse to listen to whether r is true disappears. For if p is false, although learning that r is false wouldn’t give me knowledge whether p is true, in that case it would take away the illusion of knowledge. And that would be valuable.

Nothing deep in the conclusions here. Just a really roundabout argument for the Socratic thesis that it’s bad to think you know when you don’t.

Thursday, September 13, 2018

What's the good of consciousness?

A question has hit me today that I would really want to have a good answer to: What’s the point of consciousness? I can see the point of reasoning and knowledge. But one can reason and have knowledge without consciousness. What would we lose if we were all like vampire Mary?

One could suppose that the question has a false presupposition, namely that there is a point to consciousness. Perhaps consciousness is just an evolutionary spandrel of something genuinely useful.

Still, it seems plausible that there is an answer. I can think of two.

First, perhaps consciousness is needed for moral responsibility, while moral responsibility is clearly valuable. But this won’t explain the point of brute animals’ being conscious.

Second, maybe contemplation of truth is valuable, where we use “contemplation” broadly to include both sensory and non-sensory versions. And while one can have unconscious knowledge, one cannot have unconscious contemplation. But why is contemplation of truth valuable? Intuitively, it’s a more intimate connection with truth than mere unconscious knowledge. But I fear that I am not making much progress here, because I don’t know in what way it’s more intimate and why this intimacy is valuable.

Perhaps there is a theistic story to be told. All truth is either about God or creation or both. Contemplating truths about God is a form of intimacy with God. But creation also images God. So contemplating truths about creation is also a form of intimacy with God, albeit a less direct one. So, perhaps, the value of consciousness comes from the value of intimacy with God.

Or maybe we can say that intimacy with being is itself valuable, and needs no further explanation.

Wednesday, September 12, 2018

Vampire Mary

In Peter Watts’ superb novel Blindsight, vampires are animals that function intelligently but lack consciousness. The lack of a detour of information processing through consciousness systems allows them to react with superhuman speed to stimuli.

It seems to me to be logically possible to have beings that have no consciousness but have knowledge and intelligence. After all, there are many things I currently know that I am not currently conscious of, and probably a lot of our thinking is unconscious. I don’t see why this couldn’t happen all the time.

If we want to allow this possibility, we have an interesting variant of the Mary thought experiment. Vampire Mary knows all of physics. But she has never experienced anything. Whatever we say about the original Mary and the quale of red, it seems plausible that vampire Mary has no idea what it is like to have an experience of red, or of anything else. And hence experience goes beyond physics.

Plausible, yes, but I am not satisfied with just that...

Tuesday, September 11, 2018

A simple version of the Mary argument

The following cute argument is valid.
  1. If physicalism is true, all reality is effable (because it can all be expressed in the language of completed physics).
  2. Qualia are ineffable.
  3. So, physicalism is not true.
Personally, while I accept the conclusion, I am inclined to deny (2), since it seems to me that it's easy to express a quale: the quale of red is an experience whose intentional object is an instance of redness. (For the same reason, I think the problem of qualia reduces to the problem of intentionality. And that's the real problem.)

Virtue versus painlessness

Suppose we had good empirical data that people who suffer serious physical pain are typically thereby led to significant on-balance gains in virtue (say, compassion or fortitude).

Now, I take it that one of the great discoveries of ethics is the Socratic principle that virtue is a much more significant contributor to our well-being than painlessness. Given this principle and the hypothetical empirical data, it seems that we should not bother with giving pain-killers to people in pain—and this seems wrong. (One might think a stronger claim is true: We should cause pain to people. But that stronger claim would require consequentialism, and anyway neglects the very likely negative effects on the virtue of the person causing the pain.)

Given the hypothetical empirical data, what should we do about the above reasoning? Here are three possibilities:

  1. Take the Socratic principle and our intuitions about the value of pain relief to give us good reason to reject the empirical data.

  2. Take the empirical data and the Socratic principle to give us good reason to revise our intuition that we should relieve people’s pain.

  3. Take the empirical data and our intuitions about the value of pain relief to give us good reason to reject the Socratic principle.

Option 1 may seem a bit crazy. Admittedly, a structurally similar move is made when philosophers reject certain theodical claims, such as the Marilyn Adams claim that God ensures that all horrendous suffering is defeated, on the grounds that it leads to moral passivity. But it still seems wrong. If Option 1 were the right move, then we should now take ourselves (who do not have the hypothetical empirical data) to have a priori grounds to hold that serious physical pain does not typically lead to significant on-balance gains in virtue. But even if some armchair psychology is fine, this seems to be an unacceptable piece of it.

Option 2 also seems wrong to me. The intuition that relief of pain is good seems so engrained in our moral life that I expect rejecting it would lead to moral scepticism.

I think some will find Option 3 tempting. But I am quite confident that the Socratic principle is indeed one of the great discoveries of the human race.

So, what are we to do? Well, I think there is one more option:

  4. Reject the claim that the empirical data plus the Socratic principle imply that we shouldn’t relieve pain.

In fact, I think that even in the absence of the hypothetical empirical data we should go for (4). The reason is this. If we reject (4), then the above reasoning shows that we have a priori reasons to reject the empirical data, and I don’t think we do.

So, we should go for (4), not just hypothetically but actually.

How should this rejection of the implication be made palatable? This is a difficult question. I think part of the answer is that the link between good consequences and right action is quite complex. It may, for instance, be the case that there are types of goods that are primarily the agent’s own task to pursue. These goods may be more important than other goods, but nonetheless third parties should pursue the less important goods. I think the actual story is even more complicated: certain ways of pursuing the more important goods are open to third parties but others are not. It may even be that certain ways of pursuing the more important goods are not even open to first parties, but are only open to God.

And I suspect that this complexity is species-relative: agents of a different sort might have rather different moral reasons in the light of similar goods.

Monday, September 10, 2018

Infinity, Causation and Paradox: Kindle Edition

The Kindle edition of my Infinity, Causation and Paradox book is now out. Alas, the price is excessive (a few dollars cheaper than the hardcover), but for those who prefer electronic editions, or don't want to wait for the hardcover edition, it might be worth it.

Friday, September 7, 2018

Beauty and goodness

While listening to a really interesting talk on beauty in Aquinas, I was struck by the plausibility of the following idea (perhaps not Aquinas'): The good is what one properly desires to be instantiated; the beautiful is what one properly desires to behold. So the distinction between them is in how we answer Diotima's question about desire (or eros): what do we want to do with the object of desire?

Wednesday, September 5, 2018

Quasi-causation

You pray for me to get a benefit and God grants your prayer. The benefit is in an important sense a result of your prayer. But you didn’t cause the benefit, for if you had, it would have been an instance of causation with God as an intermediate cause, and it seems to violate divine aseity for God ever to be an intermediate cause.

Still, the relation of your prayer to the benefit is relevantly like a causal one. For instance, means-end reasoning applies just as it does to non-deterministic causal chains:

  • You want me to improve morally. I will improve morally if God gives me grace. So you pray that God gives me grace.

And I owe you gratitude, though I owe more to God.

There are even cases of blameworthiness where the action “goes through God”. For instance, it is a standard view (and dogma for Catholics) that God creates each soul directly. But a couple can be blameworthy for having a child in circumstances where the child can be reasonably expected to grow up morally corrupted (e.g., suppose that white supremacists are sure to steal one’s children if one has any). Or consider sacramental actions: a couple can be blameworthy for marrying unwisely, a priest for consecrating the Eucharist in a sacrilegious context, etc.

I call these sorts of relations “quasi-causal”. It would be good to have an account of quasi-causation.

Perhaps Lewis-style counterfactual accounts of causation, while not being good accounts of causation, nonetheless provide a good start at accounts of quasi-causation?

Are there any cases of quasi-causation that do not involve God? I am not sure. Perhaps constitutive explanations provide cases. Suppose your argument caused the other members of the committee to vote for the motion. Their voting for the motion partially constituted the passing of the motion. But perhaps it is not correct to say that you caused, even partially, the passing of the motion. For what you caused is the vote, and the vote isn’t the passing, but merely partially constitutive of it. But maybe we can say you quasi-caused the passing of the motion.

This post is really an invitation for people to work on this interesting notion. It also comes up briefly towards the end of my new infinity book (which is coming out in about two weeks).

Tuesday, September 4, 2018

Conciliationism with and without peerhood

Conciliationists say that when you meet an epistemic peer who disagrees with you, you should alter your credence towards theirs. While there are counterexamples to conciliationism, here is a simple argument that normally something like conciliationism is correct, without the assumption of epistemic peerhood:

  1. That someone’s credence in a proposition p is significantly below 1/2 is normally evidence against p.

  2. Learning evidence against a proposition typically should lower one’s credence.

  3. So, normally, learning that someone’s credence is significantly below 1/2 should lower one’s credence.

In particular, if your credence is above 1/2, then learning that someone else’s is significantly below 1/2 should normally lower your credence. And there are no assumptions of peerhood here.

The crucial premise is (1). Here is a simple thought: Normally, people’s credences are responsive to evidence. So when their credence is low, that’s likely because they had evidence against a proposition. Now the evidence they had either is or is not evidence you also have. If you know it is not evidence you also have, then learning that they have additional evidence against the proposition should normally provide you with evidence against it, too. If it is evidence you also have, that evidence should normally make no difference. You don’t know which of these is the case, but still the overall force of evidence is against the proposition.

One might, however, have a worry. Perhaps while normally learning that someone’s credence is significantly below 1/2 should lower one’s credence, when that someone is an epistemic peer and hence shares the same evidence, it shouldn’t. But actually the argument of the preceding paragraph shows that as long as you assign a non-zero probability to the person having more evidence, their disagreement should lead you to lower your credence. So the worry only comes up when you are sure that the person is a peer. It would, I think, be counterintuitive to think you should normally conciliate but not when you are sure the other person is a peer.

And I think even in the case where you know for sure that the other person has the same evidence you should lower your credence. There are two possibilities about the other person. Either they are a good evaluator of evidence or not. If not, then their evaluation of the evidence is normally no evidence either for or against the proposition. But if they are good evaluators, then their evaluating the evidence as being against the proposition normally is evidence that the evidence is against the proposition, and hence is evidence that you evaluated badly. So unless you are sure that they are a bad evaluator of evidence, you normally should conciliate.

And if you are sure they are a bad evaluator of evidence, well then, since you’re a peer, you are a bad evaluator, too. And the epistemology of what to do when you know you’re bad at evaluating evidence is hairy.

Here's another super-quick argument: Agreement normally confirms one's beliefs; hence, normally, disagreement disconfirms them.

Why do I need the "normally" in all these claims? Well, we can imagine situations where you have evidence that if the other person disbelieves p, then p is true. Moreover, there may be cases where your credence for p is 1.

Friday, August 31, 2018

Peers and twins

I just realized something that I should have known earlier. Suppose I have a doppelganger who is just like me and goes wherever I go—by magic, he can occupy a space that I occupy—and who always sees exactly what I see and who happened always to judge and decide just as I do. What I’ve just realized is that the doppelganger is not my epistemic peer, even though he is just like me.

He is not my peer because he has evidence that I do not and I have evidence that he does not. For I know what experiences I have and he knows what experiences he has. But even though my experiences are just like his, they are not numerically the same experiences. When he sees, it is through his eyes and when I see, it is through my eyes.

Suppose that on the basis of a perception of a distant object that looked like a dog I formed a credence of 0.98 that the object is a dog, and my doppelganger did the same thing. And suppose that suddenly a telepathic opportunity opens up and we each learn about the other’s existence and credences.

Then our credences that the distant object is a dog will go up slightly, because we will each have learned that someone else’s experiences matched up with ours. Given that the other person in this case is just like me, this doesn’t give me much new information. It is very likely that someone just like me looking in the same direction would see things the same way. But it is not certain. After all, my perception could still be due to a random error in my eyes. So could my doppelganger’s be. But the fact that our perceptions match up makes the random error hypothesis implausible, and hence it raises the credence that the object really is a dog. Let’s say our credences will go up to 0.985.
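The direction of that update can be checked with a toy Bayesian model (entirely my illustration, not the post’s: a prior pi that the object is a dog, and each observer misperceiving with probability e, independently):

```python
# Toy model: prior probability pi that the object is a dog; each observer
# reports "dog" with probability 1 - e if it is a dog, and with
# probability e if it is not (errors independent across observers).
def posterior_dog(pi: float, e: float, matching_dog_reports: int) -> float:
    like_dog = (1 - e) ** matching_dog_reports   # P(reports | dog)
    like_not = e ** matching_dog_reports         # P(reports | not a dog)
    return pi * like_dog / (pi * like_dog + (1 - pi) * like_not)

pi, e = 0.5, 0.02
print(posterior_dog(pi, e, 1))  # one observer: 0.98
print(posterior_dog(pi, e, 2))  # two agreeing observers: ~0.9996
```

The independent-error model overshoots the 0.985 above, which fits the point of the post: the doppelganger’s errors are highly correlated with mine, so his agreement adds only a little.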

Now suppose that instead this is a case of slight disagreement: His credence that there is a dog there is 0.978 and mine is 0.980, this being the first time we deviate in our whole lives. I think the closeness of the other’s judgment to mine is still evidence of correctness. So I think my credence, and his as well, should still go up. Maybe not to 0.985, but maybe to 0.983.

Wednesday, August 29, 2018

What I found in my mailbox

Update: The OUP website says that the official release date is September 16, 2018.

Two kinds of partial causation

It’s interesting that there are at least two significantly different kinds of partial causation. In both of the following cases it seems reasonable to say that x partially causes y:

  1. x and z together cause y

  2. x causes z and z is a part of y.

I.e., the partiality can be on either side of the causal relation. And one might even combine the two, no?

My previous post was about partial causation where the partiality was on the side of the cause, not the side of the effect.

Partial causation and causeless events

  1. If ordinary events can happen without any cause at all, they can happen with a partial cause and no full cause.

  2. A partial cause is a part of a full cause.

  3. Nothing can happen with a partial cause and no full cause.

  4. So, ordinary events cannot happen causelessly.

The argument for (2) is that (2) is the most obvious way to define a partial cause.

The argument for (1) is as follows. Suppose you and I lift a sofa in world w1 in such a way that the exertion of each of us only partly explains the rising of the sofa, as neither exertion is enough to cause the rising. If ordinary events can happen without any cause at all, there is a world w2 where the sofa rises causelessly, with neither you nor I doing anything. But if w1 and w2 are possible, likewise a world w3 is possible where only I exert myself just as in w1 and you sit back and the sofa rises in response to my exertion, and nothing else causally impacts the sofa’s rising. Since my exertion is not enough to cause the rising of the sofa in w1, and in w3 I exert myself to the same degree, my exertion is no more a full cause of the sofa’s rising in w3 than it is in w1. Hence, in w3, I partially cause the sofa to rise, without there being a full cause, just as the consequent of (1) claims.

If I were inclined to deny (4), I would want to argue that (2) is not the right way to define a partial cause. But I don’t know a better way.

Saturday, August 25, 2018

Internal time and God

  1. The internal time of a substance is constituted by the causal order within its accidents.
  2. But God is a substance that has no accidents.
  3. So God has no internal time.
Pity that both premises are controversial.

Beliefless Christianity

A number of authors have claimed that it is possible to practice the Christian faith without assigning a high epistemic probability to central doctrines of Christianity. Here is an interesting problem with such a practice. A central part of Christian practice is to worship Jesus Christ as God. Now, Jesus Christ is uncontroversially a man. Christianity adds that he is also God. If that additional belief is false, then we who worship Jesus Christ as God are idolaters. But it is wrong to undertake a serious risk of idolatry. Thus, it is only permissible to practice the Christian faith if by one's lights the risk of idolatry is not serious. And the only way that can be is if one assigns a high epistemic probability to the doctrine that Jesus Christ is God. Thus, it seems, at least this central doctrine of the Incarnation needs to have a high epistemic probability if one is to be morally justified in practicing the Christian faith.

There is, however, a hole in the argument. Idolatry is only a great evil if God exists. Now imagine someone who assigns a high conditional probability to the Incarnation on the condition that God exists, but who assigns a low unconditional probability to both the Incarnation and the existence of God. Such a person can reason as follows. Either God exists or not. If God does not exist, there is not much evil in idolatry, and so not much harm in worshiping Jesus as God. If God does exist, however, then probably the Incarnation is true, and the value of worshiping Jesus outweighs the risks, since the risks are small.
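One can put this person’s reasoning in rough expected-cost terms (the symbols are my illustration: g = P(God exists), i = P(Incarnation | God exists), and C = the moral cost of idolatry given that God exists, the cost being negligible otherwise):

```latex
\text{expected idolatry cost of worshiping Jesus} \;\approx\; g\,(1-i)\,C,
```

which is small whenever i is close to 1, even if g—and hence the unconditional probability gi of the Incarnation—is low.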

So, what I think my overall argument shows is that it is wrong to practice the Christian faith without assigning a high epistemic probability to the doctrine of the Incarnation if one assigns a significantly higher epistemic probability to theism. Thus, someone who comes to be convinced that theism is true but assigns a low epistemic probability to Christianity should not practice Christianity.

Objection: Perhaps it is just as morally evil to fail to worship as God someone who is in fact God as it is to worship as God someone who is not. In that case, by not practicing Christianity, one also takes on a great moral risk, and perhaps the risks cancel out.

Response: I think it is not just as morally evil to fail to worship as God someone who is in fact God. As far as we know, John the Baptist did not worship Jesus as God, but we have no reason to think that this was a great evil, on a par with idolatry.

Thursday, August 23, 2018

Another way out of the closure argument

Consider this standard closure-of-the-physical argument for physicalism (Papineau gives one very close to this):

  1. Our conscious states have physical effects.

  2. All physical effects are fully caused by physical causes.

  3. There is (typically) no overdetermination.

  4. So, our conscious states are (at least typically) physical.

Many dualists question (2), and epiphenomenalists question (1). But there is another move that seems to me to be promising.

When we say that our conscious states have physical effects, we don’t mean that our conscious states are the full causes of physical effects. Descartes himself would say that the movements of the particles in the pineal gland are partly caused by the conscious choice and partly caused by the prior state of the particles.

In other words, (1) just tells us that our conscious states are partial causes of physical effects. Given this, what (1)–(3) license us in concluding is only:

  5. Our conscious states are (at least typically) parts of physical causes.

But to conclude from (5) that our conscious states are physical, it seems we need some premise like:

  6. All the parts of physical things are physical.

But (6) is worth questioning. Note first that it is easier to find false than true cases of principles like:

  7. All the parts of Fs are Fs.

(E.g., electrons are parts of red things, but electrons aren’t red.) So why think (6) is true?

So, it seems that (6) needs some argument.

And in fact there are serious metaphysical views on which (6) is false. Consider, for instance, bundle theory: substances are bundles of properties. Well, rocks are physical objects, but a part of the bundle that makes up a rock will be the abstract entity of rockiness. But abstract entities aren’t physical.

Or take a reading (perhaps a misreading) of Leibniz on which physical objects are constituted by non-physical monads, and suppose that constituents count as parts.

Or, most promisingly, take Aristotle’s view on which all physical objects have form. Form is immaterial, and plausibly non-physical. Hylomorphism thus escapes the closure argument.

More generally, for all we know, the fundamental structure of reality is such that physically fundamental things are not ontologically fundamental but themselves have parts that are not physical.

Evan Rosa's interview with me

I just happened to come across an interview that Evan Rosa did with me about half a decade ago, when my One Body book came out. As far as I can tell, the interview was only posted last month.

Justice and gratitude

It is galling to be punished or even criticized unjustly. But it can also be galling to be rewarded or even praised unjustly. Over the past two years, two of my graduate students have received grants. They did all the work. But because of university policy, I had to be listed as the PI on the grants. And I’ve been getting multiple letters from the administration congratulating me on the grants. That’s galling.

I think God would be similarly galled if he were thanked for something he didn’t do, unless he did something just as good or better. And so God would have strong reason to act to ensure that such thanks would not be forthcoming.

Thus, we have reason to think that whatever people sincerely thank God for, God has either done that—or something at least as good—for them. In particular, we have reason to think that God has become incarnate and died for our sins or has done something at least as good.

Notice an interesting way that this argument makes available something like an implicit faith to non-Christian theists. For non-Christian theists also have reason to believe, on the strength of this argument, that God did something at least as good as what Christianity says he did, and to thank God for doing this. If they then thank God for "doing something at least this good", they would be implicitly thanking God for the Incarnation and Redemption, since in fact that is something God did that was "at least this good".

Wednesday, August 22, 2018

Dentistry, deontology, Double Effect and hypnosis

You are a dentist and a teenage Hitler comes to you to have a bad tooth removed. You only have available an anaesthetic with this feature: within eight hours of the start of anaesthesia, a neutralizer must be given, otherwise the patient dies. This is not a problem: the extraction will only take an hour.

You remove the tooth, and are about to administer the neutralizer when you learn that if Hitler survives, he will kill tens of millions of people. And now you face the question of whether to save the life of a person who will kill millions if saved. You apply the Principle of Double Effect and check whether the conditions are satisfied:

  • Your end is good: Yup, saving the life of an innocent teenager.

  • The action is good or neutral in itself: Yes, administering a neutralizer.

  • The foreseen evils are not intended by you either as a means or as an end: Yes, you do not intend the deaths either as an end or as a means.

  • The foreseen evil is not disproportionate to the intended good: Ah, here is the rub. How can the deaths of tens of millions not be disproportionate to the saving of the life of one?

So it seems that the Principle of Double Effect forbids you to administer the neutralizer, and you must allow Hitler to die. In so doing, you will be violating your professional code of ethics, and you will no doubt have to resign from the dental profession. But at least you won’t have done something that would cover the world with blood.

This is still counterintuitive to me. It feels wrong for a medical professional to deliberately stop mid-procedure in this way.

One can try to soften the worry by thinking of other cases. Suppose that the neutralizer bottle has been linked by a terrorist to a bomb a mile away, so that picking up the bottle will result in the death of dozens of people. In that case it is clearly wrong for the dentist to complete the operation. But the Hitler case still feels different, because it is the very survival of Hitler that one doesn’t want to happen. It is a bit more like a case where the terrorist informs you that if the patient survives the procedure, the terrorist will kill many innocents. I still think that in that case you shouldn’t finish the procedure. But it’s a tough case.

Suppose you are with me so far. Now, here is a twist. You learn of Hitler’s future murders prior to the start of the procedure. You are the only dentist around. Should you perform the procedure?

Here are four possible courses of action:

  1. You do nothing. The teenage Hitler suffers toothache for many a day, and then later on kills tens of millions.

  2. You perform the extraction without anaesthesia. The teenage Hitler suffers excruciating pain, and then later on kills tens of millions.

  3. You perform the procedure, including both anaesthesia and neutralizer. The teenage Hitler’s pain is relieved, but then later on he kills tens of millions.

  4. You administer the anaesthesia, remove the bad tooth, and stop there. The teenage Hitler dies, but the world is a far better place.

Assume for simplicity that it is the same tens of millions who die in cases 1, 2 and 3.

So, now, which course of action should you intend to embark on? Option 4, while consequentialistically best, is not acceptable given correct deontology (if you are a consequentialist, the rest won’t be very interesting to you). For if you intend to go for Option 4, you will do so in order to kill Hitler by administering the anaesthesia while planning not to administer the neutralizer. And that’s wrong, because he is a juridically innocent teenager.

Option 3 seems clearly morally superior to Options 1 and 2. After all, one innocent person—the teenage Hitler—is better off in Option 3, and nobody is worse off there.

But you cannot morally go through with Option 3. For as soon as you’ve applied the anaesthesia, the Double Effect reasoning we went through above would prohibit you from applying the neutralizer. So Option 3 is not available to you if you expect to continue to act morally, because if you continue to act morally, you will be unable to administer the neutralizer.

What should you do? If you had a time-delay neutralizer, that would be the morally upright solution. You give the time-delay neutralizer, administer anaesthesia, remove the bad tooth, and you’re done. Tens of millions still die, but at least this innocent teenager won’t be suffering. It seems a little paradoxical that Option 3 is morally impossible, but if you tweak the order of the procedures by using a time-delay, you get things right. But there really is a difference between the time-delay case and Option 3. In Option 3, your administering the neutralizer kills tens of millions. But administering the time-delay neutralizer prior to the procedure doesn’t counterfactually result in the deaths of tens of millions, because had you not administered the time-delay neutralizer, you would either not have administered the anaesthesia (Option 2) or not have performed the procedure at all (Option 1), and so tens of millions would still die.

Here is another interesting option. Suppose you could get yourself hypnotized so that as soon as the tooth is removed, you just find yourself administering the neutralizer with no choice on your part. That, I think, would be just like the time-delay neutralizer, and thus it seems permissible. But on the other hand, it seems that it is wrong to get yourself hypnotized to involuntarily do something that it would be wrong to do voluntarily, and to administer to Hitler the neutralizer after the anaesthesia is something that it would be wrong to do voluntarily. Perhaps, though, it is always wrong to get yourself hypnotized with the intention of taking away your freedom of choice (maybe that’s a failure of respect for oneself)? Or maybe it is sometimes permissible to hypnotize yourself to involuntarily do something that it would be wrong to voluntarily do. (Here is a case that seems acceptable. You hypnotize yourself to involuntarily say: “I am now speaking involuntarily.” It would be a lie to say that voluntarily!)

Tuesday, August 21, 2018

Time and marriage

Consider this sequence of events:

  • 2000: Alice marries Bob.

  • 2010: Bob dies.

  • 2020: Alice marries Carl.

  • 2030: Alice and Carl invent a time machine and travel to 2005, where they meet Bob.

Then, in 2005, Alice is married to Bob and Alice is married to Carl. But she is not a bigamist.

Hence, marriage is not defined by external times like 2005, but by internal times, like “the 55th year of Alice’s life”. To be a bigamist, one needs to be married to two different people at the same internal time. A marriage taken on at one internal time continues forward in the internal future.

And while we’re at it, the twin paradox shows that it is possible for two people to be married to each other and for one to have been married 10 years and the other to have been married 30 years. Again, it’s the internal time that matters for us.

Monday, August 20, 2018

Tropes of tropes

Suppose that x is F if and only if x has a trope of Fness as a part of it.

Here is a cute little problem. Suppose Jim is hurting and has a trope of pain, call it Pin. But Pin is an improper part of Pin. Thus, Pin has a trope of pain—namely itself—as a part of it, and hence Pin is hurting. Thus, wherever someone is hurting, there is something else hurting, too, namely their pain.

The standard move against “too many thinkers” problems is to say that one of the thinkers is thinking derivatively. But if we do that, then it looks like the fact that Jim is hurting is more likely to be derivative than the fact that Pin is hurting. For Jim hurts in virtue of having Pin as a part of him, while Pin hurts in virtue of having itself as a part of it, which seems a non-derivative way of hurting. But it seems wrong to say that Jim is hurting merely derivatively, with Pin as the real subject of the pain.

An easy solution is to say that x is F if and only if x has a trope of Fness as a proper part of it.

But this leads to an ugly regress. A trope is a trope, so it must have a trope of tropeness as a proper part of it. The trope of tropeness is also a trope, so it must then have another trope of tropeness as a proper part and so on. (This isn’t a problem if you allow improper parthood, as then you can arrest the regress: the trope of tropeness has itself as an improper part, and that’s it.)

One can, of course, solve the problem by saying that the trope theory only applies to substances: a substance x is F if and only if x has a trope of Fness as a proper part of it, while on the other hand, tropes can have attributes without these attributes being connected with the tropes having tropes. But that seems ad hoc.

As a believer in Aristotelian accidents and forms, which are both basically tropes, I need to face the problem, too. I have two ways out. First, maybe all tropes are causal powers. Then we can say that if “is F” predicates a power, then x is F if and only if x has a trope of Fness as a proper part. But for attribution of non-powers, we have a different story.

Second, maybe the relation between objects and their tropes is not parthood, but some other primitive relation. Some things stand in that relation to themselves (maybe, a trope of tropeness stands in that relation to itself) and others do not (Pin is not so related to itself). This multiplies primitive relations, but only if the relation of parthood is a primitive relation in the system.

Saturday, August 18, 2018

An argument that motion doesn't supervene on positions at times

In yesterday’s post, I offered an argument by my son that multilocation is incompatible with the at-at theory of motion. Today, I want to offer an argument for a stronger conclusion: multilocation shows that motion does not even supervene on the positions of objects at times. In other words, there are two possible worlds with the same positions of objects at all times, in one of which there is motion and in the other there isn’t.

The argument has two versions. The first supposes that space and time are discrete, which certainly seems to be logically possible. Imagine a world w1 where space is a two-dimensional grid, labeled with coordinates (x, y) where x and y are integers. Suppose there is only one object, a particle quadlocated at the points (0, 0), (1, 0), (0, 1) and (1, 1). These points define a square. Suppose that for all time, the particle, in all its four locations, continually moves around the square, one spatial step per temporal step, in this pattern:

(0, 0)→(1, 0)→(1, 1)→(0, 1)→(0, 0).

Then at every moment of time the particle is located at the same four grid points. But it is also moving all the time.

But there is a very similar world, w2, with the same grid and the same multilocated particle at the same four grid points, but where the particle doesn’t move. The positions of all the objects at all the times in w1 and w2 are the same, but w1 has motion and w2 does not.
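A small sketch may make the w1/w2 comparison vivid (the code is my illustration, not part of the argument): each of the particle’s four located instances advances one corner per time step, yet the set of occupied grid points never changes, and that set is exactly the position data that w2’s motionless particle also exhibits.

var corners = [[0, 0], [1, 0], [1, 1], [0, 1]];
var locations = [0, 1, 2, 3]; // indices into corners: the particle occupies all four
for (var t = 1; t <= 4; t++) {
    // Each located instance moves to the next corner of the square.
    locations = locations.map(function (i) { return (i + 1) % 4; });
    var occupied = locations.slice().sort().map(function (i) {
        return "(" + corners[i] + ")";
    });
    console.log("t=" + t + ": " + occupied.join(" ")); // always the same four points
}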

Suppose you don’t think space and time can be discrete. Then I have another example, but it involves infinite multilocation. Imagine a world w3 where the universe contains a circular clock face plus a particle X. None of the particles making up the clock face move. But the particle X uniformly moves clockwise around the edge of the clock face, taking 12 hours to do the full circle. Suppose, further, that X is infinitely multilocated, so that it is located at every point of the edge of the clock face. In all its locations X moves around the circle. Then at every moment of time the particle is located at the same points, and yet it is moving all the time.

Now imagine a very similar world w4 with the same unmoving clock face and the same spacetime, but where the particle X is eternally still at every point on the edge of the clock face. Then w3 and w4 have the same object positions at all times, but there is motion in w3 and not in w4.

I think the at-at theorist’s best bet is just to deny that there is any difference between w1 and w2 or between w3 and w4. That’s a big bullet to bite, I think.

It would be nice if there were some way of adding causation to the at-at story to solve these problems. Maybe this observation would help: When the particle in w1 moves from (0, 0) to (1, 0), maybe this has to be because something exercises a causal power to make a particle that was at (0, 0) be at (1, 0). But there is no such exercise of a causal power in w2.

Friday, August 17, 2018

Bilocation and the at-at theory of motion

I was telling my teenage children about the at-at theory of motion: an object moves if and only if it is in one location at one time and in another location at another time. And then my son asked me a really cool question: How does this fit with the possibility of being multiply located at one time?

The answer is it doesn’t. Imagine that Alice is bilocated between disjoint locations A and B, and does not move at either location between times t1 and t2. Nonetheless, by the at-at theory, Alice counts as moving: for at t1 she is in location A while at t2 she is in location B.

My response to my son was that this was the best argument I had heard against the at-at theory. My son responded that the argument doesn’t work if multilocation is impossible. That’s true. But there is good reason to think bilocation is possible. First, the real presence of Christ in the Eucharist appears to require multilocation. Second, God is present everywhere, but never moves. Third, there is testimonial evidence of saints bilocating. Fourth, the argument only needs the logical possibility of bilocation. Fifth, time-travel would make it possible to stand beside oneself.

(The time-travel case is probably the least compelling, though, as an argument against the at-at theory. For the at-at theorist could say that the times in the definition of motion are internal times rather than external ones, and time travel only allows one to be in two places at one external time.)

I’ve been inclined to think the at-at theory is inadequate. Now I am pretty much convinced, but I am not sure what alternative to embrace.

One might just try to tweak the at-at theory. Perhaps we say that an object moves if and only if the set of its locations is different between times. But that isn’t right. Suppose Alice is bilocated between locations A and B at t1, but at t2 she ceases to bilocate, defaulting to being in location A. Then the set of locations at t1 is {A, B} while at t2 it is {A}. But Alice hasn’t moved: cessation of bilocation isn’t motion. Nor will it help to require that the sets of locations at the two times have the same cardinalities. For imagine that Alice is bilocated at locations A and B at t1, and then she ceases to be located at B, defaulting to A, and walks over to location A′ at t2. Then Alice has moved, but the sets of locations at t1 and t2 have different cardinalities. I don’t know that there is no tweak to the at-at theory that might do the job, but I haven’t found one.

Scattered thoughts on self-identification

Among other things, I am a mathematician and a Wacoan. It is moderately important to my self-image, my “identity”, that I practice mathematics and that I live in Waco. But there is an important difference between the two contributions. My identifying as a mathematician also includes a certain kind of “fellow feeling” towards other mathematicians qua mathematicians, a feeling of belonging in a group, a feeling as of being part of a “we”. But while I love living in Waco, I do not actually have a similar “fellow feeling” towards other Wacoans qua Wacoans, a feeling as of being part of a “we” (perhaps I should). It’s just that I do not exemplify the civic friendship that Aristotle talks about.

An initial way of putting the distinction is this:

  1. identifying with one’s possession of a quality versus identifying with being a member of the group of people who possess the quality.

This correctly highlights the fact that self-identification is hyperintensional, but it’s not quite right. Two finalists for some distinction can identify with being a member of the group of people who are finalists, and yet they need not—but can—have a “we”-type identification with this group.

It seems to me that the distinction I am after cannot be captured by egocentric facts about property possession. The “we”-type of identification is not a self-identification of oneself as having a certain quality. It seems to me that we have two different logical grammars of self-identification:

  2. (a) identifying with one’s possession of a quality versus (b) identifying with the group of people who possess the quality.

I think some people go more easily from (a) to (b), and some people—including me—go less easily.

I wonder if it is possible to have (b) without (a). I don’t know, but I suspect one can. It may be that some herd animals have something like (b) without having anything like (a). So why couldn’t humans?

I think the move from (a) to (b) tends to be a good thing, as it is expressive of the good of sociality.

There are also second- and third-person analogues to (2):

  3. (a) identifying a person with their possession of a quality versus (b) identifying them with the group of people who possess the quality.

Regarding (b), I am reminded of Robert Nozick’s remark that people in romantic relationships want to be acknowledged as part of a “we”. In other words, people in romantic relationships want second- and third-person identification of them as part of the pair (a kind of group) of people in the particular relationship. I wonder if that’s possible without (a). Again, I am not sure.

I think 3(a) and 3(b) have a potential for being dangerous. One thinks of stereotyping here.

I think 2(a) and 2(b) also have a potential for danger, albeit a different one. The danger is that both kinds of self-identification lead to an inflexibility with respect to the quality or community. But sometimes we need to change qualities or communities, or they are changed on us. I suppose 2(a) and 2(b) are not so problematic with respect to qualities or groups that one ought to maintain oneself as having or belonging to (e.g., virtue or the Church).

Thursday, August 16, 2018

Evil artifacts

Short version of my argument: Artifacts can be evil, but nothing existent can be evil, so artifacts do not exist.

Long version:

  1. Paradigmatic instruments of torture are evil.

  2. Nothing that exists is evil.

  3. So, paradigmatic instruments of torture do not exist.

  4. All non-living complex artifacts are ontologically on a par.

  5. Paradigmatic instruments of torture are inorganic complex artifacts.

  6. So, non-living complex artifacts do not exist.

The argument for 1 is that paradigmatic instruments of torture are defined in part by their function, which function is evil.

The argument for 2 is:

  7. Everything that exists is either God or created by God.

  8. God is not evil.

  9. Nothing created by God is evil.

  10. So, nothing that exists is evil.

I think 4 is very plausible, and 5 is uncontroversial.

(My argument for nihilism about artifacts is inspired by a rather different but also interesting theistic argument for the same conclusion that Trent Dougherty just sent me, but his argument did not talk of evil.)

Wednesday, August 15, 2018

Natural hope

One of the striking things to me about Aristotle is the pessimism. For instance, in Book IX of the Nicomachean Ethics, we’re told that vicious persons shouldn’t even love themselves, and that when one friend sufficiently outstrips another in moral excellence—whether through the one improving or the other declining—the friendship must be dropped. I do not see in Aristotle a virtue of hope, say, a hope that the vicious may improve. For the wicked, there is just despair. (Aristotle’s odious doctrine of “natural slavery” has some similarities.)

Christianity, on the other hand, professes hope to be a virtue. But the hope that Christianity talks of is a supernatural infused virtue, a virtue that comes only as a gift of God’s grace. And Aristotle, of course, is interested in the natural virtues.

But grace builds on nature. So one would expect there to be a natural counterpart to the supernatural virtue of hope. Compare how there are natural loves that are a counterpart to the supernatural virtue of charity. There should be a natural virtue of hope, too.

But given the dark empirical facts about humanity, a habit of hope apart from grace would seem to be an irrational optimism rather than a virtue.

Perhaps, though, there is something in between irrational optimism and supernatural hope: perhaps there is room for a hope grounded in natural theology. Natural theology teaches that there is a perfectly good God. Yet there is so much that is awful in the world. But given theism there is good reason to think that the future will bring something better, and hence there is a natural justification for hope.

I am not sure I want to say that natural hope requires actual belief in God. But for that hope to be a virtue and (hence) a part of a rational state of mind, it may well require that the hoping individual be in an epistemic position to rationally believe that there is a God. Thus, natural hope’s being a virtue seems to require that hopers be in a position to believe that there is a God.

Aristotle, of course, did believe in a God, or gods. But these gods were uninvolved with human affairs, and hence not a good ground for hope.

Reflecting on the above, it seems to me that to overcome the pessimism of Aristotle, one needs more than just a remote hope: one needs a seriously robust hope.

Monday, August 13, 2018

Calling for an explanation

If I am playing a board game and the last ten rolls of my die were 1, that calls out for an explanation. If only Jewish and Ethiopian people get Tay-Sachs disease, that calls out for an explanation.

It seems right to say that

  1. a fact calls out for an explanation provided it is the sort of fact that we would expect to have an explanation, a fact whose nature is such that it "should" have an explanation, a fact such that we would be disappointed in reality if it had no explanation.

But now consider two boring facts:

  2. 44877 x 5757 = 258356889

  3. Bob is wearing a shirt

These are facts that we all expect to have explanations (e.g., the explanation of (2) is long and boring, involving many instances of the distributive law, and the explanation of (3) presumably has to do with psychosocial and physical facts). They are, moreover, facts that "should" have an explanation. There would be something seriously wrong with logic itself if a complex multiplication fact had no explanation (it's certainly not a candidate for being a Goedelian unprovable truth), and with reality if people wore shirts for no reason at all.
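For concreteness, here is how the boring explanation of (2) begins, unwinding a single round of the distributive law (the particular decomposition is my illustration):

$$44877 \times 5757 = 44877 \times (5000 + 700 + 50 + 7) = 224385000 + 31413900 + 2243850 + 314139 = 258356889.$$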

So by (1), these would have to be facts that call out for an explanation. But I don't hear their cry. I am confident that they have explanations, but I wouldn't say that they call out for them. So it doesn't seem that (1) captures the concept of calling out for an explanation.

As I reflect on cases, it seems to me that calling out for an explanation has more to do with the intellectual desirability of having an explanation than with the expectation of there being one. Someone with a healthy level of curiosity would want to know why the last ten rolls were 1 or why only Jewish and Ethiopian people get Tay-Sachs. On the other hand, while I'm confident that there is a fine mathematical reason why 44877 x 5757 = 258356889, I have no desire to know that reason, even though I have at least a healthy degree of curiosity about mathematics.

This suggests to me an anthropocentric (and degreed) story like the following:

  4. A fact calls out for an explanation to the degree that one would be intellectually unfulfilled in not knowing an explanation.

It is sometimes said that a fact's calling out for an explanation is evidence that it has an explanation. I think (4) coheres with this. That something is needed for our fulfillment is evidence that the thing is possible. For beings tend to be capable of fulfillment. (This is a kind of cosmic optimism. No doubt connected to theism, but in what direction the connection runs needs investigation.)

Sunday, August 12, 2018

Generate bookmarklet dynamically from gist

Let's say you want to make some bookmarklets available to readers of your website, and you want to be able to update them conveniently without having to re-encode your javascript into a bookmarklet and edit your website html. Here's a simple method. Post the bookmarklet's javascript on gist.github.com, and then edit and use the following html/javascript code to fetch the javascript and automatically generate a bookmarklet:

<p>My bookmarklet is here: <a href="__error__" id="myBookmarklet1">My Bookmarklet</a>.</p>
<script>
// Id of the link whose href will be replaced by the generated bookmarklet.
var linkId = "myBookmarklet1";
// Raw URL of the gist holding the bookmarklet's javascript source.
var gistLink = "https://gist.githubusercontent.com/arpruss/74abc1bc95ae08e543b9b74f15a23b07/raw";
fetch(gistLink).then(function(response) {
    if (response.ok) {
        response.text().then(function(text) {
            // Wrap the fetched source in an immediately invoked function,
            // URI-encode it, and install it as a javascript: URL on the link.
            var link = document.getElementById(linkId);
            link.href = "javascript:" + encodeURIComponent("(function(){" + text + "})()");
        });
    }
    // On a failed fetch, the link simply keeps its placeholder href.
});
</script>

For a live example, see my previous post.

Fix aspect ratio of online videos

My wife and I were watching Mr. Palfrey of Westminster on Acorn, and the aspect ratio on s2e1 was 11% off. It was really annoying me (especially before I realized it was just that one episode that was bad). So I wrote a little bookmarklet to adjust the aspect ratio of all html5 videos in a web page.

Here it is: Stretch Video.

To use it, drag it from the above link to your browser’s bookmark bar (which you can show and hide in Chrome with shift-ctrl-b). Then when you have the video on your screen, click on the bookmark and enter the horizontal and vertical stretch ratios, or the correct aspect ratio.

For full-screen video, try first resizing and then switching to full-screen (on some websites, like YouTube, there will be a one second delay before the video stretches on full-screen toggle). (On Firefox, you can also pull up bookmarks in full-screen mode with shift-ctrl-b, which helps.)

To cancel the effect, just reload your video page.
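The linked bookmarklet is the authoritative version, but the core trick is simple enough to sketch (the code below is my illustration, not the posted source):

// Stretch every HTML5 video on the page by user-supplied factors.
(function () {
    var h = parseFloat(prompt("Horizontal stretch factor:", "1"));
    var v = parseFloat(prompt("Vertical stretch factor:", "1"));
    if (!(h > 0 && v > 0)) return; // cancelled or invalid input
    var videos = document.getElementsByTagName("video");
    for (var i = 0; i < videos.length; i++) {
        videos[i].style.transform = "scaleX(" + h + ") scaleY(" + v + ")";
    }
})();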

And for fun, here is a Video Rate bookmarklet (we wouldn't want to treat space very differently from time, would we?).
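The playback-rate trick is analogous; a minimal sketch (again mine, not the posted code):

// Set the playback rate of every HTML5 video on the page.
(function () {
    var r = parseFloat(prompt("Playback rate (1 = normal):", "1"));
    if (!(r > 0)) return; // cancelled or invalid input
    var videos = document.getElementsByTagName("video");
    for (var i = 0; i < videos.length; i++) {
        videos[i].playbackRate = r;
    }
})();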

Public domain source code is here.

Friday, August 10, 2018

Mathematical structures, physics and Bayesian epistemology

It seems that every mathematical structure (there are some technicalities as to how to define this) could metaphysically be the correct description of fundamental physical structure. This means that making Bayesianism the whole story about epistemology—even for idealized agents—is a hopeless endeavor. For there is no hope for an epistemologically useful probability measure over the collection of all mathematical structures unless we rule out the vast majority of structures by assigning them zero probability.
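A standard measure-theoretic fact lies behind this (my gloss, not in the original post): a probability measure can give positive weight to at most countably many points, since

$$\{\omega : P(\{\omega\}) > 0\} = \bigcup_{n=1}^{\infty} \{\omega : P(\{\omega\}) \geq 1/n\},$$

and each set in the union has at most n members. So if the candidate structures form an uncountable collection (let alone a proper class), any probability assignment over them must give all but countably many of them zero probability.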

A natural law or divine command epistemology can solve this problem by requiring us to assign zero probability to some non-actual physical structures that are metaphysically possible but that our Creator wants us to be able to rule out a priori. In other words, our Creator can make us so that we only take epistemically seriously a small subset of the possibilia. This might help with the problem of scepticism, too.

Thursday, August 9, 2018

Two puzzles about pain and time

Suppose the growing block theory of time is correct and you have a choice between two options:

  A. You suffer 60 minutes of pain from 10:30 pm to 11:30 pm.

  B. You suffer 65 minutes of pain from 10:50 pm to 11:55 pm.

Clearly, all other things being equal, it is irrational to opt for B. But supposing growing block theory is true, there are only past and present pains, and no future pains, so why is it irrational to opt for B?

Well, maybe rationality calls on us to make future reality better, and we have:

  1. If you opt for A, then at 11:55 reality will contain 60 minutes of pain.

  2. If you opt for B, then at 11:55 reality will contain 65 minutes of pain.

Opting for B will make reality worse (for you) at 11:55, so it seems irrational to choose B. However, we also have facts like these:

  3. If you opt for A, then at 11:30 reality will contain 60 minutes of pain.

  4. If you opt for B, then at 11:30 reality will contain 40 minutes of pain.

Thus, opting for A will make reality worse at 11:30. Why should the 11:55 comparison trump the 11:30 comparison?

One answer is this: The 11:55 comparison continues forever. If you choose B, then reality tomorrow, the day after tomorrow, and so on will be worse than if you choose A, as on all these days reality will contain the 65 minutes of past pain instead of the mere 60 minutes you would have if you chose A.

However, this answer isn’t the true explanation. For suppose time comes to an end tonight at midnight. Then it’s still just as obvious that you should opt for A instead of B. However, now, it is only during the ten minute period after 11:50 pm and before midnight that reality-on-B is worse than reality-on-A, while reality-on-A is worse than reality-on-B during the whole of the 80 minute period strictly between 10:30 pm and 11:50 pm. It is mysterious why the comparison during the 10 minute period starting 11:50 pm should trump the comparison during the 80 minute period ending at 11:50 pm.

I suppose the growing blocker’s best bet is to say that later comparisons always trump earlier ones. It is mysterious why this is the case, though.

The story is also puzzling for the presentist, as I discuss here. But there is no problem for the eternalist: on B reality always contains more pain than on A.

However, there is a different puzzle where the growing blocker can tell a better story than the eternalist. Suppose you will live forever, and your choice is between:

  C. You will feel pain from 10 pm to 11 pm every day starting tomorrow.

  D. You will feel pain from 9 am to 11 am every day starting tomorrow.

Intuitively, you should go for C rather than D. But on eternalism, on both C and D reality includes an equal infinite number of hours of pain. But on growing block, after 9 am tomorrow, reality will be worse for you if you choose D rather than C. Indeed, at every time after 9 am tomorrow, on option D reality will contain at least twice as much pain for you as on option C (bracketing any pains prior to 9 am tomorrow). So it’s very intuitive that on growing block you should choose C.
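A quick check of the accumulation arithmetic (mine, not in the post): after n complete days,

$$\text{past pain on C} = n \text{ hours}, \qquad \text{past pain on D} = 2n \text{ hours},$$

and at in-between times D’s total never falls below twice C’s, since each day D’s two morning hours are already deposited before C’s single evening hour begins.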

Maybe, though, the eternalist can say that utility comparisons involving infinities are just going to be counterintuitive, because infinities are innately counterintuitive: our intuitions are designed/evolved for dealing with finite cases. Moreover, we can tell similar puzzles involving infinities without invoking theories of time. For instance, suppose there is an infinite line of people numbered 1, 2, 3, …, all of whom are suffering headaches, and you have a choice whether to relieve the headaches of the persons whose numbers are even or of the persons whose numbers are prime. The intuition that C is better than D seems to be exactly parallel to the intuition that it’s better to benefit the even-numbered rather than the prime-numbered. But the latter intuition is not defensible. (Imagine reordering the people so that now the formerly prime-numbered are even-numbered and vice versa; such a reordering, spelled out below, surely shouldn’t make any moral difference.) So perhaps we need to give up the intuition that C is better than D?
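To spell out the reordering (my gloss): the even numbers and the primes are both countably infinite, so list each in increasing order, the evens as $e_1 < e_2 < e_3 < \dots$ and the primes as $p_1 < p_2 < p_3 < \dots$, and apply the permutation

$$\sigma = \prod_{k \ge 1} (e_k \; p_k),$$

which swaps the person at position $e_k$ with the person at position $p_k$ for each $k$ and leaves everyone else fixed. The result is a mere rearrangement of the same line of sufferers, yet now the formerly prime-numbered occupy the even positions; so if rearrangement makes no moral difference, relieving the even-numbered cannot be better than relieving the prime-numbered.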