Tuesday, January 31, 2017

Tim O'Connor coming to Baylor

We hired Tim O'Connor starting Fall 2017. It's very exciting for metaphysics at Baylor.

Humean metaphysics implies Cartesian epistemology

Let’s assume two theses:

  1. Humean view of causation.

  2. Mental causalism: mental activity requires some mental states to stand in causal relations.

If I accept these two theses, then I can a priori and with certainty infer a modest uniformity of nature thesis. Here’s why. On mental causalism, mental activity requires causation. On Humeanism, causation depends on the actual arrangement of matter. If the regularities found in my immediate vicinity do not extend to the universe as a whole, then there are no causal laws or causal relations. Thus, given causalism and Humeanism, I can infer a priori and with certainty from the obvious fact that I have mental states that there are regularities in the stuff that my mind is made of that extend universally. In other words, we get a Cartesian-type epistemological conclusion: I think, so there must be regularity.

In other words, Humean metaphysics of nature plus a causalist theory of mind implies a radically non-Humean epistemology of nature. The most plausible naturalist theories of mind all accept causalism. So, it seems, a Humean metaphysics of nature plus naturalism—which is typically a part of contemporary Humean metaphysics—implies a radically non-Humean epistemology of nature.

So Humean metaphysics and epistemology don’t go together. So what? Why not just accept the metaphysics and reject the epistemology? The reason this is not acceptable is that the Cartesian thesis that the regularity of nature follows with certainty from what I know about myself is only plausible (if even then!) given Descartes’ theism.

Monday, January 30, 2017

Normative powers and theism

There’s a curious puzzle for the following conjunction of views:

  1. theism
  2. normative power account of promises.

To introduce the puzzle, think about making baskets. I have the power to make a (pretty shoddy, I expect) basket come into existence. I would exercise the power by going to the river, gathering reeds and weaving them together. But God can directly make the basket come into existence, simply by willing it to exist. The point generalizes: all the things I can make exist, God can simply make exist by willing them to exist.

On the normative power account of promises, by going up to a friend and promising to dance a jig, I make an obligation for myself come into existence. So God can simply will my obligation to dance a jig into existence.

But that seems wrong. Of course, God can bring it about that I am obligated to dance a jig. God has a myriad of ways of doing so. God can, for instance, make a rich person inform me that if and only if I dance a jig, she’ll give a million dollars to a good cause. Or God can simply issue a command to me to dance a jig. But the idea that God can simply will the obligation into existence seems wrong. That would imply that there is a world just like this one, differing only in respects like: (a) God wills that I be obligated to dance a jig, (b) I am obligated to dance a jig and (c) I ignorantly fail in that obligation. That just doesn’t seem right. (The world where God commands me to dance a jig is different: it is essential to a command that it be expressed to the person being commanded.)

Well, but in a sense there are some things God can’t bring about simply by willing them, even though we can. For instance, I can bring into existence a hand-made basket. But God can’t bring a hand-made basket into existence simply by willing it, because the concept of a hand-made basket precludes its being brought into existence in any way but by hand. So our principle that God can directly bring into existence anything we can bring into existence needs to be qualified to exclude things whose description specifies something about how they are brought into existence. (If essentiality of origins holds, then things whose descriptions include de re reference may be like that.)

But obligation to dance a jig doesn’t seem to be like that. It doesn’t seem to carry reference to how it’s brought about, in the way that hand-made basket does. There are multiple ways an obligation to dance a jig can come about, e.g., promises, authority and consequences.

I think a natural law approach has a nice escape from this. Suppose it is a part of the concept of an obligation that it be partly constituted by the nature of the obligated entity. Then God can’t just directly bring about obligations by willing them into existence. He would have to bring about an entity with a particular nature. God could bring it about that an agent is obligated to jig, but he would have to do it either by working through general norms grounded in the agent’s nature (say, by issuing commands if the agent has a nature that requires her to obey) or by creating an agent with a particular sort of nature, say a nature that strives to jig.

And divine command theories also don't have any problem: God commands us to keep promises, and that's all there is to that. There is, however, a difficult question there about the grounds of God's obligation to keep promises.

Should a non-theist care at all about what I said? I think so. Even if there were no God, the thought experiment of God simply willing the normative fact seems illuminating. It suggests that normative facts aren’t just free-floating facts to be brought about by “normative powers”.

Thursday, January 26, 2017

Pain and unpleasantness

I just realized something that probably was obvious to many people: the opposite of the pleasant isn’t the painful, but the unpleasant. Great physical effort can be painful, but it could also just be effortful and unpleasant (though there can also be a pleasure mixed with the unpleasantness). An even clearer case is distasteful food, which is unpleasant but not painful to eat.

While it would sound wimpy to talk about “the problem of the unpleasant” in place of “the problem of pain”, pain isn’t in general worse than the non-painfully unpleasant. For instance, there are distasteful foods to which I would prefer the pain of a flu shot.

It may be that among physically unpleasant events, pains monopolize the top of the unpleasantness hierarchy (test case: extreme bitterness—maybe that’s actually painful?). So while some instances of non-painful physical unpleasantness are worse than some instances of painful physical unpleasantness, some instances of physical pain are worse than any non-painful physical unpleasantness. Because of this, we worry about “the problem of pain” rather than “the problem of unpleasantness”, and we talk of grievous emotions as psychological pain rather than psychological unpleasantness. Talk of “unpleasantness” implicates we are talking about something that isn’t severe.

Even further complicating matters, it seems plausible that there are instances of physical pain that aren’t actually unpleasant. Mild soreness after exercise—that feeling of muscles well-used but not abused—might be in that category. This may even be true of some cases of psychological pain. Someone close to you has died, and you’re too numb to feel grief. Then suddenly the grief floods in, tears flow. It’s painful, but it need not be unpleasant—unlike the numbness, which was definitely unpleasant. It might even be pleasant, the pleasure of emotions functioning properly.

I know that the standard way of analyzing cases like that of exercise soreness or a flood of tears is that there is an unpleasant core but it’s outbalanced by associated pleasures. But I think that may be mistaken. The pain is there, but I am not sure that there need be any unpleasantness. What we have is something that would have been unpleasant in isolation, but in conjunction with the rest of the context we get a complex feeling that is not unpleasant. This unpleasantness is perhaps something like the red, blue and green subpixels in the white areas of your screen—the subpixels each contribute to the color (if the blue weren’t there, the area would look yellow rather than white), and in isolation they would produce their respective colors, but in the context their color is lost. Likewise, it may be better phenomenology to say that in cases like these the pain would be unpleasant in isolation, but in the context its unpleasantness is lost.

If this is right, then philosophers really should be talking about the problem of unpleasantness rather than the problem of pain, simply being careful to cancel the implicature that the unpleasantness isn’t severe.

Wednesday, January 25, 2017

A method for blocking deflation of ontological debates

Consider Hirsch-type deflationary views on which many differences in ontology are simply verbal differences. A standard case is nihilism and universalism about composition: the nihilist says that multiple things can never compose a whole and the universalist says that every plurality must compose a whole. The deflationist sees the two views as notational variants. The universalist’s sentences describe the same facts as the nihilist’s. We can maybe even translate with little if any loss between the two idioms, replacing the nihilist’s quantifiers with quantifiers restricted to simples on the universalist’s side, and replacing the universalist’s quantifiers with plural quantification, or quantification over sets, or some other device acceptable to the nihilist.

Note, first, that in this particular case there is a bit of a problem. The universalist might allow for composed objects that have no simple parts—“gunk”. The claim that possibly there is gunk is one that cannot be translated into any statement in the nihilist’s language that has a hope of being true. The nihilist’s usual way of translating a universalist’s statement is to use plural quantification. So the statement that possibly there is gunk is going to get translated into something like the statement that possibly there is a plurality of things none of which is a simple. But that’s obviously false given nihilism, since the nihilist’s quantifiers can only quantify over simples, and so the statement basically says that there are simples none of which is a simple. Thus, we have a genuine, non-verbal disagreement.

So the only way we can take a nihilist-universalist disagreement to be merely verbal is if both theorists deny the possibility of gunk. I think they should deny the possibility of gunk.

Here is a second case, where disagreement on composition cannot be deflated. Consider a brutal composition view like Markosian’s. On this view, there will be possible worlds with the same simple objects standing in the same non-mereological relations but differing as to composition facts. For instance, in one world there might be three rocks that make up a whole and in the other world the very same three rocks do not make up a whole, even though they are arranged in exactly the same way. Any nihilist or universalist description of the two worlds will be unable to distinguish such worlds, but on a brutal composition view, there can be such pairs. Here we have a real disagreement, one that cannot be taken to be merely verbal. The brutal composition theorist has more possibilities than the nihilist and universalist. And the brutal composition theorist’s statement that the two worlds differ in composition facts but not in non-mereological facts either has no translation into either nihilist or universalist language or translates into something that is clearly false on the given theory.

The brutal compositionalist’s theory has an additional “degree of freedom”, as a scientist would say, that the nihilist’s and universalist’s theories lack. The case here is similar to those dualists who believe in the possibility of zombies. While the disagreement between a dualist who thinks the mental supervenes on the physical and the pure physicalist could seem to be merely verbal to some (though I think it’s a mistake to see it that way), the disagreement between a dualist who thinks that the mental does not supervene on the physical and the pure physicalist is certainly not merely verbal.

In general, thus, the modal ramifications of theories can block deflationary moves. One theory may allow for a possibility that simply rules out the other theory (e.g., gunk ruling out nihilism), or one theory may posit contingent facts that do not supervene on reality as describable in the other theory (the brutal composition or zombie cases).

This leaves the possibility that there will be some ontological debates that are merely verbal. Perhaps the debate between the nihilist and the anti-gunk universalist is merely verbal. But that some pairs of ontological theories disagree merely verbally is not a very interesting deflationary thesis.

Moreover, I think that once we see that there are nearby debates that are clearly not merely verbal, the plausibility of the deflationary move in the cases that look more verbal goes down. Once we realize that among the views under discussion there is a brutal composition view on which there is a possible world just like ours but where nihilism contingently holds and a possible world just like ours but where universalism contingently holds, it becomes pretty clear that the view on which nihilism holds necessarily will also differ from the view on which universalism holds necessarily. (That said, there may be particular variants on universalism that just are notational variants on nihilism. Say, ones where the quantifiers are stipulated in terms of plural quantification over simples.)

Tuesday, January 24, 2017

The unconscious: A tool for studying consciousness

There is an old joke: to find out if a computer is conscious, you program it to tell the truth, and you ask it if it is. The point behind the joke is deep, I think. If we made a computer, we would know how we could make it correctly report sensor data like the current temperature whether outside or inside the computer. But how would we make it sense whether or not it is conscious?

If we had a consciousness sensor, we could do some nice experiments on the nature of consciousness. We could, for instance, test naturalistic hypotheses that consciousness is the product of the appropriate kinds of complexity of data processing.

But it turns out that we do have a tool here. We are blessed with having unconscious thought in addition to conscious thought, and we can tell the two apart. Of course, we can only indirectly discern the presence of unconscious thought—but once we’ve learned that a thought process occurred, then we can use introspective memory to check (fallibly) whether it was conscious or not.

Does this tool provide any useful data? I think so. For instance, it empirically verifies the premise of this one-premise argument:

  1. Some of our unconscious thinking is just as sophisticated as some of our conscious thinking.

  2. So, consciousness is not the product of the sophistication of our thinking.

Of course, only a very incautious naturalist would hold that consciousness is the product of the sophistication as such of our thinking. But, still, it’s pretty nice to have an empirical argument here.

Similarly, we can rule out the hypothesis that consciousness is a function of sophisticated irreducibly first-person thought, since it is clear that our unconscious thought is deeply concerned with first-person issues, and in sophisticated ways. Likewise, I suspect, we can rule out the hypothesis that consciousness is a function of sophisticated second-order thought or even sophisticated second-order irreducibly first-person thought. A primary way in which we detect unconscious thinking is when we suddenly come to a conclusion “out of nowhere”. The difficulty of reaching that conclusion is evidence of the sophistication of the thought process that led to it. And I think that such eureka moments can happen in all subject matter, including that of second-order irreducibly first-person thoughts.

This makes the challenge for naturalist theorists of mind tough: they need to identify as the basis of consciousness a type of mental processing that cannot ever occur as part of the rich tapestry of our unconscious mental lives.

There is, though, an interesting sceptical response. Perhaps what we call our “unconscious thinking” is in fact conscious. There are two ways this could happen. First, perhaps, there is another thinker in me, one who thinks my unconscious thoughts. Second, maybe I have two centers of consciousness. It’s hard to rule out these hypotheses empirically. But they are rather crazy, and all empirical confirmation requires the rejection of crazy hypotheses.

Monday, January 23, 2017

Prosthetic decision-making

Let’s idealize the decision process into two stages:

  1. Intellectual: Figure out the degrees to which various options promote things that one values (or desires, judges to be valuable, etc.).

  2. Volitive: On the basis of this data, will one option.

On an idealized version of the soft-determinist picture, the volitive stage can be very simple: one wills the option that one figured out in step 1 to best promote what one values. We may need a tie-breaking procedure, but typically that won’t be invoked.
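
On this idealized picture, the volitive stage is almost mechanically simple. Here is a toy sketch, with names of my own invention (a caricature, of course, of anything a brain actually does); that it can be written in a few lines is part of the point of the prosthesis thought experiment below:

    /* Idealized soft-determinist volitive stage: given the intellectual stage's
     * verdict on how well each option promotes what one values, will the
     * best-rated option, breaking ties by some fixed rule (here: lowest index). */
    static int volitive_stage(const double *promotes_values, int num_options) {
        int willed = 0;
        for (int i = 1; i < num_options; i++)
            if (promotes_values[i] > promotes_values[willed])
                willed = i;
        return willed;
    }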

On a libertarian picture, the volitive stage is where all the deep stuff happens. The intellect has delivered its judgment, but now the will must choose. On the best version of the libertarian picture, typically the intellect’s judgment includes a multiplicity of incommensurable options, rather than a single option that best promotes what one values.

On the (idealized) soft-determinist picture, it seems one could replace the mental structures (“the volitive faculty”) that implement the volitive stage by a prosthetic device (say, a brain implant) that follows the simple procedure without too much loss to the person. The actions of a person with a prosthetic volitive faculty would be determined by her values in much the same way as they are in a person with a normal volitive faculty. What is important is the generation of input to the volitive stage—the volitive stage is completely straightforward (except when there are ties).

On the libertarian picture, replacing the volitive faculty by a prosthesis, however, would utterly destroy one as a responsible agent. For it is here, in the volition, that all the action happens.

What about replacing the intellectual faculty by a prosthesis? Well, since the point of the intellectual stage is to figure out something, it seems that the point of the intellectual stage would be respected if one replaced it by an automated process that is at least as accurate as the actual process. Something else would be lost, but the main point would remain. (Compare: Something would be lost if one replaced a limb by a prosthetic that functioned as well as the limb, but the main point would remain.)

So, now, we can imagine replacing both faculties by prostheses. There is definite loss to the agent, but on the soft-determinist picture, there isn’t a loss of what is central to the agent. On the libertarian picture, there is a loss of what is central to the agent as soon as the volitive faculty is replaced by a prosthesis.

The upshot is this: On the soft-determinist picture, making decisions isn’t what is central to one as an agent. Rather, it is the formation of values and desires that is central, a formation that (in idealized cases) precedes the decision process. On the libertarian picture, making decisions—and especially the volitive stage of this process—is central to one as an agent.

Sunday, January 22, 2017

The Tammes problem

I wanted to 3D print a ball with dimples like a golf ball, so I got to looking up how to evenly distribute points over the surface of a sphere. Thinking about this problem leads to a very natural optimization problem: given a natural number n, place n points on the surface of a sphere in a way that maximizes the shortest distance between any two points. This problem has a name: it is the Tammes problem. Of course, for my purposes, it really doesn't matter whether I have an exact solution to the problem--an approximate one will do.

A natural way to try to approximately solve the problem is to pretend that the points are particles that have repulsive forces between them, and then run a computer simulation of initially randomly distributed particles moving under the influence of these forces, with some frictional damping.

Inspired by this paper, I initially worked with a repulsive force inversely proportional to d^p, where d is the distance between the points and p is an exponent that is ramped up as the simulation progresses.

Experimenting with various parameters, I found it was helpful to start with p=1 and go up to p=4.5 and then stay at p=4.5 for a while before finishing the simulation. Velocity-dependent friction seems to work a little better than velocity-independent friction. The physical precision of the simulation, of course, doesn't matter at all, except as a means to getting a large minimum spacing. For 500 points, with 1000 steps of Euler-Cromer simulation and carefully tuned parameters (friction, dynamic step size, ramping schedule), I was able to get a minimum spacing of about 0.153 in one run. There is a theorem by Fejes-Tóth that implies one can't do better than 0.1702, so it's pretty close.
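
For concreteness, here is a minimal sketch of this kind of simulation. It is not the code linked below; the constants (particle count, step count, step size, friction, ramping schedule) are merely illustrative stand-ins for the tuned values, and there is no symmetry optimization, dynamic step size, or animation output here.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 500          /* number of particles (illustrative) */
    #define STEPS 1000     /* simulation steps */
    #define DT 0.01        /* step size; the real code tunes this dynamically */
    #define FRICTION 0.9   /* velocity-dependent damping per step */

    static double pos[N][3], vel[N][3];

    static void normalize(double v[3]) {
        double r = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        v[0] /= r; v[1] /= r; v[2] /= r;
    }

    /* One Euler-Cromer step with pairwise repulsion of magnitude 1/d^p. */
    static void step(double p) {
        static double acc[N][3];
        for (int i = 0; i < N; i++)
            acc[i][0] = acc[i][1] = acc[i][2] = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++) {
                double d[3] = { pos[i][0]-pos[j][0], pos[i][1]-pos[j][1], pos[i][2]-pos[j][2] };
                double dist = sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
                double f = 1.0 / pow(dist, p + 1);   /* 1/d^p along the unit vector d/dist */
                for (int k = 0; k < 3; k++) {
                    acc[i][k] += f * d[k];
                    acc[j][k] -= f * d[k];
                }
            }
        for (int i = 0; i < N; i++) {
            for (int k = 0; k < 3; k++) {
                vel[i][k] = FRICTION * vel[i][k] + DT * acc[i][k];  /* update velocity first... */
                pos[i][k] += DT * vel[i][k];                        /* ...then position (Euler-Cromer) */
            }
            normalize(pos[i]);   /* keep the particle on the unit sphere */
        }
    }

    static double min_spacing(void) {
        double dmin = 2.0;   /* no two points on the unit sphere are farther apart than 2 */
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++) {
                double d[3] = { pos[i][0]-pos[j][0], pos[i][1]-pos[j][1], pos[i][2]-pos[j][2] };
                double dist = sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
                if (dist < dmin) dmin = dist;
            }
        return dmin;
    }

    int main(void) {
        for (int i = 0; i < N; i++) {        /* random initial positions on the sphere */
            for (int k = 0; k < 3; k++) pos[i][k] = 2.0 * rand() / RAND_MAX - 1.0;
            normalize(pos[i]);
        }
        for (int s = 0; s < STEPS; s++) {
            /* ramp p from 1 to 4.5 over the first half of the run, then hold it there */
            double p = 1.0 + 3.5 * (s < STEPS / 2 ? (double)s / (STEPS / 2) : 1.0);
            step(p);
        }
        printf("minimum spacing: %f\n", min_spacing());
        return 0;
    }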

A hint from the above-linked paper helped along the way: one can arrange the initial positions of the particles to be symmetric around the origin (i.e., if we place one particle at x, we place another at −x). Then we only need to simulate the motions of half of the particles, since the movements of the other half are just a reflection about the origin. Of course, this optimization only works if n is even (though if n is odd, it still may be worth arranging all but one particle symmetrically initially).
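
In terms of the sketch above, the trick would amount to a helper like this (one then computes forces on, and integrates, only the first half, and mirrors the result each step):

    /* Antipodal symmetry for even n: keep particles n/2 .. n-1 as the mirror
     * images (through the origin) of particles 0 .. n/2-1. */
    static void mirror_second_half(double pos[][3], double vel[][3], int n) {
        for (int i = n / 2; i < n; i++)
            for (int k = 0; k < 3; k++) {
                pos[i][k] = -pos[i - n / 2][k];
                vel[i][k] = -vel[i - n / 2][k];
            }
    }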

Then I had another idea. At each simulation step, we already calculate the current distance dmin between the two particles closest together, and what we want is to push apart especially strongly those particles whose distance is close to that. After a fair amount of fine-tuning, I ended up modifying the repulsive force to be inversely proportional to (d − c·dmin)^p, where now we ramp p from 1 to 4.5 and c from 0 to 0.9. The result was noticeably better: my best answer for n=500 went from 0.153 to about 0.162, and typical runs give me about 0.161 after only 500 steps.
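
In terms of the sketch above, the modified force law would be a small function like this, with dmin (the current minimum pairwise distance) recomputed each step and c and p ramped as described:

    #include <math.h>

    /* Repulsion magnitude ~ 1/(d - c*dmin)^p.  Since d >= dmin > c*dmin for
     * c < 1, the denominator stays positive, and the particles nearest the
     * minimum spacing get pushed apart especially hard.  With c = 0 this
     * reduces to the original 1/d^p. */
    static double repulsion(double d, double dmin, double c, double p) {
        return 1.0 / pow(d - c * dmin, p);
    }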

In the videos, the diameters of the red balls are equal to dmin, so the point is to maximize the size of the balls without allowing them to collide. The code is written in C. It's been some time since I've programmed in C, so it was fun to go back to C. And it was also fun to go back to programming a numerical simulation, which I did a lot of back when I was a teenager. Since my teenage years, things have changed. Computers are so much faster that my ordinary laptop has no difficulty with handling n-body interaction for n around 500 or 1000: back as a teenager, the most I worked with was about n=64. Moreover, my laptop has multiple cores, and OpenMP makes it super easy to split n-body problems between cores. My Dell laptop does a 500-step calculation with 500 particles in 1.4 seconds (but for even n, we have a symmetry optimization that cuts computation time roughly in half; 499 particles takes 2.6 seconds).

Enjoy the code. It's in messy but portable C, and there is a 32-bit Windows binary that uses all the cores you have. Just give tammes one argument: the number of particles. Visualization is done by passing tammes the -animate option and piping its output to a Classic VPython script.

You can generate an OpenSCAD golfball from the code by using a -scad option instead. Unfortunately, OpenSCAD is really slow in processing the output of tammes. The golfball on the right has n=336. I seem to have read that that's a pretty normal n for golfballs.

Thursday, January 19, 2017

Degrees of freedom

The number of degrees of freedom in a system is the number of numerical parameters that need to be set to fully determine the system. Scientists have an epistemic preference for theories that posit systems with fewer degrees of freedom.

But any system with n real-valued degrees of freedom can be redescribed as a system with only one real-valued degree of freedom, where n is finite or countable. For instance, consider a three-dimensional system which is fully described at any given time by a position (x, y, z) in three-dimensional space. We can redescribe x, y and z by real-valued variables X, Y and Z in the interval from 0 to 1, for instance by letting X = 1/2 + (1/π)arctan x and so on. Now write out these new variables in decimal:

  • X = 0.X1X2X3...
  • Y = 0.Y1Y2Y3...
  • Z = 0.Z1Z2Z3...

Finally, let:

  • W = 0.X1Y1Z1X2Y2Z2X3Y3Z3....

Then W encodes all the information about X, Y and Z, which in turn encode all the information about (x, y, z) and hence about our system at a given time. (This obviously generalizes to any finite number of degrees of freedom. For a countably infinite one, things are slightly more complicated, but can still be done.)
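
Here is a quick illustration of the interleaving, a sketch only: it truncates each coordinate to a fixed number of decimal digits, so the encoding here is approximate rather than exact, and the sample inputs are arbitrary.

    #include <math.h>
    #include <stdio.h>

    #define DIGITS 5  /* decimal digits kept per coordinate; purely illustrative */

    static const double PI = 3.14159265358979323846;

    /* Map an unbounded coordinate into (0,1), as in X = 1/2 + (1/pi) arctan x. */
    static double squash(double x) { return 0.5 + atan(x) / PI; }

    /* Write W = 0.X1Y1Z1X2Y2Z2... as a decimal string, interleaving the
     * decimal digits of X, Y and Z, each assumed to lie in (0,1). */
    static void interleave(double X, double Y, double Z, char *out) {
        int n = 0;
        out[n++] = '0';
        out[n++] = '.';
        for (int i = 1; i <= DIGITS; i++) {
            out[n++] = '0' + (int)floor(X * pow(10, i)) % 10;  /* i-th digit of X */
            out[n++] = '0' + (int)floor(Y * pow(10, i)) % 10;  /* i-th digit of Y */
            out[n++] = '0' + (int)floor(Z * pow(10, i)) % 10;  /* i-th digit of Z */
        }
        out[n] = '\0';
    }

    int main(void) {
        char w[3 * DIGITS + 3];
        interleave(squash(1.0), squash(-2.5), squash(0.25), w);
        printf("W = %s\n", w);  /* one number's digits encoding all three coordinates */
        return 0;
    }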

There is a lesson here, even if not a particularly deep one. The epistemic preference for theories that have fewer degrees of freedom cannot be separated from the epistemic preference for simpler theories. For of course rewriting a theory that made use of (x, y, z) in terms of W is in practice going to make for a significantly messier theory. So we cannot replace a simplicity preference by a preference for a low number of degrees of freedom.

Objection: Instead of a simplicity preference, we may a priori specify that laws of nature be given by differential equations in terms of the variables involved. But when, say, x, y and z vary smoothly over time, it is very unlikely that W will do so as well.

Response: But one can find a replacement for W that is smoothly related to x, y and z up to any desired degree of precision, and hence we can give a differential-equation based theory that fits the experimental data pretty much equally well but has only one degree of freedom.

Tuesday, January 17, 2017

Stochastic substitutions, rationality and consent

Suppose that we have a minor parking infraction that, in justice, deserves about a $40 fine. Let’s suppose the infraction is so clear when it occurs that there cannot be reasonable appeal. The local authorities used to levy a $40 fine each time they saw a violation, but to reduce administrative costs they raised the fine to $200, with the proviso that each time the parking enforcement officer sees a violation, he tosses a twenty-sided die and levies the fine only if the die shows a one.

A one in twenty chance of losing $200 is a much better deal than a certainty of losing $40. So it seems that the treatment of violators is less harsh under the new system, and one can’t complain about it. But tell that to the person who gets the $200 fine. It seems unjust to impose a $200 fine for an infraction that in justice deserves only a $40 fine. But how can it be unjust when this is a better deal?

Here’s a possibly related case. Suppose you leave your wallet lying around and I take $10 out of your wallet and put back $20. You can complain about my handling your possessions, but it’s weird to call me a thief, unless there was something special about that $10 bill. But what if I take $10 out of your wallet, roll a six-sided die, and put back $2000 if and only if the die shows a one? A one in six chance of $2000 is a way better deal than a certainty of $20. But if I end up putting nothing back, then I’m clearly a thief.
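
For concreteness, the expected values behind the “better deal” claims:

  • Parking: a one in twenty chance of a $200 fine is an expected loss of $200/20 = $10, versus a certain $40.

  • Wallet: a one in six chance of $2000 is an expected return of $2000/6 ≈ $333, versus a certain $20 put back for the $10 taken.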

In both cases, we have two ways of treating someone, A and B. Treatment B is clearly a better deal for them than treatment A, and treatment A is not unjust for the patient. It seems to follow that we can impose treatment B in place of treatment A. But no!

It’s not, I think, just that people evaluate risk differently. For I think that the judgment that the randomized deal imposes an injustice remains even if we know the patient would have opted for the randomized deal had she been asked. The mere fact that you would have been happy to pay $20 to get a one in six chance of winning $2000 does not give me the right to play that lottery with your money on your behalf. Consent would need to be actually given, not merely hypothetically.

There seems to be an interesting lesson here: choices have a value that isn’t merely epistemic. The value of having people make their own choices is not just for us to find out what is best for them or even what is best for them by their own lights. Another lesson is that it seems to matter that A is better in some respect (that of certainty) even if B is better overall.

But the above line of thought neglects a complication. While most people would be happy to get the one in six chance of winning $2000 in place of $20, most people would rather that such substitutions not be made without their being consulted. Perhaps that’s the relevant hypothetical question: Would you like having such substitutions made without consultation? Suppose the answer is “yes”. Is it clear that it’s wrong for me to make the substitution without asking you?

I am inclined to think it’s still wrong, unless you indicated in some way that you want substitutions of such a sort made for you.

First person pronouns and lying

Suppose that I have Alister, who is an identical twin of Bob, registered in my class. One day, Alister is not feeling well, but knowing about Baylor's absence policy, he asks Bob to attend in his place, and while he's there to tell me, quite correctly, that the term paper that Alister is working on is almost finished. So, Bob comes to class, and at the end says to me: "I'm almost finished with the paper for you." Bob is deceiving me by pretending to be Alister. But is he telling a truth or a falsehood at the end? That depends on what the referent of "I" is. If the referent is Bob, then Bob is telling me something false, but if the referent is Alister, then Bob is telling me something true. I take the referent to be Alister, and Bob expects me to take the referent to be Alister. I form the belief that Alister is almost finished with his paper for me, which is true. On the other hand, I also take the referent to be this man, and Bob expects me to do so. I also form the belief that this man is almost finished with his paper for me, which is false.

I think that in this context, what matters, what is salient in the communication, is that Alister---the guy registered for my class---is almost finished with the paper. That this guy is finished with the paper is not relevant. This suggests that there is no lie. But I am not sure.

Vertical uniformity of nature

One often talks of the “uniformity of nature” in the context of the problem of induction: the striking and prima facie puzzling fact that the laws of nature that hold in our local contexts also hold in non-local contexts.

That’s a “horizontal” uniformity of nature. But there is also a very interesting “vertical” uniformity of nature. This is a uniformity between the types of arrangements that occur at different levels like the microphysical, the chemical, the biological, the social, the geophysical and the astronomical. The uniformity is different from the horizontal one in that, as far as we know, there are no precisely formulable laws of nature that hold uniformly between levels. But there is still a less well defined uniformity whose sign is that the same human methods of empirical investigation (“the scientific method”) work in all of them. Of course, these methods are modified: elegance plays a greater role in fundamental physics than in sociology, say. But they have something in common, if only that they are mere refinements of ordinary human common sense.

How much commonality is there? Maybe it’s like the commonality between novels. Novels come in different languages, cultural contexts and genres. They differ widely. But nonetheless to varying degrees we all have a capacity to get something out of all of them. And we can explain this vague commonality quite simply: all novels (that we know of) are produced by animals of the same species, participating to a significant degree in an interconnected culture.

Monotheism can provide an even more tightly-knit unity of cause that explains the vertical uniformity of nature—one entity caused all the levels. Polytheism can provide a looser unity of cause, much more like in the case of novels—perhaps different gods had different levels in nature delegated to them. Monotheism can do something similar, if need be, by positing angels to whom tasks are delegated, but I don’t know if there is a need. We know that one artist or author can produce a vast range of types of productions (think of a Michelangelo or an Asimov).

In any case, the kind of vague uniformity we get in the vertical dimension seems to fit well with agential explanations. It seems to me that a design argument for a metaphysical hypothesis like monotheism, polytheism or optimalism based on the vertical uniformity might well have some advantages over the more standard argument from the uniformity of the laws of nature. Or perhaps the two combined will provide the best argument.

Friday, January 13, 2017

Lying, acting and trust

A spy's message to his handler about troop movements is intercepted. The message is then changed to carry the false information that the infantry will be on the move without artillery support and sent onward. Did those who changed the message lie?

To lie, one must assert. But suppose the handler finds out about the change. Could she correctly say: "The counterintelligence operatives asserted to us that the infantry would be on the move without artillery support?" That just seems wrong. In fact, it seems similar to the oddity of attributing to an actor the speech of a character (though with the important difference that the actor does not typically speak to deceive). The point is easiest to see, perhaps, where there are first person pronouns. If part of the message says: "I will be at the old barn at 9 pm", it is surely false that the counterintelligence staff asserted they will be at the old barn (even though, quite possibly, they will--in order to capture the handler), but it also doesn't seem right to say that the counterintelligence staff asserted that the spy will be there.

The trust account of lying, defended by Jorge Garcia and others, seems to fit well with this judgment. On this account, to lie is to solicit trust while betraying it. But one can only betray a trust in oneself. The counterintelligence operatives, however, did not solicit the handler's trust in themselves: rather, they were relying on the handler's trust in the spy, and that trust the operatives cannot betray.

But there are some difficult edge cases. What if a counterintelligence operative dons a mask that makes him look just like the spy, and speaks falsehoods with a voice imitating the spy? But what if a spy goes to a foreign country with an entirely fictional identity? I am inclined to think that on the trust account the two cases are different. When one imitates the spy, one relies on the faith and credit that the spy has, and one isn't soliciting trust for oneself. When one dresses up as someone who doesn't exist, I think one is trying to gain faith and credit for oneself, and it seems one is lying. But I am not sure where the line is to be drawn.

Thursday, January 12, 2017

Justice and the afterlife

  1. If there is no afterlife, then promoting justice sometimes requires acting unjustly.

  2. Promoting justice never requires acting unjustly.

  3. So, there is an afterlife.

In support of 1, just think of cases where it looks like great injustices can only be stopped by minor injustices. Perhaps the only way to get an unjust dictator out of power is to spread the rumor that he is unfaithful to his wife. Perhaps the only way to bust a criminal organization is to have an informer make false promises. Of course, these cases presume that this life is all there is. If there is an afterlife, perhaps things are so arranged that all wrongs are righted in some way, so on the whole all will have justice. But without an afterlife, these cases are very compelling.

I think the weakness here is the "requires". There is a normative and a non-normative sense of "requires". Justice non-normatively requires A provided that justice cannot be had without A. Justice normatively requires A provided that in light of justice we are morally required to provide A. My line of thought above established claim 1 in the non-normative sense of requiring, whereas claim 2 is most plausible only in the normative sense of requiring.

Maybe. But I still think that 1 also has some plausibility with the normative sense of "requires" and 2 has some plausibility with the non-normative sense, so the argument as a whole has plausibility when "requires" is read consistently. The argument raises the probability of the conclusion. By how much, I do not know.

Being different at different times without changing

Here’s a curious thing: an unchanging object can have one shape at one time and a different shape at a different time.

Example 1: In the context of special relativity, times are spacelike hyperplanes. Suppose a special relativistic universe, and suppose that an object is an unchanging cube. Well, being a cube is not invariant between reference frames. So there will be one reference frame F1 at which the object is an unchanging cube and another reference frame F2 where it has some other unchanging shape. Each reference frame defines a family of times, i.e., spacelike hyperplanes. At the times of F1, the object is cubical and at the times of F2, the object is not cubical. Hence, at one time the object has one shape and at another it has another.

One might think that this example can be handled as follows: the object unchangingly is a cube-relative-to-F1 and a non-cube-relative-to-F2, and it is a cube-relative-to-F1 even at the times of F2 and a non-cube-relative-to-F2 even at the times of F1. But that’s probably mistaken. It seems to make no sense to talk of the shape-relative-to-F1 at times in F2. So we still have a difference in relative shape: the shape-relative-to-F1 is well-defined at F1 times but not well-defined at F2 times.

Example 2: Different universes will have different spacetimes, and hence different times. Suppose an object that is wholly present simultaneously in multiple universes—after all, that seems no harder than multilocation within the universe, and we have some evidence of miracles where a saint is in more than one place at the same time (for an account of such possibilities, see this). In each universe the object is unchanging, but it has a different shape in different universes. Since the different universes come with different times, the object has one shape at one time and a different shape at a different time.

This seems to be a refutation of the at-at theory of change, on which change just is difference in properties across times. But while the cases, if possible, do indeed refute that theory, there is a slightly richer at-at theory that is unaffected by them:

  • an object changes from having P to having Q provided that it has P and not Q at an earlier time and has Q and not P at a later time

  • an object changes with respect to having a property P provided that it changes from having P to not having P or from having not-P to having P.

So it’s easy to fix the at-at theory. Still, I think something has been learned here: there is an essential directionality to change.

Wednesday, January 11, 2017

Change and intervals

Suppose a Newtonian universe where an elastic and perfectly round ball is dropped. At some point in time, the surface of the ball will no longer be spherical. If an object is F at one time and not F at another, while existing all the while, at least normally the object changes in respect of being F. I am not claiming that that is what change in respect of F is (as I said recently in a comment, I think there is more to change than that), but only that normally this is a necessary and sufficient condition for it. So the ball changes with respect to sphericity, and specifically changes from being spherical to being non-spherical.

When does the ball change from spherical to non-spherical? There are two kinds of times: times when the ball is still spherical and times when the ball is no longer spherical. At any time t at which the ball is no longer spherical it is already true that for some time the ball wasn’t spherical. Why? Well, whenever the ball isn’t spherical, it differs from sphericity by some non-zero amount, and it takes some time for the ball to deform by that amount. But if at a time t the ball had not been spherical for a while, then it’s not changing from being spherical to being non-spherical—rather, it had already changed.

What about times at which the ball is still spherical? These can be further subdivided into the pre-impact times and the time of impact. It’s clear that at the pre-impact times, the ball isn’t changing from being spherical to being non-spherical.

That leaves exactly one possible answer to the question of when the ball changes from being spherical to being non-spherical: at the time of impact. Now, at the time of impact, the ball is still spherical. We now have two interesting issues. The first is that if the future is open, there need be no fact of the matter at the time of impact that the ball will ever be anything but spherical (a powerful being could, for instance, make the ball penetrate the ground without changing shape). So if the future is open, it is not true at the time of impact that the ball is changing from spherical to non-spherical, since change with respect to sphericity requires being spherical and being non-spherical at different times. The second is that even if the future is closed, it seems awkward to say that at the time of impact the ball is changing with respect to sphericity. After all, the ball still is spherical then, and has been spherical for a while, and so it doesn’t seem right to say that something that is in the same state as it’s been for a while is changing with respect to that state.

So it seems that at no time is the ball changing from spherical to non-spherical. At any given time it either has already changed or it is going to change, but it never is changing.

What if time is necessarily discrete? That doesn’t change the arguments that the ball isn’t changing pre-impact or at the time of impact. But it allows for one more option: perhaps the ball counts as changing at the instant right after impact. On a discrete-time view, that is the first moment at which the ball is non-spherical. I am inclined to say: “No, the ball isn’t changing any more. It already has changed.”

Here’s a super-quick way of putting the above, neutrally between discrete and continuous time:

  • When the ball is spherical, it will change but isn’t changing yet.

  • When the ball is non-spherical, it has already changed but isn’t changing any more.

Since obviously we don’t want to deny that change happens, what should we say? I see two options. The first is to say that change is something that only makes sense from a four-dimensional perspective. To say that change happens is not to say anything about how the world is at a time, but how the world is at two or more times, just as to say that the road narrows at the 10 mile point isn’t really to say just what the road is like at the 10 mile point, but what it’s like before the 10 mile point and what it’s like after the 10 mile point.

But I think there is another option. Suppose that time is discrete, but that in addition to having instants it also has intervals between the instants. Then if t1 is the instant of impact and t2 is the next instant, there will be an interval I from t1 to t2. This interval is not like the intervals of mathematics—it isn’t a set of points of time between t1 and t2 inclusive, because on the theory in question there are only two points of time between t1 and t2 inclusive. Rather it is at least as fundamental as the instants themselves (and perhaps grounds the instants—but we don’t need that right now). Then we can say that the ball is changing from spherical to non-spherical at I.

On this story, we can say that change always happens at some time. But times include both instants and intervals. And change is something that doesn’t happen at an instant—that seems obvious when put that way—but something that happens at an interval.

But here is an interesting problem. It seems that for every time t at which the ball exists, either it is spherical at t or it’s not spherical at t. But what if t is the interval I? Then the ball is spherical at the beginning of the interval and non-spherical at its end. It seems it’s neither spherical nor non-spherical at I.

But that doesn’t follow. I think we can simply say that the ball is not spherical at I, because it’s not the case that it’s spherical throughout I. (A pipe that is square at some point in its length is not round.)

So we have come back to the idea that the ball changes from being spherical to being non-spherical at a time when it is already non-spherical. But that’s OK, because that time is an interval, and we cannot say that it is wholly non-spherical at that interval. It is non-spherical because it is partly non-spherical and partly spherical on that interval, because it is changing from spherical to non-spherical.

So, change happens at intervals. Or at least first-order change does. Second-order change, however, can be taken to occur at instants. Thus, if t1 is the instant of impact and t2 is the next instant and I0 is the minimal interval just preceding t1 while I1 is the interval from t1 to t2 (which I previously just called I), then at I0 the ball isn’t changing in sphericity, while at I1 it is. And we can say that at t1 it is changing from not changing in sphericity to changing in sphericity. Third-order change, then, will take place at intervals, fourth-order change at instants, and so on. There is no vicious regress: we just need two kinds of things, instants and intervals.

This is pretty complicated, more complicated than the simple story that change doesn’t happen at a time but at a pair (or more) of times. But it also gives me a nice story about what’s lacking in the at-at theory of change. It may be necessarily the case that an object changes if and only if it is one way at one time and another way at another time. But that isn’t what change is. What change is is having an interval of time such that the object is one way at one endpoint and another way at the other endpoint. But an interval is something over and beyond its endpoints. If, perhaps per impossibile, God were to annihilate the interval I between t1 and t2, the ball would be first spherical and then non-spherical, but it wouldn’t have changed from spherical to non-spherical.

Tuesday, January 10, 2017

Infinity, Causation and Paradox

I've just signed a contract with Oxford for this book, with a manuscript delivery date in September.

Analogue jitter in motivations and the randomness objection to libertarianism

All analogue devices jitter on a small time-scale. The jitter is for all practical purposes random, even if the system is deterministic.

Suppose now that compatibilism is true and we have a free agent who is determined to always choose what she is most strongly motivated towards. Now suppose a Buridan’s ass situation, where the motivations for two alternatives are balanced, but where the motivations were acquired in the normal way human motivations are, where there is an absence of constraint, etc.

Because of analogue jitter in the brain, sometimes one motivation will be slightly stronger and sometimes the other will be. Thus which way the agent will choose will be determined by the state of the jitter at the time of the choice. And that’s for all practical purposes random.

Either in such cases there is freedom or there is not. If there is no freedom in such cases, then the compatibilist has to say that people whose choices are sufficiently torn are not responsible for their choices. That is highly counterintuitive.

The compatibilist’s better option is to say that there can still be freedom in such cases. It’s a bit inaccurate to say that the choice is determined by the jitter. For it’s only because the rough values of the strengths of the motivations are as they are that the jitter in their exact strength is relevant. The rough values of the strengths of the motivations are explanatorily relevant, regardless of which way the choice goes. The compatibilist should say that this kind of explanatory relevance is sufficient for freedom.

But if she says this, then she must abandon the randomness objection against libertarianism.

Spiritual but not religious

A lot of people identify as spiritual but not religious. It would be interesting to have statistics on how common this is among professional philosophers. There are lots of naturalists and a significant minority of theists of definite religion, but I just haven’t run across many in between. But shouldn’t one expect that there be a lot of philosophers like that, convinced by argument or just intuition that there is much more to the world than science could possibly get at, but not convinced by the arguments for any particular religion? Maybe it’s because as a profession we prefer definite views? Or maybe there are many philosophers in this category but they just don’t talk about it that much?

I do think it’s important not to downplay the intellectual bona fides of the “spiritual but not religious”. The arguments that there is more to the world and to life than there is room for in naturalism, that there is something “spiritual”, are very strong indeed. (Josh Rasmussen’s and my forthcoming Necessary Existence is relevant here, as are considerations about the meaning of life, the narrow space for normativity and mind on naturalist views, the implausibility of holding that there be a whole category of human experience that is never veridical, etc.) I think there are strong arguments that this something “spiritual” includes God, and there are strong arguments that Catholic Christianity is correct. But it should be very easy to imagine being convinced by the arguments for a spiritual depth to the world but not being convinced by the further arguments (I am not taking a stance in this post on whether it would be rational full stop to be in this position—I do, after all, think the arguments going all the way to Catholicism are strong).

Monday, January 9, 2017

Epicurus on death

There is the classic Epicurean argument that:

  1. You aren’t harmed by death when dead, since then you don’t exist, and

  2. You aren’t harmed by death when alive, since then you’re still alive,

so:

  3. You aren’t ever harmed by death.

I just thought of a cute way to make the argument slightly more compelling. Take it, contrary to fact but in accord with what the Epicureans believed, that death is the permanent cessation of existence.

Now let’s imagine a scenario where everything, including time itself, comes to an end at the last moment of your life. And for simplicity (this doesn’t affect anything) let’s suppose you came into existence at the beginning of time. Then you are never dead. When we think about this scenario, the analogue of claim 1 is trivially true, for you’re never dead. Thus on this scenario, all that needs to be thought about is an analogue of claim 2 (with “death” being understood not as an event but as the fact that one’s life has an end) plus the additional highly plausible claim:

  4. The scenario where everything, including time, comes to an end at the last moment of your life is no better for you than the scenario where you alone come to an end then.

I don’t think this makes the argument much more compelling, because I don’t think claim 1 was ever the real problem with the Epicurean argument. But in the scenario where time comes to an end, I think we avoid some irrelevant objections to the argument.

The real problem with the Epicurean argument is, I think, two-fold. First, I think 2 is dubious: your well-being at one time can depend on what happens or does not happen at other times.

Second, one can accept 3 and still think you’re harmed by death. For one can hold that one isn’t ever harmed by death, i.e., that there is no time at which one is harmed by death, but nonetheless as a four-dimensional whole one is worse off for death. Here’s one way to make the point. Suppose that by choosing a medical regimen for you, I can choose whether:

  • You are unconscious for ten years, and then you live ten years while experiencing two units of wholesome pleasure each day, without anything negative, and then you cease to exist.
  • You live ten years while experiencing one unit of wholesome pleasure each day, without anything negative, and then you cease to exist.

If I choose the regimen that gives you the second life, I harm you overall but you aren’t ever harmed—there is no time at which you’re worse off for that option.
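
For concreteness, counting a year as 365 days:

  • First regimen: 0 units per day for ten years (unconscious), then 2 units per day for ten years, for a total of 2 × 3650 = 7300 units.

  • Second regimen: 1 unit per day for ten years, then nonexistence, for a total of 1 × 3650 = 3650 units.

During the first decade you are better off under the second regimen (1 versus 0), and during the second decade you do not exist under it, so there is no time at which you are worse off; yet your four-dimensional total is cut in half.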

Maps from desires and beliefs to actions

On a naive Humean picture of action, we have beliefs and desires and together these yield our actions.

But how do beliefs and desires yield actions? There are many (abstractly speaking, infinitely many, but perhaps only a finite subset is physically possible for us) maps from beliefs and desires to actions. Some of these maps might undercut essential functional characteristics of desires—thus, perhaps, it is impossible to have an agent that minimizes the satisfaction of her desires. But even when we add some reasonable restrictions, such as that agents be more likely to choose actions that are more likely to further the content of their desires, there will still be infinitely many maps available. For instance, an agent might always act on the strongest salient desire while another agent might randomly choose from among the salient desires with weights proportional to the strengths—and in between these two extremes, there are many options (infinitely many, speaking abstractly). Likewise, there are many ways that an agent could approach future change in her desires: allow future desires to override present ones, allow present desires to override future ones, balance the two in a plethora of ways (e.g., weighting a desire by the time-integral of its strength, or perhaps doing so after multiplying by a future-discount function), etc.
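
For instance, the first two maps just mentioned, as a toy sketch (the representation of the salient desires as a bare array of strengths is of course a caricature):

    #include <stdlib.h>

    /* Map 1: always act on the strongest salient desire. */
    static int strongest_desire(const double *strength, int n) {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (strength[i] > strength[best]) best = i;
        return best;
    }

    /* Map 2: pick a salient desire at random, with probability proportional
     * to its strength. */
    static int weighted_random_desire(const double *strength, int n) {
        double total = 0.0;
        for (int i = 0; i < n; i++) total += strength[i];
        double r = total * rand() / ((double)RAND_MAX + 1.0);
        for (int i = 0; i < n; i++) {
            if (r < strength[i]) return i;
            r -= strength[i];
        }
        return n - 1;  /* guard against floating-point rounding at the boundary */
    }

Fed the very same strengths, the two maps can issue different actions, and between these two extremes lies a continuum of further maps.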

One could, I suppose, posit an overridingly strong desire to act according to one particular map from beliefs and desires to actions. But that is psychologically implausible. Most people aren’t reflective enough to have such a desire. And even if one had such a desire, it would be unlikely to in fact have strength sufficient to override all first-order desires—rare (and probably silly!) is the person who wouldn’t be willing to make a slight adjustment to how she chooses between desires in order to avoid great torture.

Nor will it help to move from desires to motivational structures like preferences or utility assignments. For instance, the different approaches towards risk and future change in motivational structure will still provide an infinity of maps from beliefs (or, more generally, representational structures) and motivational structures to actions.

Here’s one move that can be made: Each of us in fact acts according to some “governing mapping” from motivational and representational structures to actions (or, better, probabilities of actions, if we drop Hume’s determinism as we should). We can then extend the concept of motivational structure to include such a highest level mapping. Thus, perhaps, our motivational structure consists of two things: an assignment of utilities and a mapping from motivational and representational structures to actions.

But at this point the bold Humean claim that beliefs are impotent to cause action becomes close to trivial. For of course everybody will agree that we all implement some mapping from motivational and representational structures to actions or action probabilities (maybe not numerical ones), and if this mapping itself counts as part of the motivational structure, then everyone will agree that we all have a motivational structure essential to all of our actions. A naive cognitivist, for instance, can say that the governing mapping is one which assigns to each motivational and representational structure pair the action that is represented as most likely to be right (yes, this mapping doesn’t depend on the specific contents of the motivational structure).

Perhaps, though, a Humean can at least maintain a bold claim that motivational structures are not subject to rational evaluation. But if she does that, then the only way she can evaluate the rationality of action is by the action’s fit to the motivational and representational structures. But if the motivational structures include the actually implemented governing mapping, then every action an agent performs fits the structures. Hence the Humean who accepts the actual governing mapping as part of the motivational structure has to say that all actions are rational. And that’s a bridge too far.

Of course a non-Humean also has to give an account of the plurality of ways in which motivational and representational structures can be mapped onto actions. And if the claim that there is an actually implemented governing mapping is close to trivial, as I argued, then the non-Humean probably has to accept it, too. But she has at least one option not available to the Humean. She can, for instance, hold that motivational structures are subject to rational evaluation, and hence that there are rational constraints—maybe even to the point of determining a unique answer—on what the governing mapping should be like.

Saturday, January 7, 2017

Looping and eternal pleasure

Scenario 1: You experience a day of deeply meaningful bliss and then are annihilated.

Scenario 2: You experience a day of deeply meaningful bliss and then travel back in time, with memories reset, to restart that very same day of an internally looping life.

Scenario 3: You experience a day of deeply meaningful bliss, over and over infinitely many times, with memories reset.

Here are some initial intuitions I have:

  1. Scenario 3 is much better than Scenario 1.

  2. Scenario 3 is at most a little better than Scenario 2.

But the following can be argued for:

  3. Scenario 2 is no better than Scenario 1.

After all, you experience exactly the same period of bliss in Scenarios 1 and 2. Granted, in Scenario 1 you are annihilated, but (a) that doesn’t hurt, and (b) the only harm from the annihilation is that your existence is limited to a single day, which is also the case in Scenario 2. Time travel is admittedly cool, but because of the memory reset in Scenario 2, you don’t get the satisfaction of knowing you’re a time-traveler.

This is a paradox. How to get out of it? I see two options:

  4. Deny the possibility of internal time loops.

  5. Affirm that Scenario 3 is much better than Scenario 2.

Regarding 4, one would also have to deny the possibility of external time loops. After all, it wouldn’t be all that different for you if everybody’s time looped together in the same way, and so external time loops can be used to construct a variant on Scenario 2.

I personally like both 4 and 5.

Objection: On psychological theories of personal identity, memory reset is death and hence in Scenario 3 you only live one day.

Response 1: Psychological theories of personal identity are false.

Response 2: Modify Scenario 3. Before that day of bliss, you have a completely neutral day. On each of the days of deeply meaningful bliss, you remember that neutral day, but then have amnesia with respect to the last 24-hour period once each blissful day ends. By psychological theories, the person on each blissful day is identical with the person on the neutral day, and hence, by symmetry and transitivity of identity, the persons on all the blissful days are identical with one another.

Note: Scenario 1 is inspired by a question by user “Red”.

Thursday, January 5, 2017

Eternal pleasure

Suppose the minute of the greatest earthly pleasure you’ve ever tasted was repeated, over and over, for eternity, with your memory reset before each repeat. If hedonism were true, this would be a truly wonderful life, much better than your actual life. But it seems to be a pretty rotten life. So hedonism seems quite far from the truth.

But could there, perhaps, be a pleasure such that eternal repetition of it, in and of itself, would be worth having? It would have to be a pleasure that carries its meaningfulness in itself, one whose quale itself is deeply meaningful. It would have to be an experience of infinite depth. Could we have such an experience? With Aquinas, I think philosophy cannot answer this question, though theology can.

Monday, January 2, 2017

Humean views of rationality and the pursuit of money

Consider a Humean package view of rationality where:

  1. The end of practical rationality is desire satisfaction.

  2. All the rational motivational drive in our decisions comes from our desires.

  3. There are no rational imperatives to have desires.

Now suppose that you learn that some costless action will further one or more of your desires, but you have no idea which desire or desires will be furthered by that action. (If we want to have some ideality constraints on which desires make action rational—say, only desires that would survive idealized psychotherapy—then we can suppose that you also know that the desire or desires furthered by the action will satisfy those constraints. I will ignore this wrinkle.)

Any theory of rationality that holds it to be rational to pursue one’s desires should hold it both rational and possible to take that costless action. In the abstract, a case where you know that some desire will be furthered but have no idea which one seems a strange edge case. But actually there is nothing all that strange about this. When money is offered to us, sometimes we have a clear picture of what the money would allow us to do. But sometimes we don’t: we just know that the money will help further some end or other. (Of course, in some people, the pursuit of money may have a non-instrumental dimension, but that’s vicious and surely unnecessary.)

So now let’s go back to the costless action that furthers one or more of your desires and the desire theory of rational motivation. How can this theory accommodate this action?

Option 1: Particular desires. You pick some desire of yours—let’s say, a desire to read a good book—and you think to yourself: “There is a non-zero probability that the action furthers my desire to read a good book.” Then the desire to read a good book, in the usual end-to-means ways, motivates you to do the costless action.

That, of course, could work. And in fact, in the case of money we do sometimes proceed by imagining something that we could buy. However, thinking that what motivates one is just the non-zero probability of furthering a particular desire gets things wrong for two reasons. The first is that we could imagine the case being enriched as follows: you learn that the desire that will be furthered by the action is none of the desires that would come to mind if you were to spend less than a minute thinking about the case, and yet you need to make your decision within a minute. The inability to think of a particular desire that even might be furthered by the action does not affect the rational possibility of taking the costless action.

The second is that this approach gets the strength of motivation wrong. You have many desires, and the desire to read a good book is only one among many. The probability that that desire to read a good book would be furthered by the costless action might well be tiny, especially if you received the further information that it is only one of your desires that is furthered by the action. Such a small probability of a benefit could still motivate you to take a costless action, but it may not work for similar cases where there is a modest cost. For instance, we can suppose you learn that:

  4. The benefit is roughly equal to that of reading a good book, as measured by desire-satisfaction.

  5. The cost is roughly a tenth of the benefit of reading a good book, as measured by desire-satisfaction.

  6. You have a hundred desires and the one furthered is but one of them.

Well, then, the action is clearly worth it by (4) and (5). But it’s not worth doing the action simply on a one percent chance that it will lead to reading a good book, since the cost is ten percent of the benefit of reading a good book.
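
Here is a quick worked version of the numbers, on my simplifying assumption that desire-satisfaction can be put on a single numeric scale:

    # Toy numbers for (4)-(6); the single numeric scale is my simplifying assumption.
    B = 1.0                  # benefit of reading a good book
    cost = 0.1 * B           # (5): the cost is a tenth of that benefit
    n_desires = 100          # (6): the furthered desire is one of a hundred

    # Motivation supplied by the particular "read a good book" desire alone:
    expected_gain_particular = B / n_desires   # 0.01 * B, well below the cost

    # Gain the agent knows she will get, since some equally weighted desire
    # is certain to be furthered (4):
    known_gain = B                             # well above the cost

    print(expected_gain_particular < cost < known_gain)  # True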

One might try to remedy the second problem by mentally going through a larger number of desires so as to increase the probability that some one of the desires will be fulfilled. But we still have the first objection—there may not be enough time to do this—and surely it is implausible that one would have to go through such mental lists of desires in order to get the motivation.

Option 2: A higher-order desire to have satisfied desires. Suppose you have a higher-order desire H to satisfy lower-order desires. Then while you don’t know which lower-order desire is furthered by the action, you do know that this higher-order desire is furthered by it.

This approach seems to lead to an unfortunate double-counting. When you sit down to read a good book, do you really get two benefits, one of reading the book and the other of furthering the higher-order desire to have satisfied lower-order desires? If not, the approach is problematic. But if so, then it gets the rational strength of motivation wrong. For suppose that you are choosing between two actions. Action A will lead to your reading a good book. Action B will lead to the fulfillment of an unknown desire other than reading a good book, a desire you nonetheless know to have the same weight. On the higher-order solution, it seems you have a double motivation for action A, namely H and the desire to read a good book, but only a single motivation for action B, namely H, and hence you should have a twice as strong rational motivation for A. But that’s surely not rational!
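
In the same toy terms, the double-counting worry can be made explicit (again my own illustration, with made-up equal weights):

    # Equal weights (my toy assumption); H is the higher-order desire to have
    # satisfied lower-order desires.
    w_book = 1.0   # the known desire furthered by action A
    w_H = 1.0      # H, furthered by either action

    motivation_A = w_book + w_H   # both the book desire and H push toward A
    motivation_B = w_H            # only H pushes toward B (the furthered desire is unknown)

    print(motivation_A / motivation_B)  # 2.0: twice the rational motivation for A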

Maybe, though, you can get out of the double-counting in some way, by having some story about desire-overlap, so that H and the desire to read a good book don’t add up to a double desire. I suspect that this may undercut the force of the story, by making H not be a real desire.

But there is a second and more serious problem with the story. Suppose that Jim has all the usual lower-order desires but lacks H. If rational motivation comes from desires, then Jim will not be rationally motivated to perform the action. (Maybe he will have some accidental non-rational motivation for the action.) But surely not going for a costless action that he knows will fulfill some desire of his would be a rational failing, assuming that it’s rational to fulfill one’s desires. Hence there will have to be a rational imperative to have H among one’s desires, contrary to the third part of the Humean package we are exploring.

Now I suppose we could drop the third part of the Humean picture, and hold that rationality requires some desires like H. But I think this makes the rest of the picture less plausible. If rationality requires one to have certain desires, it could just as well require one directly to fulfill certain ends, thereby undercutting the second part of the Humean picture.

Finally, I should note that not all non-Humeans should rejoice at this argument. For similar considerations may apply against some other views. For instance, some Natural Law views that tie motivation very tightly to basic goods may have this problem.