Tuesday, July 22, 2014


  1. Every meaning derives from components to which intelligent beings have assigned a meaning.
  2. Some things have a meaning that does not derive from components to which earthly beings have assigned a meaning.
  3. Therefore, there is a non-earthly intelligent being.

I suggest two examples for premise (2).

Life: Life has a meaning. But a meaning of life that derives from our assignments is not a meaning that matters to us. What we have assigned meaning to, we could reassign meaning to. If the meaning of life were merely a matter of human assignment, then humanity's search for meaning would be a mere matter of curiosity, of figuring out how our ancestors assigned meaning and how those meanings combine. It would be either like searching for the meaning of an ancient inscription (a case where we don't know the meanings of the components) or like parsing a complex sentence of first-order logic (a case where we know the meanings of the components but don't know how they go together). There would be no deep existential relevance in such a meaning, since we could just as well assign a meaning ourselves. It would be just a meaning assigned by peers.

This example shows that the meaning of life needs to be assigned not just by a non-earthly intelligent being, but by a being whose meaning-assignments have deep existential relevance to us: a being with a deep kind of authority. So not just some space alien that seeded life on earth, say.

The sublime: Any case of the sublime—say, the Orion Nebula or Beethoven's 9th—has a meaning that escapes us, all of us. Cases of the sublime can be natural or human-made, but in both cases they have a meaning beyond us. And that meaning-beyond-us isn't just a matter of being better at figuring out how components combine, in the way that the meaning of a sentence of first-order logic is. In a piece of the sublime we don't know very well, but can only vaguely sense, what the meaningful components are, and we are not responsible for the mysterious meaningfulness of these components. Even in the human-made cases, the creator is a servant to that mysterious meaning of the components.

Moreover, the meaning of the sublime piece is one that we resonate with, one we have a kind of grasp of—or maybe that has a grasp on us—that ever eludes us. We have a resonance to the meaning of the sublime. So whatever story we give about that meaning, we also need to give a story about how it's a meaning we resonate to. There could be aliens that have assigned deep mythological interpretations to various components of the Orion Nebula. But that isn't the meaning we resonate to. So, once again, the argument not only yields a non-earthly intelligence, but one who can make us resonate to his designs.

Saturday, July 19, 2014

What is a material object?

I've found the notion of a material object very puzzling. Here is something that would render it less puzzling to me:

  • x is a material object if and only if x has limited location.
There would then be three ways for an object y to be immaterial:
  1. There are locations and y has no location.
  2. There are no locations.
  3. There are locations and y is unlimited in location.
It would now be plausible that a perfect being would be necessarily immaterial. A perfect being doesn't need anything other than itself, so it could exist in worlds where there are no locations, in which worlds it would have type 2 immateriality. And in worlds where there are locations, a perfect being would be unlimited in location, and would have type 3 immateriality. Thus, in all worlds, a perfect being would have immateriality. But in no world would a perfect being have type 1 immateriality.

One might worry that there could be an animal that is as big as space itself, and then it would count as an immaterial object. But even though the animal would be everywhere, it wouldn't be everywhere in every part and respect. Its digestive system would be here but not there, and so on.

Alternatively, one might stick to our definition of materiality as limited location, but modalize. Maybe "limited location" is a modal concept, so that a being that could be limited in location thereby counts as limited in location, and hence as material.

Thursday, July 17, 2014

More on the Adams Thesis

The Adams Thesis for a conditional → says that P(A→B)=P(B|A). There are lots of theorems, most notably due to Lewis, that say that this can't be right, but they all make additional assumptions. On the other hand, van Fraassen has a paper arguing that any countable probability space can be embedded in a probability space that has a conditional → which satisfies the Adams Thesis and a whole bunch of axioms of conditional logic. The proof in the paper appears incomplete to me (it is not shown that all the necessary conditions for the choice of [A,B] are met). Anyway, over the last couple of days I've been working on this, and I think I have a proof (written, but needing proofreading) of a generalization of van Fraassen's result that drops the countability assumption (but uses the Axiom of Choice).
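To see why → has to be a genuinely new connective, here is a quick numeric illustration (the die example is mine, not van Fraassen's construction): reading A→B as the material conditional ¬A∨B makes the Adams Thesis fail already in a tiny finite space.

```python
from fractions import Fraction

# Toy probability space: a fair six-sided die. (My own illustrative
# example; it is not van Fraassen's embedding construction.)
omega = range(1, 7)
P = lambda event: Fraction(sum(1 for w in omega if event(w)), 6)

A = lambda w: w >= 3        # "the roll is at least 3"
B = lambda w: w % 2 == 0    # "the roll is even"

p_cond = P(lambda w: A(w) and B(w)) / P(A)       # P(B|A)
p_material = P(lambda w: (not A(w)) or B(w))     # P(not-A or B)

print(p_cond, p_material)   # 1/2 vs 2/3: the material-conditional
                            # reading violates the Adams Thesis here
```

So a conditional satisfying the Adams Thesis in general must be added to the algebra rather than defined from the Boolean connectives, which is what the embedding result supplies.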

The conditional logic one can have along with the Adams Thesis is surprisingly strong. In my construction, for each A, the function C_A(B) = (A→B) is a Boolean algebra homomorphism. Thus, we have Weakening, Conjunction of Consequents, Would=Might, and the Conditional Law of Excluded Middle. The main plausible axioms that we don't get are Weak Transitivity and Disjunction of Antecedents (can't get in the former case; don't know about the latter).

The proof isn't that hard once one sees just how to do it, but it ends up using the Maharam Classification Theorem, the von Neumann-Maharam Lifting Theorem and oodles of Choice, so it's not elementary.

Tuesday, July 15, 2014

Trust and the prisoner's dilemma

This is pretty obvious, but I never quite thought of it in those terms: The prisoner's dilemma shows the need for the virtue of trust (or faith, in a non-theological sense). In the absence of contrary evidence, we should assume others to act well, to cooperate.
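For concreteness, here is the standard payoff structure of the dilemma (the years-in-prison numbers are the usual textbook choice, my assumption rather than anything from this post): defecting strictly dominates for each player, yet mutual cooperation is better for both than mutual defection, which is why cooperation needs something like trust.

```python
# Payoffs are (row player's years, column player's years); lower is better.
# The specific numbers are illustrative textbook values.
payoff = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (10, 0),
    ("defect",    "cooperate"): (0, 10),
    ("defect",    "defect"):    (5, 5),
}

# Whatever the other player does, defecting gives me fewer years...
for other in ("cooperate", "defect"):
    assert payoff[("defect", other)][0] < payoff[("cooperate", other)][0]

# ...yet mutual cooperation beats mutual defection for both players.
assert payoff[("cooperate", "cooperate")][0] < payoff[("defect", "defect")][0]
print("defection dominates, but mutual cooperation is jointly better")
```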

This assumption perhaps cannot be justified epistemically non-circularly, at least not without adverting to theism, since too much of our knowledge rests on the testimony of others, and hence is justified by trust. Our own observations simply are not sufficient to tell us that others are trustworthy. There is too much of a chance that people are betraying us behind our backs, and it is only by relying on theism, the testimony of others, or directly on trust, that we can conclude that this is not so.

It seems to me that the only way out of the circle of trust would be an argument for the existence of a perfect being (or for some similar thesis, like axiarchism) that does not depend on trust, so that I can then conclude that people created by a perfect being are likely to be trustworthy. But perhaps every argument rests on trust, if only a trust in our own faculties?

Saturday, July 12, 2014

Responsibility and randomness

Consider this anti-randomness thesis that some compatibilists use to argue against libertarianism:

  1. If given your mental state you're at most approximately as likely to choose A as to choose B, you are not responsible for choosing A over B.
Note that being in such a state of mind is compatible with determinism, since even given determinism one can correctly say things like "The coin is equally likely to come up tails as heads."

Thesis (1) is false. Here's a counterexample. Consider the following family of situations, where your character is fixed between them: You choose whether to undergo x hours of torture in order to save me from an hour of torture. If x=0.000001, then I assume you will be likely to choose to save me from the torture—the cost is really low. If x=10, then I would expect you to be very unlikely to save me from the torture—the cost is disproportionate. Presumably as x changes between 0.000001 and 10, the probability of your saving me changes from close to 1 to close to 0. Somewhere in between, at x=x1 (I suppose x1=1, if you're a utilitarian), the probability will be around 1/2. By (1), you wouldn't be responsible for choosing to undergo x1 hours of torture to save me from an hour of torture. But that's absurd.

Thus, anybody who believes in free will, compatibilist or incompatibilist, should deny (1).

Now, let's add two other common theses that get used to attack libertarianism:

  2. If a choice can be explained by antecedent mental conditions that yield at most approximately probability 1/2 of that choice, a contrastive explanation of that choice cannot be given in terms of antecedent mental conditions.
  3. One is only responsible for a choice if one can give a contrastive explanation of it in terms of antecedent mental conditions.
Since (2) and (3) imply (1), and (1) is false, it follows that at least one of (2) and (3) must be rejected as well.

There is an independent argument against (1). The intuition behind (1) is that responsibility requires that a choice be more likely than its alternative. But necessarily God is responsible for all his choices. And surely it was possible in at least one of his choices for him to have chosen otherwise (otherwise, how can he be omnipotent?). If the choice he actually made was not more likely than the alternative, then by the intuition he was not responsible. But God is always responsible. Suppose, then, that the choice he actually made was more likely than the alternative. Nonetheless, he could have made the alternative choice, and had he done so, he would have done something less likely than its alternative, and by the intuition he wouldn't have been responsible, which again is impossible. Thus, the theist must reject the intuition.

Thursday, July 10, 2014

Kant and Lewis on our freedom

Kant (on one reading) holds that the initial conditions of the universe and the laws of nature depend on us (noumenally speaking). This reconciles determinism with freedom: sure, our actions are determined by the laws and initial conditions, but the laws and initial conditions are up to us. Kant also thinks that a further merit of this view is that one can blame people whose misdeeds come from a bad upbringing, because noumenally speaking they were responsible for their own upbringing.

Lewis holds that freedom is compatible with determinism, and in a deterministic world had one acted otherwise, the laws would have been different.

Everybody agrees that the view I ascribe to Kant is crazy (though not everybody agrees that the ascription is correct). But Lewis's view is supposed to be much saner than Kant's.

How? The obvious suggestion is that Lewis only makes the laws depend counterfactually on our actions (assuming determinism) while Kant makes the laws depend explanatorily on our actions. But that suggestion doesn't work, since Lewis's best-systems account of laws makes the laws depend on the law-governed events, and so it makes the laws depend not just counterfactually on our actions but also explanatorily: the laws' being as they are is grounded in part in our actions. So both accounts make the laws explanatorily depend on us.

Admittedly, Kant also makes the past, not just the laws, depend on our actions. But that's also true for Lewis, albeit to a smaller degree, because of his doctrine of small miracles...

Monday, July 7, 2014

A quick Thomistic argument for alternate possibilities

  1. I freely choose between A and B only if I am deciding in the light of a non-dominated reason for A and a non-dominated reason for B.
  2. A non-dominated reason for C is a causal power for deciding in favor of C.
  3. If x has a causal power for φing, then x can φ.
  4. So, if I freely choose between A and B, then I can decide in favor of A and I can decide in favor of B.
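The argument's logical skeleton can be checked mechanically. Here is a sketch in Lean (all predicate names are my own labels, not the post's), taking (1)–(3) as hypotheses and deriving (4):

```lean
-- Skeleton: (1)+(2) say freely choosing between a and b requires a causal
-- power for deciding in favor of each; (3) says a causal power for deciding
-- entails the corresponding ability; (4) then follows immediately.
theorem alternate_possibilities
    {Agent Action : Type}
    (FreelyChooses : Agent → Action → Action → Prop)
    (Power Can : Agent → Action → Prop)
    (p12 : ∀ x a b, FreelyChooses x a b → Power x a ∧ Power x b)
    (p3 : ∀ x c, Power x c → Can x c) :
    ∀ x a b, FreelyChooses x a b → Can x a ∧ Can x b :=
  fun x a b h => ⟨p3 x a (p12 x a b h).1, p3 x b (p12 x a b h).2⟩
```

The formalization makes it visible that all the philosophical weight rests on premises (1)–(3); the inference itself is trivial.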

Widerker on Pruss on incompatibilism

David Widerker has a very nice post explaining my version of the consequence argument.

Sunday, July 6, 2014

Light-up wax dragon

The summer is a nice time for various non-philosophical projects. My daughter had the idea for this project. We humans really like light, don't we?

Thursday, July 3, 2014

Natural law and normativity

Start with this idea:

  • An activity A is φly required of x if and only if (and because) x's not performing A constitutes a failure of x's φ faculty.
For "φ" we can fill in "epistemic", "practical", "cardiovascular", etc.

But there is a problem: how do we identify the epistemic, practical and cardiovascular faculties? We could try to pick them out in some plausible way: the epistemic faculty is our faculty of belief formation, the moral faculty is the will, and the cardiovascular faculties are the heart and blood vessels.

However, I think things aren't that easy. The account of a cardiovascular faculty doesn't work: not every failure of heart function need be a cardiovascular failure, since a heart (if not in us, then in some other species) may have non-cardiovascular function (e.g., of providing an internal clock). On the other hand, a representation is a belief at least in part because it is an output of the epistemic faculties. A faculty is a will because it aims at the production of actions (i.e., one sense of praxeis). The epistemic and practical cases, thus, end up defining epistemic and practical requirement in terms of the proper functioning of the faculties of epistemic and practical production.

But what are epistemic and practical productions? (And by analogy, what's a cardiovascular production?) We could try to identify them by ostension. My believing that I have two legs is an epistemic production, while my writing this post is a practical production.

There are two problems with the ostensive approach. The first is the problem of aliens. Aliens can have epistemic and practical faculties, i.e., intellects and wills, but they might not have exactly productions that fit in the same natural kind as our believings and doings. This problem is similar to the problem of multiple realizability for token identity physicalism in the theory of mind.

The second is the "So what?" problem. If I simply ostend to two of my faculties, whether through their productions or otherwise, that leaves it mysterious why the normativity that they generate is particularly important. There is something deeper and more important about epistemic and practical requirements than about cardiovascular ones. A person who is an epistemic or practical disaster but who has a well-functioning cardiovascular system is much worse off than one who is an epistemic and practical success but has a disaster of a cardiovascular system (maybe is surviving on life support). The obvious natural law solution to the "So what?" problem is to say that our human nature makes epistemic and practical flourishing more non-instrumentally important to us. But that raises the question of whether there couldn't be beings who are very much like us, yet whose natures elevate the cardiovascular over the rational and practical. It seems to be because the epistemic and practical faculties are what they are that they are more non-instrumentally important to our flourishing than our cardiovascular faculties, rather than because of our human nature.

I don't know how serious the two problems are. Maybe one can and should bite the bullet on them.

Wednesday, July 2, 2014

A plan for your life

Consider this argument:

  1. There is a comprehensive plan for your life not of your making.
  2. The best hypothesis to explain (1) is that the plan is God's.
  3. So, probably, God exists.
More could be said about (2) and the inference to (3). But I want to focus on (1). It seems pretty clear that (1) begs the question against the atheist or agnostic: the only reason to think (1) is true is that one thinks there is a Planner, and this the atheist and agnostic do not believe.

But I think this is too quick. I think a lot of people may have an intuition of (1) that is not simply based on a belief in a Planner. That intuition may be basic or it may be inferred inductively from various events in the person's life having an apparent plot, and more than a plot, a plan made with the person in sight. I remember a student who professed to be an atheist telling me that she feels that her life has a plan, and that she doesn't know if she can fit this with her atheism. (I told her she needed to figure this out.) She may have been exceptional: many atheists probably do not have the intuition of (1). But at least in regard to her, the argument wouldn't have begged the question.

And even if the intuition of (1) were always based on theism, that would not make the argument question begging in every case. For one could use Dan Johnson's brilliant observation on the ontological argument here. Suppose someone is reasonably a theist (e.g., due to a sensus divinitatis), then reasonably infers (1), then for some unreasonable reason (say, the wrong kind of social pressures) becomes an atheist but still maintains the belief in (1). Her belief in (1) remains reasonable—it is her atheism that is unreasonable on this story. (I don't need any claim like that every atheist is unreasonable. But this one I am supposing to be.) Then she would be reasonable in inferring back to theism from (1).

Monday, June 30, 2014

Measuring rotational speed with a phone and an LED

Over the weekend, I was having fun using an LED as a photodiode and hooking it up to my oscilloscope. This can be used to measure the speed of a drill (just stick a reflective spot on a matte chuck and use a flashlight). I was going to make an Instructable on measuring rotational speeds of various objects, but my son told me that most people don't have an oscilloscope. But then I found you can just connect the LED to the microphone input on a phone and use a free oscilloscope app, and use that to measure rotational speed. And so I made an Instructable that doesn't need an oscilloscope.
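The arithmetic behind the measurement is simple: with one reflective spot, each revolution produces one light pulse, so the rotation rate in RPM is 60 divided by the pulse period read off the trace. A sketch (the function name and sample timestamps are illustrative, not from the Instructable):

```python
def rpm_from_pulses(pulse_times_s):
    """Estimate RPM from a list of pulse timestamps in seconds.

    One reflective spot = one pulse per revolution, so the mean
    interval between pulses is the period of one revolution.
    """
    intervals = [b - a for a, b in zip(pulse_times_s, pulse_times_s[1:])]
    mean_period = sum(intervals) / len(intervals)   # seconds per revolution
    return 60.0 / mean_period

# A drill pulsing every 20 ms turns at 50 rev/s, i.e. 3000 RPM.
print(rpm_from_pulses([0.00, 0.02, 0.04, 0.06]))
```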

The argument from vagueness

Here's an argument inspired by Plantinga's argument from counterfactuals:

  1. The meaning of a word is wholly determined by the decisions of language users.
  2. The meaning of "bald" is not wholly determined by the decisions of earthly language users.
  3. Therefore, there is a non-earthly language user whose decisions at least partly determine the meaning of "bald".
The argument for (2) is this:
  4. In any hypothetical sequence to whose last member "bald" does not apply and to whose first it does, there is a transition point in the sequence, i.e., a member to whom "bald" applies but to whose successor it does not.
  5. The points in a hypothetical sequence at which "bald" does or does not apply are wholly determined by the meaning of "bald".
  6. There are hypothetical sequences where the decisions of earthly language users do not determine the transition point.
  7. So, (2) is true.
The argument for (6) is simply to exhibit a series of people, with someone completely bereft of hair on one end, and someone with a full head of hair on the other, with very slight transitions. Clearly our decisions and those of our ancestors do not determine where the transition point is. Claim (5) seems very plausible.

That leaves (4). But that's a matter of logic for any fixed sequence, as a standard argument for epistemicism points out. For suppose there is no transition point. Then:

  • not ("bald" applies to x_n and "bald" does not apply to x_(n+1))
is true for n=1,...,N−1, where N is the number of items in the series. Given the fact that "bald" applies to x_1, we can then conclude by classical logic that "bald" applies to x_2, and to x_3, and so on up to x_N, contrary to the assumption that "bald" does not apply to the last member.
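The transition-point logic can be checked mechanically for a finite series (a sketch; the sharp cutoff predicates below are arbitrary stand-ins of my choosing for whatever the sharp meaning of "bald" might be): any predicate true of the first member and false of the last must have a transition point.

```python
def has_transition_point(series, bald):
    """Some member satisfies bald while its immediate successor does not."""
    return any(bald(series[n]) and not bald(series[n + 1])
               for n in range(len(series) - 1))

series = list(range(0, 100001, 1000))   # hair counts: 0, 1000, ..., 100000

# For ANY sharp cutoff: the predicate holds at one end and fails at the
# other, so classical logic guarantees a transition point somewhere.
for cutoff in (1, 37000, 99999):
    bald = lambda hairs, c=cutoff: hairs < c
    assert bald(series[0]) and not bald(series[-1])
    assert has_transition_point(series, bald)

print("every sharp cutoff yields a transition point")
```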

And the best candidate for the non-earthly language user is God. For any finite language user, say an alien who gave us language, would be in the same boat: its decisions would be insufficient to determine all meaning.

Friday, June 27, 2014

Stomp rockets out of magazines

Here's my new and improved Instructable on recycling magazines into stomp rockets.

Intentions, tryings and Double Effect

Bob buys a lottery ticket, hoping to win but knowing that it's exceedingly unlikely.

Suppose Bob wins. We can't say that his winning is an unintended side-effect in the sense involved in the Principle of Double Effect. But it is also odd to say that he intended to win, given that he knows how exceedingly unlikely it is. The phrase "hoping to win" is much more apt than "intending to win." Likewise, it doesn't seem right to say that winning was a part of Bob's plan. He'd have to be crazy to plan on winning. Nonetheless, winning is something he aimed at, and his action would have been a failure—an expected failure—if he didn't win.

I intend to post this post, and posting this post is a part of my action plan. Bob's relationship to winning only differs quantitatively from my relationship to posting this post. In both cases, there is a probability of success somewhere between 0 and 1. In my case, it's close to 1. In Bob's case, it's close to 0. Neither of us can disclaim responsibility upon success. Both of us have our hearts set upon the goal, and our action is defective if it doesn't reach that goal. The difference is that Bob expects his to be defective while I expect mine to be successful (at least in respect of posting—whether it will be successful in respect of philosophical progress is a different question).

There is yet a third kind of case, that of "stretch goals". Suppose Sally buys a lottery ticket in order to support the government activities that the lottery funds, while at the same time still hoping to win (perhaps she plans to donate any winnings to the state, and thereby support the same government activities even more). If Sally wins, again that's not an unintended side-effect of the Double Effect sort. Winning is indeed something she aims at, something she has her heart set on. But it's a stretch goal: if she doesn't accomplish it, her action need not be a failure in any way. It is even more awkward to say that Sally intends to win, or that winning is part of her action plan, than it is to say these things about Bob.

Both Bob and Sally are trying to win, but neither is intending to win. The difference between them is that if Bob doesn't win, his action fails, but if Sally doesn't win, her action doesn't need to fail in any way.

All this means that the traditional formulation of the Principle of Double Effect in terms of effects that are intended and effects that are not is incomplete.

I think we do a bit better, then, to formulate Double Effect not in terms of what one is intending, but in terms of what one is trying to do. The classical formulation tells us something like this:

  1. An action expected to have an evil effect can be permissible when and only when one is intending a proportionate good and one does not intend the evil effect (either as a means or as an end).
Of course, when the action is expected to have an effect, the distinction between what one intends and what one tries for disappears. But we should extend the principle:
  2. An action that has a chance of an evil effect can be permissible when and only when one is trying for a proportionate good and one is not trying for the evil effect (either as a means or as an end).

A bonus of (2) is that while some have claimed that merely instrumental goals are not intended, thereby destroying the distinction that Double Effect is about, it is obvious that an agent is trying to make these goals happen. Whatever we say about what the terror bomber is intending, it's clear that he's trying to kill innocent people.

I also think that talking in terms of trying instead of intending has the benefit of further de-psychologizing the notion and avoiding the inner-speech objection to Double Effect (which says that one ends up justifying actions simply by thinking about them differently). It is even more obvious that the moral worth of an action depends on what one was trying to do than that it depends on what one was intending.

Now my own preferred reformulation of Double Effect is even more radical than (2): it replaces intention with accomplishment. I think (2) is a step along the path to that reformulation, since trying is more intimately linked to accomplishments than intending is (pace what I say about intention in that paper). If something is an accomplishment of mine, I tried to bring it about under some description. But I needn't have intended it under any description, as the cases of Bob and Sally show.