Thursday, February 26, 2015

Counting up arguments

Some time in the fall, Ted Poston asked me how I thought one should model the force of multiple arguments for the existence of God in a Bayesian setting. There are difficulties. For instance, when we discover a valid argument, what we are discovering is the necessary truth that if the premises are true, the consequent is as well. But necessary truths should have probability one. And it's hard to model learning things that have probability one. Moreover, the premises of the arguments are typically not something we are sure of. At the time, I suggested that we conditionalize on the contingent propositions reporting that the premises seem true. Poston ended up going with an urn model instead.

I want to try out another model for counting up the force of multiple arguments, one where we need not worry about what is and what isn't necessary. I will develop the story with a toy model whose prior probabilities make calculation easy, leaving it for future investigation to weaken my assumptions of prior independence and equiprobability.

So, suppose we're looking at decent (say: valid, non-question-begging) arguments for and against a conclusion q, and we find that there are m arguments for and n arguments against. How likely is q given this? Start the model by identifying in each argument the controversial premise. (If there is more than one, conjoin them.) Thus, we now have m+n controversial premises. Let's say that premises p_1,...,p_m support arguments for q and p_{m+1},...,p_{m+n} support arguments for ~q.

Prior to the discovery of the arguments, in my model I will take the propositions p_1,...,p_m,p_{m+1},...,p_{m+n},q to be all independent, and, further, to each have probability 1/2.

I now model the discovery of the arguments as a discovery of material conditionals. Thus, we discover the m material conditionals p_1→q,...,p_m→q that favor q and the n material conditionals p_{m+1}→~q,...,p_{m+n}→~q that favor ~q. How do we model this discovery? We simply ignore all the messy details, such as the fact that the discoveries were at least in part a matter of discovering logical connections (though perhaps only in part: some of the premises besides the controversial premise might have been empirical). We simply conditionalize on the m+n discovered material conditionals.

What's the result? Well, we could use Bayes' Theorem, but that's just a tool for computing conditional probabilities, and sometimes other methods work better. We have m+n+1 "propositions of interest" (i.e., q and the p_i). Our prior probabilities assign equal chances to each of the 2^(m+n+1) possible ways of assigning True or False to the propositions of interest. When we conditionalize on the material conditionals we rule out some combinations. For instance, if we assign True to p_1, we had better assign True to q as well, and we had better assign False to p_{m+1},...,p_{m+n}, all on pain of contradiction.

We can say something about how many truth assignments remain after the conditionalizations:

  1. Assign False to all the p_i and False to q: one combination.
  2. Assign False to all the p_i and True to q: one combination.
  3. Assign False to p_1,...,p_m, True to at least one of p_{m+1},...,p_{m+n}, and False to q: 2^n−1 combinations.
  4. Assign True to at least one of p_1,...,p_m, False to all of p_{m+1},...,p_{m+n}, and True to q: 2^m−1 combinations.
All the combinations remain equally likely after conditionalizations. The final probability of q then is just the proportion of the combinations where q comes out true amongst all four types of combinations. Now, q comes out true in combinations of types (2) and (4). Thus, the final probability of q given the discovery D of the arguments is:
  • P(q|D)=2^m/(2^n+2^m).
Try some numbers. Say that we have 3 arguments in favor and 1 against. Then P(q|D)=2^3/(2^3+2^1)=8/10=0.8. With 4 arguments in favor and 1 against, P(q|D)=16/18≈0.89 (for this, think of the theism-atheism debate as pitting the cosmological, design, religious experience and moral arguments against the argument from evil). If we have 10 arguments in favor and 2 against, then P(q|D)=1024/(1024+4)≈0.996.
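The counting argument can be checked mechanically. Below is a minimal brute-force sketch (the function name `posterior` is mine, not from the post): it enumerates all 2^(m+n+1) truth assignments to q and the premises, discards those inconsistent with the discovered material conditionals, and returns the proportion of surviving assignments on which q is true.

```python
from itertools import product

def posterior(m, n):
    """P(q | D) in the toy model: enumerate every truth assignment to q and
    the m+n controversial premises, conditionalize on the m+n discovered
    material conditionals, and count the surviving combinations."""
    kept = kept_q = 0
    for bits in product([False, True], repeat=m + n + 1):
        q, pros, cons = bits[0], bits[1:1 + m], bits[1 + m:]
        if any(p and not q for p in pros):  # violates p_i -> q
            continue
        if any(p and q for p in cons):      # violates p_j -> ~q
            continue
        kept += 1
        kept_q += q
    return kept_q / kept

print(posterior(3, 1))   # 0.8
print(posterior(4, 1))   # ≈ 0.889
print(posterior(10, 2))  # ≈ 0.996
```

This agrees with the closed form 2^m/(2^n+2^m) derived above.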

For a more realistic model, we will need to change our priors for the controversial premises so that they aren't all 1/2. Some of the controversial premises of the arguments will be fairly plausible and they will have priors higher than 1/2. Some may not be all that plausible and will have priors lower than 1/2. And maybe the conclusion q will have a prior other than 1/2. Furthermore, there may be mutual dependencies among the controversial premises over and beyond the dependencies induced by the fact that some of them imply q and others imply ~q (the latter dependencies are handled by our conditioning). All of this would require fiddling with the priors, and the simple "counting combinations" method of calculating the posterior P(q|D) will need to be replaced by a more careful calculation. Nonetheless, the principle will be the same.
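The "more careful calculation" that unequal priors would require can still be done by direct enumeration, just with weighted rather than merely counted combinations. The sketch below keeps the prior-independence assumption but lets each premise and the conclusion have an arbitrary prior; the function name and the particular priors are my own illustrative choices, not from the post.

```python
from itertools import product

def posterior_weighted(pro_priors, con_priors, q_prior=0.5):
    """P(q | D) with arbitrary priors for the controversial premises
    (still assumed independent prior to discovering the arguments)."""
    m, n = len(pro_priors), len(con_priors)
    num = den = 0.0
    for bits in product([False, True], repeat=m + n + 1):
        q, pros, cons = bits[0], bits[1:1 + m], bits[1 + m:]
        if any(p and not q for p in pros):  # violates p_i -> q
            continue
        if any(p and q for p in cons):      # violates p_j -> ~q
            continue
        w = q_prior if q else 1 - q_prior   # prior weight of this combination
        for p, pr in zip(pros, pro_priors):
            w *= pr if p else 1 - pr
        for p, pr in zip(cons, con_priors):
            w *= pr if p else 1 - pr
        den += w
        if q:
            num += w
    return num / den

# With all priors 1/2 this recovers the counting result 2^m/(2^n+2^m):
print(posterior_weighted([0.5] * 3, [0.5]))        # 0.8
# A more plausible pro-premise and a less plausible con-premise raise P(q|D):
print(posterior_weighted([0.7, 0.5, 0.5], [0.3]))  # ≈ 0.90
```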

Tuesday, February 24, 2015

Mathematics and the actual infinite

  1. If mathematical realism is true, there is an actual infinite.
  2. The best alternatives to mathematical realism require the possibility of an actual infinite.
  3. So, probably, an actual infinite is possible.
What needs justifying is (2). Here, I say that the best alternative to mathematical realism is either fictionalism or some version of structuralism. Structuralism says that mathematics describes possible structures. But if there cannot be an actual infinite, then there is no possible structure that is described by arithmetic. On the other hand, fictionalism is very problematic when it is impossible for anything like the fictional story to be true.

Monday, February 23, 2015

Essential properties and self-dependence

My being seated now is causally explained by my having sat down. Suppose that my being seated now is an essential property of me. Then, had I not sat down, I wouldn't have existed. So my very existence would have counterfactually depended on my own causal activity. But that would absurdly make me be something too much like a causa sui.

This seems to generalize into an argument for a general principle:

  1. No property that counterfactually depends on an entity's own non-essential causal activity can be essential to that entity.
I needed to specify that the relevant causal activity is non-essential, because there is nothing deeply absurd about an entity's having essential properties that follow from causal activity that is itself essential to the entity.

But the form of argument that I used to get to (1) gives other results. For instance, a leading theory of personal identity holds that in cases where symmetric fission occurs—apparently, a person splitting into two—there were already two co-located people there before the fission. But why are there two people there? Presumably precisely because fissioning occurred—otherwise, there would have been only one.[note 1] Thus the existence of the two people is explained by the fission. But surely if fission is possible, it's possible that it be triggered by the non-essential action of the fissioning individual or individuals. In that case, then, the existence of the two people is explained in part by their very own activity. So this account of fission leads to something like self-dependence and should be rejected.

There may, however, be an objection to the argument. Suppose that the causal self-dependence is not fundamental. Instead, more fundamentally, we have a case where the whole depends on the contingent causal activity of a part. For instance, we may think that the length of an event is an essential property of an event. But the length of an event frequently depends on the contingent causal activity of a part of the event. (Thus, the length of World War II depends on the effects of D-Day, even though D-Day is a part of World War II.) In some such cases we might say that the whole depends on its own causal activity, since the causal activity of the part can sometimes be attributed to the whole.

I am not convinced. I think that in cases like this, it is incorrect to say that the whole depends on the whole's causal activity, but that it depends on the part's causal activity, and in this case the part's causal activity is not in fact correctly attributed to the whole.

In any case, the part-based objection will only apply in cases where the whole's causal activity is derivative from the part's causal activity. I think free choices are not derivative from the causal activity of anything other than the person as a whole, so in the fission case it's still impossible that whether there are two persons should depend on their choices. But this is rather controversial.

If there are cases where the whole depends on its own causal activity because its causal activity is derivative from a part's causal activity, then (1) needs to be qualified to apply to fundamental entities.

Friday, February 20, 2015

A cardinality objection to unrestricted modal profiles

The modal profile of an object tells us which worlds the object exists in and what it consists of in those worlds.

The unrestricted modal profiles (UMP) thesis says that for any map f that assigns to some worlds w a concrete object f(w) in w and that assigns nothing to other worlds, there is a possible concrete object Of such that Of exists in all and only the worlds w to which f assigns an object and has the property that in w, Of is wholly composed of f(w) (or of parts of f(w) that compose f(w)).

One can think of UMP as the next step after unrestricted composition (UC) which holds that for any concrete objects there is an object composed of them. UC is not enough to guarantee the existence of ordinary objects like tables and chairs, since there is no guarantee that the modal profile of a UC-guaranteed object composed of the particles in a table will match the modal profile of a table. But UC+UMP, plus a thesis about the physical world being made of temporal parts of particles, will give us what we need here.

However, UMP is false.

  1. There is a set of all actual concrete objects.
  2. If UMP is true, then for any cardinality K, there are at least K actual concrete objects.
  3. So, if UMP is true, there is no set of all actual concrete objects. (By 2)
  4. So, UMP is not true. (By 1 and 3)
Now, claim (1) is very plausible. Claim (3) follows from (2) since for any set there is a greater cardinality by Cantor's Theorem and so it's impossible to have a set whose cardinality is at least as large as every cardinality.

That leaves (2). Assume UMP. Let K be any infinite cardinality (we don't need to worry about the finite case). There is a possible world w with at least K concrete objects (say, K photons). Let x be any concrete object in the actual world @. Then there will be at least K maps f with the property that f(@)=x and f(w) is a concrete object in w and f assigns nothing to any other world. To each such map f there corresponds at least one distinct object Of in the actual world. (They are distinct, since a difference in modal profiles implies non-identity of objects by Leibniz's Law.) So there are at least K concrete objects in the actual world.

Wednesday, February 18, 2015

A fallacy of probabilistic reasoning with an application to sceptical theism

Consider this line of reasoning:

  1. Given my evidence, I should do A rather than B.
  2. So, given my evidence, it is likely that A will be better than B.
This line of reasoning is simply fallacious. Decisions in many contexts where deontological-like concerns are not relevant are appropriately made on the basis of expected utilities. But the following inference is fallacious:
  3. The expected utility of A is higher than that of B.
  4. So, probably, A has higher utility than B.
In fact it may not even be possible to make sense of (4). For instance, suppose I am choosing between playing one of two indeterministic games that won't be played without me. I must play exactly one of the two. Game A pays a million dollars if I win, and the chance of winning is 1/1000. Game B pays a hundred dollars, and the chance of winning is still 1/1000. Obviously, I should play game A, since the expected utility is much higher. But unless something like Molinism is true, if I choose A, there is no fact of the matter as to how B would have gone, and if I choose B, there is no fact of the matter as to how A would have gone. So there is no fact of the matter as to whether A or B would have higher utility.

But even when there is a fact of the matter, the inference from (3) to (4) is fallacious, due to simple cases. Suppose that a die has been rolled but I haven't seen the result. I can choose to play game A which pays $1000 if the die shows 1 and nothing otherwise, or I have option B which is just to get a dollar no matter what. Then the expected utility of A is about $167 (think 1000/6) and the expected utility of B is exactly $1. However, there is a 5/6 chance that B has higher utility.
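A quick seeded simulation of the die case makes the gap between the two quantities vivid (a sketch; the payoffs are as in the example above):

```python
import random

random.seed(0)
N = 100_000
total_a = b_better = 0
for _ in range(N):
    roll = random.randint(1, 6)
    a = 1000 if roll == 1 else 0  # game A: $1000 on a 1, else nothing
    total_a += a
    b_better += a < 1             # game B's sure dollar beats A on rolls 2-6

print(total_a / N)   # ≈ 166.7: EU(A) far exceeds EU(B) = 1...
print(b_better / N)  # ≈ 0.833: ...yet B usually turns out better
```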

The lesson here is that our decisions are made on the basis of expected utilities rather than on the basis of the probability that one option will turn out better than the other.

Now the application. One objection to some resolutions to the problem of evil, notably sceptical theism, is this line of thought:

  5. We are obligated to prevent evil E.
  6. So, probably, evil E is not outweighed by goods.
But this is just a version of the expectation-probability fallacy above. Bracketing deontological concerns, what is relevant to evaluating claim (5) is not so much the probability that evil E is or is not outweighed by goods, but the expected utility of E or, more precisely, the expected utilities of respectively preventing or not preventing E. On the other hand, what is relevant to (6) is precisely the probability that E is outweighed.

One might worry that the case of responses to the problem of evil isn't going to look anything like the cases that provide counterexamples to the expectation-probability fallacy. In other words, even though the expectation-probability fallacy is a fallacy in most cases, it isn't fallacious in the case of (5) and (6). But it's possible to provide a counterexample to the fallacy that is quite close to the sceptical theism case.

At this point the post turns a little more technical, and I won't be offended if you stop reading. Imagine that a quarter has been tossed a thousand times and so has a dime. There is now a game. You choose which coin counts—the quarter or the dime—and then sequentially over the next thousand days you get a dollar for each heads toss and pay a dollar for each tails toss. Moreover, it is revealed to you that the first time the quarter was tossed it landed heads, while the first time the dime was tossed it landed tails.

It is clear that you should choose to base the game on the tosses of the quarter. For the expected utility of the first toss in this game is $1, and the expected utility of each subsequent toss is $0, for a total expected utility of one dollar, whereas the expected utility of the first toss in the dime-based game is $(-1), and the subsequent tosses have zero expected utility, so the expected utility is negative one dollar.

On the other hand, the probability that the quarter game is better than the dime game is insignificantly higher than 1/2. (We could use the binomial distribution to say just how much higher than 1/2 it is.) The reason for that is that the 999 subsequent tosses are very likely to swamp the result from the first toss.
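Here is one way the binomial calculation mentioned above might go (a sketch, assuming fair coins; the revealed first tosses fix a head start of +1 for the quarter and −1 for the dime):

```python
from math import comb

TOSSES = 999  # tosses of each coin after the revealed first toss

# Quarter total = +1 + (2*Hq - TOSSES) and dime total = -1 + (2*Hd - TOSSES),
# where Hq, Hd ~ Binomial(TOSSES, 1/2). So the quarter game pays strictly
# more iff Hq >= Hd, and by symmetry P(Hq >= Hd) = 1/2 + P(Hq == Hd)/2.
p_equal = comb(2 * TOSSES, TOSSES) / 4 ** TOSSES  # P(Hq == Hd), by Vandermonde
p_quarter_better = 0.5 + p_equal / 2

print(p_quarter_better)  # ≈ 0.509: only insignificantly above 1/2
```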

Suppose now that you observe Godot choosing to play the dime game. Do you have significant evidence against the hypothesis that Godot is an omniscient self-interested agent? No. For if Godot is an omniscient self-interested agent, he will know how all the 1000 tosses of each coin went, and there is a probability insignificantly short of 1/2 that they went in such a way that the dime game pays better.

Mysterious thy ways

Imagine an ordinary decent person who is omniscient. Her actions are going to be rather different from what we expect. She would take what would to us be big risks for the sake of small gains, simply because for her there is no risk at all. Her stock portfolio is apt to be undiversified and quite strange. If we live in a chaotic world, then she might from time to time be doing some really odd things, like hopping on one leg in order to prevent an earthquake a thousand years hence. There would be bad things she would refrain from preventing because she saw further than we do into the consequences, and good things she would avoid for similar reasons.

Now add to this that the person is omnipotent. And morally perfect. These additions would presumably only make the person stranger to us in behavior.

Tuesday, February 17, 2015

The mystical security guard

One objection to some solutions to the problem of evil, particularly to sceptical theism, is that if there are such great goods that flow from evils, then we shouldn't prevent evils. But consider the following parable.

I am an air traffic controller and I see two airplanes that will collide unless they are warned. I also see our odd security guard, Jane, standing around and looking at my instruments. Jane is super-smart and very knowledgeable, to the point that I've concluded long ago that she is in fact all-knowing. A number of interactions have driven me to concede that she is morally perfect. Finally, she is armed and muscular so she can take over the air traffic control station on a moment's notice.

Now suppose that I reason as follows:

  • If I don't do anything, then either Jane will step in, take over the controls and prevent the crash, or she won't. If she does, all is well. If she doesn't, that'll be because in her wisdom she sees that the crash works out for the better in the long run. So, either way, I don't have good reason to prevent the crash.
This is fallacious as it assumes that Jane is thinking of only one factor, the crash and its consequences. But the mystical security guard, being morally perfect, is also thinking of me. Here are three relevant factors:
  • C: the value of the crash
  • J: the value of my doing my job
  • p: the probability that I will warn the pilots if Jane doesn't step in.
Here, J>0. If Jane foresees that the crash will lead to on balance goods in the long run, then C>0; if common sense is right, then C<0. Based on these three factors, Jane may be calculating as follows:
  • Expected value of non-intervention: pJ+(1−p)C
  • Expected value of intervention: 0 (no crash and I don't do my job).
Let's suppose that common sense is right and C<0. Will Jane intervene? Not necessarily. If p is sufficiently close to 1, then pJ+(1−p)C>0 even if C is a very large negative number. So I cannot infer that if C<0, or even if C<<0, then Jane will intervene. She might just have a lot of confidence in me.
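The point about p being close to 1 is easy to check numerically. In this sketch the values J = 10 and C = −1000 are illustrative assumptions, not from the post:

```python
def nonintervention_value(p, J, C):
    """Jane's expected value of not intervening: with probability p the
    controller does his job (value J); otherwise the crash occurs (value C)."""
    return p * J + (1 - p) * C

# Even with a hugely negative C, high enough confidence in the controller
# makes non-intervention come out positive:
for p in (0.9, 0.99, 0.999):
    print(p, nonintervention_value(p, 10, -1000))
```

So at p = 0.999 the expected value of non-intervention is positive (about 9) despite C = −1000, while at p = 0.9 it is strongly negative.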

Suppose now that I don't warn the pilots, and Jane doesn't either, and so there is a crash. Can I conclude that I did the right thing? After all, Jane did the right thing—she is morally perfect—and I did the same thing as Jane, so surely I did the right thing. Not so. For Jane's decision not to intervene may be based on the fact that her intervention would prevent me from doing my job, while my own intervention would do no such thing.

Can I conclude that I was mistaken in thinking Jane to be as smart, as powerful or as good as I thought she was? Not necessarily. We live in a chaotic world. If a butterfly's wings can lead to an earthquake a thousand years down the road, think what an airplane crash could do! And Jane would take that sort of thing into account. One possibility was that Jane saw that it was on balance better for the crash to happen, i.e., C>0. But another possibility is that she saw that C<0, but that it wasn't so negative as to make pJ+(1−p)C come out negative.

Objection: If Jane really is all-knowing, her decision whether to intervene will be based not on probabilities but on certainties. She will know for sure whether I will warn the pilots or not.

Response: This is complicated, but what would be required to circumvent the need for probabilistic reasoning would be not mere knowledge of the future, but knowledge of conditionals of free will that say what I would freely do if she did not intervene. And even an all-knowing being wouldn't know those, because there aren't any true non-trivial such conditionals.

Monday, February 16, 2015

Graduality, rights and dualism

  1. If dualism is false, the emergence of a human person is a gradual process.
  2. If the emergence of a human person is a gradual process, the coming into existence of all human rights is a gradual process.
  3. If the coming into existence of a human right is a gradual process, then that right comes in a continuum of degrees from zero to fullness.
  4. All human rights come into existence.
  5. There are human rights that do not come in a continuum of degrees from zero to fullness.
  6. So, dualism is true.
Here, I think of a right as a source (ground?) of moral restrictions on others' activities. The coming into existence of a right is the coming into existence of a source of restrictions. The intuition behind (2) is that as a human person emerges (either as a new entity, or as a human non-person comes to be a person, depending on the particulars of the view), the source of the rights comes into being in lock-step.

The best way to argue for (5) is by way of example. For instance, the right not to be killed solely for the convenience of others is not something that comes in degrees from zero to fullness.

Friday, February 13, 2015

Grounding overdetermination

Consider:

  1. The sky is blue or snow is white.
This is grounded by:
  2. The sky is blue.
But it's also grounded by:
  3. Snow is white.
This is a case of grounding overdetermination. I was tempted to characterize this as follows:
  4. p is grounding overdetermined iff p is grounded by q and grounded by r, and q≠r, and neither q grounds r nor r grounds q.
The proviso at the end ensures that grounding chains do not count as cases of overdetermination. But (4) isn't good enough. Consider this:
  5. The sky is blue or the sky has a color.
This seems to be grounding overdetermined by:
  6. The sky is blue.
  7. The sky has a color.
But (6) grounds (7), so the proviso in (4) kicks in. Oops!

This odd phenomenon is related to a deficiency of the causal analogue of (4):

  8. E is causally overdetermined iff there are C1 and C2, with C1≠C2, each of which is a full cause of E and neither of which is a full cause of the other.
But counterexamples akin to (5) are easy to manufacture. Boff is a trainee exterminator and Biff is his boss. Biff tells Boff to imitate everything he does. Biff pours a lethal dose of crocodile poison into the customer's pond. Boff imitates him and pours another lethal dose. While Biff poured first, Boff poured closer to the crocodile, and as a result both doses arrived at the crocodile simultaneously, each sufficient to kill it. The crocodile's death was overdetermined by the pouring of the two doses, even though one pouring caused the other.

We could try this:

  9. p is directly grounding overdetermined iff p is directly grounded by q and directly grounded by r and q≠r.
And then we could say that:
  10. p is grounding overdetermined iff it is either directly grounding overdetermined or it is grounded in something directly grounding overdetermined.
(And there is an obvious analogue for causation.) But this is only plausible if grounding is transitive. Maybe what we need to add to (10) is something like "in a way that doesn't generate a relevant failure of the transitivity of grounding"?

Another difficulty with (9) is that the notion of direct grounding is not so easy to define. The tempting definition of direct grounding is, after all:

  11. p is directly grounded by q iff p is grounded by q and there is no r such that p is grounded by r and r is grounded by q.
But cases of overdetermination like (5) are counterexamples to (11), since (5) is directly grounded by its first disjunct, even though the first disjunct grounds the second and the second grounds all of (5). So one wants to say:
  12. p is directly grounded by q iff p is grounded by q and there is no r such that p is grounded by r and r is grounded by q and this is not a case of grounding overdetermination.
But of course then we have circularity.

All this suggests to me that we need a notion not reducible to grounding to make the above distinctions. We might, for instance, take direct grounding to be primitive. Or overdetermination. Or maybe we could use some version of my grounding graph approach.

Modeling space

The obvious model of a Newtonian space is as the set of all triples (x,y,z) of real-numbered coordinates. But the model does not have the isotropy that Newtonian space does. It has privileged directions, such as the x-axis, the y-axis and the z-axis. It has privileged coordinates such as (0,0,0). Of course, physical models generally do have properties that aren't found in what is modeled. If I build a model of the solar system out of fruit, the fact that some of the fruit is sweeter need not model any property of the solar system. If I make a model of an ethanol molecule out of sticks and balls, the balls that represent hydrogen atoms differ in their exact mass, and exhibit scratches, in a way that the hydrogen atoms do not.

Nonetheless, even though this is common to all modeling, there really is something a little unsatisfying when the mathematical model does this. Typically when we mathematically model something, we have to abstract or forget on both sides. On the side of what is modeled, the side of the world, we ignore aspects of the physical structure because otherwise things get too complicated. On the side of the model, we ignore aspects of the mathematical structure because they don't, as far as we know, correspond to anything in the physics. Wouldn't it be nice if we could abstract only on one side, that of the world? But some things that would be nice are not an option.

The above remarks do, I think, make Pythagoreanism less plausible. There seems to be structure in the mathematics that models the world that isn't found in the world. This makes it implausible that the world just is composed of the mathematics.

Thursday, February 12, 2015

Properties of the model and the modeled

My apologies for yet another technical post that's just notes-to-self.

Quantum Mechanics models the world using a Hilbert space. I wonder what we can say about just how much of the structure of the model is meant to be found in what is modeled. In contemporary mathematics, I guess ultimately any Hilbert space will be a very complex construction out of the empty set. Yet it seems absurd to think that the low-level details of the set-theoretic implementation (say, different ways of constructing the natural numbers out of the empty set) would reflect differences in the world. There are way too many ways to implement these details.

But there will also be differences at higher levels. For instance, there will be cases where the Hilbert space is L^2(X), "the space of square-integrable functions" on some set X. I put that in scare quotes, because that's not what L^2 is, despite often being described so. Rather, it's the space of equivalence classes of square-integrable functions, where two functions are equivalent provided that the set of points where they differ has measure zero. So now we have a question about the model and the modeled. You could think that different members of an equivalence class correspond to different empirically indistinguishable physical states, and the physics simply makes no prediction as to which of the indistinguishable states is exemplified when. Or you could think that each equivalence class corresponds to a single possible physical state. The latter makes for a theory that is simpler and yet seems to give less understanding. It is simpler because it doesn't posit unexplained differences between states. But it seems to give less understanding, because it means that the wavefunction can no longer be seen as an assignment of values to different points in phase space, but rather a more mysterious kind of entity—one modeled as an equivalence class of such assignments.

There may be a third option: There is a privileged member of each equivalence class, and only the privileged member can be physically actualized. This would give us the best of both worlds. We would have a field over phase-space, and no extra indistinguishable physical possibilities. The lack of linear liftings on L^2[0,1] makes it a bit harder to realize this hope than one might have wished, but maybe there is still some hope.

Epiphenomenalism and the problem of animal pain

Suppose the following epiphenomenalist thesis is true, at least for non-human animals: qualia do not affect behavior. It's interesting that if this is right, then the argument for atheism from animal pain is seriously weakened. The argument from animal pain contends that God would have reason to prevent many instances of animal pain that he does not in fact prevent. However, we have good reason to think that God's interventions would be targeted and hence minimal. Now a minimal intervention for the prevention of pain is simply to suppress the quale of pain. Given epiphenomenalism, however, suppressing a quale of pain does not affect either behavior or neural state. So if God thus intervened, things wouldn't look any different. And hence the atheist cannot non-circularly deny that God did intervene to prevent the pain.

Of course, this might be taken to be yet another reason to deny epiphenomenalism.

Wednesday, February 11, 2015

The argument from partial theodicy

The following would be a superb teleological argument for the existence of God if only we had good reason to accept (1) without relying on theism:

  1. Every evil has a theodicy.
  2. If every evil has a theodicy, then probably God exists.
  3. So, probably God exists.
I can think of two (perhaps not ultimately different) ways of making (2) plausible. First, the best explanation of (1) would be that God exists. Second, that an evil has a theodicy means that it's the sort of thing that God would have a reason to permit if God existed. But it would be very odd if all evils had this hypothetical God-involving property without God existing. It would be a cosmic coincidence.

But as I said, (1) is the rub. However, what about this version:

  4. Most evils happening to humans have a theodicy.
  5. If most evils happening to humans have a theodicy, then probably God exists.
  6. So, probably, God exists.
And while we're at it, let's add:
  7. If God exists, all evils have a theodicy.
  8. So, probably, all evils have a theodicy.
Premise (5) is harder to justify than (2), but I think the reasoning behind (2) still contributes to the plausibility of (5). The best alternative to theism is a form of naturalism, and we just wouldn't expect most evils, or even most evils happening to people, to have a theodicy on naturalism, so our best explanation for why most such evils have a theodicy is that God exists.

I want to say something about why I am restricting (4) and the antecedent of (5) to evils happening to humans. The reason is that we have much better epistemic access to evils happening to humans, and so we are better able to judge of both the magnitude of the evils and the theodicies and lack thereof.

And (4) is much easier to justify than (1). All we need is enough partial theodicies. Plausibly, for instance, many evils—perhaps it's already most evils—are moral evils that are sufficiently non-horrendous that a free will theodicy directly applies to them. Many evils have a good theodicy in terms of the exercise of virtue they enable. And when I reflect on the evils that have befallen me in my life, it's easy to see that I deserve punishment for them all by my sins, and would have deserved a lot more than I got. Granted, I've lived a charmed life, so the applicability of this will be limited. But between freedom, virtue and punishment, it is plausible that the majority of evils happening to people have been covered.

A somewhat different argumentative route is:

  9. Most evils happening to humans have a theodicy.
  10. The best explanation of (9) is that all evils have a theodicy.
  11. So, probably, all evils have a theodicy.
  12. If all evils have a theodicy, then probably God exists.
  13. So, at least somewhat probably, God exists.

Finally, there will be first-person versions that make use of a premise like:

  14. Every evil (or: most evils) that happened to me has a theodicy.

Tuesday, February 10, 2015

From unrestricted composition to unrestricted caninity

According to unrestricted composition (UC), for any plurality of things there is a whole that is exactly composed of them. Sider offers a continuity argument for UC. Here's a vivid formulation. Let the Ps be the particles in the even-numbered books on one of my bookshelves. If UC is false, then in the actual world the Ps will be a paradigm case of things that don't compose a whole. But there is a world where the Ps compose a dog. And between these two worlds there is a continuous sequence of worlds where the Ps gradually migrate from their every-second-book positioning to their canine positioning. It is absurd to think that somewhere in this continuous sequence the particles suddenly come to compose something. So, Sider concludes, they compose something all along, even in the actual world.

But to a hylomorphist, the argument as I've put it simply fails. There is no world where the Ps compose a dog, since a dog—or any other complex entity—is not composed of matter, but of matter and form. The argument can, however, be reformulated. Say that the Ps materially compose an F provided that the Ps are material and together with some form compose an F. Then the argument gets off the ground. In the actual world, the Ps do not materially compose anything while in the final world they materially compose something. Where along the line do they come to materially compose something?

Now, however, the story is underdescribed. For we have failed to say in which worlds in the sequence there is a substantial form of the dog informing the Ps. Facts about substantial forms should not be assumed to supervene on facts about the arrangement of the particles. There could be zombie dogs that are nothing but heaps of particles looking like a dog. In other words, it's a contingent matter whether a certain kind of arrangement of particles materially composes something—if there is a form informing them, then they compose and if not, not.

Of course, there is a question of explanation: Why is there no form informing the Ps in the actual world but there is one in the non-zombie dog worlds? But the answers aren't particularly troublesome. Maybe the laws of nature explain that. Maybe God just decides when to create forms and make them inform particles.

However, there is a final move that Sider can make. Instead of asking in which worlds the Ps (materially) compose something, he could ask which arrangements of particles are such that something could be materially composed of the particles in that arrangement. Of course the dog-like arrangement is like that. And the even-numbered-book arrangement is not. So where is the transition in the continuous deformation of the even-numbered-book arrangement into the dog-like arrangement?

This is an interesting question for the hylomorphist. It is closely related to the question of what forms there could be (cf. the discussion here and in the Murphy book referenced in the comments there). The hylomorphist could take an unrestricted view. There is a sufficiently wide variety of possible forms and defects that any possible arrangement of matter is compatible with being informed by some form—perhaps defectively. There could be a possible world where something looking just like our even-numbered-book arrangement is a highly defective plant (it doesn't grow or reproduce).

Nonetheless, there is a remaining problem. While the even-numbered-book arrangement may be apt for materially composing a defective plant, it's surely inapt for materially composing a dog. So there will seem to be a discontinuous transition between those arrangements that can and those that cannot materially compose a dog. One answer here is that "dog" is vague. This doesn't fit with traditional Aristotelian views, though, on which all dogs have an exactly similar form, and so one could meaningfully ask about the range of arrangements that could be informed by a form that's exactly like that. But perhaps the Aristotelian can yield some ground here. Another answer would be unrestricted canine composition: any material arrangement could materially compose a dog, albeit a highly defective one. I am somewhat drawn to this strange view. Yet is it that strange? I think I can imagine a dog continuously deforming into the even-numbered-book arrangement but where rather than dying the dog comes to be more and more defective. I am dualist enough that I can even imagine the dog being conscious throughout the process.

Monday, February 9, 2015

Guessing strategies and causal finitism

Suppose that during an infinite past a fair die was rolled every day, and that this game will end in a year. You know all the outcomes of the past rolls. Before each roll, you are asked whether you think the roll will come up six. If you answer correctly, you get a dollar. Otherwise, you lose a dollar.

There is an obvious strategy: Always guess "No." Then out of six rolls, on average, you will win five times and lose once, so you will on average make about 67 cents per roll. Here's a very reasonable claim:

  1. Guessing "no" is the optimal strategy for an agent that does not have foreknowledge of the future.
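The 67-cents figure is easy to check: guessing "no" wins $1 with probability 5/6 and loses $1 with probability 1/6, for an expected payoff of 4/6 of a dollar per roll. Here is a minimal Python sanity check of that arithmetic (the function name and payoff encoding are mine, not from the post):

```python
import random

def guess_no_payoff(num_rolls, rng):
    # Always guess "not a six": win $1 when the roll is 1-5, lose $1 on a 6.
    total = 0
    for _ in range(num_rolls):
        roll = rng.randint(1, 6)
        total += 1 if roll != 6 else -1
    return total / num_rolls

rng = random.Random(0)
avg = guess_no_payoff(200_000, rng)
print(avg)  # close to 4/6, i.e. about 0.67 dollars per roll
```

With 200,000 simulated rolls, the sample average reliably lands within a cent or two of 2/3.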

But it turns out that, given the Axiom of Choice, there is a strategy that beats this: a strategy that guarantees that you win infinitely often and lose at most finitely often, and hence gives you a long-run average of a dollar per roll, rather than the measly 67 cents of the strategy above. The strategy is a variant of the solution to the fourth hat puzzle here. For the technically minded reader, I'll sketch the strategy below.

But (1) is obviously true: it's clear that whenever you are asked to guess, you should say "no", and surely that's the best policy. So (1) is both true and false on the above assumptions (including the assumptions needed to make the alternate strategy go). Hence I think we should reject the possibility of knowing the outcomes of a backwards-infinite sequence of die rolls. And the best way to do that is to embrace causal finitism: to deny that anything (say, your current knowledge) can depend on infinitely many events.

For the technically minded reader, here's the strategy. Consider the set of all backwards-infinite sequences of die rolls. Say two sequences are equivalent if they differ in only finitely many places. For any equivalence class E of sequences, choose a member f(E) (by the Axiom of Choice). Now whenever you're asked to make a guess, you already know all but finitely many of the items in the actual world's sequence of rolls. So you know which equivalence class E the actual world's sequence will fall into. So you guess according to f(E). And since the actual world's sequence differs from f(E) in only finitely many places, you're right all but finitely often.
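The choice function f itself cannot be computed, but the mechanism of the strategy can be illustrated in a finite toy model. The Python sketch below (all names are mine) simply assumes the representative f(E) is already in hand as a long list of rolls, and checks that guessing "six" exactly when f(E) predicts a six loses only at the finitely many places where the actual sequence deviates from the representative:

```python
import random

def play(actual, representative):
    # Guess "six" exactly when the chosen representative f(E) predicts a six;
    # a guess is correct iff it agrees with the actual roll on six-ness.
    losses = sum(1 for a, r in zip(actual, representative) if (a == 6) != (r == 6))
    return len(actual) - losses, losses

rng = random.Random(1)
N = 10_000
representative = [rng.randint(1, 6) for _ in range(N)]  # stands in for f(E)
actual = list(representative)
for i in rng.sample(range(N), 5):  # actual differs from f(E) in finitely many places
    actual[i] = 1 if actual[i] == 6 else 6  # flip the six-ness at that spot
wins, losses = play(actual, representative)
print(wins, losses)  # 9995 wins, 5 losses
```

However long the sequence grows, the number of losses stays bounded by the (finite) number of deviations from the representative, which is the engine of the paradox.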