Monday, April 30, 2018

Avoiding double counting of culpabilities

Here’s an interesting double-counting problem for wrongdoing. Alice stands to inherit a lot of money from a rich uncle in Australia. Bob thinks he stands to inherit a lot of money from a rich uncle in New Zealand. Both of them know that it’s wrong to kill rich uncles for their inheritance, but each of them nonetheless hires a hitman with the instruction to kill the rich uncle. Both hitmen run off with the money and do nothing. But Bob in fact has no uncles—he was misinformed.

Here are some plausible observations:

  1. Alice culpably committed two wrongs: she violated her conscience and she wronged her uncle by hiring a hitman to kill him.

  2. Bob culpably committed only one of these wrongs: he violated his conscience.

  3. Bob is just as morally culpable as Alice.

Here is one way to reconcile these observations. We should distinguish between something like moral failings of the will, on the one hand, and wrongdoings, on the other. It is the moral failings of the will that result in culpability. This culpability then will qualify one or more wrongdoings. But the amount of culpability is not assessed by counting the culpable wrongdoings, but by looking at the moral failings of the will. A being that executes unalloyed perfect justice will look only at these failings of the will. Alice and Bob each morally failed in the same way and to the same degree (as far as the stories go), and so they are equally culpable. But, nonetheless, Alice has two culpable wrongdoings—culpable through the same moral failing of the will, which should not be double counted for purposes of just punishment.

Friday, April 27, 2018

Love and deontology

Sometimes it wrongs a person to intentionally do to them what is known to be in their own best interest. If by torturing you for 60 minutes I can prevent you from being tortured in the same way for 70 minutes by someone else, it may be in your best interest that I torture you. But it is still wrong for me to torture you. Cases of this sort can be multiplied, though of course only deontologists will find any of them plausible.

(One can also analyze these cases as ones where the action is wrong because it is a violation of the agent’s own human dignity. I think the actions are violations of the agent’s own dignity, but they are violations of the agent’s dignity because they wrong the other party.)

These are cases where your action wrongs someone but benefits them on balance. This means that being wronged does not entail being on balance harmed.

Here is how I think we should think of these cases. The true ethics is an ethics of love: I should love everyone. But benevolence is only one of the three fundamental aspects of love, with the other two being union and appreciation. To wrong someone is to violate one or more of the three aspects of love. If I intentionally do something that is known to be in your best interest, I do not violate the benevolence aspect of love. But I may violate one of the other two aspects. In the cases I am thinking of, like torture, the act is an affront to your human dignity, and by affronting your human dignity I am directly acting against the appropriate kind of unitive relationship between human beings—hence, I violate the unitive aspect of love.

It may seem, however, that these are cases where I have a real moral dilemma. For if I refuse to do the act, then it seems I am violating the benevolence of love. But this is mistaken. To fail to be benevolent is not to oppose benevolence. Some cases are obvious. If I fail to be benevolent to you because someone just as close to me has a greater need, I may have done something not in your best interest, but I have not violated the benevolence of love. By contrast, if I intentionally did to you what was not in your best interest precisely because it was not in your best interest, then I would have violated love.

Thursday, April 26, 2018

Alethic Platonism

I’ve been thinking about an interesting metaphysical thesis about arithmetic, which we might call alethic Platonism about arithmetic: there is a privileged, complete and objectively correct assignment of truth values to arithmetical sentences, not relative to a particular model or axiomatization.

Prima facie, one can be an alethic Platonist about arithmetic without being an ontological Platonist: one can be an alethic Platonist without thinking that numbers really exist. One might, for instance, be a conceptualist, or think that facts about natural numbers are hypothetical facts about sequences of dashes.

Conversely, one can be an ontological Platonist without being an alethic Platonist about arithmetic: one can, for instance, think there really are infinitely many pluralities of abstracta each of which is equally well qualified to count as “the natural numbers”, with different such candidates for “the natural numbers” disagreeing on some of the truths of arithmetic.

Alethic Platonism is, thus, orthogonal to ontological Platonism. Similar orthogonal pairs of Platonist claims can be made about sets as well as about the naturals.

One might also call alethic Platonism “alethic absolutism”.

I suspect causal finitism commits one to alethic Platonism.

Something close to alethic Platonism about arithmetic is required if one thinks that there is a privileged, complete and objectively correct assignment of truth values to claims about what sentence can be proved from what sentence. Specifically, it seems to me that such an absolutism about proof-existence commits one to alethic Platonism about the Σ⁰₁ sentences of arithmetic.
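
In outline (my gloss, not spelled out above): a Σ⁰₁ sentence asserts that some number witnesses a decidable condition, and by the Σ⁰₁-completeness of even very weak arithmetic theories, such as Robinson’s Q, a Σ⁰₁ sentence is true if and only if it is provable from Q:

    \sigma \text{ is true} \iff Q \vdash \sigma \qquad (\sigma \in \Sigma^0_1)

So an objectively correct assignment of truth values to proof-existence claims induces one on the Σ⁰₁ sentences.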

Wednesday, April 25, 2018

We aren't just rational animals

I think some Aristotelian philosophers are inclined to think that our nature is to be rational animals, so that all rational animals would be of the same metaphysical species. Here is a problem with this. Our nature—or form or essence—specifies the norms for our structure. Our norms specify that we should be bipedal: there is something wrong with us if we are incapable of bipedality. But an intelligent squid would be a rational animal, and its norms would surely not specify that it is supposed to be bipedal. So, it seems, the hypothetical intelligent squid would have a different nature from ours.

But that was too quick. For it could be that our nature grounds conditionals like:

  1. If you’re human, you should have two arms and two legs.

  2. If you’re a squid, you should have eight arms and two tentacles.

We have some reason to think there are such conditional normative facts even if we take our metaphysical species narrowly to be something like human or even Homo sapiens, since presumably our nature grounds normative conditionals about bodily structure with antecedents specifying whether we are male or female.

But there is a hitch here: if humans and intelligent squid have the same form, what makes it the case that for me the antecedent of 1 is true while for Alice (say) the antecedent of 2 is true? I think our best story may be that it is facts about DNA, so in fact the antecedents of 1 and 2 are abbreviations for complex facts about DNA.

That might work for DNA-based animals, which are all the animals we have on earth, but it probably won’t work for all possible animals. For surely there nomically could be animals that are not based on DNA, and it is implausible that we carry in our nature the grounds for an array of conditionals for all the nomically (at least) possible genetic encoding schemes.

I suppose we could take our nature to be rational members of the Animalia, with the assumption that the kingdom Animalia necessarily includes only DNA-based organisms (but not all of them, of course). But Animalia seems a somewhat arbitrary choice of classification to tack on to rationality. It doesn’t have the exobiological generality of animal, the earthly generality of DNA-based organism, or the specificity of human.

It seems to me that

  • rational DNA-based organism, or

  • rational member of genus Homo

are better options for where to draw the lines of our metaphysical species, assuming “rationality” is the right category (as opposed to, say, St. John Paul II’s suggestion that we are fundamentally self-givers), than either rational animal or rational member of Animalia.

Tuesday, April 24, 2018

Balancing between theism and atheism

The problem of evil consists of three main parts:

  • The problem of suffering.

  • The problem of evil choices.

  • The problem of hiddenness (which is an evil at most conditionally on God’s existing).

The theist has trouble explaining why there is so much suffering. The atheist, however, has trouble explaining why there is any suffering, given that suffering presupposes consciousness, and the atheist has trouble explaining why there is any consciousness.

Of course, there are atheist-friendly naturalistic accounts of consciousness. But they all face serious difficulties. This parallels the fact that theists have theodical accounts of why God permits so much suffering, accounts that also face serious difficulties.

So, on the above, considerations of suffering are a net tie between theism and atheism.

The theist does not actually have all that much trouble explaining why there are evil choices. Libertarian free will does the job. Of course, there are some problems with libertarian accounts of free will. These problems are not, I think, nearly as serious as the problems that theists have with explaining why there is so much suffering or atheists have with explaining why there is consciousness. Moreover, there is a parallel problem for the atheist. Evil choices can only exist given free will. Prima facie the most plausible accounts of free will are libertarian agent-causal ones. But those are problematic for the atheist, who will find it difficult to explain where libertarian agents come from. The atheist probably has to embrace a compatibilist theory, which has at least as many problems as libertarian agent-causalism.

So, considerations of evil choices look, at best, like a net tie for the atheist.

Finally, there is the problem of hiddenness for the theist. But while the theist has trouble explaining how we don’t all know something so important as the existence of God, the atheist has epistemological trouble of her own: she has trouble explaining how she knows that there is no God. After all, knowledge of the highly abstract facts that enter into arguments regarding the existence of God is not the sort of knowledge that seems to be accessible to evolved natural beings.

So, considerations of knowledge of the existence or non-existence of God look like a net tie.

The problem of evil, however, exhausts the powerful arguments for atheism. But the above considerations far from exhaust the powerful arguments for theism.

The above reasoning no doubt has difficulties. But I want to propose it as a strategy for settling disputes in cases where it's hard to assign probabilities. For even if it's hard to assign probabilities, we can have good intuitions that two considerations are a wash, that they provide equal evidence. And if we can line up arguments in such a way, being more careful with issues of statistical dependence than I was above, then we can come to a view as to which way some bunch of evidence points.

Monday, April 23, 2018

A tweak to the ontomystical argument

In an old paper, I argued that we do not hallucinate impossibilia: if we perceive something, the thing we perceive is possible, even if it is not actual. Consequently, if anyone has a perception—veridical or not—of a perfect being, a perfect being is possible. And mystics have such experiences. But as we know from the literature on ontological arguments, if a perfect being is possible, then a perfect being exists (this conditional goes back at least to Mersenne). So, a perfect being exists.

I now think the argument would have been better formulated in terms of what two-dimensional semanticists like Chalmers call “conceivability”:

  1. What is perceived (perhaps non-veridically) is conceivable.

  2. A perfect being is perceived (perhaps non-veridically).

  3. If a perfect being is conceivable, a perfect being is possible.

  4. A perfect being is possible.

  5. If a perfect being is possible, a perfect being exists.

  6. So, a perfect being exists.
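
The inference is formally valid; here is a minimal propositional sketch in Lean (my encoding, not part of the post), where P, C, M and E abbreviate “a perfect being is perceived”, “…is conceivable”, “…is possible” and “…exists”:

    -- Premises (1)-(3) and (5), chained by modus ponens; step (4) is the
    -- intermediate term h3 (h1 h2), and (6) is the conclusion E.
    example (P C M E : Prop)
        (h1 : P → C)  -- (1), instantiated to a perfect being
        (h2 : P)      -- (2)
        (h3 : C → M)  -- (3)
        (h5 : M → E)  -- (5)
        : E :=
      h5 (h3 (h1 h2))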

Premise (3) follows from the fact that the notion of a perfect being is not twinearthable, so conceivability and possibility are equivalent for a perfect being (Chalmers is explicit that this is the case for God, but he concludes that God is inconceivable). Premise (1) avoids what I think is the most powerful of Ryan Byerly’s four apparent counterexamples to my original argument: the objection that one might have perceptions that are incompatible with necessary truths about natural kinds (e.g., a perception that a water molecule has three hydrogen atoms).

Friday, April 20, 2018

Non-instrumental pursuit

I pursue money instrumentally—for the sake of what it can buy—but I pursue fun non-instrumentally.

Here’s a tempting picture of the instrumental/non-instrumental difference as embodied in the money and fun example:

  1. Non-instrumental pursuit is a negative concept: it is instrumental pursuit minus the instrumentality.

But (1) is mistaken for at least two reasons. The shallower reason is an observation we get from the ancients: it is possible to simultaneously pursue the same goal both instrumentally and non-instrumentally. You might have fun both non-instrumentally and in order to rest. But then lack of instrumentality is not necessary for non-instrumental pursuit.

The deeper reason is this. Suppose I am purely instrumentally pursuing money for the sake of what it can buy, but I then remove the instrumentality, either by ceasing to pursue things that can be bought or by ceasing to believe that money can buy things, without adding any new motivations to my will. Then clearly the pursuit of money rationally needs to disappear—if it remains, that is a clear case of irrationality. But if non-instrumental pursuit were simply an instrumental pursuit minus the instrumentality, then why wouldn’t the removal of the instrumentality from my pursuit of money leave me non-instrumentally and rationally pursuing money, just as I non-instrumentally and rationally pursue fun?

There is a positive element in my pursuit of fun, a positive element that would be lacking in my pursuit of money if I started with instrumental pursuit of money and took away the instrumentality and somehow (perhaps per impossibile) continued (but now irrationally) pursuing money. It is thus more accurate to talk of “pursuit of a goal for its own sake” than to talk of “non-instrumental pursuit”, as the latter suggests something negative.

The difference here is somewhat like the difference between the concepts of an uncaused being and a self-existent being. If you take away the cause of a brick and yet keep the brick (perhaps per impossibile), you have a mere uncaused being. That’s not a self-existent being like God is said to be.

Thursday, April 19, 2018

Affronts to human dignity

Some evils are not just very bad. They are affronts to human dignity. But those evils, paradoxically, provide an argument for the existence of God. We do not know what human dignity consists in, but it isn’t just being an agent, being really smart, etc. For human dignity to play the sort of moral role it does, it needs to be something beyond the physical, something numinous, something like a divine spark. And on our best theories of what things are like if there is no God, there is nothing like that.

So:

  1. There are affronts to human dignity.

  2. If there are affronts to human dignity, there is human dignity.

  3. If there is human dignity, there is a God.

  4. So, there is a God.

This argument is very close to the one I made here, but manages to avoid some rabbit-holes.

Wednesday, April 18, 2018

Van Inwagen on evil

Peter van Inwagen argues that because a little less evil would always serve God’s ends just as well, there is no minimum to the amount of evil needed to achieve God’s ends, and hence the arguer from evil cannot complain that God could have achieved his ends with less evil. Van Inwagen gives a nice analogy of a 10-year prison sentence: clearly, he thinks, a 10-year sentence can be just even if 10 years less a day would achieve all the purposes of the punishment just as well.

I am not convinced about either the punishment or the evil case. Perhaps the judge really shouldn’t choose a punishment where a day less would serve the purposes just as well. I imagine that if we graph the satisfaction of the purposes of punishment against the amount of punishment, we initially get an increase, then a level area, and then eventually a drop-off. Van Inwagen is thinking that the judge is choosing a punishment in the level area. But maybe instead the judge should choose a punishment in the increase area, since only then will it be the case that a lower punishment would serve the purposes of the punishment less well. The downside of choosing the punishment in that area is that a higher punishment would serve the purposes of the punishment better. But perhaps there is a moral imperative to sacrifice the purposes of punishment to some degree, in the name of not punishing more than is necessary. Mercy is more important than retribution, etc.

Similarly, perhaps, God should choose to permit an amount of evil that sacrifices some of his ends (ends other than the minimization of evil), in order to ensure that the amount of evil that he permits is such that any decrease in the evil would result in a decrease in the satisfaction of God’s other ends. If van Inwagen is right about there not being sharp cut-offs, then this may require God to choose to permit an amount of evil such that more evil would have served God’s other ends better.

The above fits with a picture on which decrease of evil takes a certain priority over the increase of good.

Tuesday, April 17, 2018

In vitro fertilization and Artificial Intelligence

The Catholic Church teaches that it is wrong for us to intentionally reproduce by any means other than marital intercourse (though things can be done to make marital intercourse more fertile than it otherwise would be). In particular, human in vitro fertilization is wrong.

But there is clearly nothing wrong with our engaging in in vitro fertilization of plants. And I have never heard a Catholic moralist object to the in vitro fertilization of farm animals.

Suppose we met intelligent aliens. Would it be permissible for us to reproduce them in vitro? I think the question hinges on whether what is wrong with in vitro fertilization has to do with the fact that the creature that is reproduced is one of us or has to do with the fact that it is a person. I suspect it has to do with the fact that it is a person, and hence our reproducing non-human persons in vitro would be wrong, too. Otherwise, we would have the absurd situation where we might permissibly reproduce an alien in vitro, and they would permissibly reproduce a human in vitro, and then we would swap babies.

But if what is problematic is our reproducing persons in vitro, then we need to look for a relevant moral principle. I think it may have something to do with the sacredness of persons. When something is sacred, we are not surprised that there are restrictions. Sacred acts are often restricted by agent, location and time. They are something whose significance goes beyond humanity, and hence we do not have the authority to engage in them willy-nilly. It may be that the production of persons is sacred in this way, and hence we need the authority to produce persons. Our nature testifies to us that we have this authority in the context of marital intercourse. We have no data telling us that we are authorized to produce persons in any other way, and without such data we should not do it.

This would have a serious repercussion for artificial intelligence research. If we think there is a significant chance that strong AI might be possible, we should stay away from research that might well produce a software person.

The independence of the attributes in Spinoza

According to Spinoza, all of reality—namely, deus sive natura and its modes—can be independently understood under each of (at least) two attributes: thought and extension. Under the attribute of thought, we have a world of ideas, and under the attribute of extension, we have a world of bodies. There is identity between the two worlds: each idea is about a body. We have a beautiful account of the aboutness relation: the idea is identical to the body it is about, but the idea and body are understood under different attributes.

But here is a problem. It seems that to understand an idea, one needs to understand what the idea is about. But this seems to damage the conceptual independence of the attributes of thought and extension, in that one cannot fully understand the aboutness of the ideas without understanding extension.

I am not sure what to do about this.

Monday, April 16, 2018

The Repugnant Conclusion and Strong AI

Derek Parfit’s Repugnant Conclusion says that, on standard utilitarian assumptions, if n is sufficiently large, then n lives of some minimal level of flourishing will be better than any fixed-size society of individuals who greatly flourish.
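
To make the arithmetic explicit (the symbols are my notation, not Parfit’s): if each minimally flourishing life contributes utility ε > 0 and each of m greatly flourishing lives contributes utility H, then the large population wins whenever

    n \varepsilon > m H, \quad \text{i.e., } n > \frac{mH}{\varepsilon},

and such an n exists no matter how large H and m are, which is all the argument needs.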

I’ve been thinking about the interesting things that you can get if you combine the Repugnant Conclusion argument with strong Artificial Intelligence.

Assume utilitarianism first.

Given strong Artificial Intelligence, it should be possible to make a computer system that achieves some minimal level of human-like flourishing. Once that is achieved, economies of scale become possible, and I expect it should be possible to replicate that system a vast number of times, and to do so much more cheaply per copy than the cost of supporting a single human being. Note that the replication can be done both synchronically and diachronically: we should optimize the hardware and software in such a way as both to make many instances of the hardware and to run as many flourishing lives per day as possible. Once the program is written, since an exact copy is being run for each instance with the same inputs, we can ensure equal happiness for all.

If strong AI is possible, generating such minimally flourishing AI and making a vast number of replicates seems a more promising way to increase utility than fighting disease and poverty among humans. Indeed, it would likely be more efficient to decrease the number of humans to the minimum needed to serve the great number of duplicates. At that point, the morally best thing for humans to do will be to optimize the hardware to allow us to build more computers running the happy-ish software and to run each life in as short an amount of external time as possible, and to work to increase the amount of flourishing in the software.

Now note an interesting difference from the traditional Repugnant Conclusion. It seems not unlikely that if strong AI is achieved, we will be able to repeatably, safely and cheaply achieve in software not just the minimal levels of human-like flourishing, but high levels of human-like flourishing, even of forms of flourishing other than the pleasure or desire fulfillment that classical utilitarian theories talk about. We could make a piece of software that quickly and cheaply enjoys the life of a classical music aficionado, enjoying the best examples of human classical music culture, and that has no hankering for anything more. And if compatibilism is true (and it is likely that it is true if strong AI is true), then we could make a piece of software that reliably engages in acts of great moral heroism in its simulated world. We lose a bit of value from the fact that these acts only affect a simulated world, but we gain by being able to ensure that no immoral activity mars the value. If we are not certain of the correct axiology, we could hedge our bets by making a software life that is quite flourishing on any plausible axiology: say one that combines pleasure, desire satisfaction, enjoyment of the arts and virtuous activity. And then just run vast numbers of copies of that life per day.

It is plausible that, unless there is some deep spiritual component to human flourishing (of a sort that is unlikely to be there given the materialism that seems needed for strong AI to be possible), we will not only be able to more efficiently increase the sum good by running lots of copies of a happy life than by improving human life, but we will be able to more efficiently improve on the average good.

But one thing is unchanged. The conclusion is still repugnant. A picture of our highest moral imperative being the servicing of a single computer program, run on as many machines as possible, repeatedly, as quickly as possible, is repugnant.

A tempting objection is to say that multiple copies of the same life count as just one. That’s easily fixed: a well-controlled amount of algorithmic variation can be introduced into lives.

Observe, too, that the above line of thought is much more practical than the original Repugnant Conclusion. The original Repugnant Conclusion is highly theoretical, in that it is difficult to imagine putting into place the kind of society that is described in it without a significant risk of utility-destroying revolution. But right now rich philanthropists could switch their resources from benefiting the human race to working to develop a happy AI (I hesitate to write this sentence, with a slight fear that someone might actually make that switch—but the likelihood of my blog having such an effect seems small). One might respond to the Repugnant Conclusion that all ethical theories give implausible answers in some hypothetical cases. But the case here is not hypothetical.

We can take the above, just as the original Repugnant Conclusion, to be a reductio ad absurdum against utilitarianism. But it seems to be more than that. Any plausible ethics has to have a consequentialist component, even if pursuit of the consequences is restricted by deontic considerations. So on many competing ethical theories, there will still be a pull to the conclusion, given the vast amount of total value, and the respectable amount of average (and median) value achieved in the repugnant proposal. And one won’t be able to resist the pull by denying the picture of value that underwrites utilitarianism, because as noted above, “deeper” values can be achieved in software, given strong AI.

I can think of three plausible ways out of the strong AI version of the Repugnant Conclusion:

  1. The correct axiology lays great stress on the value of deep differences between lives, deeper than can be reliably and safely achieved through algorithmic variation (if there is too much variation, we risk producing misery).

  2. There is a deontic restriction prohibiting the production of software-based persons, perhaps because it is wrong for us to have such a total influence over the life of another person or because it is wrong for us to produce persons by any process other than natural reproduction.

  3. Strong AI is impossible.

I am inclined to think all three are true. :-)

Friday, April 13, 2018

Impairment and non-human organisms

Consider a horse with three legs, a bird with one wing, an oak tree without bark, and a yeast cell unable to reproduce. There is something that all four have in common with each other, and which they also have in common with the human who has only one leg. And it seems to me to be important for an account of disability to acknowledge that which all these five organisms have in common. If the right account of disability is completely disjoined from anything that happens in non-human organisms—or even from anything that happens in non-social organisms—then there is another concept in the neighborhood that we really should also be studying in addition to disability, maybe “impairment”.

Moreover, it seems clear the thing that the five organisms in my examples have in common is bad as far as it goes, though of course it might be good for the organism on balance (the one-winged bird might be taken into a zoo, and thereby saved from a predator).

Thursday, April 12, 2018

Divine authority over us

Imagine a custody battle between Alice and Bob over their child Carl. Suppose the court finds that Alice loves Carl much more than Bob does, that Alice is much wiser than Bob, and that Alice knows Carl and his needs much better than Bob does. Moreover, it is discovered that Bob has knowingly unjustifiedly harmed Carl, while Alice has never done that. In the light of these, it is obvious that Alice is a more fitting candidate to have authority over Carl than Bob is.

But now, suppose x is some individual. Then God loves x much more than I love x, God is much wiser than I, God knows x and his needs much better than I do. Moreover, suppose that I have knowingly unjustifiedly harmed x, while God has never done that. In light of these, it should be plausible that God is a more fitting candidate to have authority over x than I am.

Suppose, however, that I am x. The above is still true. God loves me much more than I love myself; God is much wiser than I; God knows me and my needs much better than I do. And I have on a number of occasions knowingly unjustifiedly harmed myself—indeed, in typical cases when I sin, that’s what has happened—while God has never knowingly unjustifiedly harmed me. So, it seems that God is a more fitting candidate to have authority over me than I am.

I am not endorsing a general principle that if someone loves me more than I love myself, etc., then they are more fit to have authority over me. For the someone in question might have little intuitive standing to have authority over me—a complete stranger who inexplicably enormously cares about me might not have much authority over me. But it is prima facie plausible that God has significant authority over me, for the same sorts of reasons that my parents had authority over me when I was a child. And the above considerations suggest that God’s authority over me is likely to be greater than my own authority over myself.

If it is correct that God, if he existed, would have greater authority over me than I have over myself, then that would have significant repercussions for the problem of evil. For a part of the problem involves the question of whether it is permissible for God to allow a person to suffer horrendously even for the sake of greater (or incommensurable but proportionate) goods to them or (especially) another. But it would be permissible for me to allow myself to suffer horrendously for the sake of greater (or incommensurable but proportionate) goods for me or another. If God has greater authority over me than I have over myself, then it would likewise be permissible for God.

This does not of course solve the problem of evil. There is still the question whether allowing the sufferings people undergo has the right connection with greater (or incommensurable but proportionate) goods, and much of the literature on the problem of evil has focused on that. But it does help significantly with the deontic component of the question. (Though even with respect to the deontic aspects, there is still the question of divine intentions—it would I think be wrong even for God to intend an evil for the sake of a good. So care is still needed in theodicy to ensure that the theodicy doesn’t make God out to be intending evils for the sake of goods.)

Wednesday, April 11, 2018

A parable about sceptical theism and moral paralysis

Consider a game. The organizers place a $20 bill in one box and a $100 bill in another box. They seal the boxes. Then they put a $1 bill on top of one of the boxes, chosen at random fairly, and a $5 bill on top of the other box. The player of the game chooses a box and gets both what’s in the box and what’s on top of it. Everyone knows that that’s how the game works.

If you are an ordinary person playing the game, you will be self-interestedly rational to choose the box with the $5 on top of it. The expected payoff for the box with the $5 on it is $65, while the expected payoff for the other box is $61, when one has no information about which box contains the $20 and which contains the $100.

If Alice is an ordinary person playing the game and she chooses the box with the $1 on top of it, that’s very good reason to doubt that Alice is self-interestedly rational.

But now suppose that I am considering the hypothesis that Bob is a self-interestedly rational being who has X-ray vision that can distinguish a $20 bill from a $100 bill inside the box. Then if I see Bob choose the box with the $1 on top of it, that’s no evidence at all against the hypothesis that he is such a being, i.e., a self-interestedly rational being with X-ray vision. If he is such a being, then in repeated playings we’ll see Bob choose the $1 box half the time and the $5 box half the time; and if we didn’t know that Bob has X-ray vision, we would think that Bob is indifferent to money.
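
Here is a minimal Monte Carlo sketch of the game (the payoffs follow the post; the two chooser functions are my illustrative stand-ins for the ordinary player and Bob):

    # Toy simulation: $20 and $100 are hidden inside two boxes; $1 and $5
    # are placed on top at random. A chooser returns the index of a box.
    import random

    def average_payoff(chooser, trials=100_000):
        total = 0
        for _ in range(trials):
            boxes = [20, 100]
            tops = [1, 5]
            random.shuffle(boxes)  # which bill is inside which box
            random.shuffle(tops)   # which bill sits on top of which box
            i = chooser(tops, boxes)
            total += tops[i] + boxes[i]
        return total / trials

    ordinary = lambda tops, boxes: tops.index(5)   # sees only the tops
    xray = lambda tops, boxes: boxes.index(100)    # sees inside the boxes

    print(average_payoff(ordinary))  # ~65 = 5 + (20 + 100)/2
    print(average_payoff(xray))      # ~103 = 100 + (1 + 5)/2

The X-ray chooser ends up with the $1 box in about half the trials, just as in the story, while doing far better than the ordinary player.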

Sceptical theism and the infinity of God

I was never very sympathetic to sceptical theism until I thought of this line of reasoning, which isn’t really new; I had just never quite put it together in this way.

There are radically different types of goods. At perhaps the highest level—call it level A—there are types of goods like the moral, the aesthetic and the epistemic. At a slightly lower level—call it level B—there are types of goods like the goods of moral rightness, praiseworthiness, autonomy, virtue, beauty, sublimity, pleasure, truth, knowledge, understanding, etc. And there will be even lower levels.

Now, it is plausible that a perfect being, a God, would be infinitely good in infinitely many ways. He would thus infinitely exemplify infinitely many types of goods at each level, either literally or by analogy. If so, then:

  1. If God exists, there are infinitely many types of good at each level.

Moreover:

  2. We only have concepts of a finite number of types of good at each level.

Thus:

  3. There are infinitely many types of good at each level that we have no concept of.

Now, let’s think about what would likely be the case if God were to create a world. From the limited theodicies we have, we know of cases where certain types of goods would justify allowing certain evils. So we wouldn’t be surprised if there were evils in the world, though of course all evils would be justified, in the sense that God would have a justification for allowing them. But we would have little reason to think that God would limit his design of the world to only allowing those evils that are justified by the finite number of types of good that we have concepts of. The other types of good are still types of good. Given that there are infinitely many such goods, and we have concepts of only finitely many of them, it would not be significantly unlikely that if God exists, a significant proportion—perhaps a majority—of the evils that have a justification would have a justification in terms of goods that we have no concept of.

And so when we observe a large proportion of evils that we can find no justification for, we observe something that is not significantly unlikely on the hypothesis that God exists. But if something is not significantly unlikely on a hypothesis, it’s not significant evidence against that hypothesis. Hence, the fact that we cannot find justifications for a significant proportion of the evils in the world is not significant evidence against the existence of God.

Sceptical theism has a tendency to undercut design arguments for the existence of God. I do not think this version of sceptical theism has that tendency, but that’s matter for another discussion (perhaps in the comments).

Bayesianism and the multitude of mathematical structures

It seems that every mathematical structure (there are some technicalities as to how to define it) could in fact be the correct description of fundamental physical structure. This means that making Bayesianism the whole story about epistemology—even for idealized agents—is a hopeless endeavor. For there is no hope for an epistemologically useful probability measure over the collection of all mathematical structures unless we rule out the vast majority of structures as having zero probability.
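
A standard counting fact supports this: for any probability measure P, at most n outcomes can have probability at least 1/n, so

    \#\{S : P(S) \ge 1/n\} \le n \quad \text{for every } n \ge 1,

and hence at most countably many structures can receive positive probability at all; every other structure must be assigned zero.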

A natural law or divine command appendix to Bayesianism can solve this problem by requiring us to assign zero probability to some structures that are metaphysically possible but that our Creator wants us to be able to rule out a priori.

Monday, April 9, 2018

Reincarnation and theodicy

As I was teaching on the problem of evil today, I was struck by how nicely reincarnation could provide theodicies for recalcitrant cases. “Why is the fawn dying in the forest fire? Well, for all we know, it’s a reincarnation of someone who committed genocide and is undergoing the just punishment for this, a punishment whose restorative effect will only be seen in the next life.” “Why is Sam suffering with no improvement to his soul? Well, maybe the improvement will only manifest in the next life.”

Of course, I don’t believe in reincarnation. But if the problem of evil is aimed at theism in general, then it seems fair to say that for all that theism in general says, reincarnation could be true.

Here is a particular dialectical context where bringing in reincarnation could be helpful. The theist presses the fine-tuning argument. The atheist instead of embracing a multiverse (as is usual) responds with the argument from evil. The theist now says: While reincarnation may seem unlikely, it surely has at least a one in a million probability conditionally on theism; on the other hand, fine-tuning has a much, much smaller probability than one in a million conditionally on single-universe atheism. So theism wins.

Friday, April 6, 2018

Peer disagreement and models of error

You and I are epistemic peers and we calculate a 15% tip on a very expensive restaurant bill for a very large party. As shared background information, add that calculation mistakes for you and me are pretty much random rather than systematic. As I am calculating, I get a nagging feeling of lack of confidence in my calculation, which results in $435.51, and I assign a credence of 0.3 to that being the tip. You then tell me that you’re not sure what the answer is, but that you assign a credence of 0.2 to its being $435.51.

I now think to myself. No doubt you had a similar kind of nagging lack of confidence to mine, but your confidence in the end was lower. So if each of us had only our own individual calculation, we’d each have good reason to doubt that the tip is $435.51. But it would be unlikely that we would both make the same kind of mistake, given that our mistakes are random. So, the best explanation of why we both got $435.51 is that we didn’t make a mistake, and I now believe that $435.51 is right. (This story works better with larger numbers, as there are more possible randomly erroneous outputs, which is why the example uses a large bill.)
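
Here is a minimal Bayesian sketch of that reasoning; the error model and the number K of equally likely wrong outputs are my illustrative assumptions, not anything argued for above:

    # Each calculation is right with its author's stated credence; otherwise
    # it lands uniformly on one of K plausible wrong answers, independently.
    p_me, p_you = 0.3, 0.2  # credences that our respective calculations are right
    K = 10_000              # assumed number of equally likely erroneous outputs

    # We both announced $435.51. Three ways that agreement can happen:
    both_right = p_me * p_you
    one_right = (p_me * (1 - p_you) + (1 - p_me) * p_you) / K  # a random error hits the true value
    both_wrong = (1 - p_me) * (1 - p_you) / K                  # two independent errors coincide

    # If at least one of us is right and we agree, the shared figure is the tip.
    p_tip = (both_right + one_right) / (both_right + one_right + both_wrong)
    print(round(p_tip, 4))  # ~0.999: agreement pushes me into the belief range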

Hence, your lower reported credence of 0.2 not only did not push me down from my credence of 0.3, but it pushed me all the way up into the belief range.

Here’s the moral of the story: When faced with disagreement, instead of moving closer to the other person’s credence, we should formulate (perhaps implicitly) a model of the sources of error, and apply standard methods of reasoning based on that model and the evidence of the other’s credence. In the case at hand, the model was that error tends to be random, and hence it is very unlikely that an error would result in the particular number that was reported.

Thursday, April 5, 2018

Defeaters and the death penalty

I want to argue that one can at least somewhat reasonably hold this paradoxical thesis:

  • The best retributive justice arguments in favor of the death penalty are sound and there are no cases where the death penalty is permissible.

Here is one way in which one could hold the thesis: One could simply think that nobody commits the sorts of crimes that call for the death penalty. For instance, one could hold that nobody commits murder, etc. But it’s pretty hard to be reasonable in thinking that: one would have to deny vast amounts of data. A little less crazily, one could think that the mens rea conditions for the crimes that call for the death penalty are so strict that nobody actually meets them. Perhaps every murderer is innocent by reason of insanity. That’s an improvement over the vast amount of denial that would be involved in saying there are no murders, but it’s still really implausible.

But now notice that the best retributive justice arguments in favor of the death penalty had better not establish that there are crimes such that it is absolutely morally required that one execute the criminal. First, no matter how great the crime, there are circumstances which could morally require us to let the criminal go. If aliens were to come and threaten to destroy all life on earth unless we spared a mass murderer, we would surely have to just leave the mass murderer to divine justice. Second, if the arguments in favor of the death penalty are to be plausible, they had better be compatible with the possibility of clemency.

Thus, the most the best of the arguments can be expected to establish is that there are crimes which generate strong moral reasons of justice to execute the criminal, but the reasons had better be defeasible. One could, however, think that defeaters occur in all actual cases. Of course, some stories about defeaters are unlikely to be reasonable: one is not likely to reasonably hold that aliens will destroy all of us if we execute someone.

But there could be defeaters that could be more reasonably believed in. Here are some such things that one could believe:

  • God commanded us to show a clemency to criminals that in fact precludes the death penalty.

  • Criminals being executed are helpless, and killing helpless people—even justly—causes a harm to the killer’s soul that is a defeater for the reasons for the death penalty.

  • We are all guilty of offenses that deserve the death penalty—say, mortal sins—and executing someone when one oneself deserves the death penalty is harmful to one’s character in a way that is a defeater for the reasons for the death penalty.

(I myself am open to the possibility that the first of these could actually be the case in New Testament times.)

Wednesday, April 4, 2018

Group impairment and Aristotelianism

Aristotelians have a metaphysical ground for claims about what is normal and abnormal in an individual: the form of a substance grounds the development of individuals in a teleological way and specifies what the substance should be like. Thus a one-eyed owl is impaired—while it is an owl, it falls short of the specification in its form.

But there is another set of normalcy claims that are harder to ground in form: claims about the proportions of characteristics in a population. Sex ratios are perhaps the most prevalent example: if all the foals born over the next twenty years were, say, male, then that would be disastrous for the horse as a species. And yet it seems that each individual foal could still be a perfect instance of its kind, since both a male and a female can be a perfect instance of horsehood. Caste in social insects is another example: it would be disastrous for a bee hive if all the females developed into workers, even though each one could be a perfect bee.

The two cases are different. The sex of a horse is genetically determined, while social insect caste is largely or wholly environmental. Still, both are similar in that the species not only has norms as to what individuals should be like but also as to what the distribution of types of individuals should be. There is not only the possibility of individual but of group impairment. But what is the metaphysics behind these norms?

Infamously, Aristotle interpreters differ on whether forms are individual or common: whether two members of the same species have a merely exactly similar or a numerically identical form. Here is a place where taking forms to be common would help: for then the form could not only dictate the variation between the parts of each organism’s body but also the variation between the organisms in the species. But taking forms to be common would be ethically disastrous, because it would mean that all humans have the same soul, since the soul is the form of the human being.

Here’s my best solution to the puzzle. The form specifies the conditions of the flourishing of an individual. But these conditions can be social in addition to individual. Thus, a perfectly healthy and well-nourished male foal would not be flourishing if it lacks a society with potential future mates. And while each worker bee can internally be a fulfilled worker bee, it is not flourishing if its work does not in fact help support a queen. These social conditions for flourishing are constitutive. It’s not that the lack of a queen will cause the worker bee to die sooner (though for all I know, it might), but that the lack of a queen is constitutive of the worker bee being poorly off.

Once we see that there can be constitutive social conditions for flourishing, it is natural to think that there will be constitutive environmental conditions for flourishing. And this could be the start of an Aristotelian philosophy of ecology.

A multiple-realizability problem for computational theories of mind

Consider a computational theory of mind overlaid on a reductive physicalist ontology. Here is, I think, how the story would have to work. We need a mapping between a physical system (PS) and an abstract model of computation (AMC), because on a computational theory of mind, thoughts need to be defined in terms of the functioning of an AMC associated with a PS. But there are infinitely many mappings between PSs and AMCs. If thought is defined by computation, and yet we are to avoid a hyper-panpsychism on which every physical system thinks infinitely many thoughts, we need to heavily restrict the mappings between PSs and AMCs. I know of only one promising strategy of mapping restriction, and that is to require that if we specify the PSs using a truly fundamental language—one whose primitives are “structural” in Sider’s sense—the mapping can be sufficiently briefly described.

If we were dealing with infinite PSs and infinite AMCs, there would be a nice non-arbitrary way to do this: we could require that the mapping description be finite (assuming the language has expressive resources like recursion). But with finite PSs and AMCs, that will still generate hyper-panpsychism, since there will be infinitely many finite AMCs that can be assigned to a given PS.

This means that we have to restrict the mapping not merely to a finite description, but to a short finite description. Once we do that, we will specify that a PS x thinks the thoughts that are associated with an AMC y if and only if the mapping between x and y is short. One obvious problem here is the seeming arbitrariness of whatever threshold of shortness we have.
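
The brevity cap does at least secure finitude: over a fundamental language with b primitive symbols there are at most b^k descriptions of length k, hence

    \sum_{k=0}^{L} b^k = \frac{b^{L+1}-1}{b-1} < \infty

descriptions of length at most L, so any fixed threshold L admits only finitely many PS-to-AMC mappings. The arbitrariness worry is about where L falls, not about whether a cap yields finitude.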

But there is another interesting problem. This approach will violate the multiple realizability intuition that leads many people to computational theories of mind. For imagine a reductive physicalist world w* which is just like ours at the macroscopic level, and even at the atomic level, but whose microscopic reduction goes a number of extra levels down, with the reductions being quite complex. Thus, although in our world facts about electrons may be fundamental, in w* these facts are far from fundamental, being reducible to facts about much more fundamental things and reducible in a complex way. Multiple realizability intuitions lead one to think that macroscopic entities in a world like w* that behave just like humans down to the atomic level could think like we do. But if the reduction from the atomic level to the fundamental level in w* is sufficiently complicated, then the brain to human-like AMC mapping in w* will fail to meet the brevity condition, and hence the beings won’t think, or at least not like we do.

The problem is that it is really hard to both avoid hyper-panpsychism and allow for multiple realizability intuitions while staying within the confines of a reductive physicalist computational theory of mind. A dualist, of course, has no such difficulty: a soul can be attached to w*’s human-like organisms with no more difficulty than it can to our world’s human organisms.

Suppose the computationalist denies that multiple realizability extends to worlds like w*. Then there is a new and interesting feature of fine-tuning in our world that calls out for explanation: our world’s fundamental level is sufficiently easily mapped to a neural level to allow the neural level to count as engaging in thoughtful computation.

Tuesday, April 3, 2018

Divine command and natural law epistemology

I am impressed by the idea that kinds of beings other than humans can appropriately have different doxastic practices from ours, in light of:

  1. a different environment which makes different practices truth-conducive, and

  2. different proper goals for their doxastic practices (e.g., a difference of emphasis on explanation versus prediction; a difference in what subject matter is more important).

Factor (1) is captured by reliabilism, but reliabilism does not by itself do much to help with (2), and it suffers from an insuperable reference class problem.

I know of two epistemological theories that nicely capture the differences between epistemic practices in the light of both (1) and (2):

  • divine command epistemology: a doxastic practice is required just in case God commands it (variant: commands it in light of truth-based goods).

  • natural law epistemology: a doxastic practice is required just in case it is natural to its practitioner (variant: natural and ordered towards truth-based goods).

Both of these theories have an interesting meta-theoretic consequence: they make particularly weird thought experiments less useful in epistemology. For God’s reasons for requiring a doxastic practice may well be linked to our typical way of life, and a practice that is natural in one ecological niche may have unfortunate consequences outside that niche. (That’s sad for me, since making up weird thought experiments is something I particularly enjoy!)

(Note, however, that both of these theories have nothing to say on the question of knowledge. That’s a feature, not a bug. I think we don’t need a concept of (propositional) knowledge, just as we don’t need a concept of baldness. Anything worth saying using the language of “knowledge” or “baldness” can be more precisely said without it—one can talk of degrees of belief and justification, amount of scalp coverage, etc.—and while it’s an amusing question how exactly to analyze knowledge or baldness, it’s just that.)