Showing posts with label sceptical theism.

Friday, November 15, 2019

Molinism and sceptical theism

When we think of God’s reasons for permitting evils, we tend to think of fairly “natural” connections between evils and goods. But given Molinism, there could be some really weird connections. For instance, it could be that if Alice hadn’t been cut off by Bob in traffic today, Carl who witnessed this would have joined a terrorist organization. Not because there is any intrinsic connection between seeing someone get cut off in traffic and joining a terrorist organization, but just because that’s how the conditionals of free will worked out.

Indeed, a Molinist should expect there to be cases where the Molinist conditionals work out the opposite way to the “natural” connections. Thus, we can have cases where becoming more cowardly results in one’s behaving more courageously, just as a Molinist God might know that the coin loaded in favor of heads would show tails in the next ten tosses while the coin loaded in favor of tails would show heads in the next ten tosses.

So there seems to me to be a very nice affinity between Molinism and sceptical theism.

It’s really too bad that Molinism is false.

Wednesday, April 11, 2018

A parable about sceptical theism and moral paralysis

Consider a game. The organizers place a $20 bill in one box and a $100 bill in another box. They seal the boxes. Then they put a $1 bill on top of one of the boxes, chosen fairly at random, and a $5 bill on top of the other box. The player of the game chooses a box and gets both what’s in the box and what’s on top of it. Everyone knows that that’s how the game works.

If you are an ordinary person playing the game, it is self-interestedly rational for you to choose the box with the $5 on top of it. With no information about which box contains the $20 and which contains the $100, the expected payoff for the box with the $5 on it is $65, while the expected payoff for the other box is $61.
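As a quick check of that arithmetic (assuming the $20 and the $100 are equally likely to be under either bill):

    E[\text{box under the \$5}] = 5 + \tfrac{1}{2}(20) + \tfrac{1}{2}(100) = 65
    E[\text{box under the \$1}] = 1 + \tfrac{1}{2}(20) + \tfrac{1}{2}(100) = 61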

If Alice is an ordinary person playing the game and she chooses the box with the $1 on top of it, that’s very good reason to doubt that Alice is self-interestedly rational.

But now suppose that I am considering the hypothesis that Bob is a self-interestedly rational being who has X-ray vision that can distinguish a $20 bill from a $100 bill inside the box. Then if I see Bob choose the box with the $1 on top of it, that’s no evidence at all against the hypothesis that he is such a being, i.e., a self-interestedly rational being with X-ray vision. In repeated playings, if he is such a being, we’ll see Bob choose the $1 box half the time and the $5 box half the time; and if we didn't know that Bob has X-ray vision, we might well conclude that Bob is indifferent to money.
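Here is a minimal simulation sketch of that point in Python (the box contents, bill placement, and trial count are just the illustrative assumptions above):

    import random

    def play_once():
        # Seal the $20 in one box and the $100 in the other, in random order.
        contents = [20, 100]
        random.shuffle(contents)
        # Put the $1 on a randomly chosen box and the $5 on the other.
        tops = [1, 5]
        random.shuffle(tops)
        # Bob's X-ray vision: he sees the contents and picks the box
        # with the higher total payoff (contents plus bill on top).
        bob_pick = max(range(2), key=lambda i: contents[i] + tops[i])
        return tops[bob_pick]

    trials = 100_000
    dollar_box_picks = sum(1 for _ in range(trials) if play_once() == 1)
    # Comes out near 0.5: Bob takes the $1-topped box whenever it hides the $100.
    print(dollar_box_picks / trials)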

Sceptical theism and the infinity of God

I was never very sympathetic to sceptical theism until I thought of this line of reasoning, which isn’t really new, but which I had never quite put together in this way.

There are radically different types of goods. At perhaps the highest level—call it level A—there are types of goods like the moral, the aesthetic and the epistemic. At a slightly lower level—call it level B—there are types of goods like the goods of moral rightness, praiseworthiness, autonomy, virtue, beauty, sublimity, pleasure, truth, knowledge, understanding, etc. And there will be even lower levels.

Now, it is plausible that a perfect being, a God, would be infinitely good in infinitely many ways. He would thus infinitely exemplify infinitely many types of goods at each level, either literally or by analogy. If so, then:

  1. If God exists, there are infinitely many types of good at each level.

Moreover:

  2. We only have concepts of a finite number of types of good at each level.

Thus:

  3. There are infinitely many types of good at each level that we have no concept of.

Now, let’s think what would likely be the case if God were to create a world. From the limited theodicies we have, we know of cases where certain types of goods would justify allowing certain evils. So we wouldn't be surprised if there were evils in the world, though of course all evils would be justified, in the sense that God would have a justification for allowing them. But we would have little reason to think that God would limit his design of the world to allowing only those evils that are justified by the finite number of types of good that we have concepts of. The other types of good are still types of good. Given that there are infinitely many such goods, and that we have concepts of only finitely many of them, it would not be significantly unlikely that, if God exists, a significant proportion—perhaps a majority—of the evils that have a justification would have a justification in terms of goods that we have no concept of.

And so when we observe a large proportion of evils that we can find no justification for, we observe something that is not significantly unlikely on the hypothesis that God exists. But if something is not significantly unlikely on a hypothesis, it’s not significant evidence against that hypothesis. Hence, the fact that we cannot find justifications for a significant proportion of the evils in the world is not significant evidence against the existence of God.

Sceptical theism has a tendency to undercut design arguments for the existence of God. I do not think this version of sceptical theism has that tendency, but that’s a matter for another discussion (perhaps in the comments).

Friday, March 18, 2016

A quick argument against extreme sceptical theism

Extreme sceptical theism holds that no evil is any evidence at all against theism. Here's a counterexample. Let E be this evil: Someone has no relationship with God. Then P(E | no God) = 1 (since if God doesn't exist, no one has a relationship with him, but it's certain that someone--say myself, for Cartesian reasons--exists) but P(E | God exists) < 1. So, E is evidence against the existence of God.
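In Bayesian terms, this is just the point that a likelihood ratio below 1 lowers the posterior odds (a sketch, writing G for the hypothesis that God exists):

    \frac{P(G \mid E)}{P(\neg G \mid E)} = \frac{P(E \mid G)}{P(E \mid \neg G)} \cdot \frac{P(G)}{P(\neg G)} < \frac{P(G)}{P(\neg G)}, \quad \text{since } P(E \mid \neg G) = 1 > P(E \mid G).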

Thursday, August 13, 2015

A modest sceptical theism that doesn't lead to moral scepticism

I'll just baldly give the theory without much argument. Axiology is necessary. It's a necessary truth that friendship and knowledge are good, that false beliefs are bad, etc. But the values have many aspects and exhibit much incommensurability. It's also a necessary truth that the good is to be pursued and the bad avoided. This gives some practical guidance, but mainly in cases where the reasons in favor of an action dominate those against. And that's rare. In typical cases agents face competing incommensurable reasons.

There may also be a necessary truth that some goods are fundamental and never to be acted against. The nature of a particular kind of agent then specifies how incommensurability is to be resolved: when the agent should be merciful rather than strictly just, when strictly just, and when the agent is morally free to go either way. The nature of an agent also gives the agent inclinations to act accordingly, inclinations that can be introspected. So we can know how we should resolve cases of incommensurability when they come up for us. We have reliable moral intuitions about ourselves.

But these moral intuitions are about humans. Intelligent sharks would have a nature that resolves incommensurables differently, and our moral intuitions wouldn't directly tell us much about how intelligent sharks should act (except in cases of domination and maybe the deontic constraint not to act directly against the most fundamental goods). So we have a reasonable scepticism about our insight into how a morally upright intelligent shark would act. But this scepticism of course in no way detracts from our knowledge of how we should resolve incommensurables.

For exactly the same reason, we have a reasonable scepticism about how God would act, about what resolutions between incommensurables are necessitated by his nature and which are left to choice. But this scepticism in no way detracts from our moral knowledge.

I don't think the scepticism is total. We can engage in limited analogical speculation. But this needs modesty if the theory is right.

Let me end with a little argument. When we think of particularly outlandish ethics cases, such as actions that affect an infinite number of people, we get stuck or even misled. No surprise on the above theory. We aren't made for such decisions. Those are decisions for more godlike beings than us. Perhaps our nature simply fails to specify the resolutions for these cases, as they aren't relevant to us in our niche. Imagine asking an intelligent amoeba about sexual ethics!

Wednesday, February 18, 2015

A fallacy of probabilistic reasoning with an application to sceptical theism

Consider this line of reasoning:

  1. Given my evidence, I should do A rather than B.
  2. So, given my evidence, it is likely that A will be better than B.

This line of reasoning is simply fallacious. Decisions in many contexts where deontological-like concerns are not relevant are appropriately made on the basis of expected utilities. But the following inference is fallacious:

  3. The expected utility of A is higher than that of B.
  4. So, probably, A has higher utility than B.

In fact it may not even be possible to make sense of (4). For instance, suppose I am choosing between playing one of two indeterministic games that won't be played without me. I must play exactly one of the two. Game A pays a million dollars if I win, and the chance of winning is 1/1000. Game B pays a hundred dollars if I win, and the chance of winning is still 1/1000. Obviously, I should play game A, since the expected utility is much higher. But unless something like Molinism is true, if I choose A, there is no fact of the matter as to how B would have gone, and if I choose B, there is no fact of the matter as to how A would have gone. So there is no fact of the matter as to whether A or B would have higher utility.

But even when there is a fact of the matter, the inference from (3) to (4) is fallacious, due to simple cases. Suppose that a die has been rolled but I haven't seen the result. I can choose to play game A which pays $1000 if the die shows 1 and nothing otherwise, or I have option B which is just to get a dollar no matter what. Then the expected utility of A is about $167 (think 1000/6) and the expected utility of B is exactly $1. However, there is a 5/6 chance that B has higher utility.
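A small sketch of the two quantities in this example (just the payoffs described above):

    from fractions import Fraction

    # Game A: $1000 if the die shows 1, else $0. Game B: $1 for sure.
    p_one = Fraction(1, 6)
    eu_A = p_one * 1000       # expected utility of A: 1000/6, about $167
    eu_B = Fraction(1)        # expected utility of B: exactly $1

    # B actually pays more than A whenever the die did NOT show 1.
    p_B_better = 1 - p_one    # 5/6

    print(float(eu_A), float(eu_B), p_B_better)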

The lesson here is that our decisions are made on the basis of expected utilities rather than on the basis of the probabilities of the better outcome.

Now the application. One objection to some resolutions to the problem of evil, notably sceptical theism, is this line of thought:

  5. We are obligated to prevent evil E.
  6. So, probably, evil E is not outweighed by goods.

But this is just a version of the expectation-probability fallacy above. Bracketing deontological concerns, what is relevant to evaluating claim (5) is not so much the probability that evil E is or is not outweighed by goods, but the expected utility of E or, more precisely, the expected utilities of respectively preventing or not preventing E. On the other hand, what is relevant to (6) is precisely the probability that E is outweighed.

One might worry that the case of responses to the problem of evil isn't going to look anything like the cases that provide counterexamples to the expectation-probability fallacy. In other words, even though the expectation-probability fallacy is a fallacy in most cases, it isn't fallacious in the case of (5) and (6). But it's possible to provide a counterexample to the fallacy that is quite close to the sceptical theism case.

At this point the post turns a little more technical, and I won't be offended if you stop reading. Imagine that a quarter has been tossed a thousand times and so has a dime. There is now a game. You choose which coin counts—the quarter or the dime—and then sequentially over the next thousand days you get a dollar for each heads toss and pay a dollar for each tails toss. Moreover, it is revealed to you that the first time the quarter was tossed it landed heads, while the first time the dime was tossed it landed tails.

It is clear that you should choose to base the game on the tosses of the quarter. For the expected utility of the first toss in this game is $1, and the expected utility of each subsequent toss is $0, for a total expected utility of one dollar, whereas the expected utility of the first toss in the dime-based game is $(-1), and the subsequent tosses have zero expected utility, so the expected utility is negative one dollar.

On the other hand, the probability that the quarter game is better than the dime game is insignificantly higher than 1/2. (We could use the binomial distribution to say just how much higher than 1/2 it is.) The reason for that is that the 999 subsequent tosses are very likely to swamp the result from the first toss.
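Here is a quick Monte Carlo sketch of that probability (1000 tosses per coin, with the quarter's first toss fixed to heads and the dime's first toss fixed to tails as in the story; the trial count is an arbitrary choice):

    import random

    def game_payoff(first_toss_heads):
        # The first toss is fixed; the remaining 999 tosses are fair.
        total = 1 if first_toss_heads else -1
        total += sum(random.choice((1, -1)) for _ in range(999))
        return total

    trials = 50_000
    quarter_wins = sum(
        1 for _ in range(trials) if game_payoff(True) > game_payoff(False)
    )
    # Comes out only a little above 1/2: the 999 unknown fair tosses
    # mostly swamp the one known toss of each coin.
    print(quarter_wins / trials)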

Suppose now that you observe Godot choosing to play the dime game. Do you have significant evidence against the hypothesis that Godot is an omniscient self-interested agent? No. For if Godot is an omniscient self-interested agent, he will know how all the 1000 tosses of each coin went, and there is a probability only insignificantly short of 1/2 that they went in such a way that the dime game pays better.

Tuesday, February 17, 2015

The mystical security guard

One objection to some solutions to the problem of evil, particularly to sceptical theism, is that if there are such great goods that flow from evils, then we shouldn't prevent evils. But consider the following parable.

I am an air traffic controller and I see two airplanes that will collide unless they are warned. I also see our odd security guard, Jane, standing around and looking at my instruments. Jane is super-smart and very knowledgeable, to the point that I've concluded long ago that she is in fact all-knowing. A number of interactions have driven me to concede that she is morally perfect. Finally, she is armed and muscular so she can take over the air traffic control station on a moment's notice.

Now suppose that I reason as follows:

  • If I don't do anything, then either Jane will step in, take over the controls and prevent the crash, or she won't. If she does, all is well. If she doesn't, that'll be because in her wisdom she sees that the crash works out for the better in the long run. So, either way, I don't have good reason to prevent the crash.

This is fallacious as it assumes that Jane is thinking of only one factor, the crash and its consequences. But the mystical security guard, being morally perfect, is also thinking of me. Here are three relevant factors:

  • C: the value of the crash
  • J: the value of my doing my job
  • p: the probability that I will warn the pilots if Jane doesn't step in.

Here, J>0. If Jane foresees that the crash will lead to on balance goods in the long run, then C>0; if common sense is right, then C<0. Based on these three factors, Jane may be calculating as follows:

  • Expected value of non-intervention: pJ+(1−p)C
  • Expected value of intervention: 0 (no crash and I don't do my job).

Let's suppose that common sense is right and C<0. Will Jane intervene? Not necessarily. If p is sufficiently close to 1, then pJ+(1−p)C>0 even if C is a very large negative number. So I cannot infer that if C<0, or even if C<<0, then Jane will intervene. She might just have a lot of confidence in me.
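A toy illustration of that point (the particular values of J, C and p here are made up purely for illustration):

    # Jane's expected values, with J = value of my doing my job,
    # C = value of the crash (negative), p = probability that I warn the pilots.
    def non_intervention_value(p, J, C):
        return p * J + (1 - p) * C

    J = 10
    C = -10_000        # a very bad crash
    p = 0.9999         # Jane is very confident that I will do my job

    # About 9.0, which beats intervention's value of 0: even with C hugely
    # negative, non-intervention wins when p is close enough to 1.
    print(non_intervention_value(p, J, C))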

Suppose now that I don't warn the pilots, and Jane doesn't either, and so there is a crash. Can I conclude that I did the right thing? After all, Jane did the right thing—she is morally perfect—and I did the same thing as Jane, so surely I did the right thing. Not so. For Jane's decision not to intervene may be based on the fact that her intervention would prevent me from doing my job, while my own intervention would do no such thing.

Can I conclude that I was mistaken in thinking Jane to be as smart, as powerful or as good as I thought she was? Not necessarily. We live in a chaotic world. If a butterfly's wings can lead to an earthquake a thousand years down the road, think what an airplane crash could do! And Jane would take that sort of thing into account. One possibility is that Jane saw that it was on balance better for the crash to happen, i.e., C>0. But another possibility is that she saw that C<0, but that it wasn't so negative as to make pJ+(1−p)C come out negative.

Objection: If Jane really is all-knowing, her decision whether to intervene will be based not on probabilities but on certainties. She will know for sure whether I will warn the pilots or not.

Response: This is complicated, but what would be required to circumvent the need for probabilistic reasoning would be not mere knowledge of the future, but knowledge of conditionals of free will that say what I would freely do if she did not intervene. And even an all-knowing being wouldn't know those, because there aren't any true non-trivial such conditionals.