Yesterday, at a prayer service responding to various evils, the chaplain talked of us as living in an unfinished world. I was very struck with this. I think that conceptualizing our world as unfinished can really help with the problem of evil.
The world is in construction. It's a mess. That's what construction sites are like. But we have the privilege of being in on the construction.
There is an argument here that worlds containing gratuitous evil are preferable to worlds that contain none, all else equal. Let w and w' be indiscernible with respect to evil (amounts, kinds, distribution, etc.). Let the evil in w be gratuitous and the evil in w' be justified. I'd prefer to be in w, since at least we can do something about the evil. w is an unfinished world. In w', there is nothing you can do about it.
Isn't the evil made nongratuitous because it serves the purpose of one's being able to do something about it?
I don't think so. But if you do, then imagine worlds in which God does not exist. w' has evil that is justified and w does not. Both have unpurposed evils, but the moral economy in w' is such that each instance of evil in w' is necessary to a greater good. Let w and w' be otherwise indiscernible. I'd prefer to be in w, though all of the evil in w is gratuitous. This points up the fact that there are bad things about worlds without gratuitous evil: in such worlds you are simply stuck with the evil you have. You cannot rid yourself of it without making the world worse.
It seems to me, Dr. Almeida, that if you believed the number of instances of evil were going to be finite, you would want to be in a world where the evil was justified?
I assumed that all else is equal, so for any two worlds w and w' such that the value of each is the same, if E occurs in w and in w', but E is gratuitous in w, I'd prefer to be in w. This is irrespective of the finite or infinite value of those worlds. The assumptions do entail that w has more good unrelated to justifying evil than does w', and the assumptions entail that neither w nor w' is an unsurpassed world. But under the assumptions, I'd prefer w.
Well, as long as gratuitous evil is defined as just something that somehow causes suffering and is not necessary for the occurrence of any greater good or the prevention of any equally bad or worse evil, I think I would prefer a world where there was no justified evil. However, if evil is defined as something that actually exists in opposition to God, I would choose the world where instances of evil are justified, as I would be certain the number of instances of evil will be finite.
I don't know how you arrive at the claim that the number of evils in a God-world must be finite. Most hold (see those who argue against the Repugnant Conclusion) that infinitely many just-perceptible harms are less bad than finitely many serious harms. So you can actually make the world worse by diminishing the number of harms. But all that aside, I have no idea how you came to the conclusion that the number of evils in a God-world would have to be finite.
Thanks for spending time on the subject Dr. Almeida. I thought that fallacious claim up myself.
So now I will try to directly answer your question of how I came to the wrong conclusion that a world of justified evil would only have a finite amount of evil Dr. Almeida. I thought a world of only gratuitous evil sounded like a definition of hell. Surely a world that only experiences justified evil would be better. For if evil can only instantiate in the world if God permits it, then there must have been and will be instances where God did not and will not permit evil. So a world where the only evil is permitted evil must have less than an infinite amount of evil. Here again a world with infinite evil sounds like a definition of hell. So I thought it best to pick a kind of world where the only evil in it is that which God permits for that would be a world with a finite amount of evil. Evidently I need to give more thought to hell.
O.K. I had a chance to consider hell and no, a holy God (like mine) who abhors evil would never create a world in which it were possible for an infinite number of evils to instantiate. Are you claiming there are gods who would?
ReplyDeleteMike:
A lot depends here on how one defines a gratuitous evil. Suppose Molinism and all relevant varieties of compatibilism are false. Then some evils might be justified by expected utilities but not by actual utilities. Furthermore, an evil might itself promote no greater good, but God's permission of the evil might promote a greater good.
Suppose, for instance, that there is value 1000 in your freely preventing evil E, and that the occurrence of evil E has value -2000, while the null hypothesis (no E and no possibility of preventing it) has value 0.
Suppose that you're the only (created) person in a position to prevent E, and you have a propensity 0.8 to prevent it. Then the expected utility of God's leaving it up to you whether to prevent it is (0.8)(1000)+(0.2)(-2000)=+400. But suppose that you don't actually prevent it. Then the evil occurs, with a utility of -2000, and there are no benefits of the evil.
In this case, the evil E itself serves no purpose, but God's non-prevention of it serves a purpose--it serves the purpose of giving you the opportunity to prevent it, an opportunity with expected utility +400.
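The bookkeeping here can be sketched in a few lines. The numbers are the hypothetical values stipulated in the example above, nothing more:

```python
# Hypothetical values from the example above.
value_free_prevention = 1000   # value of your freely preventing E
value_E_occurs = -2000         # value of E's occurring unprevented
value_null = 0                 # no E and no possibility of preventing it

p_prevent = 0.8                # your propensity to prevent E

# Expected utility of God's leaving the prevention of E up to you:
eu_permission = (p_prevent * value_free_prevention
                 + (1 - p_prevent) * value_E_occurs)

print(eu_permission)  # 400.0: permission beats the null option (0),
                      # even though E itself, if it occurs, does no good.
```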
Now, we can either define gratuitousness in terms of the goods the evil itself leads to--and admittedly this is what the literature tends to do--or in terms of the goods that the permission of the evil leads to. And in both cases we have a division as to whether we use actual utilities or expected utilities to define gratuitousness.
Using actual utilities seems to me to be badly mistaken if Molinism and compatibilism are false. For in any case, we will end up comparing actual utilities with expected utilities. Suppose E occurs. Then there is such a thing as the actual utility of E. But there is no such thing as the actual utility of non-E. There is only the expected utility of non-E. (If Molinism is true, there might be an actual utility of non-E, the utility that *would* occur if E didn't.)
But to compare the actual utility of E to the expected utility of non-E is wrongheaded.
So we better compare expected utilities all around. But once we do that, then there can be evils that serve no actual purpose, but that are not gratuitous once one considers expected utilities.
Alex,
These are really interesting comments. I'm dubious of attempts to treat the badness of evils/actions in terms of their expected disutility. It's almost provably mistaken, isn't it? Look, the expected disutility of playing the lottery might be quite high for everyone playing. But we know that the actual outcome is not bad for everyone who is playing. So, it is not true that each person's playing is bad.
The distinction between the value of God's allowing evil E and the value of E is useful. But it is perverse to think that God allows evils in order for us to prevent them. Some think Plantinga (of NN) maintained that God allows evil for freedom's sake, as though the value of freedom outweighed the evils freedom produces. That's just plain false: does the value of freedom outweigh the disvalue of decapitation? From here we are led to adding epicycles to a view that cannot be rescued. Better to abandon it and cut losses.
For those who think that God cannot allow gratuitous evil (I'm not one of them) it's better to think of such evils as being unnecessary to greater goods. And better to understand that as there being no world W in which E does not occur that is better than any world W' in which it does.
So, I say, if W and W' differ with respect to the gratuity of E and are otherwise the same, I prefer the world in which E is gratuitous, since we can do something about E in that world without cost in value.
That is, E is gratuitous iff some world in which E does not occur is better than any world in which it does.
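As a toy formalization of this biconditional (my own sketch, representing worlds simply as (value, evils) pairs; the names and sample values are illustrative, not from the discussion):

```python
def is_gratuitous(E, worlds):
    """E is gratuitous iff some world in which E does not occur is
    better than any (i.e. every) world in which it does occur."""
    with_E = [value for value, evils in worlds if E in evils]
    without_E = [value for value, evils in worlds if E not in evils]
    if not with_E:
        return False  # E occurs in no world, so the question doesn't arise
    # Is some E-free world better than the best E-world?
    return any(v > max(with_E) for v in without_E)

# Illustrative worlds: the E-free world valued 15 beats every E-world.
worlds = [(10, {"E"}), (12, {"E"}), (15, set()), (9, set())]
print(is_gratuitous("E", worlds))  # True
```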
Dr. Almeida you said
'And better to understand that as there being no world W in which E does not occur that is better than any world W' in which it does.'
But you meant to say?
'And better to understand that as there being no world W in which E occurs that is better than any world W' in which it does not.'
Or more correct to go with the stronger statement 'That is, E is gratuitous iff some world in which E does not occur is better than any world in which it does.'?
Or did you mean?
'And better to understand that as there being no world W in which E does not occur that is better than any world W' in which it does.' because in W', an unfinished world, you would have the freedom to do something about the evil.
No, I'm simply defining gratuitous evil as evil that does not occur at all in the better worlds. So, if we suppose there is a best world, then E is gratuitous if it does not occur there. If there are better and better worlds, then E is gratuitous if, as we move up the improving worlds, E no longer occurs in any world. That's just what gratuitous evil is. My other point was that I'd prefer to be in a world with gratuitous evil rather than one without gratuitous evil, all else being equal. I'm assuming that gratuitous evil is preventable evil, though I don't think it always is.
This is a pretty weird attempt at theodicy. How would you apply it to any concrete case? For instance, it surely seems awful to say "I'm privileged to live in a world where ISIL chops people's heads off so we can participate in the construction of a better future by bombing them. Thank God for ISIL!" And repeat with any other historical atrocity you like.
ReplyDeleteHi Eric,
I'm not sure how you arrived at the conclusion that I was offering a theodicy. I didn't say that. Nor did I say that I would be privileged to live in a world with gratuitous evil. I said that, other things being equal, I'd prefer to live in such a world when the alternative does not differ in other important ways.
But then, let me address your example, which I think misses the comparison I was trying to make. So, let W be a decapitating world in which I can prevent the decapitation without loss of a greater good. Let W' be a decapitation world in which I cannot prevent the decapitation without loss of a greater good. Let W and W' be otherwise the same, in particular W and W' are overall equal in value, the value is distributed the same way, etc. My claim was that I'd prefer to live in W because I can (or someone can) prevent the decapitation in W without any overall loss in value to the world.
I'm not sure what is motivating the intuition. Maybe what I dislike is the idea that there are serious evils that we cannot prevent without a greater cost. There might be quite terrible worlds having fully justified evils in them. The fact that they're justified does nothing to make them appealing worlds to inhabit. I'd rather be in similarly horrible worlds that don't make evil so costly to eliminate.
Maybe I can generalize. Take any world W (our world, if you like) and any world W' indiscernible from W except for the justification of evil. All of the evils in W' are justified and none of the evils in W are. I conclude generally that I'd prefer to be in W.
I think this entails that, to me, it would be welcome news to learn that the evils in the actual world are preventable without exorbitant cost. I'd like to think that I am not making the world worse in preventing the evils I do prevent.
Mike:
"But we know that the actual outcome is not bad for everyone who is playing. So, it is not true that each person's playing is bad."
Suppose I have a choice whether to play lottery A, which for $1 gives me a 1/10 chance of winning $5, and lottery B, which for $1 gives me a 1/100 chance of winning $5. I play lottery B and win. Surely I did something bad (for me) here. Yet the expected utility of lottery A is -$0.50 (and of B, -$0.95), while the actual utility of B is $4, which is better.
So we better not compare actual utility with expected utility.
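Worked out explicitly (my arithmetic on the example's stipulated stakes, with the prize and ticket price taken from the lottery case above):

```python
cost = 1.0   # ticket price for either lottery
prize = 5.0  # payout on a win

# Expected utilities of the two lotteries, net of the $1 ticket:
eu_A = (1 / 10) * prize - cost    # -0.50
eu_B = (1 / 100) * prize - cost   # -0.95

# Actual utility of playing B and winning:
actual_B = prize - cost           # 4.00

# A is the better gamble ex ante, yet the actual outcome of B beats
# A's expected utility -- which is why the cross-comparison is wrongheaded.
assert eu_A > eu_B and actual_B > eu_A
```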
If Molinism were true, we could compare actual utility with counterfactual utility. Then I could say that whether I did the good thing depends on whether I would have won had I played A. But Molinism is false.
And in any case, what we want to know with respect to the problem of evil is whether God's decision was justified, not whether it was for the better. And justification goes along with the action-guiding expected utilities, not the actual results.
"But it is perverse to think that God allows evil in order for us to prevent them."
It would indeed be silly to think this is true in every case. But it is quite plausible in some cases. It's not an uncommon thing when training someone to allow them the opportunity to prevent a minor evil rather than doing the preventing ourselves, even though there is a greater risk that the evil won't get prevented if we leave it to them.
Mike - I was referring to Alex's original post, not to your comments, but I should have been clear about that! My fault, so I'm sorry about the confusion. I just find it very difficult to believe that "this world is under construction" can be used as a justification of any sort. Your points about the different classes of worlds are interesting. - Eric
It can surely be used as a justification for *some* bad things. The rewards of building one's house oneself can easily justify *some* bad things that happen to one in the course of construction.
My initial thought is that the rewards of building one's own house may compensate for some bad things, but not justify them. But I'm not sure I've got clear reasons behind this distinction.
Dr. Almeida you say:
'I think this entails that, to me, it would be welcome news to learn that the evils in the actual world are preventable without exorbitant cost. I'd like to think that I am not making the world worse in preventing the evils I do prevent.'
Surely it does not matter, with regards to your actions in preventing evil, if some number of the evils in your world are gratuitous or all justified. What harm would you cause if you can recall that what is required of you is to act justly, love mercy and walk humbly before God?
Alex,
ReplyDeleteSuppose I have a choice whether to play lottery A, which for a $1 gives me a 1/10 chance of winning $5, and lottery B, which for $1 gives me a $1/100 chance of winning $5. I play lottery B and win. Surely I did something bad (for me) here. Yet the expected utility of lottery A is $1 while the actual utility of B is $4, which is better.
This sort of case raises a lot of questions on which, likely, we disagree. For starters, I take future contingents concerning undetermined events to have a truth value. So, despite the chancy facts, it was true at t that you would win, at t+, the game you played at t. That is, there was a fact of the matter at t concerning how the chancy events would turn out at t+. Just as there is a good notion of chance in deterministic worlds (see J. Schaffer on deterministic chance), there are also true future contingents in indeterministic worlds.
So, the point is that it isn't obvious that you did anything wrong in playing the second game.
It seems to me your argument entails that you are assuming both gratuitous evil and justified evil may be preventable, since your concern seems to be that if you prevent a justifiable evil in world W, world W will be a worse world than world W', where a justifiable evil was not prevented, all else being equal.
This would mean a world W with only justifiable evil is not a 'finished' world, but rather a world that has reached its apex of goodness and can only get worse. Preventing a justified evil in this world W would make this world worse than world W', which was indiscernible from W, where the justified evil was not prevented. This would make the prevention of a justified evil in world W a gratuitous evil by definition.
That is, E is gratuitous iff some world in which E does not occur is better than any world in which it does.
Or you mean justifiable evil is not preventable in a world of only justifiable evil and you are stuck with the evils you have. However justifiable evils are preventable in a world where gratuitous evils are possible. Yet only a world with no justifiable evils can attain perfection.
Mike:
I am OK with future contingent facts, but not with (irreducible) counterfactual contingents.
Alex,
I did not appeal to counterfactuals at all. I said that it was true, when you chose to play the second game, that you would win. The fact that you would win is good reason to think that you made the right choice. So it is not obvious that you did anything wrong in choosing to play that game.
I'm incidentally not sure what you'd like to reduce counterfactuals to, unless you have in mind a Lewisian reduction.
Mark:
This would mean a world W with only justifiable evil is not a 'finished' world, but rather a world that has reached its apex of goodness and can only get worse.
No, the world can get better. It just can't get better by preventing evil. That follows directly from just about any view on justified evil (excepting the expected utility view that Alex mentioned).
Mike:
That's what I took "would" to imply.
Take a case where 1000 perfectly rational people sequentially pay $1 for a game where they roll an indeterministic die and get $1000 if it comes up six. On your view, approximately 830 people would be making the wrong decision, because about 830 people would be losing. But once we start saying that in such cases perfectly rational people are making the wrong decision, in the total absence of any misleading evidence or any cognitive or volitional deficiency, the notion of "wrong decision" doesn't seem very useful.
Note also another curiosity about this case. Take some person, say Bob, who plays and loses. Let's say that he made the wrong decision by playing. But in the closest world where he did not play, he also made the wrong decision. For in that world, he gets $0, and fails to play a game that has expected utility approximately $166. In that world, there is no fact like "If I played, I would have lost", since there are no such counterfactuals, I am assuming.
So we have the odd situation where on a view like yours, the agent is deliberating between two choices, and each choice is such that most likely if he makes the choice, he will make the wrong one. That seems wrong.
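A quick simulation of the die game above (a sketch; the random seed and the loose "about 830" range are my own choices, the stakes are from the example):

```python
import random

random.seed(1)  # fixed seed only so this sketch is reproducible

n_players = 1000
cost, prize = 1, 1000

# Each player rolls one fair die and wins the prize only on a six.
losers = sum(1 for _ in range(n_players) if random.randint(1, 6) != 6)

# The game is clearly worth playing: expected utility about $165.67.
eu = (1 / 6) * prize - cost

print(losers)  # typically around 833 of the 1000 players walk away $1 poorer
```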
p.s. I was excluding particular counterfactuals whose truth values can be determined from nomic connections, or via the Curley bigger-bribe trick that Plantinga tries on Adams.
But it's very hard to see how the overall value of a world could be a function of expected (dis)utility. Imagine a world W in which the expected disvalue of most events is extremely high. Suppose further that, as it happens, there is no disvalue in W. Compare W to W' in which there is a lot of unexpected disvalue. It is very difficult to believe that W is worse than W', but your view entails that. People in W might be leading quite nice lives, while people in W' are suffering terribly.
Of people who perform actions that have a high expected disvalue and, against the odds, thereby produce a beneficial outcome, I'd urge that we distinguish act evaluation from agent evaluation. The act is exactly the right one (how could it not be?), since it produced (let's say) a maximally good outcome. But the agent's decision procedure was not a good one: i.e., not one conducive to producing good consequences.
FWIW, I'm inclined to more plastic views of rationality (cf. Gauthier's views) on which what it is rational to do is what produces the greatest value. This does famously run contrary to causal theories, which would (bizarrely!) have you do things you know will produce a worse outcome. I'd like to see a causal theorist make a decision on which his life depended; I'm sure they'd see the error of their ways.
I think it would clear things up if we were clear about whether worlds were (a) time-slices, or possibly growing-block type incomplete histories, or (b) complete histories. Call these ‘incomplete’ vs. ‘complete’ worlds.
In the incomplete cases, possibly there is something to Mike’s suggestion that evils one can do something about are preferable to evils one cannot do anything about. After all, in the yet-to-be-completed part of the history, I can do something about them! But I don’t see any interest at all in being incarnated in a complete world-history in which there are preventable but unprevented evils. By hypothesis I won’t do anything about them, nor will anyone else.
For example, suppose I go rock-climbing with my son and he unpreventably falls off a cliff. That’s bad. Is it better or worse than the case where he falls off a cliff, but if I make a desperate, life-threatening lunge I can save him? Maybe the latter is better, because I can prevent his death. It's better, that is, if I go on to save him. But if through cowardice I make no such lunge and watch my son plunge to his death when I could have prevented it, I would say that’s much worse and I would not like to live in that world at all.
I would also think this judgment fits well with Mike’s idea that it is actual utility rather than expected utility that matters for evaluation. The actual utility of a world with preventable but unprevented evils is less than the actual utility of a world with no such evils but otherwise similar. So I should prefer the latter. Right?
Mike:
"it's very hard to see how the overall value of a world could be a function of expected (dis)utility"
Sure, but on incompatibilist indeterministic non-Molinist views, God isn't choosing between possible worlds. Say that the core of a world w is that portion of w that is strongly actualized by God (if worlds are sets of propositions, then the core of a world is all propositions p such that God strongly brings it about that p is true).
Then, at least to a first approximation, what God is doing is choosing between world-cores. But the value of a world-core is very much like an expected value.
Heath you say: 'The actual utility of a world with preventable but unprevented evils is less than the actual utility of a world with no such evils but otherwise similar. So I should prefer the latter. Right?'
Perhaps, but I think Mike's salient point was that the actual utility of a world with preventable but unprevented gratuitous evils is greater than the actual utility of a world with no such evils but rather unpreventable justified evils, the worlds being otherwise similar. At least he could do something about the evils in the former world.
A concern of Mike's is that we may live in a world where it is possible to prevent both gratuitous evil and justified evil. The possibility would then exist that your son might later grow up to be a Hitler incarnate and slaughter millions of people. In which case it would have been better if he had never been born. Better to go with expected utilities?
Sure, but on incompatibilist indeterministic non-Molinist views, God isn't choosing between possible worlds
Right, I agree. God's choice would likely be by expected utility. But what does that have to do with the claim that the value/disvalue of a world is a function of its expected (dis)utility? All we get from the assumptions is that God's choice of a world is not based on its actual value. So, God might have chosen differently, had he been in a better epistemic position.
Of course, what God could do is will the creation of W, then see whether he likes that world. If not, he wills that W come to an end. He then wills W', and sees whether he likes it. And so on.
I am not sure I am grasping the issues here, but is this it?
Suppose the right metaphysics is “incompatibilist indeterministic non-Molinist views” as Alex says, which is to say open theism or simple foreknowledge, which we can summarize as “risk-taking” views. God creates what Alex calls “world cores” and what I called incomplete histories. God does not know exactly what he is getting himself into when he does so, so he makes his creative choice by something like expected utility.
Mike does not want to live in a world in which he has to worry that, if he prevents some evil, he is making things worse than they otherwise would be. And this could be a possibility if the right metaphysics were risk-taking. God might have chosen to go with a world-core whose expected utility was U, which included as part of a greater-good package some evil E. Mike prevents that evil E, thinking he is doing the right thing, but what happens is the actual utility of the world becomes less than U. God says, “Rats! Mike screwed it up!” which is a pretty odd thing to have a perfectly good God say.
The risk-taking God could avoid this possibility, however, in either of two ways. (1) All such greater-good E’s are inexorable, that is, unpreventable by human free choices or other chancy items. This seems unlikely because so many evils are caused by human free choices, and hopefully many of them lead to greater goods. (2) God has a Plan B for every such greater-good E. If E occurs, GG1 will follow, while if E is prevented, GG2 will follow. If it makes someone feel better, stipulate that GG1 is not greater than GG2. Think of it as a world-design with tons of fail-safe mechanisms in place for when people screw things up. So, Mike, God has you covered.
Such worlds would have only justified evil—-any evil contributes to a greater good which justifies it—-but the evil could also be gratuitous in the sense that better worlds would not contain it (e.g. if Mike had not screwed up). God would probably choose on the basis of worst-case utility rather than expected utility. The whole arrangement would only be justified, I suspect, if it were very valuable to have a risk-taking scheme in the first place, i.e. only if the worst risk-taking outcome is better than the best no-risk outcome (the heart of the free will defense).
But I don’t see any interest at all in being incarnated in a complete world-history in which there are preventable but unprevented evils
Heath, I don't understand what you mean. Are you imagining being created in a world whose history has come to an end? I'm not even sure I understand that.
It is perhaps easiest just to consider the actual world, @. If I had a choice between @ being a world wherein I can prevent evil without cost (indeed, with gain!), or a world wherein I can prevent evil only with equal or greater cost in value, I'd prefer the former. So, it would be good news to me to learn that @ includes nothing but gratuitous evil. I get, for my part, absolutely no satisfaction from the thought that Jim's or Bob's or Sue's cancer or heart disease or whatnot has some greater value associated with it that serves to justify it.
Mike:
You brought in the comparison of worlds when you defined gratuitous evil. But since the point of bringing in the notion of gratuitous evils is to evaluate whether the evils we observe are such as would exist in a God-created world, if God would decide on the basis of something like expected utilities, our notion of gratuitous evils needs to talk about something like expected utilities.
Alex,
We are running together two discussions, I think. First, there's the question of how to analyze gratuitous evil. You want to do that via expected utility. But I'm not sure how you want that to go. E is a gratuitous evil iff the expected utility of E is high, but the actual utility of E is low? Or, iff the expected utility of E is low? Second, there is the question of what makes a world valuable. One answer is that the expected utility of creating the world is high. Another answer is that the actual utility of creating the world is high. I was talking about this latter distinction. Worlds with high expected utility might nonetheless be miserable worlds. I find it counterintuitive that you would call such a world valuable.
(1) All such greater-good E’s are inexorable, that is, unpreventable by human free choices or other chancy items. ... (2) God has a Plan B for every such greater-good E. If E occurs, GG1 will follow, while if E is prevented, GG2 will follow.
Heath,
Complicated! What sort of evil E is such that, necessarily, GG1 --> E and, necessarily, ~E --> GG2, where GG1 is not greater than GG2?
If God brings about GG1, then there's no preventing E. If God does not bring about GG1, then E is gratuitous. I prefer the latter.
Mike:
I don't know how to characterize gratuitous evil. Maybe to a first approximation: an evil E is gratuitous iff the expected utility of allowing E is less than the expected utility of some option where E is not allowed?
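This first approximation can be stated as a one-liner (a sketch of my own; the function name and the sample numbers are illustrative, not from the discussion):

```python
def gratuitous_by_eu(eu_allow_E, eu_other_options):
    """E is gratuitous iff the expected utility of allowing E is less
    than the expected utility of some option on which E is not allowed."""
    return any(eu > eu_allow_E for eu in eu_other_options)

# With the numbers from the earlier prevention example: allowing E had
# expected utility 400 and the null option 0, so E comes out
# non-gratuitous on this test.
print(gratuitous_by_eu(400, [0]))  # False
```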
I agree that worlds with high expected utility might be miserable.
Mike,
The idea would be that God plans, “If Eve doesn’t eat the apple I will give her a nice garden to live in. And if she does eat the apple then I will send my son to redeem humanity.” Alex had a post on this a long time ago which I cannot now find.
Alex too,
I believe the original idea of ‘gratuitous evil’ was this. Mackie said that a perfectly good being would prevent any evil it could. Plantinga pointed out that this was not true: such a being might permit some evil in order to bring about a greater good, or to prevent a worse evil. Subsequent formulations of the problem of evil, e.g. Rowe, have modified Mackie’s claim to say that a perfectly good being would prevent all evil unless permitting it would bring about a greater good or prevent a worse evil. Gratuitous evil is evil that doesn’t fall into this category.
Nobody ever thought about expected utility because this would only differ from actual utility if there were something God didn’t know, i.e. he was not omniscient at the point of decision-making. At least, that is the obvious thought.
Heath:
Simple foreknowledge views don't deny omniscience.
In any case, even given Molinism or Calvinism, the standard definitions of gratuitous evils are tricky to fix up.
Take the version you give: "a perfectly good being would prevent all evil unless permitting it would bring about a greater good or prevent a worse evil. Gratuitous evil is evil that doesn’t fall into this category."
There need be nothing wrong about permitting an evil E when preventing it would bring about an equal evil.
Likewise, there need be nothing wrong with failing to prevent an evil E when permitting it would have a combination effect, bringing about a good G that is 2/3 as good as E is bad, and preventing an evil F that is 2/3 as bad as E.
Then there is the trickiness in "this evil". How do we identify evils across worlds? And the related trickiness in interpreting the subjunctive conditionals. (I think nothing of philosophical importance should be defined in terms of subjunctive conditionals. :-) )
Alex,
I'm not saying it's not tricky! I'm saying that's the standard use of the term "gratuitous evil" in the literature.
“If Eve doesn’t eat the apple I will give her a nice garden to live in. And if she does eat the apple then I will send my son to redeem humanity.”
In this case, Eve's eating the apple is a gratuitous evil. God could provide the good of the garden even if she does eat the apple.
As gratuitous evils are typically understood, E is a gratuitous evil iff there is no good G such that (i) G entails E and (ii) (G & E) is better than (~G & ~E). For Mackie, God's omnipotence entails that God can bring about the goods G without the evils E (though he hedges a little), so all evil is gratuitous. This is where Plantinga and Mackie are at cross purposes. If God can do the impossible, as Mackie suggests early in his essay 'Evil and Omnipotence', then of course the FWD fails. But Mackie in The Miracle of Theism has effectively conceded nearly everything to Plantinga.
Plantinga's evils in FWD are, on his way of viewing them, not gratuitous, since they cannot be prevented by God without loss of a greater value (though they can be prevented by us without loss of value). I think this is a hopeless way to understand FWD, but that's another question.