
Tuesday, September 6, 2022

Trolleys and chaos

Suppose that determinism is true and Alice is about to roll a twenty-sided die to determine which of twenty innocent prisoners to murder. There is nothing you can do to stop her. You are in Alice’s field of view. Now, a die roll, even if deterministic, is very sensitive to the initial conditions. A small change in Alice’s throw is apt to affect the outcome. And any behavior of yours is apt to affect Alice’s throw. You frown, and Alice becomes slightly tenser when she throws. You smile, and Alice pauses a little, wondering what you’re smiling about, and then she throws differently. You turn around so as not to watch, and Alice grows annoyed or pleased, and her throw is affected.

Since a perturbed throw is roughly equally likely to land on any of the twenty faces, there is only about a one in twenty chance that the same prisoner ends up selected. So it’s quite reasonable to think that whatever you do has a pretty good chance, indeed close to a 95% chance, of changing which of the prisoners will die. In other words, with about 95% probability, each of your actions is akin to redirecting a trolley heading down a track with one person onto a different track with a different person.
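Here is a minimal sketch of that estimate in Python, on the toy assumption that the face shown is a fast-varying function of the throw speed; the model and all its constants are merely illustrative, not part of the argument:

```python
import random

def die_face(speed):
    # Toy chaotic model: the face shown is a fast-varying function of
    # the initial throw speed, so a tiny perturbation effectively
    # re-randomizes the outcome over the twenty faces.
    return int(speed * 1e9) % 20

def prob_outcome_changes(trials=100_000):
    changed = 0
    for _ in range(trials):
        speed = random.uniform(1.0, 2.0)    # Alice's unperturbed throw
        nudge = random.uniform(1e-6, 1e-5)  # your tiny influence on her
        if die_face(speed + nudge) != die_face(speed):
            changed += 1
    return changed / trials

print(prob_outcome_changes())  # about 0.95, i.e. roughly 19/20
```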

Some people—a minority—think that it is wrong to redirect a trolley heading for five people to a track with only one person. I wonder what they could say should be done in the Alice case. If it’s wrong to redirect a trolley from five people to one person, it seems even more wrong to redirect a trolley from one person to another person. So since any discernible action is likely to effectively be a trolley redirection in the Alice case, it seems you should do nothing. But what does “do nothing” mean? Does it mean: stop all external bodily motion? But stopping all external bodily motion is itself an effortful action (as anybody who played Lotus Focus on the Wii knows). Or does it mean: do what comes naturally? But if one were in the situation described, one would likely become self-conscious and unable to do anything “naturally”.

The Alice case is highly contrived. But if determinism is true, then it is very likely that many ordinary actions affect who lives and who dies. You talk for a little longer to a colleague, and they start to drive home a little later, which has a domino effect on the timing of people’s behaviors in traffic today, which then slightly affects when people go to sleep, how they feel when they wake up, and eventually likely affects who dies and who does not die in a car accident. Furthermore, minor differences in timing affect the timing of human reproductive activity, which is likely to affect which sperm reaches the ovum, which then affects the personalities of people in the next generation, and eventually affects who lives and who dies. Thus, if we live in a deterministic world, we are constantly “randomly” (as far as we are concerned, since we don’t know the effects) redirecting trolleys between paths with unknown numbers of people.

Hence, if we live in a deterministic world, then we are in trolley situations all the time. If we think that trolley redirection is morally wrong, then we will be morally paralyzed all the time. So, in a deterministic world, we had better think that it’s OK to redirect trolleys.

Of course, science (as well as the correct theology and philosophy) gives us good reason to think we live in an indeterministic world. But here is an intuition: when we deal with the external world, it shouldn’t make a difference whether we have real randomness or the quasi-randomness that determinism allows. It really shouldn’t matter whether Alice is rolling an indeterministic die or a deterministic but unpredictable one. So our conclusions should apply to our indeterministic world as well.

Friday, August 7, 2020

A value asymmetry in double effect reasoning

The Knobe effect is the finding that people judge cases of good and bad foreseen effects asymmetrically with respect to intention: in cases of bad effects, they tend to attribute intention, but not in cases of good effects.

Now, this is clearly a mistake about intention: there is no such asymmetry. However, I wonder if there isn’t a real asymmetry in the value of the actions. Simplify by considering actions that have exactly one unintended side-effect, which is either good or bad. My intuition says that an action’s having a foreseen bad side-effect, even when that side-effect is unintended and the action is justified by Double Effect, makes the action less valuable. But on the other hand, an action’s having a foreseen good side-effect, when that side-effect is unintended, doesn’t seem to make the action any better.

Let me try to think through this asymmetry intuition. I would be a worse person if I intended the bad side-effect. But I would be a better one if I intended the good side-effect. My not intending the good side-effect is a sign of vice in me (as is clear in the standard Knobe case, where the CEO’s indifference to the environmental benefits of his action is vicious). So not only does the presence of an unintended good side-effect not make the action better, it makes it worse. But so far there is no asymmetry: the not intending of the bad is good and the not intending of the good is bad. The presence of a good side-effect gives me an opportunity for virtue if I intend it and for vice if I fail to intend. The presence of a bad side-effect gives me an opportunity for vice if I intend it and for virtue if I fail to intend.

But maybe there still is an asymmetry. Here are two lines of thought that lead to an asymmetry. First, think about unforeseen, and even unforeseeable, effects. Let’s say that my writing this post causes an earthquake in ten years in Japan by a chaotic chain of events. I do feel that’s bad for me and bad for my action: it is unfortunate to be the cause of a bad, whether intentionally or not. But I don’t have a similar intuition on the good side. If my writing this post prevents an earthquake by a chaotic chain of events, I don’t feel like that’s good for me or my action. So perhaps that is all that is going on in my initial value asymmetry: there is a non-moral disvalue in an action whenever it unintentionally causes a bad effect, but no corresponding non-moral value when it unintentionally causes a good effect, and foresight is irrelevant. But my intuitions here are weak. Maybe there is nothing to the earthquake intuition.

Second, normally, when I perform an action that has an unintended bad side-effect, that is a defect of power in my action. I drop the bombs on the enemy headquarters, but I don’t have the power to prevent the innocents from being hit; I give my students a test, but I don’t have the power to prevent their being stressed. The action exhibits a defect of power and that makes it worse off, though not morally so. Symmetry here would say that when the action has an unintended good side-effect, then it exhibits positive power. But here exactly symmetry fails: for the power of an action qua action is exhibited precisely through its production of intended effects. The production of unintended effects does not redound to the power of the action qua action (though it may redound to its power qua event).

So, if I am right, an action is non-morally worse off, worse off as an exercise of power, for having an unintended bad effect, at least when that bad side-effect is unavoidable. What if it is avoidable, but I simply don’t care to avoid it? Then the action is morally worse off. Either way, it’s worse off. But this is asymmetric: an action isn’t better off as an exercise of power by having an unintended good effect, regardless of whether the good side-effect is avoidable or not, since power is exhibited by actions in fulfilling intentions.

Tuesday, September 13, 2016

How a blog radically changes the world forever

On any given day, one in 30,000 Americans will conceive a child. So, roughly, there is a one in 60,000 chance that someone you (I'll just assume you're in the US for convenience) are interacting with will be conceiving a child later that day (the halving reflects that, on average, the conception is as likely to precede the interaction as to follow it). Any interaction you have with a person who will be conceiving a child later that day is likely to affect the exact time of conception, and it seems very likely that varying the time of conception will vary the genetic identity of the child conceived. However, there might be some "resetting" mechanisms throughout the day, mediated by the way our days are governed by times of meetings and so on, and so not every interaction will change the time of conception. So let's say that one in four interactions with someone who will be conceiving a child later that day will vary who will be conceived (or whether anyone will be). That means that one in 240,000 interactions we have with people affects who will be conceived on that day.
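In code, with the rough figures above (guesses, not measured data), the back-of-the-envelope arithmetic is just:

```python
p_conceives_today = 1 / 30_000             # a given American conceives today
p_conceives_later = p_conceives_today / 2  # ...and later than your interaction
p_timing_shifted = 1 / 4                   # interactions that shift the timing

p_identity_change = p_conceives_later * p_timing_shifted
print(1 / p_identity_change)  # 240000.0: one interaction in 240,000
```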

Once one has affected who will be conceived that day, as long as the human race survives long enough, eventually just about everyone's genetic identity will be affected by one's actions. For, obviously, that conceived individual's own children's genetic identity will be affected. But that individual will interact with others, affecting the romantic decisions or at least times of conception of others, for instance. It seems quite safe to suppose that that individual's interactions over a lifetime will affect the genetic identity of ten individuals. Given an interconnected world like ours, it seems reasonable to suppose that in 20 generations, almost everyone's genetic identity will be affected (maybe there will be some isolated communities that won't be affected--but I think this is unlikely).

Counting a generation as 30 years, a blog that has 240,000 hits per year, running over a single year, will affect the genetic identity of almost everyone within 600 years (20 generations). And this, in turn, will affect all vastly morally significant things where individuals matter: the starting of wars, the inventing of medical treatments, etc.

It is very likely, then, that the long-term effects of such a blog in terms of reshaping the world population vastly exceed whatever good and ill the blog does to the readers in the way proper to blogs. After all, one more or one less warmongering dictator and we have millions of people killed or not killed. So the kinds of considerations one brings to bear on the question whether to have a blog--how will it affect my readers, etc.--are swamped by the real variation in consequences. (Assuming Judgment Day is still hundreds of years away.)

If we are not to be paralyzed in our actions, we need to bracket such great unknowns, even though we know they are there and that they matter more than the knowns on the basis of which we make our decisions!

Wednesday, February 18, 2015

Mysterious thy ways

Imagine an ordinary decent person who is omniscient. Her actions are going to be rather different from what we expect. She would take what would to us be big risks for the sake of small gains, simply because for her there is no risk at all. Her stock portfolio is apt to be undiversified and quite strange. If we live in a chaotic world, then she might from time to time be doing some really odd things, like hopping on one leg in order to prevent an earthquake a thousand years hence. There would be bad things she would refrain from preventing because she saw further than we do into the consequences, and good things she would avoid for similar reasons.

Now add to this that the person is omnipotent. And morally perfect. These additions would presumably only make the person stranger to us in behavior.

Tuesday, February 17, 2015

The mystical security guard

One objection to some solutions to the problem of evil, particularly to sceptical theism, is that if such great goods flow from evils, then we shouldn't prevent evils. But consider the following parable.

I am an air traffic controller and I see two airplanes that will collide unless they are warned. I also see our odd security guard, Jane, standing around and looking at my instruments. Jane is super-smart and very knowledgeable, to the point that I've concluded long ago that she is in fact all-knowing. A number of interactions have driven me to concede that she is morally perfect. Finally, she is armed and muscular, so she can take over the air traffic control station at a moment's notice.

Now suppose that I reason as follows:

  • If I don't do anything, then either Jane will step in, take over the controls and prevent the crash, or she won't. If she does, all is well. If she doesn't, that'll be because in her wisdom she sees that the crash works out for the better in the long run. So, either way, I don't have good reason to prevent the crash.
This is fallacious, as it assumes that Jane is thinking of only one factor, the crash and its consequences. But the mystical security guard, being morally perfect, is also thinking of me. Here are three relevant factors:
  • C: the value of the crash
  • J: the value of my doing my job
  • p: the probability that I will warn the pilots if Jane doesn't step in.
Here, J>0. If Jane foresees that the crash will lead to on balance goods in the long run, then C>0; if common sense is right, then C<0. Based on these three factors, Jane may be calculating as follows:
  • Expected value of non-intervention: pJ+(1−p)C
  • Expected value of intervention: 0 (no crash and I don't do my job).
Let's suppose that common sense is right and C<0. Will Jane intervene? Not necessarily. If p is sufficiently close to 1, then pJ+(1−p)C>0 even if C is a very large negative number. So I cannot infer that if C<0, or even if C<<0, then Jane will intervene. She might just have a lot of confidence in me.
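To make this concrete, here is the calculation with some made-up numbers (the values of p, J and C below are assumed purely for illustration):

```python
def ev_nonintervention(p, J, C):
    # With probability p I warn the pilots (no crash, and I do my job,
    # which is worth J); with probability 1 - p there is a crash, worth C.
    return p * J + (1 - p) * C

# Even a hugely negative C is outweighed if Jane is sufficiently
# confident that I will do my job.
print(ev_nonintervention(p=0.999, J=1, C=-500))  # 0.499 > 0
print(ev_nonintervention(p=0.9, J=1, C=-500))    # -49.1 < 0
```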

Suppose now that I don't warn the pilots, and Jane doesn't either, and so there is a crash. Can I conclude that I did the right thing? After all, Jane did the right thing—she is morally perfect—and I did the same thing as Jane, so surely I did the right thing. Not so. For Jane's decision not to intervene may be based on the fact that her intervention would prevent me from doing my job, while my own intervention would do no such thing.

Can I conclude that I was mistaken in thinking Jane to be as smart, as powerful or as good as I thought she was? Not necessarily. We live in a chaotic world. If a butterfly's wings can lead to an earthquake a thousand years down the road, think what an airplane crash could do! And Jane would take that sort of thing into account. One possibility is that Jane saw that it was on balance better for the crash to happen, i.e., C>0. But another possibility is that she saw that C<0, but that it wasn't so negative as to make pJ+(1−p)C come out negative.

Objection: If Jane really is all-knowing, her decision whether to intervene will be based not on probabilities but on certainties. She will know for sure whether I will warn the pilots or not.

Response: This is complicated, but what would be required to circumvent the need for probabilistic reasoning would be not mere knowledge of the future, but knowledge of conditionals of free will that say what I would freely do if she did not intervene. And even an all-knowing being wouldn't know those, because there aren't any true non-trivial such conditionals.