Consider a trolley problem where on both tracks there is exactly one innocent stranger. Alice is driving the trolley. If she does nothing, the trolley will head down the left track. But the right track will get Alice to her destination three minutes sooner. Alice redirects.
It seems that Alice did something wrong. Yet, why? We can say that she intended to save the person on the left track and get to her destination faster, and did not intend to kill the person on the right track. What went wrong?
One option is this: the proportionality condition on Double Effect requires that the chosen course of action have significantly better consequences than the alternative, and three minutes (normally) is not significant.
But that’s probably not right. There are times when it is permissible to redirect a trolley even when the outcome is a bit worse. For instance, suppose we have a trolley setup with one person on each track, except that if the trolley hits the person on the right track, the death will be a bit more painful. The trolley is controlled by the person on the right track. It seems obvious that this person is permitted to redirect the trolley onto themselves, even though the outcome is a bit worse.
Maybe the issue is this. Even though it’s not always wrong to become the non-intentional cause of a grave harm to someone, we have moral reason to avoid becoming such a cause. This fits with our intuitions: we feel really bad when we become such a cause. Murray Leinster’s novel Murder Madness is all about the horror of a drug that makes one involuntarily kill people (I won’t recommend the novel, because of several instances of outrageous racism).
This makes sense from an Aristotelian point of view. For a social organism, helping members of the group is a part of flourishing. This is true for animals that are not moral agents. A meerkat sentinel that saves the group by warning of a danger is thereby flourishing. This is even true in the case of non-intentional cooperative activity. A slime mold that, as part of a stalk, enables reproduction by slime molds that are part of the fruiting body is thereby flourishing. It makes sense, thus, to think that for social organisms harming members of the group is contrary to flourishing whether or not one is morally responsible for the harm, and even when the harm is one that one is not intending.
3 comments:
We often symbolically unite ourselves to things without intending them. If I were watching a livestream of Alice and cheering her on in her decision, it would be a little odd to say that I intended for her to get to her destination sooner, but I still wind up counting her early arrival as a kind of success. In a case like that, I’m not even a cause of what happens. It’s still morally ugly for me to cheer.
My sense about Alice is that the story you told in your first paragraph doesn’t quite tell us enough to infer that she has done something wrong; rather, I think we are filling in some details with our imaginations. Specifically, I think we are imagining that, as Alice makes the decision to redirect, she is failing to give proper attention to the worth of the lives that are on the tracks. In her situation, those lives command her attention. Punctuality does not.
Here is another thought. Sometimes, when terrible decisions have to be made, we intentionally make them by appeal to something of *no* value. For example, we decide who has to stay behind by drawing straws. We think it is more fitting to select in this way than to pick something of small but significant value (prioritizing intelligence or expected years of life remaining or expected contribution to future economic growth). We leave behind the person who draws the shortest straw, because no one could ever be tempted to believe that something so arbitrary could have any bearing on the tragedy of the loss. No one could ever be tempted to think, “well, there’s some consolation that it wasn’t one of the long-straw folks who died.”
The straw observation is very interesting. But my intuition is that in the trolley case, if there is one person on each track, we should not draw straws or flip a coin to decide whether to redirect--we just shouldn't redirect. "Don't cause harm" is a better way to decide than randomness.
Suppose she were in a smart train programmed to kill the person with the lower IQ, but she could push a button that would change its programming to decide by coin flip. Would it be morally wrong for her to push the button? I don’t have a strong sense about that sort of case, but I wouldn’t be inclined to fault her if she did.
Or closer to your example, suppose it was programmed to save her the three minutes on her journey. Would it be wrong to press the button then?