Suppose you chose A over B, but that through minor changes in your circumstances, changes that at most slightly rationally affect the reasons for your decision and that do not intervene in your mental functioning, I could reliably control whether you chose A or whether you chose B. For instance, maybe I could reliably get you to choose B by being slightly louder in my request that you do A, and get you to choose A by being slightly quieter. In that case your choice is in effect random—it is controlled by features that are random from the point of view of your rational decision—and your responsibility is slight.
Now suppose you are a friend of mine. To save my life, you would need to make a sacrifice. There is a spectrum of possible sacrifices. At the low end, you need to spend five minutes in my company (yes, it gets worse than that!). At the high end, you and everybody else you care about are tortured to death. If the required sacrifice is at the low end, of course you'd make it for your friend. If it's at the high end, of course you wouldn't. Now imagine a sequence of cases with greater and greater sacrifice. Once the sacrifice gets too great, you won't make it. Somewhere there is a critical point, a boundary between the levels of sacrifice you would undertake to save my life and the ones you wouldn't. This critical point is where the reasons in favor of the sacrifice and those against it are balanced.
Speaking loosely, as the degree of required sacrifice increases, the probability of your making that sacrifice goes down. The "probability" here is something rough and frequentist, compatible with determinism. If determinism is true, however, in each precise setup around the critical point, there is a definite fact of the matter as to what you would do. And there are two possibilities about your character:
1. You have a neat and rational character, so that you'd make every sacrifice below the critical level and no sacrifice above it.
2. Around the critical value, whether you make the sacrifice comes to be determined not by the degree of sacrifice but by irrational factors—what shoes I'm wearing, how long ago you had lunch, etc.
But surely you would be very praiseworthy for undertaking a great sacrifice to save my life, especially around the critical point. That the sacrifice is so great that we're very near the point where the reasons are balanced does nothing to diminish your responsibility. If anything, it increases your praiseworthiness. Yet on either of the two possibilities, if determinism is true, then around the critical point I could reliably control your choice by minor changes that at most slightly rationally affect your reasons, and so by my starting principle your responsibility there would be slight. Thus determinism is false.
This is not an argument for incompatibilism. I am not arguing here that responsibility is incompatible with determinism. I am arguing that having full responsibility around the critical level is incompatible with determinism.
I don't follow the reasoning here.
Different people will have different critical points. Surely praiseworthiness has to do with how high up the scale your critical point is? And then I do not see that it matters whether I am near my critical point or not, only whether I actually do the sacrifice or not.
But around the critical point, "through minor changes in your circumstances, changes that at most slightly rationally affect the reasons for your decision and that do not intervene in your mental functioning, I could reliably control whether you chose A or whether you chose B". And hence by my starting principle--which perhaps you deny--there is little responsibility.
It seems to me that you are arguing that full responsibility around the critical point is incompatible with being manipulable, and manipulability is understood in terms of counterfactuals: if there is something that I can do such that if I were to do it, you would do X, then you are manipulable with respect to X. But manipulability in this sense is compatible with indeterminism.
First case: Molinism. If there are true counterfactuals about what you would freely do, then someone who knew them could “control” your choices in this counterfactual sense even though those choices are indeterministic.
Second case: suppose you have indeterministic agent causal powers or whatever, but are perfectly rational. Then around some critical point I can “control” whether you accept a certain tradeoff by, say, offering you an extra dollar. But surely that would not impact your responsibility for your action.
Third case: there is powerful empirical evidence that (2) obtains for many people much of the time, but we still (rightly) hold people responsible for their actions under such circumstances.
1: Dean Zimmerman uses this to argue against Molinism.
2: True. But I think that when you choose an option that dominates another option, you don't choose it freely over the other option. Freedom requires incommensurability. So again I welcome the conclusion.
3: I think in real life, the control isn't quite as reliable.