I want to argue for this thesis:
- (1) For a punishment P for a fault F to be right, P must stand in a causal-like relation to F.
What is a causal-like relation? Well, causation is a causal-like relation. But there is probably one other causal-like relation, namely when because of the occurrence of a contingent event E, God knows that E occurred, and this knowledge in turn explains why God did something. This is not exactly causation, because God is not causally affected by anything, but it is very much like causation. If you don't agree, then just remove the "like" from (1).
Thesis (1) helps explain what is wrong with punishing people on purely statistical grounds, such as sending a traffic ticket to Smith on the grounds that Smith has driven 30,000 miles in the last five years and anyone who drove that amount must have committed a traffic offense.
Are there other arguments for (1)? I think so. Consider forward-looking punishment, where by knowing someone's present character you know that they will commit some crime in ten days, so you punish them now. (I assume that they will commit the crime even if you do not punish them.) Or, even more oddly, consider circular forward-looking punishment. Suppose Alice has such a character that it is known that if we jail her, she will escape from jail. But assume that in our society an escape from jail is itself a crime punishable by jail, and that Alice is not currently guilty of anything. We then jail her, on the grounds that she will escape from jail, for which the punishment is our now jailing her.
One may try to rule out the forward-looking cases on the grounds that instead of (1) we should hold:
- (2) For a punishment P for a fault F to be right, P must come after F.
But that’s not right. Simultaneous causation seems possible, and it does not seem unjust to set up a system where a shoplifter feels punitive pain at the very moment of the shoplifting, as long as the pain is caused by the shoplifting.
Or consider this kind of case. You know that Bob will commit a crime in ten days, so you set up an automated system that will punish him at a preset future date. It does not seem to be of much significance whether the system is set to go off in nine days or in eleven.
Or consider cases where Special Relativity is involved and the punishment occurs at a location distant from the criminal. For instance, Carl, born on Earth, could be sentenced to public infamy on Earth for a crime he commits around Alpha Centauri. Suppose that we have prior knowledge that he will commit the crime on such-and-such a date. If (2) is the right principle, when should we make him infamous on Earth? Presumably after the crime. But in what reference frame? That seems a silly question. And it is silly, because (2) isn't the right principle; (1) is better.
Objection: One cannot predict what someone will freely do.
Response: One perhaps cannot predict with 100% certainty what someone will freely do, but punishment does not require 100% certainty.
3 comments:
I have very different moral intuitions concerning Bob. I think it is just to punish Bob on day 9 only if you have absolute certainty that Bob will commit the crime on day 10. (I mean something like God reveals it to you and you know with absolute certainty that it is God revealing it to you). Punishment does not require 100% certainty, but only after the crime has occurred, right?
I think it is unjust to set up the automated system to punish Bob, whether it punishes him on day nine or on day eleven. And I think that Bob could rightly complain as follows: “You did not give me the chance to do the right thing! You treated me like a being that isn’t free and responsible for his own actions.” Concerning cases where you have 100% certainty, I think a distinction needs to be made. Perhaps there are cases where you know with probability 1.00 that Bob will commit the crime, but it is still possible for Bob not to commit the crime. In that case, it is still wrong to set up an automatic system to punish Bob on day eleven, because you never give Bob the chance to do the right thing. (‘Chance’ is here used in the colloquial sense, not as a measure of probability.) But if you know with 100% certainty because God has seen Bob commit the crime and reveals it to you ahead of time, then the punishment seems to stand in the causal-like relationship that Pruss is talking about, because your foreknowledge depends (or at least depends*) on the criminal act itself. So the automated system to punish Bob on day eleven might be just. But in that case, I don’t think it could be just to set up the system to punish Bob on day nine, because Bob himself wouldn’t have the same sort of knowledge that he was guaranteed to commit the crime. (And I think it might actually be unjust for God—and therefore impossible—to reveal such knowledge to Bob, because it would rationally commit Bob to despair.)
A different thought. In typical Gettier cases, the truth-maker doesn’t stand in the right sort of causal relationship with the justified, true belief. If someone is convicted of a crime on the basis of the testimony of people with Gettier beliefs, principle (1) implies that the punishment is not right. And that seems to be correct; anyone convicted in this way would deserve a re-trial. So it is some evidence in favor of (1).