Suppose that we have a minor parking infraction that, in justice, deserves about a $40 fine. Let’s suppose the infraction is so clear when it occurs that there can be no reasonable appeal. The local authorities used to levy a $40 fine each time they saw a violation. To reduce administrative costs, they raised the fine to $200, but now each time the parking enforcement officer sees a violation he tosses a twenty-sided die, and levies the fine only if the die shows a one.
A one in twenty chance of losing $200 is a much better deal than a certainty of losing $40. So it seems that the treatment of violators is less harsh under the new system, and one can’t complain about it. But tell that to the person who gets the $200 fine. It seems unjust to impose a $200 fine for an infraction that in justice deserves only a $40 fine. But how can it be unjust when this is a better deal?
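To make the comparison explicit (assuming a fair die and judging the deal by expected dollar loss):

$$\frac{1}{20} \times \$200 = \$10 \;<\; \$40,$$

so by the expected-money standard the stochastic system is four times gentler than the old one, which is just what makes the felt injustice puzzling.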
Here’s a possibly related case. Suppose you leave your wallet lying around and I take $10 out of your wallet and put back $20. You can complain about my handling of your possessions, but it’s weird to call me a thief, unless there was something special about that $10 bill. But what if I take $10 out of your wallet, roll a six-sided die, and put back $2000 if and only if the die shows a one? A one in six chance of $2000 is a way better deal than a certainty of $20. But if I end up putting nothing back, then I’m clearly a thief.
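The same arithmetic applies here (again assuming a fair die and the expected-money standard):

$$\frac{1}{6} \times \$2000 \approx \$333 \;>\; \$20,$$

so, net of the $10 I take, the lottery leaves you about $323 better off in expectation, against a sure gain of $10.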
In both cases, we have two ways of treating someone, A and B. Treatment B is clearly a better deal for the patient than treatment A, and treatment A is not unjust to her. It seems to follow that we can impose treatment B in place of treatment A. But no!
It’s not, I think, just that people evaluate risk differently. For I think that the judgment that the randomized deal imposes an injustice remains even if we know the patient would have opted for the randomized deal had she been asked. The mere fact that you would have been happy to trade a certain $20 for a one in six chance of winning $2000 does not give me the right to play that lottery with your money on your behalf. Consent would need to be actually given, not merely hypothetical.
There seems to be an interesting lesson here: choices have a value that isn’t merely epistemic. The value of having people make their own choices is not just that it lets us find out what is best for them, or even what is best for them by their own lights. Another lesson is that it seems to matter that A is better in some respect (that of certainty) even if B is better in every other respect.
But the above line of thought neglects a complication. While most people would be happy to get the one in six chance of winning $2000 in place of $20, most people would rather that such substitutions not be made without their being consulted. Perhaps that’s the relevant hypothetical question: Would you be happy to have such substitutions made without consultation? Suppose the answer is “yes”. Is it clear that it’s wrong for me to make the substitution without asking you?
I am inclined to think it’s still wrong, unless you indicated in some way that you want substitutions of such a sort made for you.
My thought is that this is related to contractualist vs. consequentialist ethics.
The aggregate utility of parking violators is higher under the stochastic system. And since fines are handed out randomly, the expected utility of each individual parking violator is higher under the stochastic system. So what could the objection to this system be? None, from a consequentialist standpoint.
But now consider it from a social-contract point of view. Here it gets tricky, because it matters how you describe the contractors. If the contract is literally made before knowing who the violators are (say, the townspeople vote for the new stochastic system in a referendum) then there seems to be no issue about justice. On the other hand, when the contract is between those who get fortunate dice rolls and those who get unfortunate dice rolls (“tell that to the person who gets the $200 fine”) then there does seem to be a justice issue. Because THAT person (who gets the $200 fine) would never accept the contract, under that description of herself.
What if the new law is passed well before it begins to be enforced, it is widely promulgated, and everyone understands its virtues and regards it as an improvement? Then I am inclined to say you have widespread consent to a hypothetical contract, and the justice issue doesn’t seem pressing.
I’m not sure this nails down the whole problem. But the problem seems to arise when the fine-payer thinks of herself as “the person who has to pay $200 instead of $40” rather than “a person participating in a stochastic system that allocates a one-in-twenty chance of a $200 fine in place of a certain $40 fine”.
I think such stochastic stories may undercut some of the plausibility of social-contract pictures of justice. Take an extreme case. A rational agent is likely to be willing to risk a one in a billion chance of death to save $20. So, suppose that instead of a $20 fine, we kill one in a billion parking violators. If the system is worldwide, we'll end up killing someone pretty soon. But to kill someone for a parking violation is clearly unjust. Yet one could get rational agreement to this system.
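A rough sketch of why rational agreement is plausible (assuming risk-neutral expected-value reasoning; the $10{,}000{,}000 life-valuation below is merely an illustrative figure): at a valuation $V$ of one’s life, the one in a billion risk costs $10^{-9}V$ in expectation, e.g.

$$10^{-9} \times \$10{,}000{,}000 = \$0.01,$$

a penny against a $20 saving, so the trade is favorable at any valuation below $\$20 \times 10^{9} = \$2 \times 10^{10}$. And with $N$ violations worldwide, the expected number of people killed is $N \times 10^{-9}$: a billion violations already yields one expected death.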
(In line with an earlier post, it may matter if the system is automated or human-operated. But I am assuming a human-operated system here.)