Voting involves compromise on two levels. On the ground level, a vote involves coming to a compromise decision. But on the meta level, a voting system embodies a compromise between different desiderata. Arrow's Theorem is a famous way of seeing the latter point. But there is another way of seeing it, one that in a certain respect goes beyond Arrow's Theorem: Arrow's Theorem applies only where there are three or more options, whereas what I say applies even in binary cases.
We suffer from both epistemic and moral limitations. Good voting systems are a way of overcoming these: they combine the information we offer in such a way that no small group of individuals, suffering as it may be from epistemic or moral shortcomings, has too much of a say. Interestingly, there is an inherent tension between overcoming the epistemic limitations and overcoming the moral ones.
Consider two models. On both models, a collection of options is offered to a population; each voter submits a report for each option, the reports for each option are averaged, and the option with the highest average wins.
- Model 1: Each voter comes up with her honest best estimate of the total utility of each option, and offers a report of her estimate.
- Model 2: Each voter comes up with her honest best estimate of the utility for her of each option, and offers a report of her estimate.
Assuming that whatever people say in a vote is somehow based on their estimates of utility on the whole or utility to them, this averaging system is the best way to leverage the information scattered through the population. Unfortunately, while averaging is a good way to overcome our epistemic limitations, it does terribly with regard to our moral limitations. If one lies boldly enough, namely by reporting utility estimates far more inflated than anybody else's, one controls the outcome of the vote. Say option 2 is the best one for me. Then I simply report that the utility of option 2 is 10^100000000 and that of option 1 is −10^100000000. Of course, if there is more than one dishonest member of the population, there will be an arms race to report ever bigger numbers. But in any case, the dishonest will win.
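To make this concrete, here is a minimal sketch of the averaging system and of how a single wildly inflated report hijacks it. The voters, options, and numbers are toy inventions of mine, and I use 10^100 instead of 10^100000000 only so that the arithmetic stays runnable:

```python
# A minimal sketch of the averaging system: average each option's
# reported utilities and pick the option with the highest average.

def average_winner(reports):
    """reports: one list of per-option utility estimates per voter.
    Returns the index of the option with the highest average report."""
    n_options = len(reports[0])
    averages = [sum(r[i] for r in reports) / len(reports)
                for i in range(n_options)]
    return max(range(n_options), key=lambda i: averages[i])

# Three honest voters, two options: option 0 narrowly wins.
honest = [[5, 4], [3, 2], [4, 5]]
print(average_winner(honest))  # 0

# One dishonest voter reports absurdly inflated utilities and
# single-handedly flips the outcome, whatever the others say.
dishonest = honest + [[-10**100, 10**100]]
print(average_winner(dishonest))  # 1
```

The point is independent of the particular numbers: whatever the honest voters report, a sufficiently extreme pair of reports determines the averages all by itself.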
In other words, the optimal system in the case of honest utility estimates is pretty much the worst system where honesty does not generally hold. A good voting system for morally imperfect voters must cap the effect each voter has. But in capping the effect each voter has, information will in general be lost.
This is clearest in Model 2. Imagine an option that moderately benefits a significant majority but horrendously harms a minority. Given honest utility reports from everyone and the averaging system, the option is likely to be defeated, since the members of the minority will report enormously negative utilities that overwhelm the moderate positive utilities reported by members of the majority. But as soon as one caps the effect of each voter, the information about the enormously negative utilities to the minority is lost. Model 1 is more helpful (presumably, civic education is how we might get most people to vote according to Model 1), but information will still be lost because of differences in epistemic access to the total utility. On Model 1, capping loses us the case where one individual genuinely has information about an enormous negative effect but is unable to convince others of it. Yet capping of some sort is necessary because of moral imperfection.
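A toy numerical version of the majority/minority case makes the tradeoff vivid. Here clamping each report to [−1, 1] stands in for capping the effect each voter has; the population sizes and utilities are again made up for illustration:

```python
# Toy illustration: capping reports protects against manipulation
# but erases the minority's information in Model 2.

def average_winner(reports, cap=None):
    """Average per-option reports, optionally clamping each
    report to [-cap, cap] first."""
    if cap is not None:
        reports = [[max(-cap, min(cap, u)) for u in r] for r in reports]
    n_options = len(reports[0])
    averages = [sum(r[i] for r in reports) / len(reports)
                for i in range(n_options)]
    return max(range(n_options), key=lambda i: averages[i])

# Option 0: status quo, utility 0 for everyone.
# Option 1: moderately benefits 90 voters (+1 each) but
# horrendously harms 10 voters (-100 each).
reports = [[0, 1]] * 90 + [[0, -100]] * 10

print(average_winner(reports))         # 0: uncapped averaging hears the minority
print(average_winner(reports, cap=1))  # 1: capping at 1 silences the -100 reports
```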
(The optimal method of utility estimation also faces the problem that we are better at producing rank orderings than absolute utilities. This can in principle be overcome to some degree by giving people additional hypothetical options to rank-order and then recovering utility estimates from the rankings.)
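For what it's worth, here is one toy way the recovery might go, on the assumption that we can add hypothetical benchmark options with stipulated utilities to the ballot. The bracketing-and-midpoint rule below is my own illustrative construction, not a standard elicitation method:

```python
# Toy recovery of rough utilities from a rank ordering: intersperse
# hypothetical benchmark options with stipulated utilities, then read
# off each real option's utility from the benchmarks bracketing it.

benchmarks = {"B-10": -10, "B0": 0, "B5": 5, "B10": 10}

# A voter's rank ordering, best to worst, over real options and benchmarks.
# (We assume every real option is bracketed by benchmarks on both sides.)
ranking = ["B10", "optionA", "B5", "B0", "optionB", "B-10"]

def bracketed_utilities(ranking, benchmarks):
    """Estimate each real option's utility as the midpoint of the
    stipulated utilities of the nearest benchmarks above and below."""
    estimates = {}
    for i, item in enumerate(ranking):
        if item in benchmarks:
            continue
        above = next(benchmarks[x] for x in reversed(ranking[:i])
                     if x in benchmarks)
        below = next(benchmarks[x] for x in ranking[i + 1:]
                     if x in benchmarks)
        estimates[item] = (above + below) / 2
    return estimates

print(bracketed_utilities(ranking, benchmarks))
# {'optionA': 7.5, 'optionB': -5.0}
```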
A brief way to make the point is this. The more trusting a voting system is, the more information it brings to the table; but the more trusting a voting system is, the worse it does with regard to moral imperfection. A compromise is needed in this regard. And not just in voting.
Your last sentence is spot-on.
It raises the suggestion that the best (feasible) political arrangements in a high-trust society might be quite different from those in a low-trust society. And yet prominent political philosophies don't incorporate this idea. E.g., in Rawls' Original Position you don't know how much people trust each other in any given set of institutions, and that might be a crucial piece of information in deciding whether you want to incarnate that set of institutions.