Some important results
By far the best-known result in the theory of social choice is Arrow’s theorem. It is important not only substantively, i.e. for what it says about social choice, but also methodologically. Arrow (1951; 1963) set the stage for a host of results similar in spirit by showing that certain sets of voting system desiderata are unachievable because the properties included in those sets are mutually incompatible. Arrow’s theorem deals with social welfare functions, which are mappings from Cartesian products of individual preference relations into similar collective preference relations. By similar we mean that both preference relations are binary, complete and transitive over the set of alternatives. Arrow, thus, assumes neither more nor less than individual preference rankings. In addition to this formal requirement, the following four conditions are imposed on social welfare functions:
(i) universal domain, (ii) independence of irrelevant alternatives (IIA), (iii) the Pareto condition and (iv) non-dictatorship. The theorem says that these four conditions, together with the formal requirement that both individual and collective preference relations be complete and transitive, are incompatible. Obviously, the significance and practical importance of the result depend on how important and plausible the conditions are deemed to be.
Condition (i) states that the function should place no restrictions on the allowable individual preference rankings. This sounds like a reasonable or, at least, very convenient condition. On the other hand, it implies that the likelihood of condition violations plays no role in the theorem: systems which fail on some criterion under very specific and unlikely circumstances are on a par with those where one can expect a criterion violation all the time. Condition (ii) is perhaps the most controversial one of all. It states that the collective preference order between any two alternatives, say x and y, depends on the individual preferences between these two alternatives only. In other words, whether x is collectively preferred to y or vice versa, or both, depends only on the way the individuals rank x and y. Condition (iii) says that if each individual strictly prefers x to y, then y is not preferred to x in the collective preference relation. Finally, condition (iv) excludes dictators by requiring that there be no individual whose preference relation over each pair of alternatives coincides with the collective preference relation with respect to that pair.
All systems discussed in this article fail on IIA. Consider, for example, plurality voting and the first example in section 2. The collective ranking resulting from the plurality system is: Brown Jones Smith. So, the ranking between Brown and Jones is such that the former is preferred to the latter. Now, consider the subset consisting of Brown and Jones. In this subset Jones is preferred to Brown. Hence, IIA is violated.
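This kind of IIA violation by plurality voting can be checked mechanically. The following sketch (Python; the profile is invented for illustration, since the section-2 example is not reproduced here) ranks the candidates by first-preference counts over the full alternative set and then over a two-alternative subset:

```python
from collections import Counter

def plurality_ranking(profile, alternatives):
    # Each voter's vote goes to her top-ranked alternative among those available.
    tally = Counter(next(a for a in ranking if a in alternatives)
                    for ranking in profile)
    return sorted(alternatives, key=lambda a: -tally[a])

# Hypothetical profile (the section-2 example itself is not reproduced here):
profile = (4 * [["Brown", "Jones", "Smith"]]
           + 3 * [["Jones", "Smith", "Brown"]]
           + 2 * [["Smith", "Jones", "Brown"]])

full = plurality_ranking(profile, ["Brown", "Jones", "Smith"])
pair = plurality_ranking(profile, ["Brown", "Jones"])
print(full)  # ['Brown', 'Jones', 'Smith']: Brown ahead of Jones
print(pair)  # ['Jones', 'Brown']: the pairwise order is reversed, violating IIA
```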
There are systems that fail on several Arrow conditions. One such system is the amendment procedure. It is well-known that it may result in a collective preference cycle: x preferred to y, y preferred to z and z preferred to x. It thus violates the condition that the social preference relation be transitive. But it also fails on condition (iii). Consider the following profile.
1 voter: A B D C
1 voter: B D C A
1 voter: D C A B
With sincere voting and agenda 1. B vs. D, 2. the winner vs. A and
3. the winner vs. C, C wins. Yet, D is preferred to C by each voter. Thus,
the amendment system fails on the Pareto criterion.
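The agenda computation above can be verified with a short script; a minimal sketch in Python:

```python
def pairwise_winner(x, y, profile):
    # Sincere majority comparison: each voter votes for whichever of x, y she ranks higher.
    x_votes = sum(1 for r in profile if r.index(x) < r.index(y))
    return x if x_votes > len(profile) - x_votes else y

profile = [["A", "B", "D", "C"],
           ["B", "D", "C", "A"],
           ["D", "C", "A", "B"]]

# Agenda: 1. B vs. D, 2. the winner vs. A, 3. the winner vs. C
winner = pairwise_winner("B", "D", profile)
winner = pairwise_winner(winner, "A", profile)
winner = pairwise_winner(winner, "C", profile)
print(winner)                                             # C
print(all(r.index("D") < r.index("C") for r in profile))  # True: a Pareto violation
```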
Nearly as celebrated as Arrow’s theorem is the one proven independently by Gibbard (1973) and Satterthwaite (1975). It deals with voting strategies and single-valued social choice functions. The latter are functions mapping all alternative sets and preference profiles into singleton sets of alternatives. These functions are also known as social decision functions. They, thus, specify for each alternative set and preference profile a single alternative, the winner. A voting strategy, in turn, indicates the preference ranking a voter reports when voting. This may be identical with her preference ranking over the alternatives, but it may also differ from it. In the latter case the voter is said to misrepresent her preferences. It is plausible to assume that a voter misrepresents her preferences if the outcome resulting from misrepresentation, ceteris paribus, is preferred by her to the outcome resulting from sincere voting, again ceteris paribus.
Now, a voting system is manipulable in a voting situation, i.e. in a set of alternatives and a preference profile over those alternatives, if in that situation there is at least one voter who achieves a better outcome (from her own point of view) by misrepresenting her preferences than by voting sincerely. A voting system, in turn, is defined to be manipulable if there is at least one situation in which the system is manipulable. In other words, a voting system is manipulable if the sincere voting strategies by all voters do not always lead to a Nash equilibrium.
The Gibbard–Satterthwaite theorem says that every anonymous, neutral and non-trivial social decision function is either manipulable or dictatorial. A social decision function is non-trivial if for each alternative one can construct a preference profile such that this alternative will be chosen by the system.
Prima facie, the Gibbard–Satterthwaite theorem is quite dramatic; manipulability and dictatorship do not look like attractive alternatives to choose from.^{2} On closer inspection it is evident, however, that this is not a doomsday message for democratic institutions. It is possible that manipulability, i.e. gaining benefit from preference misrepresentation, materializes only in very rare situations. Moreover, to benefit from preference misrepresentation the voter needs to know basically everything about the preference profile, which may be a tall order in most voting bodies. Finally, the theorem deals with singleton-valued choice functions, while most voting systems may result in a tie between two or more alternatives. These remarks are not intended to play down the importance of the theorem as a theoretical result. It is certainly of great significance in pointing out that the behavioural assumptions underlying voting behaviour should be taken into account in voting system evaluations. Results, such as those cited in the preceding, on properties of voting systems that hold under the sincere voting assumption may fail under the assumption of sophisticated voting.
The Gibbard–Satterthwaite theorem amounts to the incompatibility of non-dictatorship and non-manipulability among single-valued choice functions. Slightly later Gärdenfors (1976) proved a theorem that deals with (possibly multiple-valued) choice functions or social choice correspondences. He showed that all anonymous and neutral social choice functions that satisfy the Condorcet winner criterion are manipulable. Since the Condorcet winner criterion is often regarded as a highly desirable property, this result is of the same negative type as Arrow’s. The Gärdenfors theorem leaves open the manipulability of those systems that may fail to elect the Condorcet winner when one exists. Yet, it is fairly straightforward to show that all the systems discussed above are manipulable. To show that plurality runoff, IRV and STV are manipulable, consider the following profile of 8 voters:
3 voters: A B C
3 voters: B C A
2 voters: C A B
With sincere voting, there is a runoff between A and B, whereupon A wins. This is the least preferred alternative of the 3 voters in the middle of the profile. If one, two or all of them had voted as if their preference ranking were C B A, ceteris paribus, C would have won on the first round or after a second round against A. In any event, the outcome would have been better for the voters deviating from their true preferences in their voting strategies. The same profile can also be used to show that the plurality voting system is manipulable. With sincere voting, the outcome is a tie between A and B. This can be broken in A’s favour by one of the 2 last-mentioned voters if she votes as if her first-ranked alternative were A. Hence, this voter can bring about a preferable outcome by preference misrepresentation. The same profile can be used to show that sincere voting strategies do not lead to a Nash equilibrium under the Borda Count, either. If all voters reveal their true preferences, the outcome is B, the lowest-ranked alternative for the 2 voters in the profile. If these voters rank A first, ceteris paribus, the outcome is A, their second-ranked alternative in their true preference ranking.
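The runoff and Borda manipulations just described can be replayed computationally; a minimal sketch:

```python
from collections import Counter

def plurality_runoff_winner(profile):
    # First round: plurality; if no absolute majority, the top two meet in a runoff.
    n = len(profile)
    tally = Counter(r[0] for r in profile)
    (a, ca), (b, _) = tally.most_common(2)
    if ca > n / 2:
        return a
    a_votes = sum(1 for r in profile if r.index(a) < r.index(b))
    return a if a_votes > n - a_votes else b

def borda_winner(profile):
    # k-1 points for a first rank, k-2 for a second, and so on.
    k = len(profile[0])
    scores = Counter()
    for r in profile:
        for pos, alt in enumerate(r):
            scores[alt] += k - 1 - pos
    return max(scores, key=scores.get)

sincere = 3 * [["A", "B", "C"]] + 3 * [["B", "C", "A"]] + 2 * [["C", "A", "B"]]
print(plurality_runoff_winner(sincere))  # A

# The three B C A voters report C B A instead:
misreport = 3 * [["A", "B", "C"]] + 3 * [["C", "B", "A"]] + 2 * [["C", "A", "B"]]
print(plurality_runoff_winner(misreport))  # C, which they prefer to A

# Borda: sincere outcome B; the two C A B voters then report A first:
print(borda_winner(sincere))  # B
borda_misreport = 3 * [["A", "B", "C"]] + 3 * [["B", "C", "A"]] + 2 * [["A", "C", "B"]]
print(borda_winner(borda_misreport))  # A, their second-ranked alternative
```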
The no-show paradox is, of course, an unpleasant surprise not only for its “victims”, i.e. people who would have been better off abstaining than voting, but also for the advocates of democratic forms of decision making. It undermines the very rationale of those forms. Therefore, Moulin’s (1988) theorem, which states that the Condorcet winner criterion and invulnerability to the no-show paradox are incompatible, is bad news for those who deem the criterion of utmost importance. It is worth observing that the theorem does not say anything at all about systems that do not satisfy the Condorcet winner criterion. Among those there are systems that are invulnerable to the no-show paradox and those that are not. In the first group there are systems such as plurality voting, vote for k and the Borda Count; in the latter, plurality runoff, IRV and STV.
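The vulnerability of plurality runoff to the no-show paradox can be illustrated computationally. The profile below is invented for illustration: with full participation C wins, but if three of the A B C voters abstain, the winner is B, whom the abstainers prefer to C.

```python
from collections import Counter

def plurality_runoff_winner(profile):
    # First round: plurality; if no absolute majority, the top two meet in a runoff.
    n = len(profile)
    tally = Counter(r[0] for r in profile)
    (a, ca), (b, _) = tally.most_common(2)
    if ca > n / 2:
        return a
    a_votes = sum(1 for r in profile if r.index(a) < r.index(b))
    return a if a_votes > n - a_votes else b

full = 10 * [["A", "B", "C"]] + 8 * [["B", "C", "A"]] + 9 * [["C", "A", "B"]]
print(plurality_runoff_winner(full))  # C, the A-voters' last-ranked alternative

# Three A B C voters stay home; A drops out of the runoff and B beats C:
reduced = 7 * [["A", "B", "C"]] + 8 * [["B", "C", "A"]] + 9 * [["C", "A", "B"]]
print(plurality_runoff_winner(reduced))  # B, which the abstainers prefer to C
```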
There is a stronger version of the no-show paradox which occurs when a group of voters, by abstaining, helps to bring about the election of their first-ranked alternative, while by participating, ceteris paribus, they would contribute to the election of some other, i.e. lower-ranked, alternative. Moulin’s result leaves open the possibility that invulnerability to this stronger version would not be incompatible with the Condorcet winner criterion. These hopes are largely quashed by the theorem of Pérez (2001), which states that nearly all systems that elect a Condorcet winner when one exists can exhibit the strong version of the no-show paradox. Note again, however, that this theorem does not extend to systems that fail on the Condorcet winner criterion. The plurality runoff system appears to be invulnerable to the strong version of the paradox. By abstaining, a group of voters may, if anything, block the entry of their favourite to the second round, but not increase its chances of being elected. Similarly, in STV and IRV the abstainers increase the likelihood of their favourite being eliminated.
The above is but a small and biased sample of the various incompatibility results pertaining to voting systems. To counterbalance them there are several important results on the compatibility of various desiderata. Perhaps the best-known of these is May’s (1952) characterization of the majority rule as the only rule defined over two alternatives that satisfies (1) anonymity, (2) neutrality, (3) duality and (4) strict monotonicity. Of these conditions, (1) and (2) have been touched upon above. (3) says that if each voter reverses her preference over the two alternatives, then the outcome is also reversed. (4) states that if there is a tie, only one individual’s preference change is needed to break it in the direction of the preference change. May’s result thus states that the simple majority rule satisfies conditions (1)–(4) and, conversely, any rule that satisfies these conditions is equivalent to the simple majority rule.
Some other voting systems have also been axiomatized. One of them is the Borda Count. Young (1974) shows that the Borda Count is the only system that satisfies: (1) neutrality, (2) consistency, (3) faithfulness and (4) the cancellation property. Faithfulness is the very natural requirement that if the collective body consists of only one individual, then the winner according to the system coincides with her first-ranked alternative. The cancellation property, in turn, is satisfied by systems which, in a situation where for each pair of alternatives x and y the number of voters preferring x to y equals the number of voters preferring y to x, result in a tie between all alternatives. Young’s result then states that the Borda Count has properties (1)–(4) and, conversely, any system that has these properties is equivalent to the Borda Count.
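The cancellation property is easy to check computationally. In the sketch below (Python, illustrative profile), a ranking together with its exact reversal makes every pairwise comparison a 1–1 tie, and the Borda scores indeed all coincide:

```python
from collections import Counter

def borda_scores(profile):
    # Each voter gives k-1 points to her first-ranked alternative, k-2 to the second, etc.
    k = len(profile[0])
    scores = Counter()
    for r in profile:
        for pos, alt in enumerate(r):
            scores[alt] += k - 1 - pos
    return scores

ranking = ["A", "B", "C", "D"]
profile = [ranking, ranking[::-1]]  # for every pair x, y: one voter prefers x, one prefers y
print(borda_scores(profile))        # every alternative scores 3: a tie between all alternatives
```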
Approval voting, too, has been given an axiomatic characterization (Fishburn 1978). There are three axioms in this characterization: neutrality, consistency and disjoint equality. The last-mentioned is the requirement that if two individuals come up with two disjoint choice sets A and B from the same set of alternatives, then the choice set of the collective body formed by these two individuals coincides with the union of A and B.
The theory of social choice thus has both good and bad news for the designer of voting institutions. Summarizing, the bad news is that no system satisfies all conceivable desiderata, not even the most important ones. Positional methods, such as plurality voting and the Borda Count, tend to perform poorly in terms of the Condorcet winner criterion, while doing very well in terms of consistency and monotonicity. Systems based on pairwise comparisons of alternatives, in turn, do in general well in terms of the Condorcet winner criterion, but are typically inconsistent and vulnerable to the no-show paradox, most even to its strong version. Multi-stage systems, such as plurality runoff, IRV, Nanson and STV, are in general non-monotonic and inconsistent.
Systems Requiring Minimal Information
The theory of voting systems is an applied field of social choice theory. As such it is largely based on Arrovian assumptions about the form in which individual opinions are expressed: the individuals are endowed with complete and transitive preference relations over the alternatives. However, most voting systems used in practice do not require voters to have preferences anywhere near as structured. In many systems the voter simply places a check or cross next to the candidate she votes for, or writes down the number corresponding to the candidate or party. It is true that STV and IRV are based on the expectation that the voters reveal more about their opinions than simply one candidate, alternative or party, but these systems are used in relatively few large-scale elections. More often than not, singling out one alternative is what the voters are expected to do. So, how does this practical limitation go together with the theoretical assumption of complete and transitive preferences?
In principle, there is no inconsistency in assuming that the voters have preference rankings over all alternatives and devising a system that accepts only one alternative from each voter. Indeed, in the above examples of plurality voting we have throughout assumed that people have preference rankings over all alternatives, but typically vote for just their top-ranked one or – in the case of strategic voting – some other alternative. The alternative is to assume that the voter has a dichotomous classification of the alternatives – the best vs. the others – instead of a preference ranking. If this were the case, then most of the negative results would go away, since the crux in deriving them is that the voters be able to provide a ranking, i.e. to express preferences also over those alternatives they do not cast their vote for. For example, to find out about monotonicity violations one has to know how the voters rank not only those alternatives they vote for, but also at least some of those they do not vote for. The same argument goes for vulnerability to the no-show paradox.
Another related issue is the impact of the existing voting system on the structure of voters’ preferences. It makes sense to argue that if the system works with dichotomous preferences, the voters tend to think about the alternatives in a dichotomous manner. Why bother with preference rankings as long as the system only allows you to tell the best alternative apart from the others? If we are dealing with dichotomous preferences rather than preference rankings, the available voting system repertoire becomes more restricted: systems like the Borda Count, STV or IRV are not applicable. As pointed out above, many of the negative results are thereby also avoided. But is it really plausible to assume that the voters can only classify candidates into two groups: good ones and less-than-good ones?
If this assumption is made, then a natural social choice assumption to start from is that, instead of complete and transitive preference relations over the alternatives, the voters are endowed with individual choice functions. Consequently, the task of the voting system designer becomes to look at the properties of various choice function aggregation rules. Considerably fewer pages have been written on choice function aggregation than on preference ranking aggregation. One of the magna opera in this field is Aizerman and Aleskerov’s (1995) study. For our purposes the main message of the book is negative: substituting individual choice functions for individual preference rankings seems to lead to analogous – albeit not identical – incompatibility results as in the mainstream literature. To illustrate, consider the following (plausibility) conditions on collective choices based on individual choice functions^{3}: (i) citizen sovereignty: for any alternative x, there exists a set of individual choice function values such that x will be elected; (ii) choice-set monotonicity: if x is elected under some profile of individual choices, then x should also be elected if more individuals include x in their individual choices; (iii) neutrality; (iv) anonymity; and (v) choice-set Pareto: if all individuals include x in their individual choice sets, then the aggregation rule includes x as well, and if no voter includes y in their individual choice set, then y is not included in the collective choice.
In social welfare functions the aim is to impose the same formal properties on the aggregation rule as on the individual opinions: completeness and transitivity of preference relations. Surely some conditions have to be imposed on choice-set aggregation rules as well, to distinguish reasonable rules from unreasonable ones. Consider Chernoff’s condition.^{4} It states the following: if an alternative is among the winners in a large set of alternatives, it should also be among the winners in every subset it belongs to. Another similar property is concordance. Suppose that the winners in two subsets of alternatives have some common alternatives. The rule is concordant if these common alternatives are also among the winners in the union of the two subsets. The properties of Chernoff and concordance can be associated with both individual and collective choice functions. Just as in social welfare functions the same formal properties are imposed on both individual and collective preference relations, we can make the same requirement for choice-set aggregation rules, i.e. insist that Chernoff and concordance be satisfied by both individual and collective choice functions.
Using Aizerman and Aleskerov’s example we can show that two quite natural-looking aggregation rules fail on one or the other of these two requirements. Rule 1 is a version of majority rule: whenever an alternative is included in the choice sets of a majority of voters, it is elected. Rule 2 is plurality: the alternative that is included in more individual choice sets than any other alternative is elected. The former rule is called local, since the inclusion of an alternative in the collective choice set can be determined independently of the other alternatives (“locally”). Rule 2, on the other hand, is not local, as determining whether x belongs to the collective choice set requires comparison with all other alternatives (“globally”). Suppose we have alternatives x, y and z, 3 individuals and the individual choices indicated in the following table.
alt. set    ind. 1   ind. 2   ind. 3   rule 1   rule 2
{x,y,z}     {x}      {z}      {y}      empty    {x,y,z}
{x,y}       {x}      {x}      {y}      {x}      {x}
{x,z}       {x}      {z}      {x}      {x}      {x}
{y,z}       {y}      {z}      {y}      {y}      {y}
Clearly concordance is not satisfied by rule 1, since x belongs to the choice sets from {x,y} and {x,z}, but is not included in the choice set from {x,y,z}. Rule 2, on the other hand, fails on Chernoff, since z is included in the choice set from {x,y,z}, but not in the one from {x,z}. It is also worth noticing that plurality does not satisfy choice-set monotonicity, while majority does.
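The table can be reproduced with a few lines of Python (a sketch; the rule names follow the text, not Aizerman and Aleskerov’s notation):

```python
from collections import Counter

def majority_rule(choices, n):
    # Rule 1: elect every alternative included in a majority of individual choice sets.
    tally = Counter(a for s in choices for a in s)
    return {a for a, c in tally.items() if c > n / 2}

def plurality_rule(choices):
    # Rule 2: elect the alternative(s) included in the largest number of choice sets.
    tally = Counter(a for s in choices for a in s)
    top = max(tally.values())
    return {a for a, c in tally.items() if c == top}

rows = [("{x,y,z}", [{"x"}, {"z"}, {"y"}]),
        ("{x,y}",   [{"x"}, {"x"}, {"y"}]),
        ("{x,z}",   [{"x"}, {"z"}, {"x"}]),
        ("{y,z}",   [{"y"}, {"z"}, {"y"}])]
for label, choices in rows:
    print(label, majority_rule(choices, 3), plurality_rule(choices))
# Rule 1: x wins from {x,y} and {x,z} but the choice from {x,y,z} is empty (concordance fails).
# Rule 2: z is chosen from {x,y,z} but not from {x,z} (Chernoff fails).
```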
The results of Aizerman and Aleskerov are largely negative as far as local aggregation functions are concerned: they typically fail on some rationality conditions. Much less is known about non-local aggregation operators.
Systems Based on Richer Information
Individual choice functions can be regarded as less demanding for the voters than ordinal preference rankings over the alternatives. However, we often encounter situations where we in fact have much more preference information about the alternatives than just their ranking. Typical examples are situations where an individual’s opinions are reflected in her willingness to pay for the various alternatives. Since monetary sums are ratio-scale variables, the individual is able to signal her preferences in a much richer way than in the standard social choice setting.
A relatively recent proposal for a voting system that utilizes richer than ordinal ranking information has been made by Balinski and Laraki (2007). The authors call it majority judgment. The procedure is the following. Given a set of alternatives or candidates, the voters evaluate each of them by assigning it a value from a set of values, such as the integers from 0 to 10, the set {excellent, very good, good, satisfactory, tolerable, poor, to be rejected} or {A, …, F}. The highest grade given by an absolute majority of voters is called the majority grade of the alternative. This is a well-defined concept when the values in the set can be ordered from best to worst: for each alternative there is a value such that more than 50% of the voters give that or a higher value to the alternative in question. Now, the majority judgment winner is the alternative with the highest majority grade. If the voters are listed according to the grades they give to a candidate, from the lowest to the highest, the majority grade is the one given by the median voter, i.e. the voter with as many voters on her “lower” side as on her “higher” side. In case the median is not unique, the majority grade is the lowest of the values defining the median interval of grades. Ties between candidates are broken in the following manner. For each candidate with the same majority grade, say “good”, one tallies the number of voters who give the candidate a grade higher than “good”. Let them be b in number. Similarly, one counts the number of voters who give the candidate a grade worse than “good”. Say their number is w. If b > w, one gives the candidate the majority grade “good+”; if b < w, the majority grade “good−”.
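The majority grade and its plus/minus tie-break can be sketched as follows (Python; the grade profile is invented for illustration):

```python
# Grades ordered from worst to best
GRADES = ["to be rejected", "poor", "tolerable", "satisfactory",
          "good", "very good", "excellent"]

def majority_grade(grades):
    # Lower median of the grades: the highest grade that an absolute majority gives or exceeds.
    ordered = sorted(grades, key=GRADES.index)  # worst to best
    return ordered[(len(ordered) - 1) // 2]

def qualified_majority_grade(grades):
    # Tie-break: "+" if more voters grade above the majority grade than below it, "-" otherwise.
    g = majority_grade(grades)
    better = sum(GRADES.index(x) > GRADES.index(g) for x in grades)
    worse = sum(GRADES.index(x) < GRADES.index(g) for x in grades)
    return (g, "+") if better > worse else (g, "-")

candidate = ["good", "excellent", "tolerable", "very good", "good"]
print(majority_grade(candidate))            # good
print(qualified_majority_grade(candidate))  # ('good', '+')
```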
Balinski and Laraki argue that majority judgment encourages voters to reveal their true valuations of the candidates. This is debatable, however. Suppose that the voters have assigned their grades to all candidates and that there is a tie for the win between candidates A and B, both of whom get the majority grade “good+”. This means that exactly the same number of voters regard both A and B as better than “good”. Consider now a voter who grades A as “excellent” and B as “very good”. If this voter knew the distribution of grades, it would make sense for her to grade B as “tolerable” or something else below “good”. Thereby she would break the tie, ceteris paribus, in favour of her favourite A. This shows that in the majority judgment system the sincere revelation of grades does not always lead to a Nash equilibrium. Hence the system is manipulable.
A related system, called utilitarian voting, has been proposed by Hillinger (2004; 2005). It is basically identical with range voting (Smith 2000). The underlying idea is that the voters vote by expressing their cardinal utility values for each candidate. The winner is the candidate with the largest sum of expressed utilities. The first crucial feature of utilitarian voting is that each voter is allowed to assign any grade or score to any alternative, i.e. the score given to candidate A in no way restricts the score that can be given to candidate B. The second feature is that the scores are values on a predetermined scale, say {0, 1} as in approval voting. The third defining characteristic of utilitarian voting is that the winner or, as the case may be, the order of priority among the candidates is determined by the sum of scores received from the voters. The candidate with the largest score sum wins.
Obviously the voter input in utilitarian voting is very similar to the one resorted to in majority judgment. The method for determining the winner, however, differs. Utilitarian voting elects the candidate with the largest average score, while majority judgment ends up with the one associated with the highest median grade. As we just saw, majority judgment is manipulable. The same example can be used to show that utilitarian voting is manipulable as well. In fact, manipulating these systems is not much different from manipulating the Borda Count. If one knows the toughest contestant of one’s favourite, then giving the former the lowest possible grade – regardless of one’s true valuation – helps in electing one’s favourite. Moreover, in those cases where one’s favourite does not get the highest possible grade in one’s true evaluation, giving it the maximum grade will increase its likelihood of being elected.
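A minimal sketch of the score-sum count and of the Borda-style manipulation just described; all scores are hypothetical:

```python
def range_winner(scores):
    # Utilitarian/range voting: the candidate with the largest score sum wins.
    return max(scores, key=lambda c: sum(scores[c]))

# Scores on a 0-9 scale; each list holds the three voters' scores for that candidate.
sincere = {"A": [9, 4, 5], "B": [8, 6, 5]}
print(range_winner(sincere))  # B (19 points to A's 18)

# Voter 1 truly values A at 9 and B at 8, but reports B as 0:
manipulated = {"A": [9, 4, 5], "B": [0, 6, 5]}
print(range_winner(manipulated))  # A: the misreport elects her favourite
```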
The main point is, however, that alternatives to systems that operate on individual preference rankings exist and deserve scholarly attention. Strategic behaviour in opinion revelation cannot be excluded, but this is a problem in aggregating ordinal preference information as well.
How to Evaluate Systems?
Given the abundance of voting systems and performance criteria, as well as the theoretical results on compatibilities between various desiderata, it is worth stopping for a moment to consider what use we can make of all this information in the design of voting systems. The most straightforward way to proceed is to use a single criterion of performance to eliminate all those systems failing on this criterion, then possibly pick another criterion to eliminate some of the remaining ones, and so on. Of course, the criteria used have to be deemed the most important ones. Candidates for this type of elimination criteria are the absolute majority or strong Condorcet criterion (if an alternative is ranked first by more than half of the electorate, this alternative has to be elected), the Condorcet winner criterion, monotonicity, consistency, the Condorcet loser criterion (if a candidate would be defeated by all the others in pairwise majority comparisons, it ought not to be elected), the Pareto criterion, etc.
There are problems with this type of approach. To wit, which are the most important criteria? The scholarly community is divided on this issue. Even if unanimity prevailed on the most important criteria, this approach would lead to a dichotomy in each criterion: those systems that satisfy the criterion and those which do not. Both classes are bound to consist of many systems.
Within each class one could resort to probability or simulation modelling to find out the theoretical probability or frequency of criterion violations under various “cultures”, i.e. assumptions concerning the distribution of preferences in the electorates (Gehrlein 2006). But which culture is most appropriate for this kind of assessment? The early simulations of voting procedures as well as analytical probability models were based on the “impartial culture” assumption according to which the preference ranking for each voter is generated randomly and independently of other voters. It has turned out that this assumption may lead to flawed conclusions about the frequency of various anomalies in voting systems (Regenwetter et al. 2006).^{5}
Alternatively, one could utilize multiple performance criteria simultaneously and construct a dominance relation over voting systems. A system dominates another if it satisfies all the criteria – among those considered – that the latter satisfies, and at least one criterion that the latter does not satisfy. Once this relation has been constructed, it is natural to make the final choice from among the undominated systems. This approach to system choice is not without problems, either. The set of undominated systems is typically rather large vis-à-vis the original set of systems. Moreover, the dominance relation implicitly treats all criteria equally, i.e. they are all deemed of equal importance.
Historically, the choice of the best voting system has been deemed context-dependent. What has been found right in electing a president has typically not been viewed as appropriate in judgment aggregation. Perhaps this is how it should be. Some choice settings emphasize the consensual nature of the outcomes, while in others the intense support of large voter groups seems like a plausible desideratum. In both types of settings a wide variety of systems is available.
The property of manipulability, or strategic misrepresentation of opinions, is a kind of meta-criterion, since it has implications for other system properties. If a system is manipulable and has a set of desirable properties when the sincere opinion revelation assumption is made, it is not guaranteed that those desirable properties also hold when the voters resort to strategic behaviour. Strictly speaking, what manipulability entails is that there is a situation where a voter can benefit from misrepresenting her opinion. One such situation is enough to classify a system as manipulable. So, a more refined analysis would call for estimates regarding the relative frequency of such situations. In practice, the manipulability of a system depends on at least three different aspects (Nurmi 2002, 110–111):
- The empirical frequency of those profiles in which an individual or group can benefit from opinion misrepresentation.
- The nature of the information that the voters need in order to misrepresent their opinions with success.
- The payoff difference that a successful misrepresentation brings about for the voters engaged in it.
So, instead of manipulability as a dichotomous notion one should talk about degrees of manipulability if these three aspects are anything to go by. Kelly’s (1993) measure of degree of manipulability focuses precisely on the first aspect by defining the degree of manipulability as the number of profiles in which the procedure is manipulable. This measure can be modified by weighting the profiles by the number of voters who can benefit from misrepresentation (Smith 1999). Nonetheless, one ends up with a measure that is relative to the number of voters and alternatives.
The second aspect is of great practical importance, since the manipulability of a system in principle means very little in practice if the information that the voters need to benefit from opinion misrepresentation is of a kind that they cannot typically possess. On intuitive grounds one could argue that plurality voting requires no more information about other voters’ views than the distribution of first-ranked alternatives, while STV requires much more detailed information for misrepresentation to succeed. The third aspect relates directly to the voters’ incentives to misrepresent their opinions: the larger the benefit, the more likely is misrepresentation, ceteris paribus.
As was stated above, the manipulability of a system means that sincere voting strategies do not always lead to Nash equilibrium outcomes. But what does it mean that a voting outcome is not an equilibrium? By definition it means that at least some voters may regret their voting strategies in the sense that, assuming the others stick to their strategies, they could have brought about a better outcome for themselves had they selected some other voting strategy. But what is there to justify the assumption that the others would not change their behaviour? Very little. The notion of equilibrium is based on a counterfactual proposition. This is perhaps worth taking into account in assessing the practical implications of manipulability results.
Searching for consensus
Voting systems are often resorted to in an effort to reach a consensus on an issue where several alternative positions are available. The task is trivial when all voters have an identical position on the issue at hand. In general, however, no such unanimity exists; it has to be reached using some procedure. Given a non-unanimous profile of opinions, a plausible way to proceed is to look for the collective opinion that is closest to the expressed opinions of the voters. If all but one voter in a large group have an identical preference ranking and the collective choice is to be a ranking as well, the collective ranking closest to the observed profile would seem to be the one representing the vast majority opinion. In large bodies the same suggestion would hold when all but very few voters have an identical opinion.
Although pretty obvious in these cases, the search for the nearest collective ranking in general needs some explication. There are many ways of measuring the proximity of two preference rankings. Perhaps the best-known is the inversion metric, which tallies the number of binary preference inversions needed to transform one ranking into the other. Kemeny’s rule is based on this metric (Kemeny 1959). For any given profile of complete and transitive preference orders over k alternatives, it determines the closest collective preference ranking by computing, for each of the k! possible rankings, its distance (in the sense of the inversion metric) to each individual’s ranking and summing these. The collective ranking for which this sum is at the minimum is the Kemeny ranking, and its first-ranked alternative the Kemeny winner.^{6} Kemeny’s rule can, thus, be seen as a method that defines a consensus state and a metric for measuring distances from the observed profile to candidate rankings and that, moreover, results in a ranking for which the sum of distances is at the minimum.
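The rule as described can be sketched directly in Python. This is a brute-force illustration, not an efficient implementation: enumerating all k! rankings is feasible only for a handful of alternatives, and the example profile is hypothetical.

```python
from itertools import combinations, permutations

def inversion_distance(r1, r2):
    """Inversion (Kendall) metric: the number of alternative pairs
    that the two rankings order oppositely."""
    pos1 = {a: i for i, a in enumerate(r1)}
    pos2 = {a: i for i, a in enumerate(r2)}
    return sum(
        1
        for a, b in combinations(r1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
    )

def kemeny(profile):
    """Kemeny ranking: among all k! candidate rankings, the one that
    minimizes the summed inversion distance to the voters' rankings."""
    alternatives = profile[0]
    return min(
        permutations(alternatives),
        key=lambda cand: sum(inversion_distance(cand, r) for r in profile),
    )

# Hypothetical three-voter profile: two voters rank a > b > c, one b > a > c
profile = [("a", "b", "c"), ("a", "b", "c"), ("b", "a", "c")]
ranking = kemeny(profile)   # the majority ranking, one inversion away in total
```

The first alternative of the returned tuple is the Kemeny winner. Note that ties between candidate rankings are possible; `min` here simply returns the first minimizer encountered.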
In Kemeny’s rule the consensus pertains to every rank in the profile. In many settings a consensus on every position in the collective ranking is something of a luxury. Especially if the task is to elect just one alternative, it makes little sense to search for such a comprehensive consensus. Instead one could look for the collective ranking which is closest to the observed individual rankings and which differs from the individual rankings only in placing the same alternative first. In other words, the collective ranking would be obtained by counting the number of inversions needed to put a given alternative first in every individual ranking and summing those inversions over the voters. The winner is then the alternative that needs the smallest number of inversions to end up first in every voter’s ranking. Nitzan (1981) shows that the ranking resulting from this minimization is the same as the Borda ranking, i.e. the order based on the Borda scores of the alternatives.
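Nitzan’s equivalence can be checked numerically on a small hypothetical profile. The key observation, used in the sketch below, is that lifting an alternative at position i of a ranking to the top requires exactly i adjacent inversions, so the lifting cost of an alternative equals the sum of its position indices over the voters.

```python
def lift_cost(profile, x):
    """Total number of binary inversions needed to move x to the top of
    every voter's ranking: x's (0-based) position index in each ranking."""
    return sum(r.index(x) for r in profile)

def borda_score(profile, x):
    """Classical Borda score: k-1 points for a first place, k-2 for a
    second place, and so on, summed over the voters."""
    k = len(profile[0])
    return sum(k - 1 - r.index(x) for r in profile)

# Hypothetical three-voter profile over three alternatives
profile = [("a", "b", "c"), ("b", "c", "a"), ("b", "a", "c")]
alts = profile[0]

by_inversions = sorted(alts, key=lambda x: lift_cost(profile, x))
by_borda = sorted(alts, key=lambda x: -borda_score(profile, x))
```

For this profile (and, by Nitzan’s result, for every profile up to tie-breaking) the two orders coincide: the alternative cheapest to lift to unanimity at the top is precisely the Borda winner.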
It turns out that all ranking-based voting systems can be characterized in terms of a goal (consensus) state and a metric used in measuring the distance between the observed preference ranking and the goal state (Meskanen and Nurmi 2006). For example, plurality voting can be defined as the rule which minimizes the distance between the observed profile and a state where all voters have the same alternative ranked first, keeping the rankings of the other alternatives intact. Since the goal is the same as in the Borda Count, the metric must differ from the inversion one; otherwise we would be dealing with the Borda Count. Indeed, the metric for plurality voting is defined so that whenever two rankings differ in terms of the first-ranked alternative, their distance is equal to unity; otherwise it is zero.
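Under this discrete metric, the summed distance from the profile to the goal state “everyone ranks x first” is simply the number of voters who do not already rank x first, so minimizing it recovers the familiar plurality count. A minimal sketch, with a hypothetical profile:

```python
def plurality_distance(r1, r2):
    """Discrete metric for plurality: 1 if the two rankings disagree
    on the first-ranked alternative, 0 otherwise."""
    return 0 if r1[0] == r2[0] else 1

def plurality_winner(profile):
    """The alternative x minimizing the summed plurality distance from the
    profile to a goal state where every voter ranks x first. That sum is
    just the number of voters whose top choice is not x, so this is the
    ordinary plurality rule (ties resolved by listing order here)."""
    alts = profile[0]
    return min(alts, key=lambda x: sum(1 for r in profile if r[0] != x))
```

Swapping this zero/one metric for the inversion metric, with the same goal state, yields the Borda Count instead, which makes the role of the metric in the characterization concrete.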
The goal-state-cum-distance-metric characterization opens a new angle on analyzing voting systems. It is the decision setting that often determines the most appropriate goal state. Thereafter, the metric captures our views of the closeness of different opinions. As these two aspects pin down a voting system, a way to choose the best voting system for any given purpose is to spell out one’s intuitions in terms of them, i.e. to state explicitly what the desired goal state is and how one measures distances between various views.
Conclusion
The theory of voting is approached above from three perspectives: (1) by determining which desirable or undesirable properties various systems possess, (2) by studying the mutual compatibility or incompatibility of the properties, and (3) by characterizing the systems in terms of various conditions or goal state and metric combinations. The first approach takes its motivation from the fact that institutional (voting system) design does not take place in a vacuum, but in a setting where historical and cultural features dictate the set of realistic systems within which the choice has to be made. Hence it is important to know the properties – desirable and undesirable – of the systems deemed realistic. While informative and potentially useful, this approach is on a par with the classification of objects in the theory of measurement, i.e. a necessary but only first step on the way to measuring properties. The second approach is more advanced in abstracting away from concrete voting systems and dealing with the relationships that obtain between their properties. This approach is notorious for its primarily negative results showing the incompatibility of various desiderata. The third approach is either “axiomatic” in the sense of aiming at a characterization of systems with the aid of properties necessary and sufficient for them, or distance-based in the sense of determining the underlying goal state and distance metric for each voting system.
Most results in the theory of voting have been achieved under the standard assumption that the voters possess complete and transitive preference relations over the alternatives and that we are looking for optimal ways of aggregating those preferences either into a set of best alternatives or into a collective preference ranking. Since many incompatibility results depend on these assumptions, it is worthwhile to look for plausible alternatives to them. The standard assumption can be either too demanding – the voters do not necessarily have preference rankings over all alternatives – or too modest – the voters may have a more refined opinion about the alternatives than a mere ranking. Both of these possibilities have been briefly discussed above. It seems that the emphasis in voting system studies has recently moved towards aggregating more detailed voter input than preference rankings.