Speaker: Edith Elkind (University of Oxford)
Title: Voting as maximum likelihood estimation: revisiting the rationality assumption
There are two complementary views of voting. The first view is that voters have genuinely different preferences, and the goal of voting is to reach a compromise among the voters. The second view is that there is an objectively correct choice, and voters have different opinions simply because of errors of judgement. The latter view, which dates back to medieval church elections, gives rise to the maximum likelihood-based approach to voting: we treat votes as noisy estimates of the ground truth, and attempt to identify the ground truth that is most likely to generate the observed votes, under a given model of noise. In this talk, I will give an overview of voting rules that arise in this framework and relationships among them. The talk is partially based on joint work with Nisarg Shah (UAI'14).
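As a toy illustration of this approach (not code from the talk): under the Mallows noise model, the maximum likelihood estimate of the ground truth is the ranking that minimizes the total number of pairwise disagreements (swap distance) with the observed votes, i.e. the Kemeny ranking. The brute-force Python sketch below assumes this setting; all names are illustrative.

# Minimal sketch: under the Mallows noise model, the MLE of the ground-truth
# ranking is the ranking minimizing total Kendall tau distance to the votes
# (the Kemeny ranking). Brute force over all rankings; illustrative only.
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of alternative pairs on which the two rankings disagree."""
    pos2 = {a: i for i, a in enumerate(r2)}
    return sum(1 for i in range(len(r1)) for j in range(i + 1, len(r1))
               if pos2[r1[i]] > pos2[r1[j]])

def mle_ranking(votes):
    """Return the ranking most likely to have generated the votes (Mallows MLE)."""
    alternatives = votes[0]
    return min(permutations(alternatives),
               key=lambda r: sum(kendall_tau(r, v) for v in votes))

votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
print(mle_ranking(votes))   # ('a', 'b', 'c')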
Speaker: Christian Klamler (University of Graz)
Title: On the Division of Indivisible Items
We are concerned with the fair division of a set of indivisible items among two or more agents. Different division procedures will be analyzed with respect to various well-known properties from the fair division literature, such as envy-freeness, proportionality, efficiency, and maximin, among others. All the discussed procedures use as their sole input the individuals' rankings over the set of items. It is shown that for two players the AL procedure provides a maximal envy-free and efficient allocation, i.e., it assigns as many items as possible to the players such that a strong form of envy-freeness based on the ordinal ranking information is satisfied, and there is no envy-free allocation that allocates more items. Unassigned items are put in a contested pile; how such a contested pile can be divided is shown via the undercut procedure. A second procedure, SA, tries to extend AL by assigning all items to the players using a sequential structure and ideas from fallback bargaining. Finally, we will discuss procedures for the proportional allocation of indivisible items to more than two players. Aspects such as conditions for the existence of an envy-free allocation and computational complexity issues will be investigated as well.
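To make the ordinal notion concrete, here is a small Python sketch (an illustration in the spirit of the setting, not an implementation of the AL procedure itself) of one standard envy-freeness test for two equal-sized bundles: a player does not envy the other if the two bundles can be paired so that she ranks her own item higher in every pair; sorting both bundles by her ranking and comparing position by position decides this.

# Sketch of an ordinal envy-freeness test for two players with equal-sized
# bundles: player i does not envy player j if the bundles can be paired so
# that i ranks her own item above j's item in every pair. Illustrative only;
# not a verbatim implementation of the AL procedure.

def is_ordinally_envy_free(ranking, own_bundle, other_bundle):
    """ranking: list of items, best first; bundles must have equal size."""
    rank = {item: pos for pos, item in enumerate(ranking)}
    own = sorted(own_bundle, key=rank.get)
    other = sorted(other_bundle, key=rank.get)
    return all(rank[o] < rank[t] for o, t in zip(own, other))

ranking_1 = ["a", "b", "c", "d"]          # player 1's ranking, best first
print(is_ordinally_envy_free(ranking_1, ["a", "c"], ["b", "d"]))  # True
print(is_ordinally_envy_free(ranking_1, ["b", "d"], ["a", "c"]))  # False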
Speaker: Rudolf Vetschera (University of Vienna, Austria)
Title: Computational experiments on Contested Pile methods for the fair allocation of indivisible items
Contested Pile procedures are methods for allocating a set of indivisible items to two players using only a limited amount of ordinal preference information. These procedures work in two phases: in a generation phase, the players simultaneously demand (or reject) one or more items over several rounds. Items that can be unambiguously allocated based on these statements are assigned accordingly. The remaining items are placed on a contested pile, which is then allocated in the following splitting phase. This second phase can utilize the fact that preferences for items on the contested pile are (approximately) identical between players.
Several design parameters can be selected within this general framework. In the generation phase, items can be demanded or rejected by the players, which implies that players reveal their preferences in descending or ascending order. Another parameter is the number of items to be specified in each round. In the splitting phase, different methods can be used to allocate the items in the contested pile.
In this work, we evaluate the impact of these different design parameters on several outcome dimensions. In addition to the different parameters of the generation phase, we specifically consider the Undercut Procedure recently developed by Brams, Kilgour and Klamler as a possibility for the splitting phase. The outcome dimensions considered involve both fairness and efficiency of allocations. Furthermore, we study the potential impact of strategic play. Revealing their true preferences in the generation phase is not necessarily an optimal strategy for players in contested pile procedures, and sometimes even quite simple manipulation strategies can be effective. We therefore also study the impact of such manipulations on different design variants of the procedure.
To study these effects, we developed a quite general computational model of contested pile procedures, and carried out extensive simulations using that model.
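As a hint of what such a computational model might look like, the following Python sketch simulates one possible generation-phase variant, in which both players demand their top remaining item each round and jointly demanded items go to the contested pile; the specific variant and all names are illustrative assumptions, not the authors' implementation.

# Minimal sketch of one generation-phase variant: in each round both players
# demand their most-preferred remaining item; if they demand different items,
# each receives the item she demanded, otherwise the item goes to the
# contested pile. One of several possible parameterizations; illustrative only.

def generation_phase(pref1, pref2):
    """pref1, pref2: rankings (best first) over the same set of items."""
    remaining = set(pref1)
    bundle1, bundle2, contested = [], [], []
    while remaining:
        top1 = next(x for x in pref1 if x in remaining)
        top2 = next(x for x in pref2 if x in remaining)
        if top1 == top2:
            contested.append(top1)
            remaining.remove(top1)
        else:
            bundle1.append(top1)
            bundle2.append(top2)
            remaining -= {top1, top2}
    return bundle1, bundle2, contested

p1 = ["a", "b", "c", "d", "e", "f"]
p2 = ["a", "c", "b", "e", "d", "f"]
print(generation_phase(p1, p2))   # (['b', 'd'], ['c', 'e'], ['a', 'f'])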
Speaker: Jia Yuan Yu (IBM Research Ireland)
Title: Privacy-preserving and socially optimal resource allocation
We consider the problem of allocating a divisible resource among an unknown number of users, who do not reveal their utility functions. We propose a simple iterative algorithm for users to adjust their demand, based on additive-increase multiplicative-decrease algorithms. If every user follows this algorithm, then the resulting demand profile converges to a social optimum.
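A minimal Python sketch of an additive-increase multiplicative-decrease demand update of the kind described above, assuming a shared capacity and synchronous updates; the parameter values and function names are illustrative, not the algorithm from the paper.

# Sketch of additive-increase multiplicative-decrease demand adjustment,
# assuming a capacity shared by all users: each user raises its demand by a
# small constant while total demand fits within capacity, and scales it down
# multiplicatively after a congestion signal. Parameter values are illustrative.

def aimd_round(demands, capacity, alpha=0.1, beta=0.5):
    """One synchronous update of all users' demands."""
    congested = sum(demands) > capacity
    if congested:
        return [d * beta for d in demands]       # multiplicative decrease
    return [d + alpha for d in demands]          # additive increase

demands = [0.1, 0.4, 0.9]
for _ in range(200):
    demands = aimd_round(demands, capacity=3.0)
print([round(d, 2) for d in demands])   # demands oscillate around a fair share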
Speaker: Ulle Endriss (ILLC, University of Amsterdam)
Title: Collective Annotation: From Crowdsourcing to Social Choice
Crowdsourcing is an important tool, e.g., in computational linguistics and computer vision, to efficiently label large amounts of data using nonexpert annotators. The individual annotations collected then need to be aggregated into a single collective annotation that can serve as a new gold standard. In this talk, I will introduce the framework of collective annotation, in which we view this problem of aggregation as a problem of social choice. I will present a formal model for collective annotation in which we can express desirable properties of diverse aggregation methods as axioms, and I will report on the empirical performance of several such methods on annotation tasks in computational linguistics, using data we collected by means of crowdsourcing. The talk is based on joint work with Raquel Fernandez, Justin Kruger and Ciyang Qing. Our datasets and related papers are available at http://www.illc.uva.nl/Resources/CollectiveAnnotation.
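For concreteness, here is a small Python sketch of the simplest aggregation method one might use in this framework, item-wise plurality over the annotators' labels; the data and names are illustrative, and the talk considers more refined methods.

# Sketch of the simplest aggregation method for collective annotation:
# item-wise plurality (majority) over the labels supplied by the annotators.
# Annotator and item names are illustrative.
from collections import Counter

def plurality_annotation(annotations):
    """annotations: dict item -> list of labels given by annotators."""
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

data = {
    "sentence_1": ["yes", "yes", "no"],
    "sentence_2": ["no", "no", "yes"],
}
print(plurality_annotation(data))   # {'sentence_1': 'yes', 'sentence_2': 'no'}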
Speaker: Ioannis Caragiannis (University of Patras)
Title: When do noisy votes reveal the truth?
The main conceptual question considered in this talk is whether voting methods can be used in order to learn. In particular, assuming that votes are noisy estimates of a true ranking of the alternatives, how many votes does a voting rule need to reconstruct the true ranking? We define the family of pairwise-majority consistent rules, and show that for all rules in this family the number of samples required from the Mallows noise model is logarithmic in the number of alternatives, and that no rule can do asymptotically better (while some rules like plurality do much worse). Taking a more normative point of view, we consider voting rules that surely return the true ranking as the number of samples tends to infinity (we call this property accuracy in the limit); this allows us to move to a higher level of abstraction. We study families of noise models that are parameterized by distance functions, and present voting rules that are accurate in the limit for all noise models in such general families. We characterize the distance functions that induce noise models for which pairwise-majority consistent rules are accurate in the limit, and provide a similar result for another novel family of position-dominance consistent rules. These characterizations capture three well-known distance functions.
Joint work with Ariel D. Procaccia and Nisarg Shah (Carnegie Mellon University).
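As an illustration of the sample-complexity question (not code from the paper), the Python sketch below samples votes from the Mallows model via the repeated insertion method and estimates how often the pairwise-majority relation recovers the true ranking as the number of votes grows; the dispersion phi and the alternative set are illustrative parameters.

# Simulation sketch: sample votes from the Mallows model (repeated insertion
# method) and check how often the pairwise-majority relation recovers the
# true ranking as the number of votes grows. Illustrative parameters only.
import random

def sample_mallows(truth, phi):
    """One vote: repeated insertion method, dispersion phi in (0, 1]."""
    vote = []
    for i, alt in enumerate(truth, start=1):
        weights = [phi ** (i - j) for j in range(1, i + 1)]  # positions 1..i
        pos = random.choices(range(i), weights=weights)[0]
        vote.insert(pos, alt)
    return vote

def pairwise_majority_matches(votes, truth):
    """True if, for every pair, a strict majority of votes agrees with the truth."""
    for i in range(len(truth)):
        for j in range(i + 1, len(truth)):
            a, b = truth[i], truth[j]
            wins = sum(1 for v in votes if v.index(a) < v.index(b))
            if 2 * wins <= len(votes):
                return False
    return True

truth, phi = ["a", "b", "c", "d"], 0.7
for n in (5, 25, 125):
    hits = sum(pairwise_majority_matches(
        [sample_mallows(truth, phi) for _ in range(n)], truth) for _ in range(200))
    print(n, hits / 200)   # recovery rate improves with the number of votes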
Speaker: Marcus Pivato (Université de Cergy-Pontoise)
Title: Epistemic social choice with correlated and heterogeneous voters
The Condorcet Jury Theorem (CJT) and the Wisdom of Crowds (WoC) Theorem assert that a democracy with a large number of voters has a very high probability of finding the correct answer to a factual question. However, these results make two assumptions:
1. All voters have identical competency.
2. The errors of different voters are not correlated.
Neither of these assumptions is realistic. Thus, there is a large body of literature seeking to extend the CJT to correlated and/or heterogeneous voters. However, this literature deals only with dichotomous decisions, not with vector-valued decisions like the WoC, or with other epistemic social choice models.
I develop a general asymptotic theory of epistemic social choice with correlated, heterogeneous voters. This includes the classic CJT and WoC theorem as special cases, but it also applies to other epistemic social choice models, such as maximum-likelihood voting rules (e.g. the Kemeny rule) and the aggregation of probabilistic beliefs. The correlation structure I consider is very general. Special cases include voters arranged in a social network, societies divided into different subcultures, and societies with opinion leaders and followers.
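A toy Python simulation of the kind of setting discussed here, with heterogeneous competences and correlation induced by an opinion leader whom each voter follows with some probability; all parameter values are illustrative assumptions, not taken from the talk.

# Toy simulation of a jury vote with heterogeneous competences and
# correlation induced by an "opinion leader": with some probability a voter
# simply copies the leader's (noisy) signal instead of using her own.
# A vote of True stands for voting for the correct answer.
import random

def majority_correct(n, trials=2000, follow_prob=0.3, leader_comp=0.7):
    correct = 0
    for _ in range(trials):
        competences = [random.uniform(0.55, 0.8) for _ in range(n)]  # heterogeneous
        leader_signal = random.random() < leader_comp                # shared signal
        votes = [
            leader_signal if random.random() < follow_prob           # correlated part
            else (random.random() < competences[i])                  # independent part
            for i in range(n)
        ]
        correct += sum(votes) > n / 2
    return correct / trials

for n in (11, 101, 1001):
    print(n, majority_correct(n))   # accuracy rises with n but is capped by correlation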
Speaker: Piotr Faliszewski (AGH University of Science and Technology)
Title: Distance Rationalizability: Information Merging through Consensus Seeking
Voting can be seen through the lens of information merging. We receive "the information" in the form of the agents' preference orders (where the agents may be people voting for the president, a robot's sensors trying to decide on its location, consumers expressing their desire for particular items, etc.), and our goal is to "merge" this information into a global view (the elected president, the robot's location, the item our company should produce). In this talk we will present the distance rationalizability approach, a framework for defining voting rules that fits particularly well into this information-merging view of voting. There are two major components to the distance rationalizability approach: the notion of a consensus (that is, of an election with a clear, undisputed winner), and the notion of a distance between elections. A voting rule R is distance rationalizable if one can provide a notion of a consensus and a notion of distance such that, for every election E, voting rule R picks the candidate(s) that win in the nearest consensus. We will show that many (in fact, almost all) natural voting rules are distance rationalizable, present classic results on universal distance rationalizability, and argue that distance rationalizations should be judged by the properties of the consensus and distance notions they involve. We will show that many properties of voting rules can be derived from the properties of their underlying rationalizations. Finally, we will show how distance rationalizability allows one to seamlessly create multiple variants of different voting rules, and that it provides connections between apparently very different rules.
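To illustrate the framework on one classic instance: with the strong-unanimity consensus class (all voters report the same ranking) and the swap distance, the resulting distance-rationalizable rule coincides with the Kemeny rule. The brute-force Python sketch below is illustrative only.

# Sketch of the distance rationalizability framework for one classic case:
# the consensus class is "strong unanimity" (all voters report the same
# ranking) and the distance is the total number of swaps needed to turn the
# observed election into the consensus. Brute force; illustrative only.
from itertools import permutations

def swap_distance(r1, r2):
    pos = {a: i for i, a in enumerate(r2)}
    return sum(1 for i in range(len(r1)) for j in range(i + 1, len(r1))
               if pos[r1[i]] > pos[r1[j]])

def dr_winner(votes):
    alternatives = votes[0]
    consensus = min(permutations(alternatives),          # nearest unanimous profile
                    key=lambda r: sum(swap_distance(v, r) for v in votes))
    return consensus[0]                                  # its undisputed winner

votes = [("a", "b", "c"), ("b", "a", "c"), ("a", "c", "b")]
print(dr_winner(votes))   # 'a'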
Speaker: Nhan-Tam Nguyen (University of Dusseldorf)
Title: Approximability of optimal social welfare in multiagent resource allocation with cardinal and ordinal preferences
Multiagent resource allocation deals with the assignment of indivisible and nonshareable goods to agents. Agents express their preferences cardinally in the form of utility functions. Given an allocation, we can aggregate the agents' utility values using the sum (utilitarian social welfare), the minimum (egalitarian social welfare) or the product operator (Nash product). Since in most cases computing allocations with maximum social welfare is NP-hard, we resort to approximation algorithms. We survey recent approximability and inapproximability results on social welfare optimization. After that we focus on the approximation of social welfare in the more restrictive model of scoring allocation rules, where agents' preferences are expressed in ordinal form. Even in this restricted setting, social welfare optimization is still computationally hard. Instead of classical approximation algorithms, we turn to picking sequences, whose advantage is that they avoid preference elicitation. We study the loss incurred by the application of the picking sequence with respect to an optimal allocation. This is in part joint work with Dorothea Baumeister, Sylvain Bouveret, Jérôme Lang, Trung Thanh Nguyen, Jörg Rothe, and Abdallah Saffidine.
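A small Python sketch of a picking sequence and of the welfare it achieves under an illustrative Borda-like scoring vector; the alternating sequence and all names are assumptions made for the example, not the specific rules studied in the talk.

# Sketch of a picking sequence: agents pick items in a fixed order (here the
# balanced sequence 1,2,2,1) and each agent always takes her highest-ranked
# remaining item. Borda-like scores stand in for the scoring vector.

def picking_sequence(rankings, sequence):
    """rankings: one ranking (best first) per agent; sequence: agent indices."""
    remaining = set(rankings[0])
    bundles = [[] for _ in rankings]
    for agent in sequence:
        pick = next(x for x in rankings[agent] if x in remaining)
        bundles[agent].append(pick)
        remaining.remove(pick)
    return bundles

def borda_welfare(rankings, bundles):
    """Utilitarian welfare when an item ranked k-th from the bottom scores k."""
    m = len(rankings[0])
    return sum(m - rankings[agent].index(item)
               for agent, bundle in enumerate(bundles) for item in bundle)

rankings = [["a", "b", "c", "d"], ["b", "a", "d", "c"]]
bundles = picking_sequence(rankings, [0, 1, 1, 0])
print(bundles, borda_welfare(rankings, bundles))   # [['a', 'c'], ['b', 'd']] 12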
Speaker: Yann Chevaleyre (LIPN, Paris-Nord)
Title: Multiagent Allocation of Indivisible Goods: Survey and Open Questions
Traditionally, allocating goods to agents is done by a central authority solving a combinatorial optimization problem. When no such authority exists, agents can negotiate and share resources in a decentralized manner. Over the last 15 years, there has been a great deal of research on this topic, investigating both the efficiency and the fairness of decentralized solutions. This talk will survey this literature and try to make the link between the centralized and decentralized approaches.