Fair division is the problem of dividing one or several goods among a set of agents in a way that satisfies a suitable fairness criterion. Traditionally studied in economics, philosophy, and political science, fair division has drawn considerable attention from the multiagent systems community, a field strongly concerned with how a surplus (or a cost) should be divided among a group of agents.
Arguably, the Shapley value is the single most important contribution to the problem of fair division. It assigns to each agent a share of the resource equal to that agent's expected marginal contribution, and thus implicitly assumes that individual marginal contributions can be objectively computed. In this thesis, we propose a game-theoretic model for sharing a joint reward when the quality of individual contributions is subjective.
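For concreteness, the following minimal sketch computes Shapley values for a toy three-player coalitional game by averaging marginal contributions over all orderings; the characteristic function `worth` is invented purely for illustration and is not a game from the thesis.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Shapley value of each player: its marginal contribution to v,
    averaged over every order in which the grand coalition can form."""
    n = len(players)
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v(with_p) - v(coalition)
            coalition = with_p
    return {p: t / factorial(n) for p, t in totals.items()}

# Toy 3-player game; the values below are assumptions for the example.
worth = {
    frozenset(): 0, frozenset('a'): 1, frozenset('b'): 1, frozenset('c'): 2,
    frozenset('ab'): 4, frozenset('ac'): 3, frozenset('bc'): 3,
    frozenset('abc'): 6,
}
print(shapley_values('abc', lambda s: worth[s]))
```

By efficiency, the three shares sum to the grand-coalition value of 6; here each agent receives exactly 2.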
Specifically, we consider scenarios where a group has been formed and has accomplished a task for which it is granted a reward that must be shared among the group members. After observing their peers' contributions to the task, agents are asked to evaluate one another. Mainly to facilitate the sharing process, agents can also be asked to predict how their peers are evaluated. These subjective opinions are elicited and aggregated by a central, trusted entity, called the mechanism, which is also responsible for sharing the reward based exclusively on the received opinions.
Besides the formal game-theoretic model for sharing rewards based on subjective opinions, we propose three different mechanisms in this thesis. Our first mechanism, the peer-evaluation mechanism, divides the reward proportionally to the evaluations the agents receive from their peers. We show that this mechanism is fair, budget-balanced, individually rational, and strategy-proof, but that it can be vulnerable to collusion.
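The proportional rule is straightforward to sketch. The snippet below is a hedged illustration only: the nested-dictionary layout of `evaluations`, the plain summation of received evaluations, and the equal-split fallback are assumptions made for the example, not the thesis' exact formulation.

```python
def peer_evaluation_shares(reward, evaluations):
    """Divide a reward proportionally to the evaluations each agent
    receives from its peers (self-evaluations are excluded).

    evaluations[i][j] is agent i's evaluation of agent j; this layout
    is an assumption for the sketch, not the thesis' notation.
    """
    agents = list(evaluations)
    # Score of j = sum of the evaluations j receives from everyone else.
    scores = {
        j: sum(evaluations[i][j] for i in agents if i != j)
        for j in agents
    }
    total = sum(scores.values())
    if total == 0:
        # Degenerate case: fall back to an equal split (an assumption).
        return {j: reward / len(agents) for j in agents}
    return {j: reward * scores[j] / total for j in agents}

# Example: three agents evaluating one another on a 0-1 scale.
evals = {
    'a': {'b': 0.8, 'c': 0.6},
    'b': {'a': 0.9, 'c': 0.7},
    'c': {'a': 0.7, 'b': 0.8},
}
print(peer_evaluation_shares(100.0, evals))
```

Because the shares are normalized by the total score, the rule is budget-balanced by construction: the shares always sum to the full reward.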
Our second mechanism, the peer-prediction mechanism, shares the reward based on two factors: the evaluations the agents receive and their truth-telling scores, which the mechanism computes using a strictly proper scoring rule. Under the assumption that agents are Bayesian decision-makers, we show that this mechanism is weakly budget-balanced, individually rational, and incentive-compatible. Further, we present approaches that guarantee that the mechanism is collusion-resistant and fair.
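The abstract does not fix a particular scoring rule, so the sketch below uses the quadratic (Brier) rule, a standard strictly proper rule, to illustrate why a Bayesian agent maximizes its expected score by reporting its true belief; the probabilities in the example are invented.

```python
def quadratic_score(report, outcome):
    """Quadratic (Brier-style) scoring rule for a reported probability
    distribution over discrete outcomes. Strictly proper: expected score
    is uniquely maximized by reporting one's true belief.

    report: dict mapping each outcome to a probability (sums to 1).
    outcome: the outcome that actually occurred.
    """
    return 2 * report[outcome] - sum(p * p for p in report.values())

# If the true frequency of 'good' is 0.7, the truthful report earns a
# strictly higher *expected* score than any misreport.
truthful = {'good': 0.7, 'bad': 0.3}
misreport = {'good': 0.9, 'bad': 0.1}
for rep in (truthful, misreport):
    expected = (0.7 * quadratic_score(rep, 'good')
                + 0.3 * quadratic_score(rep, 'bad'))
    print(rep, '->', round(expected, 3))
```

Running this gives an expected score of 0.58 for the truthful report versus 0.50 for the misreport, which is exactly the property the mechanism exploits to reward truth-telling.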
Our last mechanism, the BTS mechanism, is the only one that elicits both evaluations and predictions from the agents. Like the peer-prediction mechanism, it shares the reward based on the evaluations the agents receive and their truth-telling scores, but it computes the scores with the Bayesian truth serum method, a powerful scoring technique based on the "surprisingly common" criterion. Under the assumptions that agents are Bayesian decision-makers and that the population of agents is large enough that no single evaluation can significantly affect the empirical distribution of evaluations, we show that this mechanism is incentive-compatible, budget-balanced, individually rational, and fair.
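As a hedged illustration of the "surprisingly common" idea, the sketch below implements Bayesian truth serum scores in Prelec's commonly cited form (an information score plus a prediction score); the exact scoring used by the thesis' BTS mechanism may differ in detail, and the answers and predictions are invented.

```python
from math import log, exp

def bts_scores(answers, predictions, alpha=1.0):
    """Bayesian truth serum scores, sketched in Prelec's form.

    answers[i]     : agent i's own answer (a discrete label).
    predictions[i] : agent i's predicted distribution of answers in the
                     population, as {label: probability}.
    An answer scores highly when it is 'surprisingly common', i.e. more
    frequent empirically than the population collectively predicted.
    """
    n = len(answers)
    labels = set(answers) | {k for p in predictions for k in p}
    # Empirical frequency of each answer.
    x_bar = {k: sum(1 for a in answers if a == k) / n for k in labels}
    # Geometric mean of the predicted frequencies.
    y_bar = {
        k: exp(sum(log(max(p.get(k, 0), 1e-9)) for p in predictions) / n)
        for k in labels
    }
    scores = []
    for a, pred in zip(answers, predictions):
        info = log(max(x_bar[a], 1e-9) / y_bar[a])  # surprisingly common
        # Prediction score: how well agent i predicted the empirical
        # distribution (a negative KL divergence, maximized when exact).
        prediction = alpha * sum(
            x_bar[k] * log(max(pred.get(k, 0), 1e-9) / x_bar[k])
            for k in labels if x_bar[k] > 0
        )
        scores.append(info + prediction)
    return scores

# Four agents answering a binary question, with their predictions:
answers = ['good', 'good', 'good', 'bad']
predictions = [
    {'good': 0.6, 'bad': 0.4},
    {'good': 0.5, 'bad': 0.5},
    {'good': 0.7, 'bad': 0.3},
    {'good': 0.4, 'bad': 0.6},
]
print([round(s, 3) for s in bts_scores(answers, predictions)])
```

The large-population assumption in the abstract corresponds to the fact that this score treats the empirical distribution as unaffected by any single agent's report.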
Identifier: oai:union.ndltd.org:WATERLOO/oai:uwspace.uwaterloo.ca:10012/5333
Date: January 2010
Creators: Carvalho, Arthur
Source Sets: University of Waterloo Electronic Theses Repository
Language: English
Type: Thesis or Dissertation