1

Sharing Rewards Based on Subjective Opinions

Carvalho, Arthur, January 2010
Fair division is the problem of dividing one or several goods among a set of agents in a way that satisfies a suitable fairness criterion. Traditionally studied in economics, philosophy, and political science, fair division has drawn considerable attention from the multiagent systems community, since that field is centrally concerned with how a surplus (or a cost) should be divided among a group of agents. Arguably, the Shapley value is the single most important contribution to the problem of fair division: it assigns to each agent a share of the resource equal to that agent's expected marginal contribution. It thus implicitly assumes that individual marginal contributions can be objectively computed.

In this thesis, we propose a game-theoretic model for sharing a joint reward when the quality of individual contributions is subjective. Specifically, we consider scenarios where a group has been formed and has accomplished a task for which it is granted a reward that must be shared among the group members. After observing their peers' contributions to the task, each agent is asked to evaluate the others. Mainly to facilitate the sharing process, agents can also be asked to predict how their peers will be evaluated. These subjective opinions are elicited and aggregated by a central, trusted entity, called the mechanism, which is also responsible for sharing the reward based exclusively on the received opinions.

Besides the formal game-theoretic model for sharing rewards based on subjective opinions, we propose three different mechanisms in this thesis. Our first mechanism, the peer-evaluation mechanism, divides the reward proportionally to the evaluations received by the agents. We show that this mechanism is fair, budget-balanced, individually rational, and strategy-proof, but that it can be collusion-prone.

Our second mechanism, the peer-prediction mechanism, shares the reward by considering two aspects: the evaluations received by the agents and their truth-telling scores. To compute these scores, the mechanism uses a strictly proper scoring rule. Under the assumption that agents are Bayesian decision-makers, we show that this mechanism is weakly budget-balanced, individually rational, and incentive-compatible. Further, we present approaches that guarantee that the mechanism is collusion-resistant and fair.

Our last mechanism, the BTS mechanism, is the only one to elicit both evaluations and predictions from the agents. It considers both the evaluations received by the agents and their truth-telling scores when sharing the reward. To compute the scores, it uses the Bayesian truth serum method, a powerful scoring method based on the "surprisingly common" criterion. Under the assumptions that agents are Bayesian decision-makers and that the population of agents is large enough that a single evaluation cannot significantly affect the empirical distribution of evaluations, we show that this mechanism is incentive-compatible, budget-balanced, individually rational, and fair.
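To make the sharing schemes concrete, here is a minimal Python sketch of a proportional peer-evaluation split, together with the quadratic (Brier) score, one standard example of the strictly proper scoring rules the peer-prediction mechanism relies on. The function names, the non-negative evaluation scale, and the equal-split fallback are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def peer_evaluation_shares(evaluations, reward):
    """Divide `reward` proportionally to the evaluations each agent
    receives from its peers (illustrative sketch, not the thesis's exact rule).

    evaluations[i][j] is agent i's non-negative evaluation of agent j;
    the diagonal is ignored, since agents do not evaluate themselves.
    """
    E = np.asarray(evaluations, dtype=float)
    np.fill_diagonal(E, 0.0)
    received = E.sum(axis=0)          # total evaluation received by each agent
    total = received.sum()
    if total == 0.0:
        # Assumed fallback: equal split when no evaluations were given.
        return np.full(len(E), reward / len(E))
    return reward * received / total  # budget-balanced by construction

def quadratic_score(report, outcome):
    """Quadratic (Brier) score, a standard strictly proper scoring rule:
    S(p, i) = 2 * p[i] - sum_j p[j]^2.

    report: reported probability distribution over evaluation levels.
    outcome: index of the evaluation level that was actually realized.
    """
    p = np.asarray(report, dtype=float)
    return 2.0 * p[outcome] - np.dot(p, p)
```

For example, with three agents and a reward of 100, `peer_evaluation_shares([[0, 4, 2], [3, 0, 5], [4, 1, 0]], 100.0)` splits the reward in proportion to the received totals 7, 5, and 7, so agents 0 and 2 each receive about 36.8 and agent 1 about 26.3. Strict properness of the quadratic score is what makes truthful probability reports uniquely optimal in expectation for a Bayesian decision-maker.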
2

Eliciting and Aggregating Truthful and Noisy Information

Gao, Xi, 21 October 2014
In the modern world, making informed decisions requires obtaining and aggregating relevant information about events of interest. For many political, business, and entertainment events, the information of interest exists only as the opinions, beliefs, and judgments of dispersed individuals, and we can get a complete picture only by putting the separate pieces of information together. Thus, an important first step toward decision making is motivating individuals to reveal their private information and coalescing the separate pieces together. In this dissertation, I study three information elicitation and aggregation methods, using both theoretical and applied approaches: prediction markets, peer prediction mechanisms, and adaptive polling. These methods differ mainly in their assumptions about the participants' behavior, namely whether the participants possess noisy or perfect information and whether they strategically decide what information to reveal.

The first two methods, prediction markets and peer prediction mechanisms, assume that the participants are strategic and have perfect information. Their primary goal is to use carefully designed monetary rewards to incentivize the participants to truthfully reveal their private information. As a result, my studies of these methods focus on understanding to what extent they are incentive compatible in theory and in practice. The last method, adaptive polling, assumes that the participants are not strategic and have noisy information. In this case, the goal is to accurately and efficiently estimate the latent ground truth given the noisy information, and I evaluate experimentally whether this method achieves that goal. I make four main contributions in this dissertation.

First, I theoretically analyze how the participants' knowledge of one another's private information affects their strategic behavior when trading in a prediction market with a finite number of participants. Each participant may trade multiple times in the market, and hence may have an incentive to withhold or misreport his information in order to mislead other participants and capitalize on their mistakes. When the participants' private information is unconditionally independent, we show that at any equilibrium the participants reveal their information as late as possible, which is arguably the worst outcome for the purpose of information aggregation. We also provide insights into the equilibria of such prediction markets when the participants' private information is both conditionally dependent given the outcome of the event and unconditionally dependent.

Second, I theoretically analyze the participants' strategic behavior in a prediction market when a participant has outside incentives to manipulate the market probability. The presence of such outside incentives would seem to damage information aggregation in the market. Surprisingly, when the existence of such incentives is certain and common knowledge, we show that there exist separating equilibria where all of the participants' private information is revealed and fully aggregated into the market probability. Although there also exist pooling equilibria with information loss, we prove that certain separating equilibria are more desirable than many pooling equilibria because they satisfy domination-based belief refinements, maximize the social welfare of the setting, or maximize either participant's total expected payoff. When the existence of the outside incentives is uncertain, trust cannot be established and the separating equilibria no longer exist.

Third, I experimentally investigate participants' behavior toward peer prediction mechanisms, which were proposed to elicit information when no ground truth is observable. While peer prediction mechanisms promise to elicit truthful information by rewarding participants with carefully constructed payments, they also admit uninformative equilibria in which coordinating participants provide no useful information. We conduct the first controlled online experiment of the Jurca and Faltings peer prediction mechanism, engaging the participants in a multiplayer, real-time, repeated game. Using a hidden Markov model to infer players' strategies from their actions, our results show that participants successfully coordinate on uninformative equilibria and that the truthful equilibrium is not focal, even when some uninformative equilibria do not exist or yield lower payoffs. In contrast, most players are consistently truthful in the absence of peer prediction, suggesting that these mechanisms may be harmful when truthful reporting costs about the same as strategic behavior.

Finally, I design and experimentally evaluate an adaptive polling method for aggregating small pieces of imprecise information into an accurate estimate of a latent ground truth. In designing this method, we make two main contributions: (1) our method aggregates the participants' noisy information by using a theoretical model to account for the noise in their contributions, and (2) our method uses an approach inspired by active learning to adaptively choose the query for each participant. We apply this method to the problem of ranking a set of alternatives, each of which is characterized by a latent strength parameter. At each step, adaptive polling collects the result of a pairwise comparison, estimates the strength parameters from the pairwise comparison data, and adaptively chooses the next pairwise comparison question to maximize the expected information gain. Our MTurk experiment shows that our adaptive polling method can effectively incorporate noisy information and improve estimate accuracy over time. Compared to a baseline method, which chooses a random pairwise comparison question at each step, our adaptive method generates more accurate estimates at lower cost. / Engineering and Applied Sciences
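As a rough illustration of the adaptive-polling loop described above, the sketch below pairs a Bradley-Terry model of latent strengths with an entropy-based query selector. Treating the entropy of the predicted comparison outcome as a stand-in for expected information gain, and fitting strengths by plain gradient ascent, are simplifying assumptions on my part; the dissertation's actual noise model, estimation procedure, and query-selection criterion may differ.

```python
import numpy as np

def bt_prob(theta_i, theta_j):
    """Bradley-Terry probability that item i beats item j (assumed noise model)."""
    return 1.0 / (1.0 + np.exp(-(theta_i - theta_j)))

def fit_strengths(n_items, comparisons, lr=0.1, steps=500):
    """Estimate latent strengths by gradient ascent on the Bradley-Terry
    log-likelihood of the observed (winner, loser) comparisons."""
    theta = np.zeros(n_items)
    for _ in range(steps):
        grad = np.zeros(n_items)
        for winner, loser in comparisons:
            p = bt_prob(theta[winner], theta[loser])  # P(winner beats loser)
            grad[winner] += 1.0 - p
            grad[loser] -= 1.0 - p
        theta += lr * grad
        theta -= theta.mean()  # strengths are identifiable only up to a shift
    return theta

def next_query(theta):
    """Choose the pair whose comparison outcome is most uncertain, using
    outcome entropy as a proxy for expected information gain (assumption)."""
    n = len(theta)
    best_pair, best_entropy = None, -1.0
    for i in range(n):
        for j in range(i + 1, n):
            p = bt_prob(theta[i], theta[j])
            h = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
            if h > best_entropy:
                best_pair, best_entropy = (i, j), h
    return best_pair
```

Each polling round would then ask a participant to compare the pair returned by `next_query`, append the reported winner and loser to `comparisons`, and refit; the random baseline mentioned in the abstract corresponds to replacing `next_query` with a uniformly random pair.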