1 |
Three Essays on Learning and Dynamic Coordination Games / 学習と動学調整ゲームに関する三つの小論
Qi, Dengwei 23 March 2023 (has links)
Kyoto University / New-system course doctorate / Doctor of Economics / Degree No. 甲第24382号 / 経博第669号 / 新制||経||303 (University Library) / Graduate School of Economics, Kyoto University / Examining committee: Prof. 関口 格 (chief examiner), Assoc. Prof. 陳 珈惠, Prof. 渡辺 誠 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Economics / Kyoto University / DFAM
2 |
Eliciting and Aggregating Truthful and Noisy Information
Gao, Xi 21 October 2014 (has links)
In the modern world, making informed decisions requires obtaining and aggregating relevant information about events of interest. For many political, business, and entertainment events, the information of interest only exists as opinions, beliefs, and judgments of dispersed individuals, and we can only get a complete picture by putting the separate pieces of information together. Thus, an important first step towards decision making is motivating the individuals to reveal their private information and coalescing the separate pieces of information together.
In this dissertation, I study three information elicitation and aggregation methods, namely prediction markets, peer prediction mechanisms, and adaptive polling, using both theoretical and applied approaches. These methods differ mainly in their assumptions about the participants' behavior: whether the participants possess noisy or perfect information, and whether they strategically decide what information to reveal. The first two methods, prediction markets and peer prediction mechanisms, assume that the participants are strategic and have perfect information. Their primary goal is to use carefully designed monetary rewards to incentivize the participants to truthfully reveal their private information. As a result, my studies of these methods focus on understanding the extent to which they are incentive compatible in theory and in practice. The last method, adaptive polling, assumes that the participants are not strategic and have noisy information. In this case, our goal is to accurately and efficiently estimate the latent ground truth given the noisy information, and we evaluate experimentally whether this goal can be achieved.
I make four main contributions in this dissertation. First, I theoretically analyze how the participants' knowledge of one another's private information affects their strategic behavior when trading in a prediction market with a finite number of participants. Each participant may trade multiple times in the market, and hence may have an incentive to withhold or misreport his information in order to mislead other participants and capitalize on their mistakes. When the participants' private information is unconditionally independent, we show that the participants reveal their information as late as possible in any equilibrium, which is arguably the worst outcome for the purpose of information aggregation. We also provide insights on the equilibria of such prediction markets when the participants' private information is both conditionally and unconditionally dependent given the outcome of the event.
Second, I theoretically analyze the participants' strategic behavior in a prediction market when a participant has outside incentives to manipulate the market probability. The presence of such outside incentives would seem to damage the information aggregation in the market. Surprisingly, when the existence of such incentives is certain and common knowledge, we show that there exist separating equilibria where all the participants' private information is revealed and fully aggregated into the market probability. Although there also exist pooling equilibria with information loss, we prove that certain separating equilibria are more desirable than many pooling equilibria because the separating equilibria satisfy domination based belief refinements, maximize the social welfare of the setting, or maximize either participant's total expected payoff. When the existence of the outside incentives is uncertain, trust cannot be established and the separating equilibria no longer exist.
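The market probability referred to above is typically maintained by an automated market maker; a standard choice in the prediction-market literature is Hanson's logarithmic market scoring rule (LMSR). The sketch below is a generic LMSR illustration of how trades move prices and incur costs, not necessarily the market model used in these chapters.

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)),
    computed with the usual max-shift for numerical stability."""
    m = max(q)
    return b * (m / b + math.log(sum(math.exp((qi - m) / b) for qi in q)))

def lmsr_price(q, i, b=100.0):
    """Instantaneous market probability of outcome i."""
    m = max(q)
    exps = [math.exp((qi - m) / b) for qi in q]
    return exps[i] / sum(exps)

# With no shares outstanding, a binary market starts at probability 1/2.
q0 = [0.0, 0.0]
print(lmsr_price(q0, 0))  # 0.5

# Buying 20 shares of outcome 0 raises its price; the trader pays the
# difference in the cost function.
q1 = [20.0, 0.0]
cost = lmsr_cost(q1) - lmsr_cost(q0)
print(round(lmsr_price(q1, 0), 3), round(cost, 2))
```

The liquidity parameter `b` controls how much a trade of a given size moves the price; both the parameter value and the trade sizes here are arbitrary.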
Third, I experimentally investigate participants' behavior under peer prediction mechanisms, which were proposed to elicit information when no ground truth is observable. While peer prediction mechanisms promise to elicit truthful information by rewarding participants with carefully constructed payments, they also admit uninformative equilibria where coordinating participants provide no useful information. We conduct the first controlled online experiment of the Jurca and Faltings peer prediction mechanism, engaging the participants in a multiplayer, real-time and repeated game. Using a hidden Markov model to capture players' strategies from their actions, our results show that participants successfully coordinate on uninformative equilibria and that the truthful equilibrium is not focal, even when some uninformative equilibria do not exist or result in lower payoffs. In contrast, most players are consistently truthful in the absence of peer prediction, suggesting that these mechanisms may be harmful when truthful reporting has a cost similar to that of strategic behavior.
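The strategy-inference step above rests on standard hidden Markov machinery. Below is a minimal sketch of the forward algorithm on a toy two-state model (state 0 roughly "truthful", state 1 roughly "coordinating"); all transition, emission, and initial probabilities are purely illustrative, not estimates from the experiment.

```python
# Toy HMM: latent strategy states emit observable reports.
A = [[0.9, 0.1],   # transition probabilities between latent strategies
     [0.2, 0.8]]
B = [[0.8, 0.2],   # emission probabilities: P(observed report | strategy)
     [0.3, 0.7]]
pi = [0.5, 0.5]    # initial strategy distribution

def forward(obs):
    """Forward algorithm: likelihood of an observed action sequence."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(2)) * B[t][o]
                 for t in range(2)]
    return sum(alpha)

# Likelihood of two "truthful-looking" reports followed by three
# "coordinated-looking" ones:
print(forward([0, 0, 1, 1, 1]))
```

Fitting such a model to real action sequences (as the experiment does) would additionally require estimating `A`, `B`, and `pi`, e.g. with Baum-Welch.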
Finally, I design and experimentally evaluate an adaptive polling method for aggregating small pieces of imprecise information together to produce an accurate estimate of a latent ground truth. In designing this method, we make two main contributions: (1) Our method aggregates the participants' noisy information by using a theoretical model to account for the noise in the participants' contributed information. (2) Our method uses an active learning inspired approach to adaptively choose the query for each participant.
We apply this method to the problem of ranking a set of alternatives, each of which is characterized by a latent strength parameter. At each step, adaptive polling collects the result of a pairwise comparison, estimates the strength parameters from the pairwise comparison data, and adaptively chooses the next pairwise comparison question to maximize the expected information gain. Our MTurk experiment shows that our adaptive polling method can effectively incorporate noisy information and improve estimation accuracy over time. Compared to a baseline method, which chooses a random pairwise comparison question at each step, our adaptive method generates more accurate estimates at lower cost. / Engineering and Applied Sciences
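The adaptive loop described above can be sketched with a Bradley-Terry comparison model and a simple uncertainty-based proxy for expected information gain; the dissertation's exact noise model and query-selection criterion may differ.

```python
import math
import random

def bt_prob(s_i, s_j):
    """Bradley-Terry probability that item i beats item j."""
    return 1.0 / (1.0 + math.exp(s_j - s_i))

def update(strengths, i, j, winner, lr=0.1):
    """Stochastic-gradient update of the latent-strength estimates
    after observing one pairwise comparison."""
    p = bt_prob(strengths[i], strengths[j])
    y = 1.0 if winner == i else 0.0
    strengths[i] += lr * (y - p)
    strengths[j] -= lr * (y - p)

def choose_query(strengths):
    """Adaptive query selection: ask about the pair whose outcome is
    most uncertain (win probability nearest 1/2), a cheap stand-in
    for maximizing expected information gain."""
    n = len(strengths)
    best_gap, best_pair = float("inf"), (0, 1)
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs(bt_prob(strengths[i], strengths[j]) - 0.5)
            if gap < best_gap:
                best_gap, best_pair = gap, (i, j)
    return best_pair

# Simulate noisy respondents answering adaptively chosen comparisons
# about four alternatives with hidden true strengths 0 < 1 < 2 < 3.
random.seed(0)
true_s = [0.0, 1.0, 2.0, 3.0]
est = [0.0] * 4
for _ in range(400):
    i, j = choose_query(est)
    winner = i if random.random() < bt_prob(true_s[i], true_s[j]) else j
    update(est, i, j, winner)
print(sorted(range(4), key=lambda k: est[k]))
```

A random baseline would replace `choose_query` with a uniformly drawn pair; the adaptive rule concentrates comparisons where they are most informative.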
3 |
The Role of Feedback in the Assimilation of Information in Prediction Markets
Jolly, Richard Donald 01 January 2011 (has links)
Leveraging the knowledge of an organization is an ongoing challenge that has given rise to the field of knowledge management. Yet, despite spending enormous sums of organizational resources on Information Technology (IT) systems, executives recognize there is much more knowledge to harvest. Prediction markets are emerging as one tool to help extract this tacit knowledge and make it operational. Yet prediction markets, like other markets, are subject to pathologies (e.g., bubbles and crashes) which compromise their accuracy and may discourage organizational use. The techniques of experimental economics were used to study the characteristics of prediction markets. Empirical data were gathered from an online asynchronous prediction market. Participants allocated tickets based on private information and, depending on the market type, public information indicative of how prior participants had allocated their tickets. The experimental design featured three levels of feedback presented to the participants: no feedback, percentages of total allocated tickets, and frequencies of total allocated tickets. The research supported the hypothesis that information assimilation in feedback markets is composed of two mechanisms, information collection and aggregation, defined as follows. Collection: the compilation of dispersed information; individuals using their own private information make judgments and act accordingly in the market. Aggregation: the market's judgment on the implications of this gathered information, an inductive process; this effect comes from participants integrating public information with their private information in their decision process. Information collection was studied in isolation in no-feedback markets, and the hypothesis that markets outperform the average of their participants was supported. The hypothesis that, with the addition of feedback, the process of aggregation would be present was also supported.
Aggregation was shown to create agreement in markets (as measured by entropy) and to drive market results closer to correct values (the known probabilities). However, the research also supported the hypothesis that aggregation can lead to information mirages, creating a market bubble. The research showed that the presence and type of feedback can be used to modulate market performance. Adding feedback, or more informative feedback, increased the market's precision at the expense of accuracy. The research supported the hypotheses that these changes were due to the inductive aggregation process, which creates agreement (increasing precision) but also occasionally generates information mirages (which reduce accuracy). The way individual participants use information to make allocations was characterized. In feedback markets, the fit of participants' responses to various decision models demonstrated great variety: the models ranged from little use of information (e.g., MaxiMin), through use of only private information (e.g., allocation in proportion to probabilities) and use of only public information (e.g., allocation in proportion to public distributions), to integration of public and private information. Analysis of all feedback market responses using multivariate regression also supported the hypothesis that public and private information were being integrated by some participants. These subtle information-integration results are in contrast to the distinct differences seen in markets with varying levels of feedback, illustrating that the differences in market performance with feedback are an emergent phenomenon (i.e., one that could not be predicted by analyzing the behavior of individuals in different market situations). The results of this study have increased our collective knowledge of market operation and have revealed methods that organizations can use in the construction and analysis of prediction markets.
In some situations markets without feedback may be a preferred option. The research supports the hypothesis that information aggregation in feedback markets can be simultaneously responsible for beneficial information processing as well as harmful information mirage induced bubbles. In fact, a market subject to mirage prone data resembles a Prisoner's Dilemma where individual rationality results in collective irrationality.
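The entropy-based agreement measure used in this study can be computed directly from a market's ticket allocations; a minimal illustration follows (the bin counts are hypothetical, not data from the experiment).

```python
import math

def entropy(allocations):
    """Shannon entropy (bits) of a ticket-allocation distribution;
    lower entropy means more agreement among participants."""
    total = sum(allocations)
    probs = [a / total for a in allocations if a > 0]
    return -sum(p * math.log2(p) for p in probs)

# Dispersed vs. concentrated allocations over four outcome bins:
print(entropy([25, 25, 25, 25]))          # 2.0 bits: maximal disagreement
print(round(entropy([85, 5, 5, 5]), 3))   # well below 2: strong agreement
```

On this measure, aggregation (participants piling onto the publicly favored bin) lowers entropy whether the consensus is correct or a mirage, which is exactly why precision can rise while accuracy falls.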
4 |
Iterative reasoning and markets: three experiments
Choo, Chang Yuan Lawrence January 2014 (has links)
We present in this thesis three distinct experiments studying issues in behavioural economics and financial markets. In the first paper, we study the level-k reasoning model in an experimental extension of Arad and Rubinstein's (2012) "11-20 Money Request Game". In the second paper, we introduce an experimental design where traders can buy or sell the rights to perform complex decision tasks. The design seeks to test the allocative efficiency of markets. In the third paper, we introduce an Arrow-Debreu market where traders have diverse and partial information about the true state of nature. The design seeks to test the Rational Expectations model's prediction that all relevant information will be aggregated into market prices.
5 |
Container Line Supply Chain security analysis under complex and uncertain environment
Tang, Dawei January 2012 (has links)
Container Line Supply Chain (CLSC), which transports cargo in containers and accounts for approximately 95 percent of world trade, is a dominant mode of world cargo transportation due to its high efficiency. However, the operation of a typical CLSC, which may involve as many as 25 different organizations spread all over the world, is very complex, and at the same time, it is estimated that only 2 percent of imported containers are physically inspected in most countries. This complexity, together with insufficient prevention measures, makes CLSCs vulnerable to many threats, such as cargo theft, smuggling, stowaways, terrorist activity, and piracy. Furthermore, as disruptions caused by a security incident at one point along a CLSC may also cause disruptions to other organizations involved in the same CLSC, the consequences of security incidents for a CLSC may be severe. Therefore, security analysis becomes essential to ensure the smooth operation of CLSCs and, more generally, the smooth development of the world economy. The literature review shows that research on CLSC security only began recently, especially after the terrorist attack of September 11th, 2001, and most of it either focuses on developing policies, standards, and regulations to improve CLSC security from a general viewpoint, or discusses specific security issues in CLSC in a descriptive and subjective way. There is a lack of analytical security analysis providing specific, feasible and practical assistance for people in governments, organizations and industries to improve CLSC security. Given this situation, this thesis develops a set of analytical models for security analysis in CLSC to provide practical assistance to people in maintaining and improving CLSC security.
In addition, through the development of the models, the thesis also intends to provide methodologies for general risk/security analysis problems under complex and uncertain environments, and for some general complex decision problems under uncertainty. Specifically, the research conducted in the thesis is mainly aimed at answering the following two questions: how to assess the security level of a CLSC in an analytical and rational way, and, according to the security assessment result, how to develop balanced countermeasures to improve the security level of a CLSC under the constraints of limited resources. For security assessment, factors influencing CLSC security as a whole are identified first and then organized into a general hierarchical model according to the relations among the factors. The general model is then refined for security assessment of a port storage area along a CLSC against cargo theft. Further, according to the characteristics of CLSC security analysis, the Belief Rule-base Inference Methodology using the Evidential Reasoning approach (RIMER) is selected as the assessment tool, due to its ability to accommodate and handle different forms of information with different kinds of uncertainty involved in both the measurement of the identified factors and the measurement of the relations among them. To lay a basis for the application of RIMER, a new process is introduced to generate belief degrees in Belief Rule Bases (BRBs), with the aim of reducing bias and inconsistency in the generation process. Based on the results of CLSC security assessment, a novel resource allocation model for security improvement is also proposed within the framework of RIMER to optimally improve CLSC security under the constraints of available resources.
In addition, the security assessment process reveals that RIMER has limitations in dealing with the different information aggregation patterns identified in the proposed security assessment model, and with different kinds of incompleteness in CLSC security assessment. Correspondingly, under the framework of RIMER, novel methods are proposed to accommodate and handle these different information aggregation patterns and kinds of incompleteness. To validate the proposed models, several case studies are conducted using data collected from different ports in both the UK and China. From a methodological point of view, the ideas, processes and models proposed in the thesis regarding BRB generation, optimal resource allocation based on security assessment results, information aggregation pattern identification and handling, and incomplete information handling can be applied not only to CLSC security analysis, but also to other risk and security analysis problems and, more generally, to some complex decision problems. From a practical point of view, the models proposed in the thesis can help people in governments, organizations, and industries related to CLSC develop best practices to ensure secure operation, assess the security levels of the organizations involved in a CLSC and of the whole CLSC, and allocate limited resources to improve the security of organizations in a CLSC. The potential beneficiaries of the research include governmental organizations, international/regional organizations, industrial organizations, classification societies, consulting companies, companies involved in a CLSC, companies with cargo to be shipped, and individual researchers in relevant areas.
6 |
Essays in Game Theory Applied to Political and Market Institutions
Bouton, Laurent 15 November 2009 (has links)
My thesis contains essays on voting theory, market structures and fiscal federalism: (i) One Person, Many Votes: Divided Majority and Information Aggregation, (ii) Runoff Elections and the Condorcet Loser, (iii) On the Influence of Rankings when Product Quality Depends on Buyer Characteristics, and (iv) Redistributing Income under Fiscal Vertical Imbalance.
(i) One Person, Many Votes: Divided Majority and Information Aggregation (joint with Micael Castanheira)
In elections, majority divisions pave the way to focal manipulations and coordination failures, which can lead to the victory of the wrong candidate. This paper shows how this flaw can be addressed if voter preferences over candidates are sensitive to information. We consider two potential sources of divisions: majority voters may have similar preferences but opposite information about the candidates, or opposite preferences. We show that when information is the source of majority divisions, Approval Voting features a unique equilibrium with full information and coordination equivalence. That is, it produces the same outcome as if both information and coordination problems could be resolved. Other electoral systems, such as Plurality and Two-Round elections, do not satisfy this equivalence. The second source of division is opposite preferences. Whenever the fraction of voters with such preferences is not too large, Approval Voting still satisfies full information and coordination equivalence.
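For concreteness, Approval Voting lets each voter approve any subset of candidates, and the candidate with the most approvals wins. A minimal tally on hypothetical ballots, showing how a divided majority can still defeat a jointly disliked candidate:

```python
from collections import Counter

def approval_winner(ballots):
    """Each ballot is a set of approved candidates; the candidate with
    the most approvals wins (ties broken alphabetically here)."""
    tally = Counter(c for ballot in ballots for c in ballot)
    return max(sorted(tally), key=lambda c: tally[c])

# Two majority factions (A-leaning and B-leaning) double-approve to
# block the minority candidate C:
ballots = [{"A", "B"}, {"A", "B"}, {"A"}, {"B"}, {"C"}, {"C"}]
print(approval_winner(ballots))  # A (A and B each get 3 approvals > C's 2)
```

Under Plurality, each ballot would name a single candidate, and the same divided majority could split its votes and let C win; the ability to approve multiple candidates is what drives the equivalence result described above.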
(ii) Runoff Elections and the Condorcet Loser
A crucial component of Runoff electoral systems is the threshold fraction of votes above which a candidate wins outright in the first round. I analyze the influence of this threshold on the voting equilibria in three-candidate Runoff elections. I demonstrate the existence of an Ortega Effect which may unduly favor dominated candidates and thus lead to the election of the Condorcet Loser in equilibrium. The reason is that, contrary to commonly held beliefs, lowering the threshold for first-round victory may actually induce voters to express their preferences excessively. I also extend Duverger's Law to Runoff elections with any threshold below, equal to, or above 50%. Therefore, Runoff elections are plagued with inferior equilibria that induce either too high or too low expression of preferences.
(iii) On the Influence of Rankings when Product Quality Depends on Buyer Characteristics
Information on product quality is crucial for buyers to make sound choices. For "experience products", this information is not available at the time of the purchase: it is only acquired through consumption. For many experience products, there exist institutions that provide buyers with information about quality. It is commonly believed that such institutions help consumers to make better choices and are thus welfare improving.
The quality of various experience products depends on the characteristics of the buyers. For instance, unlike the quality of cars, business school quality depends on buyer (i.e., student) characteristics. Indeed, one of the main inputs of a business school is its enrolled students. The choice of buyers for such products then has some features of a coordination problem: ceteris paribus, a buyer prefers to buy a product consumed by buyers with "good" characteristics. This coordination dimension leads to inefficiencies when buyers coordinate on products of lower "intrinsic" quality. When the quality of products depends on buyer characteristics, information about product quality can reinforce such a coordination problem. Indeed, even though reported high quality need not mean high intrinsic quality, rational buyers pay attention to this information because they prefer high-quality products, no matter the reason for the high quality. Information about product quality may then induce buyers to coordinate on products of low intrinsic quality.
In this paper, I show that, for experience products whose quality depends on the characteristics of buyers, more information is not necessarily better. More precisely, I prove that more information about product quality may lead to a Pareto deterioration, i.e., all buyers may be worse off as a result.
(iv) Redistributing Income under Fiscal Vertical Imbalance (joint with Marjorie Gassner and Vincenzo Verardi)
From the literature on decentralization, it appears that the fiscal vertical imbalance (i.e. the dependence of subnational governments on national government revenues to support their expenditures) is somehow inherent to multi-level governments. Using a stylized model we show that this leads to a reduction of the extent of redistributive fiscal policies if the maximal size of government has been reached. To test for this empirically, we use some high quality data from the LIS dataset on individual incomes. The results are highly significant and point in the direction of our theoretical predictions.
7 |
[en] INFORMATIONALLY EFFICIENT MARKETS UNDER RATIONAL INATTENTION / [pt] MERCADOS INFORMACIONALMENTE EFICIENTES SOB DESATENÇÃO RACIONAL
ANDRE MEDEIROS SZTUTMAN 19 October 2017 (has links)
[en] We propose a new solution for the Grossman and Stiglitz [1980] paradox. By substituting a rational inattention restriction for their information structure, we show that prices can reflect all the information available without breaking the incentives of market participants to gather information. This model reframes the efficient market hypothesis and reconciles opposing views: prices are fully revealing, but only for those who are sufficiently smart. Finally, we develop a method for postulating and solving Walrasian general equilibrium models with rationally inattentive agents, circumventing previous tractability assumptions.
8 |
Mispricing and Its Correction under Thin Market Trading: Evidence from Prediction Markets
吳偉劭 (Wu, Wei-Shao) Unknown Date (has links)
Prediction markets have attracted growing academic attention in recent years because prices aggregate dispersed information and often deliver good forecasting performance. For several reasons, however, a prediction market usually struggles to attract a large number of participants. For example, to avoid violating legal restrictions, trading is often conducted with virtual currency instead of real money, and without the profit incentive of real money it is hard to attract participants; even where real-money trading is possible, an experimental topic that does not interest the general public also draws few participants. Under these circumstances, the market inevitably faces the problem of thin trading. Although much of the literature claims that thin markets do not impair the forecasting accuracy of prediction markets, this does not mean the problem can be ignored.
Using a prediction market for the 2006 Taipei and Kaohsiung mayoral elections as our empirical basis, we show how thin trading affects prices, and how these effects can lead to mistaken valuations of, and inferences about, future events. Avoiding such mistakes requires removing the distortions induced by thin trading, so we propose five methods for eliminating these distortions and select the best among them. Consistent with the favorable findings in the literature, we again obtain accurate forecasts from the prediction market, while also showing that the claim that thin markets do not impair forecasting accuracy holds only after the price distortions caused by thin trading have been removed.
9 |
Essays in game theory applied to political and market institutions
Bouton, Laurent 15 June 2009 (has links)
My thesis contains essays on voting theory, market structures and fiscal federalism: (i) One Person, Many Votes: Divided Majority and Information Aggregation, (ii) Runoff Elections and the Condorcet Loser, (iii) On the Influence of Rankings when Product Quality Depends on Buyer Characteristics, and (iv) Redistributing Income under Fiscal Vertical Imbalance. (The individual essay abstracts duplicate those of entry 6 above.) / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
10 |
Mitigating Congestion by Integrating Time Forecasting and Realtime Information Aggregation in Cellular Networks
Chen, Kai 11 March 2011 (has links)
An iterative travel time forecasting scheme, named the Advanced Multilane Prediction based Real-time Fastest Path (AMPRFP) algorithm, is presented in this dissertation. This scheme is derived from the conventional kernel estimator based prediction model by associating real-time nonlinear impacts caused by neighboring arcs' traffic patterns with the historical traffic behaviors. The AMPRFP algorithm is evaluated by predicting the travel time of congested arcs in the urban area of Jacksonville City. Experiment results illustrate that the proposed scheme significantly reduces both the relative mean error (RME) and the root-mean-squared error (RMSE) of the predicted travel time. To obtain high-quality real-time traffic information, which is essential to the performance of the AMPRFP algorithm, a data clean scheme enhanced empirical learning (DCSEEL) algorithm is also introduced. This novel method investigates the correlation between distance and direction in the geometrical map, which is not considered in existing fingerprint localization methods. Specifically, empirical learning methods are applied to minimize the error in the estimated distance, and a direction filter is developed to clean joints that negatively influence localization accuracy. Synthetic experiments in urban, suburban and rural environments are designed to evaluate the performance of the DCSEEL algorithm in determining the cellular probe's position. The results show that the probe's localization accuracy can be notably improved by the DCSEEL algorithm. Additionally, a new fast correlation technique is developed to overcome the time-efficiency problem of the existing correlation-algorithm-based floating car data (FCD) technique.
The matching process is transformed into a one-dimensional (1-D) curve matching problem, and the Fast Normalized Cross-Correlation (FNCC) algorithm is introduced to supersede the Pearson product-moment correlation coefficient (PMCC) algorithm in order to meet the real-time requirement of the FCD method. The fast correlation technique shows a significant improvement in reducing the computational cost without affecting the accuracy of the matching process.
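The 1-D curve matching step can be illustrated with plain normalized cross-correlation: slide a template curve along a longer signal and score each alignment. This is a generic sketch, not the dissertation's FNCC implementation, which achieves its speedup with a faster computation of the same scores.

```python
import math

def _mean(xs):
    return sum(xs) / len(xs)

def _std(xs):
    m = _mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def ncc(template, signal):
    """Slide the template over the signal and return (best_offset,
    best_score) under the normalized cross-correlation score, which
    equals 1.0 at a perfect (positive-affine) match."""
    n = len(template)
    tm, ts = _mean(template), _std(template)
    t = [(x - tm) / (ts * n) for x in template]
    best_k, best_score = 0, float("-inf")
    for k in range(len(signal) - n + 1):
        w = signal[k:k + n]
        ws = _std(w)
        if ws == 0:
            score = 0.0  # constant window: correlation undefined, score 0
        else:
            wm = _mean(w)
            score = sum(ti * (wi - wm) / ws for ti, wi in zip(t, w))
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score

# A noiseless copy of the template embedded at offset 7 is recovered
# with score 1.0.
signal = [0.0] * 30
template = [1.0, 3.0, 2.0, 5.0, 4.0]
signal[7:12] = template
offset, score = ncc(template, signal)
print(offset, round(score, 6))  # 7 1.0
```

The direct loop above costs O(len(signal) * len(template)); fast variants precompute running sums so that each window's mean and variance come at O(1), which is the kind of saving the real-time FCD requirement calls for.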