1

Voting and information aggregation. Theories and experiments in the tradition of Condorcet

Rata, Cristina 29 July 2002 (has links)
This thesis offers a justification for the use of plurality rule as an optimal way to aggregate information for societies composed of individuals with common interests but diverse information. The motivation of this thesis follows a line of research in social choice that dates back to the French mathematician and political philosopher Jean-Antoine-Nicolas de Caritat, Marquis de Condorcet (1743-1794). In his Essai sur l'application de l'analyse à la probabilité des decisions rendues à la pluralité des voix (1785), Condorcet posited that social justice would be secured if nations adopted political constitutions that facilitate accurate group judgments, and argued that majority rule would be the constitutional tool most likely to achieve this goal. Following this line of research, the first part of this thesis discusses the conditions under which plurality rule provides society with the method most likely to reach accurate group judgments. In this part, as in Condorcet's work, it is assumed that voters act honestly.
Natural developments in the theory of voting, which brought the issues of incentives and strategic interaction into group decision making, have been used to challenge the assumption of honest voting. Austen-Smith and Banks (1996) were the first to notice that the combination of private information and common interests in the framework proposed by Condorcet might create an incentive for voters to act strategically. This observation led them to ask whether honest voting is compatible with Nash equilibrium behavior in the game induced by majority rule. The second part of this thesis takes up this problem by studying voters' behavior in the game induced by plurality rule. Interest in real-world institutions, for which voting is an important element, has long raised the question of whether voters behave as predicted by theoretical models. Another question is how to deal with the complexity of the strategic environment. The second part of this thesis calls for answers to these questions. Since the literature on voting experiments seems to provide reasonable answers to them, the third part of this thesis uses laboratory experiments to test the implications of the second part.
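As background for the Condorcet tradition invoked above, the classical jury-theorem calculation can be written out explicitly; the symbols n and p are illustrative and are not taken from the thesis.

```latex
% Probability that a simple majority of n (odd) honest voters is correct,
% when each voter is independently correct with probability p > 1/2:
P_n \;=\; \sum_{k=\lceil n/2 \rceil}^{n} \binom{n}{k} \, p^{k} (1-p)^{n-k},
\qquad P_n \to 1 \ \text{as} \ n \to \infty .
% Example: p = 0.6, n = 3 gives P_3 = 3(0.6)^2(0.4) + (0.6)^3 = 0.648.
```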
2

Three Essays on Learning And Dynamic Coordination Games / 学習と動学調整ゲームに関する三つの小論

Qi, Dengwei 23 March 2023 (has links)
Kyoto University / New-system doctoral program / Doctor of Economics / 甲第24382号 / 経博第669号 / 新制||経||303(附属図書館) / Department of Economics, Graduate School of Economics, Kyoto University / (Chief examiner) Professor 関口 格, Associate Professor 陳 珈惠, Professor 渡辺 誠 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Economics / Kyoto University / DFAM
3

Eliciting and Aggregating Truthful and Noisy Information

Gao, Xi 21 October 2014 (has links)
In the modern world, making informed decisions requires obtaining and aggregating relevant information about events of interest. For many political, business, and entertainment events, the information of interest only exists as opinions, beliefs, and judgments of dispersed individuals, and we can only get a complete picture by putting the separate pieces of information together. Thus, an important first step towards decision making is motivating the individuals to reveal their private information and combining the separate pieces of information. In this dissertation, I use both theoretical and applied approaches to study three information elicitation and aggregation methods: prediction markets, peer prediction mechanisms, and adaptive polling. These methods differ mainly in their assumptions about the participants' behavior, namely whether the participants possess noisy or perfect information and whether they strategically decide on what information to reveal. The first two methods, prediction markets and peer prediction mechanisms, assume that the participants are strategic and have perfect information. Their primary goal is to use carefully designed monetary rewards to incentivize the participants to truthfully reveal their private information. As a result, my studies of these methods focus on understanding to what extent these methods are incentive compatible in theory and in practice. The last method, adaptive polling, assumes that the participants are not strategic and have noisy information. In this case, our goal is to accurately and efficiently estimate the latent ground truth given the noisy information, and we evaluate experimentally whether this goal can be achieved by using this method. I make four main contributions in this dissertation. First, I theoretically analyze how the participants' knowledge of one another's private information affects their strategic behavior when trading in a prediction market with a finite number of participants. Each participant may trade multiple times in the market, and hence may have an incentive to withhold or misreport his information in order to mislead other participants and capitalize on their mistakes. When the participants' private information is unconditionally independent, we show that the participants reveal their information as late as possible at any equilibrium, which is arguably the worst outcome for the purpose of information aggregation. We also provide insights into the equilibria of such prediction markets when the participants' private information is both conditionally and unconditionally dependent given the outcome of the event. Second, I theoretically analyze the participants' strategic behavior in a prediction market when a participant has outside incentives to manipulate the market probability. The presence of such outside incentives would seem to damage the information aggregation in the market. Surprisingly, when the existence of such incentives is certain and common knowledge, we show that there exist separating equilibria where all the participants' private information is revealed and fully aggregated into the market probability. Although there also exist pooling equilibria with information loss, we prove that certain separating equilibria are more desirable than many pooling equilibria because the separating equilibria satisfy domination-based belief refinements, maximize the social welfare of the setting, or maximize either participant's total expected payoff.
When the existence of the outside incentives is uncertain, trust cannot be established and the separating equilibria no longer exist. Third, I experimentally investigate participants' behavior towards peer prediction mechanisms, which were proposed to elicit information without observable ground truth. While peer prediction mechanisms promise to elicit truthful information by rewarding participants with carefully constructed payments, they also admit uninformative equilibria where coordinating participants provide no useful information. We conduct the first controlled online experiment of the Jurca and Faltings peer prediction mechanism, engaging the participants in a multiplayer, real-time and repeated game. Using a hidden Markov model to capture players' strategies from their actions, we find that participants successfully coordinate on uninformative equilibria and that the truthful equilibrium is not focal, even when some uninformative equilibria do not exist or result in lower payoffs. In contrast, most players are consistently truthful in the absence of peer prediction, suggesting that these mechanisms may be harmful when truthful reporting has a similar cost to strategic behavior. Finally, I design and experimentally evaluate an adaptive polling method for aggregating small pieces of imprecise information to produce an accurate estimate of a latent ground truth. In designing this method, we make two main contributions: (1) Our method aggregates the participants' noisy information by using a theoretical model to account for the noise in the participants' contributed information. (2) Our method uses an active-learning-inspired approach to adaptively choose the query for each participant. We apply this method to the problem of ranking a set of alternatives, each of which is characterized by a latent strength parameter. At each step, adaptive polling collects the result of a pairwise comparison, estimates the strength parameters from the pairwise comparison data, and adaptively chooses the next pairwise comparison question to maximize the expected information gain. Our MTurk experiment shows that our adaptive polling method can effectively incorporate noisy information and improve the accuracy of its estimates over time. Compared to a baseline method, which chooses a random pairwise comparison question at each step, our adaptive method generates more accurate estimates at lower cost. / Engineering and Applied Sciences
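The adaptive polling loop described in the last contribution can be sketched in code. This is a hedged illustration only: it assumes a Bradley-Terry model for the latent strengths and replaces the expected-information-gain criterion with a simpler maximum-entropy query heuristic; the function names, parameters, and simulated ground truth are invented for the example and are not taken from the dissertation.

```python
import numpy as np

def bt_prob(theta_i, theta_j):
    """Bradley-Terry probability that item i beats item j."""
    return 1.0 / (1.0 + np.exp(-(theta_i - theta_j)))

def fit_strengths(n_items, comparisons, iters=300, lr=1.0):
    """Crude gradient-ascent MLE of latent strengths from (winner, loser) pairs."""
    theta = np.zeros(n_items)
    for _ in range(iters):
        grad = np.zeros(n_items)
        for w, l in comparisons:
            p = bt_prob(theta[w], theta[l])     # prob. that the observed winner wins
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        theta += lr * grad / max(1, len(comparisons))  # ascend the average log-likelihood
        theta -= theta.mean()                          # pin down the location of the scale
    return theta

def outcome_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def next_query(theta):
    """Ask about the pair whose outcome is currently most uncertain
    (a simple stand-in for maximizing expected information gain)."""
    n = len(theta)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return max(pairs, key=lambda ij: outcome_entropy(bt_prob(theta[ij[0]], theta[ij[1]])))

# Simulate noisy voters answering adaptively chosen pairwise questions.
rng = np.random.default_rng(0)
true_theta = np.array([1.5, 0.5, 0.0, -0.5, -1.5])   # hypothetical ground truth
comparisons, theta_hat = [], np.zeros(5)
for _ in range(60):
    i, j = next_query(theta_hat)
    i_wins = rng.random() < bt_prob(true_theta[i], true_theta[j])
    comparisons.append((i, j) if i_wins else (j, i))
    theta_hat = fit_strengths(5, comparisons)

print("estimated ranking:", np.argsort(-theta_hat))
print("true ranking:     ", np.argsort(-true_theta))
```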
4

The Role of Feedback in the Assimilation of Information in Prediction Markets

Jolly, Richard Donald 01 January 2011 (has links)
Leveraging the knowledge of an organization is an ongoing challenge that has given rise to the field of knowledge management. Yet, despite spending enormous sums of organizational resources on Information Technology (IT) systems, executives recognize there is much more knowledge to harvest. Prediction markets are emerging as one tool to help extract this tacit knowledge and make it operational. Yet prediction markets, like other markets, are subject to pathologies (e.g., bubbles and crashes) which compromise their accuracy and may discourage organizational use. The techniques of experimental economics were used to study the characteristics of prediction markets. Empirical data were gathered from an on-line asynchronous prediction market. Participants allocated tickets based on private information and, depending on the market type, public information indicative of how prior participants had allocated their tickets. The experimental design featured three levels of feedback (no feedback, percentages of total allocated tickets, and frequencies of total allocated tickets) presented to the participants. The research supported the hypothesis that information assimilation in feedback markets is composed of two mechanisms, information collection and aggregation, defined as follows. Collection: the compilation of dispersed information; individuals using their own private information make judgments and act accordingly in the market. Aggregation: the market's judgment on the implications of this gathered information, an inductive process. The latter effect comes from participants integrating public information with their private information in their decision process. Information collection was studied in isolation in no-feedback markets, and the hypothesis that markets outperform the average of their participants was supported. The hypothesis that, with the addition of feedback, the process of aggregation would be present was also supported. Aggregation was shown to create agreement in markets (as measured by entropy) and to drive market results closer to correct values (the known probabilities). However, the research also supported the hypothesis that aggregation can lead to information mirages, creating a market bubble. The research showed that the presence and type of feedback can be used to modulate market performance. Adding feedback, or more informative feedback, increased the market's precision at the expense of accuracy. The research supported the hypotheses that these changes were due to the inductive aggregation process, which creates agreement (increasing precision) but also occasionally generates information mirages (reducing accuracy). The way individual participants use information to make allocations was characterized. In feedback markets, the fit of participants' responses to various decision models showed great variety. The decision models ranged from little use of information (e.g., MaxiMin), through use of only private information (e.g., allocation in proportion to probabilities) or only public information (e.g., allocation in proportion to public distributions), to integration of public and private information. Analysis of all feedback market responses using multivariate regression also supported the hypothesis that public and private information were being integrated by some participants. The subtle information integration results are in contrast to the distinct differences seen in markets with varying levels of feedback.
This illustrates that the differences in market performance with feedback are an emergent phenomenon (i.e., one that could not be predicted by analyzing the behavior of individuals in different market situations). The results of this study have increased our collective knowledge of market operation and have revealed methods that organizations can use in the construction and analysis of prediction markets. In some situations, markets without feedback may be the preferred option. The research supports the hypothesis that information aggregation in feedback markets can be simultaneously responsible for beneficial information processing and for harmful, information-mirage-induced bubbles. In fact, a market subject to mirage-prone data resembles a Prisoner's Dilemma, where individual rationality results in collective irrationality.
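The entropy measure of agreement mentioned above can be sketched as follows; the allocations and numbers are made up for illustration and are not the study's data.

```python
import numpy as np

def allocation_entropy(tickets):
    """Shannon entropy (in bits) of a market's aggregate ticket allocation.
    Lower entropy means more agreement among participants."""
    tickets = np.asarray(tickets, dtype=float)
    p = tickets / tickets.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical ticket totals over four outcomes in two markets.
no_feedback = [30, 25, 25, 20]     # dispersed allocations -> high entropy
with_feedback = [70, 20, 5, 5]     # concentrated allocations -> low entropy
print(allocation_entropy(no_feedback))    # ~1.99 bits
print(allocation_entropy(with_feedback))  # ~1.26 bits
```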
5

Iterative reasoning and markets : three experiments

Choo, Chang Yuan Lawrence January 2014 (has links)
We present in this thesis three distinct experiments, studying issues in behavioural economics and financial markets. In the first paper, we study the level-k reasoning model in an experimental extension of Arad and Rubinstein's (2012) “11-20 Money Request Game”. In the second paper, we introduce an experimental design where traders can buy or sell the rights to perform complex decisional tasks. The design seeks to test the allocative efficiency of markets. In the third paper, we introduce an Arrow-Debreu market where traders have diverse and partial information about the true state of nature. The design seeks to test the Rational Expectations model's prediction that all relevant information will be aggregated into market prices.
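For readers unfamiliar with the baseline game, the level-k logic of the 11-20 money request game can be sketched in a few lines. This reflects the published rules of the original Arad and Rubinstein game, not the thesis's experimental extension; the code is illustrative only.

```python
def payoff(me, other):
    """11-20 money request game: you receive the amount you request (11-20),
    plus a bonus of 20 if you request exactly one less than the other player."""
    return me + (20 if me == other - 1 else 0)

def best_response(other):
    return max(range(11, 21), key=lambda me: payoff(me, other))

# Level-0 naively requests 20; level-k best responds to level-(k-1).
levels = [20]
for _ in range(4):
    levels.append(best_response(levels[-1]))
print(levels)   # [20, 19, 18, 17, 16]
```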
6

Container Line Supply Chain security analysis under complex and uncertain environment

Tang, Dawei January 2012 (has links)
The Container Line Supply Chain (CLSC), which transports cargo in containers and accounts for approximately 95 percent of world trade, is a dominant mode of world cargo transportation due to its high efficiency. However, the operation of a typical CLSC, which may involve as many as 25 different organizations spread all over the world, is very complex, and at the same time, it is estimated that only 2 percent of imported containers are physically inspected in most countries. This complexity, together with insufficient prevention measures, makes CLSCs vulnerable to many threats, such as cargo theft, smuggling, stowaways, terrorist activity, piracy, etc. Furthermore, as disruptions caused by a security incident at a certain point along a CLSC may also cause disruptions to other organizations involved in the same CLSC, the consequences of security incidents for a CLSC may be severe. Therefore, security analysis becomes essential to ensure the smooth operation of CLSCs and, more generally, the smooth development of the world economy. The literature review shows that research on CLSC security began only recently, especially after the terrorist attack of September 11th, 2001, and most of the research either focuses on developing policies, standards, regulations, etc. to improve CLSC security from a general view or focuses on discussing specific security issues in CLSCs in a descriptive and subjective way. There is a lack of research on analytical security analysis that could provide specific, feasible and practical assistance for people in governments, organizations and industries to improve CLSC security. In view of this situation, this thesis intends to develop a set of analytical models for security analysis in CLSCs to provide practical assistance to people in maintaining and improving CLSC security. In addition, through the development of the models, the thesis also intends to provide some methodologies for general risk/security analysis problems under complex and uncertain environments, and for some general complex decision problems under uncertainty. Specifically, the research conducted in the thesis is mainly aimed at answering the following two questions: how to assess the security level of a CLSC in an analytical and rational way, and, based on the security assessment result, how to develop balanced countermeasures to improve the security level of a CLSC under the constraints of limited resources. For security assessment, factors influencing CLSC security as a whole are identified first and then organized into a general hierarchical model according to the relations among the factors. The general model is then refined for security assessment of a port storage area along a CLSC against cargo theft. Further, according to the characteristics of CLSC security analysis, the belief Rule-base Inference Methodology using the Evidential Reasoning approach (RIMER) is selected as the tool to assess CLSC security, due to its capability to accommodate and handle different forms of information, with different kinds of uncertainty, involved in both the measurement of the identified factors and the measurement of the relations among the factors. To build a basis for the application of RIMER, a new process is introduced to generate belief degrees in Belief Rule Bases (BRBs), with the aim of reducing bias and inconsistency in the generation process.
Based on the results of CLSC security assessment, a novel resource allocation model for security improvement is also proposed within the framework of RIMER to optimally improve CLSC security under the constraints of available resources. In addition, the security assessment process shows that RIMER has limitations in dealing with the different information aggregation patterns identified in the proposed security assessment model, and in dealing with different kinds of incompleteness in CLSC security assessment. Correspondingly, under the framework of RIMER, novel methods are proposed to accommodate and handle different information aggregation patterns, as well as different kinds of incompleteness. To validate the models proposed in the thesis, several case studies are conducted using data collected from different ports in both the UK and China. From a methodological point of view, the ideas, processes and models proposed in the thesis regarding BRB generation, optimal resource allocation based on security assessment results, information aggregation pattern identification and handling, and incomplete information handling can be applied not only to CLSC security analysis, but also to other risk and security analysis problems and, more generally, to some complex decision problems. From a practical point of view, the models proposed in the thesis can help people in governments, organizations, and industries related to CLSCs develop best practices to ensure secure operation, assess the security levels of organizations involved in a CLSC and of the whole CLSC, and allocate limited resources to improve the security of organizations in a CLSC. The potential beneficiaries of the research may include governmental organizations, international/regional organizations, industrial organizations, classification societies, consulting companies, companies involved in a CLSC, companies with cargo to be shipped, individual researchers in relevant areas, etc.
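Purely to illustrate what a belief rule base looks like, here is a toy sketch for a single hypothetical security factor. The factor, referential grades, rule weights, and the simple weighted pooling below are all invented and deliberately simplified; they are not the RIMER evidential-reasoning combination rule used in the thesis.

```python
# Toy belief rule base for a hypothetical factor ("perimeter access control"),
# mapping antecedent levels to belief distributions over theft-risk grades.
rules = [
    {"if": "good",    "then": {"low": 0.8, "medium": 0.2, "high": 0.0}, "weight": 1.0},
    {"if": "average", "then": {"low": 0.2, "medium": 0.6, "high": 0.2}, "weight": 0.8},
    {"if": "poor",    "then": {"low": 0.0, "medium": 0.3, "high": 0.7}, "weight": 1.0},
]

# An assessor's (possibly incomplete) matching of the observed input to the
# antecedent levels; the missing 0.1 of belief represents ignorance.
activation = {"good": 0.0, "average": 0.6, "poor": 0.3}

def pool(rules, activation):
    """Naive weighted pooling of activated rules (a stand-in for ER aggregation)."""
    combined = {"low": 0.0, "medium": 0.0, "high": 0.0}
    total = 0.0
    for rule in rules:
        a = activation[rule["if"]] * rule["weight"]
        total += a
        for grade, belief in rule["then"].items():
            combined[grade] += a * belief
    return {g: b / total for g, b in combined.items()} if total else combined

print(pool(rules, activation))   # distributed assessment of theft risk
```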
7

Essays in economic design : information, markets and dynamics

Khan, Urmee, 1977- 06 July 2011 (has links)
This dissertation consists of three essays that apply both economic theory and econometric methods to understand the design and dynamics of institutions. In particular, it studies how institutions aggregate information and deal with uncertainty, and attempts to derive implications for optimal institution design. Here is a brief summary of the essays. In many economic, political and social situations where the environment changes in a random fashion, necessitating costly action, we face a choice of both the timing of the action and the action itself. In particular, if the stochastic environment possesses the property that the next environmental change becomes either more or less likely as more time passes since the last change (in other words, the hazard rate of environmental change is not constant over time), then the timing of the action takes on special importance. In the first essay, joint with Maxwell B. Stinchcombe, we model and solve a dynamic decision problem in a semi-Markov environment. We find that if the arrival times for state changes do not follow a memoryless process, time since the last observed change of state, in addition to the current state, becomes a crucial variable in the decision. We characterize the optimal policy and the optimal timing of executing that policy in the differentiable case by a set of first-order conditions of a relatively simple form. These conditions show that, for both increasing and decreasing hazard rates, the optimal response may be to wait before executing a policy change. The intuitive explanation of the result is that waiting reveals information about the likelihood of the next change occurring; hence waiting is valuable when actions are costly. This result helps shed new light on the structure of optimal decisions in many interesting problems of institution design, including the fact that constitutions often have built-in delay mechanisms to slow the pace of legislative change. Our model results could be used to characterize optimal timing rules for constitutional amendments. The paper also contributes to generalizing the methodology of semi-Markov decision theory by formulating a dynamic programming set-up that solves the timing-of-action problem, whereas the existing literature optimizes over a much more limited set of policies in which action can only be taken at the instant when the state changes. In the second essay, we extend our research to situations where the current choice of action influences the future path of the stochastic process, and apply it to the legal framework surrounding environmental issues, particularly to the 'Precautionary Principle' as applied to climate change legislation. We represent scientific uncertainty about environmental degradation using the concept of 'ambiguity' and show that ambiguity aversion generates a 'precautionary effect'. As a result, a justification is provided for the Precautionary Principle that differs from the ones provided by expected utility theory. This essay serves both as an application of the general theoretical results derived in the first essay and as a stand-alone analysis of a substantive question about environmental law.
Prediction markets have attracted public attention in recent years for making accurate predictions about election outcomes, product sales, film box office receipts and myriad other variables of interest, and many believe that they will soon become a very important decision support system in a wide variety of areas including governance, law and industry. For successful design of these markets, a thorough understanding of their theoretical and empirical foundations is necessary. But the information aggregation process in these markets is not fully understood yet, and a number of open questions remain. The third essay, joint with Robert Lieli, attempts to analyze the direction and timing of information flow between prices, polls, and media coverage of events traded on prediction markets. Specifically, we examine the race between Barack Obama and Hillary Clinton in the 2008 Democratic primaries for the presidential nomination. Substantively, we ask the following questions: (i) Do prediction market prices have information that is not reflected in contemporaneous polls and media stories? (ii) Conversely, do prices react to information that appears to be news for pollsters or is prominently featured by the media? Quantitatively, we construct time series variables that reflect the "pollster's surprise" in each primary election, measured as the difference between the actual vote share and the vote share predicted by the latest poll before the primary, as well as indices that describe the extent of media coverage received by the candidates. We carry out Granger causality tests between the day-to-day percentage change in the price of the "Obama wins nomination" security and these information variables. Some key results from our exercise can be summarized as follows. There seems to be mutual (two-way) Granger causality between prediction market prices and the surprise element in the primaries. There is also evidence of one-way Granger causality in the short run from price changes towards media news indices. These results suggest that prediction market prices anticipate at least some of the discrepancy between the actual outcome and the latest round of polls before the election. Nevertheless, prices also seem to be driven partly by election results, suggesting that there is an element of the pollster's surprise that is genuine news for the market as well. / text
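A sketch of the kind of Granger-causality check described above, run on simulated data; it assumes the statsmodels package, and the series names, lag length, and numbers are placeholders rather than the essay's actual variables or results.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical daily series: percentage change in a nomination contract's price
# and a "pollster's surprise" measure, aligned on the same dates.
rng = np.random.default_rng(1)
n = 120
surprise = rng.normal(0.0, 1.0, n)
price_change = np.empty(n)
price_change[0] = rng.normal()
price_change[1:] = 0.4 * surprise[:-1] + rng.normal(0.0, 1.0, n - 1)  # lagged dependence

data = pd.DataFrame({"price_change": price_change, "surprise": surprise})

# Tests whether the second column helps predict the first at lags 1..3.
results = grangercausalitytests(data[["price_change", "surprise"]], maxlag=3)
print({lag: round(res[0]["ssr_ftest"][1], 4) for lag, res in results.items()})  # p-values
```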
8

Essays in Game Theory Applied to Political and Market Institutions

Bouton, Laurent 15 November 2009 (has links)
My thesis contains essays on voting theory, market structures and fiscal federalism: (i) One Person, Many Votes: Divided Majority and Information Aggregation, (ii) Runoff Elections and the Condorcet Loser, (iii) On the Influence of Rankings when Product Quality Depends on Buyer Characteristics, and (iv) Redistributing Income under Fiscal Vertical Imbalance.
(i) One Person, Many Votes: Divided Majority and Information Aggregation (joint with Micael Castanheira) In elections, majority divisions pave the way to focal manipulations and coordination failures, which can lead to the victory of the wrong candidate. This paper shows how this flaw can be addressed if voter preferences over candidates are sensitive to information. We consider two potential sources of divisions: majority voters may have similar preferences but opposite information about the candidates, or opposite preferences. We show that when information is the source of majority divisions, Approval Voting features a unique equilibrium with full information and coordination equivalence. That is, it produces the same outcome as if both information and coordination problems could be resolved. Other electoral systems, such as Plurality and Two-Round elections, do not satisfy this equivalence. The second source of division is opposite preferences. Whenever the fraction of voters with such preferences is not too large, Approval Voting still satisfies full information and coordination equivalence.
(ii) Runoff Elections and the Condorcet Loser A crucial component of Runoff electoral systems is the threshold fraction of votes above which a candidate wins outright in the first round. I analyze the influence of this threshold on the voting equilibria in three-candidate Runoff elections. I demonstrate the existence of an Ortega Effect which may unduly favor dominated candidates and thus lead to the election of the Condorcet Loser in equilibrium. The reason is that, contrary to commonly held beliefs, lowering the threshold for first-round victory may actually induce voters to express their preferences excessively. I also extend Duverger's Law to Runoff elections with any threshold below, equal to, or above 50%. Therefore, Runoff elections are plagued with inferior equilibria that induce either too high or too low expression of preferences.
(iii) On the Influence of Rankings when Product Quality Depends on Buyer Characteristics Information on product quality is crucial for buyers to make sound choices. For "experience products", this information is not available at the time of the purchase: it is only acquired through consumption. For many experience products, there exist institutions that provide buyers with information about quality. It is commonly believed that such institutions help consumers to make better choices and are thus welfare improving. The quality of various experience products depends on the characteristics of buyers. For instance, unlike the quality of cars, business school quality depends on buyers' (i.e. students') characteristics. Indeed, one of the main inputs of a business school is its enrolled students. The buyers' choice of such products then has some features of a coordination problem: ceteris paribus, a buyer prefers to buy a product consumed by buyers with "good" characteristics. This coordination dimension leads to inefficiencies when buyers coordinate on products of lower "intrinsic" quality. When the quality of products depends on buyer characteristics, information about product quality can reinforce such a coordination problem. Indeed, even though a report of high quality need not mean high intrinsic quality, rational buyers pay attention to this information because they prefer high-quality products, no matter the reason for the high quality. Information about product quality may then induce buyers to coordinate on products of low intrinsic quality. In this paper, I show that, for experience products whose quality depends on the characteristics of buyers, more information is not necessarily better. More precisely, I prove that more information about product quality may lead to a Pareto deterioration, i.e. all buyers may be worse off as a result.
(iv) Redistributing Income under Fiscal Vertical Imbalance (joint with Marjorie Gassner and Vincenzo Verardi) From the literature on decentralization, it appears that fiscal vertical imbalance (i.e. the dependence of subnational governments on national government revenues to support their expenditures) is somehow inherent to multi-level governments. Using a stylized model, we show that this leads to a reduction of the extent of redistributive fiscal policies if the maximal size of government has been reached. To test this empirically, we use high-quality data on individual incomes from the LIS dataset. The results are highly significant and point in the direction of our theoretical predictions.
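As background for the Condorcet-loser terminology used in the second essay, a toy check on a standard textbook profile (not taken from the thesis) shows a candidate who tops the first-round plurality tally while losing every pairwise contest. The thesis's result concerns strategic equilibria in runoff elections, which this sincere-voting illustration does not attempt to capture.

```python
# Shares of voters holding each strict ranking over candidates A, B, C (made-up profile).
profile = {("A", "B", "C"): 0.40,
           ("B", "C", "A"): 0.35,
           ("C", "B", "A"): 0.25}
candidates = ["A", "B", "C"]

def pairwise_share(x, y):
    """Share of voters ranking x above y."""
    return sum(w for ranking, w in profile.items() if ranking.index(x) < ranking.index(y))

plurality = {c: sum(w for ranking, w in profile.items() if ranking[0] == c) for c in candidates}
condorcet_losers = [c for c in candidates
                    if all(pairwise_share(c, d) < 0.5 for d in candidates if d != c)]

print("first-round plurality winner:", max(plurality, key=plurality.get))  # A
print("Condorcet loser:", condorcet_losers)                                # ['A']
```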
9

Price formation in multi-asset securities markets

Säfvenblad, Patrik January 1997 (has links)
This volume is a collection of three essays relating to the pricing of securities in financial markets, such as stock markets, where a large number of individual securities are traded.
Lead-Lag Effects in a Competitive REE Market
This essay introduces a model of cross-security information aggregation. The model is essentially an extension of Chan (Journal of Finance, 1993) to the case of simultaneous auction markets where revealed information is correlated across securities. The model provides clear predictions of lead-lag effects between security returns. Several of the model's predictions are confirmed empirically using data from the Paris Bourse. Other models of price formation, including the basic Chan model and nonsynchronous trading, are rejected as they cannot account for observed return patterns.
Learning the True Index Level
This essay extends the model of cross-security information aggregation by deriving implications for autocorrelation in index returns. Both time series and cross-sectional predictions are confirmed by empirical evidence from the Paris Bourse. In addition, the time series predictions are consistent with earlier, partly unexplained, empirical evidence from the US market.
An Empirical Study of Index Return Autocorrelation
This essay studies return autocorrelation on the Stockholm Stock Exchange, focusing on the relation between index returns and individual stock returns. It is demonstrated that the two return types have similar time series properties, and it is concluded that the causes of autocorrelation are the same in both cases. / Diss. Stockholm: Handelshögskolan, 1997
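A hedged sketch of the autocorrelation comparison the third essay describes, using a toy data-generating process in which a common information shock is incorporated into individual stock prices over two days; the process, parameters, and numbers are invented for illustration and are not the essay's model or results.

```python
import numpy as np

def autocorr(x, lag=1):
    """First-order sample autocorrelation of a return series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rng = np.random.default_rng(2)
T, N = 1000, 30
f = rng.normal(0.0, 1.0, T + 1)                    # common information shocks
stocks = []
for _ in range(N):
    w = rng.uniform(0.3, 0.9)                      # stock-specific speed of incorporation
    stocks.append(w * f[1:] + (1 - w) * f[:-1] + rng.normal(0.0, 2.0, T))
index = np.mean(stocks, axis=0)                    # equal-weighted index return

print("average stock autocorrelation:", np.mean([autocorr(r) for r in stocks]))
print("index autocorrelation:        ", autocorr(index))
```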
10

[en] INFORMATIONALLY EFFICIENT MARKETS UNDER RATIONAL INATTENTION / [pt] MERCADOS INFORMACIONALMENTE EFICIENTES SOB DESATENÇÃO RACIONAL

ANDRE MEDEIROS SZTUTMAN 19 October 2017 (has links)
[en] We propose a new solution for the Grossman and Stiglitz [1980] paradox. By substituting a rational inattention restriction for their information structure, we show that prices can reflect all the information available without breaking the incentives of market participants to gather information. This model reframes the efficient market hypothesis and reconciles opposing views: prices are fully revealing, but only for those who are sufficiently smart. Finally, we develop a method for postulating and solving Walrasian general equilibrium models with rationally inattentive agents, circumventing previous tractability assumptions.
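For context, the rational inattention restriction referred to above is usually formalized as a Shannon capacity constraint on how much uncertainty the agent's signal can resolve. This is the standard Sims-style formulation, given here only as background; the paper's exact specification may differ, and the symbols are illustrative.

```latex
% Mutual information between the payoff-relevant state x and the agent's signal s
% is bounded by the processing capacity kappa:
I(x; s) \;=\; H(x) - H(x \mid s) \;\le\; \kappa .
% In the Gaussian case this reduces to
% I = \tfrac{1}{2} \log_2\!\left( \sigma^2_{\text{prior}} / \sigma^2_{\text{posterior}} \right) \text{ bits.}
```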
