521

Event History Analysis in Multivariate Longitudinal Data

Yuan, Chaoyu January 2021 (has links)
This thesis studies event history analysis in multivariate longitudinal observational databases (LODs) and its application in postmarketing surveillance to identify and measure the relationship between health outcome events and drug exposures. The LODs contain repeated measurements on each individual whose healthcare information is recorded electronically. Novel statistical methods are being developed to handle challenging issues arising from the scale and complexity of postmarketing surveillance LODs. In particular, the self-controlled case series (SCCS) method has been developed with two major features: (1) it uses only individuals with at least one event for analysis and inference, and (2) each individual serves as his/her own control, effectively requiring a person to switch treatments during the observation period. Although this method handles heterogeneity and bias, it does not take full advantage of the observational databases. As a result, the SCCS method may lead to a substantial loss of efficiency. We propose a multivariate proportional intensity modeling approach with random effects for multivariate LODs. The proposed method can account for heterogeneity and eliminate bias in LODs. It also handles multiple types of event cases and makes full use of the observational databases. In the first part of this thesis, we present the multivariate proportional intensity model with correlated frailty. We explore the correlation structure between multiple types of clinical events and drug exposures. We introduce a multivariate Gaussian frailty to incorporate the within-subject heterogeneity, i.e., hidden confounding factors. For parameter estimation, we adopt a Bayesian approach, using Markov chain Monte Carlo methods to draw samples from the targeted full likelihood. We compare the new method with the SCCS method and some frailty models through simulation studies. We apply the proposed model to an electronic health record (EHR) dataset and identify event types as defined in the Observational Medical Outcomes Partnership (OMOP) project. We show that the proposed method outperforms the existing methods in terms of common metrics, such as receiver operating characteristic (ROC) metrics. Finally, we extend the proposed correlated frailty model to include a dynamic random effect. We establish a general asymptotic theory for the nonparametric maximum likelihood estimators in terms of identifiability, consistency, asymptotic normality and asymptotic efficiency. A detailed illustration of the proposed method is provided using the clinical event myocardial infarction (MI) and treatment with angiotensin-converting enzyme (ACE) inhibitors, showing the dynamic effect of unobserved heterogeneity.
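To fix ideas, here is a generic sketch of the model class described above (the notation is illustrative and not taken from the thesis): for subject i and event type k, a proportional intensity model with correlated frailty can be written as

```latex
\lambda_{ik}(t) = \lambda_{0k}(t)\,\exp\!\big\{\beta_k^{\top} X_i(t) + b_{ik}\big\},
\qquad (b_{i1},\dots,b_{iK})^{\top} \sim N_K(0,\Sigma),
```

where \lambda_{0k} is a baseline intensity for event type k, X_i(t) collects time-varying drug exposures and covariates, and the covariance matrix \Sigma captures within-subject heterogeneity shared across event types. The SCCS method, by contrast, conditions on each individual's event count and uses only cases, which is what drives the efficiency comparison discussed in the abstract.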
522

Design-based, Bayesian Causal Inference for the Social Sciences

Leavitt, Thomas January 2021 (has links)
Scholars have recognized the benefits to science of Bayesian inference about the relative plausibility of competing hypotheses as opposed to, say, falsificationism in which one either rejects or fails to reject hypotheses in isolation. Yet inference about causal effects — at least as they are conceived in the potential outcomes framework (Neyman, 1923; Rubin, 1974; Holland, 1986) — has been tethered to falsificationism (Fisher, 1935; Neyman and Pearson, 1933) and difficult to integrate with Bayesian inference. One reason for this difficulty is that potential outcomes are fixed quantities that are not embedded in statistical models. Significance tests about causal hypotheses in either of the traditions traceable to Fisher (1935) or Neyman and Pearson (1933) conceive potential outcomes in this way; randomness in inferences about causal effects stems entirely from a physical act of randomization, like flips of a coin or draws from an urn. Bayesian inferences, by contrast, typically depend on likelihood functions with model-based assumptions in which potential outcomes — to the extent that scholars invoke them — are conceived as outputs of a stochastic, data-generating model. In this dissertation, I develop Bayesian statistical inference for causal effects that incorporates the benefits of Bayesian scientific reasoning, but does not require probability models on potential outcomes that undermine the value of randomization as the “reasoned basis” for inference (Fisher, 1935, p. 14). In the first paper, I derive a randomization-based likelihood function in which Bayesian inference of causal effects is justified by the experimental design. I formally show that, under weak conditions on a prior distribution, as the number of experimental subjects increases indefinitely, the resulting sequence of posterior distributions converges in probability to the true causal effect. This result, typically known as the Bernstein-von Mises theorem, has previously been derived in the context of parametric models. Yet randomized experiments are especially credible precisely because they do not require such assumptions. Proving this result in the context of randomized experiments enables scholars to quantify how much they learn from experiments without sacrificing the design-based properties that make inferences from experiments especially credible in the first place. Having derived a randomization-based likelihood function in the first paper, the second paper turns to the calibration of a prior distribution for a target experiment based on past experimental results. In this paper, I show that the usual methods for analyzing randomized experiments are equivalent to presuming that no prior knowledge exists, which inhibits knowledge accumulation from prior to future experiments. I therefore develop a methodology by which scholars can (1) turn results of past experiments into a prior distribution for a target experiment and (2) quantify the degree of learning in the target experiment after updating prior beliefs via a randomization-based likelihood function. I implement this methodology in an original audit experiment conducted in 2020 and show the amount of Bayesian learning that results relative to information from past experiments. Large Bayesian learning and statistical significance do not always coincide, and learning is greatest among theoretically important subgroups of legislators for which relatively less prior information exists.
The accumulation of knowledge about these subgroups, specifically Black and Latino legislators, carries implications about the extent to which descriptive representation operates not only within, but also between minority groups. In the third paper, I turn away from randomized experiments toward observational studies, specifically the Difference-in-Differences (DID) design. I show that DID's central assumption of parallel trends poses a neglected problem for causal inference: counterfactual uncertainty, due to the inability to observe counterfactual outcomes, is hard to quantify since DID is based on parallel trends, not an as-if-randomized assumption. Hence, standard errors and p-values are too small, since they reflect only sampling uncertainty due to the inability to observe all units in a population. Recognizing this problem, scholars have recently attempted to develop inferential methods for DID under an as-if-randomized assumption. In this paper, I show that this approach is ill-suited for the most canonical DID designs and also requires conducting inference on an ill-defined estimand. I instead develop an empirical Bayes procedure that accommodates both sampling and counterfactual uncertainty under DID's core identification assumption. The overall method is straightforward to implement, and I apply it to a study on the effect of terrorist attacks on electoral outcomes.
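As a rough illustration of the design-based idea (not the dissertation's actual construction), one can form a posterior over a constant additive effect by combining a prior with a likelihood whose only source of randomness is the re-randomization of treatment assignment. A minimal sketch, with invented data and a crude kernel-based likelihood approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomization_likelihood(tau, y, z, n_draws=4000):
    """Approximate the probability of the observed difference-in-means under a
    hypothesized constant additive effect tau, where the only randomness is
    the random assignment of treatment (a crude, binned approximation)."""
    y0 = np.where(z == 1, y - tau, y)          # imputed control outcomes
    y1 = y0 + tau                              # imputed treated outcomes
    obs = y[z == 1].mean() - y[z == 0].mean()
    n, m = len(y), int(z.sum())
    stats = np.empty(n_draws)
    for b in range(n_draws):
        zb = np.zeros(n, dtype=int)
        zb[rng.choice(n, size=m, replace=False)] = 1
        yb = np.where(zb == 1, y1, y0)
        stats[b] = yb[zb == 1].mean() - yb[zb == 0].mean()
    h = 0.2 * stats.std() + 1e-9               # arbitrary bandwidth
    return np.mean(np.abs(stats - obs) < h) / (2 * h)

# Invented experiment: 40 units, true constant effect of 1.
n = 40
z = np.zeros(n, dtype=int)
z[rng.choice(n, size=20, replace=False)] = 1
y = rng.normal(size=n) + 1.0 * z

taus = np.linspace(-1.0, 3.0, 41)
prior = np.full(taus.shape, 1.0 / len(taus))   # flat prior on a grid
like = np.array([randomization_likelihood(t, y, z) for t in taus])
post = prior * like
post /= post.sum()
print(taus[np.argmax(post)])                   # posterior mode near 1
```

The dissertation's randomization-based likelihood is derived analytically rather than by simulation; the sketch only conveys where the randomness enters.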
523

Bayesian Modeling in Personalized Medicine with Applications to N-of-1 Trials

Liao, Ziwei January 2021 (has links)
The ultimate goal of personalized or precision medicine is to identify the best treatment for each patient. An N-of-1 trial is a multiple-period crossover trial performed within a single individual, which focuses on individual outcomes instead of population or group mean responses. As in a conventional crossover trial, it is critical to understand carryover effects of the treatment in an N-of-1 trial, especially when there are no washout periods between treatment periods and a high volume of measurements is made during the study. Existing statistical methods for analyzing N-of-1 trials include nonparametric tests, mixed effect models and autoregressive models. These methods may fail to simultaneously handle autocorrelation among measurements and adjust for potential carryover effects. A distributed lag model is a regression model that uses lagged predictors to model the lag structure of exposure effects. In this dissertation, we first introduce a novel Bayesian distributed lag model that facilitates the estimation of carryover effects for a single N-of-1 trial, while accounting for temporal correlations using an autoregressive model. In the second part, we extend the single-trial model to the setting of multiple N-of-1 trials. In the third part, we again focus on single N-of-1 trials, but instead of modeling a comparison between one treatment and one placebo (or active control), we consider multiple treatments and one placebo (or active control). In the first part, we propose a Bayesian distributed lag model with autocorrelated errors (BDLM-AR) that integrates prior knowledge on the shape of the distributed lag coefficients and explicitly models the magnitude and duration of the carryover effect. Theoretically, we show the connection between the proposed prior structure in BDLM-AR and frequentist regularization approaches. Simulation studies were conducted to compare the performance of the proposed BDLM-AR model with other methods; the proposed model shows better performance in estimating the total treatment effect, the carryover effect and the whole treatment-effect coefficient curve under most simulation scenarios. Data from two patients in the light therapy study were used to illustrate our method. In the second part, we extend the single N-of-1 trial model to a multiple N-of-1 trials model and focus on estimating population-level treatment and carryover effects. A Bayesian hierarchical distributed lag model (BHDLM-AR) is proposed to model the nested structure of multiple N-of-1 trials within the same study. The Bayesian hierarchical structure also improves estimates of individual-level parameters by borrowing strength across the N-of-1 trials of other patients. We show through simulation studies that the BHDLM-AR model has the best average performance in terms of estimating both population-level and individual-level parameters. The light therapy study is revisited, and we apply the proposed model to all patients' data. In the third part, we extend the BDLM-AR model to the scenario of multiple treatments and one placebo (or active control) and design a prior precision matrix for each treatment. We demonstrate the application of the proposed method using a hypertension study, in which multiple guideline-recommended medications were involved in each single N-of-1 trial.
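The following is only a schematic of the model family the abstract describes (notation mine): for outcome Y_t and treatment indicator X_t in period t, a distributed lag model with autoregressive errors has the form

```latex
Y_t = \mu + \sum_{\ell=0}^{L} \beta_\ell\, X_{t-\ell} + \epsilon_t,
\qquad \epsilon_t = \sum_{j=1}^{p} \phi_j\,\epsilon_{t-j} + u_t,
\quad u_t \sim N(0,\sigma^2),
```

where \beta_0 is the immediate treatment effect, \beta_1,\dots,\beta_L capture carryover, their sum gives a total effect, and the AR terms absorb the serial correlation of repeated within-patient measurements. The Bayesian versions in the thesis place structured priors on the lag coefficients (the abstract notes a connection to frequentist regularization).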
524

Laboratory Experiments on Belief Formation and Cognitive Constraints

Puente, Manuel January 2020 (has links)
In this dissertation I study how different cognitive constraints affect individuals' belief formation process, and the consequences of these constraints for behavior. In the first chapter I present laboratory experiments designed to test whether subjects' inability to perform more rounds of iterated deletion of dominated strategies is due to cognitive limitations, or to higher order beliefs about the rationality of others. I propose three alternative explanations for why subjects might not be doing more iterations of dominance reasoning. First, they might have problems computing iterated best responses, even when doing so does not require higher order beliefs. Second, subjects might face limitations in their ability to generate higher order beliefs. Finally, subjects' behavior might not be limited by cognitive limitations, but rather justified by their beliefs about what others will play. I design two experiments in order to test these hypotheses. Findings from the first experiment suggest that most subjects' strategies (about 66%) are not the result of their inability to compute iterated best responses. I then run a second experiment, finding that about 70% of subjects' behavior comes from limitations in their ability to iterate best responses and generate higher order beliefs at the same time, while for the other 30% their strategies are a best response to higher order beliefs that others are not rational. In the second chapter I study whether a Sender in a Bayesian Persuasion setting (Kamenica and Gentzkow, 2011) can benefit from behavioral biases in the way Receivers update their beliefs, by choosing how to communicate information. I present three experiments designed to test this hypothesis, finding that Receivers tend to overestimate the probability of a state of the world after receiving signals that are more likely in that state. Because of this bias, Senders' gains from persuasion can be increased by "muddling the water" and making it hard for Receivers to find the correct posteriors. This contradicts the theoretical result that communicating using signal structures is equivalent to communicating which posteriors these structures induce. Through analysis of the data and robustness experiments, I am able to rule out social preferences and low incentives as drivers of my results, leaving base-rate neglect as the more likely explanation. The final chapter studies whether sensory bottlenecks, as opposed to purely computational cognitive constraints, are important factors affecting subjects' inference in an experiment that mimics financial markets. We show that providing redundant visual and auditory cues about the liquidity of a stock significantly improves performance, corroborating previous findings in neuroscience on multi-sensory integration, which could have policy implications in economically relevant situations.
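A toy numerical example of the updating bias described in the second chapter (all numbers are invented): a Receiver who neglects the prior overweights the state in which the observed signal is more likely.

```python
# Binary state, binary signal; illustrative numbers only.
prior_A = 0.3              # P(state = A)
p_sig_given_A = 0.8        # the observed signal is likelier under A
p_sig_given_B = 0.4

bayes_posterior_A = (p_sig_given_A * prior_A) / (
    p_sig_given_A * prior_A + p_sig_given_B * (1 - prior_A))

# A base-rate-neglecting Receiver acts as if the two states were equally likely:
neglect_posterior_A = p_sig_given_A / (p_sig_given_A + p_sig_given_B)

print(round(bayes_posterior_A, 3), round(neglect_posterior_A, 3))  # 0.462 vs 0.667
```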
525

Essays on Regulatory Design

Thompson, David January 2021 (has links)
This dissertation consists of three essays on the design of regulatory systems intended to inform market participants about product quality. The central theme is how asymmetric information problems influence the incentives of customers, regulated firms, and certifiers, and the implications these distortions have for welfare and market design. The first chapter, Regulation by Information Provision, studies quality provision in New York City's elevator maintenance market. In this market, service providers maintain machines and are inspected periodically by city inspectors. I find evidence that monitoring frictions create moral hazard for service providers. In the absence of perfect monitoring, buildings rely on signals generated by the regulator to hold service providers accountable, cancelling contracts when bad news arrives and preserving them when good news arrives. Regulatory instruments, such as inspection frequency and fine levels, can therefore influence provider effort in two ways: (i) by directly changing the cost of effort (e.g. fines for poor performance); (ii) by changing expected future revenue (through building cancellation decisions). Using a structural search model of the industry, I find that the second channel is the dominant one. In particular, I note that strengthening the information channel has two equilibrium effects: first, it increases provider effort; and second, it shifts share towards higher-quality matches since buildings can more quickly sever unproductive relationships. These findings have important policy implications, as they suggest that efficient information provision (for example, targeting inspections to newly formed relationships) is a promising avenue for welfare improvement. The second chapter, Quality Disclosure Design, studies a similar regulatory scheme, but emphasizes the incentives of the certifier. In particular, I argue that restaurant inspectors in New York City are locally averse to giving restaurants poor grades: restaurants whose inspections are on the border of an A versus a B grade are disproportionately given an A. The impact of this bias is twofold: first, it degrades the quality of the information provided to the market, as there is substantial heterogeneity in food-poisoning risk even within A restaurants. Second, by making it easier to achieve passing grades, inspector bias reduces incentives for restaurants to invest in their health practices. After developing a model of the inspector-restaurant interaction, counterfactual work suggests that stricter grading along the A-B boundary could generate substantial improvements in food-poisoning rates. The policy implications of these findings depend on the source of inspector bias. I find some evidence that the bias is bureaucratic in nature: when inspectors have inspection decisions overturned in an administrative trial, they are more likely to score leniently along the A-B boundary in their other inspections. However, it is not clear whether this behavior stems from administrative burden (a desire to avoid more trials) or a desire to avoid looking incompetent. Pilot programs that reduce the administrative burden of giving B grades are a promising avenue for future research. The last chapter, Real-Time Inference, also studies the incentives of certifiers, namely MLB umpires charged with classifying pitches as balls or strikes. Unlike in Quality Disclosure Design, I find that umpire ball/strike decisions are remarkably bias-free.
Previous literature on this topic has noted a tendency for umpires to call more strikes in hitter's counts and more balls in pitcher's counts for a fixed pitch location. I propose a simple rational explanation for this behavior: umpires are Bayesian. In hitter's counts, such as 3-0, pitchers tend to throw pitches right down the middle of the plate, whereas in pitcher's counts they throw pitches outside the strike zone. For a borderline pitch, the umpire's prior will push it towards the strike zone in a 3-0 count and away from the strike zone in an 0-2 count, producing the exact divergence in ball/strike calls noted in previous work. While implications for broader policy are not immediately obvious, I note several features of the environment that are conducive to umpires effectively approximating optimal inference, particularly the frequent, data-driven feedback that umpires receive on their performance.
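A small numerical sketch of the "Bayesian umpire" argument (parameter values are illustrative, not estimates from the dissertation): the same borderline perceived location yields different strike probabilities under count-dependent priors over where the pitch was actually thrown.

```python
import numpy as np

def npdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Horizontal location in units where the strike zone is |x| <= 1.
x = np.linspace(-3, 3, 2001)
in_zone = np.abs(x) <= 1.0
noise_sd = 0.25          # umpire's perceptual noise (made up)
perceived = 1.05         # a borderline pitch, perceived just off the edge

def posterior_strike(prior_mean, prior_sd):
    prior = npdf(x, prior_mean, prior_sd)      # where pitches tend to be thrown
    likelihood = npdf(perceived, x, noise_sd)  # perception given true location
    post = prior * likelihood
    post /= post.sum()
    return post[in_zone].sum()

print(posterior_strike(0.0, 0.6))  # 3-0 count: pitches aimed at the middle -> higher P(strike)
print(posterior_strike(1.2, 0.6))  # 0-2 count: pitches aimed off the plate -> lower P(strike)
```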
526

Vart är vi på väg? : En kvalitativ studie av strategiska ledares möjlighet att planera inför framtiden / Where are we going? : A qualitative study of strategic leaders’ ability to plan for the future

Larsson, Amanda, Arnstedt, Elinor January 2021 (has links)
Background: Leaders often need to take a stand on and manage new digital tools. Most often, the companies that do not keep up with the rapid turns of digitalization fall through the cracks. Leadership is also undergoing a transformation with digitalization. The simplicity of communicating and sharing information helps to challenge hierarchies and functions in organizations. The use of technology is now increasing, not least due to the ongoing Covid-19 pandemic. Leaders are required to find new ways to make decisions when the work situation no longer looks the same as before. With the help of scenario planning, leaders can prepare for the more complicated decisions. Scenario planning is about planning for possible future outcomes based on what is known at present. By depicting possible outcomes, leaders have time to plan and prepare if any of them occur. 
Purpose: The purpose of the essay is to investigate in more detail the impact that increased digitalization will have on the future leadership role and to contribute to an increased understanding of how digitalization changes the view of leadership and the role of the leader. Research questions: How will digitalization affect future leadership? How can leaders make decisions to adapt to the development of digitalization? How can scenario planning be used as a tool to facilitate leaders' decision-making? Method: The work is based on an abductive approach with an interaction between theory and empirical data. The empirical results were produced through qualitative semi-structured interviews, which were then transcribed and subjected to a form of thematic analysis in which similarities and differences across the interviews were identified. Empirical data and results: The empirical data consist of interviews with experts and leaders. The expert interviews contributed to the creation of the scenario cross, and the leader interviews contributed a working-life perspective on the scenarios. The four scenarios developed are based on the driving forces of control and use of future technology. When the scenarios were discussed with the leaders, they mentioned, among other things, that they would have preferred intermediate variants of the scenarios over a pure scenario. Conclusion: By creating scenarios, the essay addresses how digitalization can affect future leadership. The leaders would rather see a combination of two scenarios than a single scenario occurring. With continuous work, scenario planning can be an effective tool in strategic planning and decision-making.
527

Modelling malaria in the Limpopo Province, South Africa : comparison of classical and Bayesian methods of estimation

Sehlabana, Makwelantle Asnath January 2020 (has links)
Thesis (M.Sc. (Statistics)) -- University of Limpopo, 2020 / Malaria is a mosquito-borne disease and a major cause of human morbidity and mortality in most developing countries in Africa. South Africa is one of the countries with a high risk of malaria transmission, with many cases reported in the Mpumalanga and Limpopo provinces. Bayesian and classical methods of estimation were applied and compared in assessing the effect of climatic factors (rainfall, temperature, normalised difference vegetation index, and elevation) on malaria incidence. Credible and confidence intervals from a negative binomial model, estimated via a Bayesian Markov chain Monte Carlo process and maximum likelihood respectively, were utilised in the comparison. Bayesian methods appeared to be better than the classical method in analysing malaria incidence in the Limpopo province of South Africa. The classical framework identified rainfall and temperature during the night as the significant predictors of malaria incidence in the Mopani, Vhembe and Waterberg districts of Limpopo province. The Bayesian method, however, identified rainfall, normalised difference vegetation index, elevation, temperature during the day and temperature during the night as the significant predictors of malaria incidence in the Mopani, Sekhukhune, Vhembe and Waterberg districts of Limpopo province. Both methods also affirmed that the Vhembe district is the most susceptible to malaria incidence, followed by the Mopani district. We recommend that the Department of Health and the Malaria Control Programme of South Africa allocate more resources for malaria control, prevention and elimination to the Vhembe and Mopani districts of Limpopo province. Future research may involve studies on methods to select the best prior distributions. / National Research Foundation (NRF)
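The comparison described here is presumably between classical and Bayesian fits of a count regression of roughly the following form (the exact specification is an assumption on my part):

```latex
Y_d \sim \mathrm{NegBin}(\mu_d, \alpha), \qquad
\log \mu_d = \beta_0 + \beta_1\,\mathrm{rain}_d + \beta_2\,\mathrm{temp}^{\mathrm{day}}_d
 + \beta_3\,\mathrm{temp}^{\mathrm{night}}_d + \beta_4\,\mathrm{NDVI}_d + \beta_5\,\mathrm{elev}_d,
```

where Y_d is the malaria case count for a district-period d and \alpha is the overdispersion parameter. The classical analysis maximizes this likelihood and reports confidence intervals, while the Bayesian analysis adds priors on (\beta, \alpha) and summarizes MCMC draws with credible intervals.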
528

Essays in Information and Behavioral Economics

Ravindran, Dilip Raghavan January 2021 (has links)
This dissertation studies problems in individual and collective decision making. Chapter 1 examines how information providers may compete to influence the actions of one or many decision makers. This chapter studies a Bayesian Persuasion game with multiple senders who have access to conditionally independent experiments (and possibly others). Senders have zero-sum preferences over what information is revealed. The main results characterize when any set of states can be pooled in equilibrium and, as a consequence, when the state is (fully) revealed in every equilibrium. The state must be fully revealed in every equilibrium if and only if sender utility functions satisfy a ‘global nonlinearity’ condition. In the binary-state case, the state is fully revealed in every equilibrium if and only if some sender has nontrivial preferences. Our main takeaway is that ‘most’ zero-sum sender preferences result in full revelation. We discuss a number of extensions and variations. Chapter 2 studies Liquid Democracy (LD), a voting system which combines aspects of direct democracy (DD) and representative democracy (RD) and is becoming more widely used for collective decision making. In LD, for every decision each voter is endowed with a vote and can cast it themselves or delegate it to another voter. We study information aggregation under LD in a common-interest jury voting game with heterogeneously well-informed voters. There is an incentive for a voter i to delegate to someone better informed, but delegation has a cost: if i delegates her vote, she can no longer express her own private information by voting. Delegation trades off empowering better information and making use of more information. Under some conditions, efficiency requires the number of votes held by each nondelegator to optimally reflect how well informed they are. Under efficiency, LD improves welfare over DD and RD, especially in medium-sized committees. However, LD also admits inefficient equilibria characterized by a small number of voters holding a large share of votes. Such equilibria can do worse than DD and can fail to aggregate information asymptotically. We discuss the implications of our results for implementing LD. For many years, psychologists have discussed the possibility of choice overload: large choice sets can be detrimental to a chooser's wellbeing. The existence of such a phenomenon would have a profound impact on both the positive and normative study of economic decision making, yet recent meta-studies have reported mixed evidence. In Chapter 3, we argue that existing tests of choice overload, as measured by an increased probability of choosing a default option, are likely to be significantly underpowered because, ceteris paribus, we should expect the default alternative to be chosen less often in larger choice sets. We propose a more powerful test based on richer data and characterization theorems for the Random Utility Model. These new approaches come with significant econometric challenges, which we show how to address. We apply the resulting tests to an exploratory data set of choices over lotteries.
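The delegation trade-off in Chapter 2 can be illustrated with a toy weighted-majority simulation (my own example, not the thesis's model): concentrating all votes in the best-informed voter discards everyone else's information, while one-person-one-vote ignores differences in accuracy; intermediate vote shares can beat both.

```python
import numpy as np

rng = np.random.default_rng(1)

def correct_rate(accuracies, weights, n_sim=200_000):
    """Probability that a weighted majority vote matches the true binary state
    when each voter independently votes correctly with her own accuracy."""
    acc = np.asarray(accuracies, dtype=float)
    w = np.asarray(weights, dtype=float)
    correct = rng.random((n_sim, acc.size)) < acc
    score = np.where(correct, w, -w).sum(axis=1)
    return np.mean(score > 0) + 0.5 * np.mean(score == 0)   # split ties

acc = [0.9] + [0.6] * 10          # one well-informed voter, ten less-informed

print(correct_rate(acc, [1] * 11))        # direct democracy      ~0.81
print(correct_rate(acc, [11] + [0] * 10)) # everyone delegates    0.90
print(correct_rate(acc, [5] + [1] * 10))  # graded vote shares    ~0.91
```

The last line mirrors the abstract's efficiency condition: the number of votes held by a non-delegator should reflect how well informed she is (here, roughly the log-odds of her accuracy).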
529

Uncertainty and Complexity: Essays on Statistical Decision Theory and Behavioral Economics

Goncalves, Duarte January 2021 (has links)
This dissertation studies statistical decision making and belief formation in the face of uncertainty, that is, when agents' payoffs depend on an unknown distribution. Chapter 1 introduces and analyzes an equilibrium solution concept in which players sequentially sample to resolve strategic uncertainty over their opponents' distribution of actions. Bayesian players can sample from their opponents' distribution of actions at a cost and make optimal choices given their posterior beliefs. The solution concept makes predictions on the joint distribution of players' choices, beliefs, and decision times, and generates stochastic choice through the randomness inherent to sampling, without relying on indifference or choice mistakes. It rationalizes well-known deviations from Nash equilibrium such as the own-payoff effect, and I show that its novel predictions relating choices, beliefs, and decision times are supported by existing data. Chapter 2 presents experimental evidence establishing that the level of incentives affects both gameplay and mean beliefs. Holding fixed the actions of the other player, it is shown that, in the context of a novel class of dominance-solvable games, diagonal games, higher incentives make subjects more likely to best-respond to their beliefs. Moreover, higher incentives result in more responsive, though not necessarily less biased, beliefs. Incentives affect effort, as proxied by decision time, and it is effort, not incentives directly, that accounts for the changes in belief formation. The results support models where, in addition to choice mistakes, players exhibit costly attention. Chapter 3 examines the class of diagonal games used in Chapter 2. Diagonal games are a new class of two-player dominance-solvable games that constitutes a useful benchmark in the study of cognitive limitations in strategic settings, both for exploring predictions of theoretical models and for experiments. This class of finite games allows for a disciplined way to vary two features of the strategic setting plausibly related to game complexity: the number of steps of iterated elimination of dominated actions required to reach the dominance solution and the number of actions. Furthermore, I derive testable implications of solution concepts such as level-k, endogenous depth of reasoning, sampling equilibrium, and quantal response equilibrium. Finally, Chapter 4 studies the robustness of pricing strategies when a firm is uncertain about the distribution of consumers' willingness-to-pay. When the firm has access to data to estimate this distribution, a simple strategy is to implement the mechanism that is optimal for the estimated distribution. We find that such an empirically optimal mechanism delivers exponential, finite-sample profit and regret guarantees. Moreover, we provide a toolkit to evaluate the robustness properties of different mechanisms, showing how to consistently estimate and conduct valid inference on the profit generated by any one mechanism, which enables one to evaluate and compare their probabilistic revenue guarantees.
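For Chapter 4, the "empirically optimal mechanism" in the simplest posted-price case reduces to maximizing revenue against the empirical distribution of willingness-to-pay. A minimal sketch with simulated data (the lognormal demand and sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)

def empirically_optimal_price(wtp):
    """Price maximizing p * (share of sampled consumers with WTP >= p).
    Checking only observed WTP values is without loss for the empirical CDF."""
    candidates = np.unique(wtp)
    revenue = candidates * np.array([np.mean(wtp >= p) for p in candidates])
    return candidates[int(np.argmax(revenue))]

sample = rng.lognormal(mean=0.0, sigma=0.5, size=200)   # firm's data
p_hat = empirically_optimal_price(sample)

# Evaluate the chosen price against (a large draw from) the true distribution.
holdout = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)
true_rev = lambda p: p * np.mean(holdout >= p)
best_true = max(true_rev(p) for p in np.linspace(0.05, 5.0, 500))
print(p_hat, true_rev(p_hat), best_true)    # regret = best_true - true_rev(p_hat)
```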
530

Essays in Applied Microeconomic Theory

Copland, Andrew Gregory January 2022 (has links)
Thesis advisor: Uzi Segal / This collection of papers examines applications of microeconomic theory to practical problems. More specifically, I identify frictions between theoretical results and agent behavior. I seek to resolve these tensions by either proposing mechanisms that more closely capture the theoretical environment of interest or extending the model to more closely approximate the world as individuals perceive it. In the first chapter, "Compensation without Distortion," I propose a new mechanism for compensating subjects in preference elicitation experiments. The motivation for this tool is the theoretical problem of incentive compatibility in decision experiments. A hallmark of experimental economics is the connection between a subject's payment and their actions or decisions; however, previous literature has highlighted shortcomings in this link between compensation and the methods currently used to elicit beliefs. Specifically, compensating individuals based on the choices they make increases reliability, but these payments can themselves distort subjects' preferences, limiting the resulting data's usefulness. I reexamine the source of the underlying theoretical tension and propose using a stochastic termination mechanism called the "random stopping procedure" (RSP). I show that the RSP is theoretically able to structurally avoid the preference distortions induced by the current state-of-the-art protocols. By changing the underlying context in which subjects answer questions, resolving payoff uncertainty immediately after every decision is made, the assumed impossibility of asking multiple questions without creating preference distortions is theoretically resolved. To test this prediction, I conduct an experiment explicitly designed to test the accuracy of data gathered by the RSP against the current best practice for measuring subject preferences. Results show that RSP-elicited preferences more closely match a control group's responses than the alternative. In the second chapter, "School Choice and Class Size Externalities," I revisit the many-to-one matching problem of school choice. I focus on the importance of problem definition, and argue that the "standard" school choice model is insufficiently sensitive to relevant characteristics of student preferences. Motivated by the observation that students care about both the school they attend and how over- or under-crowded the school is, I extend the problem definition to allow students to report preferences over both schools and cohort sizes. (Cohort size is intended as a generalization of school crowding, relative resources, or other similar school characteristics.) I show that, if students do have preferences over schools and cohort sizes, current mechanisms lose many of their advantageous properties, and are no longer stable, fair, or non-wasteful. Moreover, I show that current mechanisms no longer necessarily incentivize students to truthfully report their preferences over school orderings.
In response, I construct an alternative matching mechanism, called the deferred acceptance with voluntary withdrawals (DAwVW) mechanism, which improves on the underlying (unobserved) manipulability of "standard" mechanisms. The DAwVW mechanism is deterministic and terminates, more closely satisfies core desirable matching properties, and can yield substantial efficiency gains compared to mechanisms that do not consider class size. In the third chapter, I provide an overview of the history of decision experiments in economics, describe several of the underlying tensions that motivate my other projects, and identify alternative potential solutions that have been proposed by others to these problems. In this project, I add context to the larger field of experimental economics in which my research is situated. In addition to the mechanisms discussed by prior literature reviews, I incorporate and discuss recently developed payment and elicitation methods, and identify these new approaches' advantages and drawbacks. / Thesis (PhD) — Boston College, 2022. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Economics.
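For reference, here is a compact sketch of the student-proposing deferred acceptance algorithm that mechanisms like DAwVW build on; the voluntary-withdrawal step described in the dissertation is not implemented here, and the example preferences are invented.

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing deferred acceptance (Gale-Shapley) with capacities.
    Standard baseline only; no class-size preferences or withdrawals."""
    rank = {s: {st: i for i, st in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}       # next school to propose to
    held = {s: [] for s in school_prefs}                # tentatively admitted students
    unmatched = list(student_prefs)
    while unmatched:
        st = unmatched.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                                    # exhausted preference list
        s = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[s].append(st)
        held[s].sort(key=lambda x: rank[s][x])          # school's priority order
        if len(held[s]) > capacities[s]:
            unmatched.append(held[s].pop())             # reject lowest-priority student
    return held

# Invented example: three students, two schools.
students = {"s1": ["X", "Y"], "s2": ["X", "Y"], "s3": ["Y", "X"]}
schools = {"X": ["s2", "s1", "s3"], "Y": ["s1", "s3", "s2"]}
print(deferred_acceptance(students, schools, {"X": 1, "Y": 2}))
# {'X': ['s2'], 'Y': ['s1', 's3']}
```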
