1

Essays on Environmental Regulation, Management and Conflict

Sjöberg, Eric January 2013
This thesis consists of three papers, summarized as follows. In The political economy of environmental regulation, I study how enforcement of national environmental legislation differs across municipalities in Sweden depending on the local political situation. While the legislation is national, enforcement is decentralized. I find that municipalities where the Green Party joins the ruling political coalition issue more environmental fines than other municipalities. In Pricing on the fish market, I use Swedish data to study how size affects the price per kilo of fish for several species. In traditional fishery biomass models, fish stocks are treated as homogeneous. Newer heterogeneous fishery models, in which size is allowed to vary within a fish stock, have important implications for regulation, for example that it is optimal to regulate the number of fish rather than their weight. However, prices in these models are assumed to be constant. My estimates can be used to shed light on how prices change when the size composition of the catch changes. In my third and final chapter, Settlement under the threat of conflict - The cost of asymmetric information, I present a theoretical model in which two players can divide a good peacefully or engage in a contest to obtain the entire good. I assume that one player's valuation of the good is private information and show how this affects the expected cost of the contest and thus the probability of peaceful settlement.
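As a concrete illustration of the second paper's empirical exercise, the sketch below runs a simple hedonic regression of log price per kilo on fish size with species fixed effects. The data, variable names, and functional form are hypothetical and are not the thesis's specification.

```python
import numpy as np

# Hypothetical transaction data: species id, fish weight (kg), price per kilo
rng = np.random.default_rng(0)
n, n_species = 1000, 3
species = rng.integers(0, n_species, size=n)
weight = rng.uniform(0.2, 5.0, size=n)
# Simulated "true" relationship: larger fish fetch a higher price per kilo
species_premium = np.array([2.0, 2.5, 1.8])
log_price = species_premium[species] + 0.15 * np.log(weight) + 0.1 * rng.normal(size=n)

# Regress log price per kilo on log weight with species fixed effects (dummies)
X = np.column_stack([np.eye(n_species)[species], np.log(weight)])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print("species intercepts:", np.round(coef[:n_species], 2))
print("elasticity of price/kg w.r.t. weight:", round(coef[-1], 3))
```

An estimate of the weight elasticity along these lines is the kind of input needed to relax the constant-price assumption in heterogeneous fishery models.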
2

Essays in industrial organization of Peer-to-Peer online credit markets

Talal-Ur-Rahim, Fnu 27 November 2018
This dissertation consists of three essays on Peer-to-Peer (P2P) online credit markets. The first essay presents new empirical evidence of decreases in loan demand and repayment when prices in the market are determined by competing lenders in auctions, compared with the case in which a platform directly controls all prices. The paper develops an econometric model of loan demand and repayment, which is then used to predict borrower choices when they are offered prices set by lenders in a market. I find that when lenders set prices, borrowers are more likely to pick loans of shorter maturity and smaller size, and repay less. Aggregated to the market level, demand and repayment of credit fall by 10% and 2%, respectively. In the second paper, I quantify the effects of introducing finer credit scoring on credit demand, defaults, and repayment in the context of a large P2P online credit platform. I exploit an exogenous change in the platform's credit scoring policy, where centralized price-setting rules ensure that the one-to-one relationship between credit scores and prices remains intact, unlike in a traditional credit market where it is broken. The results show that a 1% increase in the interest rate due to finer credit scoring leads to an average decrease of 0.29% in the requested loan amount, an average increase of 0.01 in the fraction of borrowers who default, and an average increase of 0.02 in the fraction of the loan repaid. These findings contribute to a better understanding of how a reduction in information asymmetry affects borrower choices in a credit market. The third paper explores the main drivers of the geographic expansion in demand for credit from P2P online platforms, using data from the two largest platforms in the United States. By exploiting heterogeneity in local credit markets before the entry of P2P online platforms, the paper estimates the effect of local credit market conditions on demand for credit from P2P platforms, with a spatial autoregressive model as the main specification. We find that P2P consumer credit expanded more in counties with poor branch networks, lower concentration of banks, and lower leverage ratios.
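A minimal sketch of a spatial autoregressive (SAR) specification of the kind used as the third paper's main specification: county-level P2P credit demand depends on neighboring counties' demand through a spatial weight matrix and on local credit-market conditions. The weight matrix, covariates, and parameter values below are purely illustrative assumptions, not the paper's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                    # hypothetical counties

# Row-standardized spatial weight matrix (here: a simple ring of two neighbors)
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

# Local credit-market conditions (e.g., branch density, concentration, leverage)
X = rng.normal(size=(n, 3))
beta = np.array([-0.6, -0.3, -0.4])        # illustrative signs
rho = 0.4                                  # spatial spillover parameter

# SAR model: y = rho*W*y + X*beta + e  =>  y = (I - rho*W)^(-1) (X*beta + e)
e = 0.5 * rng.normal(size=n)
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + e)

# Fitting y on [W @ y, X] by least squares is generally inconsistent because the
# spatial lag W @ y is endogenous; SAR models are usually estimated by maximum
# likelihood or GMM instead. The line below is only a quick diagnostic.
naive = np.linalg.lstsq(np.column_stack([W @ y, X]), y, rcond=None)[0]
print("naive estimates of [rho, beta]:", np.round(naive, 2))
```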
3

Revisiting Random Utility Models

Azari Soufiani, Hossein 06 June 2014
This thesis explores extensions of Random Utility Models (RUMs), providing more flexible models and adopting a computational perspective. This includes building new models and understanding their properties, such as identifiability and the log-concavity of their likelihood functions, as well as developing estimation algorithms.
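As one concrete member of the RUM family, the sketch below evaluates the Plackett-Luce ranking likelihood, whose log-likelihood is concave in the log-utility parameters. It is offered only as an illustration of the kind of identifiability and log-concavity questions the thesis studies, not as the thesis's own model.

```python
import numpy as np

def plackett_luce_loglik(theta, rankings):
    """Log-likelihood of full rankings under the Plackett-Luce RUM.

    theta    : log-utility parameter for each alternative (identified only up to
               a common shift, so one entry is typically normalized to 0)
    rankings : list of rankings, each listing alternative indices best-to-worst
    """
    ll = 0.0
    for ranking in rankings:
        remaining = list(ranking)
        for chosen in ranking[:-1]:
            # log probability that the chosen item beats everything remaining
            ll += theta[chosen] - np.logaddexp.reduce(theta[remaining])
            remaining.remove(chosen)
    return ll  # concave in theta: sum of linear terms minus log-sum-exp terms

# Illustrative data: three alternatives, two observed rankings
theta = np.array([0.0, 0.4, -0.2])
print(plackett_luce_loglik(theta, rankings=[[1, 0, 2], [0, 1, 2]]))
```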
4

Real-Time Demand Estimation for Water Distribution Systems

Kang, Doo Sun January 2008
The goal of a water distribution system (WDS) is to supply the desired quantity of fresh water to consumers at the appropriate time. In order to properly operate a WDS, system operators need information about system states, such as tank water levels, nodal pressures, and water quality, at locations across the system. Most water utilities now have some level of SCADA (Supervisory Control and Data Acquisition) systems providing nearly real-time monitoring data. However, due to prohibitive metering costs and a lack of applications for the data, only portions of systems are monitored and the use of the SCADA data is limited. This dissertation takes a comprehensive view of real-time demand estimation in water distribution systems. The goal is to develop an optimal monitoring system plan that will collect appropriate field data to determine accurate, precise demand estimates and to understand their impact on model predictions. To achieve that goal, a methodology for real-time demand estimation and the associated uncertainties using a limited number of field measurements is developed. Further, system-wide nodal pressures and chlorine concentrations, and their uncertainties, are predicted using the estimated nodal demands. This dissertation is composed of three journal manuscripts that address these key steps, beginning with uncertainty evaluation, followed by demand estimation, and finally optimal metering layout. The uncertainties associated with the state estimates are quantified in terms of confidence limits. To compute the uncertainties in real time, alternative schemes that reduce computational effort while providing good statistical approximations are evaluated and verified by Monte Carlo simulation (MCS). The first-order second-moment (FOSM) method provides accurate variance estimates for pressure; however, because of its linearity assumption it has limited predictive ability for chlorine under unsteady conditions. Latin hypercube sampling (LHS) provides good estimates of prediction uncertainty for chlorine and pressure in steady and unsteady conditions with significantly less effort. For real-time demand estimation, two recursive state estimators are applied: a tracking state estimator (TSE) based on a weighted least squares (WLS) scheme and a Kalman filter (KF). In addition, in order to identify the field data types suitable for demand estimation, comparative studies are performed using pipe flow rates and nodal pressure heads as measurements. To reduce the number of unknowns and make the system solvable, nodes with similar user characteristics are grouped and assumed to share the same demand pattern. The uncertainties in the state variables are quantified in terms of confidence limits using the approximate methods (i.e., FOSM and LHS). Results show that TSE with pipe flow rates as measurements provides reliable demand estimates. Also, the model predictions computed using the estimated demands match the synthetically generated true values well. Field measurements are critical to obtaining quality real-time state estimates. However, the limited number of metering locations has been a significant obstacle for real-time studies, and identifying the locations that yield the most information is critical. Here, optimal meter placement (OMP) is formulated as a multi-objective optimization problem and solved using a multi-objective genetic algorithm (MOGA) based on Pareto-optimal solutions. Results show that model accuracy and precision should be pursued simultaneously as objectives, since the two measures have a trade-off relationship. The GA solutions improved on less robust methods and on designers' experienced judgment.
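A minimal sketch of the Latin hypercube sampling step used for uncertainty propagation, assuming independent normally distributed nodal demands and a placeholder hydraulic model; the function names, distributions, and numbers are illustrative, not the dissertation's.

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n_samples, means, stds, rng=None):
    """Draw LHS samples for independent normally distributed demand inputs."""
    rng = np.random.default_rng(rng)
    n_dims = len(means)
    # One stratified uniform draw per (sample, dimension), then shuffle each column
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])
    return norm.ppf(u) * stds + means

# Illustrative use: propagate demand uncertainty through a (hypothetical) model
demands = latin_hypercube(200, means=np.array([10.0, 5.0, 8.0]),
                          stds=np.array([1.0, 0.5, 0.8]))
# pressures = np.array([hydraulic_model(d) for d in demands])  # model not shown here
# ci = np.percentile(pressures, [2.5, 97.5], axis=0)           # 95% confidence limits
print(demands.mean(axis=0), demands.std(axis=0))
```

Because each input dimension is stratified, far fewer samples are needed than with plain Monte Carlo to obtain stable confidence limits, which is the computational saving the dissertation exploits.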
5

Beers and Bonds : Essays in Structural Empirical Economics

Romahn, André January 2012
This dissertation consists of four papers in structural empirics that can be broadly categorized into two areas. The first three papers revolve around the structural estimation of demand for differentiated products and several applications thereof (Berry (1994), Berry, Levinsohn and Pakes (1995), Nevo (2000)), while the fourth paper examines the U.S. Treasury yield curve by estimating yields as linear functions of observable state variables (Ang and Piazzesi (2003), Ang et al. (2006)). The central focus of each paper is the underlying economics. Nevertheless, all papers share a common empirical approach. Be it prices of beers in Sweden or yields of U.S. Treasury bonds, it is assumed throughout that the economic variables of interest can be modeled by imposing specific parametric functional forms. The underlying structural parameters are then consistently estimated based on the variation in the available data. Consistent estimation naturally hinges on the assumption that the imposed functional forms are correct. Another way of viewing this is that the imposed functions must be flexible enough not to force restrictive patterns on the data that ultimately lead to biased estimates of the structural parameters and thereby produce misleading conclusions about the underlying economics. In principle, the danger of misspecification could therefore be avoided by adopting sufficiently flexible functional forms. This, however, typically requires estimating a growing number of structural parameters that determine the underlying economic relationships. As an example, consider the estimation of differentiated product demand. The key object of interest is the substitution pattern between the products: what happens to the demand for good X and all its rival products as the price of good X increases. With N products in total, we could collect the product-specific changes in demand in a vector with N entries. It is also possible, however, that the price of any other good Y changes and thereby alters the demand for the remaining varieties. Thus, in total, we are interested in N² price effects on product-specific demand. With few products, these effects could be estimated directly and the risk of functional misspecification could be excluded (Goolsbee and Petrin (2004)). With 100 products, however, we would have to estimate 10,000 parameters, which is rarely, if ever, feasible. This is the curse of dimensionality. Each estimation method employed in the four papers breaks this curse by imposing functions that depend on relatively few parameters, and thereby tries to strike a balance between the need for parsimonious structural frameworks and the risk of misspecification. This is a fundamental feature of empirical research in economics that makes it both interesting and challenging. (Diss. Stockholm: Stockholm School of Economics, 2012. Introduction together with 4 papers.)
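To make the dimensionality point concrete, the sketch below uses a plain logit demand, the simplest parametric shortcut: a single price coefficient together with observed shares generates the full N x N matrix of price effects that would otherwise require N² free parameters. The shares, prices, and coefficient are made-up numbers, and the plain logit is only an illustration, not one of the models estimated in the thesis.

```python
import numpy as np

# Illustrative shares and prices for N = 4 products (hypothetical numbers)
shares = np.array([0.20, 0.15, 0.30, 0.10])   # inside-good market shares
prices = np.array([10.0, 12.0, 8.0, 15.0])
alpha = -0.08                                 # single estimated price coefficient

# In a plain logit, d s_j / d p_k = alpha * s_j * (1{j==k} - s_k):
# one parameter generates all N**2 price effects.
N = len(shares)
dsdp = alpha * shares[:, None] * (np.eye(N) - shares[None, :])

# Corresponding elasticity matrix: e_jk = (p_k / s_j) * d s_j / d p_k
elasticities = dsdp * prices[None, :] / shares[:, None]
print(np.round(elasticities, 3))
```

The cost of this parsimony is visible in the output: every cross-price effect in a given column is identical, which is exactly the kind of restrictive pattern the abstract warns against and which richer specifications relax at the price of more parameters.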
6

The Value of Branding in Two-sided Platforms

Sun, Yutec 13 August 2013
This thesis studies the value of branding in the smartphone market. Measuring brand value with data available at the product level entails computational and econometric challenges due to data constraints, and these issues motivate the three studies of the thesis. Chapter 2 studies the smartphone market to understand how operating system platform providers can grow one of their most important intangible assets, brand value, by leveraging the indirect network between the two user groups of a two-sided platform. The main finding is that the iPhone achieved the greatest brand value growth by opening its platform to third-party developers, thereby effectively connecting consumers and developers through its app store. Without the open app store, I find that the iPhone would have lost brand value by becoming a two-sided platform. These findings provide an important lesson: an open platform strategy is vital to the success of building platform brands. Chapter 3 solves a computational challenge in the structural estimation of aggregate demand. I develop a computationally efficient MCMC algorithm for the GMM estimation framework developed by Berry, Levinsohn and Pakes (1995) and Gowrisankaran and Rysman (forthcoming). I combine the MCMC method with the classical approach by transforming the GMM objective into a Laplace-type estimation framework, thereby avoiding the need to formulate a likelihood model. The proposed algorithm solves the two fixed-point problems, the market share inversion and the dynamic programming problem, incrementally within the MCMC iterations. Hence the proposed approach achieves computational efficiency without compromising the advantages of the conventional GMM approach. Chapter 4 reviews recently developed econometric methods to control for endogeneity bias when the random slope coefficient is correlated with treatment variables. I examine how standard instrumental variables and control function approaches can solve this slope endogeneity problem under two general frameworks commonly used in the literature.
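A minimal sketch of the market share inversion fixed point mentioned in Chapter 3, shown for the plain logit special case where the answer is also available in closed form; this is a generic illustration of the BLP-style contraction with invented shares, not the author's MCMC algorithm.

```python
import numpy as np

def logit_shares(delta):
    """Market shares implied by mean utilities delta (outside good utility = 0)."""
    e = np.exp(delta)
    return e / (1.0 + e.sum())

def invert_shares(observed_shares, tol=1e-12, max_iter=1000):
    """BLP-style contraction: find delta such that model shares = observed shares."""
    delta = np.log(observed_shares)  # starting value
    for _ in range(max_iter):
        new_delta = delta + np.log(observed_shares) - np.log(logit_shares(delta))
        if np.max(np.abs(new_delta - delta)) < tol:
            return new_delta
        delta = new_delta
    return delta

s_obs = np.array([0.2, 0.15, 0.3])
delta_hat = invert_shares(s_obs)
# For the plain logit the fixed point has a closed form, delta_j = ln(s_j) - ln(s_0),
# so we can check that the contraction recovers it numerically.
print(np.allclose(delta_hat, np.log(s_obs) - np.log(1 - s_obs.sum())))
```

In richer random-coefficient models no closed form exists, which is why this inversion becomes one of the costly fixed-point problems the chapter folds into the MCMC iterations.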
7

Bias from ignoring price dispersion in demand estimation

Pinto, Tomás Milanez Ferreira 30 January 2015
Consumers often pay different prices for the same product bought in the same store at the same time. However, the demand estimation literature has ignored that fact, using instead aggregate measures such as the 'list' or average price. In this paper we show that this leads to biased price coefficients. Furthermore, we perform simple comparative-statics simulation exercises for the logit and random coefficient models. In the 'list' price case, we find that the bias is larger when discounts are deeper, when the proportion of consumers facing discounted prices is higher, and when consumers are so unwilling to buy the product that they almost only do so when facing a discount. In the average price case, we find that the bias is larger when discounts are deeper, when the proportions of consumers with and without access to discounts are similar, and when consumers' willingness to buy depends heavily on idiosyncratic shocks. The bias is also less problematic in the average price case in markets with many bargain deals, where the average price is close to the prices individuals actually pay. We conclude by proposing ways in which the econometrician can reduce this bias using different information that may be available.
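A minimal Monte Carlo sketch of the mechanism for the simplest case: one product, a binary purchase decision, half of the consumers in each market receiving a fixed discount, and an aggregate logit estimated with the average price. The parameter values and simulation design are invented for illustration and are not the paper's exercises.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true, beta_true = -2.0, 1.0            # price coefficient, intercept
discount, share_discounted = 0.8, 0.5        # deep discount, half of consumers get it
n, T = 20000, 50                             # consumers per market, markets
list_price = rng.uniform(1.0, 2.0, T)

# Simulate individual purchases: within each market, some consumers pay the
# list price and others pay the discounted price
shares = np.empty(T)
for t in range(T):
    gets_discount = rng.random(n) < share_discounted
    p_i = np.where(gets_discount, list_price[t] - discount, list_price[t])
    v = beta_true + alpha_true * p_i + rng.logistic(size=n)   # buy if v > 0
    shares[t] = (v > 0).mean()

# Aggregate logit estimated with the AVERAGE price: ln(s/(1-s)) = beta + alpha*p.
# Ignoring the within-market price dispersion biases the price coefficient.
avg_price = list_price - discount * share_discounted
alpha_hat = np.polyfit(avg_price, np.log(shares / (1 - shares)), 1)[0]
print(f"true alpha = {alpha_true}, estimate with average price = {alpha_hat:.2f}")
```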
8

Essays in Structural Demand Estimation

Monardo, Julien 18 October 2019
Estimation of structural demand models in differentiated product markets plays an important role in economics. It allows researchers to better understand consumers' choices and, among other things, to assess the effects of mergers, new products, and changes in regulation. The standard approach consists in specifying a utility model, typically an additive random utility model, computing its demands, and inverting them to obtain inverse demand equations, which serve as the basis for estimation. However, since these inverse demands generally have no closed form, estimation requires numerical inversion and non-linear optimization, which can be painful and time-consuming.
This dissertation adopts a different approach, developing novel inverse demand models that are consistent with a utility model of heterogeneous consumers. This approach accommodates rich substitution patterns through simple linear regressions on data covering market shares, prices, and product characteristics. The first chapter develops the inverse product differentiation logit (IPDL) model, which generalizes the nested logit models to allow for richer substitution patterns, including complementarity. It also shows that the IPDL model belongs to the class of generalized inverse logit (GIL) models, which includes a vast majority of the additive random utility models that have been used for demand estimation. The second chapter develops the flexible inverse logit (FIL) model, a GIL model that uses a flexible nesting structure with a nest for each pair of products. It shows that the FIL model, projected into product characteristics space, makes the price elasticities depend directly on product characteristics and, using Monte Carlo simulations, that it is able to mimic those of the "flexible" random coefficient logit model. The third chapter studies the micro-foundation of the GIL model. It shows that the restrictions the GIL model imposes on the inverse demand function are necessary and sufficient for consistency with a model of heterogeneous, utility-maximizing consumers, known as the perturbed utility model. It also shows that any GIL model yields a demand function that satisfies a slight variant of the Daly-Zachary conditions, which makes it possible to combine substitutability and complementarity in demand.
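To illustrate the kind of linear-regression estimation the GIL/IPDL class builds on, the sketch below simulates a one-level nested logit (the classic special case that the IPDL generalizes) and recovers its parameters from a single linear regression of transformed shares, in the spirit of Berry (1994). The unobserved quality term is set to zero so plain least squares works exactly; with real data it is present, and prices and within-group shares must be instrumented. All numbers are made up, and this is not the IPDL model itself.

```python
import numpy as np

rng = np.random.default_rng(1)
J, T, G = 12, 200, 3                       # products, markets, nests
group = np.repeat(np.arange(G), J // G)    # product-to-nest assignment
alpha, beta, sigma = 1.5, 1.0, 0.5         # "true" parameters to recover

x = rng.normal(size=(T, J))                # product characteristic
p = 3.0 + np.abs(rng.normal(size=(T, J)))  # prices
delta = beta * x - alpha * p               # mean utilities (unobserved quality = 0)

# Nested logit market shares (outside good has utility 0 in its own nest)
expd = np.exp(delta / (1.0 - sigma))
D = np.stack([expd[:, group == g].sum(axis=1) for g in range(G)], axis=1)
s_group = D ** (1.0 - sigma) / (1.0 + (D ** (1.0 - sigma)).sum(axis=1, keepdims=True))
s_within = expd / D[:, group]              # within-nest shares
s = s_within * s_group[:, group]
s0 = 1.0 - s.sum(axis=1, keepdims=True)    # outside-good share

# Linear inverse demand: ln s_j - ln s_0 = x*beta - alpha*p + sigma*ln s_(j|g)
y = (np.log(s) - np.log(s0)).ravel()
X = np.column_stack([np.ones(y.size), x.ravel(), p.ravel(), np.log(s_within).ravel()])
coef = np.linalg.lstsq(X, y, rcond=None)[0]
print("beta, -alpha, sigma:", np.round(coef[1:], 3))   # close to 1.0, -1.5, 0.5
```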
9

Transitions in new technology and market structure: applications and new methods for discrete choice model estimation

Wang, Shuang 06 November 2021
My dissertation consists of three chapters that evaluate the social welfare effects of either antitrust policy or industrial transition, all using discrete choice model estimation as the front end for counterfactual analysis. In the first chapter, I investigate the economic impact of the merger that created the world's largest hotel chain, Marriott's acquisition of Starwood, thereby shedding light on the antitrust authorities' performance in protecting competitive markets for the benefit of consumers. Unlike traditional merger analysis, which focuses on the tradeoff between upward pricing pressure and cost synergies among the merging parties while holding the market structure fixed, I endogenize firms' entry decisions in an oligopoly price competition model. To tackle the associated multiple equilibria issue, I use moment inequality estimation and propose a novel lower probability bound that reduces the computational burden from exponential to linear in the number of players. The chapter also adds to the scant empirical evidence on post-merger cost synergies by showing that each additional affiliated hotel in the local market reduces a hotel's marginal cost by up to 2.3%. A comparison between the simulated with-merger and without-merger equilibria then indicates that this merger enhances social welfare. In particular, among markets previously unprofitable for any firm to enter, the post-merger cost savings would lead Marriott or Starwood to enter 6% - 24% of them, which provides a new perspective for merger reviews. The second chapter, joint with Mingli Chen, Marc Rysman and Krzysztof Wozniak, studies the determinants of the US payment system's shift from paper payment instruments, namely cash and check, to digital instruments, such as debit cards and credit cards. With a five-year transaction-level panel, for the first time in the literature, we can distinguish the short-term effects of transaction size from long-term changes in households' preferences. To do so, we incorporate a household-product-quarter fixed effect into a multinomial logit model. We develop a new method based on the Minorization-Maximization (MM) algorithm to address the prohibitive computational challenge of estimating over one million fixed effects in such a nonlinear model. Results show that over a short horizon (within a quarter), the probability of using a card generally increases with transaction size but exhibits substantial household heterogeneity. Over the long horizon (the five-year span of the data), using the estimated household-product-quarter fixed effects, we decompose the increase in card usage into different channels and find that only a third of it is due to changes in household preferences. Another significant driver is households' entry into and exit from the sample. In the third chapter, my coauthors Jacob LaRiviere, Aadharsh Kannan, and I explore the "death of distance" hypothesis with a novel anonymized customer-level dataset on demand for cloud computing, accounting for both spatial and price competition among public cloud providers. We introduce a mixed logit demand model of spatial competition that is estimable with detailed data from one firm but only aggregate sales data from a second. We leverage the Expectation-Maximization (EM) algorithm to tackle the customer-level missing data problem for the second firm. Estimation results and counterfactuals show that standard spatial competition economics holds even when distance matters little for cloud latency.
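As a schematic of the second chapter's choice model, the sketch below evaluates multinomial logit probabilities over payment instruments in which the utility index combines a household-product-quarter fixed effect with an instrument-specific transaction-size term. The functional form, parameter values, and instrument list are illustrative assumptions, not the authors' estimates.

```python
import numpy as np

def choice_probabilities(fe, gamma, log_amount):
    """Multinomial logit probabilities over payment instruments for one
    household in one quarter.

    fe         : (K,) household-product-quarter fixed effects (one per instrument)
    gamma      : (K,) instrument-specific sensitivity to transaction size
    log_amount : scalar log transaction amount
    """
    v = fe + gamma * log_amount              # deterministic utility index
    v -= v.max()                             # numerical stability
    e = np.exp(v)
    return e / e.sum()

# Illustrative example: cash, check, debit, credit for one household-quarter
fe = np.array([0.5, -1.0, 0.2, 0.1])
gamma = np.array([-0.8, 0.3, 0.4, 0.6])      # cards more likely for larger amounts
for amount in (5, 50, 500):
    print(amount, np.round(choice_probabilities(fe, gamma, np.log(amount)), 2))
```

The paper's MM algorithm concerns estimating over a million such fixed effects in this kind of nonlinear model; the sketch only shows the probability model they enter.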
