371

Models for managing surge capacity in the face of an influenza epidemic

Zenteno, Ana January 2013 (has links)
Influenza pandemics pose an imminent risk to society. Yearly outbreaks already represent heavy social and economic burdens. A pandemic could severely affect infrastructure and commerce through high absenteeism, supply chain disruptions, and other effects over an extended and uncertain period of time. Governmental institutions such as the Centers for Disease Control and Prevention (CDC) and the U.S. Department of Health and Human Services (HHS) have issued guidelines on how to prepare for a potential pandemic; however, much work still needs to be done to meet them. From a planner's perspective, the complexity of outlining plans to manage future resources during an epidemic stems from the uncertainty about how severe the epidemic will be. Uncertainty in parameters such as the contagion rate (how fast the disease spreads) makes the course and severity of the epidemic unforeseeable, exposing any planning strategy to a potentially wasteful allocation of resources. Our approach deploys additional resources in response to a robust model of the evolution of the epidemic, so as to hedge against the uncertainty in its evolution and intensity. Under existing plans, large cities would make use of networks of volunteers, students, and recent retirees, or borrow staff from neighboring communities. Taking into account that such additional resources are likely to be significantly constrained (e.g., in quantity and duration), we seek to produce robust emergency staff commitment levels that work well under different trajectories and degrees of severity of the pandemic. Our methodology combines robust optimization techniques with epidemiology (SEIR models) and system performance modeling. We describe cutting-plane algorithms, analogous to generalized Benders decomposition, that prove fast and numerically accurate. Our results yield insights into the structure of optimal robust strategies and into practical rules of thumb that can be deployed during the epidemic. To assess the efficacy of our solutions, we study their performance under different scenarios and compare them against other seemingly good strategies through numerical experiments. This work would be particularly valuable for institutions that provide public services, whose continuity of operations is critical for a community, especially in view of an event of this caliber. To the best of our knowledge, this is the first time this problem has been addressed in a rigorous way; in particular, we are not aware of any other robust optimization applications in epidemiology.
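As a minimal sketch of the SEIR compartmental dynamics this methodology builds on, the example below runs a discrete-time SEIR model for several contagion rates to show how uncertainty in that one parameter changes the peak and timing of infections; all parameter values are illustrative assumptions, not the calibrated model of the thesis.

```python
# Minimal discrete-time SEIR sketch; parameter values are illustrative
# assumptions, not those calibrated in the thesis.
import numpy as np

def seir(beta, sigma, gamma, n_days, N=1_000_000, e0=10):
    """Simulate S/E/I/R population fractions with daily Euler steps."""
    s, e, i, r = (N - e0) / N, e0 / N, 0.0, 0.0
    infected_path = []
    for _ in range(n_days):
        new_exposed   = beta * s * i        # contagion rate beta drives severity
        new_infective = sigma * e           # 1/sigma: mean incubation period
        new_removed   = gamma * i           # 1/gamma: mean infectious period
        s -= new_exposed
        e += new_exposed - new_infective
        i += new_infective - new_removed
        r += new_removed
        infected_path.append(i)
    return np.array(infected_path)

# Uncertainty in beta alone makes peak timing and height hard to plan for.
for beta in (0.25, 0.35, 0.45):
    infected = seir(beta=beta, sigma=1/3, gamma=1/5, n_days=200)
    print(f"beta={beta:.2f}  peak infected fraction={infected.max():.3f}  peak day={infected.argmax()}")
```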
372

Financial Portfolio Risk Management: Model Risk, Robustness and Rebalancing Error

Xu, Xingbo January 2013 (has links)
Risk management has always been a key component of portfolio management. While more and more complicated models are proposed and implemented as research advances, they all inevitably rely on imperfect assumptions and estimates. This dissertation aims to investigate the gap between complicated theoretical modeling and practice. We mainly focus on two directions: model risk and rebalancing error. In the first part of the thesis, we develop a framework for quantifying the impact of model error and for measuring and minimizing risk in a way that is robust to model error. This robust approach starts from a baseline model and finds the worst-case error in risk measurement that would be incurred through a deviation from the baseline model, given a precise constraint on the plausibility of the deviation. Using relative entropy to constrain model distance leads to an explicit characterization of worst-case model errors; this characterization lends itself to Monte Carlo simulation, allowing straightforward calculation of bounds on model error with very little computational effort beyond that required to evaluate performance under the baseline nominal model. This approach goes well beyond the effect of errors in parameter estimates to consider errors in the underlying stochastic assumptions of the model and to characterize the greatest vulnerabilities to error in a model. We apply this approach to problems of portfolio risk measurement, credit risk, delta hedging, and counterparty risk measured through credit valuation adjustment. In the second part, we apply this robust approach to a dynamic portfolio control problem. The sources of model error include the evolution of market factors and the influence of these factors on asset returns. We analyze both finite- and infinite-horizon problems in a model in which returns are driven by factors that evolve stochastically. The model incorporates transaction costs and leads to simple and tractable optimal robust controls for multiple assets. We illustrate the performance of the controls on historical data. Robustness does improve performance in out-of-sample tests in which the model is estimated on a rolling window of data and then applied over a subsequent time period. By acknowledging uncertainty in the estimated model, the robust rules lead to less aggressive trading and are less sensitive to sharp moves in underlying prices. In the last part, we analyze the error between a discretely rebalanced portfolio and its continuously rebalanced counterpart in the presence of jumps or mean reversion in the underlying asset dynamics. With discrete rebalancing, the portfolio's composition is restored to a set of fixed target weights at discrete intervals; with continuous rebalancing, the target weights are maintained at all times. We examine the difference between the two portfolios as the number of discrete rebalancing dates increases. We derive the limiting variance of the relative error between the two portfolios for both the mean-reverting and jump-diffusion cases. For both cases, we derive "volatility adjustments" to improve the approximation of the discretely rebalanced portfolio by the continuously rebalanced portfolio, based on the limiting covariance between the relative rebalancing error and the level of the continuously rebalanced portfolio. These results are based on strong approximation results for jump-diffusion processes.
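The relative-entropy characterization described above lends itself to a simple Monte Carlo illustration: within an entropy ball around the baseline model, the worst case exponentially tilts the baseline in the direction of the loss, so robust bounds are computed by reweighting the same baseline samples. The loss distribution, the tilt parameter theta, and all names below are illustrative assumptions, not the models studied in the thesis.

```python
# Sketch of the relative-entropy robust-risk idea: the worst-case model within
# an entropy ball around a baseline exponentially tilts the baseline in the loss.
# Baseline distribution and tilt values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
baseline_losses = rng.normal(0.0, 1.0, size=100_000)     # losses under the nominal model

def robust_expected_loss(losses, theta):
    """Worst-case expected loss under exponential tilting exp(theta * loss)."""
    w = np.exp(theta * (losses - losses.max()))           # stabilized likelihood ratio
    w /= w.mean()                                          # self-normalize so E[w] = 1
    worst_case_mean = np.mean(w * losses)
    relative_entropy = np.mean(w * np.log(w))              # D(tilted || baseline)
    return worst_case_mean, relative_entropy

# Larger theta spends a larger entropy budget and gives a more pessimistic bound,
# at no extra simulation cost beyond the baseline run.
for theta in (0.0, 0.1, 0.25):
    m, kl = robust_expected_loss(baseline_losses, theta)
    print(f"theta={theta:.2f}  worst-case mean loss={m:.3f}  entropy used={kl:.4f}")
```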
373

Stochastic Models of Limit Order Markets

Kukanov, Arseniy January 2013 (has links)
During the last two decades most stock and derivatives exchanges in the world transitioned to electronic trading in limit order books, creating a need for a new set of quantitative models to describe these order-driven markets. This dissertation offers a collection of models that provide insight into the structure of modern financial markets, and can help to optimize trading decisions in practical applications. In the first part of the thesis we study the dynamics of prices, order flows and liquidity in limit order markets over short timescales. We propose a stylized order book model that predicts a particularly simple linear relation between price changes and order flow imbalance, defined as the difference between net changes in supply and demand. The slope in this linear relation, called a price impact coefficient, is, in our model, inversely proportional to market depth, a measure of liquidity. Our empirical results confirm both of these predictions. The linear relation between order flow imbalance and price changes holds for time intervals between 50 milliseconds and 5 minutes. The inverse relation between the price impact coefficient and market depth holds on longer timescales. These findings shed new light on intraday variations in market volatility. According to our model, volatility fluctuates due to changes in market depth or in order flow variance. Previous studies also found a positive correlation between volatility and trading volume, but in order-driven markets prices are determined by the limit order book activity, so the association between trading volume and volatility is unclear. We show how a spurious correlation between these variables can indeed emerge in our linear model due to time aggregation of high-frequency data. Finally, we observe short-term positive autocorrelation in order flow imbalance and discuss an application of this variable as a measure of adverse selection in limit order executions. Our results suggest that monitoring recent order flow can improve the quality of order executions in practice. In the second part of the thesis we study the problem of optimal order placement in a fragmented limit order market. To execute a trade, market participants can submit limit orders or market orders across various exchanges where a stock is traded. In practice these decisions are influenced by sizes of order queues and by statistical properties of order flows in each limit order book, and also by rebates that exchanges pay for limit order submissions. We present a realistic model of limit order executions and formalize the search for an optimal order placement policy as a convex optimization problem. Based on this formulation we study how various factors determine an investor's order placement decisions. In the case when a single exchange is used for order execution, we derive an explicit formula for the optimal limit and market order quantities. Our solution shows that the optimal split between market and limit orders largely depends on one's tolerance to execution risk. Market orders help to alleviate this risk because they execute with certainty. Correspondingly, we find that the optimal order allocation shifts to these more expensive orders when the execution risk is of primary concern, for example when the intended trade quantity is large or when it is costly to catch up on the quantity after limit order execution fails.
We also characterize the optimal solution in the general case of simultaneous order placement on multiple exchanges, and show that it sets execution shortfall probabilities to specific threshold values computed from model parameters. Finally, we propose a non-parametric stochastic algorithm that computes an optimal solution by resampling historical data and does not require specifying order flow distributions. A numerical implementation of this algorithm is used to study the sensitivity of an optimal solution to changes in model parameters. Our numerical results show that order placement optimization can bring a substantial reduction in trading costs, especially for small orders and in cases when order flows are relatively uncorrelated across trading venues. The order placement optimization framework developed in this thesis can also be used to quantify the costs and benefits of financial market fragmentation from the point of view of an individual investor. For instance, we find that a positive correlation between order flows, which is empirically observed in the fragmented U.S. equity market, increases the costs of trading. As the correlation increases it may become more expensive to trade in a fragmented market than it is in a consolidated market. In the third part of the thesis we analyze the dynamics of limit order queues at the best bid or ask of an exchange. These queues consist of orders submitted by a variety of market participants, yet existing order book models commonly assume that all orders have similar dynamics. In practice, some orders are submitted by trade execution algorithms in an attempt to buy or sell a certain quantity of assets under time constraints, and these orders are canceled if their realized waiting time exceeds a patience threshold. In contrast, high-frequency traders submit and cancel orders depending on the order book state, and their orders are not driven by patience. The interaction between these two order types within a single FIFO queue leads to bursts of order cancelations in small queues and to anomalously long waiting times in large queues. We analyze a fluid model that describes the evolution of large order queues in liquid markets, taking into account the heterogeneity between order submission and cancelation strategies of different traders. Our results show that after a finite initial time interval, the queue reaches a specific structure where all orders from high-frequency traders stay in the queue until execution but most orders from execution algorithms exceed their patience thresholds and are canceled. This "order crowding" effect has been previously noted by participants in highly liquid stock and futures markets and was attributed to the large participation of high-frequency traders. In our model, their presence creates an additional workload, which increases queue waiting times for new orders. Our analysis of the fluid model leads to waiting time estimates that take into account the distribution of order types in a queue. These estimates are tested against a large dataset of realized limit order waiting times collected by a U.S. equity brokerage firm. The queue composition at the moment of order submission noticeably affects an order's waiting time, and we find that assuming a single order type for all orders in the queue leads to unrealistic results. Estimates that instead assume a mix of heterogeneous orders in the queue are closer to the empirical data.
Our model for a limit order queue with heterogeneous order types also appears to be interesting from a methodological point of view. It introduces a new type of behavior in a queueing system where one class of jobs has state-dependent dynamics, while others are driven by patience. Although this model is motivated by the analysis of limit order books, it may find applications in studying other service systems with state-dependent abandonments.
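As a small illustration of the order flow imbalance variable from the first part of the abstract, the sketch below computes OFI from consecutive best-quote updates under one common sign convention (bid-side changes count toward demand, ask-side changes toward supply); the tiny example quote arrays are made up for illustration.

```python
# Sketch of an order flow imbalance (OFI) computation over best-quote updates.
# Sign conventions and example data are assumptions for illustration.
import numpy as np

def order_flow_imbalance(bid_px, bid_sz, ask_px, ask_sz):
    """OFI contribution between consecutive best-quote snapshots."""
    e = np.zeros(len(bid_px) - 1)
    for t in range(1, len(bid_px)):
        # demand side: bid improvements add size, bid retreats remove it
        if bid_px[t] > bid_px[t - 1]:
            d = bid_sz[t]
        elif bid_px[t] < bid_px[t - 1]:
            d = -bid_sz[t - 1]
        else:
            d = bid_sz[t] - bid_sz[t - 1]
        # supply side: ask improvements add size, ask retreats remove it
        if ask_px[t] < ask_px[t - 1]:
            s = ask_sz[t]
        elif ask_px[t] > ask_px[t - 1]:
            s = -ask_sz[t - 1]
        else:
            s = ask_sz[t] - ask_sz[t - 1]
        e[t - 1] = d - s
    return e

bid_px = np.array([100.00, 100.00, 100.01, 100.01])
bid_sz = np.array([500,    700,    300,    300  ])
ask_px = np.array([100.02, 100.02, 100.02, 100.03])
ask_sz = np.array([400,    400,    250,    600  ])
ofi = order_flow_imbalance(bid_px, bid_sz, ask_px, ask_sz)
print("OFI per update:", ofi, " cumulative imbalance:", ofi.sum())
# The abstract's linear relation is then estimated as
#   delta_mid_price ~ ofi / market_depth   (slope = price impact coefficient)
```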
374

Approximate dynamic programming for large scale systems

Desai, Vijay V. January 2012 (has links)
Sequential decision making under uncertainty is at the heart of a wide variety of practical problems. These problems can be cast as dynamic programs and the optimal value function can be computed by solving Bellman's equation. However, this approach is limited in its applicability. As the number of state variables increases, the state space size grows exponentially, a phenomenon known as the curse of dimensionality, rendering the standard dynamic programming approach impractical. An effective way of addressing the curse of dimensionality is through parameterized value function approximation. Such an approximation is determined by a relatively small number of parameters and serves as an estimate of the optimal value function. But in order for this approach to be effective, we need Approximate Dynamic Programming (ADP) algorithms that can deliver a 'good' approximation to the optimal value function; such an approximation can then be used to derive policies for effective decision-making. From a practical standpoint, in order to assess the effectiveness of such an approximation, there is also a need for methods that give a sense of the suboptimality of a policy. This thesis is an attempt to address both of these issues. First, we introduce a new ADP algorithm based on linear programming, to compute value function approximations. LP approaches to approximate DP have typically relied on a natural 'projection' of a well-studied linear program for exact dynamic programming. Such programs restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program -- the 'smoothed approximate linear program' -- is distinct from such approaches and relaxes the restriction to lower bounding approximations in an appropriate fashion while remaining computationally tractable. The resulting program enjoys strong approximation guarantees and is shown to perform well in numerical experiments with the game of Tetris and a queueing network control problem. Next, we consider optimal stopping problems with applications to pricing of high-dimensional American options. We introduce the pathwise optimization (PO) method: a new convex optimization procedure to produce upper and lower bounds on the optimal value (the 'price') of high-dimensional optimal stopping problems. The PO method builds on a dual characterization of optimal stopping problems as optimization problems over the space of martingales, which we dub the martingale duality approach. We demonstrate via numerical experiments that the PO method produces upper bounds and lower bounds (via suboptimal exercise policies) of a quality comparable to state-of-the-art approaches. Further, we develop an approximation theory relevant to martingale duality approaches in general and the PO method in particular. Finally, we consider a broad class of MDPs and introduce a new tractable method for computing bounds by considering information relaxations and introducing penalties. The method delivers tight bounds by identifying the best penalty function among a parameterized class of penalty functions. We implement our method on a high-dimensional financial application, namely optimal execution, and demonstrate the practical value of the method vis-a-vis competing methods available in the literature. In addition, we provide theory to show that bounds generated by our method are provably tighter than some of the other available approaches.
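The LP view of approximate dynamic programming described above can be shown on a toy problem. The sketch below solves an approximate linear program with slack variables in the spirit of the smoothed relaxation, on a small random MDP in a reward-maximization convention; the MDP, the polynomial basis functions, and the slack budget are all assumptions of the example rather than the formulation or benchmarks used in the thesis.

```python
# Toy approximate LP for dynamic programming with a slack budget that relaxes
# the Bellman constraints (budget 0 recovers the classical, non-smoothed ALP).
# Random MDP, basis, and budget values are assumptions for the sketch.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_states, n_actions, gamma = 20, 3, 0.95
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P[x, a, :] transitions
r = rng.uniform(0.0, 1.0, size=(n_states, n_actions))               # one-step rewards
Phi = np.column_stack([np.ones(n_states),
                       np.arange(n_states),
                       np.arange(n_states) ** 2])                   # K = 3 basis functions
mu = np.full(n_states, 1.0 / n_states)                              # state-relevance weights
K = Phi.shape[1]

def approximate_lp(slack_budget):
    """min mu'(Phi w)  s.t.  Phi w + s >= r + gamma*P*(Phi w),  s >= 0,  mu's <= budget."""
    c = np.concatenate([Phi.T @ mu, np.zeros(n_states)])             # only the w-part is costed
    A_ub, b_ub = [], []
    for x in range(n_states):
        for a in range(n_actions):
            row_w = -(Phi[x] - gamma * (P[x, a] @ Phi))              # Bellman-type constraint
            row_s = np.zeros(n_states); row_s[x] = -1.0
            A_ub.append(np.concatenate([row_w, row_s]))
            b_ub.append(-r[x, a])
    A_ub.append(np.concatenate([np.zeros(K), mu]))                   # slack budget constraint
    b_ub.append(slack_budget)
    bounds = [(None, None)] * K + [(0, None)] * n_states
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[:K], res.fun

for budget in (0.0, 0.5):
    w, obj = approximate_lp(budget)
    print(f"slack budget={budget}:  weights={np.round(w, 3)}  objective mu'(Phi w)={obj:.3f}")
```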
375

Two Papers of Financial Engineering Relating to the Risk of the 2007--2008 Financial Crisis

Zhong, Haowen January 2013 (has links)
This dissertation studies two financial engineering and econometrics problems relating to two facets of the 2007-2008 financial crisis. In the first part, we construct the Spatial Capital Asset Pricing Model and the Spatial Arbitrage Pricing Theory to characterize the risk premiums of futures contracts on real estate assets. We also provide rigorous econometric analysis of the new models. Empirical study shows that there exists significant spatial interaction among the S&P/Case-Shiller Home Price Index futures returns. In the second part, we perform empirical studies on the jump risk in the equity market. We propose a simple affine jump-diffusion model for equity returns, which seems to outperform existing ones (including models with Lévy jumps) during the financial crisis and is at least as good during normal times, if model complexity is taken into account. In comparing the models, we make two empirical findings: (i) jump intensity seems to increase significantly during the financial crisis, while on average there appears to be little change in jump sizes; (ii) a finite number of large jumps in returns over any finite time horizon seems to fit the data well both before and after the crisis.
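To make the jump-intensity finding concrete, the sketch below simulates daily returns from a generic jump-diffusion (Gaussian diffusion plus compound-Poisson jumps) and compares tail heaviness when only the jump intensity is raised; the parameterization is an assumption for illustration, not the affine model estimated in the thesis.

```python
# Illustrative Euler simulation of jump-diffusion returns: raising the jump
# intensity while keeping the jump-size distribution fixed fattens the tails.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

def simulate_returns(n_days, mu=0.0003, sigma=0.01, lam=0.05,
                     jump_mean=-0.02, jump_std=0.03):
    """Daily log-returns: Gaussian diffusion plus Poisson(lam) jumps per day."""
    diffusion = mu + sigma * rng.standard_normal(n_days)
    n_jumps = rng.poisson(lam, size=n_days)                # lam = jump intensity
    jumps = np.array([rng.normal(jump_mean, jump_std, k).sum() for k in n_jumps])
    return diffusion + jumps

def kurtosis(x):
    return float(((x - x.mean()) ** 4).mean() / x.var() ** 2)

calm   = simulate_returns(5000, lam=0.02)                  # low jump intensity
crisis = simulate_returns(5000, lam=0.20)                  # higher intensity, same jump sizes
print("kurtosis, calm  :", round(kurtosis(calm), 2))
print("kurtosis, crisis:", round(kurtosis(crisis), 2))
```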
376

Pricing, Trading and Clearing of Defaultable Claims Subject to Counterparty Risk

Kim, Jinbeom January 2014 (has links)
The recent financial crisis and subsequent regulatory changes in over-the-counter (OTC) markets have given rise to new valuation and trading frameworks for defaultable claims for investors and dealer banks. More OTC market participants have adopted the new market conventions that incorporate counterparty risk into the valuation of OTC derivatives. In addition, the use of collateral has become common for most bilateral trades to reduce counterparty default risk. On the other hand, to increase transparency and market stability, U.S. and European regulators have required mandatory clearing of defaultable derivatives through central counterparties. This dissertation tackles these changes and analyzes their impacts on the pricing, trading and clearing of defaultable claims. In the first part of the thesis, we study a valuation framework for financial contracts subject to reference and counterparty default risks with a collateralization requirement. We propose a fixed point approach to analyze the mark-to-market contract value with counterparty risk provision, and show that it is a unique bounded and continuous fixed point via contraction mapping. This leads us to develop an accurate iterative numerical scheme for valuation. Specifically, we solve a sequence of linear inhomogeneous partial differential equations, whose solutions converge to the fixed point price function. We apply our methodology to compute the bid and ask prices for both defaultable equity and fixed-income derivatives, and illustrate the non-trivial effects of counterparty risk, collateralization ratio and liquidation convention on the bid-ask prices. In the second part, we study the problem of pricing and trading of defaultable claims among investors with heterogeneous risk preferences and market views. Based on the utility-indifference pricing methodology, we construct the bid-ask spreads for risk-averse buyers and sellers, and show that the spreads widen as risk aversion or trading volume increases. Moreover, we analyze the buyer's optimal static trading position under various market settings, including (i) when the market pricing rule is linear, and (ii) when the counterparty -- single or multiple sellers -- may have different nonlinear pricing rules generated by risk aversion and belief heterogeneity. For defaultable bonds and credit default swaps, we provide explicit formulas for the optimal trading positions, and examine the combined effect of heterogeneous risk aversions and beliefs. In particular, we find that belief heterogeneity, rather than the difference in risk aversion, is crucial to trigger a trade. Finally, we study the impact of central clearing on the credit default swap (CDS) market. Central clearing of CDS through a central counterparty (CCP) has been proposed as a tool for mitigating systemic risk and counterparty risk in the CDS market. The design of CCPs involves the implementation of margin requirements and a default fund, for which various designs have been proposed. We propose a mathematical model to quantify the impact of the design of the CCP on the incentive for clearing and analyze the market equilibrium. We determine the minimum number of clearing participants required so that they have an incentive to clear part of their exposures. Furthermore, we analyze the equilibrium CDS positions and their dependence on the initial margin, risk aversion, and counterparty risk in the inter-dealer market.
Our numerical results show that minimizing the initial margin maximizes the total clearing positions as well as the CCP's revenue.
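The fixed-point idea in the first part of the abstract can be sketched in a stylized one-period setting: the contract value appears inside its own counterparty-risk adjustment, so the price is obtained by iterating a contraction mapping until convergence. The payoff distribution, default probabilities, and recovery rates below are assumptions of the sketch, not the thesis's PDE-based scheme.

```python
# Stylized fixed-point valuation with a bilateral counterparty-risk adjustment;
# the pricing map is a contraction, so simple iteration converges.
# Payoff samples, default probabilities, and recoveries are assumptions.
import numpy as np

rng = np.random.default_rng(3)
discounted_payoff = rng.normal(5.0, 20.0, size=200_000)    # clean discounted payoff samples
p_cpty, p_self = 0.03, 0.02                                # default prob.: counterparty / investor
R_cpty, R_self = 0.4, 0.4                                  # recovery rates

def pricing_map(v):
    """One application of the valuation operator with counterparty-risk provision."""
    clean = discounted_payoff.mean()
    cva = p_cpty * (1 - R_cpty) * np.maximum(v, 0.0)        # lose part of positive exposure
    dva = p_self * (1 - R_self) * np.maximum(-v, 0.0)       # gain on own default
    return clean - cva + dva

v = 0.0
for k in range(50):
    v_new = pricing_map(v)
    if abs(v_new - v) < 1e-10:
        v = v_new
        break
    v = v_new
print(f"converged after {k} iterations; value with counterparty risk = {v:.4f}")
```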
377

Design and Evaluation of Procurement Combinatorial Auctions

Kim, Sang Won January 2014 (has links)
The main advantage of a procurement combinatorial auction (CA) is that it allows suppliers to express cost synergies through package bids. However, bidders can also strategically take advantage of this flexibility, by discounting package bids and "inflating" bid prices for single items, even in the absence of cost synergies; the latter behavior can hurt the performance of the auction. It is an empirical question whether allowing package bids and running a CA improves performance in a given setting. Analyzing the actual performance of a CA requires evaluating cost efficiency and the margins of the winning bidders, which is typically private and sensitive information of the bidders. Thus motivated, in Chapter 2 of this dissertation, we develop a structural estimation approach for large-scale first-price CAs to estimate the firms' cost structure using the bid data. To overcome the computational difficulties arising from the large number of bids observed in large-scale CAs, we propose a novel simplified model of bidders' behavior based on pricing package characteristics. Overall, this work develops the first practical tool to empirically evaluate the performance of large-scale first-price CAs commonly used in procurement settings. In Chapter 3, we apply our method to the Chilean school meals auction, in which the government procures half a billion dollars' worth of meal services every year and bidders submit thousands of package bids. Our estimates suggest that bidders' cost synergies are economically significant in this application (~5%), and the current CA mechanism achieves high allocative efficiency (~98%) and reasonable margins for the bidders (~5%). We believe this is the first work in the literature that empirically shows that a CA performs well in a real-world application. We also conduct a counterfactual analysis to study the performance of the Vickrey-Clarke-Groves (VCG) mechanism in our empirical application. While it is well known in the literature that the VCG mechanism achieves allocative efficiency, its application in practice is at best rare due to several potential weaknesses such as prohibitively high procurement costs. Interestingly, contrary to recent theoretical work, the results show that the VCG mechanism achieves reasonable procurement costs in our application. Motivated by this observation, Chapter 4 addresses this apparent paradox between theory and our empirical findings. Focusing on the high procurement cost issue, we study the impact of competition on the revenue performance of the VCG mechanism using an asymptotic analysis. We believe the findings in this chapter add useful insights for the practical usage of the VCG mechanism.
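As a toy illustration of the winner-determination and VCG computations discussed above, the sketch below brute-forces the cost-minimizing allocation of package bids in a tiny procurement auction and computes each winner's VCG payment; the bidders, packages, and prices are made up, and real CAs of course require far more scalable winner-determination methods.

```python
# Brute-force winner determination and VCG payments for a tiny procurement CA
# (buyer minimizes cost; each item awarded exactly once). Example data is made up.
from itertools import combinations

items = frozenset({"A", "B", "C"})
bids = [  # (bidder, package, price); package bids may embed cost synergies
    ("s1", frozenset({"A"}), 10), ("s1", frozenset({"B"}), 10),
    ("s1", frozenset({"A", "B"}), 17),                          # package discount
    ("s2", frozenset({"B"}), 9),  ("s2", frozenset({"C"}), 11),
    ("s3", frozenset({"C"}), 12), ("s3", frozenset({"A", "B", "C"}), 27),
]

def cheapest_cover(bid_list):
    """Minimum-cost combination of bids awarding each item exactly once (brute force)."""
    best_cost, best_combo = float("inf"), None
    for k in range(1, len(bid_list) + 1):
        for combo in combinations(bid_list, k):
            awarded = [it for _, pkg, _ in combo for it in pkg]
            if len(awarded) == len(items) and set(awarded) == items:
                cost = sum(p for _, _, p in combo)
                if cost < best_cost:
                    best_cost, best_combo = cost, combo
    return best_cost, best_combo

total_cost, allocation = cheapest_cover(bids)
print("efficient allocation:", [(b, sorted(pkg), p) for b, pkg, p in allocation],
      "cost =", total_cost)
for w in {b for b, _, _ in allocation}:
    cost_without_w, _ = cheapest_cover([b for b in bids if b[0] != w])
    others_in_alloc = sum(p for b, _, p in allocation if b != w)
    print(f"VCG payment to {w}: {cost_without_w - others_in_alloc}")
```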
378

From Continuous to Discrete: Studies on Continuity Corrections and Monte Carlo Simulation with Applications to Barrier Options and American Options

Cao, Menghui January 2014 (has links)
This dissertation 1) shows continuity corrections for first passage probabilities of the Brownian bridge and for barrier joint probabilities, which are applied to the pricing of two-dimensional barrier and partial barrier options, and 2) introduces new variance reduction techniques and computational improvements to Monte Carlo methods for pricing American options. The joint distribution of Brownian motion and its first passage time has found applications in many areas, including sequential analysis, pricing of barrier options, and credit risk modeling. There are, however, no simple closed-form solutions for these joint probabilities in a discrete-time setting. Chapter 2 shows that discrete two-dimensional barrier and partial barrier joint probabilities can be approximated by their continuous-time counterparts with remarkable accuracy after shifting the barrier away from the underlying by a factor. We achieve this through a uniform continuity correction theorem on the first passage probabilities for the Brownian bridge, extending relevant results in Siegmund (1985a). The continuity corrections are applied to the pricing of two-dimensional barrier and partial barrier options, extending the results in Broadie, Glasserman & Kou (1997) on one-dimensional barrier options. One interesting aspect is that for type B partial barrier options, the barrier correction cannot be applied throughout one pricing formula, but only to some barrier values, leaving the others unchanged; the direction of the correction may also vary within one formula. In Chapter 3 we introduce new variance reduction techniques and computational improvements to Monte Carlo methods for pricing American-style options. For simulation algorithms that compute lower bounds of American option values, we apply martingale control variates and introduce the local policy enhancement, which adopts a local simulation to improve the exercise policy. For duality-based upper bound methods, specifically the primal-dual simulation algorithm (Andersen and Broadie 2004), we have developed two improvements. One is sub-optimality checking, which saves unnecessary computation when it is sub-optimal to exercise the option along the sample path; the second is boundary distance grouping, which reduces computational time by skipping computation on selected sample paths based on the distance to the exercise boundary. Numerical results are given for single-asset Bermudan options, moving window Asian options and Bermudan max options. In some examples the computational time is reduced by a factor of several hundred, while the confidence interval of the true option value is considerably tighter than before the improvements.
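The one-dimensional barrier correction of Broadie, Glasserman & Kou (1997) that Chapter 2 extends can be illustrated directly: a discretely monitored down-and-out call is approximated by the continuous-barrier formula after shifting the barrier away from the underlying by exp(-0.5826·σ·√Δt). The market parameters in the sketch below are illustrative, and the continuous-barrier formula used is the standard no-dividend closed form for H ≤ K.

```python
# Sketch of the one-dimensional barrier continuity correction: compare a Monte
# Carlo price with discrete monitoring against the continuous-barrier formula
# with and without the barrier shift. Parameters are illustrative assumptions.
import numpy as np
from math import log, sqrt, exp
from scipy.stats import norm

S0, K, H, r, sigma, T, m = 100.0, 100.0, 95.0, 0.05, 0.2, 1.0, 50
BETA = 0.5826                                   # -zeta(1/2)/sqrt(2*pi)

def bs_call(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def down_and_out_call(S, K, H, r, sigma, T):
    """Continuous-barrier down-and-out call (H <= K), no dividends."""
    lam = (r + 0.5 * sigma**2) / sigma**2
    y = log(H**2 / (S * K)) / (sigma * sqrt(T)) + lam * sigma * sqrt(T)
    c_di = S * (H / S)**(2 * lam) * norm.cdf(y) \
         - K * exp(-r * T) * (H / S)**(2 * lam - 2) * norm.cdf(y - sigma * sqrt(T))
    return bs_call(S, K, r, sigma, T) - c_di

# Monte Carlo price of the *discretely* monitored option (m monitoring dates).
rng = np.random.default_rng(0)
dt, n_paths = T / m, 100_000
z = rng.standard_normal((n_paths, m))
logS = log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * z, axis=1)
alive = (np.exp(logS) > H).all(axis=1)
payoff = np.where(alive, np.maximum(np.exp(logS[:, -1]) - K, 0.0), 0.0)
mc_discrete = exp(-r * T) * payoff.mean()

H_adj = H * exp(-BETA * sigma * sqrt(dt))       # shift barrier away from the underlying
print("MC, discrete monitoring       :", round(mc_discrete, 4))
print("continuous formula, barrier H :", round(down_and_out_call(S0, K, H, r, sigma, T), 4))
print("continuous formula, shifted H :", round(down_and_out_call(S0, K, H_adj, r, sigma, T), 4))
```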
379

Infrastructure Scaling and Pricing

Gocmen, Fikret Caner January 2014 (has links)
Infrastructure systems play a crucial role in our daily lives. They include, but are not limited to, the highways we take while we commute to work, the stadiums where we watch games, and the power plants that provide the electricity we consume in our homes. In this thesis we study infrastructure systems from several different perspectives with a focus on pricing and scalability. The pricing aspect of our research focuses on two industries: toll roads and sports events. Afterwards, we analyze the potential impact of small modular infrastructure on a wide variety of industries. We start by analyzing the problem of determining the tolls that maximize revenue for a managed lane operator -- that is, an operator who can charge a toll for the use of some lanes on a highway while a number of parallel lanes remain free to use. Managing toll lanes for profit is becoming increasingly common as private contractors agree to build additional lane capacity in return for the opportunity to retain toll revenue. We start by modeling the lanes as queues and show that the dynamic revenue-maximizing toll is always greater than or equal to the myopic toll that maximizes expected revenue from each arriving vehicle. Numerical examples show that a dynamic revenue-maximizing toll scheme can generate significantly more expected revenue than either a myopic or a static toll scheme. An important implication is that the revenue-maximizing fee depends not only on the current state but also on anticipated future arrivals. We discuss the managerial implications and present several numerical examples. Next, we relax the queueing assumption and model traffic propagation on a highway realistically by using simulation. We devise a framework that can be used to obtain revenue-maximizing tolls in such a context. We calibrate our framework by using data from the SR-91 Highway in Orange County, CA and explore different tolling schemes. Our numerical experiments suggest that simple dynamic tolling mechanisms can lead to substantial revenue improvements over myopic and time-of-use tolling policies. In the third part, we analyze the revenue management of consumer options for tournaments. Sporting event managers typically only offer advance tickets, which guarantee a seat at a future sporting event in return for an upfront payment. Some event managers and ticket resellers have started to offer call options under which a customer can pay a small amount now for the guaranteed option to attend a future sporting event by paying an additional amount later. We consider the case of tournament options where the event manager sells team-specific options for a tournament final, such as the Super Bowl, before the finalists are determined. These options guarantee a final game ticket to the bearer if his team advances to the finals. We develop an approach by which an event manager can determine the revenue-maximizing prices and amounts of advance tickets and options to sell for a tournament final. Then, for a specific tournament structure, we show that offering options is guaranteed to increase expected revenue for the event. We also establish bounds for the revenue improvement and show that introducing options can increase social welfare. We conclude by presenting a numerical application of our approach. Finally, we argue that advances made in automation, communication and manufacturing portend a dramatic reversal of the "bigger is better" approach to cost reductions prevalent in many basic infrastructure industries, e.g., transportation, electric power generation and raw material processing. We show that the traditional reductions in capital costs achieved by scaling up in size are generally matched by learning effects in the mass-production process when scaling up in numbers instead. In addition, using the U.S. electricity generation sector as a case study, we argue that the primary operating cost advantage of large unit scale is reduced labor, which can be eliminated by employing low-cost automation technologies. Finally, we argue that locational, operational and financial flexibilities that accompany smaller unit scale can reduce investment and operating costs even further. All these factors combined suggest that, with current technology, economies of numbers may well dominate economies of unit scale.
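The queueing formulation in the first part can be illustrated with a stylized dynamic-toll example: the managed lane is treated as a single queue, the operator posts a toll each period, and value iteration yields the revenue-maximizing state-dependent toll. The arrival and service probabilities and the willingness-to-pay model below are assumptions of the sketch, which nonetheless reproduces, in this toy setting, the property that the dynamic toll is at least the myopic toll.

```python
# Stylized dynamic tolling of a managed lane modeled as a single queue.
# All rates and the willingness-to-pay model are illustrative assumptions.
import numpy as np

lam, mu, gamma = 0.4, 0.35, 0.98           # arrival prob., service prob., discount factor
x_max = 30                                  # cap on toll-lane queue length
tolls = np.linspace(0.0, 20.0, 201)         # candidate toll levels

def join_probs(x):
    """P(an arriving driver pays each toll | x cars already in the toll lane)."""
    if x >= x_max:
        return np.zeros_like(tolls)         # lane full: nobody can join
    wtp_mean = 10.0 / (1.0 + 0.2 * x)       # time saved (and willingness to pay) shrinks with x
    return np.exp(-tolls / wtp_mean)

Q = np.array([join_probs(x) for x in range(x_max + 1)])

# Value iteration for the revenue-maximizing state-dependent toll.
V = np.zeros(x_max + 1)
for _ in range(5000):
    V_new = np.empty_like(V)
    for x in range(x_max + 1):
        up, stay = V[min(x + 1, x_max)], V[x]
        down = V[x - 1] if x > 0 else V[0]
        serv = mu if x > 0 else 0.0
        vals = (lam * (Q[x] * (tolls + gamma * up) + (1 - Q[x]) * gamma * stay)
                + serv * gamma * down + (1 - lam - serv) * gamma * stay)
        V_new[x] = vals.max()
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

# Compare the myopic toll (max expected revenue per arrival) with the dynamic toll.
for x in (0, 5, 10):
    myopic = tolls[np.argmax(tolls * Q[x])]
    dynamic = tolls[np.argmax(Q[x] * (tolls + gamma * V[min(x + 1, x_max)])
                              + (1 - Q[x]) * gamma * V[x])]
    print(f"queue length {x:2d}: myopic toll = {myopic:.2f}, dynamic toll = {dynamic:.2f}")
```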
380

Dynamic Markets with Many Agents: Applications in Social Learning and Competition

Ifrach, Bar January 2012 (has links)
This thesis considers two applications of dynamic economic models with many agents. The dynamics of the economic systems under consideration are intractable since they depend on the (stochastic) outcomes of the agents' actions. However, as the number of agents grows large, approximations to the aggregate behavior of agents come to light. I use this observation to characterize market dynamics and subsequently to study these applications. Chapter 2 studies the problem of devising a pricing strategy to maximize the revenues extracted from a stream of consumers with heterogeneous preferences. Consumers, however, do not know the quality of the product or service and engage in a social learning process to learn it. Using a mean-field approximation, the transient behavior of this social learning process is uncovered and the pricing problem is analyzed. Chapter 3 builds on the previous chapter by analyzing features of this social learning process with finitely many agents. In addition, the chapter generalizes the information structure to include cases where consumers take into account the order in which reviews were submitted. Chapter 4 considers a model of dynamic oligopoly competition in the spirit of models that are widespread in industrial organization. The computation of equilibrium strategies for such models suffers from the curse of dimensionality when the number of agents (firms) is large. For a market structure with a few dominant firms and many fringe firms, I study an alternative equilibrium concept in which fringe firms are represented succinctly by a low-dimensional set of statistics. The chapter explores how this new equilibrium concept expands the class of dynamic oligopoly models that can be studied computationally in empirical work.
