421. First Order Methods for Large-Scale Sparse Optimization. Aybat, Necdet Serhat. January 2011.
In today's digital world, improvements in acquisition and storage technology are allowing us to acquire more accurate and finer application-specific data, whether it be tick-by-tick price data from the stock market or frame-by-frame high-resolution images and videos from surveillance systems, remote sensing satellites and biomedical imaging systems. Many important large-scale applications can be modeled as optimization problems with millions of decision variables. Very often, the desired solution is sparse in some form, either because the optimal solution is indeed sparse, or because a sparse solution has some desirable properties. Sparse and low-rank solutions to large-scale optimization problems are typically obtained by regularizing the objective function with L1 and nuclear norms, respectively. Practical instances of these problems are very high dimensional (~ million variables) and typically have dense and ill-conditioned data matrices. Therefore, interior-point methods are ill-suited for solving these problems. The large scale of these problems forces one to use so-called first-order methods that only use gradient information at each iterate. These methods are efficient for problems with a "simple" feasible set such that Euclidean projections onto the set can be computed very efficiently, e.g., the positive orthant, the n-dimensional hypercube, the simplex, and the Euclidean ball. When the feasible set is "simple", the subproblems used to compute the iterates can be solved efficiently. Unfortunately, most applications do not have "simple" feasible sets. A commonly used technique to handle general constraints is to relax them so that the resulting problem has only "simple" constraints, and then to solve a single penalty or Lagrangian problem. However, these methods generally do not guarantee convergence to feasibility. The focus of this thesis is on developing new fast first-order iterative algorithms for computing sparse and low-rank solutions to large-scale optimization problems with very mild restrictions on the feasible set: we allow linear equalities, norm-ball and conic inequalities, and also certain non-smooth convex inequalities to define the constraint set. The proposed algorithms guarantee that the sequence of iterates converges to an optimal feasible solution of the original problem, and each subproblem is an optimization problem with a "simple" feasible set. In addition, for any eps > 0, by relaxing the feasibility requirement of each iteration, the proposed algorithms can compute an eps-optimal and eps-feasible solution within O(log(1/eps)) iterations, which requires O(1/eps) basic operations in the worst case. Algorithm parameters do not depend on eps > 0. Thus, these new methods compute iterates arbitrarily close to feasibility and optimality as they continue to run. Moreover, the computational complexity of each basic operation for these new algorithms is the same as that of existing first-order algorithms running on "simple" feasible sets. Our numerical studies showed that only O(log(1/eps)) basic operations, as opposed to the O(1/eps) worst-case theoretical bound, are needed to obtain eps-feasible and eps-optimal solutions. We have implemented these new first-order methods for the following problem classes: Basis Pursuit (BP) in compressed sensing, Matrix Rank Minimization, Principal Component Pursuit (PCP) and Stable Principal Component Pursuit (SPCP) in principal component analysis. These problems have applications in signal and image processing, video surveillance, face recognition, latent semantic indexing, and ranking and collaborative filtering. To the best of our knowledge, this is the first algorithm for the SPCP problem with O(1/eps) iteration complexity and a per-iteration cost equal to that of a singular value decomposition.
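Purely as an illustration of the class of methods discussed above, and not of the thesis's own algorithms, the sketch below shows a basic proximal-gradient (ISTA) iteration for an L1-regularized least-squares problem; the soft-thresholding step is the kind of cheap "simple" subproblem that first-order methods rely on. The problem sizes and parameter values are arbitrary.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 (the "simple" subproblem, solvable in closed form).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Proximal-gradient (ISTA) iterations for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth least-squares term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny example: recover a sparse vector from a few noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, b, lam=0.1)
```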
422. Supply Chain Management: Supplier Financing Schemes and Inventory Strategies. Wang, Min. January 2011.
This dissertation addresses a few fundamental questions on the interface between supplier financing schemes and inventory management. Traditionally, retailers finance their inventories through an independent financing institution or by drawing from their own cash reserves, without any supplier involvement (Independent Financing). However, suppliers may reduce their buyers' costs and stimulate sales and associated revenues and profits by either (i) adopting the financing function themselves (Trade Credit), or (ii) subsidizing the inventory costs (Inventory Subsidies). In the first part (Chapter 2) we analyze and compare the equilibrium performance of supply chains under these three basic financing schemes. The objective is to compare the equilibrium profits of the individual chain members, the aggregate supply chain profits, the equilibrium wholesale price, the expected sales volumes and the average inventory levels under the three financing options, and thus provide important insights for the selection and implementation of supply chain financing mechanisms. Several of the financing schemes introduce a new type of inventory control problem for the retailers in response to terms specified by their suppliers. In Chapter 3 we therefore consider the inventory management problem of a firm which incurs inventory carrying costs with a general shelf-age-dependent structure and, even more generally, that of a firm with shelf-age and delay-dependent inventory and backlogging costs. Beyond identifying the structure of optimal replenishment strategies and corresponding algorithms to compute them, it is often important to understand how changes in various primitives of the inventory model affect the optimal policy parameters and performance measures. In spite of a voluminous literature spanning more than fifty years, very little is known about this area. In Chapter 4, we therefore study monotonicity properties of stochastic inventory systems governed by an (r, q) or (r, nq) policy and apply our general theorems both to standard inventory models and to those with general shelf-age and delay-dependent inventory costs.
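As a side note on the inventory policies referenced above, the snippet below sketches the standard (r, nq) replenishment rule: when the inventory position falls to or below the reorder point r, the order is the smallest multiple of the base lot size q that raises the position above r. The numbers are illustrative only and do not come from the dissertation.

```python
def rnq_order_quantity(inventory_position, r, q):
    # Under an (r, nq) policy, when the inventory position falls to or below r,
    # order the smallest integer multiple n*q that raises it strictly above r.
    if inventory_position > r:
        return 0
    shortfall = r - inventory_position
    n = shortfall // q + 1                 # smallest n with inventory_position + n*q > r
    return n * q

# Example: reorder point r = 20, base lot size q = 15.
print(rnq_order_quantity(inventory_position=18, r=20, q=15))   # -> 15
print(rnq_order_quantity(inventory_position=-40, r=20, q=15))  # -> 75
```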
423. Multiproduct Pricing Management and Design of New Service Products. Wang, Ruxian. January 2012.
In this thesis, we study price optimization and competition for multiple differentiated substitutable products under the general Nested Logit model, and also consider the design and pricing of new service products, e.g., flexible warranties and refundable warranties, under customers' strategic claim behavior. Chapter 2 considers firms that sell multiple differentiated substitutable products and customers whose purchase behavior follows the Nested Logit model, of which the Multinomial Logit model is a special case. In the Nested Logit model, customers make product selection decisions sequentially: they first select a class or a nest of products and subsequently choose a product within the selected class. We consider the general Nested Logit model with product-differentiated price coefficients and general nest-heterogeneous degrees. We show that the adjusted markup, defined as price minus cost minus the reciprocal of the price coefficient, is constant across all the products in each nest. When optimizing multiple nests of products, the adjusted nested markup is also constant within a nest. Using this result, the multi-product optimization problem can be reduced to a one-dimensional problem over a bounded interval, which is easy to solve. We also use this result to simplify the oligopolistic price competition and characterize the Nash equilibrium. Furthermore, we investigate its application to dynamic pricing and revenue management. In Chapter 3, we investigate the flexible monthly warranty, which offers flexibility to customers and allows them to cancel at any time without penalty. Frequent technological innovations and price declines severely affect sales of extended warranties as product replacement upon failure becomes an increasingly attractive alternative. To increase sales and profitability, we propose offering flexible-duration extended warranties. These warranties can appeal to customers who are uncertain about how long they will keep the product as well as to customers who are uncertain about the product's reliability. Flexibility may be added to existing services in the form of monthly billing with month-by-month commitments, or by making existing warranties easier to cancel, with pro-rated refunds. This thesis studies flexible warranties from the perspectives of both the customer and the provider. We present a model of the customer's optimal coverage decisions under the objective of minimizing expected support costs over a random planning horizon. We show that under some mild conditions the customer's optimal coverage policy has a threshold structure. We also show, through an analytical study and numerical examples, how flexible warranties can result in higher profits and higher attach rates. Chapter 4 examines the design and pricing of residual value warranties, which refund customers at the end of the warranty period based on their claim history. Traditional extended warranties for IT products do not differentiate customers according to their usage rates or operating environment. These warranties are priced to cover the costs of high-usage customers who tend to experience more failures and are therefore more costly to support. This makes traditional warranties economically unattractive to low-usage customers. In this chapter, we introduce, design and price residual value warranties. These warranties refund a part of the upfront price to customers who have zero or few claims, according to a pre-determined refund schedule.
By design, the net cost of these warranties is lower for light users than for heavy users. As a result, a residual value warranty can enable the provider to price-discriminate based on usage rates or operating conditions without the need to monitor individual customers' usage. Theoretical results and numerical experiments demonstrate how residual value warranties can appeal to a broader range of customers and significantly increase the provider's profits.
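For readers unfamiliar with the choice model referenced in Chapter 2, the sketch below computes illustrative Nested Logit choice probabilities with product-specific price coefficients and nest-specific dissimilarity parameters. The parameterization (alphas, betas, gammas) and the example numbers are assumptions made here for illustration, not the thesis's notation or calibration.

```python
import numpy as np

def nested_logit_shares(prices, alphas, betas, nests, gammas):
    # Nested Logit shares with product-specific price coefficients beta_i and
    # nest dissimilarity parameters gamma_k; the outside option has utility 0.
    prices, alphas, betas = map(np.asarray, (prices, alphas, betas))
    v = alphas - betas * prices                      # deterministic utilities
    shares = np.zeros_like(v)
    inclusive = {}
    for k, items in nests.items():
        ev = np.exp(v[items] / gammas[k])
        inclusive[k] = gammas[k] * np.log(ev.sum())  # inclusive value of nest k
    denom = 1.0 + sum(np.exp(Ik) for Ik in inclusive.values())
    for k, items in nests.items():
        ev = np.exp(v[items] / gammas[k])
        p_nest = np.exp(inclusive[k]) / denom        # probability of choosing nest k
        shares[items] = p_nest * ev / ev.sum()       # within-nest choice probabilities
    return shares

# Two nests of two products each; the "adjusted markup" p_i - c_i - 1/beta_i is
# the quantity the chapter shows is equal across products within a nest at optimality.
nests = {0: [0, 1], 1: [2, 3]}
s = nested_logit_shares(prices=[10, 12, 9, 11], alphas=[5, 6, 4, 5],
                        betas=[0.5, 0.4, 0.6, 0.5], nests=nests,
                        gammas={0: 0.8, 1: 0.7})
print(s)
```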
424. A Simulation Model to Analyze the Impact of Golf Skills and a Scenario-based Approach to Options Portfolio Optimization. Ko, Soonmin. January 2012.
A simulation model of the game of golf is developed to analyze the impact of various skills (e.g., driving distance, directional accuracy, putting skill, and others) on golf scores. The golf course model includes realistic features such as rough, sand, water, and trees. Golfer shot patterns are modeled with t distributions and mixtures of t and normal distributions, since normal distributions do not provide good fits to the data. The model is calibrated to extensive data for amateur and professional golfers. The golf simulation is used to assess the impact of distance and direction on scores, to determine what factors separate professionals from amateurs, and to determine the impact of course length on scores. In the second part of the thesis, we use a scenario-based approach to solve a portfolio optimization problem with options. The solution provides the optimal payoff profile given an investor's view of the future, his utility function or risk appetite, and the market prices of options. The scenario-based approach has several advantages over the traditional covariance matrix method, including additional flexibility in the choice of constraints and objective function.
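A minimal sketch of the kind of shot-pattern model described above: distance and direction errors are drawn from a mixture of a normal component and a heavier-tailed t component. All parameter values (standard deviations, degrees of freedom, mixture weight) are made up for illustration and are not the thesis's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_shots(n, target_dist, dist_sd, angle_sd_deg, df=5, p_mishit=0.1):
    # Illustrative shot pattern: with probability p_mishit the errors come from a
    # heavier-tailed t distribution (mishits), otherwise from a normal distribution.
    heavy = rng.random(n) < p_mishit
    dist_err = np.where(heavy, dist_sd * rng.standard_t(df, n),
                        dist_sd * rng.standard_normal(n))
    ang_err = np.radians(np.where(heavy, angle_sd_deg * rng.standard_t(df, n),
                                  angle_sd_deg * rng.standard_normal(n)))
    d = target_dist + dist_err
    return d * np.cos(ang_err), d * np.sin(ang_err)   # (down-range, lateral) positions

x, y = sample_shots(10000, target_dist=250, dist_sd=15, angle_sd_deg=3)
print(np.std(y))   # lateral dispersion grows with distance and directional error
```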
425. Strategic Models in Supply Network Design. Lederman, Roger. January 2012.
This dissertation contains a series of essays intended to introduce strategic modeling techniques into the network design problem. While investment in production capacity has long been approached as a critical strategic decision, the increasing need for robust, responsive supply capabilities has made it essential to take a network view, where multiple products and sites are considered simultaneously. In traditional network planning, models have rarely accounted for the behavior of additional players - customers, competitors, suppliers - on whom a firm can exert only a limited influence. We analyze a set of models that account for the dynamics of the firm's interaction with these outside actors. In Chapters 2 and 3, we develop game-theoretic models to characterize the allocation of resources in a network context. In Chapter 2, we use series-parallel networks to model the arrangement of producers whose output is bundled. This structure may arise, for example, when various components of the production process are outsourced individually. We study supply-function mechanisms through which producers strategically manage scarce capacity. Our results show how network structure can be analyzed to measure producers' market power and its effect on equilibrium markups. Chapter 3 looks at the network design problem of a vertically integrated firm with the ability to flexibly allocate resources across markets. We consider optimal design of the firm's production network as an upper-level decision to be optimized with respect to competitive outcomes in the lower stage. We find that optimal strategies regarding the location and centralization of production will differ across firms, depending on their competitive position in the market. The final two chapters discuss practical issues regarding the availability of model inputs in a multi-product context. In Chapter 4, we propose a method to construct competitor sets through estimation of a latent-segment choice model. We present a case study in a hotel market, where demand is distributed both spatially and temporally. We show how widely available data on market events can be used to drive identification of customer segments, providing a basis to assess competitive interactions. Chapter 5 provides a further example, in the setting of urban transportation networks, of how user behavior on a network can be estimated from partially observed data. We present a novel two-phase approach for performing this estimation in real time.
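As a rough illustration of the latent-segment estimation idea in Chapter 4, the sketch below evaluates the log-likelihood of a generic latent-class logit model, in which each customer segment has its own taste vector and segments are mixed with unknown probabilities. The data layout and variable names are assumptions made here, not the dissertation's specification.

```python
import numpy as np

def latent_class_mnl_loglik(beta, weights, X, choices):
    # Log-likelihood of a latent-segment logit model: segment s has taste vector
    # beta[s]; segments are mixed with probabilities weights[s].
    # X: (n_obs, n_alts, n_features); choices: index of the chosen alternative per obs.
    ll = 0.0
    for i in range(X.shape[0]):
        like_i = 0.0
        for s, w in enumerate(weights):
            u = X[i] @ beta[s]                        # utilities of the alternatives
            p = np.exp(u - u.max())
            p /= p.sum()
            like_i += w * p[choices[i]]               # segment-weighted choice probability
        ll += np.log(like_i)
    return ll

# Toy data just to show the shapes; in estimation, beta and weights would be optimized.
rng = np.random.default_rng(3)
X = rng.standard_normal((50, 4, 3))                   # 50 observations, 4 alternatives, 3 features
beta = [np.array([1.0, -0.5, 0.2]), np.array([-0.3, 0.8, 0.1])]
print(latent_class_mnl_loglik(beta, weights=[0.6, 0.4], X=X,
                              choices=rng.integers(0, 4, size=50)))
```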
426. Contingent Capital: Valuation and Risk Implications Under Alternative Conversion Mechanisms. Nouri, Behzad. January 2012.
Several proposals for enhancing the stability of the financial system include requirements that banks hold some form of contingent capital, meaning equity that becomes available to a bank in the event of a crisis or financial distress. Specific proposals vary in their choice of conversion trigger and conversion mechanism, and have inspired extensive scrutiny regarding their effectiveness in avoiding costly public rescues and bail-outs, as well as their potential adverse effects on market dynamics. While allowing banks to leverage and gain a higher return on their equity capital during upturns in financial markets, contingent capital provides an automatic mechanism to reduce debt and raise the loss-bearing capital cushion during downturns and market crashes, thereby making it possible to achieve stability and robustness in the financial sector without reducing the efficiency and competitiveness of the banking system through higher regulatory capital requirements. However, many researchers have raised concerns regarding unintended consequences and implications of such instruments for market dynamics. Death spirals in the stock price near conversion, the possibility of profitable stock or book manipulations by either the investors or the issuer, the marketability of and demand for such hybrid instruments, contagion and systemic risks arising from the hedging strategies of investors, and higher risk-taking incentives for issuers are among such concerns. Though substantial, many of these issues can be addressed through a prudent design of the trigger and conversion mechanism. In the following chapters, we develop multiple models for the pricing and analysis of contingent capital under different conversion mechanisms. In Chapter 2 we analyze the case of contingent capital with a capital-ratio trigger and partial, on-going conversion. The capital ratio we use is based on accounting or book value, to approximate the regulatory ratios that determine capital requirements for banks. The conversion process is partial and on-going in the sense that each time a bank's capital ratio reaches the minimum threshold, just enough debt is converted to equity to meet the capital requirement, so long as the contingent capital has not been depleted. In Chapter 3 we simplify the design to all-at-once conversion; however, we perform the analysis through a much richer model which incorporates tail risk in terms of jumps, an endogenous optimal default policy, and debt rollover. We also investigate the case of bail-in debt, where at default the original shareholders are wiped out and the converted investors take control of the firm. In the case of contingent convertibles, the conversion trigger is assumed to be a contractual term specified in terms of the market value of assets. For bail-in debt, the trigger is the point at which the original shareholders optimally default. We study incentives of shareholders to change the capital structure and how CoCos affect risk incentives. Several researchers have advocated the use of a market-based trigger, which is forward looking, continuously updated and readily available, while others have raised concerns regarding unintended consequences of a market-based trigger. In Chapter 4 we investigate one of these issues, namely the existence and uniqueness of equilibrium when the conversion trigger is based on the stock price.
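A stylized sketch of the partial, on-going conversion mechanism described for Chapter 2, under the simplifying assumption that converting contingent capital to equity at book value raises book equity one-for-one; the balance-sheet numbers are illustrative only and are not drawn from the thesis.

```python
def convert_on_trigger(assets, coco_outstanding, other_debt, min_ratio):
    # Partial, on-going conversion: when the book capital ratio equity/assets drops
    # below min_ratio, convert just enough contingent capital to equity to restore
    # the minimum, until the CoCo tranche is exhausted.
    equity = assets - other_debt - coco_outstanding
    ratio = equity / assets
    if ratio >= min_ratio or coco_outstanding <= 0:
        return coco_outstanding, 0.0
    needed = min_ratio * assets - equity          # equity shortfall at the trigger
    converted = min(needed, coco_outstanding)     # conversion shifts debt to equity
    return coco_outstanding - converted, converted

remaining, converted = convert_on_trigger(assets=100.0, coco_outstanding=5.0,
                                          other_debt=90.0, min_ratio=0.08)
print(remaining, converted)   # -> 2.0 of CoCo debt left, 3.0 converted to equity
```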
427. Modeling Customer Behavior for Revenue Management. Bansal, Matulya. January 2012.
In this thesis, we model and analyze the impact of two behavioral aspects of customer decision making on the revenue maximization problem of a monopolist firm. First, we study the revenue maximization problem of a monopolist firm selling a homogeneous good to a market of risk-averse, strategic customers. Using a discrete (but arbitrary) valuation distribution, we show how the dynamic pricing problem with strategic customers can be formulated as a mechanism design problem, thereby making it more amenable to analysis. We characterize the optimal solution, and solve the problem for several special cases. We perform an asymptotic analysis for the low risk-aversion case and show that it is asymptotically optimal to offer at most two products. Second, we consider a revenue-maximizing monopolist firm that serves a market of customers who are heterogeneous with respect to their valuations and desire for a quality attribute. Instead of optimizing the net utility that results from an appropriate combination of product price and quality, as in the traditional model of customer behavior, we consider a setting where customers purchase the cheapest product whose quality exceeds a customer-specific quality threshold. We call such preferences threshold preferences. We solve the firm's product design problem in this setting, and contrast it with the traditional model of customer choice behavior. We consider several scenarios where such preferences might arise, and identify the optimal solution in each case. In addition to these product design problems, we study the problem of identifying the optimal putting strategy for a golfer. We develop a model of golfer putting skill, and combine it with a putt trajectory and holeout model to identify a golfer's optimal putting strategy. The problem of identifying the optimal putting strategy is shown to be equivalent to a two-dimensional stochastic shortest path problem with continuous state and control spaces, and is solved using approximate dynamic programming. We calibrate the golfer model to professional and amateur player data, and use the calibrated model to answer several interesting questions, e.g., how green-reading ability affects golfer performance, how professional and amateur golfers differ in their strategies, and how uphill and downhill putts compare in difficulty.
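The threshold-preference purchase rule described above is simple enough to state in a few lines of code; the sketch below illustrates that rule only, with a made-up product menu, and is not drawn from the thesis.

```python
def threshold_choice(products, quality_threshold):
    # Under threshold preferences, the customer buys the cheapest product whose
    # quality meets or exceeds her quality threshold, and buys nothing otherwise.
    feasible = [p for p in products if p["quality"] >= quality_threshold]
    return min(feasible, key=lambda p: p["price"]) if feasible else None

menu = [{"name": "basic", "price": 10, "quality": 1.0},
        {"name": "plus",  "price": 15, "quality": 2.0},
        {"name": "pro",   "price": 25, "quality": 3.0}]
print(threshold_choice(menu, quality_threshold=1.5))   # -> the "plus" product
```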
428. Price competition and the impact of service attributes: Structural estimation and analytical characterizations of equilibrium behavior. Pierson, Margaret Parker. January 2012.
This dissertation addresses a number of outstanding, fundamental questions in the operations management and industrial organization literature. The operations management literature has a long history of studying the competitive impact of operational, firm-level strategic decisions within oligopoly markets. The first essay reports on an empirical study of an important industry, the drive-thru fast-food industry. We estimate a competition model, derived from an underlying Mixed Multinomial Logit (MMNL) consumer choice model, using detailed empirical data. The main goal is to measure to what extent waiting time performance, along with price levels, brand attributes, and geographical and demographic factors, impacts competing firms' market shares. The primary goal of our second essay is to characterize the equilibrium behavior of price competition models with Mixed Multinomial Logit (MMNL) demand functions under affine cost structures. In spite of the huge popularity of MMNL models in both the theoretical and empirical literature, it is not known, in general, whether a Nash equilibrium (in pure strategies) of prices exists, and whether the equilibria can be uniquely characterized as the solutions to the system of First Order Condition (FOC) equations. In the third essay, which is the most general in its context, we establish that in the absence of cost efficiencies resulting from a merger, aggregate profits of the merging firms increase, as do equilibrium prices, for general price competition models with general nonlinear demand and cost functions, as long as the models are supermodular, with two additional structural conditions: (i) each firm's profit function is strictly quasi-concave in its own price(s), and (ii) markets are competitive, i.e., in the pre-merger industry, each firm's profits increase when any of its competitors unilaterally increases its price. Even the equilibrium profits of the remaining firms in the industry increase, while the consumer ends up holding the bag, i.e., consumer welfare declines. As demonstrated by this essay, the answers to these sorts of strategy questions have implications not only for the firms and customers but also for the policy makers policing these markets.
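As a simplified illustration of the first-order conditions mentioned in the second essay, the sketch below uses the plain Multinomial Logit special case (a single segment with a common price coefficient), where the FOC of each single-product firm reduces to p_i - c_i = 1/(beta (1 - s_i)). All parameter values are invented for the example; the mixed-logit case studied in the essay is substantially harder.

```python
import numpy as np

def mnl_shares(prices, alphas, beta):
    # Plain MNL market shares with a common price sensitivity beta and an outside option.
    u = alphas - beta * np.asarray(prices)
    e = np.exp(u)
    return e / (1.0 + e.sum())

def foc_residuals(prices, alphas, beta, costs):
    # First-order conditions of single-product firms competing in prices under MNL
    # demand: at an interior equilibrium, p_i - c_i = 1 / (beta * (1 - s_i)).
    s = mnl_shares(prices, alphas, beta)
    return np.asarray(prices) - np.asarray(costs) - 1.0 / (beta * (1.0 - s))

# Simple fixed-point iteration on the FOCs (converges for these numbers;
# not a general-purpose equilibrium solver).
alphas, beta, costs = [2.0, 2.5, 3.0], 1.5, np.array([0.5, 0.6, 0.7])
prices = np.array([1.0, 1.0, 1.0])
for _ in range(200):
    s = mnl_shares(prices, alphas, beta)
    prices = costs + 1.0 / (beta * (1.0 - s))
print(prices, foc_residuals(prices, alphas, beta, costs))
```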
429. Models for managing surge capacity in the face of an influenza epidemic. Zenteno, Ana. January 2013.
Influenza pandemics pose an imminent risk to society. Yearly outbreaks already represent heavy social and economic burdens. A pandemic could severely affect infrastructure and commerce through high absenteeism, supply chain disruptions, and other effects over an extended and uncertain period of time. Governmental institutions such as the Centers for Disease Control and Prevention (CDC) and the U.S. Department of Health and Human Services (HHS) have issued guidelines on how to prepare for a potential pandemic; however, much work still needs to be done to meet them. From a planner's perspective, the complexity of outlining plans to manage future resources during an epidemic stems from the uncertainty of how severe the epidemic will be. Uncertainty in parameters such as the contagion rate (how fast the disease spreads) makes the course and severity of the epidemic unforeseeable, exposing any planning strategy to a potentially wasteful allocation of resources. Our approach deploys additional resources in response to a robust model of the evolution of the epidemic, so as to hedge against the uncertainty in its evolution and intensity. Under existing plans, large cities would make use of networks of volunteers, students, and recent retirees, or borrow staff from neighboring communities. Taking into account that such additional resources are likely to be significantly constrained (e.g., in quantity and duration), we seek to produce robust emergency staff commitment levels that work well under different trajectories and degrees of severity of the pandemic. Our methodology combines Robust Optimization techniques with Epidemiology (SEIR models) and system performance modeling. We describe cutting-plane algorithms analogous to generalized Benders' decomposition that prove to be fast and numerically accurate. Our results yield insights on the structure of optimal robust strategies and on practical rules of thumb that can be deployed during the epidemic. To assess the efficacy of our solutions, we study their performance under different scenarios and compare them against other seemingly good strategies through numerical experiments. This work would be particularly valuable for institutions that provide public services, whose continuity of operations is critical for a community, especially in view of an event of this caliber. As far as we know, this is the first time this problem has been addressed in a rigorous way; in particular, we are not aware of any other applications of robust optimization in epidemiology.
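For context on the epidemiological building block mentioned above, the sketch below is a bare-bones forward-Euler simulation of a standard SEIR model; the parameter values are illustrative and have no connection to the scenarios studied in the dissertation.

```python
import numpy as np

def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    # Forward-Euler discretization of the standard SEIR compartmental model; beta
    # (the contagion rate) is the uncertain parameter a robust plan hedges against.
    s, e, i, r = s0, e0, i0, r0
    infectious_path = []
    for _ in range(int(days / dt)):
        new_exposed = beta * s * i * dt
        new_infectious = sigma * e * dt
        new_recovered = gamma * i * dt
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        infectious_path.append(i)
    return np.array(infectious_path)

infectious = seir(beta=0.45, sigma=1/3, gamma=1/5, s0=0.999, e0=0.001, i0=0.0,
                  r0=0.0, days=180)
peak_absenteeism = infectious.max()   # a crude proxy for the staff shortfall at the peak
print(peak_absenteeism)
```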
430. Financial Portfolio Risk Management: Model Risk, Robustness and Rebalancing Error. Xu, Xingbo. January 2013.
Risk management has always been a key component of portfolio management. While more and more complicated models are proposed and implemented as research advances, they all inevitably rely on imperfect assumptions and estimates. This dissertation aims to investigate the gap between complicated theoretical modeling and practice. We mainly focus on two directions: model risk and rebalancing error. In the first part of the thesis, we develop a framework for quantifying the impact of model error and for measuring and minimizing risk in a way that is robust to model error. This robust approach starts from a baseline model and finds the worst-case error in risk measurement that would be incurred through a deviation from the baseline model, given a precise constraint on the plausibility of the deviation. Using relative entropy to constrain model distance leads to an explicit characterization of worst-case model errors; this characterization lends itself to Monte Carlo simulation, allowing straightforward calculation of bounds on model error with very little computational effort beyond that required to evaluate performance under the baseline nominal model. This approach goes well beyond the effect of errors in parameter estimates to consider errors in the underlying stochastic assumptions of the model and to characterize the greatest vulnerabilities to error in a model. We apply this approach to problems of portfolio risk measurement, credit risk, delta hedging, and counterparty risk measured through credit valuation adjustment. In the second part, we apply this robust approach to a dynamic portfolio control problem. The sources of model error include the evolution of market factors and the influence of these factors on asset returns. We analyze both finite- and infinite-horizon problems in a model in which returns are driven by factors that evolve stochastically. The model incorporates transaction costs and leads to simple and tractable optimal robust controls for multiple assets. We illustrate the performance of the controls on historical data. Robustness does improve performance in out-of-sample tests in which the model is estimated on a rolling window of data and then applied over a subsequent time period. By acknowledging uncertainty in the estimated model, the robust rules lead to less aggressive trading and are less sensitive to sharp moves in underlying prices. In the last part, we analyze the error between a discretely rebalanced portfolio and its continuously rebalanced counterpart in the presence of jumps or mean reversion in the underlying asset dynamics. With discrete rebalancing, the portfolio's composition is restored to a set of fixed target weights at discrete intervals; with continuous rebalancing, the target weights are maintained at all times. We examine the difference between the two portfolios as the number of discrete rebalancing dates increases. We derive the limiting variance of the relative error between the two portfolios for both the mean-reverting and jump-diffusion cases. For both cases, we derive "volatility adjustments" to improve the approximation of the discretely rebalanced portfolio by the continuously rebalanced portfolio, based on the limiting covariance between the relative rebalancing error and the level of the continuously rebalanced portfolio. These results are based on strong approximation results for jump-diffusion processes.
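A minimal sketch of the reweighting idea behind the relative-entropy approach described in the first part: within a relative-entropy budget, the worst-case change of measure exponentially tilts the baseline distribution toward large losses, so worst-case bounds can be estimated by reweighting the same Monte Carlo sample generated under the nominal model. The loss distribution and the tilt parameter below are placeholders, not the thesis's examples.

```python
import numpy as np

def tilted_estimate(losses, theta):
    # Worst-case expected loss under an exponential change of measure proportional
    # to exp(theta * loss); theta indexes the size of the model perturbation, and the
    # corresponding relative entropy is reported alongside the estimate.
    w = np.exp(theta * losses)
    w /= w.sum()
    worst_case_mean = np.sum(w * losses)
    relative_entropy = np.sum(w * np.log(w * len(losses)))   # KL vs. the empirical nominal
    return worst_case_mean, relative_entropy

# Reuse the same Monte Carlo sample generated under the baseline (nominal) model.
rng = np.random.default_rng(2)
losses = rng.standard_normal(100000)          # stand-in for simulated portfolio losses
print(tilted_estimate(losses, theta=0.0))     # theta = 0 recovers the nominal mean
print(tilted_estimate(losses, theta=0.5))     # larger theta = larger worst-case deviation
```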