251. Applications of robust optimization to queueing and inventory systems. Rikun, Alexander Anatolyevich. January 2011.
Thesis (Ph. D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. Includes bibliographical references (pp. 105-111).

This thesis investigates the application of robust optimization to the performance analysis of queueing and inventory systems. In the first part of the thesis, we propose a new approach to the performance analysis of queueing systems based on robust optimization. We first derive explicit upper bounds on performance for tandem single-class, multiclass single-server, and single-class multi-server queueing systems by solving appropriate robust optimization problems. We then show that these bounds, derived by solving deterministic optimization problems, translate into upper bounds on expected steady-state performance for a variety of widely used performance measures such as waiting times and queue lengths. Moreover, these explicit bounds agree qualitatively with known results. In the second part of the thesis, we propose methods to compute (s,S) policies in supply chain networks using robust and stochastic optimization, and we compare their performance. Our algorithms handle general uncertainty sets, arbitrary network topologies, and flexible cost functions, including fixed costs, and they exhibit practical running times in our experiments. We contrast the performance of robust and stochastic (s,S) policies in a numerical study and find that the robust policy is comparable to the average performance of the stochastic policy, but has a considerably lower standard deviation across a variety of networks and realized demand distributions. Additionally, we identify regimes in which the robust policy exhibits particular strengths, even in average performance and tail behavior, compared with the stochastic policy.
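A minimal sketch of the (s,S) policy class studied in the second part: whenever the inventory position falls to the reorder point s, the system orders up to the level S. The demand stream, cost parameters, and the crude grid search below are hypothetical stand-ins, not the thesis's robust or stochastic algorithms.

```python
import random

def simulate_sS(s, S, demands, fixed_cost, unit_cost, holding_cost, backlog_cost):
    """Simulate a single-stage (s, S) policy: whenever the inventory
    position drops to s or below, order up to S."""
    inventory, total_cost = S, 0.0
    for d in demands:
        if inventory <= s:                      # reorder point reached
            order = S - inventory               # order up to S
            total_cost += fixed_cost + unit_cost * order
            inventory = S
        inventory -= d                          # serve (or backlog) demand
        if inventory >= 0:
            total_cost += holding_cost * inventory
        else:
            total_cost += backlog_cost * (-inventory)
    return total_cost

random.seed(0)
demands = [random.randint(0, 20) for _ in range(200)]
# Grid search over (s, S) pairs: a stand-in for the optimization
# algorithms developed in the thesis, not a reproduction of them.
best = min((simulate_sS(s, S, demands, 100, 2, 1, 10), s, S)
           for s in range(0, 50, 5) for S in range(s + 5, 100, 5))
print("cost=%.0f at (s=%d, S=%d)" % best)
```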
252. Algorithms and hardness results for the jump number problem, the joint replenishment problem, and the optimal clustering of frequency-constrained maintenance jobs. Telha Cornejo, Claudio. January 2012.
Thesis (Ph. D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2012. Includes bibliographical references (pp. 107-110).

In the first part of this thesis we present a new, geometric interpretation of the jump number problem on 2-dimensional 2-colorable (2D2C) partial orders. We show that the jump number of a 2D2C poset is equivalent to the maximum cardinality of an independent set in a properly defined collection of rectangles in the plane. We then model the geometric problem as a linear program. Even though the underlying polytope may not be integral, we show that one can always find an integral optimal solution. Inspired by this result and by previous work of A. Frank, T. Jordan and L. Vegh [13, 14, 15] on set-pairs, we derive an efficient combinatorial algorithm that finds the maximum independent set and its dual, the minimum hitting set, in polynomial time. The combinatorial algorithm solves the jump number problem on convex posets (a subclass of 2D2C posets) significantly faster than current methods: if n is the number of elements in the partial order, our algorithm runs in O((n log n)^2.5) time, while previous algorithms required at least O(n^9) time. In the second part, we present a novel connection between certain sequencing problems that involve the coordination of activities and the problem of factoring integers. We use this connection to derive hardness results for three different problems: the Joint Replenishment Problem with General Integer Policies, the Joint Replenishment Problem with Correction Factor, and the problem of optimal clustering of frequency-constrained maintenance jobs. Our hardness results do not follow from a standard type of reduction (in particular, we do not prove NP-hardness); rather, they imply that no polynomial-time algorithm exists for the problems above unless Integer Factorization is solvable in polynomial time.
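To make the jump number concrete: a jump in a linear extension is a consecutive pair of elements that are unrelated in the poset, and the jump number is the minimum number of jumps over all linear extensions. The brute-force sketch below, on a hypothetical 2-dimensional poset (dominance order on points in the plane), illustrates only the definition, not the thesis's polynomial-time algorithm.

```python
from itertools import permutations

def jump_number(elements, less_than):
    """Brute-force the jump number: the minimum, over all linear
    extensions, of the number of consecutive unrelated pairs.
    Exponential in the poset size; for tiny examples only."""
    best = None
    for perm in permutations(elements):
        # keep only linear extensions (order-preserving permutations)
        if any(less_than(perm[j], perm[i])
               for i in range(len(perm)) for j in range(i + 1, len(perm))):
            continue
        jumps = sum(1 for a, b in zip(perm, perm[1:]) if not less_than(a, b))
        best = jumps if best is None else min(best, jumps)
    return best

# Hypothetical 2-dimensional poset given by dominance of points in the plane.
points = {1: (0, 2), 2: (1, 0), 3: (2, 3), 4: (3, 1)}
lt = lambda a, b: points[a][0] < points[b][0] and points[a][1] < points[b][1]
print(jump_number(list(points), lt))  # 1 for this toy poset
```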
253. An analytics approach to problems in health care. Kung, Jerry Lai. January 2017.
Thesis (Ph. D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017. Includes bibliographical references (pp. 99-102).

Health care expenditures in the United States have been increasing at unsustainable rates for more than thirty years, with no signs of abating. Decisions to accept or reject deceased-donor kidneys offered to patients on the kidney transplantation waitlist currently rely on physician experience and intuition. Scoring rules that determine which end-stage liver disease patients are in most dire need of immediate transplantation have been haphazardly designed and reactively modified in an attempt to decrease waitlist mortality and increase fairness for cancer patients. For each of these problem settings, we propose a framework that takes real-world data as input and draws on modern data analytics methods, ranging from mixed integer linear optimization to predictive machine learning, to yield actionable insights that add a significant edge over current practice. We describe an approach that, given insurance claims data, leads conservatively to a 10% reduction in health care costs in a study involving a large private US employer. Using historical data for patients on the kidney waitlist and organ match runs, we build a model that achieves an out-of-sample AUC of 0.87 when predicting whether a patient will receive a kidney of a particular quality within three, six, or twelve months. Given historical data for patients on the liver waitlist, we create a unified model that, in simulation, averts an additional 25% of adverse events compared to current practice without disadvantaging cancer patients.
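A minimal sketch of the kind of out-of-sample AUC evaluation reported above, on synthetic stand-in data; the features, labels, and logistic model are illustrative assumptions, not the thesis's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the kidney-offer data: patient features,
# label = receives a kidney of a given quality within the horizon.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = (X @ rng.normal(size=6) + rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]      # predicted probability of an offer
print("out-of-sample AUC: %.3f" % roc_auc_score(y_te, probs))
```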
254. Machine learning approaches to challenging problems: interpretable imbalanced classification, interpretable density estimation, and causal inference. Goh, Siong Thye. January 2018.
Thesis (Ph. D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. Includes bibliographical references (pp. 111-118).

In this thesis, I address three challenging machine-learning problems. The first is the imbalanced data problem: we propose two algorithms to handle highly imbalanced classification problems. The first algorithm uses mixed integer programming to optimize a weighted balance between positive and negative class accuracies. The second method uses an approximation to aid scalability; specifically, it follows a characterize-then-discriminate approach, in which the positive class is first characterized by boxes, and each box boundary then becomes a separate discriminative classifier. This method is computationally advantageous because it can be easily parallelized and considers only the relevant regions of the feature space. The second problem is density estimation for categorical data sets. We present tree- and list-structured density estimation methods for binary/categorical data, with three generative models: the first allows the user to specify the number of desired leaves in the tree within a Bayesian prior; the second allows the user to specify the desired number of branches within the prior; the third returns lists (rather than trees) and allows the user to specify the desired number of rules and the length of rules within the prior. Finally, we present a new machine learning approach to estimating personalized treatment effects in the classical potential outcomes framework with binary outcomes. Strictly, both treatment and control outcomes must be measured for each unit in order to perform supervised learning, yet in practice only one outcome can be observed per unit. To overcome this, we propose surrogate loss functions that incorporate both treatment and control data and yield tighter bounds than the sum of the losses for the treatment and control groups. A specific choice of loss function, namely a type of hinge loss, yields a minimax support vector machine formulation. The resulting optimization problem requires the solution of only a single convex optimization problem incorporating both treatment and control units, and it enables the kernel trick to be used for nonlinear (and non-parametric) estimation.
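A minimal sketch of the objective behind the first algorithm: balancing positive- and negative-class accuracies on imbalanced data. Class-weighted logistic regression is used here as an accessible proxy; the thesis optimizes the weighted balance exactly with mixed integer programming, which this sketch does not attempt.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Imbalanced synthetic data: roughly 5-10% positives.
rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=3000) > 2.2).astype(int)

# Weighting each class inversely to its frequency pushes the fit toward
# a balance of the two class accuracies, in the spirit of (but not
# identical to) the thesis's exact MIP objective.
model = LogisticRegression(class_weight="balanced").fit(X, y)
pred = model.predict(X)
tpr = (pred[y == 1] == 1).mean()   # positive-class accuracy
tnr = (pred[y == 0] == 0).mean()   # negative-class accuracy
print("TPR=%.2f TNR=%.2f balanced=%.2f" % (tpr, tnr, (tpr + tnr) / 2))
```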
255. Dynamic learning and optimization for operations management problems. Wang, He. January 2016.
Thesis (Ph. D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016. Includes bibliographical references (pp. 153-157).

With advances in information technology and the increased availability of data, new approaches that integrate learning and decision making have emerged in operations management. These learning-and-optimizing approaches can be used when the decision maker faces incomplete information in a dynamic environment. We first consider a network revenue management problem in which a retailer aims to maximize revenue from multiple products with limited inventory. The retailer does not know the exact demand distribution at each price and must learn the distribution from sales data. We propose a dynamic learning and pricing algorithm that builds on the Thompson sampling algorithm for multi-armed bandit problems by incorporating inventory constraints. Our algorithm has both strong theoretical performance guarantees and promising numerical performance when compared to other algorithms developed for similar settings. We next consider a dynamic pricing problem for a single product where the demand curve is not known a priori. Motivated by business constraints that prevent sellers from conducting extensive price experimentation, we assume a model in which the seller is allowed a bounded number of price changes during the selling period. We propose a pricing policy that incurs the smallest possible regret up to a constant factor. In addition to the theoretical results, we describe an implementation at Groupon, a large e-commerce marketplace for daily deals; the field study shows significant impact on revenue and bookings. Finally, we study a supply chain risk management problem. We propose a hybrid strategy that uses both process flexibility and inventory to mitigate risks. The interplay between process flexibility and inventory is modeled as a two-stage robust optimization problem: in the first stage, the firm allocates inventory, and in the second stage, after disruption strikes, the firm schedules its production using process flexibility to minimize demand shortage. By exploiting the structure of the second-stage problem, we develop a delayed constraint generation algorithm that efficiently solves the two-stage robust optimization problem. Our analysis of this model provides important insights into the impact of process flexibility on total inventory levels and inventory allocation patterns.
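A minimal sketch of Thompson sampling for pricing, with Bernoulli purchase probabilities and Beta priors (both hypothetical). The thesis's algorithm additionally enforces inventory constraints inside the optimization step; this sketch replaces that with a simple stop-when-sold-out rule.

```python
import numpy as np

rng = np.random.default_rng(2)
prices = np.array([4.0, 6.0, 8.0])
true_buy_prob = np.array([0.8, 0.5, 0.2])   # unknown to the seller
inventory, horizon = 60, 200
alpha, beta = np.ones(3), np.ones(3)        # Beta(1,1) prior per price

revenue = 0.0
for t in range(horizon):
    if inventory == 0:
        break
    theta = rng.beta(alpha, beta)           # sample a demand model
    k = int(np.argmax(prices * theta))      # price best under the sample
    sale = rng.random() < true_buy_prob[k]  # customer buys or not
    alpha[k] += sale                        # posterior update
    beta[k] += 1 - sale
    if sale:
        revenue += prices[k]
        inventory -= 1
print("revenue=%.0f, inventory left=%d" % (revenue, inventory))
```

The sampling step is what makes the method self-correcting: prices that look good under the sampled model get tried, and their posteriors sharpen with each observed sale or no-sale.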
256. Robust optimization. Sim, Melvyn. January 2004.
Thesis (Ph. D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. Includes bibliographical references (pp. 169-171).

We propose new methodologies in robust optimization that promise greater tractability, both theoretical and practical, than the classical robust framework. We cover a broad range of mathematical optimization problems, including linear optimization (LP), quadratically constrained quadratic optimization (QCQP), general conic optimization including second-order cone programming (SOCP) and semidefinite optimization (SDP), mixed integer optimization (MIP), network flows, and 0-1 discrete optimization. Our approach allows the modeler to vary the level of conservatism of the robust solutions in terms of probabilistic bounds on constraint violations, while keeping the problem tractable. Specifically, for LP, MIP, SOCP, and SDP, our approaches retain the same complexity class as the original model, and the robust QCQP becomes a SOCP, which is computationally as attractive as the nominal problem. For network flows, we propose an algorithm that solves the robust minimum cost flow problem by solving a polynomial number of nominal minimum cost flow problems in a modified network. For 0-1 discrete optimization problems with cost uncertainty, the robust counterpart of a polynomially solvable problem remains polynomially solvable, and the robust counterpart of an NP-hard α-approximable problem remains α-approximable. Under an ellipsoidal uncertainty set, we show that the robust problem retains the complexity of the nominal problem when the data are uncorrelated and identically distributed. For uncorrelated but not identically distributed data, we propose an approximation method that solves the robust problem to arbitrary accuracy, as well as a Frank-Wolfe-type algorithm that we prove converges to a locally optimal solution and that is remarkably effective in computational experiments.
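The budget-of-uncertainty idea can be sketched directly. For a constraint sum_j a_j x_j <= b whose coefficients may deviate from nominal values a_bar_j by up to a_hat_j, with at most Gamma of them deviating (the last one fractionally), the worst case at a fixed x is obtained by taking the Gamma largest contributions a_hat_j * |x_j|. The thesis obtains this protection term via an equivalent linear program; the direct computation below uses hypothetical data.

```python
def protection(x, a_hat, gamma):
    """Worst-case increase of sum(a_j * x_j) when at most `gamma` of the
    coefficients deviate by up to a_hat_j (fractional gamma allowed):
    take the largest contributions a_hat_j * |x_j| up to budget gamma."""
    contrib = sorted((ah * abs(xj) for ah, xj in zip(a_hat, x)), reverse=True)
    total, budget = 0.0, gamma
    for c in contrib:
        take = min(1.0, budget)
        total += take * c
        budget -= take
        if budget <= 0:
            break
    return total

a_bar = [3.0, 2.0, 5.0]      # nominal coefficients (hypothetical)
a_hat = [1.0, 0.5, 2.0]      # maximum deviations
x = [1.0, 2.0, 0.5]          # candidate solution
b = 12.0
for gamma in (0.0, 1.0, 1.5, 3.0):
    lhs = sum(ab * xj for ab, xj in zip(a_bar, x)) + protection(x, a_hat, gamma)
    print("Gamma=%.1f worst-case lhs=%.2f feasible=%s" % (gamma, lhs, lhs <= b))
```

Raising Gamma from 0 (nominal) to the number of uncertain coefficients (fully conservative) traces out exactly the conservatism dial described in the abstract.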
257. Applications of optimal portfolio management. Bisias, Dimitrios. January 2015.
Thesis (Ph. D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. Includes bibliographical references (pp. 183-188).

This thesis revolves around applications of optimal portfolio theory. In the first essay, we study the optimal portfolio allocation among convergence trades and mean-reversion trading strategies for a risk-averse investor who faces Value-at-Risk and collateral constraints, with and without fear of model misspecification. We investigate the properties of the optimal trading strategy when the investor fully trusts his model dynamics, and then examine how the strategy changes when he mistrusts the model. In particular, we assume the investor believes the data will come from an unknown member of a set of unspecified alternative models near his approximating model, and that his model is a good approximation in the sense that the relative entropy of the alternative models with respect to his nominal model is small. Concern about model misspecification leads the investor to choose a robust optimal portfolio allocation that works well over that set of alternative models. In the second essay, we study how portfolio theory can serve as a framework for biomedical funding allocation decisions, focusing on the National Institutes of Health (NIH). Prioritizing research efforts is analogous to managing an investment portfolio: in both cases, there are competing opportunities in which to invest limited resources, and expected returns, risk, correlations, and the cost of lost opportunities are important factors in determining the return on those investments. Can we apply portfolio theory as a systematic framework for biomedical funding allocation decisions? Does the NIH manage its research risk efficiently? What are the challenges and limitations of portfolio theory in this setting? Finally, in the third essay, we investigate how risk constraints in portfolio optimization and fear of model misspecification affect the statistical properties of market returns. Risk-sensitive regulation has become the cornerstone of international financial regulation. How does this kind of regulation affect the statistical properties of financial markets? Does it affect the risk premium of the market? What about market volatility or liquidity?
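A minimal sketch of the mean-variance allocation at the core of portfolio-theoretic frameworks like the one applied here to NIH funding; the expected returns and covariance are hypothetical, and the thesis's Value-at-Risk, collateral, and robustness considerations are not modeled.

```python
import numpy as np

# Hypothetical expected returns and covariance for three strategies.
mu = np.array([0.06, 0.10, 0.08])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
risk_aversion = 4.0

# Mean-variance optimum: maximize mu'w - (ra/2) w'Sigma w, whose
# first-order condition gives w = (1/ra) * Sigma^{-1} mu.
w = np.linalg.solve(risk_aversion * Sigma, mu)
w = w / w.sum()                          # normalize to a fully invested portfolio
ret, vol = mu @ w, np.sqrt(w @ Sigma @ w)
print("weights:", np.round(w, 3), "return=%.3f vol=%.3f" % (ret, vol))
```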
258. Optimal trees for prediction and prescription. Dunn, Jack William. January 2018.
Thesis (Ph. D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. Includes bibliographical references (pp. 217-226).

For the past 30 years, decision tree methods have been among the most widely used approaches in machine learning across industry and academia, due in large part to their interpretability. However, this interpretability comes at a price: the performance of classical decision tree methods is typically not competitive with state-of-the-art methods like random forests and gradient boosted trees. A key limitation of classical decision tree methods is their use of a greedy heuristic for training. The tree is constructed one locally optimal split at a time, so the final tree as a whole may be far from globally optimal. Motivated by the improvement in mixed-integer optimization methods over the last 30 years, we formulate the construction of the optimal decision tree as a discrete optimization problem, allowing us to construct the entire decision tree in a single step and hence find the single tree that best minimizes the training error. We develop high-performance local search methods that efficiently solve this problem and find optimal decision trees with both parallel (axis-aligned) and hyperplane splits. We show that our approach yields decision trees that improve significantly upon classical decision tree methods. In particular, across a suite of synthetic and real-world classification and regression examples, our methods perform similarly to random forests and boosted trees while maintaining the interpretability advantage of a single decision tree, alleviating the need to choose between performance and interpretability. We also adapt our approach to the problem of prescription, where the goal is to make optimal prescriptions for each observation. While constructing the tree, our method simultaneously infers the unknown counterfactuals in the data and learns to make optimal prescriptions. The result is a decision tree that optimizes both predictive and prescriptive error and delivers an interpretable solution that significantly improves upon the existing state of the art in prescriptive problems.
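A minimal sketch of why one-split-at-a-time training can be far from global optimality. On XOR-structured data with a weakly informative decoy feature, the myopically best root split (which a greedy method prefers) leads to a much worse depth-2 tree than a root chosen by total depth-2 error. Misclassification counts stand in for the Gini impurity used by classical methods, and the exhaustive depth-2 search only gestures at the mixed-integer and local search machinery of the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([rng.uniform(-1, 1, n),          # x0, x1 carry XOR structure
                     rng.uniform(-1, 1, n),
                     rng.normal(0, 1, n)])           # x2 is a weak "decoy"
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)
X[:, 2] += (2 * y - 1) * 0.5      # decoy is mildly correlated with the label

def candidate_splits(X):
    # a small quantile grid of thresholds per feature keeps the search cheap
    return [(f, t) for f in range(X.shape[1])
            for t in np.quantile(X[:, f], np.linspace(0.05, 0.95, 15))]

def split_errors(X, y, f, t):
    """Misclassifications if each side of the split predicts its majority."""
    left, right = y[X[:, f] <= t], y[X[:, f] > t]
    side = lambda s: min(int(s.sum()), len(s) - int(s.sum()))
    return side(left) + side(right)

def best_split(X, y):
    return min((split_errors(X, y, f, t), (f, t)) for f, t in candidate_splits(X))

def depth2_errors(X, y, f, t):
    """Errors of a depth-2 tree: root split (f, t), best split in each child."""
    mask = X[:, f] <= t
    return best_split(X[mask], y[mask])[0] + best_split(X[~mask], y[~mask])[0]

g_err, (gf, gt) = best_split(X, y)   # greedy root: best split in isolation
o_err, (of, ot) = min((depth2_errors(X, y, f, t), (f, t))
                      for f, t in candidate_splits(X))  # root chosen globally
print("greedy root on feature %d -> depth-2 errors: %d"
      % (gf, depth2_errors(X, y, gf, gt)))
print("global root on feature %d -> depth-2 errors: %d" % (of, o_err))
```

The greedy root latches onto the decoy feature, which looks best in isolation but blocks the children from recovering the XOR pattern, while the global search finds the near-zero-error tree.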
259. Statistical methods for forecasting and estimating passenger willingness-to-pay in airline revenue management. Boyer, Christopher A. January 2010.
Thesis (S.M.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. Includes bibliographical references (pp. 167-169).

The emergence of less restricted fare structures in the airline industry reduced the ability of airlines to segment demand through restrictions such as Saturday-night minimum stay, advance purchase, non-refundability, and cancellation fees. As a result, new forecasting techniques such as Hybrid Forecasting and optimization methods such as Fare Adjustment were developed to account for passenger willingness-to-pay. This thesis explores statistical methods for estimating sell-up, the likelihood that a passenger will purchase a higher fare class than originally intended, based solely on historical booking data available in revenue management databases. Because sell-up data are inherently sparse over the booking period, sell-up estimation is often difficult to perform on a per-market basis; on the other hand, estimating sell-up over an entire airline network produces estimates that are too broad and over-generalized. We apply the K-Means clustering algorithm to group markets with similar sell-up estimates, creating a middle ground between system-wide and per-market sell-up estimation. This thesis also formally introduces a new regression-based forecasting method known as Rational Choice, which creates passenger type categories based on potential willingness-to-pay levels and the lowest open fare class. Using this information, sell-up is accounted for within the passenger type categories, making Rational Choice Forecasting less complex than Hybrid Forecasting. The thesis uses the Passenger Origin-Destination Simulator to analyze the impact of these forecasting and sell-up methods in a controlled, competitive airline environment. The simulation results indicate that determining an appropriate level of market sell-up aggregation through clustering both increases revenue and generates sell-up estimates backed by a sufficient number of observations. In addition, the findings show that Hybrid Forecasting creates aggressive forecasts that result in more low-fare-class closures, leaving room not only for sell-up but also for recapture and spill-in passengers in higher fare classes. By contrast, Rational Choice Forecasting, while simpler than Hybrid Forecasting with sell-up estimation, consistently generates lower revenues than Hybrid Forecasting (though still better than standard pick-up forecasting). To better understand why markets are grouped into different clusters, the thesis uses regression analysis to determine the relationship between a market's characteristics and its estimated sell-up rate. The results indicate that several market factors, in addition to actual historical bookings, may predict to some degree passenger willingness-to-pay within a market. This research thus illustrates the importance of estimating passenger willingness-to-pay and its relationship to forecasting in airline revenue management.
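A minimal sketch of clustering markets by estimated sell-up rate with K-Means; the per-market rates below are synthetic stand-ins for estimates computed from historical booking data, and the cluster count is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-market sell-up rate estimates, standing in for the
# revenue-management data described in the thesis.
rng = np.random.default_rng(4)
sellup = np.concatenate([rng.normal(0.15, 0.03, 40),
                         rng.normal(0.35, 0.05, 40),
                         rng.normal(0.60, 0.04, 20)])
markets = sellup.reshape(-1, 1)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(markets)
for c in range(3):
    members = sellup[kmeans.labels_ == c]
    # Pooling observations within a cluster yields sell-up estimates that
    # are neither system-wide averages nor sparse per-market guesses.
    print("cluster %d: %d markets, pooled sell-up rate %.2f"
          % (c, len(members), members.mean()))
```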
260. Stochastic analysis via robust optimization. Youssef, Nataly. January 2016.
Thesis (Ph. D.), Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016. Includes bibliographical references (pp. 167-174).

To evaluate the performance of systems under uncertainty and to optimize them, the literature suggests two main avenues: stochastic analysis and optimization, which describe uncertainty probabilistically, and robust optimization, which describes uncertainty deterministically. We instead propose a novel paradigm that leverages the conclusions of probability theory and the tractability of the robust optimization approach to approximate and optimize the expected behavior of a given system. Our framework models uncertainty via polyhedral sets inspired by the limit laws of probability, characterizes these uncertainty sets by variability parameters that we treat as random variables, and then approximates and optimizes the average performance of the system via a robust optimization formulation. The framework (a) avoids the challenges of fitting probability distributions to the uncertain variables, (b) eliminates the need to generate scenarios describing the states of randomness, and (c) demonstrates the use of robust optimization to evaluate and optimize expected performance. We illustrate the applicability of our methodology by analyzing the performance of queueing networks and optimizing inventory policies for supply chain networks. In Part I, we study the case of a single queue, developing a robust theory for multi-server queues with possibly heavy-tailed primitives. Our methodology (a) provides approximations that match the diffusion approximations for light-tailed queues in heavy traffic, and (b) extends the framework to analyze the transient behavior of heavy-tailed queues. In Part II, we study networks of queues. Our methodology provides accurate approximations of (a) the expected steady-state behavior of generalized queueing networks, and (b) the expected transient behavior of feedforward queueing networks, achieving significant computational tractability and accuracy relative to simulated values. In Part III, we study supply chain networks. Our methodology (a) obtains optimal base-stock levels that match the optimal solutions obtained via stochastic optimization, (b) yields optimal affine policies that often outperform optimal base-stock policies, and (c) provides optimal policies that consistently outperform the solutions obtained via the traditional robust optimization approach.
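A minimal sketch of a robust-queueing-style waiting time bound from a CLT-inspired uncertainty set, in the spirit of (but not identical to) the bounds developed in Part I; all primitives and the budget parameter are hypothetical.

```python
import math

def robust_waiting_bound(lam, mu, sigma_a, sigma_s, gamma, kmax=10000):
    """Worst-case steady-state wait from a CLT-inspired uncertainty set:
    over the last k customers, aggregate interarrival and service times
    may deviate from their means by at most gamma * sigma * sqrt(k).
    The bound maximizes (deviation budget) - (drift toward empty) over k."""
    drift = 1.0 / lam - 1.0 / mu           # mean slack per customer (rho < 1)
    assert drift > 0, "needs a stable queue (arrival rate < service rate)"
    return max(gamma * (sigma_a + sigma_s) * math.sqrt(k) - k * drift
               for k in range(1, kmax))

# M/M/1-like primitives: arrival rate 0.9, service rate 1.0 (hypothetical);
# for exponential times, the standard deviation equals the mean.
for gamma in (1.0, 2.0, 3.0):
    print("gamma=%.0f -> wait bound %.1f"
          % (gamma, robust_waiting_bound(0.9, 1.0, 1.0 / 0.9, 1.0, gamma)))
```

The square-root deviation budget is what replaces distributional assumptions: it encodes the central limit theorem's scaling deterministically, so the worst case can be found by a one-dimensional maximization instead of a stochastic analysis.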