181.
Composite mission variable formulation for real-time mission planning. Barth, Christopher D. (Christopher Daniel), 1976-. January 2001.
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2001. / Includes bibliographical references (p. 153-154). / In this thesis, we create a comprehensive model and efficient solution technique for generating air operations plans. Previous air operations models have fallen short in at least one of the following areas: routing, real-time re-planning of aircraft, problem size capability, plan generation speed, and optimal packaging of aircraft. The purpose of the Composite Mission Variable Decomposition (CMVD) approach is to plan and re-plan air operations for a real conflict as it unfolds. Previous models fell short for two main reasons: they were developed for other purposes (typically weapons studies), or their developers could not create techniques that efficiently generate plans while covering the listed areas. Applying conventional optimization modeling to an operations problem that includes aspects such as routing and real-time re-planning yields a model with millions of constraints and a weak linear programming relaxation. The Composite Mission Variable Model is the first step in overcoming these shortcomings because it greatly decreases the number of constraints in the optimization model, and its linear programming relaxation provides tight bounds. The Composite Mission Variable Model combines multiple air operations planning decisions into a composite mission variable. Many complex constraints that are explicitly included in a conventional model are implicitly enforced in the composite mission variables. We apply price-coordinated decomposition to generate the composite mission variables. Price coordination reduces the number of variables in the Composite Mission Variable Model and allows for parallel processing of composite mission variable generation. CMVD creates air operations plans in minutes for scenarios with thousands of targets, while including important capabilities such as routing and re-planning of aircraft in the air. CMVD is tested in simulated conflicts and its performance validated by comparisons with a heuristic approach for generating plans. / by Christopher D. Barth. / S.M.
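The price-coordinated decomposition described in this abstract is, at its core, a column-generation scheme: a restricted master LP prices the composite variables generated so far, and its dual prices steer subproblems that produce new, better variables. The thesis's mission-variable generator is not reproduced here; below is a minimal generic column-generation loop on a toy cutting-stock-style instance, with all data and names illustrative.

```python
# Minimal column-generation sketch (toy cutting-stock instance).
# Illustrative only: the thesis's composite-mission-variable pricing
# subproblem is far richer; here the pricing step is a small knapsack.
import numpy as np
from scipy.optimize import linprog

ROLL = 10
sizes = np.array([3, 4, 5])        # piece lengths (toy data)
demand = np.array([30, 20, 15])    # pieces required of each size

def price_out(duals):
    """Pricing subproblem: best pattern for one roll under current duals."""
    best = (0.0, None)
    def rec(i, cap, pat, val):
        nonlocal best
        if val > best[0] + 1e-9:
            best = (val, pat.copy())
        for j in range(i, len(sizes)):
            if sizes[j] <= cap:
                pat[j] += 1
                rec(j, cap - sizes[j], pat, val + duals[j])
                pat[j] -= 1
    rec(0, ROLL, np.zeros(len(sizes)), 0.0)
    return best

# Start from trivial columns: one piece type per pattern.
patterns = [np.eye(len(sizes))[i] * (ROLL // s) for i, s in enumerate(sizes)]

while True:
    A = np.column_stack(patterns)
    # Restricted master LP: minimize rolls used subject to covering demand.
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
    duals = -res.ineqlin.marginals     # shadow price of each demand row
    val, pat = price_out(duals)
    if val <= 1.0 + 1e-9:              # no improving column: LP is optimal
        break
    patterns.append(pat)

print("column-generation LP bound:", res.fun)
```

As in the thesis, the payoff is that hard coupling constraints live inside the variable-generation subproblem rather than in the master model, and the subproblems can be solved in parallel.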
182.
Internet of Things and anomaly detection for the iron ore mining industry. Saroufim, Carl Elie. January 2016.
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 176-182). / In the context of a world flooded with data, the Internet of Things (IoT) is exploding. This thesis considers the problem of applying IoT technology to reduce costs in the iron ore mining industry, compensating for the slump in iron ore prices observed in recent years. More specifically, we focus on improving output quality in a data-driven iron ore concentration factory. In this plant, mined iron ore goes through a series of complex physical and chemical transformations that increase the concentration of iron and reduce the concentration of impurities such as silica. We develop an IoT infrastructure comprising machines, a network of sensors, a database, a random forest prediction model, an algorithm for adjusting its cutoff parameter dynamically, and a predictive maintenance algorithm. It can preventively detect, and in some cases fix, poor-quality events in the iron ore concentration factory, improving overall quality and decreasing costs. The random forest model was selected over other anomaly detection techniques. On an independent test data set it achieves an AUC of about 0.92, detecting 90% of the poor-quality events with a false positive rate of 23.02%, lowered by the dynamic cutoff algorithm. These methods can be applied to any factory in any industry that has a good sensor infrastructure providing sufficiently precise and frequent data. / by Carl Elie Saroufim. / S.M.
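The cutoff-adjustment idea, choosing the random forest's classification threshold so that a target fraction of poor-quality events is detected rather than using the default 0.5, can be illustrated with an off-the-shelf model. The sketch below is a stand-in on synthetic data: the thesis's algorithm adjusts the cutoff dynamically over time, whereas here it is tuned once (and, for brevity, on the test split rather than a separate validation set).

```python
# Sketch: random-forest anomaly scoring with a recall-driven cutoff.
# Synthetic stand-in for the plant sensor data; all parameters illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9],
                           random_state=0)      # y=1: poor-quality event
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
scores = rf.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, scores), 3))

# Largest cutoff that still catches 90% of the poor-quality events.
cutoff = np.quantile(scores[y_te == 1], 0.10)
flag = scores >= cutoff
print(f"cutoff={cutoff:.3f}  recall={flag[y_te == 1].mean():.1%}  "
      f"false-positive rate={flag[y_te == 0].mean():.1%}")
```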
183.
Lowering outbound shipping costs in an online retail environment by making better fulfillment and replenishment decisions. Acimovic, Jason Andrew. January 2012.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2012. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 181-185). / As online retailing, or e-tailing, continues to grow, with more and more customers buying physical goods on the internet, finding ways to reduce the cost and environmental impact of outbound shipping in this sector will become increasingly important. We investigate the impact of making poor fulfillment and replenishment decisions using data obtained from a large American online retailer. We then propose implementable (i.e., computationally tractable and relatively intuitive) solutions for both the fulfillment and replenishment problems, each tested on actual data from our industrial partner or on small but realistic models. We first focus on the fulfillment problem, namely, deciding from which warehouse(s) to fulfill a customer's order when several options exist. We propose a heuristic that utilizes the dual values of a transportation linear program to estimate the opportunity cost of depleting inventory from a warehouse. This linear program values inventory at a warehouse according to both its geography and the size of its catalogue. After showing that this linear program is asymptotically optimal, using concepts developed in airline network revenue management, we test the heuristic on industry data, showing a 1% reduction in outbound shipping costs as compared to a myopic fulfillment policy. The last part of the thesis focuses on replenishment. Every period, for each item, the network places an order to restock all the warehouses. Two factors complicate this decision. First, orders do not arrive immediately, but require a lead time to be delivered, during which a random number of customers place orders with the network. Second, any customer's order may be filled from any warehouse, which becomes important when warehouses stock out of an item. It is therefore not trivial to calculate the optimal quantity to order for each warehouse. We show that using a standard replenishment policy, popular in practice, can lead to dynamics that increase outbound shipping costs. We propose a replenishment policy heuristic that is intuitive and performs well on examples. This heuristic has two variants: a simpler one that assumes deterministic demand, and a more complicated one that accounts for stochasticity. / by Jason Andrew Acimovic. / Ph.D.
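The core of the fulfillment heuristic, valuing each warehouse's inventory by the dual prices of a transportation LP and charging that value as an opportunity cost when choosing where to fulfill an order, can be sketched compactly. The instance below is a toy with invented supplies, forecasts, and shipping costs, not the retailer's data.

```python
# Sketch: dual-value fulfillment on a toy transportation LP.
# Supplies, forecasts, and shipping costs are invented for illustration.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4., 6., 9.],        # cost[w, r]: warehouse w -> region r
                 [7., 3., 5.]])
supply = np.array([50., 40.])         # units on hand per warehouse
forecast = np.array([30., 30., 25.])  # expected demand per region
W, R = cost.shape

# Flows x[w, r] flattened row-major. Supply rows: sum_r x[w,r] <= supply[w];
# demand rows written as -sum_w x[w,r] <= -forecast[r].
A_ub = np.zeros((W + R, W * R))
for w in range(W):
    A_ub[w, w * R:(w + 1) * R] = 1.0
for r in range(R):
    A_ub[W + r, r::R] = -1.0
res = linprog(cost.ravel(), A_ub=A_ub,
              b_ub=np.concatenate([supply, -forecast]), method="highs")

# The dual of a supply row is the marginal value of one more unit there,
# i.e. the opportunity cost of depleting that warehouse now.
opp_cost = -res.ineqlin.marginals[:W]

def fulfill(region):
    """Warehouse minimizing shipping cost plus inventory opportunity cost."""
    return int(np.argmin(cost[:, region] + opp_cost))

print("opportunity costs:", opp_cost.round(2), "| region 1 ->", fulfill(1))
```

A myopic policy would use `np.argmin(cost[:, region])` alone; the dual term is what makes the heuristic forward-looking about inventory depletion.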
184.
Decomposition methods for large scale stochastic and robust optimization problems. Becker, Adrian Bernard Druke. January 2011.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 107-112). / We propose new decomposition methods for broad families of stochastic and robust optimization problems, yielding tractable approaches for large-scale real-world applications. We introduce a new type of Markov decision problem, the Generalized Restless Bandits Problem, which encompasses a broad generalization of the restless bandit problem. For this class of stochastic optimization problems, we develop a nested policy heuristic that iteratively solves a series of sub-problems operating on smaller bandit systems. We also develop linear-optimization-based bounds for the Generalized Restless Bandit problem and demonstrate promising computational performance of the nested policy heuristic on a large-scale real-world application: search term selection for sponsored search advertising. We further study the distributionally robust optimization problem with known mean, covariance, and support. These optimization models are attractive in real-world applications because they require the model consumer to rely only on those statistics of uncertainty that are known with relative confidence, rather than making arbitrary assumptions about the exact dynamics of the underlying distribution of uncertainty. Known to be NP-hard, this problem is currently approached with tractable but often weak relaxations for real-world applications. We develop a decomposition method for this family of problems which recursively derives sub-policies along projected dimensions of uncertainty and provides a sequence of bounds on the value of the derived policy. In the development of this method, we prove that non-convex quadratic optimization in n dimensions over a box in two dimensions is efficiently solvable. We also show that this same decomposition method yields a promising heuristic for the MAXCUT problem. We then provide promising computational results in the context of a real-world fixed income portfolio optimization problem. The decomposition methods developed in this thesis recursively derive sub-policies on projected dimensions of the master problem. These sub-policies are optimal on relaxations which admit "tight" projections of the master problem; that is, the projection of the feasible region of the relaxation is equivalent to the projection of that of the master problem along the dimensions of the sub-policy. Additionally, these decomposition strategies provide a hierarchical solution structure that aids in solving large-scale problems. / by Adrian Bernard Druke Becker. / Ph.D.
185.
Combinatorial optimization problems with concave costs. Stratila, Dan. January 2009.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2009. / Includes bibliographical references (p. 83-89). / In the first part, we study the problem of minimizing a separable concave function over a polyhedron. We assume the concave functions are nonnegative and nondecreasing on R+, and the polyhedron is in R+^n (these assumptions can be relaxed further under suitable technical conditions). We show how to approximate this problem to 1+ε precision in optimal value by a piecewise linear minimization problem so that the number of resulting pieces is polynomial in the input size of the original problem and linear in 1/ε. For several concave cost problems, the resulting piecewise linear problem can be reformulated as a classical combinatorial optimization problem. As a result of our bound, a variety of polynomial-time heuristics, approximation algorithms, and exact algorithms for classical combinatorial optimization problems immediately yield polynomial-time heuristics, approximation algorithms, and fully polynomial-time approximation schemes for the corresponding concave cost problems. For example, we obtain a new approximation algorithm for concave cost facility location, and a new heuristic for concave cost multicommodity flow. In the second part, we study several concave cost problems and the corresponding combinatorial optimization problems. We develop an algorithm design technique that yields a strongly polynomial primal-dual algorithm for a concave cost problem whenever such an algorithm exists for the corresponding combinatorial optimization problem. Our technique preserves constant-factor approximation ratios as well as ratios that depend only on certain problem parameters, and exact algorithms yield exact algorithms. For example, we obtain new approximation algorithms for concave cost facility location and concave cost joint replenishment, and a new exact algorithm for concave cost lot-sizing. In the third part, we study a real-time optimization problem arising in the operations of a leading internet retailer. The problem involves the assignment of orders that arrive via the retailer's website to the retailer's warehouses. We model it as a concave cost facility location problem, and employ existing primal-dual algorithms and approximations of concave cost functions to solve it. On past data, we obtain solutions on average within 1.5% of optimality, with running times of less than 100 ms per problem. / by Dan Stratila. / Ph.D.
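The first part's central construction, a piecewise linear approximation of a nondecreasing concave cost that is within a factor 1+ε and uses few pieces, can be demonstrated numerically. Below, breakpoints are placed geometrically for f(x) = sqrt(x) on a bounded domain and the worst-case ratio is checked; geometric spacing is one standard way to obtain such guarantees and is not necessarily the thesis's exact construction.

```python
# Sketch: (1+eps)-style piecewise-linear approximation of a concave cost
# via geometrically spaced breakpoints (a standard construction; the
# thesis's exact scheme may differ).
import numpy as np

def geometric_breakpoints(lo, hi, eps):
    """Consecutive breakpoints differ by a factor (1 + eps)."""
    pts = [lo]
    while pts[-1] < hi:
        pts.append(min(pts[-1] * (1 + eps), hi))
    return np.array(pts)

f = np.sqrt                  # nonnegative, nondecreasing, concave on R+
eps = 0.01
bp = geometric_breakpoints(1.0, 1e6, eps)    # O(log(hi/lo) / eps) pieces

x = np.geomspace(1.0, 1e6, 200_000)
chords = np.interp(x, bp, f(bp))             # piecewise-linear interpolant
ratio = (f(x) / chords).max()                # concavity => chords <= f
print(f"{len(bp) - 1} pieces, worst-case f/approx ratio = {ratio:.6f}")
```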
186.
Three essays on sequencing and routing problems. Bompadre, Agustín. January 2005.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2005. / Includes bibliographical references (leaves 157-162). / In this thesis we study different combinatorial optimization problems that arise in many practical settings where there is a need to find good solutions fast. The first class of problems we study are vehicle routing problems, and the second are sequencing problems. We study approximation algorithms and local search heuristics for these problems. First, we analyze the Vehicle Routing Problem (VRP) with and without split deliveries. In this problem, we must route vehicles from the depot to deliver the demand to the customers while minimizing the total traveling cost. We present a lower bound for this problem, improving a previous bound of Haimovich and Rinnooy Kan. This bound is then used to improve the worst-case approximation guarantee of the Iterated Tour Partitioning (ITP) heuristic when the capacity of the vehicles is constant. Second, we analyze a particular case of the VRP in which the customers are i.i.d. points distributed uniformly on the unit square and have unit demand. We prove that there exists a constant c > 0 such that the ITP heuristic is a (2 - c)-approximation algorithm with probability arbitrarily close to one as the number of customers goes to infinity. This result improves the approximation factor of the ITP heuristic under worst-case analysis, which is 2. We also generalize this result and previous ones to the multi-depot case. Third, we study a language for generating Very Large Scale Neighborhoods for sequencing problems. Local search heuristics are among the most popular approaches to solving hard optimization problems. Among them, Very Large Scale Neighborhood Search techniques present a good balance between the quality of local optima and the time needed to search a neighborhood. We develop a language to generate exponentially large neighborhoods for sequencing problems using grammars, and we develop efficient generic dynamic programming solvers that determine the optimal neighbor in a neighborhood generated by a grammar for a list of sequencing problems, including the Traveling Salesman Problem and the Linear Ordering Problem. This framework unifies a variety of previous results on exponentially large neighborhoods for the Traveling Salesman Problem. / by Agustín Bompadre. / Ph.D.
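The ITP heuristic analyzed in the first two essays is simple to state: compute one traveling-salesman tour through all customers, then cut it into consecutive segments of at most Q customers, each served by a separate vehicle round trip from the depot. A minimal sketch on random unit-square points follows; a nearest-neighbor tour stands in for whichever TSP tour one starts from, and the full heuristic would also try all Q offsets of the first cut and keep the cheapest partition.

```python
# Sketch: Iterated Tour Partitioning (ITP) for the unit-demand VRP.
# A nearest-neighbor tour stands in for the initial TSP tour; the full
# heuristic also tries every offset of the first cut and keeps the best.
import numpy as np

rng = np.random.default_rng(0)
depot = np.array([0.5, 0.5])
pts = rng.random((40, 2))      # customers i.i.d. uniform on the unit square
Q = 5                          # vehicle capacity, unit demands

def nn_tour(points):
    """Cheap nearest-neighbor TSP tour starting from the depot."""
    left, tour, cur = list(range(len(points))), [], depot
    while left:
        j = min(left, key=lambda i: np.linalg.norm(points[i] - cur))
        tour.append(j); left.remove(j); cur = points[j]
    return tour

def route_len(route):
    stops = [depot] + [pts[i] for i in route] + [depot]
    return sum(np.linalg.norm(a - b) for a, b in zip(stops, stops[1:]))

tour = nn_tour(pts)
# Partition the tour into consecutive segments of at most Q customers;
# each segment becomes one vehicle route: depot -> segment -> depot.
routes = [tour[i:i + Q] for i in range(0, len(tour), Q)]
print(len(routes), "routes, total length",
      round(sum(map(route_len, routes)), 3))
```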
187.
Think global, act local when estimating a sparse precision matrix. Lee, Peter Alexander. January 2016.
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 99-100). / Substantial progress has been made in the estimation of sparse high-dimensional precision matrices from scant datasets. This is important because precision matrices underpin common tasks such as regression, discriminant analysis, and portfolio optimization. However, few good algorithms for this task exist outside the space of L1-penalized optimization approaches like GLASSO. This thesis introduces LGM, a new algorithm for the estimation of sparse high-dimensional precision matrices. Using the framework of probabilistic graphical models, the algorithm performs robust covariance estimation to generate potentials for small cliques and fuses the local structures to form a sparse yet globally robust model of the entire distribution. Identification of appropriate local structures is done through stochastic discrete optimization. The algorithm is implemented in Matlab and benchmarked against competitor algorithms on an array of synthetic datasets. Simulation results suggest that LGM may outperform GLASSO when model sparsity is especially important and when variables in the dataset belong to a number of closely related (if unknown) groups. / by Peter Alexander Lee. / S.M.
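LGM itself is thesis-specific (implemented in Matlab), but the GLASSO baseline it is benchmarked against is available off the shelf. The sketch below generates a synthetic dataset with a known sparse precision matrix and fits scikit-learn's GraphicalLasso, the kind of L1-penalized competitor referenced above; all sizes and penalty values are illustrative.

```python
# Sketch: the GLASSO-style baseline LGM is compared against, on synthetic
# data with a known sparse precision matrix. Sizes/penalty illustrative.
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.datasets import make_sparse_spd_matrix

rng = np.random.default_rng(0)
prec = make_sparse_spd_matrix(30, alpha=0.95, random_state=0)  # true precision
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(30), cov, size=200)       # scant sample

est = GraphicalLasso(alpha=0.1).fit(X).precision_

true_edges = np.abs(prec) > 1e-8
found_edges = np.abs(est) > 1e-8
recovered = (true_edges & found_edges).sum() / true_edges.sum()
print(f"fraction of true nonzeros recovered: {recovered:.2%}")
```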
188.
Capacity expansion in contemporary telecommunication networks. Sivaraman, Raghavendran. January 2007.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007. / Includes bibliographical references (p. 155-160). / We study three capacity expansion problems in contemporary long distance telecommunication networks. The first two problems, motivated by a major long distance provider, address capacity expansion in national hybrid long distance telecommunication networks that use both the traditional TDM technology and the more recent VoIP technology to transport voice calls. While network capacity expansion in general is known to be hard to approximate, we exploit the unique requirements associated with hybrid networks to develop compact models and algorithms with strong performance guarantees for these problems. For a single-period, single-facility capacity expansion problem in a hybrid network, we use a decomposition approach to design a (2 + ε)-factor approximation algorithm. Generalizing this idea, we propose a Decentralized Routing Scheme that can be used to design approximation algorithms for many variations of hybrid network capacity expansion. For the Survivable Capacity Expansion Problem in hybrid networks, in which we must install enough capacity to support all demands even if a single link fails, we propose a compact integer programming model. We show that this problem is APX-hard, and present two heuristics with constant worst-case performance guarantees. Finally, we consider the capacity planning problem when peak demands occurring at different times can share network capacity. We propose a general model for capturing time variation of demand, and establish a necessary and sufficient condition for a capacity plan to be feasible. Using a cutting plane approach, we develop a heuristic procedure. Computational experiments on real and random instances show that the proposed procedure performs well, producing solutions within 12% of optimality on average for the instances tested. The tests also highlight the significant potential savings that might be obtained by capacity planning with time sharing. / by Raghavendran Sivaraman. / Ph.D.
189.
A robust optimization approach to online problems. Korolko, Nikita (Nikita E.). January 2017.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 149-155). / In this thesis, we consider online optimization problems that are characterized by incrementally revealed input data and sequential irrevocable decisions that must be made without complete knowledge of the future. We employ a combination of mixed integer optimization (MIO) and robust optimization (RO) methodologies to design new efficient online algorithms that outperform state-of-the-art methods for many important practical applications. We empirically demonstrate that RO-based algorithms are computationally tractable for instances of practical size, generate more cost-effective decisions, and, owing to the exceptional modeling power of MIO, can simultaneously model a large class of similar online problems. In Part I, we consider the well-known K-server problem from the perspective of robust adaptive optimization. We propose a new tractable mixed integer linear formulation of the K-server problem that incorporates both information from the past and uncertainty about the future. By combining ideas from classical online algorithms developed in the computer science literature with robust and adaptive optimization developed in the operations research literature, we propose a new method that (a) is computationally tractable, (b) almost always outperforms all other methods in numerical experiments, and (c) is stable with respect to potential errors in the assumptions about the future. In Part II, we consider several extensions of the asset-based weapon-to-target assignment problem, whose objective is to protect ships in a fleet from incoming threats. We demonstrate that the new highly nonlinear MIO formulation (a) can be combined with lazy constraint techniques, allowing the system designer to find optimal solutions in real time, (b) can be extended to the multi-period setting, and (c) admits a decentralized solution with limited loss of optimality. In Part III, we present a novel covariate-adaptive optimization algorithm for online allocation in clinical trials. The new approach, leveraging MIO and RO techniques, (a) guarantees a better between-group covariate balance than state-of-the-art methods, (b) yields statistical power at least as high as, and sometimes significantly higher than, randomization-based algorithms, and (c) is well protected against selection, investigator, and accidental bias. / by Nikita Korolko. / Ph. D.
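The robust adaptive MIO formulation itself needs an integer-programming solver and is not reproduced here. For context, the sketch below implements the simplest classical online baseline such methods are compared against: the greedy policy that always moves the nearest server, shown on points on the real line. Greedy is known to be non-competitive in the worst case, which is precisely what motivates more sophisticated policies.

```python
# Sketch: greedy K-server baseline on the real line. The thesis's RO-based
# policy (a mixed-integer model, not shown) is benchmarked against such
# classical online rules.
def greedy_k_server(servers, requests):
    """Serve each request with the currently nearest server."""
    servers = list(servers)
    total = 0.0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        total += abs(servers[i] - r)   # movement cost
        servers[i] = r                 # server stays at the request point
    return total

# Alternating requests between two nearby points trap greedy into shuttling
# one server back and forth while the other sits idle.
print(greedy_k_server([0.0, 10.0], [4.0, 6.0] * 20))
```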
190.
Competitive multi-period pricing for perishable products. Sood, Anshul, 1978-. January 2004.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references (p. 161-165). / Pricing a perishable product over a multi-period time horizon is a challenging problem under an oligopolistic market framework. We propose and study a model for multi-period pricing in an oligopolistic market for a single perishable product. Each participating seller in the market has a fixed inventory of the product at the beginning of the time horizon, and additional production is not an available option. Any unsold inventory at the end of the horizon is worthless, and the sellers do not have the option of periodically reviewing and replenishing their inventory. Such a model is appropriate for modelling competition in situations where inventory replenishment decisions are made over a longer time horizon and can be considered exogenous to the pricing decision process. This kind of setup can be used to model pricing of air fares, hotel reservations, bandwidth in communication networks, etc. In this thesis, we study two issues related to multi-period pricing of a perishable product. First, we study the competitive aspect of the problem. Second, we study the setting in which the demand function for each seller has some associated uncertainty, and we assume that the sellers would like to adopt a policy that is robust to adverse uncertain circumstances. We discuss the challenges associated with the analysis of this model and study non-cooperative Nash equilibrium policies for the sellers. We discuss why known results from the literature do not extend to this model, and introduce an optimization approach using results from variational inequality theory and robust optimization to establish existence of the pricing equilibrium policy and to comment on its uniqueness. We also introduce an iterative learning algorithm for computing the equilibrium policy and analyze its convergence. We study how much efficiency (in terms of total system profit) is lost due to competition. Finally, we illustrate our results with numerical examples and discuss some insights. / by Anshul Sood. / Ph.D.
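The flavor of an iterative algorithm for computing a pricing equilibrium can be previewed with a much simpler toy: a one-period, two-seller game with linear demand, where each seller repeatedly best-responds to the other's price. The fixed point of this iteration is the Nash equilibrium; the thesis's setting adds multiple periods, fixed inventories, and demand uncertainty, none of which appear below, and all parameters here are invented.

```python
# Sketch: best-response iteration converging to a Nash pricing equilibrium
# in a one-period, two-seller game with linear demand (toy parameters).
# demand_i(p) = a - b*p_i + c*p_j ;  profit_i = p_i * demand_i
a, b, c = 100.0, 2.0, 1.0

def best_response(p_other):
    # Maximize p*(a - b*p + c*p_other): a concave quadratic in p,
    # with its maximum at p = (a + c*p_other) / (2*b).
    return (a + c * p_other) / (2 * b)

p1, p2 = 10.0, 50.0                  # arbitrary starting prices
for _ in range(100):                 # contraction (c/2b < 1) => convergence
    p1, p2 = best_response(p2), best_response(p1)

print(f"equilibrium prices: {p1:.4f}, {p2:.4f}")
# Closed form for this symmetric game: p* = a / (2b - c) = 100/3
```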