101.
Setting optimal production lot sizes and planned lead times in a job shop system / Yuan, Rong, Ph.D., Massachusetts Institute of Technology, January 2013
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 73-75). / In this research, we model a job shop that produces a set of discrete parts in a make-to-stock setting. The intent of the research is to develop a planning model to determine the optimal operating tactics that minimize the relevant manufacturing costs subject to workload variability and capacity limits. We model the interplay of three key components in the job shop, namely, the production frequency for each part, the variability of production at each work station, and the level of parts inventory. We consider two operating tactics (decision variables): the production lot size for each part and the planned lead time for each work station. We model the relevant manufacturing costs, entailing production overtime costs and inventory-related costs (finished parts, work-in-process, and raw materials), as functions of these decision variables. We formulate a non-linear optimization model and implement it in an Excel spreadsheet. We solve the model with the Premium Excel Solver to determine the minimum-cost operating tactics. We test the model with both hypothetical and actual factory data from our research sponsor. The target factory processes 133 product parts on 59 work stations. The results are consistent with our intuition and demonstrate the potential value of optimizing over these tactics; these tests also provide some managerial insights on the application of these operating tactics. / by Rong Yuan. / S.M.
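A minimal sketch of the kind of lot-sizing trade-off described above, written in Python with SciPy rather than the Excel Solver model used in the thesis; the cost terms, part data, and capacity figures below are hypothetical stand-ins, not the sponsor's factory data.

```python
# Illustrative only: trade cycle-inventory holding cost against overtime needed to
# absorb the setup workload created by small lots. All numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

demand = np.array([120.0, 80.0, 200.0])    # parts per period for three hypothetical parts
setup_time = np.array([2.0, 1.5, 3.0])     # setup hours per production run
holding_cost = np.array([0.5, 0.8, 0.3])   # $ per part per period of cycle inventory
overtime_rate = 40.0                       # $ per overtime hour
capacity = 6.0                             # regular setup hours available per period

def cost(z):
    lots, overtime = z[:3], z[3]
    # Cycle inventory ~ Q/2 per part, plus the cost of any overtime hours used.
    return np.sum(holding_cost * lots / 2.0) + overtime_rate * overtime

def overtime_constraint(z):
    lots, overtime = z[:3], z[3]
    # Setup workload beyond regular capacity must be covered by overtime.
    return overtime - (np.sum(setup_time * demand / lots) - capacity)

res = minimize(cost, x0=np.array([50.0, 50.0, 50.0, 0.0]),
               bounds=[(1.0, None)] * 3 + [(0.0, None)],
               constraints=[{"type": "ineq", "fun": overtime_constraint}],
               method="SLSQP")
print("lot sizes:", res.x[:3].round(1), "overtime hours:", round(res.x[3], 2),
      "cost:", round(res.fun, 2))
```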
102.
Model simplification of chemical kinetic systems under uncertainty / Coles, Thomas Michael Kyte, January 2011
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2011. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student submitted PDF version of thesis. / Includes bibliographical references (p. 103-108). / This thesis investigates the impact of uncertainty on the reduction and simplification of chemical kinetics mechanisms. Chemical kinetics simulations of complex fuels are very computationally expensive, especially when combined with transport, and so reduction or simplification must be used to make them more tractable. Existing approaches have been in an entirely deterministic setting, even though reaction rate parameters are generally highly uncertain. In this work, potential objectives under uncertainty are defined and then a number of studies are made in the hope of informing the development of a new uncertainty-aware simplification scheme. Modifications to an existing deterministic algorithm are made as a first step towards an appropriate new scheme. / by Thomas Michael Kyte Coles. / S.M.
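As a hedged illustration of why rate-parameter uncertainty matters for such mechanisms, the sketch below propagates log-normal uncertainty in the rate constants of a toy A -> B -> C system; it is not the reduction algorithm developed in the thesis, and all rates are hypothetical.

```python
# A minimal Monte Carlo propagation of rate-constant uncertainty through a toy
# two-reaction mechanism; real fuel mechanisms are vastly larger and stiffer.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n_samples = 200

def rhs(t, y, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

peak_b = []
for _ in range(n_samples):
    # Log-normal uncertainty around nominal rate constants (hypothetical factors).
    k1 = 1.0 * np.exp(0.3 * rng.standard_normal())
    k2 = 0.5 * np.exp(0.3 * rng.standard_normal())
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], args=(k1, k2), max_step=0.1)
    peak_b.append(sol.y[1].max())

print("peak [B]: mean %.3f, std %.3f" % (np.mean(peak_b), np.std(peak_b)))
```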
103.
Model reduction for dynamic sensor steering: a Bayesian approach to inverse problems / Wogrin, Sonja, January 2008
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 97-101). / In many settings, distributed sensors provide dynamic measurements over a specified time horizon that can be used to reconstruct information such as parameters, states or initial conditions. This estimation task can be posed formally as an inverse problem: given a model and a set of measurements, estimate the parameters of interest. We consider the specific problem of computing in real-time the prediction of a contamination event, based on measurements obtained by mobile sensors. The spread of the contamination is modeled by the convection-diffusion equation. A Bayesian approach to the inverse problem yields an estimate of the probability density function of the initial contaminant concentration, which can then be propagated through the forward model to determine the predicted contaminant field at some future time and its associated uncertainty distribution. Sensor steering is effected by formulating and solving an optimization problem that seeks the sensor locations that minimize the uncertainty in this prediction. An important aspect of this Dynamic Sensor Steering Algorithm is the ability to execute in real-time. We achieve this through reduced-order modeling, which (for our two-dimensional examples) yields models that can be solved two orders of magnitude faster than the original system, but only incur average relative errors of magnitude O(10⁻³). The methodology is demonstrated on the contaminant transport problem, but is applicable to a broad class of problems where we wish to observe certain phenomena whose location or features are not known a priori. / by Sonja Wogrin. / S.M.
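A minimal linear-Gaussian sketch of the sensor-steering idea: score candidate sensor locations by the posterior uncertainty they leave in the initial condition. The observation model, grid, prior, and noise level are hypothetical stand-ins for the convection-diffusion setting, and no reduced-order modeling is shown.

```python
# For a Gaussian prior and linear observations, the posterior covariance of the
# initial condition is (G' G / noise_var + prior_cov^{-1})^{-1}; we rank candidate
# sensor pairs by its trace (an A-optimality criterion). Toy 1D setup, hypothetical data.
import numpy as np
from itertools import combinations

x = np.linspace(0.0, 1.0, 50)                                 # spatial grid
prior_cov = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.05)    # smooth Gaussian prior
noise_var = 1e-2

def obs_row(sensor_loc, width=0.05):
    # A sensor measures a locally averaged (Gaussian-weighted) concentration.
    w = np.exp(-(x - sensor_loc) ** 2 / (2 * width ** 2))
    return w / w.sum()

def posterior_trace(sensor_locs):
    G = np.vstack([obs_row(s) for s in sensor_locs])
    info = G.T @ G / noise_var + np.linalg.inv(prior_cov + 1e-6 * np.eye(len(x)))
    return np.trace(np.linalg.inv(info))

candidates = np.linspace(0.1, 0.9, 9)
best = min(combinations(candidates, 2), key=posterior_trace)
print("best pair of sensor locations:", [round(float(s), 2) for s in best])
```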
104.
A generalized precorrected-FFT method for electromagnetic analysis / Leibman, Stephen Gerald, January 2008
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (p. 117-119). / Boundary Element Methods (BEM) can be ideal approaches for simulating the behavior of physical systems in which the volumes have homogeneous properties. These approaches, especially the so-called "fast" or "accelerated" BEM variants, often have significant computational advantages over other well-known methods which solve partial differential equations on a volume domain. However, the implementation of techniques used to accelerate BEM approaches often comes at the cost of some generality, reducing their applicability to many problems and preventing engineers and researchers from easily building on a common, popular base of code. In this thesis we create a BEM solver which uses the Pre-Corrected FFT technique for accelerating computation, and uses a novel approach which allows users to provide arbitrary basis functions. We demonstrate its utility for both electrostatic and full-wave electromagnetic problems in volumes with homogeneous isotropic permittivity, bounded by arbitrarily complex surface geometries. The code is shown to have performance characteristics similar to the best known approaches for these problems. It also provides an increased level of generality, and is designed in a way that should allow other researchers to extend it easily. / by Stephen Gerald Leibman. / S.M.
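For orientation, the sketch below sets up the dense, unaccelerated BEM collocation system for a simple electrostatics case (capacitance of a unit square plate); the precorrected-FFT acceleration and arbitrary basis functions developed in the thesis are not shown, and the panel count is arbitrary.

```python
# Dense collocation BEM for a unit square conductor held at 1 V: piecewise-constant
# panel charges, point-charge approximation off the diagonal, analytic self term.
# The precorrected-FFT method replaces these O(N^2) dense products with FFT-based ones.
import numpy as np

eps0 = 8.854e-12
n = 20                                   # panels per side (N = n*n unknowns)
h = 1.0 / n                              # panel edge length
centers = np.array([[(i + 0.5) * h, (j + 0.5) * h] for i in range(n) for j in range(n)])

# A[i, j]: potential at panel i due to unit charge density on panel j.
d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
A = np.where(d > 0, h * h / (4 * np.pi * eps0 * np.maximum(d, 1e-30)),
             h * np.log(1 + np.sqrt(2)) / (np.pi * eps0))   # analytic self term

sigma = np.linalg.solve(A, np.ones(n * n))   # enforce 1 V on every panel
capacitance = sigma.sum() * h * h            # total charge divided by 1 V
print("capacitance of unit square plate: %.3e F" % capacitance)
```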
105.
Computational design and optimization of infrastructure policy in water and agriculture / Alhassan, Abdulaziz (Abdulaziz Abdulrahman), January 2017
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2017. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 87-90). / Investments in infrastructure tend to be associated with high capital costs, creating a necessity for tools to prioritize and evaluate different infrastructure investment options. This thesis provides a survey of computational tools, and their applicability in fine-tuning infrastructure policy levers, prioritizing among different infrastructure investment options and finding optimal sizing parameters to achieve a certain objective. First, we explore the use of Monte Carlo simulations to project future water demand in Saudi Arabia and then we use the outcome as an input to a Mixed Integer Linear Program (MILP) that investigates the feasibility of seawater desalination for agricultural irrigation under different water costing schemes. Further, we use numerical simulations of partial differential equations to study the conflicting interests between agricultural and municipal water demands in groundwater aquifer withdrawals, and lastly we evaluate the use of photovoltaic-powered electrodialysis reversal (PV-EDR) as a potential technology to desalinate brackish groundwater through a multidisciplinary system design and optimization approach. / by Abdulaziz Alhassan. / S.M.
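A minimal sketch of the Monte Carlo demand-projection step described above; the growth rates, baseline figures, and horizon are hypothetical and not the values used in the thesis, and the downstream MILP is not shown.

```python
# Project future municipal water demand as population growth times per-capita use,
# both treated as uncertain. All parameter values are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_scenarios, years = 10_000, 15
pop0 = 33.0e6                       # baseline population (hypothetical)
percap0 = 100.0                     # baseline demand, m^3 per person per year (hypothetical)

growth = rng.normal(0.020, 0.005, size=n_scenarios)         # annual population growth
percap_drift = rng.normal(-0.005, 0.004, size=n_scenarios)  # efficiency-driven decline

demand = pop0 * (1 + growth) ** years * percap0 * (1 + percap_drift) ** years
lo, med, hi = np.percentile(demand, [5, 50, 95]) / 1e9
print("projected demand in %d years: %.2f / %.2f / %.2f billion m^3 (5/50/95%%)"
      % (years, lo, med, hi))
```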
106.
Recovery of primal solution in dual subgradient schemes / Ma, Jing, S.M., Massachusetts Institute of Technology, January 2007
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 97-99). / In this thesis, we study primal solutions for general optimization problems. In particular, we employ the subgradient method to solve the Lagrangian dual of a convex constrained problem, and use a primal-averaging scheme to obtain near-optimal and near-feasible primal solutions. We numerically evaluate the performance of the scheme in the framework of Network Utility Maximization (NUM), which has recently drawn great research interest. Specifically for the NUM problems, which can have concave or nonconcave utility functions and linear constraints, we apply the dual-based decentralized subgradient method with averaging, exploiting the decomposable structure of the problem, to estimate the rate allocation for individual users in a distributed manner. Unlike the existing literature on primal recovery schemes, we use a constant step-size rule in view of its simplicity and practical significance. Under the Slater condition, we develop a way to effectively reduce the amount of feasibility violation at the approximate primal solutions, namely, by increasing the value of the initial dual iterate; moreover, we extend the established convergence results in the convex case to the more general and realistic situation where the objective function is not necessarily convex. In particular, we explore the asymptotic convergence properties of the averaging sequence, the tradeoffs involved in the selection of parameter values, the estimation of the duality gap for particular functions, and the bounds on the amount of constraint violation and on the value of the primal cost per iteration. Numerical experiments performed on NUM problems with both concave and nonconcave utility functions show that the averaging scheme is more robust in providing near-optimal and near-feasible primal solutions, and it has consistently better performance than other schemes in most of the test instances. / by Jing Ma. / S.M.
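A minimal sketch of the dual subgradient method with primal averaging and a constant step size on a tiny NUM instance with logarithmic utilities; the link-route matrix, weights, and step size are hypothetical, and the nonconcave-utility and feasibility-violation analyses of the thesis are not reproduced.

```python
# Maximize sum_i w_i*log(x_i) subject to R x <= c by iterating on the dual prices:
# each flow best-responds to its path price, prices take a projected subgradient step,
# and the running average of the primal iterates is the recovered primal solution.
import numpy as np

R = np.array([[1.0, 1.0, 0.0],      # link 1 carries flows 1 and 2
              [0.0, 1.0, 1.0]])     # link 2 carries flows 2 and 3
c = np.array([1.0, 2.0])            # link capacities
w = np.array([1.0, 2.0, 1.0])       # utility weights
alpha = 0.01                        # constant step size
p = np.ones(2)                      # initial dual prices (Lagrange multipliers)
x_avg = np.zeros(3)

for k in range(1, 5001):
    price_per_flow = R.T @ p
    x = w / np.maximum(price_per_flow, 1e-9)      # maximizer of w_i*log(x_i) - price*x_i
    x_avg += (x - x_avg) / k                      # running primal average
    p = np.maximum(p + alpha * (R @ x - c), 0.0)  # projected dual subgradient step

print("averaged rates:", x_avg.round(3), "link usage:", (R @ x_avg).round(3))
```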
107.
Optimal Bayesian experimental design in the presence of model error / Feng, Chi, S.M., Massachusetts Institute of Technology, January 2015
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 87-90). / The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction. We propose an information theoretic framework and algorithms for robust optimal experimental design with simulation-based models, with the goal of maximizing information gain in targeted subsets of model parameters, particularly in situations where experiments are costly. Our framework employs a Bayesian statistical setting, which naturally incorporates heterogeneous sources of information. An objective function reflects expected information gain from proposed experimental designs. Monte Carlo sampling is used to evaluate the expected information gain, and stochastic approximation algorithms make optimization feasible for computationally intensive and high-dimensional problems. A key aspect of our framework is the introduction of model calibration discrepancy terms that are used to "relax" the model so that proposed optimal experiments are more robust to model error or inadequacy. We illustrate the approach via several model problems and misspecification scenarios. In particular, we show how optimal designs are modified by allowing for model error, and we evaluate the performance of various designs by simulating "real-world" data from models not considered explicitly in the optimization objective. / by Chi Feng. / S.M.
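A minimal nested Monte Carlo sketch of the expected-information-gain objective on a toy one-parameter model; the simulation-based models, targeted parameter subsets, discrepancy terms, and stochastic-approximation optimizer of the thesis are not shown, and all sample sizes and the design grid are hypothetical.

```python
# Estimate EIG(d) = E[ log p(y|theta,d) - log p(y|d) ] for y = sin(d*theta) + noise,
# with the evidence p(y|d) approximated by an inner Monte Carlo average over the prior.
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.1                          # observation noise std
N_outer, N_inner = 500, 500

def log_like(y, theta, d):
    return -0.5 * ((y - np.sin(d * theta)) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def expected_info_gain(d):
    theta = rng.standard_normal(N_outer)                       # prior draws
    y = np.sin(d * theta) + sigma * rng.standard_normal(N_outer)
    theta_inner = rng.standard_normal(N_inner)
    ll_inner = log_like(y[:, None], theta_inner[None, :], d)   # (N_outer, N_inner)
    log_evidence = np.logaddexp.reduce(ll_inner, axis=1) - np.log(N_inner)
    return np.mean(log_like(y, theta, d) - log_evidence)

designs = np.linspace(0.1, 3.0, 15)
eig = [expected_info_gain(d) for d in designs]
print("best design d = %.2f with estimated EIG %.3f nats"
      % (designs[int(np.argmax(eig))], max(eig)))
```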
108.
Evaluating Intrusion Detection Systems for Energy Diversion Attacks / Sethi, Abhishek Rajkumar, January 2016
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 111-114). / The widespread deployment of smart meters and ICT technologies is enabling continuous collection of high-resolution data about consumption behavior and the health of grid infrastructure. This has also spurred innovations in technological solutions using analytics/machine learning methods that aim to improve the efficiency of grid operations, implement targeted demand management programs, and reduce distribution losses. On one hand, these technological innovations can potentially lead to large-scale adoption of analytics-driven tools for predictive maintenance and anomaly detection systems in the electricity industry. On the other hand, private profit-maximizing firms (distribution utilities) need an accurate assessment of the value of these tools to justify investment in the collection and processing of significant amounts of data and in analytics tools that exploit this data to provide actionable information (e.g., prediction of component failures, alerts regarding fraudulent customer behavior, etc.). In this thesis, we focus on the value assessment of intrusion/fraud detection systems (IDS), and study the tradeoff faced by distribution utilities in terms of the gain from fraud investigations (and deterrence of fraudulent customers) versus the cost of investigations and false alarms triggered due to the probabilistic nature of the IDS. Our main contribution is a Bayesian inspection game framework, which models the interactions between a profit-maximizing distribution utility and a population of strategic customers. In our framework, a fraction of customers are fraudulent: they consume the same average quantity of electricity but report less by strategically manipulating their consumption data. We consider two sources of information incompleteness: first, the distribution utility does not know the identity of fraudulent customers but only knows the fraction of these consumers, and second, the distribution utility does not know the actual theft level but only knows its distribution. We first consider the situation in which only the first source of information incompleteness is present, i.e., the distribution utility has complete information about the actual theft level. We present two simultaneous game models, which share the same assumptions about customer preferences and fraud, but differ in the way in which the distribution utility operates the IDS. In the first model, the distribution utility probabilistically chooses to use the IDS with a default (fixed) configuration. In the second model, the distribution utility can configure/tune the IDS to achieve an optimal operating point (i.e., a combination of detection probability and false alarm rate). Throughout, we assume that the theft level is greater than the cost of attack. Our results show that, for the game with the default IDS configuration, the distribution utility does not use the IDS in equilibrium if the fraction of fraudulent customers is less than a critical fraction. Also, the distribution utility realizes a positive "value of IDS" only if one or both of the following conditions hold: (a) the ratio of detection probability to false alarm probability is greater than a critical ratio, (b) the fraction of fraudulent customers is greater than the critical fraction. For the tunable IDS game, we show that the distribution utility always uses an optimal configuration with non-zero false alarm probability. Furthermore, the distribution utility does not tune the false alarm probability when the fraction of fraudulent customers is greater than a critical fraction. In contrast to the game with fixed IDS, in the game with tunable IDS, the distribution utility realizes a positive value from the IDS, and the value increases with the fraction of fraudulent customers. Next, we consider the situation in which both sources of information incompleteness are present. Specifically, we present a sequential game in which the distribution utility first chooses the optimal configuration of the IDS based on its knowledge of the theft level distribution (Stage 1), and then optimally uses the configured IDS in a simultaneous interaction with the customers (Stage 2). This sequential game naturally enables estimation of the "value of information" about the theft level, which represents the additional monetary benefit the distribution utility can obtain if the exact value of the average theft level is available when choosing the optimal IDS configuration in Stage 1. Our results suggest that the optimal configuration under lack of full information on the theft level lies between the optimal configurations corresponding to the high and low theft levels. Interestingly enough, our analysis also suggests that for certain technical (yet realistic) conditions on the ROC curve that characterizes achievable detection probability and false alarm probability configurations, the value of information about certain combinations of theft levels can attain negligibly small values. / by Abhishek Rajkumar Sethi. / S.M.
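A toy calculation of the detection versus false-alarm trade-off that drives the value of an IDS, scoring operating points on a hypothetical ROC curve; it is not the Bayesian inspection game solved in the thesis, and every number below is made up.

```python
# For each operating point on a hypothetical ROC curve, the expected net benefit of
# acting on alerts is recovered theft minus investigation and false-alarm costs.
import numpy as np

frac_fraud = 0.05           # fraction of fraudulent customers (hypothetical)
theft_value = 300.0         # expected recoverable revenue per detected fraud ($)
invest_cost = 40.0          # cost of investigating one alert ($)
false_alarm_cost = 60.0     # goodwill / handling cost of a false alarm ($)

# Hypothetical concave ROC curve: detection probability as a function of false alarm rate.
fa_rates = np.linspace(0.0, 0.3, 301)
det_probs = fa_rates ** 0.4

benefit = frac_fraud * det_probs * (theft_value - invest_cost)
cost = (1 - frac_fraud) * fa_rates * (invest_cost + false_alarm_cost)
net = benefit - cost

best = np.argmax(net)
print("best operating point: false-alarm rate %.3f, detection prob %.2f, value %.2f $/customer"
      % (fa_rates[best], det_probs[best], max(net[best], 0.0)))
```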
109.
Efficient reduced-basis approximation of scalar nonlinear time-dependent convection-diffusion problems, and extension to compressible flow problems / Men, Han (Han Abby), January 2006
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2006. / Includes bibliographical references (p. 61-65). / In this thesis, the reduced-basis method is applied to nonlinear time-dependent convection-diffusion parameterized partial differential equations (PDEs). A proper orthogonal decomposition (POD) procedure is used to construct the reduced-basis approximation for the field variables. In the presence of highly nonlinear terms, a conventional reduced-basis approximation would be inefficient and no longer superior to classical numerical approaches using advanced iterative techniques. To recover the computational advantage of the reduced-basis approach, an empirical interpolation approximation method is employed to define the coefficient-function approximation of the nonlinear terms. Next, the coefficient-function approximation is incorporated into the reduced-basis method to obtain a reduced-order model of nonlinear time-dependent parameterized convection-diffusion PDEs. Two formulations for the reduced-order models are proposed, which construct the reduced-basis space for the nonlinear functions and for the residual vector, respectively. Finally, an offline-online procedure for rapid and inexpensive evaluation of the reduced-order model solutions and outputs, together with associated asymptotic a posteriori error estimators, is developed. The operation count for the online stage depends only on the dimension of our reduced-basis approximation space and the dimension of our coefficient-function approximation space. The extension of the reduced-order model to a system of equations is also explored. / by Han Men. / S.M.
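A minimal POD/Galerkin sketch on a 1D linear convection-diffusion semi-discretization: snapshots from the full model supply the reduced basis, and the operator is projected onto it. The empirical interpolation of nonlinear terms and the a posteriori error estimators developed in the thesis are not shown; grid sizes and parameters are hypothetical.

```python
# Build snapshots of du/dt = A u (upwind convection + central diffusion), take the
# leading left singular vectors as the POD basis, and integrate the projected system.
import numpy as np
from scipy.integrate import solve_ivp

n, L, nu, beta = 200, 1.0, 0.01, 1.0          # grid size, domain, diffusion, convection
h = L / (n + 1)
x = np.linspace(h, L - h, n)

A = (nu / h**2) * (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
A += (beta / h) * (np.diag(-np.ones(n)) + np.diag(np.ones(n - 1), -1))   # upwind convection

u0 = np.exp(-200 * (x - 0.3) ** 2)            # initial Gaussian pulse
t_eval = np.linspace(0.0, 0.5, 51)
full = solve_ivp(lambda t, u: A @ u, (0, 0.5), u0, t_eval=t_eval, method="BDF")

U, s, _ = np.linalg.svd(full.y, full_matrices=False)
V = U[:, :10]                                  # 10 POD modes as the reduced basis
Ar, a0 = V.T @ A @ V, V.T @ u0                 # Galerkin-projected operator and state
red = solve_ivp(lambda t, a: Ar @ a, (0, 0.5), a0, t_eval=t_eval, method="BDF")

err = np.linalg.norm(V @ red.y - full.y) / np.linalg.norm(full.y)
print("relative reduction error with 10 modes: %.2e" % err)
```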
110.
Robust fluid control of multiclass queueing networks / Su, Hua, S.M., Massachusetts Institute of Technology, January 2006
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2006. / Includes bibliographical references (p. 89-92). / This thesis applies recent advances in the field of robust optimization to the optimal control of multiclass queueing networks. We develop models that take into account the uncertainty in interarrival and service times in multiclass queueing network problems without assuming a specific probability distribution, while remaining highly tractable and providing insight into the corresponding optimal control policy. Our approach also allows us to adjust the level of robustness of the solution to trade off performance and protection against uncertainty. We apply robust optimization to both open and closed queueing networks. For open queueing networks, we study control problems that involve sequencing, routing, and input control decisions, and we optimize the total holding cost. For closed queueing networks, we focus on the sequencing problem and optimize the throughput. We compare the robust solutions to those derived by fluid control, dynamic programming and stochastic input control. We show that the robust control policy leads to better performance. Robust optimization emerges as a promising methodology to address a wide range of multiclass queueing networks subject to uncertainty, as it leads to representations of randomness that make few assumptions on the underlying probabilities. It also remains numerically tractable, and provides theoretical insights into the structure of the optimal control policy. / by Hua Su. / S.M.
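A minimal Bertsimas-Sim-style robust LP sketch of the performance-versus-protection trade-off controlled by an uncertainty budget; the multiclass queueing network control models in the thesis are far richer, and all data below are hypothetical.

```python
# Choose class processing quantities x under a capacity constraint whose service-time
# coefficients may deviate; the budget Gamma bounds how many deviate at once, and the
# robust counterpart adds auxiliary variables z and p (Bertsimas-Sim reformulation).
import numpy as np
from scipy.optimize import linprog

reward = np.array([3.0, 2.0, 4.0])     # reward per unit of each class processed
t_bar = np.array([1.0, 0.8, 1.5])      # nominal service time per unit
t_hat = np.array([0.3, 0.2, 0.6])      # maximum deviation of each service time
T = 10.0                               # available capacity

def robust_reward(gamma):
    # Variables: x (3), z (1), p (3). Maximize reward.x subject to the robust constraint
    # t_bar.x + gamma*z + sum(p) <= T,  z + p_j >= t_hat_j * x_j,  all variables >= 0.
    c = np.concatenate([-reward, [0.0], np.zeros(3)])
    A_ub = np.zeros((4, 7))
    A_ub[0, :3], A_ub[0, 3], A_ub[0, 4:] = t_bar, gamma, 1.0
    for j in range(3):
        A_ub[1 + j, j], A_ub[1 + j, 3], A_ub[1 + j, 4 + j] = t_hat[j], -1.0, -1.0
    b_ub = np.array([T, 0.0, 0.0, 0.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 7, method="highs")
    return -res.fun

for gamma in [0.0, 1.0, 2.0, 3.0]:
    print("Gamma = %.0f -> protected reward %.2f" % (gamma, robust_reward(gamma)))
```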