61

Model simplification of chemical kinetic systems under uncertainty

Coles, Thomas Michael Kyte January 2011 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2011. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 103-108). / This thesis investigates the impact of uncertainty on the reduction and simplification of chemical kinetics mechanisms. Chemical kinetics simulations of complex fuels are very computationally expensive, especially when combined with transport, and so reduction or simplification must be used to make them more tractable. Existing approaches have operated in an entirely deterministic setting, even though reaction rate parameters are generally highly uncertain. In this work, potential objectives under uncertainty are defined and a number of studies are made in the hope of informing the development of a new uncertainty-aware simplification scheme. Modifications to an existing deterministic algorithm are made as a first step towards an appropriate new scheme. / by Thomas Michael Kyte Coles. / S.M.
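
As a rough illustration of the kind of uncertainty-aware screening the thesis argues for, the sketch below samples uncertain rate constants for an invented three-reaction toy mechanism and retains only the reactions whose peak rate matters in at least one sampled mechanism. The mechanism, the lognormal uncertainty, and the retention threshold are all assumptions for illustration, not taken from the thesis.

# Hypothetical sketch: screening negligible reactions across sampled rate constants.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
k_nominal = np.array([1.0, 0.5, 1e-4])   # A->B, B->C, A->C (assumed toy mechanism)
log_uncertainty = 0.5                    # one-sigma spread of lognormal rate factors

def rhs(t, y, k):
    a, b, c = y
    r = np.array([k[0] * a, k[1] * b, k[2] * a])   # reaction rates
    return [-r[0] - r[2], r[0] - r[1], r[1] + r[2]]

def reaction_importance(k):
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], args=(k,), dense_output=True)
    y = sol.sol(np.linspace(0.0, 10.0, 200))
    rates = np.vstack([k[0] * y[0], k[1] * y[1], k[2] * y[0]])
    return rates.max(axis=1) / rates.max()          # peak rate relative to the dominant reaction

samples = [k_nominal * rng.lognormal(sigma=log_uncertainty, size=3) for _ in range(200)]
importance = np.array([reaction_importance(k) for k in samples])
keep = importance.max(axis=0) > 1e-2    # retain a reaction if it matters in ANY sampled mechanism
print("retain reactions:", np.where(keep)[0])
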
62

Model reduction for dynamic sensor steering : a Bayesian approach to inverse problems

Wogrin, Sonja January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 97-101). / In many settings, distributed sensors provide dynamic measurements over a specified time horizon that can be used to reconstruct information such as parameters, states or initial conditions. This estimation task can be posed formally as an inverse problem: given a model and a set of measurements, estimate the parameters of interest. We consider the specific problem of computing, in real time, the prediction of a contamination event, based on measurements obtained by mobile sensors. The spread of the contamination is modeled by the convection-diffusion equation. A Bayesian approach to the inverse problem yields an estimate of the probability density function of the initial contaminant concentration, which can then be propagated through the forward model to determine the predicted contaminant field at some future time and its associated uncertainty distribution. Sensor steering is effected by formulating and solving an optimization problem that seeks the sensor locations that minimize the uncertainty in this prediction. An important aspect of this Dynamic Sensor Steering Algorithm is the ability to execute in real time. We achieve this through reduced-order modeling, which (for our two-dimensional examples) yields models that can be solved two orders of magnitude faster than the original system, but only incur average relative errors of magnitude O(10⁻³). The methodology is demonstrated on the contaminant transport problem, but is applicable to a broad class of problems where we wish to observe certain phenomena whose location or features are not known a priori. / by Sonja Wogrin. / S.M.
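
A minimal linear-Gaussian analogue of the sensor-steering idea is sketched below: for a linear forward propagator standing in for the convection-diffusion model, the posterior covariance of the initial condition is pushed through the propagator and candidate sensor pairs are ranked by the trace of the predicted-field covariance. The dimensions, the random propagator, and the exhaustive search over two sensors are illustrative assumptions; the thesis uses reduced-order models and an optimization formulation instead.

# Illustrative linear-Gaussian version of sensor steering; names and dimensions are assumptions.
import itertools
import numpy as np

n = 50                                      # initial-condition grid size (assumed)
rng = np.random.default_rng(1)
Sigma_prior = np.eye(n)
F = rng.standard_normal((n, n)) * 0.1       # stand-in for the convection-diffusion propagator
noise_var = 0.01

def prediction_uncertainty(sensor_rows):
    idx = np.array(sensor_rows)
    H = np.zeros((len(idx), n))
    H[np.arange(len(idx)), idx] = 1.0                       # pointwise observations
    post_prec = np.linalg.inv(Sigma_prior) + H.T @ H / noise_var
    Sigma_post = np.linalg.inv(post_prec)                   # posterior covariance of initial condition
    return np.trace(F @ Sigma_post @ F.T)                   # uncertainty in the predicted field

best = min(itertools.combinations(range(n), 2), key=prediction_uncertainty)
print("best sensor locations:", best)
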
63

A generalized precorrected-FFT method for electromagnetic analysis

Leibman, Stephen Gerald January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (p. 117-119). / Boundary Element Methods (BEM) can be ideal approaches for simulating the behavior of physical systems in which the volumes have homogeneous properties. These methods, especially the so-called "fast" or "accelerated" BEM approaches, often have significant computational advantages over other well-known methods that solve partial differential equations on a volume domain. However, the techniques used to accelerate BEM approaches often come at the cost of some generality, reducing their applicability to many problems and preventing engineers and researchers from easily building on a common, popular base of code. In this thesis we create a BEM solver that uses the precorrected-FFT technique to accelerate computation, together with a novel approach that allows users to provide arbitrary basis functions. We demonstrate its utility for both electrostatic and full-wave electromagnetic problems in volumes with homogeneous isotropic permittivity, bounded by arbitrarily complex surface geometries. The code is shown to have performance characteristics similar to the best known approaches for these problems. It also provides an increased level of generality, and is designed in a way that should allow other researchers to extend it easily. / by Stephen Gerald Leibman. / S.M.
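
The heart of precorrected-FFT acceleration is replacing dense long-range interactions by a convolution on a regular grid. The fragment below sketches only that grid-convolution step for the Laplace kernel 1/(4*pi*r) using zero padding and the FFT; the projection/interpolation stencils and the near-field precorrection that give the method its name are omitted, and the grid size and charge placement are arbitrary choices.

# Sketch of the grid-convolution step that pFFT-style codes rely on (assumed setup).
import numpy as np

n, h = 32, 0.1                              # grid points per axis, spacing (assumed)
grid_charge = np.zeros((n, n, n))
grid_charge[5, 5, 5] = 1.0                  # unit charges snapped to the grid
grid_charge[20, 18, 9] = -1.0

# Build the free-space kernel on a zero-padded (2n)^3 grid so the circular
# convolution reproduces the aperiodic sum.
coords = np.arange(2 * n)
coords = np.minimum(coords, 2 * n - coords) * h     # distances on the padded torus
X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")
R = np.sqrt(X**2 + Y**2 + Z**2)
kernel = np.zeros_like(R)
kernel[R > 0] = 1.0 / (4.0 * np.pi * R[R > 0])      # Laplace Green's function, self-term zeroed

padded = np.zeros((2 * n, 2 * n, 2 * n))
padded[:n, :n, :n] = grid_charge
potential = np.fft.irfftn(np.fft.rfftn(padded) * np.fft.rfftn(kernel), padded.shape)[:n, :n, :n]
print("potential at a probe grid point:", potential[10, 10, 10])
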
64

Computational design and optimization of infrastructure policy in water and agriculture

Alhassan, Abdulaziz (Abdulaziz Abdulrahman) January 2017 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2017. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 87-90). / Investments in infrastructure tend to be associated with high capital costs, creating a need for tools to prioritize and evaluate different infrastructure investment options. This thesis provides a survey of computational tools and their applicability in fine-tuning infrastructure policy levers, prioritizing among different infrastructure investment options, and finding optimal sizing parameters to achieve a certain objective. First, we explore the use of Monte Carlo simulations to project future water demand in Saudi Arabia, and then use the outcome as an input to a Mixed Integer Linear Program (MILP) that investigates the feasibility of seawater desalination for agricultural irrigation under different water costing schemes. Further, we use numerical simulations of partial differential equations to study the conflicting interests between agricultural and municipal water demands in groundwater aquifer withdrawals, and lastly we evaluate the use of photovoltaic-powered electrodialysis reversal (PV-EDR) as a potential technology to desalinate brackish groundwater through a multidisciplinary system design and optimization approach. / by Abdulaziz Alhassan. / S.M.
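
A minimal Monte Carlo demand-projection sketch in the spirit of the first step above is shown below; the baseline demand and the distributions of the growth drivers are invented placeholders. In a workflow like the one the thesis describes, the resulting demand percentiles would feed the MILP as scenario inputs.

# Hypothetical Monte Carlo projection of future water demand.
import numpy as np

rng = np.random.default_rng(42)
baseline_demand = 24.8          # km^3/year, placeholder value
years, n_samples = 20, 10_000

# Uncertain drivers: population growth and per-capita intensity change (assumed normal).
pop_growth = rng.normal(0.020, 0.005, size=(n_samples, years))
intensity_change = rng.normal(-0.005, 0.004, size=(n_samples, years))

paths = baseline_demand * np.cumprod(1.0 + pop_growth + intensity_change, axis=1)
p10, p50, p90 = np.percentile(paths[:, -1], [10, 50, 90])
print(f"demand in year {years}: P10={p10:.1f}, P50={p50:.1f}, P90={p90:.1f} km^3/yr")
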
65

Recovery of primal solution in dual subgradient schemes

Ma, Jing, S.M. Massachusetts Institute of Technology January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 97-99). / In this thesis, we study primal solutions for general optimization problems. In particular, we employ the subgradient method to solve the Lagrangian dual of a convex constrained problem, and use a primal-averaging scheme to obtain near-optimal and near-feasible primal solutions. We numerically evaluate the performance of the scheme in the framework of Network Utility Maximization (NUM), which has recently drawn great research interest. Specifically for NUM problems, which can have concave or nonconcave utility functions and linear constraints, we apply the dual-based decentralized subgradient method with averaging to estimate the rate allocation for individual users in a distributed manner, exploiting the problem's decomposable structure. Unlike the existing literature on primal recovery schemes, we use a constant step-size rule in view of its simplicity and practical significance. Under the Slater condition, we develop a way to effectively reduce the amount of feasibility violation at the approximate primal solutions, namely, by increasing the value of the initial dual iterate; moreover, we extend the convergence results established in the convex case to the more general and realistic situation where the objective function is nonconvex. In particular, we explore the asymptotic convergence properties of the averaging sequence, the tradeoffs involved in the selection of parameter values, the estimation of the duality gap for particular functions, and the bounds for the amount of constraint violation and the value of the primal cost per iteration. Numerical experiments performed on NUM problems with both concave and nonconcave utility functions show that the averaging scheme is more robust in providing near-optimal and near-feasible primal solutions, and it has consistently better performance than other schemes in most of the test instances. / by Jing Ma. / S.M.
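
The sketch below illustrates the dual subgradient method with a constant step size and primal averaging on a small log-utility NUM instance; the network, weights, step size, and the deliberately large initial dual iterate are illustrative choices rather than the thesis's test cases.

# Dual subgradient with constant step size and primal averaging on a toy NUM instance.
import numpy as np

R = np.array([[1, 1, 0],        # link-route incidence: 2 links, 3 users (assumed)
              [0, 1, 1]], dtype=float)
c = np.array([1.0, 2.0])        # link capacities
w = np.array([1.0, 2.0, 1.0])   # utility weights, U_i(x) = w_i * log(x)
alpha, iters = 0.05, 2000       # constant step-size rule
lam = np.full(2, 5.0)           # a large initial dual iterate reduces primal infeasibility

x_sum = np.zeros(3)
for k in range(iters):
    price = R.T @ lam                                        # per-user aggregate link price
    x = np.clip(w / np.maximum(price, 1e-9), 0.0, 10.0)      # maximizer of the Lagrangian
    lam = np.maximum(lam + alpha * (R @ x - c), 0.0)         # projected dual subgradient step
    x_sum += x

x_avg = x_sum / iters                                        # averaged primal sequence
print("average rates:", x_avg, "constraint slack:", c - R @ x_avg)
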
66

Optimal Bayesian experimental design in the presence of model error

Feng, Chi, S.M. Massachusetts Institute of Technology January 2015 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 87-90). / The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction. We propose an information theoretic framework and algorithms for robust optimal experimental design with simulation-based models, with the goal of maximizing information gain in targeted subsets of model parameters, particularly in situations where experiments are costly. Our framework employs a Bayesian statistical setting, which naturally incorporates heterogeneous sources of information. An objective function reflects expected information gain from proposed experimental designs. Monte Carlo sampling is used to evaluate the expected information gain, and stochastic approximation algorithms make optimization feasible for computationally intensive and high-dimensional problems. A key aspect of our framework is the introduction of model calibration discrepancy terms that are used to "relax" the model so that proposed optimal experiments are more robust to model error or inadequacy. We illustrate the approach via several model problems and misspecification scenarios. In particular, we show how optimal designs are modified by allowing for model error, and we evaluate the performance of various designs by simulating "real-world" data from models not considered explicitly in the optimization objective. / by Chi Feng. / S.M.
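
A bare-bones nested Monte Carlo estimator of expected information gain is sketched below for an invented one-parameter nonlinear model; the model g(theta, d), noise level, and sample sizes are assumptions, and the model-discrepancy terms central to the thesis are not included.

# Nested Monte Carlo estimate of expected information gain over a 1-D design space.
import numpy as np

rng = np.random.default_rng(7)
sigma = 0.1

def g(theta, d):
    return theta**3 * d**2 + theta * np.exp(-np.abs(0.2 - d))

def expected_information_gain(d, n_outer=500, n_inner=500):
    theta = rng.uniform(0.0, 1.0, n_outer)                    # prior samples
    y = g(theta, d) + sigma * rng.standard_normal(n_outer)    # simulated data
    log_lik = -0.5 * ((y - g(theta, d)) / sigma) ** 2         # log p(y_i | theta_i, d), up to a constant
    theta_inner = rng.uniform(0.0, 1.0, n_inner)
    diffs = (y[:, None] - g(theta_inner[None, :], d)) / sigma
    log_evidence = np.log(np.mean(np.exp(-0.5 * diffs**2), axis=1))   # same constant cancels
    return np.mean(log_lik - log_evidence)

designs = np.linspace(0.0, 1.0, 11)
eig = [expected_information_gain(d) for d in designs]
print("best design:", designs[int(np.argmax(eig))])
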
67

Evaluating Intrusion Detection Systems for Energy Diversion Attacks

Sethi, Abhishek Rajkumar January 2016 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 111-114). / The widespread deployment of smart meters and ICT technologies is enabling continuous collection of high-resolution data about consumption behavior and the health of grid infrastructure. This has also spurred innovations in technological solutions using analytics and machine learning methods that aim to improve the efficiency of grid operations, implement targeted demand management programs, and reduce distribution losses. On one hand, these technological innovations can potentially lead to large-scale adoption of analytics-driven tools for predictive maintenance and anomaly detection in the electricity industry. On the other hand, private profit-maximizing firms (distribution utilities) need an accurate assessment of the value of these tools to justify investing in the collection and processing of significant amounts of data and in buying or implementing analytics tools that exploit this data to provide actionable information (e.g., prediction of component failures, alerts regarding fraudulent customer behavior). In this thesis, we focus on the value assessment of intrusion/fraud detection systems (IDS) and study the tradeoff faced by distribution utilities between the gain from fraud investigations (and deterrence of fraudulent customers) and the cost of investigations and false alarms triggered by the probabilistic nature of the IDS. Our main contribution is a Bayesian inspection game framework, which models the interactions between a profit-maximizing distribution utility and a population of strategic customers. In our framework, a fraction of customers are fraudulent: they consume the same average quantity of electricity but report less by strategically manipulating their consumption data. We consider two sources of information incompleteness: first, the distribution utility does not know the identity of fraudulent customers but only knows the fraction of these consumers, and second, the distribution utility does not know the actual theft level but only knows its distribution. We first consider the situation in which only the first source of information incompleteness is present, i.e., the distribution utility has complete information about the actual theft level. We present two simultaneous game models, which share the same assumptions about customer preferences and fraud, but differ in how the distribution utility operates the IDS. In the first model, the distribution utility probabilistically chooses to use the IDS with a default (fixed) configuration. In the second model, the distribution utility can configure/tune the IDS to achieve an optimal operating point (i.e., a combination of detection probability and false alarm rate). Throughout, we assume that the theft level is greater than the cost of attack. Our results show that, for the game with the default IDS configuration, the distribution utility does not use the IDS in equilibrium if the fraction of fraudulent customers is less than a critical fraction. Also, the distribution utility realizes a positive "value of IDS" only if one or both of the following conditions hold: (a) the ratio of detection probability to false alarm probability is greater than a critical ratio, (b) the fraction of fraudulent customers is greater than the critical fraction. For the tunable IDS game, we show that the distribution utility always uses an optimal configuration with non-zero false alarm probability. Furthermore, the distribution utility does not tune the false alarm probability when the fraction of fraudulent customers is greater than a critical fraction. In contrast to the game with a fixed IDS, in the game with a tunable IDS the distribution utility realizes a positive value from the IDS, and this value increases with the fraction of fraudulent customers. Next, we consider the situation in which both sources of information incompleteness are present. Specifically, we present a sequential game in which the distribution utility first chooses the optimal configuration of the IDS based on its knowledge of the theft level distribution (Stage 1), and then optimally uses the configured IDS in a simultaneous interaction with the customers (Stage 2). This sequential game naturally enables estimation of the "value of information" about the theft level, which represents the additional monetary benefit the distribution utility can obtain if the exact value of the average theft level is available when choosing the optimal IDS configuration in Stage 1. Our results suggest that the optimal configuration under lack of full information about the theft level lies between the optimal configurations corresponding to the high and low theft levels. Interestingly, our analysis also suggests that under certain technical (yet realistic) conditions on the ROC curve that characterizes the achievable detection probability and false alarm probability configurations, the value of information about certain combinations of theft levels can be negligibly small. / by Abhishek Rajkumar Sethi. / S.M.
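
As a toy illustration of the inspect-or-ignore tradeoff described above, the fragment below computes the utility's expected net benefit per customer from acting on IDS alerts at a fixed operating point; all payoff parameters are hypothetical and the customers' strategic responses are not modeled, so it only hints at how a critical fraction of fraudulent customers arises.

# Hypothetical payoff calculation for acting on IDS alerts vs. ignoring them.
import numpy as np

theft = 50.0            # revenue lost to one fraudulent customer (assumed units)
recovery = 60.0         # amount recovered when fraud is detected (fine plus back-billing)
cost_inspect = 5.0      # cost of running one investigation
cost_false_alarm = 20.0
p_detect, p_false = 0.8, 0.1        # a fixed IDS operating point on the ROC curve

def utility_gain_from_ids(fraction_fraudulent):
    # Expected net benefit per customer of investigating IDS alerts.
    gain_true = fraction_fraudulent * p_detect * (recovery - cost_inspect)
    loss_false = (1.0 - fraction_fraudulent) * p_false * (cost_false_alarm + cost_inspect)
    return gain_true - loss_false

fractions = np.linspace(0.0, 0.3, 301)
gains = np.array([utility_gain_from_ids(f) for f in fractions])
critical = fractions[np.argmax(gains > 0)]
print(f"IDS pays off once the fraudulent fraction exceeds roughly {critical:.3f}")
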
68

Efficient reduced-basis approximation of scalar nonlinear time-dependent convection-diffusion problems, and extension to compressible flow problems

Men, Han (Han Abby) January 2006 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2006. / Includes bibliographical references (p. 61-65). / In this thesis, the reduced-basis method is applied to nonlinear time-dependent convection-diffusion parameterized partial differential equations (PDEs). A proper orthogonal decomposition (POD) procedure is used to construct the reduced-basis approximation of the field variables. In the presence of highly nonlinear terms, the conventional reduced-basis method would be inefficient and no longer superior to classical numerical approaches using advanced iterative techniques. To recover the computational advantage of the reduced-basis approach, an empirical interpolation method is employed to define the coefficient-function approximation of the nonlinear terms. Next, the coefficient-function approximation is incorporated into the reduced-basis method to obtain a reduced-order model of nonlinear time-dependent parameterized convection-diffusion PDEs. Two formulations for the reduced-order models are proposed, which construct the reduced-basis space for the nonlinear functions and the residual vector, respectively. Finally, an offline-online procedure for rapid and inexpensive evaluation of the reduced-order model solutions and outputs, as well as associated asymptotic a posteriori error estimators, is developed. / (cont.) The operation count for the online stage depends only on the dimension of our reduced-basis approximation space and the dimension of our coefficient-function approximation space. The extension of the reduced-order model to a system of equations is also explored. / by Han Men. / S.M.
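
The offline ingredients described above can be sketched compactly: POD modes come from the SVD of a snapshot matrix, and empirical-interpolation points are chosen greedily where the current interpolation residual peaks. The snapshot generator below is a placeholder for the full-order nonlinear term, and the energy tolerance is an arbitrary choice.

# POD from snapshots plus a greedy (EIM/DEIM-style) interpolation-point selection.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
# Placeholder snapshots of a nonlinear term g(u; mu) for sampled parameters mu.
snapshots = np.column_stack([np.exp(-mu * x) * np.sin(np.pi * mu * x)
                             for mu in rng.uniform(1.0, 10.0, 60)])

# POD: dominant left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_pod = int(np.searchsorted(energy, 1.0 - 1e-4)) + 1
basis = U[:, :n_pod]

# Greedy interpolation points: pick where the current interpolation residual peaks.
points = [int(np.argmax(np.abs(basis[:, 0])))]
for m in range(1, n_pod):
    coeffs = np.linalg.solve(basis[points, :m], basis[points, m])
    residual = basis[:, m] - basis[:, :m] @ coeffs
    points.append(int(np.argmax(np.abs(residual))))
print(f"{n_pod} POD modes, interpolation points at x =", np.round(x[points], 3))
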
69

Robust fluid control of multiclass queueing networks

Su, Hua, S.M. Massachusetts Institute of Technology January 2006 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2006. / Includes bibliographical references (p. 89-92). / This thesis applies recent advances in the field of robust optimization to the optimal control of multiclass queueing networks. We develop models that take into account the uncertainty of interarrival and service times in multiclass queueing network problems without assuming a specific probability distribution, while remaining highly tractable and providing insight into the corresponding optimal control policy. Our approach also allows us to adjust the level of robustness of the solution to trade off performance and protection against uncertainty. We apply robust optimization to both open and closed queueing networks. For open queueing networks, we study control problems that involve sequencing, routing and input control decisions, and optimize the total holding cost. For closed queueing networks, we focus on the sequencing problem and optimize the throughput. We compare the robust solutions to those derived by fluid control, dynamic programming and stochastic input control, and show that the robust control policy leads to better performance. Robust optimization emerges as a promising methodology to address a wide range of multiclass queueing networks subject to uncertainty, as it leads to representations of randomness that make few assumptions on the underlying probabilities. It also remains numerically tractable, and provides theoretical insights into the structure of the optimal control policy. / by Hua Su. / S.M.
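
As a crude stand-in for the robust fluid control problems studied in the thesis, the LP below sequences a two-class single-server fluid model over a discretized horizon, with arrival rates inflated to a worst-case value inside a simple budgeted deviation; all rates, costs, and the horizon are invented, and the full budget-of-uncertainty formulation is not reproduced.

# Toy robust fluid-sequencing LP for a two-class single-server queue.
import numpy as np
from scipy.optimize import linprog

T, dt = 40, 0.25                                            # discretized horizon (assumed)
lam = np.array([0.3, 0.4]) + 1.0 * np.array([0.1, 0.1])     # nominal rates plus a budget-1 deviation
mu = np.array([1.0, 1.5])                                   # service rates
cost = np.array([2.0, 1.0])                                 # holding costs
q0 = np.array([3.0, 2.0])                                   # initial fluid levels

# Variables: u[i, t] = server effort on class i in slot t, flattened to length 2*T.
# Queue levels are eliminated via q_i[t] = q0_i + dt*(t*lam_i - mu_i*sum_{s<t} u_i[s]),
# so minimizing total holding cost is linear in u with weight (T - s) per slot.
w = np.arange(T, 0, -1, dtype=float)
c = np.concatenate([-cost[i] * mu[i] * dt * w for i in range(2)])

A_ub, b_ub = [], []
for i in range(2):                                          # nonnegativity of the fluid levels
    for t in range(1, T + 1):
        row = np.zeros(2 * T)
        row[i * T:i * T + t] = mu[i] * dt
        A_ub.append(row)
        b_ub.append(q0[i] + lam[i] * dt * t)
for t in range(T):                                          # single server: u_0[t] + u_1[t] <= 1
    row = np.zeros(2 * T)
    row[t] = row[T + t] = 1.0
    A_ub.append(row)
    b_ub.append(1.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, 1)] * (2 * T))
u = res.x.reshape(2, T)
print("robust effort split in the first five slots:\n", np.round(u[:, :5], 2))
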
70

Variational constitutive updates for strain gradient isotropic plasticity

Qiao, Lei, Ph. D. Massachusetts Institute of Technology January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 93-96). / In the past decades, various strain gradient isotropic plasticity theories have been developed to describe the size-dependent plastic deformation mechanisms observed experimentally in micron-indentation, torsion, bending and thin-film bulge tests in metallic materials. Strain gradient plasticity theories also constitute a convenient device to introduce ellipticity in the differential equations governing plastic deformation in the presence of softening. The main challenge for numerical formulations is that the effective plastic strain, a local internal variable in classic isotropic plasticity theory, is now governed by a partial differential equation that includes spatial derivatives. Most current numerical formulations are based on Aifantis' one-parameter model with a Laplacian term [Aifantis and Muhlhaus, Int. J. Solids Struct., 28:845-857, 1991]. As indicated by [Fleck and Hutchinson, J. Mech. Phys. Solids, 49:2245-2271, 2001], one parameter is not sufficient to match the experimental data. Therefore a robust and efficient computational framework that can handle more parameters is still needed. In this thesis, a numerical formulation based on the framework of variational constitutive updates is presented to solve the initial boundary value problem in strain gradient isotropic plasticity. One advantage of this approach compared to mixed methods is that it avoids the need to solve for both the displacement and the effective plastic strain fields simultaneously. Another advantage of this approach is, as has been amply established for many other material models, that the solution of the problem follows a minimum principle, thus providing a convenient basis for error estimation and adaptive remeshing. / (cont.) The advantages of the framework of variational constitutive updates have already been verified in a wide class of material models including viscoelasticity, viscoplasticity, crystal plasticity and soils; however, this approach has not been implemented in strain gradient plasticity models. In this thesis, a three-parameter strain gradient isotropic plasticity model is formulated within the variational framework, which is then taken as a basis for finite element discretization. The resulting model is implemented in a computer code and exercised on benchmark problems to demonstrate the robustness and versatility of the proposed method. / by Lei Qiao. / S.M.
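
A one-dimensional, local (gradient-free) caricature of a variational constitutive update is sketched below: at each strain increment the new plastic strain minimizes an incremental energy (elastic storage plus hardening plus dissipation), and the stress follows from the minimizer. Material constants are arbitrary, and the strain-gradient terms that are the thesis's focus are not included.

# Incremental energy minimization as a 1-D stand-in for a variational constitutive update.
import numpy as np
from scipy.optimize import minimize_scalar

E, H, sigma_y = 200e3, 10e3, 250.0     # Young's modulus, hardening modulus, yield stress (MPa)

def incremental_energy(ep_new, eps, ep_old):
    elastic = 0.5 * E * (eps - ep_new) ** 2           # stored elastic energy
    hardening = 0.5 * H * ep_new ** 2                 # isotropic hardening energy
    dissipation = sigma_y * abs(ep_new - ep_old)      # incremental plastic dissipation
    return elastic + hardening + dissipation

ep = 0.0
for eps in np.linspace(0.0, 0.01, 11):                # prescribed strain path
    res = minimize_scalar(incremental_energy, args=(eps, ep),
                          bounds=(ep - 0.05, ep + 0.05), method="bounded")
    ep = res.x
    stress = E * (eps - ep)
    print(f"strain {eps:.4f}  plastic strain {ep:.5f}  stress {stress:7.1f} MPa")
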
