111. Variational constitutive updates for strain gradient isotropic plasticity / Qiao, Lei, Ph. D. Massachusetts Institute of Technology / January 2009
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 93-96). / In the past decades, various strain gradient isotropic plasticity theories have been developed to describe the size-dependent plastic deformation mechanisms observed experimentally in micro-indentation, torsion, bending and thin-film bulge tests on metallic materials. Strain gradient plasticity theories also constitute a convenient device to introduce ellipticity in the differential equations governing plastic deformation in the presence of softening. The main challenge for numerical formulations is that the effective plastic strain, a local internal variable in classic isotropic plasticity theory, is now governed by a partial differential equation that includes spatial derivatives. Most current numerical formulations are based on Aifantis' one-parameter model with a Laplacian term [Aifantis and Muhlhaus, Int. J. Solids Struct., 28:845-857, 1991]. As indicated in [Fleck and Hutchinson, J. Mech. Phys. Solids, 49:2245-2271, 2001], one parameter is not sufficient to match the experimental data. Therefore a robust and efficient computational framework that can handle more parameters is still needed. In this thesis, a numerical formulation based on the framework of variational constitutive updates is presented to solve the initial boundary value problem of strain gradient isotropic plasticity. One advantage of this approach over mixed methods is that it avoids the need to solve for both the displacement and the effective plastic strain fields simultaneously. Another advantage, as has been amply established for many other material models, is that the solution of the problem follows a minimum principle, thus providing a convenient basis for error estimation and adaptive remeshing. The advantages of the framework of variational constitutive updates have already been verified for a wide class of material models including visco-elasticity, visco-plasticity, crystal plasticity and soils; however, the approach had not previously been applied to strain gradient plasticity models. In this thesis, a three-parameter strain gradient isotropic plasticity model is formulated within the variational framework, which is then taken as a basis for finite element discretization. The resulting model is implemented in a computer code and exercised on benchmark problems to demonstrate the robustness and versatility of the proposed method. / by Lei Qiao. / S.M.
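The variational update can be illustrated with a one-dimensional, gradient-free sketch: at each load step the plastic strain minimizes an incremental potential (elastic energy plus hardening plus dissipation). The moduli below are illustrative, and a grid search stands in for a proper return-mapping solve; the thesis's actual model adds the strain gradient terms and two further parameters, all omitted here.

```python
# Minimal 1-D sketch of an incremental variational constitutive update
# (J2-type plasticity with linear hardening). All values are illustrative.
import numpy as np

E, H, sigma_y = 200.0, 10.0, 0.25   # elastic modulus, hardening modulus, yield stress

def incremental_energy(eps, p_old, dp):
    """Incremental potential: elastic energy + stored hardening + dissipation."""
    p = p_old + dp
    return 0.5 * E * (eps - p) ** 2 + 0.5 * H * p ** 2 + sigma_y * abs(dp)

def update(eps, p_old):
    """Minimize the incremental potential over the plastic increment dp."""
    dps = np.linspace(-0.1, 0.1, 20001)
    vals = incremental_energy(eps, p_old, dps)
    return p_old + dps[np.argmin(vals)]

p = 0.0
for eps in np.linspace(0.0, 0.01, 11):   # monotonic loading path
    p = update(eps, p)
```

After yield, the minimizer recovers the classical consistency condition E(eps - p) = sigma_y + H p, which is how the minimum principle replaces the usual Kuhn-Tucker return mapping.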
112. Multi-objective constrained optimization for decision making and optimization for system architectures / Lin, Maokai / January 2010
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 171-174). / This thesis proposes new methods to solve three problems: 1) how to model and solve decision-making problems, 2) how to translate between a graphical representation of systems and a matrix representation of systems, and 3) how to cluster single and multiple Design Structure Matrices (DSM). To solve the first problem, the thesis provides an approach to model decision-making problems as multi-objective Constraint Optimization Problems (COP) based on their common structures. A set of new algorithms to find the Pareto front of a multi-objective COP is developed by generalizing the Conflict-directed A* (CDA*) algorithm for single-objective COPs. Two case studies - an Apollo mission mode study and an Earth science decadal survey study - are provided to demonstrate the effectiveness of the modeling approach and of the algorithms when applied to real-world problems. For the second problem, the thesis first extends classical DSMs to incorporate different relations between components in a system. The Markov property of the extended DSM is then revealed. Furthermore, the thesis introduces the concept of "projection", which maps and condenses a system graph to a DSM based on the Markov property of the DSM. For the last problem, an integer programming model is developed to encode the single-DSM clustering problem. The thesis tests the effectiveness of the model by applying it to part of a real-world jet engine design project. The model is further extended to solve the multiple-DSM clustering problem. / by Maokai Lin. / S.M.
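CDA* itself involves conflict extraction and guided search; what its multi-objective generalization must ultimately return is a Pareto front. A minimal dominance filter (minimization in every objective) sketches that target, independently of the search strategy:

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors (tuples),
    assuming minimization in every objective."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

For example, among the cost/risk pairs (1, 5), (2, 2), (5, 1), (3, 3), (4, 4), the last two are dominated by (2, 2) and drop out of the front.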
113. Characterization of unsteady loading due to impeller-diffuser interaction in centrifugal compressors / Lusardi, Christopher (Christopher Dean) / January 2012
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2012. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 89-90). / Time-dependent simulations are used to characterize the unsteady impeller blade loading due to impeller-diffuser interaction in centrifugal compressor stages. The capability of the simulations is assessed by comparing results against unsteady pressure and velocity measurements in the vaneless space. Simulations are shown to be adequate for identifying the trends of unsteady impeller blade loading with operating and design parameters. However, they are not sufficient for predicting the absolute magnitude of loading unsteadiness: errors of up to 14% exist between absolute values of flow quantities. Evidence suggests that the k-ε turbulence model used is inappropriate for centrifugal compressor flow and is the main source of these errors. The unsteady pressure profile on the blade surface is characterized as the sum of two superimposed pressure components. The first component varies monotonically along the blade chord. The second component can be interpreted as an acoustic wave propagating upstream. Both components fluctuate at the diffuser vane passing frequency, but at different phase angles. The unsteady loading is the sum of the fluctuation amplitudes of the two components minus a value that is a function of the phase relationship between them. Simulation results for different compressor designs are compared. The differences observed are primarily attributed to the amplitude of the pressure fluctuation on the pressure side of the blade and to the wavelength of the pressure disturbance propagating upstream. Lower pressure-side pressure fluctuations are associated with a weaker pressure non-uniformity at the diffuser inlet as a result of a lower incidence angle into the diffuser.
The wavelength of the pressure disturbance propagating upstream sets the domain on the blade surface in which the phase relationship between pressure component fluctuations is favorable. A longer wavelength increases the domain over which this phase relationship is such that the amplitude of unsteadiness is reduced. / by Christopher Lusardi. / S.M.
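The phase argument above has a compact phasor interpretation: for two components fluctuating at the same (vane passing) frequency, the resultant amplitude falls short of the sum of the individual amplitudes by an amount set by their relative phase. A small sketch of that identity, purely illustrative of the decomposition described in the abstract:

```python
import numpy as np

def combined_amplitude(a1, a2, phase):
    """Amplitude of a1*sin(w*t) + a2*sin(w*t + phase), by phasor addition."""
    return np.sqrt(a1**2 + a2**2 + 2.0 * a1 * a2 * np.cos(phase))
```

In phase (phase = 0) the amplitudes simply add; in antiphase (phase = pi) they cancel, so an unfavorable phase relationship reduces the net loading unsteadiness.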
114. Iterative algorithms for a joint pricing and inventory control problem with nonlinear demand functions / Mazumdar, Anupam, S.M. Massachusetts Institute of Technology / January 2009
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 79-81). / Price management, production planning and inventory control are important determinants of a firm's profitability. The intense competition brought about by rapid innovation, lean manufacturing and the internet revolution has compelled firms to adopt dynamic strategies that involve a complex interplay between pricing and production decisions. In this thesis we consider some of these problems and develop computationally efficient algorithms that aim to solve them optimally in a finite amount of time. In the first half of the thesis we consider the joint pricing and inventory control problem in a deterministic, multiperiod setting utilizing the popular log-linear demand model. We develop four algorithms that aim to solve the resulting profit maximization problem in a finite amount of time. The developed algorithms are then tested in a variety of settings, ranging from small to large instances of trial data. The second half of the thesis deals with setting prices effectively when customer demand is assumed to follow the multinomial logit demand model, the most popular discrete choice demand model. The profit maximization problem (even in the absence of constraints) is non-convex and hard to solve. Despite this fact, we develop algorithms that compute the optimal solution efficiently. We test the developed algorithms in a wide variety of scenarios, from small to large numbers of customer segments, with and without production/inventory constraints. The last part of the thesis develops solution methods for the joint pricing and inventory control problem when costs are linear and demand follows the multinomial logit model. / by Anupam Mazumdar. / S.M.
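As a hedged illustration of the second half, the profit objective under multinomial logit demand can be written in a few lines. The linear utility a - b*p, a single customer segment, and the absence of constraints are simplifying assumptions for this sketch, not the thesis's full model:

```python
import numpy as np

def mnl_profit(prices, costs, a, b):
    """Expected per-customer profit under multinomial-logit demand with an
    outside (no-purchase) option of utility 0. Product i has deterministic
    utility a - b*prices[i]; a and b are illustrative parameters."""
    prices = np.asarray(prices, dtype=float)
    weights = np.exp(a - b * prices)
    probs = weights / (1.0 + weights.sum())   # choice probabilities
    return float(((prices - np.asarray(costs, dtype=float)) * probs).sum())
```

The resulting objective is non-concave in prices, which is why specialized algorithms, such as those developed in the thesis, are needed to find the optimum efficiently.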
115. Optimizing beer distribution game order policy using numerical simulations / Xiao, Qinwen / January 2009
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 63-64). / One of the major challenges in supply chain management is the level of information availability. It is very hard yet important to coordinate each stage of the supply chain when information is not centralized and demand is uncertain. In this thesis, I analyze the bullwhip effect in supply chain management using the MIT Beer Distribution Game. I also propose heuristics and models to optimize the MIT Beer Distribution Game order policy for both known and unknown customer demand. The proposed model provides each player with an order policy based on how many weeks of inventory the player should keep on hand to minimize the global cost of the supply chain. The optimized order policy is robust, practical, and generated by numerical simulations. The model is applied in a number of experiments involving deterministic and random demand and lead times. The simulation results of my work are compared with two other artificial-agent algorithms, and the improvements brought by my results are presented and analyzed. / by Qinwen Xiao. / S.M.
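A "weeks of inventory" rule of the kind described can be sketched as an order-up-to policy on the inventory position. The exact policy in the thesis is tuned by numerical simulation, so this only shows the general shape; the parameter names are illustrative:

```python
def order_quantity(on_hand, backlog, in_transit, demand_forecast, weeks_of_cover):
    """Order-up-to policy: raise the inventory position (on hand minus
    backlog plus pipeline stock) to `weeks_of_cover` weeks of forecast
    demand; never order a negative quantity."""
    position = on_hand - backlog + in_transit
    target = weeks_of_cover * demand_forecast
    return max(0, target - position)
```

Keeping orders tied to the full inventory position (including the pipeline) rather than to on-hand stock alone is precisely what damps the over-ordering that drives the bullwhip effect.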
116. A fast enriched FEM for Poisson equations involving interfaces / Fast enriched finite element method for Poisson equations involving interfaces / Huynh, Thanh Le Ngoc / January 2008
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (leaves 55-56). / We develop a fast enriched finite element method for solving Poisson equations involving complex-geometry interfaces using regular Cartesian grids. The presence of interfaces is accounted for by developing suitable jump conditions. The immersed boundary method (IBM) and the immersed interface method (IIM) are successfully used to solve such problems when combined with a fast Fourier transform. However, the IBM and the IIM, which are developed from the finite difference method, have several disadvantages, including the characterization of the null spaces and the inability to treat complex geometries accurately. We propose a solution to these difficulties by employing the finite element method. The continuous Galerkin solution approximations at the interface elements are modified using enriched basis functions to ensure that optimal convergence rates are obtained. The FFT is applied in the fast Poisson solver to significantly accelerate the solution of the global matrix system. For reasonably small interfaces, the operational cost is almost linearly proportional to the number of Cartesian grid points. The method is further extended to solve problems involving multiple materials while preserving optimal accuracy. Several benchmark examples demonstrate the performance of the method. / by Thanh Le Ngoc Huynh. / S.M.
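The fast Poisson solver idea, reduced to its simplest setting (one dimension, periodic boundary, no interface), looks as follows. The thesis's solver additionally handles the enriched interface corrections; this sketch only shows why the FFT makes the regular-grid part nearly linear in cost:

```python
import numpy as np

def poisson_periodic(f, L=2.0 * np.pi):
    """Solve u'' = f on a periodic interval of length L by FFT.
    Assumes f has zero mean, so a solution exists (fixed up to a constant)."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    fhat = np.fft.fft(f)
    uhat = np.zeros_like(fhat)
    nz = k != 0
    uhat[nz] = -fhat[nz] / k[nz] ** 2              # divide by -k^2 in Fourier space
    return np.fft.ifft(uhat).real                  # zero-mean solution
```

Each solve costs O(n log n) transforms plus an O(n) diagonal division, which is the "almost linear" scaling quoted above.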
117. Riemannian geometry of matrix manifolds for Lagrangian uncertainty quantification of stochastic fluid flows / Feppon, Florian (Florian Jeremy) / January 2017
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2017. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 119-129). / This work focuses on developing theory and methodologies for the analysis of material transport in stochastic fluid flows. In the first part, two dominant classes of techniques for extracting Lagrangian Coherent Structures are reviewed and compared, and some improvements are suggested for their practical application to realistic high-dimensional deterministic ocean velocity fields. In the stochastic case, estimating the uncertain Lagrangian motion can require evaluating an ensemble of realizations of the flow map associated with a random velocity field, or equivalently realizations of the solution of a related transport partial differential equation. The Dynamically Orthogonal (DO) approximation is applied as an efficient model order reduction technique to solve this stochastic advection equation. With the goal of developing new rigorous reduced-order advection schemes, the second part of this work investigates the mathematical foundations of the method. Riemannian geometry provides an appropriate setting, and a framework free of tensor notation is used to analyze the embedded geometry of three popular matrix manifolds, namely the fixed-rank manifold, the Stiefel manifold and the isospectral manifold. Their extrinsic curvatures are characterized and computed through the study of the Weingarten map. As a spectacular by-product, explicit formulas are found for the differentials of the truncated Singular Value Decomposition, of the Polar Decomposition, and of the eigenspaces of a time-dependent symmetric matrix. Convergent gradient flows that achieve the related algebraic operations are provided.
A generalization of this framework to the non-Euclidean case is provided, allowing one to derive analogous formulas and dynamical systems for tracking the eigenspaces of non-symmetric matrices. In the geometric setting, the DO approximation is a particular case of a projected dynamical system that instantaneously applies the SVD truncation to optimally constrain the rank of the reduced solution. It is shown that the error committed by the DO approximation is controlled under the minimal geometric condition that the original solution stays close to the low-rank manifold. The last part of the work focuses on the practical implementation of the DO methodology for the stochastic advection equation. Fully linear, explicit central schemes are selected to ensure stability, accuracy and efficiency of the method. Riemannian matrix optimization is applied for the dynamic evaluation of the dominant SVD of a given matrix and is integrated into the DO time-stepping. Finally, the technique is illustrated numerically on the uncertainty quantification of the Lagrangian motion of two two-dimensional benchmark flows. / by Florian Feppon. / S.M.
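The SVD truncation at the heart of the DO projection is the classical best rank-r approximation (Eckart-Young). A minimal sketch, leaving aside the dynamic tracking that the thesis builds on top of it:

```python
import numpy as np

def project_rank(A, r):
    """Best rank-r approximation of A in the Frobenius and spectral norms,
    via truncated SVD (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep the r largest singular triplets; broadcasting scales the columns of U.
    return (U[:, :r] * s[:r]) @ Vt[:r]
```

By Eckart-Young, the spectral-norm error of the rank-r projection equals the (r+1)-th singular value, which is why the DO error stays controlled as long as the true solution remains close to the low-rank manifold.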
118. Multiscale Dynamic Time and Space Warping / Multiscale DTSW / Fitriani / January 2008
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (p. 149-151). / Dynamic Time and Space Warping (DTSW) is a technique used in video matching applications to find the optimal alignment between two videos. Because DTSW requires O(N^4) time and space, it is only suitable for short, coarse-resolution videos. In this thesis, we introduce Multiscale DTSW: a modification of DTSW that has linear time and space complexity (O(N)) with good accuracy. The first step in Multiscale DTSW is to apply the DTSW algorithm to coarse-resolution input videos. In the next step, Multiscale DTSW projects the solution from the coarse resolution to a finer resolution. A solution at the finer resolution can be found efficiently by refining the projected solution. Multiscale DTSW then repeatedly projects a solution from the current resolution to a finer resolution and refines it until the desired resolution is reached. I have explored the linear time and space complexity (O(N)) of Multiscale DTSW both theoretically and empirically. I have also shown that Multiscale DTSW achieves almost the same accuracy as DTSW. Because of its computational efficiency, Multiscale DTSW is suitable for video detection and video classification applications. We have developed a Multiscale-DTSW-based video classification framework that achieves the same accuracy as a DTSW-based framework with a greater than 50 percent reduction in execution time. We have also developed a video detection application, based on the Dynamic Space Warping (DSW) and Multiscale DTSW methods, that is able to detect a query video inside a target video in a short time. / by Fitriani. / S.M.
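For orientation, the classical time-only DTW dynamic program that DTSW generalizes is short to state. Its O(N^2) table is the object whose cost grows to O(N^4) once alignment in space is added, and whose coarse-to-fine refinement Multiscale DTSW exploits; this sketch is the standard algorithm, not the thesis's extension:

```python
import numpy as np

def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Classic dynamic-time-warping cost between two 1-D sequences:
    D[i][j] = local cost + min over (insert, delete, match) predecessors."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(x[i - 1], y[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]
            )
    return D[n, m]
```

Identical sequences, or sequences differing only by repeated samples, align at zero cost, which is the warping invariance that makes the method attractive for video matching.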
119. Iterative uncertainty reduction via Monte Carlo simulation : a streamlined life cycle assessment case study / Bolin, Christopher E. (Christopher Eric) / January 2013
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / "June 2013." Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 97-103). / Life cycle assessment (LCA) is one methodology for assessing a product's impact on the environment. LCA has grown in popularity recently as consumers and governments request more information concerning the environmental consequences of goods and services. In many cases, however, carrying out a complete LCA is prohibitively expensive, demanding large investments of time and money to collect and analyze data. This thesis aims to address the complexity of LCA by highlighting important product parameters, thereby guiding data collection. LCA streamlining is the process of reducing the necessary effort to produce acceptable analyses. Many methods of LCA streamlining are unfortunately vague and rely on engineering intuition. While they can be effective, the reduction in effort is often accompanied by a commensurate increase in the uncertainty of the results. One nascent streamlining method aims to reduce uncertainty by generating random simulations of the target product's environmental impact. In these random Monte Carlo simulations the product's attributes are varied, producing a range of impacts. Parameters that contribute significantly to the uncertainty of the overall impact are targeted for resolution. To resolve a parameter, data must be collected to more precisely define its value. This research project performs a streamlined LCA case study in collaboration with a diesel engine manufacturer. A specific engine is selected and a complex model of its production and manufacturing energy use is created. 
The model, consisting of 184 parameters, is then sampled randomly to determine key parameters for resolution. Parameters are resolved progressively and the resulting decrease in uncertainty is examined. The primary metric for evaluating model uncertainty is the false signal rate (FSR), defined here as the rate of confusion between two engines that differ in energy use by 10%. Initially the FSR is 21%, dropping to 6.1% after 20 parameters are resolved and stabilizing at 5.8% after 39 parameters are resolved. The case study illustrates that, if properly planned, a streamlined LCA can achieve the desired resolution while vastly reducing the data collection burden. / by Christopher E. Bolin. / S.M.
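The resolution-targeting loop can be sketched as a crude sensitivity ranking: sample all parameters within their uncertainty ranges, then "resolve" one at a time by freezing it at a nominal value and measuring the drop in output variance. The thesis's procedure is more refined; the toy impact function, uniform ranges, and freezing heuristic below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_parameters(impact, lows, highs, samples=4000):
    """Rank parameters by contribution to output variance: freeze one
    parameter at its midpoint and record how much the variance drops.
    Returns parameter indices, most influential first."""
    lows = np.asarray(lows, dtype=float)
    highs = np.asarray(highs, dtype=float)
    X = rng.uniform(lows, highs, size=(samples, lows.size))
    base_var = np.var([impact(x) for x in X])
    drops = []
    for j in range(lows.size):
        Xf = X.copy()
        Xf[:, j] = 0.5 * (lows[j] + highs[j])   # "resolve" parameter j
        drops.append(base_var - np.var([impact(x) for x in Xf]))
    return np.argsort(drops)[::-1]
```

Parameters at the top of the ranking are the ones worth the data collection effort; the rest can stay at screening-level estimates.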
120. A multiple secretary problem with switch costs / Ding, Jiachuan / January 2007
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 76). / In this thesis, we utilize probabilistic reasoning and simulation methods to determine the optimal selection rule for the secretary problem with switch costs, in which a known number of applicants appear sequentially in a random order, and the objective is to maximize the sum of the qualities of all hired secretaries over all time. It is assumed that the quality of each applicant is uniformly distributed and that any hired secretary can be replaced by a better-qualified one at a constant switch cost. A dynamic program is formulated and the optimal selection rule for the single-secretary case is solved. An approximate solution is given for the multiple-secretary case, in which we are allowed to employ more than one secretary at a time. An experiment was designed to simulate the interview process, in which respondents were sequentially presented with random numbers representing the qualities of different applicants. Finally, the experimental results are compared against the optimal selection strategy. / by Jiachuan Ding. / S.M.
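For orientation, the classical single-secretary problem (no switch costs, payoff only for hiring the very best) is easy to simulate: observe a cutoff number of applicants, then hire the first who beats them all. The thesis's variant, with switch costs and summed qualities, needs the dynamic program described above; this sketch only illustrates the simulation methodology:

```python
import random

random.seed(1)

def secretary_trial(n, cutoff):
    """One trial of the classical rule: skip the first `cutoff` applicants,
    then hire the first applicant better than everyone seen so far.
    Returns 1 if the best applicant overall was hired, else 0."""
    quality = [random.random() for _ in range(n)]
    best_seen = max(quality[:cutoff]) if cutoff else float("-inf")
    for q in quality[cutoff:]:
        if q > best_seen:
            return int(q == max(quality))
    return 0   # best applicant was in the observation phase; no hire wins

def success_rate(n, cutoff, trials=20000):
    return sum(secretary_trial(n, cutoff) for _ in range(trials)) / trials
```

With n = 20 and a cutoff near n/e (about 7), the simulated success probability lands near the theoretical 1/e, roughly 0.37; hiring the first applicant (cutoff 0) succeeds only about 1/n of the time.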