11.
Approximation algorithms for rapid evaluation and optimization of architectural and civil structures. Tseranidis, Stavros. January 2015.
Thesis: S.M., Massachusetts Institute of Technology, School of Engineering, Center for Computational Engineering, Computation for Design and Optimization Program, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 109-111). / This thesis explores the use of approximation algorithms, sometimes called surrogate modelling, in the early-stage design of structures. The use of approximation models to evaluate design performance scores rapidly could lead to a more in-depth exploration of a design space and its trade-offs, and could also reduce the computation time of optimization algorithms. Six machine-learning-based approximation models have been examined, chosen to span a wide range of characteristics. A complete framework has been developed, from the parametrization and sampling of a design space to the construction, assessment, and comparison of the approximation models. New methodologies and metrics are introduced to evaluate model performance and understand prediction error. The concepts examined are applied extensively to case studies of multi-objective design problems for architectural and civil structures. The contribution of this research lies in a cohesive and broad framework for approximation via surrogate modelling, with novel metrics and approaches that can assist designers in conceiving more efficient, functional, and diverse structures. Key words: surrogate modelling, conceptual design, structural design, structural optimization. / by Stavros Tseranidis. / S.M.
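As a rough illustration of the surrogate-modelling workflow this abstract describes, the sketch below (not taken from the thesis) samples an invented two-parameter design space, fits one machine-learning approximation model, and measures its prediction error on held-out designs; the performance function is a hypothetical stand-in for an expensive structural analysis.

```python
# Minimal surrogate-modelling sketch: sample a toy 2-parameter "design space",
# fit an approximation model, and check how well it predicts a stand-in score.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def performance(x):
    # Hypothetical stand-in for an expensive structural analysis
    # (e.g., strain energy as a function of two geometric parameters).
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

X = rng.uniform(0.0, 1.0, size=(500, 2))   # sampled design parameters
y = performance(X)                          # "expensive" evaluations

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

pred = surrogate.predict(X_test)
print("mean absolute prediction error:", mean_absolute_error(y_test, pred))
```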
12.
Low rank decompositions for sum of squares optimization. Sun, Jia Li, S.M. Massachusetts Institute of Technology. January 2006.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2006. / Includes bibliographical references (leaves 77-79). / In this thesis, we investigate the theoretical and numerical advantages of a novel representation for the Sum of Squares (SOS) decomposition of univariate and multivariate polynomials. This representation formulates an SOS problem by interpolating a polynomial at a finite set of sampling points. Compared to the conventional coefficient method for SOS, the formulation has a low-rank property in its constraints. The low-rank property is desirable because it speeds up the barrier gradient computation and Hessian assembly in many semidefinite programming (SDP) solvers. The SDPT3 solver currently provides a facility for storing low-rank constraints, which makes it possible to exploit these numerical advantages. A range of SOS examples are constructed and tested extensively in SDPT3. The experimental results demonstrate that the computation time decreases significantly. Moreover, the solutions of the interpolation method are verified to be numerically more stable and accurate than those yielded by the coefficient method. / by Jia Li Sun. / S.M.
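To make the interpolation idea concrete, here is a small sketch (using cvxpy rather than SDPT3, and not exploiting the low-rank storage facility itself) that certifies a univariate polynomial as SOS by imposing the Gram-matrix identity only at sample points; each interpolation constraint involves a rank-one matrix, which is exactly the low-rank structure discussed above. The polynomial and sample points are chosen only for illustration.

```python
import numpy as np
import cvxpy as cp

# p(x) = x^4 - 2x^2 + 1 = (x^2 - 1)^2, to be certified as a sum of squares
# via the Gram form p(x) = v(x)^T Q v(x) with Q positive semidefinite.
p = lambda x: x**4 - 2 * x**2 + 1
v = lambda x: np.array([1.0, x, x**2])      # monomial basis up to degree 2

samples = np.linspace(-2.0, 2.0, 5)         # deg(p)+1 points pin down the identity
Q = cp.Variable((3, 3), PSD=True)

# Each interpolation constraint v(x_k)^T Q v(x_k) = p(x_k) involves only the
# rank-one matrix v(x_k) v(x_k)^T -- the low-rank structure noted above.
constraints = [v(x) @ Q @ v(x) == p(x) for x in samples]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("SOS certificate found:", prob.status == cp.OPTIMAL)
print("Gram matrix Q:\n", np.round(Q.value, 3))
```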
13.
Stochastic methods for improving secondary production decisions under compositional uncertainty. Gaustad, Gabrielle G. January 2009.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 77-80). / A key element for realizing long-term sustainable use of any metal will be a robust secondary recovery industry. Secondary recovery forestalls depletion of non-renewable resources and avoids the deleterious effects of extraction and winning (albeit by substituting some effects of its own). For most metals, the latter provides strong motivation for recycling; for light metals, like aluminum, the motivation is compelling. Along aluminum's life-cycle there are a variety of leverage points for increasing the usage of secondary or recycled materials. This thesis aims to improve materials decision making in two of these key areas: 1) blending decisions in manufacturing, and 2) alloy design decisions in product development. The usage of recycled aluminum in alloy blends is greatly hindered by variation in the raw material composition. Currently, to accommodate compositional variation, firms commonly set production targets well inside the window of compositional specification required for performance reasons. Window narrowing, while effective, does not make use of statistical sampling data, leading to sub-optimal usage of recycled materials. This work explores the use of stochastic programming techniques, which allow explicit consideration of statistical information on composition. The computational complexity of several methods is quantified in order to select a single method for comparison to deterministic models; in this case, a chance-constrained model proved best suited. The framework and a case study of cast and wrought production with available scrap materials are presented. Results show that, compared to conventional window narrowing, this method makes it possible to increase the use of recycled material without increasing the likelihood of batch errors. The chance-constrained framework was then extended to improving the alloy design process. Currently, few systematic methods exist to measure and direct the metallurgical alloy design process toward alloys that are most able to be produced from scrap. This is due, in part, to the difficulty of evaluating such a context-dependent property as the recyclability of an alloy, which depends on the types of scraps available to producers, the compositional characteristics of those scraps, their yield, and the alloy itself. Results show that this method is effective in (a) characterizing the challenge of developing recycling-friendly alloys due to the contextual sensitivity of that property; (b) demonstrating how such models can be used to evaluate the potential scrap usage of alloys; and (c) exploring the value of sensitivity analysis information to proactively identify effective alloy modifications that can drive increased potential scrap use. / by Gabrielle G. Gaustad. / S.M.
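A minimal chance-constrained blending sketch in the spirit of this abstract is shown below. It assumes independent, normally distributed scrap compositions, which turns the chance constraint into a second-order cone constraint; all stream data, the specification limit, and the confidence level are invented, and the model is far simpler than the cast-and-wrought case study in the thesis.

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm

# Three scrap streams plus primary aluminum; the content of one alloying
# element (e.g., Fe, wt%) is uncertain for the scraps. All numbers invented.
mu    = np.array([0.60, 0.45, 0.80, 0.05])   # mean Fe content per stream
sigma = np.array([0.10, 0.08, 0.15, 0.00])   # std dev of Fe content
spec  = 0.40                                  # Fe must stay below 0.40 wt%
alpha = 0.05                                  # allow a 5% chance of a batch error
z = norm.ppf(1 - alpha)

x = cp.Variable(4, nonneg=True)               # blend fractions
scrap_use = cp.sum(x[:3])

constraints = [
    cp.sum(x) == 1,
    # Chance constraint P(Fe <= spec) >= 1 - alpha, assuming independent
    # normal composition errors, rewritten as a second-order cone constraint.
    mu @ x + z * cp.norm(cp.multiply(sigma, x), 2) <= spec,
]
prob = cp.Problem(cp.Maximize(scrap_use), constraints)
prob.solve()
print("max scrap fraction:", round(float(scrap_use.value), 3),
      "blend:", np.round(x.value, 3))
```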
14.
Reduced basis method for Boltzmann equation. Garlapati, Revanth Reddy. January 2006.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2006. / Includes bibliographical references (p. 103-106). / The main aim of the project is to solve the BGK model of the Knudsen-parameterized Boltzmann equation, which is one-dimensional in both space and velocity. To solve the Boltzmann equation, we first transform the original differential equation by replacing the dependent variable with another variable weighted by a function t(y); next we obtain a Petrov-Galerkin weak form of this transformed equation. To obtain a stable and accurate solution of this weak form, we perform a transformation of the velocity variable y such that the semi-infinite domain is mapped into a finite domain, and we choose the weighting function t(y) to balance contributions at infinity. Once an accurate and well-defined finite element solution of the problem is obtained, the next step is to perform the reduced basis analysis of the equation using these finite element solutions. We conclude the project by verifying that the orthonormal reduced basis method based on the greedy algorithm converges rapidly over the chosen test space. / by Revanth Reddy Garlapati. / S.M.
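The greedy reduced basis idea can be sketched generically as below. The stand-in problem is a small parameterized diffusion-reaction system rather than the transformed Boltzmann equation, and the greedy step uses the true projection error over a training set instead of an inexpensive error estimator, so this is only a schematic of the algorithm, not the thesis's implementation.

```python
# Generic greedy reduced basis sketch: build an orthonormal basis from "truth"
# solutions of a parameterized linear system A(mu) u = f, adding at each step
# the parameter whose solution is currently worst approximated.
import numpy as np

n, h = 200, 1.0 / 201
main = 2.0 * np.ones(n) / h**2
off  = -1.0 * np.ones(n - 1) / h**2
A0 = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # -u'' (Dirichlet)
A1 = np.eye(n)                                            # reaction term
f = np.ones(n)

def truth(mu):
    return np.linalg.solve(A0 + mu * A1, f)

train = np.linspace(0.1, 100.0, 50)        # training set of parameters
basis = []                                 # orthonormal reduced basis vectors

for _ in range(8):                         # at most 8 basis functions
    errors = []
    for mu in train:
        u = truth(mu)
        if basis:
            V = np.column_stack(basis)
            u_rb = V @ (V.T @ u)           # projection onto the current basis
        else:
            u_rb = np.zeros(n)
        errors.append(np.linalg.norm(u - u_rb) / np.linalg.norm(u))
    worst = int(np.argmax(errors))
    print(f"worst relative error {errors[worst]:.2e} at mu = {train[worst]:.1f}")
    if errors[worst] < 1e-8:
        break
    # Gram-Schmidt: orthonormalize the worst-case snapshot and add it.
    w = truth(train[worst])
    for b in basis:
        w -= (b @ w) * b
    basis.append(w / np.linalg.norm(w))
```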
15.
Network security and min-cost max-flow problem. Dahan, Mathieu. January 2016.
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 92-93). / Network optimization has been widely studied in the literature for a variety of design and operational problems. This has resulted in the development of computational algorithms for classical operations research problems such as the maximum flow problem, the shortest path problem, and the network interdiction problem. However, in environments where network components are subject to adversarial failures, the network operator needs to strategically allocate at least some of her resources (e.g., link capacities, network flows, etc.) while accounting for the presence of a strategic adversary. This motivates the study of network security games. This thesis considers a class of network security games on flow networks, and focuses on utilizing well-known results in network optimization toward the characterization of Nash equilibria of this class of games. Specifically, we consider a 2-player strategic game for network routing under link disruptions. Player 1 (defender) routes flow through a network to maximize her value of effective flow while facing transportation costs. Player 2 (attacker) simultaneously disrupts one or more links to maximize her value of lost flow, but also faces a cost for disrupting links. Linear programming duality and the Max-Flow Min-Cut Theorem are applied to obtain properties that are satisfied in any Nash equilibrium. Using graph-theoretic arguments, we give a characterization of the support of the equilibrium strategies. Finally, we study the conditions under which these results extend to a revised version of the game where both players face budget constraints. Thus, our contribution can be viewed as a generalization of the classical minimum cost maximum flow problem and the minimum cut problem to adversarial environments. / by Mathieu Dahan. / S.M.
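For readers unfamiliar with the underlying network-optimization primitives, the sketch below uses networkx to compute a minimum-cost maximum flow (the defender's routing problem absent an attacker) and a minimum cut (the links a cut-based attacker would target) on a small invented network; the game-theoretic equilibrium analysis itself is not reproduced here.

```python
import networkx as nx

G = nx.DiGraph()
edges = [            # (tail, head, capacity, transportation cost per unit)
    ("s", "a", 10, 1), ("s", "b", 8, 2),
    ("a", "b", 5, 1),  ("a", "t", 7, 3),
    ("b", "t", 10, 1),
]
for u, v, cap, cost in edges:
    G.add_edge(u, v, capacity=cap, weight=cost)

# Defender's baseline routing: maximum flow at minimum transportation cost.
flow = nx.max_flow_min_cost(G, "s", "t")
print("routing cost:", nx.cost_of_flow(G, flow))
print("flow on each link:", flow)

# Bottleneck links: a minimum s-t cut of the capacitated network.
cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
cut_edges = [(u, v) for u, v in G.edges if u in S and v in T]
print("min cut value:", cut_value, "cut edges:", cut_edges)
```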
16.
Impact of triangle shapes using high-order discretizations and direct mesh adaptation for output error. Sun, Huafei. January 2009.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student submitted PDF version of thesis. / Includes bibliographical references (p. 95-101). / The impact of triangle shapes, including angle sizes and aspect ratios, on accuracy and stiffness is investigated for simulations of highly anisotropic problems. The results indicate that for high-order discretizations, large angles do not have an adverse impact on solution accuracy. However, a correct aspect ratio is critical for accuracy for both linear and high-order discretizations. In addition, large angles are not problematic for the conditioning of the linear systems arising from the discretization; any effects on conditioning can be overcome through small increases in preconditioning cost. A direct adaptation scheme that controls the output error via mesh operations and mesh smoothing is also developed. Decisions about mesh operations are based solely on the output error distribution, without any a priori assumption on the error convergence rate. Anisotropy is introduced by evaluating the error changes due to potential edge splits, and thus the anisotropies of both the primal and dual solutions are taken into account. This scheme is demonstrated to produce grids with fewer degrees of freedom for a specified error level than the existing metric-based approach. / by Huafei Sun. / S.M.
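The two shape quantities this abstract contrasts, interior angles and aspect ratio, can be computed as in the small helper below (not from the thesis); the aspect ratio is taken here as longest edge over shortest altitude, one of several common definitions.

```python
import numpy as np

def triangle_shape(p0, p1, p2):
    pts = [np.asarray(p, dtype=float) for p in (p0, p1, p2)]
    # edge lengths, e[i] opposite vertex i
    e = [np.linalg.norm(pts[(i + 1) % 3] - pts[(i + 2) % 3]) for i in range(3)]
    # interior angle at each vertex from the law of cosines
    angles = []
    for i in range(3):
        a, b, c = e[i], e[(i + 1) % 3], e[(i + 2) % 3]
        angles.append(np.degrees(np.arccos((b * b + c * c - a * a) / (2 * b * c))))
    # area via the shoelace formula; aspect ratio = longest edge / shortest altitude
    (x0, y0), (x1, y1), (x2, y2) = pts
    area = 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
    return angles, max(e) ** 2 / (2 * area)

# A stretched, obtuse triangle: one very large angle and a large aspect ratio.
print(triangle_shape((0, 0), (1, 0), (0.5, 0.01)))
```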
17.
Tensor decomposition and parallelization of Markov Decision Processes. Smart, David P. (David Paul). January 2016.
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 85-81). / Markov Decision Processes (MDPs) with large state spaces arise frequently in real-world applications. Optimal solutions to such problems exist, but may not be computationally tractable, as the required processing scales exponentially with the number of states. Unsurprisingly, investigating methods for efficiently determining optimal or near-optimal policies has generated substantial interest and remains an active area of research. A recent paper introduced an MDP representation as a tensor composition of a set of smaller component MDPs, and suggested solving an MDP by decomposing it into its tensor components, solving the smaller problems in parallel, and combining their solutions into one for the original problem. Such an approach promises an increase in solution efficiency, since each smaller problem can be solved exponentially faster than the original. This thesis develops this MDP tensor decomposition and parallelization algorithm, and analyzes both its computational performance and the optimality of its resultant solutions. / by David P. Smart. / S.M.
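The sketch below illustrates the two computational ingredients this approach combines: value iteration on small component MDPs and solving those components in parallel. The random component MDPs are invented, and the step of recombining component solutions into a policy for the full tensor-composed MDP, which is the substance of the thesis, is not shown.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def value_iteration(mdp, gamma=0.95, tol=1e-8):
    P, R = mdp                    # P[a, s, s'] transition probs, R[s, a] rewards
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * np.einsum("ask,k->sa", P, V)   # one Bellman backup
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)             # values and greedy policy
        V = V_new

def random_component(n_states=20, n_actions=3, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.random((n_actions, n_states, n_states))
    P /= P.sum(axis=2, keepdims=True)                  # make rows stochastic
    R = rng.random((n_states, n_actions))
    return P, R

if __name__ == "__main__":
    components = [random_component(seed=k) for k in range(4)]
    with ProcessPoolExecutor() as pool:                # one worker per component
        results = list(pool.map(value_iteration, components))
    for k, (V, policy) in enumerate(results):
        print(f"component {k}: value of state 0 = {V[0]:.3f}")
```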
18.
Relative effectiveness of inhibition strategies for control of biological signaling pathways. Li, Meng, S.M. Massachusetts Institute of Technology. January 2010.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 36-38). / Many therapeutically useful and critically important drugs are inhibitors of particular enzymes in a signaling pathway. The efficiency with which an inhibitor inactivates its target can be characterized by the binding kinetics and thermodynamics. However, the overall efficiency with which the inhibitor shuts down a pathway or process, arrests a signal, or interferes with an outcome can be quite different and is often measured in cell-based assays. Because of non-linear effects, particularly in signaling pathways but elsewhere as well, non-obvious and possibly useful relationships may exist such that much greater inhibition is needed at some points in a pathway than at others to achieve the same overall effect. To investigate the relationship between inhibiting a signaling molecule and interrupting the result of that signal, we used pathway simulations to study the effect of inhibiting different pathway components on the pathway output. We used two biological characteristics to assess the inhibitory effects: the peak and the integrated pathway response. In the epidermal growth factor receptor (EGFR) pathway, 50% inhibition of most enzymes in the pathway yields less than 50% reduction of the final activated ERK output. Inhibiting MEK was found to be the most effective intervention in the EGFR pathway, more effective even than directly inhibiting ERK itself. Inhibiting two signaling molecules at the same time yields an effect similar to the linear superposition of the effects of inhibiting them separately. In the extrinsic apoptosis pathway, 50% inhibition of most signaling molecules in the pathway yields less than 50% reduction of the final caspase-3 output. The most effective inhibitor found is XIAP, which is itself already a component of the extrinsic apoptosis pathway. / by Meng Li. / S.M.
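The peak and integrated readouts can be illustrated with a generic toy cascade, as in the sketch below; this is not the EGFR or apoptosis model used in the thesis, and all rate constants are invented.

```python
# Toy three-tier cascade: the middle enzyme's activity is scaled by
# (1 - inhibition), and the downstream output is scored by its peak and
# its time integral.
import numpy as np
from scipy.integrate import solve_ivp

def cascade(t, y, inhibition):
    a, b, c = y                      # fractions of active enzyme at each tier
    stimulus = np.exp(-0.1 * t)      # decaying upstream signal
    k_act, k_deact = 2.0, 1.0
    da = k_act * stimulus * (1 - a) - k_deact * a
    db = k_act * (1 - inhibition) * a * (1 - b) - k_deact * b
    dc = k_act * b * (1 - c) - k_deact * c
    return [da, db, dc]

t_eval = np.linspace(0, 60, 601)
for inhibition in (0.0, 0.5):
    sol = solve_ivp(cascade, (0, 60), [0, 0, 0], args=(inhibition,), t_eval=t_eval)
    output = sol.y[2]
    print(f"inhibition={inhibition:.0%}: peak={output.max():.3f}, "
          f"integral={np.trapz(output, t_eval):.1f}")
```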
19.
0-1 graph partitioning and image segmentation / Graph partitioning and image segmentation. Goh, Chun Fan. January 2008.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (p. 179-180). / Graph partitioning is the grouping of all the nodes in a graph into two or more partitions based on certain criteria. Graph cut techniques are used to partition a graph. The Minimum Cut method gives imbalanced partitions. To overcome the imbalanced partitioning, the Normalized Cut method is used; however, it is computationally expensive. The Isoperimetric Partitioning is faster and more stable, and I aim to extend and develop the related ideas. In this thesis, I propose a graph partitioning method, the 0-1 Graph Partitioning. I treat a graph as an electrical circuit: a few nodes are fixed as the voltage inputs (sources), another few nodes are grounded (sinks), and the weight of each edge is seen as the conductance between the two ends (nodes) of the edge. With this setup, the other nodes have voltages between zero and the input voltage. The method cuts the graph between the sinks and sources according to the nodes' voltages, in such a way that it minimizes the normalized cut value. The method leads to the graph Laplacian system, a linear system. As opposed to the Normalized Cut method, which solves an eigenvalue problem to partition a graph, solving a linear system is much faster and more stable. In addition to the speed, I have shown empirically that the quality of the bipartitions is comparable to the Normalized Cut method. Based on the 0-1 method, I have also developed the Fiedler Quick Start algorithm, which can compute the Fiedler vector faster than solving the generalized eigensystem. I have also applied the graph partitioning algorithm to image segmentation. In comparison to the Normalized Cut method, the 0-1 method not only gives good segmentation, but is also much simpler and faster in terms of constructing a graph from an image, and is robust to any noise contained in an image. With its speed and simple graph construction, the method can be applied to large images. The method is object-oriented: it focuses on the objects in images and is able to segment out objects in the first bipartition. For k-way image segmentation, the 0-1 method can be applied both simultaneously and recursively. Apart from the 0-1 image segmentation, I have also developed the Resized Image Segmentation Scheme and the Refinement Scheme (Fast and Thorough), which can speed up the image segmentation process and improve the segmentation. Both schemes can be used by any graph-based image segmentation method. / by Chun Fan Goh. / S.M.
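The core 0-1 step can be sketched as follows on a small invented graph (not the thesis code): fix a source node at voltage 1 and a sink at 0, solve the reduced graph Laplacian system for the remaining voltages, then sweep a threshold over the voltages and keep the bipartition with the smallest normalized cut value.

```python
import numpy as np

# Symmetric weight matrix of a small graph: two clusters joined by a weak edge.
W = np.array([
    [0,   4, 4, 0.5, 0, 0],
    [4,   0, 4, 0,   0, 0],
    [4,   4, 0, 0,   0, 0],
    [0.5, 0, 0, 0,   5, 5],
    [0,   0, 0, 5,   0, 5],
    [0,   0, 0, 5,   5, 0],
], dtype=float)
n = W.shape[0]
L = np.diag(W.sum(axis=1)) - W           # graph Laplacian (conductance matrix)
source, sink = 0, 5                       # voltage input and grounded nodes

# Solve the reduced linear system for the free (non-fixed) node voltages.
free = [i for i in range(n) if i not in (source, sink)]
v = np.zeros(n)
v[source] = 1.0
rhs = -L[np.ix_(free, [source])].ravel() * v[source]
v[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)

def ncut(mask):
    cut = W[mask][:, ~mask].sum()
    return cut / W[mask].sum() + cut / W[~mask].sum()

# Threshold the voltages and keep the bipartition with the smallest Ncut value.
best = min((ncut(v > t), t) for t in np.unique(v)[:-1])
print("voltages:", np.round(v, 3))
print("best normalized cut value %.3f at threshold %.3f" % best)
print("partition:", (v > best[1]).astype(int))
```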
20.
Improving the efficiency of an automated manufacturing system through a tri-part approach. Song, Chen, S.M. Massachusetts Institute of Technology. January 2013.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 71-72). / This research investigates a complex automated manufacturing system at three levels to improve its efficiency. In the system there are parallel loops of stations connected by a single closed conveyor. In each loop there is a series of identical stations, each with multiple storage slots and the capability to process several jobs simultaneously. At the system level we undertake capacity planning and explore Work-in-Process (WIP) control. We build an Excel model to calculate the implied load of each station, applying the model to sensitivity analyses of the system capacity. In addition, we identify a concave relationship between output and WIP based on actual factory data from our industrial partner; surprisingly, we observe a reduction in output when WIP is high. We therefore suggest adopting a CONWIP policy in the system in order to increase and smooth the output. At the loop level we study the assignment policy. The complexity of this study comes from the non-trivial travel times between stations. We build a simulation model in Matlab to compare different assignment policies. The objective is to find the assignment policy that balances the station load, decreases the flow time for jobs, and reduces the rejection or blockage rate for the system. At the station level we investigate the holding time between simultaneous processes. We model this as a semi-Markov process, building a simulation model in Matlab to confirm the analytical results. We discover a tradeoff between flow time and production rate for different holding times, and propose new holding rules to further improve station performance. The conclusions from this research are useful for our industrial partner in its efforts to improve the operation of the system and to increase its capacity. Moreover, the methodologies and insights of this work can be beneficial to further research on related industry practice. / by Chen Song. / S.M.
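The relationship between output and WIP can be illustrated with textbook mean value analysis (MVA) of a closed queueing network, as sketched below with invented station data; note that MVA has no blocking, so it shows the diminishing returns of adding WIP but not the drop in output at very high WIP that the factory data revealed.

```python
# Exact mean value analysis (MVA) for a closed network of single-server
# stations: throughput rises with the WIP level, with diminishing returns,
# toward the bottleneck rate.
import numpy as np

service = np.array([2.0, 3.0, 1.5, 2.5])   # mean service time per station (min)
visits  = np.array([1.0, 1.0, 0.5, 1.0])   # visit ratio per station

def mva_throughput(wip):
    queue = np.zeros_like(service)
    for n in range(1, wip + 1):
        resp = service * (1 + queue)                 # response time per visit
        throughput = n / np.sum(visits * resp)       # Little's law over the cycle
        queue = throughput * visits * resp           # mean queue lengths
    return throughput

for wip in (1, 2, 4, 8, 16, 32):
    print(f"WIP={wip:3d}: throughput={mva_throughput(wip):.3f} jobs/min")
```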