651 |
High-resolution simulation of pattern formation and coarsening dynamics in 3D convective mixing. Fu, Xiaojing, S.M. Massachusetts Institute of Technology, January 2015
Thesis: S.M., Massachusetts Institute of Technology, School of Engineering, Center for Computational Engineering, Computation for Design and Optimization Program, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 45-47). / Geologic CO₂ sequestration is considered a promising tool to reduce anthropogenic CO₂ emissions while allowing continued use of fossil fuels for the time being. The process entails capturing CO₂ at point sources such as coal-fired power plants, and injecting it in its supercritical state into deep saline aquifers for long-term storage. Upon injection, CO₂ partially dissolves in groundwater to form an aqueous solution that is denser than groundwater. The local increase in density triggers a gravitational instability at the boundary layer that further develops into columnar CO₂-rich plumes that sink away. This mechanism, also known as convective mixing, greatly accelerates the dissolution rate of CO₂ into water and provides secure storage of CO₂ underground. Understanding convective mixing in the context of CO₂ sequestration is essential for the design of injection and monitoring strategies that prevent leakage of CO₂ back into the atmosphere. While current studies have elucidated various aspects of this phenomenon in 2D, little is known about this process in 3D. In this thesis we investigate the pattern-formation aspects of convective mixing during geological CO₂ sequestration by means of high-resolution three-dimensional simulation. We find that the CO₂ concentration field self-organizes as a cellular network structure in the diffusive boundary layer right beneath the top boundary. By studying the statistics of the cellular network, we identify various regimes of finger coarsening over time, the existence of a nonequilibrium stationary state, and a universal scaling of 3D convective mixing.
We explore the correlation between the observed network pattern and the 3D flow structure predicted by hydrodynamic stability theory. / by Xiaojing Fu. / S.M.
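The boundary-layer mechanism described in this abstract can be illustrated with a back-of-the-envelope sketch: a diffusive layer grows beneath the top boundary until its Rayleigh number exceeds a critical value and fingering begins. All parameter values, and the critical Rayleigh number itself, are illustrative assumptions, not numbers from the thesis:

```python
import math

# Illustrative order-of-magnitude sketch of the diffusive boundary
# layer beneath the top boundary and the onset of convection. All
# parameter values are assumptions for illustration only.
D    = 1e-9    # molecular diffusivity of dissolved CO2, m^2/s
k    = 1e-13   # permeability, m^2
mu   = 5e-4    # brine viscosity, Pa.s
phi  = 0.2     # porosity
drho = 10.0    # density increase of CO2-saturated brine, kg/m^3
g    = 9.81    # gravity, m/s^2
Ra_c = 32.0    # assumed critical boundary-layer Rayleigh number

def thickness(t):
    """Diffusive boundary-layer thickness, ~ sqrt(D t)."""
    return math.sqrt(D * t)

def Ra(delta):
    """Rayleigh number of a layer of thickness delta."""
    return drho * g * k * delta / (mu * phi * D)

def profile(z, t):
    """Pure-diffusion concentration a depth z below the top boundary."""
    return math.erfc(z / (2.0 * math.sqrt(D * t)))

# Onset: the time at which the growing layer first becomes unstable,
# i.e. Ra(thickness(t_onset)) == Ra_c.
t_onset = (Ra_c * mu * phi * D / (drho * g * k)) ** 2 / D
print("estimated onset time (days):", t_onset / 86400)
```

After onset, the layer stops thickening diffusively and the fingering regimes studied in the thesis take over.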
|
652 |
Optimal approximations of coupling in multidisciplinary models. Santos Baptista, Ricardo Miguel, January 2017
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2017. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 111-115). / Design of complex engineering systems requires coupled analyses of the multiple disciplines affecting system performance. The coupling among disciplines typically contributes significantly to the computational cost of analyzing a system, and can become particularly burdensome when coupled analyses are embedded within a design or optimization loop. In many cases, disciplines may be weakly coupled, so that some of the coupling or interaction terms can be neglected without significantly impacting the accuracy of the system output. However, typical practice derives such approximations in an ad hoc manner using expert opinion and domain experience. In this thesis, we propose a new approach that formulates an optimization problem to find a model that optimally balances accuracy of the model outputs with the sparsity of the discipline couplings. An adaptive sequential Monte Carlo sampling-based technique is used to efficiently search the combinatorial model space of different discipline couplings. Finally, an algorithm for optimal model selection is presented and combined with three tractable approaches to quantify the accuracy of the system outputs with approximate couplings. These algorithms are applied to identify the important discipline couplings in three engineering problems: a fire detection satellite model, a turbine engine cycle analysis model, and a lifting surface aero-structural model. / by Ricardo Miguel Santos Baptista. / S.M.
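The optimization this abstract describes, balancing output accuracy against coupling sparsity, can be sketched on a toy linear three-discipline system. Exhaustive enumeration stands in for the adaptive sequential Monte Carlo search here, and all coefficients and the penalty weight are invented for illustration:

```python
import itertools
import numpy as np

# Toy 3-discipline linear system y = A y + b, where off-diagonal
# entries of A are the interdisciplinary couplings (invented values).
A = np.array([[0.0,  0.30, 0.05],
              [0.20, 0.0,  0.40],
              [0.01, 0.02, 0.0]])
b = np.array([1.0, 2.0, 0.5])

def solve(mask):
    """Solve the coupled system keeping only couplings where mask == 1."""
    return np.linalg.solve(np.eye(3) - A * mask, b)

y_full = solve((A != 0).astype(float))

# Enumerate all subsets of the 6 couplings; score each approximate
# model by output error plus a sparsity penalty (arbitrary weight).
lam = 0.05
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
best_mask, best_score = None, np.inf
for bits in itertools.product([0, 1], repeat=len(pairs)):
    mask = np.zeros((3, 3))
    for keep, (i, j) in zip(bits, pairs):
        mask[i, j] = keep
    score = np.linalg.norm(solve(mask) - y_full) + lam * sum(bits)
    if score < best_score:
        best_mask, best_score = mask, score

print("retained couplings:\n", best_mask)
```

The search correctly discards the weak couplings (the 0.01 and 0.02 entries) because their removal barely perturbs the output while reducing the penalty.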
|
653 |
Energy optimal path planning using stochastic dynamically orthogonal level set equations. Narayanan Subramani, Deepak, January 2014
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2014. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 93-100). / The growing use of autonomous underwater vehicles and underwater gliders for a variety of applications gives rise to new requirements in the operation of these vehicles. One such important requirement is optimization of the energy required for undertaking missions, which will enable longer endurance and lower operational costs. Our goal in this thesis is to develop a computationally efficient and rigorous methodology that can predict energy-optimal paths, from among all time-optimal paths, to complete an underwater mission. For this, we develop a new rigorous stochastic Dynamically Orthogonal Level Set optimization methodology. After a review of existing path planning methodologies with a focus on energy optimality, we present the background of time-optimal path planning using the level set method. We then lay out the questions that inspired the present thesis, state the goal of the current work, and extend the time-optimal path planning methodology to the case of variable nominal engine thrust. We then state the problem formally. Thereafter, we develop the new methodology for solving the optimization problem through stochastic optimization and derive new Dynamically Orthogonal Level Set Field equations. We carefully present different approaches to handle the non-polynomial nonlinearity in the stochastic Level Set Hamilton-Jacobi equations and discuss the computational efficiency of the algorithm. We then illustrate the inner workings and nuances of our new stochastic DO level set energy-optimal path planning algorithm through two simple, yet important, canonical steady flows that simulate a steady front and a steady eddy.
We formulate a double energy-time minimization to obtain a semi-analytical energy-optimal path for the steady-front crossing test case and compare the results to those of our stochastic DO level set scheme. We then apply our methodology to an idealized ocean simulation using double-gyre flows, and finally show an application with real ocean data for completing a mission in the Middle Atlantic Bight and New Jersey Shelf/Hudson Canyon region. / by Deepak Narayanan Subramani. / S.M.
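The deterministic core of level-set path planning, evolving a reachability front through a steady flow via the Hamilton-Jacobi equation φ_t + F|∇φ| + v·∇φ = 0, can be sketched with a first-order upwind scheme. This omits the stochastic Dynamically Orthogonal machinery entirely; the jet-like flow and all parameters are invented for illustration:

```python
import numpy as np

# Grid and a steady jet-like "ocean" flow (illustrative stand-in
# for the steady-front test case; all values are assumptions).
n = 101
h = 1.0 / 100
dt = 0.2 * h
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = 0.5 * np.exp(-((Y - 0.5) / 0.1) ** 2)   # zonal jet, max 0.5
v = np.zeros_like(u)
F = 1.0                                     # nominal vehicle speed

# Level set: phi < 0 marks the reachable set; start near (0.1, 0.5).
phi = np.sqrt((X - 0.1) ** 2 + (Y - 0.5) ** 2) - 0.02

def step(phi):
    """One explicit upwind step of phi_t + F|grad phi| + (u,v).grad phi = 0."""
    p = np.pad(phi, 1, mode="edge")
    c = p[1:-1, 1:-1]
    dxm = (c - p[:-2, 1:-1]) / h
    dxp = (p[2:, 1:-1] - c) / h
    dym = (c - p[1:-1, :-2]) / h
    dyp = (p[1:-1, 2:] - c) / h
    # Godunov norm for the normal motion F |grad phi| (expanding front)
    grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2
                   + np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
    # Upwind advection by the background flow
    adv = (np.maximum(u, 0) * dxm + np.minimum(u, 0) * dxp
           + np.maximum(v, 0) * dym + np.minimum(v, 0) * dyp)
    return phi - dt * (F * grad + adv)

for _ in range(300):
    phi = step(phi)

it, jt = round(0.9 / h), round(0.5 / h)
print("target (0.9, 0.5) reached:", bool(phi[it, jt] < 0))
```

The first time the zero contour crosses a target is its time-optimal arrival time; backtracking along the front normals then recovers the optimal path.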
|
654 |
Simulation and optimization of hot syngas separation processes in integrated gasification combined cycle. Prakash, Kshitij, January 2009
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 131-137). / IGCC with CO₂ capture offers an exciting approach for cleanly using the abundant coal reserves of the world to generate electricity. The present state-of-the-art synthesis gas (syngas) cleanup technologies in IGCC involve cooling the syngas from the gasifier to room temperature or lower to remove sulfur, carbon dioxide, and mercury, leading to a large efficiency loss. It is therefore important to develop processes that remove these impurities from syngas at an optimally high temperature in order to maximize the energy efficiency of an IGCC plant. High-temperature advanced syngas cleanup technologies are presently at various stages of development, and it is still not clear which technology and configuration of the IGCC process would be most energetically efficient. In this thesis, I present a framework to assess the suitability of various candidate syngas cleanup technologies by developing computational simulations of these processes, which are used in conjunction with Aspen Plus® to design various IGCC flowsheet configurations. In particular, we evaluate the use of membranes and sorbents for CO₂ separation and capture from hot syngas in IGCC, as a substitute for solution-based absorption processes. We present a multi-stage model for CO₂ separation from multi-component gas mixtures using polymeric membranes based on the solution-diffusion transport mechanism. A numerical simulation of H₂ separation from syngas using Pd-alloy based composite metallic membranes is implemented to assess their performance for CO₂ sequestration. / (cont.) In addition, we develop an equilibrium-based combined pressure and temperature swing adsorption-desorption model to estimate the amount of energy required for capturing pollutants using regenerable sorbent beds.
We use our models with Aspen Plus® simulations to identify optimum design and operating conditions for membrane and adsorption processes in an IGCC plant. Furthermore, we identify from our simulations desired thermodynamic properties of sorbents and material properties of membranes that are needed to make these technologies work successfully at IGCC conditions. This should serve to provide an appropriate direction and target for ongoing experimental efforts in developing these novel materials. / by Kshitij Prakash. / S.M.
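The solution-diffusion membrane model mentioned in this abstract reduces, for a single well-mixed stage, to fluxes J_i = Perm_i (p_f x_i − p_p y_i) with the permeate composition determined self-consistently. A minimal sketch with hypothetical permeances and conditions (not data from the thesis):

```python
# Hypothetical CO2-selective polymeric membrane stage. Permeances
# (already divided by thickness) and conditions are illustrative.
perm = {"CO2": 1000.0, "H2": 100.0, "N2": 50.0}
x    = {"CO2": 0.40,  "H2": 0.50,  "N2": 0.10}   # feed mole fractions
p_f, p_p = 30.0, 1.0                              # feed/permeate pressure, bar

# Solution-diffusion flux: J_i = perm_i * (p_f * x_i - p_p * y_i).
# The permeate composition y_i = J_i / sum(J) is found by fixed-point
# iteration, since the fluxes depend on y and vice versa.
y = dict(x)
for _ in range(100):
    J = {i: perm[i] * (p_f * x[i] - p_p * y[i]) for i in perm}
    tot = sum(J.values())
    y = {i: J[i] / tot for i in J}

print({i: round(y[i], 3) for i in y})
```

With these numbers the permeate is strongly enriched in CO₂, which is the basic effect a multi-stage model then cascades.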
|
655 |
Future characteristics of Offshore Support Vessels / Future characteristics of OSVs. Rose, Robin Sebastian Koske, January 2011
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 101-104). / The objective of this thesis is to examine trends in Offshore Support Vessel (OSV) design and determine the future characteristics of OSVs based on industry insight and supply chain models. Specifically, this thesis focuses on Platform Supply Vessels (PSVs); the advantages of certain design characteristics are analyzed by modeling representative offshore exploration and production scenarios and selecting support vessels that minimize costs while meeting supply requirements. A review of current industry practices and literature suggests that offshore exploration and production activities will move into deeper water further from shore, and as a result supply requirements will increase significantly. A review of the current fleet and orderbook reveals an aging fleet of traditional vessels with little deepwater capability and a growing, young fleet of advanced vessels capable of deepwater support. A single-vessel supply chain analysis shows that traditional vessels outperform larger vessels for shallow-water resupply activities, while modern vessels, and vessels significantly larger than modern vessels, are more cost-effective for deepwater operations. As offshore oilfield supply is more complicated than a single vessel supplying a single platform, we develop a mixed integer linear program model of the fleet selection process and apply it to representative offshore exploration and production scenarios. The model is used to evaluate the cost-effectiveness of representative vessels and the value of flexibility in vessel design for the oilfield operator.
Incorporating industry insight into the results from the supply chain analyses, this study concludes that a) offshore exploration and production will move further offshore into deeper water, b) OSVs will become significantly larger both in response to the increased cargo need as well as to meet upcoming regulations, c) crew transfer will continue to be done primarily by helicopter, d) OSVs will become significantly more fuel efficient, e) high-specification, flexible OSV designs will continue to be built, and f) major oil companies will focus on safety and redundancy in OSV designs. / by Robin Sebastian Koske Rose. / S.M.
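The fleet-selection idea can be sketched as a tiny integer program, here solved by brute-force enumeration rather than the mixed integer linear programming formulation of the thesis; vessel classes, capacities, and day rates are hypothetical:

```python
import itertools

# Illustrative vessel classes: capacity in deck-cargo units delivered
# per day, day rate in $k/day. Numbers are hypothetical.
vessels = {
    "traditional PSV": {"capacity": 100, "day_rate": 15},
    "modern PSV":      {"capacity": 250, "day_rate": 25},
    "large PSV":       {"capacity": 400, "day_rate": 35},
}
demand = 900  # required daily supply capacity for a deepwater scenario

# Choose vessel counts minimizing charter cost subject to meeting the
# supply requirement; small enough to enumerate exhaustively.
best = None
names = list(vessels)
for counts in itertools.product(range(6), repeat=len(names)):
    cap  = sum(c * vessels[n]["capacity"] for c, n in zip(counts, names))
    cost = sum(c * vessels[n]["day_rate"] for c, n in zip(counts, names))
    if cap >= demand and (best is None or cost < best[0]):
        best = (cost, dict(zip(names, counts)))

print("min daily cost and fleet:", best)
```

Even this toy version reproduces the qualitative finding: for large deepwater demand, fleets built around larger vessels dominate on cost per unit of capacity.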
|
656 |
Robust scheduling in forest operations planning. Lim, Lui Cheng, January 2008
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (p. 67-68). / Forest operations planning is a complex decision process which considers multiple objectives on the strategic, tactical, and operational horizons. Decisions such as where to harvest, and in what order over different time periods, are just some of the many diverse and complex decisions that need to be made. An important issue in real-world optimization of forest harvesting planning is how to treat uncertainty of a biological nature, namely the uncertainty due to different growth rates of trees, which affects their respective yields. Another important issue is the effective use of capital-intensive forest harvesting machinery through suitable routing and scheduling assignments. The focus of this thesis is to investigate the effects of incorporating a robust formulation and a machinery assignment problem jointly in a forest harvesting model. The amount of variability in the harvest yield can be measured by sampling from historical data, and suitable protection against uncertainty can be set by incorporating a suitable robust formulation. A trade-off between robustness to uncertainty and deterioration in the objective value ensues. Using models based on industrial and slightly modified data, both the robust and routing formulations have been shown to affect the solution and its underlying structure, thus making them necessary considerations. A study of feasibility using Monte Carlo simulation is then undertaken to evaluate the difference in average performance of the formulations, as well as to obtain a method of setting the required protection levels with an acceptable probability of infeasibility under a given set of scenarios. / by Lui Cheng, Lim. / S.M.
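The Monte Carlo feasibility study described in this abstract can be sketched as follows: compare the infeasibility probability of a plan sized to meet demand only in expectation against a plan that adds protection. Block yields, the uncertainty model, and the demand figure are all invented for illustration:

```python
import random

random.seed(0)
demand = 450  # contracted harvest volume (illustrative, not thesis data)

def infeasibility_prob(plan, trials=20000):
    """Fraction of yield scenarios in which the plan misses demand.
    Each block's realized yield varies +/-20% around nominal (uniform)."""
    fails = 0
    for _ in range(trials):
        realized = sum(y * random.uniform(0.8, 1.2) for y in plan)
        if realized < demand:
            fails += 1
    return fails / trials

nominal_plan = [95] * 5   # just enough harvest area in expectation
robust_plan  = [95] * 6   # one extra block as protection

p_nom = infeasibility_prob(nominal_plan)
p_rob = infeasibility_prob(robust_plan)
print(f"nominal: {p_nom:.3f}  robust: {p_rob:.3f}")
```

Sweeping the protection level and reading off the simulated infeasibility probability is exactly the kind of trade-off curve the thesis's feasibility study produces.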
|
657 |
Computational tools for enabling longitudinal skin image analysis. Lee, Kang Qi Ian, January 2016
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 165-174). / We present a set of computational tools that enable quantitative analysis of longitudinally acquired skin images: the assessment and characterization of the evolution of skin features over time. A framework for time-lapsed skin imaging is proposed. A nonrigid registration algorithm based on multiple plane detection for landmark identification accurately aligns pairs of longitudinal skin images. If dense and thick hairs are present, then nonrigid registration is used to reconstruct the skin texture of occluded regions by recording multiple images from the same area. Realistic reconstruction of occluded skin texture is aided by an automatic hair segmentation algorithm and guided painting method based on image blending. We demonstrate that constituent algorithms in this framework are accurate and robust in a multitude of scenarios. In addition, a methodology for rigorous longitudinal analysis of skin microrelief structure is introduced. Following rigid registration, a microrelief junction point matching algorithm based on point pattern matching is shown to accurately match two sets of junction points. Immediate applications for these computational tools are change detection for pigmented skin lesions and deformation field computation of the skin surface under stress using only visual features of the skin. Prospective applications include new insights in skin physiology and diseases from the capability to precisely track movements of the microrelief structure over time and localization of skin images on the body. / by Kang Qi Ian Lee. / S.M.
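The rigid-registration step preceding junction-point matching can be sketched with the classical Kabsch/Procrustes solution. Unlike the point pattern matching algorithm of the thesis, this assumes the point correspondences are already known; the point set and transform are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "microrelief junction points" and a rotated + shifted copy,
# standing in for two longitudinal skin images (values are arbitrary).
P = rng.uniform(0, 10, size=(40, 2))
theta = np.deg2rad(12)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Q = P @ R_true.T + np.array([2.0, -1.0])

# Kabsch/Procrustes: recover the rigid transform from matched pairs.
Pc, Qc = P - P.mean(0), Q - Q.mean(0)
U, _, Vt = np.linalg.svd(Pc.T @ Qc)
R = (U @ Vt).T
if np.linalg.det(R) < 0:      # guard against an accidental reflection
    Vt[-1] *= -1
    R = (U @ Vt).T
t = Q.mean(0) - P.mean(0) @ R.T

err = np.abs(P @ R.T + t - Q).max()
print("max alignment error:", err)
```

In practice the correspondences are unknown, which is why the thesis needs a point pattern matching algorithm on top of this alignment step.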
|
658 |
On the computation of probabilities and eigenvalues for random and non-random matrices. Peruvamba Sundaresh, Vignesh, January 2009
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 43-44). / Can you imagine doing hundreds of millions of operations on non-integers and not obtaining a single round-off error? For n < 12, the algorithm used in this thesis does exactly that. We took advantage of a floating-point property that we have not seen used before. If only we had quad precision, we could have extended the algorithm without round-off error to even higher values of n. The problem in question concerns whether the eigenvalues are real or complex. The eigenvalues of an n-by-n real random matrix whose elements are independent standard normal random variables are examined. An exact expression to determine the probability P_{n,k} that exactly k eigenvalues are real is derived in [1]. This expression was used to compute the probabilities P_{n,k}, but the computation was achieved only up to n = 9. For higher values of n, the symbolic expressions generated during the course of the algorithm, as expressed in Mathematica code, require large amounts of memory. In this thesis, we target development of a more efficient algorithm. The symbolic algorithm implemented in Mathematica is converted into an equivalent numerical version and is implemented using MATLAB. After implementing the serial code in MATLAB, the code is parallelized using a client-server parallel computing platform named Star-P. This modified code implementation, along with superior hardware in terms of better processor speeds and larger memory, has enabled the probability evaluation for all values of k up to n = 11, and for certain k values for n = 12 and 13. / (cont.)
An expression for the expected number of real eigenvalues, E_n = Σ_{k=0}^{n} k P_{n,k}, is obtained in [2]. Results relating the rational and irrational parts of the summations Σ_{k=0}^{n} k P_{n,k}, Σ_{k=0}^{n} P_{n,k}, and Σ_{k=0}^{n} (n-k) P_{n,k} are conjectured. Three eigenvalue algorithms, the block Davidson, the block Krylov-Schur, and the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method, are analyzed and their performance on different types of matrices is studied. The performance of the algorithms as a function of the parameters (block size, number of blocks, and type of preconditioner) is also examined in this thesis. The block Krylov-Schur algorithm has proved much superior to the others in terms of computation time for the matrices used in the experiments. It has also been more efficient in finding eigenvalues for matrices representing grids with Neumann boundary conditions, which have at least one zero eigenvalue. There exists one optimal combination of block size and number of blocks at which the time for eigenvalue computation is minimum; these parameters have different effects in different cases. The block Davidson algorithm has also been incorporated with a locking mechanism, and this implementation is found to be much superior to its counterpart without locking for matrices that have at least one zero eigenvalue. / by Vignesh Peruvamba Sundaresh. / S.M.
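The probabilities discussed in this abstract can be checked empirically by Monte Carlo, exploiting the fact that complex eigenvalues of a real matrix come in conjugate pairs (so the number of real eigenvalues k always has the same parity as n). A sketch for n = 4:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of the probability that an n-by-n matrix with
# i.i.d. standard normal entries has exactly k real eigenvalues.
n, trials = 4, 20000
counts = np.zeros(n + 1)
for _ in range(trials):
    lam = np.linalg.eigvals(rng.standard_normal((n, n)))
    # Real eigenvalues of a real matrix come out with exactly zero
    # imaginary part from the real Schur form, so this count is exact.
    k = int(np.sum(np.abs(lam.imag) < 1e-9))
    counts[k] += 1
probs = counts / trials

print({k: round(p, 3) for k, p in enumerate(probs) if p > 0})
print("expected number of real eigenvalues:",
      round(sum(k * p for k, p in enumerate(probs)), 3))
```

This is far cruder than the exact symbolic computation of the thesis, but it gives a quick sanity check on any computed table of values.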
|
659 |
Impact of carbon emission regulatory policies on the electricity market: a simulation study. Tiwari, Sandeep, S.M. Massachusetts Institute of Technology, January 2010
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 119-121). / With ever-rising concerns regarding global warming and other dangerous effects of CO₂, there have been efforts to reduce CO₂ emissions around the world by adopting more efficient technologies and alternate green or carbon-neutral fuels. However, these technologies require large investments, and hence to make them economically viable there should be suitable incentives from the government in the form of emission regulatory policies such as carbon taxation and carbon cap-and-trade. In this research, a simulation study was carried out to analyze the impact of different carbon emission regulatory policies, including cap-and-trade and carbon taxation, on the utilities of various stakeholders of the electricity market. An agent-based simulation approach was used to model the market, where each market stakeholder was represented as an autonomous agent. We use the simulation model to compare the effectiveness of cap-and-trade and taxation policies in achieving emission reduction targets. We observe significant windfall profits for electricity producers under the cap-and-trade policy. Therefore, for the same emission level, the cost to consumers is higher under cap-and-trade than under taxation. Our results suggest that cap-and-trade policy might be ineffective in emission reduction when the market is not fully efficient. Moreover, the simplicity of the taxation model gives the government better control over emissions. Based on our study, we recommend that the present model be extended to more efficient cap-and-trade mechanisms by incorporating multistage periods, auctioning of carbon emission permits, and banking of carbon emission permits. / by Sandeep Tiwari. / S.M.
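A deterministic merit-order sketch (much simpler than the agent-based market model of the thesis) shows the basic mechanism by which a carbon price reorders dispatch and cuts emissions; generator data are hypothetical:

```python
# Illustrative merit-order dispatch under a carbon tax. Fuel costs
# ($/MWh) and emission intensities (tCO2/MWh) are hypothetical.
generators = [
    {"name": "coal", "cap": 500, "cost": 20.0, "co2": 1.0},
    {"name": "gas",  "cap": 400, "cost": 35.0, "co2": 0.4},
    {"name": "wind", "cap": 200, "cost": 5.0,  "co2": 0.0},
]
demand = 800  # MW

def dispatch(tax):
    """Dispatch cheapest-first with the carbon tax added to marginal
    cost; returns (total emissions, market clearing price)."""
    order = sorted(generators, key=lambda g: g["cost"] + tax * g["co2"])
    left, emissions, price = demand, 0.0, 0.0
    for g in order:
        q = min(g["cap"], left)
        if q > 0:
            emissions += q * g["co2"]
            price = g["cost"] + tax * g["co2"]  # last dispatched unit sets price
        left -= q
        if left == 0:
            break
    return emissions, price

for tax in (0, 20, 60):
    print("tax =", tax, "->", dispatch(tax))
```

A low tax raises the clearing price without changing the dispatch (the windfall-profit effect), while a high enough tax flips coal and gas in the merit order and emissions fall.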
|
660 |
Optimal operating strategy for a storage facility. Zhai, Ning, January 2008
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (p. 100-101). / In this thesis, I derive the optimal operating strategy to maximize the value of a storage facility by exploiting the properties of the underlying natural gas spot price. To achieve this objective, I investigate the optimal operating strategy under three different spot price processes: the one-factor mean-reversion price process with and without seasonal factors, the one-factor geometric Brownian motion price process with and without seasonal factors, and the two-factor short-term/long-term price process with and without seasonal factors. I prove the existence of unique optimal trigger prices, and calculate the trigger prices under certain conditions. I also show that the optimal trigger prices are the prices where marginal revenue equals marginal cost, so a marginal-analysis argument can be used to determine the optimal operating strategy. Once the optimal operating strategy is determined, I use it to obtain the optimal value of the storage facility in three ways: (1) directly using the net present value method; (2) solving the partial differential equations governing the value of the storage facility; and (3) using the Monte Carlo method to simulate the decision-making process. Issues of parameter estimation are also considered in the thesis. / by Ning Zhai. / S.M.
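The one-factor mean-reversion process and a trigger-price operating strategy can be sketched as follows. The triggers here are picked by hand rather than derived from the marginal-revenue-equals-marginal-cost condition, and all parameters are illustrative:

```python
import math
import random

random.seed(42)

# One-factor mean-reverting (Ornstein-Uhlenbeck) log-price with exact
# discretization; parameter values are illustrative only.
kappa, mu, sigma, dt = 2.0, math.log(5.0), 0.5, 1.0 / 252

def simulate(n_steps, x0=math.log(5.0)):
    """Simulate a daily spot price path exp(x_t)."""
    a = math.exp(-kappa * dt)
    s = sigma * math.sqrt((1 - a * a) / (2 * kappa))
    x, path = x0, []
    for _ in range(n_steps):
        x = mu + a * (x - mu) + s * random.gauss(0, 1)
        path.append(math.exp(x))
    return path

# Bang-bang trigger strategy: inject when the price is low, withdraw
# when it is high (hand-picked trigger levels, not the optimal ones).
p_buy, p_sell, cap = 4.5, 5.5, 100.0

def strategy_value(path):
    inv, cash = 0.0, 0.0
    for p in path:
        if p < p_buy and inv < cap:
            inv += 1.0
            cash -= p
        elif p > p_sell and inv > 0:
            inv -= 1.0
            cash += p
    return cash + inv * path[-1]   # mark remaining gas to market

values = [strategy_value(simulate(252)) for _ in range(200)]
print("mean strategy value:", sum(values) / len(values))
```

Sweeping `p_buy` and `p_sell` in such a Monte Carlo loop approximates the optimal triggers that the thesis characterizes analytically.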
|