171.
Robust scheduling in forest operations planning. Lim, Lui Cheng. January 2008.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (p. 67-68). / Forest operations planning is a complex decision process that considers multiple objectives on the strategic, tactical and operational horizons. Decisions such as where to harvest, and in what order over different time periods, are just some of the many diverse and complex decisions that need to be made. An important issue in real-world optimization of forest harvesting planning is how to treat uncertainty of a biological nature, namely the uncertainty due to the differing growth rates of trees, which affects their respective yields. Another important issue is the effective use of capital-intensive forest harvesting machinery through suitable routing and scheduling assignments. The focus of this thesis is to investigate the effects of incorporating a robust formulation and a machinery assignment problem jointly into a forest harvesting model. The variability in the harvest yield can be measured by sampling from historical data, and suitable protection against uncertainty can then be set through a robust formulation; a trade-off ensues between robustness to uncertainty and deterioration in the objective value. Using models based on industrial and slightly modified data, both the robust and routing formulations are shown to affect the solution and its underlying structure, making them necessary considerations. A feasibility study using Monte Carlo simulation is then undertaken to evaluate the difference in the average performance of the formulations, and to obtain a method of setting the required protection levels with an acceptable probability of infeasibility under a given set of scenarios. / by Lui Cheng Lim. / S.M.
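As a concrete illustration of the protection idea described above, the following Python sketch estimates by Monte Carlo how the probability of violating a harvest-volume constraint falls as a Bertsimas-Sim style protection level Gamma grows. It is illustrative only: the data, the 15% deviation bound, and all names are assumptions of this sketch, not the thesis model.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20
    x = rng.uniform(5.0, 15.0, n)           # hypothetical harvested area per stand
    y_nom = rng.uniform(80.0, 120.0, n)     # nominal yield per unit area
    y_dev = 0.15 * y_nom                    # assumed maximum yield deviation (15%)

    def protected_supply(gamma):
        """Supply guaranteed when the gamma worst yield shortfalls occur
        simultaneously (Bertsimas-Sim style protection level)."""
        shortfall = np.sort(x * y_dev)[::-1]
        return x @ y_nom - shortfall[:gamma].sum()

    def prob_infeasible(gamma, trials=10_000):
        demand = protected_supply(gamma)    # demand the robust plan promises to meet
        y = y_nom + y_dev * rng.uniform(-1.0, 1.0, (trials, n))
        return np.mean(y @ x < demand)

    for gamma in (0, 2, 5, 10):
        print(f"Gamma={gamma:2d}  P(infeasible) ~ {prob_infeasible(gamma):.3f}")

The printed probabilities fall as Gamma grows, at the cost of promising less volume: the robustness/objective trade-off the abstract describes.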
172.
Computational tools for enabling longitudinal skin image analysis. Lee, Kang Qi Ian. January 2016.
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 165-174). / We present a set of computational tools that enable quantitative analysis of longitudinally acquired skin images: the assessment and characterization of the evolution of skin features over time. A framework for time-lapse skin imaging is proposed. A nonrigid registration algorithm based on multiple-plane detection for landmark identification accurately aligns pairs of longitudinal skin images. If dense, thick hairs are present, nonrigid registration is used to reconstruct the skin texture of occluded regions from multiple images of the same area. Realistic reconstruction of occluded skin texture is aided by an automatic hair segmentation algorithm and a guided painting method based on image blending. We demonstrate that the constituent algorithms in this framework are accurate and robust in a multitude of scenarios. In addition, a methodology for rigorous longitudinal analysis of the skin microrelief structure is introduced: following rigid registration, a microrelief junction point matching algorithm based on point pattern matching is shown to accurately match two sets of junction points. Immediate applications of these computational tools are change detection for pigmented skin lesions and computation of the deformation field of the skin surface under stress using only visual features of the skin. Prospective applications include new insights into skin physiology and disease, from the capability to precisely track movements of the microrelief structure over time, and localization of skin images on the body. / by Kang Qi Ian Lee. / S.M.
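The junction-point matching step lends itself to a compact illustration. The sketch below pairs two synthetic point sets by mutual nearest-neighbour gating; it is a simplified stand-in, not the thesis's point-pattern-matching algorithm, and the distance gate and jitter level are assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def match_junctions(pts_a, pts_b, max_dist=3.0):
        """Pair up points that are mutual nearest neighbours within max_dist."""
        tree_a, tree_b = cKDTree(pts_a), cKDTree(pts_b)
        d_ab, j_of_i = tree_b.query(pts_a)   # nearest b-point for each a-point
        _, i_of_j = tree_a.query(pts_b)      # nearest a-point for each b-point
        return [(i, j) for i, j in enumerate(j_of_i)
                if i_of_j[j] == i and d_ab[i] <= max_dist]

    rng = np.random.default_rng(1)
    a = rng.uniform(0, 100, (50, 2))         # synthetic junction points
    b = a + rng.normal(0, 0.5, a.shape)      # the same points, slightly displaced
    print(len(match_junctions(a, b)), "of 50 junction points matched")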
173.
On the computation of probabilities and eigenvalues for random and non-random matrices. Peruvamba Sundaresh, Vignesh. January 2009.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 43-44). / Can you imagine doing hundreds of millions of operations on non-integers and not obtaining a single round-off error? For n < 12, the algorithm used in this thesis does exactly that, taking advantage of a floating-point property we have not seen used before. With quad precision the algorithm could be extended, still without round-off error, to higher values of n. The problem in question concerns whether eigenvalues are real or complex: the eigenvalues of an n-by-n real random matrix whose elements are independent standard normal random variables are examined. An exact expression for the probability $P_{n,k}$ that exactly k eigenvalues are real is derived in [1]. That expression was used to compute the probabilities $P_{n,k}$, but only up to n = 9: for higher values of n, the symbolic expressions generated in the course of computing an exact probability, as expressed in Mathematica code, require large amounts of memory. In this thesis, we target the development of a more efficient algorithm. The symbolic algorithm implemented in Mathematica is converted into an equivalent numerical version implemented in MATLAB, and the serial MATLAB code is then parallelized using Star-P, a client-server parallel computing platform. This modified implementation, together with superior hardware in terms of processor speed and memory, has enabled the probability evaluation for all values of k up to n = 11, and for certain values of k for n = 12 and 13. / (cont.) An expression for the expected number of real eigenvalues, $E_n = \sum_{k=0}^{n} k\,P_{n,k}$, is obtained in [2], and results relating the rational and irrational parts of this and related summations over the $P_{n,k}$ are conjectured. Three eigenvalue algorithms, the block Davidson, the block Krylov-Schur, and the Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method, are analyzed, and their performance on different types of matrices is studied. The performance of the algorithms as a function of the parameters block size, number of blocks, and type of preconditioner is also examined. For the matrices used in the experiments, the block Krylov-Schur algorithm proved much superior to the others in terms of computation time; it was also more efficient at finding eigenvalues of matrices representing grids with Neumann boundary conditions, which have at least one zero eigenvalue. There exists one optimal combination of block size and number of blocks at which the time for eigenvalue computation is minimal, and these parameters have different effects in different cases. The block Davidson algorithm was also augmented with a locking mechanism, and this implementation is found to be much superior to its counterpart without locking for matrices that have at least one zero eigenvalue. / by Vignesh Peruvamba Sundaresh. / S.M.
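For a quick numerical check of the quantities discussed, the sketch below estimates $P_{n,k}$ by Monte Carlo, sampling Gaussian matrices and counting real eigenvalues. This is an assumption-laden stand-in for intuition, not the exact symbolic algorithm of the thesis; the trial count and tolerance are arbitrary choices.

    import numpy as np

    def estimate_pnk(n, trials=20_000, tol=1e-9, seed=0):
        """Empirical distribution of the number k of real eigenvalues."""
        rng = np.random.default_rng(seed)
        counts = np.zeros(n + 1)
        for _ in range(trials):
            lam = np.linalg.eigvals(rng.standard_normal((n, n)))
            counts[int(np.sum(np.abs(lam.imag) < tol))] += 1
        return counts / trials

    p = estimate_pnk(4)
    for k in (0, 2, 4):                      # k must have the same parity as n
        print(f"P(4,{k}) ~ {p[k]:.4f}")
    print("E_4 ~", np.arange(5) @ p)         # expected number of real eigenvalues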
174.
Impact of carbon emission regulatory policies on the electricity market: a simulation study. Tiwari, Sandeep. January 2010.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 119-121). / With ever-rising concerns about global warming and other dangerous effects of CO2, there have been efforts worldwide to reduce CO2 emissions by adopting more efficient technologies and alternative green or carbon-neutral fuels. However, these technologies require large investments, and to make them economically viable there must be suitable incentives from the government in the form of emission regulatory policies such as carbon taxation and carbon cap-and-trade. In this research, a simulation study was carried out to analyze the impact of different carbon emission regulatory policies, including cap-and-trade and carbon taxation, on the utilities of the various stakeholders of the electricity market. An agent-based simulation approach was used, with each market stakeholder represented as an autonomous agent. We use the simulation model to compare the effectiveness of cap-and-trade and taxation policies in achieving emission reduction targets. We observe significant windfall profits for electricity producers under cap-and-trade; therefore, for the same emission level, the cost to consumers is higher under cap-and-trade than under taxation. Our results suggest that cap-and-trade may be ineffective in reducing emissions when the market is not fully efficient, and the simplicity of the taxation model gives the government better control over emissions. Based on our study, we recommend extending the present model to more efficient cap-and-trade mechanisms by incorporating multistage periods, auctioning of carbon emission permits, and permit banking. / by Sandeep Tiwari. / S.M.
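The mechanics of backing a permit price out of an emissions cap can be shown with a toy dispatch model. The sketch below is illustrative only: it replaces the thesis's agent-based market with a static merit-order dispatch, and every cost, emission, and capacity figure is made up.

    import numpy as np

    mc    = np.array([20.0, 35.0, 50.0])     # marginal cost, $/MWh
    emis  = np.array([1.0, 0.5, 0.0])        # tCO2 per MWh
    cap_g = np.array([60.0, 60.0, 30.0])     # capacity, MWh
    demand = 100.0

    def dispatch(carbon_price):
        """Cheapest-first dispatch under carbon-inclusive costs; returns
        (generation, total emissions, marginal electricity price)."""
        cost = mc + carbon_price * emis
        order = np.argsort(cost)
        gen, left = np.zeros_like(mc), demand
        for i in order:
            gen[i] = min(cap_g[i], left)
            left -= gen[i]
        price = cost[order][np.searchsorted(np.cumsum(gen[order]), demand)]
        return gen, gen @ emis, price

    for tax in (0.0, 40.0):                  # a carbon tax shifts the merit order
        _, e, p = dispatch(tax)
        print(f"tax ${tax:>5}: emissions {e:5.1f} tCO2, power price ${p:.2f}/MWh")

    cap = 60.0                               # cap-and-trade: search for the permit
    permit = next(t for t in np.arange(0.0, 200.0, 0.5) if dispatch(t)[1] <= cap)
    print(f"cap of {cap} tCO2 clears at a permit price of ~${permit}/tCO2")

Producers holding freely allocated permits would also pocket the higher power price, which is the windfall-profit effect noted in the abstract.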
175.
Optimal operating strategy for a storage facility. Zhai, Ning. January 2008.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (p. 100-101). / In this thesis, I derive the optimal operating strategy that maximizes the value of a storage facility by exploiting properties of the underlying natural gas spot price. To achieve this, I investigate the optimal operating strategy under three different spot price processes, each with and without seasonal factors: a one-factor mean-reverting price process, a one-factor geometric Brownian motion price process, and a two-factor short-term/long-term price process. I prove the existence of unique optimal trigger prices and calculate them under certain conditions. I also show that the optimal trigger prices are the prices at which marginal revenue equals marginal cost, so a marginal-analysis argument can be used to determine the optimal operating strategy. Once the optimal operating strategy is determined, I use it to obtain the optimal value of the storage facility in three ways: (1) directly, by the net present value method; (2) by solving the partial differential equations governing the value of the storage facility; and (3) by Monte Carlo simulation of the decision-making process. Issues of parameter estimation are also considered. / by Ning Zhai. / S.M.
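The third valuation route, Monte Carlo simulation of the decision-making process, can be sketched compactly. All parameters below are assumptions chosen for illustration, and the trigger prices are fixed inputs rather than the optimal ones derived in the thesis.

    import numpy as np

    kappa, mu, sigma = 2.0, np.log(6.0), 0.4     # mean-reversion speed, level, volatility
    r, dt, T = 0.05, 1.0 / 365.0, 1.0
    cap, rate = 100.0, 1.0                       # capacity and per-step flow limit

    def storage_value(p_buy, p_sell, paths=5_000, seed=0):
        """Average discounted cash flow of a buy-low/sell-high trigger policy
        under a one-factor mean-reverting log spot price."""
        rng = np.random.default_rng(seed)
        x = np.full(paths, mu)                   # log spot price
        inv = np.zeros(paths)                    # inventory
        cash = np.zeros(paths)
        for step in range(int(T / dt)):
            x += kappa * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)
            s = np.exp(x)
            buy = (s < p_buy) & (inv < cap)      # inject while the price is low
            sell = (s > p_sell) & (inv > 0)      # withdraw while the price is high
            q_in = np.minimum(rate, cap - inv) * buy
            q_out = np.minimum(rate, inv) * sell
            cash += np.exp(-r * step * dt) * s * (q_out - q_in)
            inv += q_in - q_out
        return cash.mean()

    print(f"value with triggers (buy 5, sell 7): {storage_value(5.0, 7.0):.1f}")

Sweeping the two triggers and picking the pair that maximizes this estimate mimics, crudely, the marginal-analysis characterization of the optimal strategy.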
176.
Computational modeling of crack initiation in cross-roll piercing. Chiluveru, Sudhir. January 2007.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 81-89). / The Mannesmann process is the preferred method in the oil industry for fabricating hollow pipes. The critical phenomenon in this process is the formation of a small round hole at the center of the cylindrical billet ahead of the piercing plug. In this work, the crack initiation that leads to the creation of this small hole has been modeled. The Gurson-Tvergaard-Needleman model of porous plasticity is used to simulate the Mannesmann effect. The appearance of a crack at the center of the cylindrical bar is demonstrated, and the stress profiles, equivalent plastic strain profiles, and porosity distribution during the deformation process are analyzed. The influence of various model parameters on the evolution of porosity in the specimen is studied, and other simple ductile fracture criteria proposed in the literature are also implemented. Finally, an interface model for fracture using the discontinuous Galerkin framework combined with a cohesive fracture law is implemented; this approach and its advantages are illustrated on the tensile loading of a simple beam specimen. / by Sudhir Chiluveru. / S.M.
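The GTN yield condition at the heart of such a simulation is standard and easy to state in code. A minimal sketch follows; the typical q1, q2, q3 values are assumptions of this illustration, and the thesis's calibration may differ.

    import numpy as np

    def gtn_yield(sig_eq, sig_m, f, sig_y, q1=1.5, q2=1.0, q3=2.25):
        """GTN yield function; the material yields when this reaches zero.
        sig_eq: von Mises stress, sig_m: mean (hydrostatic) stress,
        f: void volume fraction (porosity), sig_y: matrix flow stress."""
        return ((sig_eq / sig_y) ** 2
                + 2.0 * q1 * f * np.cosh(1.5 * q2 * sig_m / sig_y)
                - (1.0 + q3 * f ** 2))

    # Growing porosity shrinks the yield surface (here at sig_m/sig_y = 0.5):
    for f in (0.0, 0.05, 0.10):
        rhs = 1.0 + 2.25 * f ** 2 - 3.0 * f * np.cosh(0.75)
        sig_eq = np.sqrt(max(rhs, 0.0))          # solve gtn_yield(...) = 0 for sig_eq
        assert abs(gtn_yield(sig_eq, 0.5, f, 1.0)) < 1e-12
        print(f"f = {f:.2f}: yield at sig_eq/sig_y = {sig_eq:.3f}")

At f = 0 the von Mises criterion is recovered; rising porosity lowers the stress at which yield, and eventually void coalescence and cracking, occurs.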
177.
A comparison of discrete and flow-based models for air traffic flow management. Phu, Thi Vu. January 2008.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (leaves 73-74). / The steady increase of congestion in air traffic networks has resulted in significant economic losses and potential safety issues in air transportation. A potential way to reduce congestion is to adopt efficient air traffic management policies, such as optimally scheduling and routing air traffic throughout the network. In recent years, several models have been proposed to predict and manage air traffic. This thesis focuses on the comparison of two such approaches to air traffic flow management: (i) a discrete mixed-integer programming model, and (ii) a continuous flow-based model. The continuous model is applied in a multi-commodity setting to take into account the origins and destinations of the aircraft, and is optimized by sequential quadratic programming. A comparison of the performance of the two models on a set of large-scale test cases is provided. Preliminary results suggest that the linear programming relaxation of the discrete model provides results similar to the continuous flow-based model for high volumes of air traffic. / by Thi Vu Phu. / S.M.
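The role of the linear programming relaxation can be illustrated on a toy instance. The sketch below is a made-up three-flight slot-assignment problem, not the thesis formulation: it solves the LP relaxation of a discrete ground-holding model, and for this assignment-type structure the relaxation happens to return an integral solution.

    import numpy as np
    from scipy.optimize import linprog

    n_f, n_t = 3, 4                          # flights, landing slots
    sched = [0, 0, 1]                        # earliest (scheduled) slot per flight
    BIG = 100.0                              # penalty forbidding landing before schedule
    cost = np.array([[t - s if t >= s else BIG for t in range(n_t)] for s in sched])

    # Each flight takes exactly one slot.
    A_eq = np.zeros((n_f, n_f * n_t))
    for f in range(n_f):
        A_eq[f, f * n_t:(f + 1) * n_t] = 1.0
    # Each slot accepts at most one flight.
    A_ub = np.zeros((n_t, n_f * n_t))
    for t in range(n_t):
        A_ub[t, t::n_t] = 1.0

    res = linprog(cost.ravel(), A_ub=A_ub, b_ub=np.ones(n_t),
                  A_eq=A_eq, b_eq=np.ones(n_f), bounds=(0.0, 1.0))
    print("total delay:", res.fun)
    print(res.x.reshape(n_f, n_t).round(2))  # the relaxed variables come out integral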
178.
Graduate school introductory computational simulation course pedagogy. Proctor, Laura L. (Laura Lynne). January 2009.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / Vita. Cataloged from PDF version of thesis. / Numerical methods and algorithms have developed and matured vastly over the past three decades, now that computational analysis can be performed on almost any personal computer. There is a need to teach and present this material in a manner that is easy for the reader to understand and then go forward and use. Three popular courses at MIT were without lecture notes; this thesis presents those notes. The first chapter covers material taught in Numerical Methods for Partial Differential Equations (2.097/6.339/16.920), specifically the integral equation methods section of the course; chapter two presents the notes for Introduction to Numerical Simulation (2.096/6.336/16.910); and chapter three contains the notes for Foundations of Algorithms and Computational Techniques in Systems Biology (6.581/20.482). These course notes give a broad overview of many algorithms and numerical methods that can be used to solve problems spanning many fields, from biology to aerospace to electronics to mechanics. / by Laura L. Proctor. / S.M.
179.
A fast 3D full-wave solver for nanophotonics. Zhang, Lei. January 2007.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 57-61). / Conventional fast integral equation solvers seem to be ideal approaches for simulating 3-D nanophotonic devices, as these devices are open structures, generating fields both in an interior channel and in the infinite exterior domain. However, many devices of interest, such as optical ring-resonator filters or waveguides, have channels that cannot be terminated without generating numerical reflections. Designing absorbers for these channels is therefore a new problem for integral equation methods, which were initially developed for problems with finite surfaces. In this thesis we present a technique for eliminating such reflections by making the channel volume conductive outside the domain of interest. The surface integral equation (SIE) method is employed to take advantage of the piecewise homogeneous medium: the Poggio-Miller-Chang-Harrington-Wu (PM-CHW) formulation is used, and the boundary element method is employed to construct and solve the linear system. Moreover, exploiting the block-Toeplitz structure of the system matrix together with the FFT reduces the memory requirement and accelerates the circulant matrix-vector products. Numerical experiments demonstrate that this method can effectively reduce reflections to 1%, and that it is easily incorporated into a fast integral equation solver. / by Lei Zhang. / S.M.
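The FFT acceleration mentioned above rests on a classical identity: a Toeplitz matrix embeds in a circulant one, which the FFT diagonalizes, giving an O(n log n) matrix-vector product. A minimal self-contained sketch on generic Toeplitz data (not the solver's actual system matrix):

    import numpy as np

    def toeplitz_matvec(col, row, x):
        """y = T x, where T has first column `col` and first row `row`
        (col[0] == row[0]), via circulant embedding and the FFT."""
        n = len(x)
        c = np.concatenate([col, [0.0], row[:0:-1]])   # 2n-point circulant column
        y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
        return y[:n].real

    rng = np.random.default_rng(0)
    n = 512
    col, row = rng.standard_normal(n), rng.standard_normal(n)
    row[0] = col[0]
    x = rng.standard_normal(n)
    T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
                  for i in range(n)])                  # dense T, for checking only
    print("max error:", np.abs(T @ x - toeplitz_matvec(col, row, x)).max())

Only the first column and row of T are ever stored, which is the memory saving the abstract refers to.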
180.
Fully-kinetic PIC simulations for Hall-effect thrusters. Fox, Justin M. January 2007.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 173-177). / In recent years, many groups have numerically modeled the near-anode region of a Hall thruster in an attempt to better understand the physics of thruster operation. Originally, simulations assumed a continuum approximation for electrons and used magnetohydrodynamic fluid equations to model the significant processes. While these codes were computationally efficient, their applicability to non-equilibrated regions of the thruster, such as wall sheaths, was limited, and their accuracy was predicated on the assumption that the energy distributions of the various species remain Maxwellian at all times. The next generation of simulations used the fully-kinetic particle-in-cell (PIC) model. Although much more computationally expensive than the fluid codes, full-PIC codes allow for non-equilibrated thruster regions and do not rely on Maxwellian distributions. However, these simulations suffered for two main reasons. First, due to the high computational cost, the fine meshing near boundaries that would have been required to properly resolve wall sheaths was often not attempted. Second, PIC is an inherently noisy statistical method, and the extreme tails of the energy distributions were often not adequately sampled due to high-energy particle dissipation. The current work initiates a third generation of Hall thruster simulation: a PIC-Vlasov hybrid model was implemented, using adaptive meshing techniques to provide automatically scalable resolution of fine structures during the simulation. The code retains the accuracy and versatility of a PIC simulation while intermittently recalculating and smoothing the particle distribution functions within individual cells to ensure full velocity-space coverage. A non-Monte-Carlo collision technique was also implemented to reduce statistical noise. / (cont.) This thesis details the implementation and thorough benchmarking of the new simulation. The work was conducted with the aid of Delta Search Labs' supercomputing facility and technical expertise; the simulation was fully parallelized using MPI and tested on a 128-processor SGI Origin machine. We gratefully acknowledge funding for portions of this work from the United States Air Force Research Laboratory and the National Science Foundation. / by Justin M. Fox. / S.M.
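One building block of any fully-kinetic PIC code is the particle push. The sketch below shows the standard Boris rotation for advancing a charged particle's velocity through E and B fields; whether the thesis code uses exactly this integrator is an assumption of the illustration, and the field values are merely thruster-scale placeholders.

    import numpy as np

    def boris_push(v, E, B, q_over_m, dt):
        """One velocity update: half electric kick, magnetic rotation,
        half electric kick. Conserves speed exactly in a pure B field."""
        v_minus = v + 0.5 * q_over_m * E * dt
        t = 0.5 * q_over_m * B * dt
        s = 2.0 * t / (1.0 + t @ t)
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)
        return v_plus + 0.5 * q_over_m * E * dt

    # An electron gyrating in a uniform 0.02 T field: its speed should not drift.
    v = np.array([1.0e5, 0.0, 0.0])
    B = np.array([0.0, 0.0, 0.02])
    for _ in range(1000):
        v = boris_push(v, np.zeros(3), B, q_over_m=-1.76e11, dt=1e-12)
    print("relative speed drift:", abs(np.linalg.norm(v) / 1.0e5 - 1.0))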