81

A shifting method for dynamic system Model Order Reduction

Xu, Song, S.M. Massachusetts Institute of Technology January 2007
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 83-86). / Model Order Reduction (MOR) is becoming increasingly important in computational applications. At the same time, the need for more comprehensive models of systems is generating problems with increasing numbers of outputs and inputs. Classical methods, which were developed for Single-Input Single-Output (SISO) systems, generate reduced models that are too computationally inefficient for large Multiple-Input Multiple-Output (MIMO) systems. Although many approaches designed exclusively for MIMO systems have emerged during the past decade, they cannot fully satisfy the need to maintain the characteristics of the original system. This research investigates the reasons for the poor performance of these approaches, using specific examples. Inspired by the existing methods, this research develops a novel way to extract information from MIMO systems by means of their transfer functions. The approach, called the Shifting method, iteratively extracts time-constant shifts from the system and splits the transfer function into several simple systems, referred to as Contour Terms, that outline the system structure, and a reducible system, referred to as the Remainder System, that complements the Contour Terms. This algorithm produces a remainder system that existing approaches can reduce more effectively. The approach works particularly well for systems with either tightly clustered or well separated modes, and all the operations are O(n). The choice of shifts is based on an optimization process, with Chebyshev polynomial roots as initial guesses. This paper concludes with a demonstration of the procedure as well as related error and stability analysis. / by Xu, Song. / S.M.
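The shift-initialization step can be made concrete with a short sketch (hypothetical function name and interval, not from the thesis): the roots of the Chebyshev polynomial T_n are mapped from [-1, 1] onto the range of expected time constants to seed the shift optimization.

```python
import numpy as np

def chebyshev_shift_guesses(n_shifts, t_min, t_max):
    """Map the n roots of the Chebyshev polynomial T_n from [-1, 1]
    onto [t_min, t_max] to seed the shift optimization."""
    k = np.arange(n_shifts)
    roots = np.cos((2 * k + 1) * np.pi / (2 * n_shifts))  # roots of T_n
    return 0.5 * (t_max + t_min) + 0.5 * (t_max - t_min) * roots

# e.g. four initial shift guesses spread over time constants in [1e-3, 1]
print(chebyshev_shift_guesses(4, 1e-3, 1.0))
```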
82

Random obtuse triangles and convex quadrilaterals

Banerjee, Nirjhar January 2009
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 83-85). / We discuss in detail two well-known geometric probability problems. The first deals with finding the probability that a random triangle is obtuse. We initially discuss the various ways of choosing a random triangle. The problem is first analyzed based on random angles (summing to 180 degrees) and random sides (obeying the triangle inequality), which is a direct modification of the Broken Stick Problem. We then study the effect of shape on the probability that three random points chosen inside a figure of that shape form an obtuse triangle. A literature survey reveals the existence of analytical formulae only for the square, the circle and the rectangle. We use Monte Carlo simulation to solve this problem for various shapes, and we show by means of simulation that the probability reaches its minimum value when the random points are taken inside a circle. We then introduce the concept of a Random Walk in Triangles and show that the probability that a triangle formed during the process is obtuse is itself random. We also propose the idea of a Differential Equation in Triangle Space and study the variation of angles during this dynamic process. We then extend this to the problem of calculating the probability that the quadrilateral formed by four random points is convex. The effects of shape are distinctly different from those obtained in the random triangle problem. The effects of true random numbers and normally generated pseudorandom numbers are also compared for both problems. / by Nirjhar Banerjee. / S.M.
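A minimal sketch of the Monte Carlo experiment described above, for the disk (the obtuseness test uses the law of cosines on squared side lengths):

```python
import numpy as np

def prob_obtuse_in_disk(n=200_000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of P(obtuse) for three uniform points in a disk."""
    # exact uniform sampling in the unit disk: r = sqrt(U), theta = 2*pi*V
    r = np.sqrt(rng.uniform(size=(n, 3)))
    th = 2 * np.pi * rng.uniform(size=(n, 3))
    x, y = r * np.cos(th), r * np.sin(th)
    # squared side lengths of each sampled triangle
    d = lambda i, j: (x[:, i] - x[:, j]) ** 2 + (y[:, i] - y[:, j]) ** 2
    s = np.sort(np.column_stack([d(0, 1), d(1, 2), d(2, 0)]), axis=1)
    # obtuse iff the largest squared side exceeds the sum of the other two
    return np.mean(s[:, 2] > s[:, 0] + s[:, 1])

print(prob_obtuse_in_disk())  # the known analytical value is about 0.72
```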
83

Reduced basis method for 2nd order wave equation : application to one-dimensional seismic problem / Reduced basis method for second order wave equation : application to 1D seismic problem

Tan Yong Kwang, Alex January 2006
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2006. / MIT Institute Archives copy: pages 93 and 94 bound in reverse order. / Includes bibliographical references (p. 93-95). / In this thesis, we solve the 2nd order wave equation, which is hyperbolic and linear in nature, to determine the pressure distribution for a one-dimensional seismic problem with smooth initial pressure and rate of pressure change with time. With Dirichlet and Neumann boundary conditions, the pressure distribution is solved for a total of 500 time steps, slightly more than one periodic cycle. Our focus is on the dependence of the output, the average surface pressure as it varies with time, on the system parameters μ, which consist of the earthquake source location x_s and the occurrence time τ. The reduced basis method, the offline-online computational procedures and the associated a posteriori error estimation are developed. We show that the reduced basis pressure distribution is an accurate approximation to the finite element pressure distribution. The greedy algorithm, the procedure for selecting the basis vectors that span the reduced basis space, works reasonably well although a period of slow convergence is experienced: this is because the finite element pressure distributions along the edges of the earthquake source-time space are fairly "unique" and cannot be accurately represented as linear combinations of the existing basis vectors; / (cont.) hence, the greedy algorithm has to bring these "unique" finite element pressure distributions into the reduced basis space individually, accounting for the slow convergence rate. Applying the online stage instead of the finite element method does not reduce the computational cost for the one-dimensional problem: the dimension of the finite element space, 𝒩 = 200, is comparable with the dimension of the reduced basis space, N = 175. However, when the two-dimensional model problem is run, the dimension of the finite element space is 𝒩 = 3.98 × 10⁴ while the dimension of the reduced basis space is N = 267, and the online stage is around 62.2 times faster than the finite element method. The proposition developed for the a posteriori error estimation shows that the maximum effectivity, the maximum ratio of the error bound to the norm of the reduced basis error, is of magnitude O(10³) and increases rapidly as the tolerance is lowered. However, this high value is due to the norm of the reduced basis error itself being small, and is hence not a cause for concern. Furthermore, the ratio of the maximum error bound to the maximum norm of the reduced basis error has a constant magnitude of only O(10²). / (cont.) The maximum output effectivity is significantly larger than the maximum effectivity of the pressure distribution, due to a conservative bound for the dual contribution. The offline-online computational procedures work well in determining the reduced basis pressure distribution. However, during the a posteriori error estimation, heavy cancellation among the various offline-stage matrices results in small values for the square of the dual norm of the residuals, which decrease as the tolerance is lowered. When the tolerance is of magnitude O(10⁻⁶), the square of the dual norm of the residuals is of magnitude O(10⁻¹⁴), which is very close to machine precision. Hence, precision error sets in and the offline-online computational procedures break down.
Finally, the inverse problem works reasonably well, giving a "possibility region" of the set of system parameters where the actual system parameters may reside. We note that at least 9 time steps should be selected for observation to ensure that the rising and dropping regions of the output are detected. The greater the measured field error, the larger the "possibility region" we obtain. / by Tan Yong Kwang, Alex. / S.M.
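The greedy basis-selection procedure described above can be sketched compactly. The following is a generic illustration only: projection error over precomputed snapshots stands in for the thesis's a posteriori error bound, and all names are hypothetical.

```python
import numpy as np

def greedy_reduced_basis(snapshots, tol=1e-6):
    """Greedy selection sketch: repeatedly add the snapshot worst
    approximated by the current basis, orthonormalized on the fly."""
    Q = np.empty((snapshots.shape[0], 0))
    while True:
        residuals = snapshots - Q @ (Q.T @ snapshots)  # projection error
        errs = np.linalg.norm(residuals, axis=0)
        worst = np.argmax(errs)
        if errs[worst] < tol:                          # all snapshots captured
            return Q
        q = residuals[:, worst]
        Q = np.hstack([Q, (q / np.linalg.norm(q))[:, None]])

S = np.random.default_rng(0).normal(size=(200, 40))   # toy snapshot matrix
print(greedy_reduced_basis(S, tol=1e-3).shape)        # (200, n_basis)
```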
84

MPI-based scalable computing platform for parallel numerical application / Message Passing Interface-based scalable computing platform for parallel numerical application

Albaiz, Abdulaziz (Abdulaziz Mohammad) January 2014
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2014. / Cataloged from PDF version of thesis. / Includes bibliographical references (page 61). / Developing parallel numerical applications, such as simulators and solvers, involves a variety of challenges in dealing with data partitioning, workload balancing, data dependencies, and synchronization. Many numerical applications share the need for an underlying parallel framework for parallelization on multi-core/multi-machine hardware. In this thesis, a computing platform for parallel numerical applications is designed and implemented. The platform performs parallelization by multiprocessing over the MPI library, and serves as a layer of abstraction that hides the complexities of data distribution and inter-process communication. It also provides the essential functions that most numerical applications use, such as handling data dependencies, balancing workloads, and overlapping communication and computation. The performance evaluation of the parallel platform shows that it is highly scalable for large problems. / by Abdulaziz Albaiz. / S.M.
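The overlap of communication and computation that such a platform provides can be sketched with mpi4py (an illustration of the pattern only, not the platform's own API): post non-blocking halo exchanges, compute on the interior while messages are in flight, then finish the boundary once the ghost values arrive.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size   # ring of processes

local = np.random.rand(1000)                         # this rank's partition
ghost_l, ghost_r = np.empty(1), np.empty(1)

# post non-blocking sends/receives for the one-cell halos
reqs = [comm.Isend(local[:1], dest=left), comm.Isend(local[-1:], dest=right),
        comm.Irecv(ghost_l, source=left), comm.Irecv(ghost_r, source=right)]

interior = 0.5 * (local[:-2] + local[2:])  # interior stencil needs no ghosts
MPI.Request.Waitall(reqs)                  # ghosts in hand; finish the edges
edge_l = 0.5 * (ghost_l[0] + local[1])
edge_r = 0.5 * (local[-2] + ghost_r[0])
```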
85

Imaging biomarkers for Duchenne muscular dystrophy / Imaging biomarkers for DMD

Koppaka, Sisir January 2015
Thesis: S.M., Massachusetts Institute of Technology, School of Engineering, Center for Computational Engineering, Computation for Design and Optimization Program, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 75-78). / Duchenne muscular dystrophy (DMD) is the most common muscular dystrophy of childhood and affects 1 in 3600 male births. The disease is caused by mutations in the dystrophin gene, leading to progressive muscle weakness that ultimately results in death due to respiratory and cardiac failure. Accurate, practical, and painless tests to diagnose DMD and measure disease progression are needed in order to test the effectiveness of new therapies. Current clinical outcome measures such as the six-minute walk test and the North Star Ambulatory Assessment (NSAA) can be subjective, are limited by the patient's degree of effort, and cannot be accurately performed in very young or severely affected older patients. We propose the use of image-based biomarkers with suitable machine learning algorithms instead. We find that force-controlled (precise acquisition at a certain force) and force-correlated (acquisition over a force sweep) ultrasound helps to reduce variability in the imaging process. We show that there is a high degree of inter-operator and intra-operator reliability with this integrated hardware-software setup. We also discuss how other imaging biomarkers, segmentation algorithms that target specific subregions, and better machine learning techniques may improve the reported performance. Optimizing the ultrasound image acquisition process by maximizing the peak discriminatory power of the images with respect to the applied contact force is also discussed. The techniques presented here have the potential to provide a reliable and non-invasive method to discriminate, and eventually track, the progression of DMD in patients. / by Sisir Koppaka. / S.M.
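The abstract does not specify a classifier; as a loose illustration of an image-feature classification pipeline of the kind described (features, labels, and model choice are all hypothetical):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))     # texture features per ultrasound image
y = rng.integers(0, 2, size=120)   # 0 = control, 1 = DMD (toy labels)

# standardize features, then fit a kernel classifier; cross-validated
# accuracy serves as a crude proxy for discriminatory power
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())
```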
86

Logistic regression for a better matching of buyers and suppliers in e-procurement

Tian, Shuo, S.M. Massachusetts Institute of Technology January 2010
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 57-58). / The thesis aims to identify better matches between buyers and suppliers who use an e-procurement platform provided by a US-based worldwide online marketplace company. The goal is to enhance the shopping experience of the clients, increase the retention rate, and grow the customer base of the company. We establish two logistic regression models. The first model predicts the probability of a supplier winning an RFQ (request for quote). From the calculated probabilities, we are able to rank all the suppliers and tell the buyers who may be the most qualified providers for them; the suppliers, in turn, learn their odds of winning among all the competitors. Our model shows that price is the most decisive factor for winning, and that geography and prior business relationships with the buyer are also important. The second model estimates the probability of an RFQ being successfully awarded: we model how likely the RFQ is to be awarded by the buyer. Such information will be especially helpful to suppliers. The RFQ process and the buyer's relationships and intentions appear to be the most influential factors. / by Shuo Tian. / S.M.
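A minimal sketch of the first model's structure: logistic regression over supplier/RFQ features, with the predicted win probability used for ranking (the feature set and data here are hypothetical, not the thesis's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# columns: normalized quoted price, same-region flag, prior-relationship flag
X = np.column_stack([rng.normal(size=500),
                     rng.integers(0, 2, 500),
                     rng.integers(0, 2, 500)]).astype(float)
y = rng.integers(0, 2, 500)            # 1 = supplier won the RFQ (toy labels)

model = LogisticRegression().fit(X, y)
p_win = model.predict_proba(X)[:, 1]   # rank suppliers by this probability
print(np.argsort(p_win)[::-1][:5])     # top-5 suppliers for a given RFQ
```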
87

Racing line optimization

Xiong, Ying, S.M. Massachusetts Institute of Technology January 2010
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 112-113). / Although most racers are good at controlling their cars, world champions are distinguished by their talent for choosing the right racing line, where others mostly fail. Optimal racing line selection is a critical problem in car racing, yet it currently rests largely on the intuition that experienced racers build up through repeated real-time experiments. A method that can generate the optimal racing line for a given racing track and car would therefore be very useful. This paper explains four methods to generate optimal racing lines: the Euler spiral method, the artificial intelligence method, the nonlinear programming solver method, and the integrated method. First, we study the problem and obtain the objective functions and constraints for both 2-D and 3-D situations, and we study the mathematical and physical features of the racing tracks. We then try different ways of solving this complicated nonlinear programming problem. The Euler spiral method generates Euler spiral curve turns at corners, and it gives optimal results quickly and accurately for 2-D corners with no banking. The nonlinear programming solver method is based on the MINOS solver in AMPL and the MATLAB Optimization Toolbox, and it only needs the objective function and constraints as input. A heavy emphasis is placed on the artificial intelligence method: it works for any 2-D or 3-D track shape, using intelligent algorithms including branch-cutting and forward-looking to give optimal racing lines for both 2-D and 3-D tracks. The integrated method combines these methods and their advantages so that it is fast and practical in all situations. The different methods are compared, and their evolutions towards the optimum are described in detail. Convenient display software is developed to show the tracks and racing lines for observation. The approach to finding optimal racing lines for cars will also be helpful for finding optimal racing lines in bicycle racing, ice skating and skiing. / by Ying Xiong. / S.M.
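As a toy version of the nonlinear-programming formulation, one can choose a lateral offset at each waypoint, bounded by the track width, that minimizes total squared curvature as a crude proxy for lap time (this is not the thesis's objective; all geometry here is hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# toy centerline and (constant) unit normals; a real track supplies its own
center = np.column_stack([np.linspace(0.0, 10.0, 30),
                          np.sin(np.linspace(0.0, np.pi, 30))])
normals = np.tile([0.0, 1.0], (30, 1))
half_width = 0.5                      # lateral room on each side of the line

def curvature_cost(offsets):
    line = center + offsets[:, None] * normals
    d1 = np.gradient(line, axis=0)    # first derivative along the path
    d2 = np.gradient(d1, axis=0)      # second derivative along the path
    cross = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
    kappa = np.abs(cross) / (np.linalg.norm(d1, axis=1) ** 3 + 1e-12)
    return np.sum(kappa ** 2)         # smoother line ~ higher cornering speed

res = minimize(curvature_cost, np.zeros(30),
               bounds=[(-half_width, half_width)] * 30)
print(res.fun, res.x[:5])
```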
88

Discontinuous Galerkin solution of the Boltzmann equation in multiple spatial dimensions

Lian, Zhengyi January 2007
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (leaves 77-79). / This thesis focuses on the numerical solution of a kinetic description of small scale dilute gas flows when the Navier-Stokes description breaks down. In particular, it investigates alternative solution techniques for the Boltzmann equation, typically used when the Knudsen number (the ratio of the molecular mean free path to the characteristic length scale of the flow) exceeds approximately 0.1. Alternative solution methods are required because the prevalent Boltzmann solution technique, Direct Simulation Monte Carlo (DSMC), experiences a sharp rise in computational cost as the deviation from equilibrium decreases, such as in low signal flows. To address this limitation, L. L. Baker and N. G. Hadjiconstantinou recently developed a variance reduction technique [5] in which one simulates only the deviation from equilibrium. This thesis presents the implementation of this variance reduction approach in a Runge-Kutta Discontinuous Galerkin (RKDG) finite element formulation in multiple spatial dimensions. Emphasis is given to alternative algorithms for evaluating the advection operator terms, boundary fluxes and hydrodynamic quantities accurately and efficiently without the use of quadrature schemes. The collision integral is treated as a source term and evaluated using the variance-reduced Monte Carlo technique presented in [10, 9]. For piecewise linear (p = 1) and quadratic (p = 2) solutions to the Boltzmann equation in the 5-dimensional phase space, the developed algorithms compute the advection operator terms 2.35 and 2.73 times faster, respectively, than an algorithm based on quadrature; including the computation of hydrodynamic quantities, the overall performance improvement is a factor of 8.5 and 10, respectively. / (cont.) Although the collision integral takes up 90% or more of the total computation cost, these improvements still provide tangible efficiency advantages in steady-flow calculations, in which less expensive transient collision-operator calculation routines are used during a substantial part of the flow development. High order convergence in physical space has been verified by applying the implemented RKDG method to a test problem with a continuous solution. Furthermore, when applied to pressure driven Poiseuille flow through a rectangular channel, the steady state mass flux in the collisionless limit (where exact results exist) agrees within 0.5%, 0.8% and 1.2% of that obtained by Sone and Hasegawa [14] for aspect ratios of 1, 2 and 4, respectively, at a spatial resolution of 52 × 10³. For Kn = 0.2, 1 and 10, our results agree with those obtained by Sone and Hasegawa [14] from solutions of the linearized Boltzmann-Krook-Welander (BKW) equation when compared at an "equivalent" Knudsen number of 1.27 Kn [21]. These results validate the implementation and demonstrate the feasibility of the variance-reduced RKDG method for solving the full Boltzmann equation in multiple spatial dimensions. For higher accuracy in this pressure driven flow problem, a p = 1 scheme was found to be more efficient than a p = 2 scheme at a coarser spatial discretization; further gains can be achieved by using finer spatial discretization and non-uniform spacing to generate more elements near regions of discontinuities or large variations in the molecular distribution function. / by Zhengyi Lian. / S.M.
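The variance-reduction idea, simulating only the deviation from equilibrium, can be illustrated in one dimension. A toy sketch (not the thesis's code; the perturbed distribution and moment are hypothetical) uses the identity E_f[φ] = E_eq[φ] + E_eq[φ (f/f_eq - 1)], where the first term is known in closed form and only the small deviation term is sampled:

```python
import numpy as np

rng = np.random.default_rng(0)

# equilibrium: standard normal; true distribution: a small perturbation of it
f_eq = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
eps = 0.05
f = lambda x: np.exp(-(x - eps)**2 / 2) / np.sqrt(2 * np.pi)

phi = lambda x: x**2          # a "hydrodynamic" moment of interest
exact_eq = 1.0                # E_eq[phi], known in closed form

x = rng.normal(size=200_000)  # sample the equilibrium only
# simulate only the deviation from equilibrium
deviation = np.mean(phi(x) * (f(x) / f_eq(x) - 1.0))
print(exact_eq + deviation)   # low-variance estimate; exact value is 1.0025
```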
89

A reduced-basis method for input-output uncertainty propagation in stochastic PDEs

Vidal Codina, Ferran January 2013
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 123-132). / Recently there has been a growing interest in quantifying the effects of random inputs in the solution of partial differential equations that arise in a number of areas, including fluid mechanics, elasticity, and wave theory, to describe phenomena such as turbulence, random vibrations, flow through porous media, and wave propagation through random media. Monte-Carlo based sampling methods, generalized polynomial chaos and stochastic collocation methods are some of the popular approaches that have been used in the analysis of such problems. This work proposes a non-intrusive reduced-basis method for the rapid and reliable evaluation of the statistics of linear functionals of stochastic PDEs. Our approach is based on constructing a reduced-basis model for the quantity of interest that enables us to solve the full problem very efficiently. In particular, we apply a reduced-basis technique to the Hybridizable Discontinuous Galerkin (HDG) approximation of the underlying PDE, which allows for a rapid and accurate evaluation of the input-output relationship represented by a functional of the solution of the PDE. The method has been devised for problems where an affine parametrization of the PDE in terms of the uncertain input parameters may be obtained. This particular structure enables us to pursue an offline-online computational strategy that economizes the output evaluation. Indeed, the offline stage (performed once) is computationally intensive, since its complexity depends on the dimension of the underlying high-order discontinuous finite element space. The online stage (performed many times) provides rapid output evaluation with a computational cost several orders of magnitude smaller than that of the HDG approximation. In addition, we incorporate two ingredients into the reduced-basis method. First, we employ the greedy algorithm to drive the sampling in the parameter space, by computing inexpensive bounds on the error in the output in the online stage. These error bounds allow us to detect which samples contribute most to the error, thereby enriching the reduced basis with high-quality basis functions. Second, we develop the reduced basis not only for the primal problem but also for the adjoint problem. This allows us to compute an improved reduced-basis output that is crucial in reducing the number of basis functions needed to achieve a prescribed error tolerance. Once the reduced bases have been constructed, we employ Monte-Carlo based sampling methods to perform the uncertainty propagation. The main achievement is that the forward evaluations needed for each Monte-Carlo sample are inexpensive, and therefore statistics of the output can be computed very efficiently. This combined technique yields an uncertainty propagation method that requires a small number of full forward model evaluations and thus greatly reduces the computational burden. We apply our approach to study the heat conduction of a thermal fin under uncertainty in the diffusivity coefficient, and the wave propagation generated by a Gaussian source under uncertainty in the propagation medium. We also compare our approach to stochastic collocation methods and Monte-Carlo methods to assess the reliability of the computations. / by Ferran Vidal-Codina. / S.M.
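The offline-online split described above follows a standard reduced-basis pattern. A generic sketch under an assumed affine parametrization A(μ) = Σ_q θ_q(μ) A_q (all sizes, names and coefficients are hypothetical, not the thesis's):

```python
import numpy as np

rng = np.random.default_rng(0)
N_full, n_rb = 500, 20

# offline (once, expensive): project each affine block onto the basis V
V = np.linalg.qr(rng.normal(size=(N_full, n_rb)))[0]   # orthonormal basis
def spd_block():
    B = rng.normal(size=(N_full, N_full))
    return B @ B.T + N_full * np.eye(N_full)           # symmetric pos. def.
A_q = [spd_block() for _ in range(3)]
f = rng.normal(size=N_full)
A_q_rb = [V.T @ A @ V for A in A_q]                    # n_rb x n_rb blocks
f_rb = V.T @ f

# online (per Monte-Carlo sample, cheap): assemble and solve in n_rb dims
def rb_output(mu):
    theta = (1.0, mu, mu**2)                           # coefficients theta_q(mu)
    A_rb = sum(t * Aq for t, Aq in zip(theta, A_q_rb))
    u_rb = np.linalg.solve(A_rb, f_rb)
    return f_rb @ u_rb                                 # linear functional output

print(rb_output(0.3))
```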
90

Hierarchical Gaussian models for wind field estimation and path planning

Musolas Otaño, Antoni M. (Antoni Maria) January 2016
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 80-83). / Improvements in technology, autonomy, and positioning mechanisms have greatly broadened the range of application of unmanned aerial vehicles. These vehicles are now being used in aerial photography, package delivery, infrastructure inspection, and many other areas. Many of these uses demand new techniques for path planning in complex environments, in particular spatially heterogeneous and time-evolving wind fields [22, 23, 24]. Navigating and planning [26, 25, 28, 12] in wind fields requires reliable and fast predictive models that quantify uncertainty in future wind velocities, and benefits strongly from the ability to incorporate onboard and external wind field measurements in real time. To make real-time inference and prediction possible, we construct simple hierarchical Gaussian models of the wind field as follows. Given realizations of the wind field over a domain of interest, obtained from detailed offline measurements or computational fluid dynamics simulations, we extract empirical estimates of the mean and covariance functions. The associated covariance matrices are anisotropic and non-stationary, and capture interactions among the wind vectors at all points in a discretization of the domain. We make the further assumption that, given a particular prevailing wind heading, the local wind velocities are jointly Gaussian. The result is a hierarchical Gaussian model in which the mean and covariance are functions of the prevailing wind conditions. Since these empirical covariances are known only for a few prevailing wind conditions, we close our model by interpolating covariance matrices on the appropriate manifold of positive semi-definite matrices [44], via a computationally efficient construction that takes advantage of low-rank structure. Finally, assimilation of successive point observations is conducted by embedding a standard Kalman filter within a hierarchical Bayesian inference framework. This representation is then used for wind field exploitation. / by Antoni M. Musolas Otaño. / S.M.
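One standard way to interpolate covariance matrices on the SPD manifold, as the abstract describes, is along the affine-invariant geodesic; the thesis's exact low-rank construction may differ. A sketch:

```python
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power

def spd_geodesic(C0, C1, t):
    """Point at fraction t along the SPD geodesic from C0 to C1:
    C0^{1/2} (C0^{-1/2} C1 C0^{-1/2})^t C0^{1/2}."""
    C0_half = sqrtm(C0)
    C0_inv_half = np.linalg.inv(C0_half)
    middle = fractional_matrix_power(C0_inv_half @ C1 @ C0_inv_half, t)
    return np.real(C0_half @ middle @ C0_half)

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
C0 = B @ B.T + np.eye(4)       # toy empirical covariance at one heading
B = rng.normal(size=(4, 4))
C1 = B @ B.T + np.eye(4)       # toy empirical covariance at another heading
print(spd_geodesic(C0, C1, 0.5))  # interpolated covariance halfway between
```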
