  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Combining local and global optimization for planning and control in information space

Huynh, Vu Anh January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (leaves 99-102). / This thesis presents a novel algorithm, called the parametric optimized belief roadmap (POBRM), to address the problem of planning a trajectory for controlling a robot with imperfect state information under uncertainty. The problem is formulated abstractly as a partially observable stochastic shortest path (POSSP) problem. We assume that a feature-based map of the region is available to assist the robot's decision-making. The POBRM is a two-phase algorithm that combines local and global optimization. In an offline phase, we construct a belief graph by probabilistically sampling points around the features that potentially provide the robot with valuable information. Each edge of the belief graph stores two transfer functions that predict the cost and the conditional covariance matrix of the final state estimate if the robot follows this edge, given an initial mean and covariance. In an online phase, a sub-optimal trajectory is found by a global Dijkstra search, which balances exploration and exploitation. Moreover, we use the iterative linear quadratic Gaussian (iLQG) algorithm to find a local feedback control policy in continuous state and control spaces to traverse the sub-optimal trajectory. We show that, under suitable technical assumptions, an error bound on the sub-optimal cost relative to the globally optimal cost can be obtained. The POBRM algorithm is not only robust to imperfect state information but also scalable, finding trajectories quickly in high-dimensional systems and environments. In addition, the POBRM algorithm is capable of answering multiple queries efficiently. We also demonstrate performance results by 2D simulation of a planar car and 3D simulation of an autonomous helicopter. / by Vu Anh Huynh. / S.M.
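The online phase described above runs a shortest-path search over the sampled belief graph. A minimal sketch of that step, with a toy graph and made-up edge costs standing in for the values that POBRM's edge transfer functions would predict:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source over a weighted graph.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    In POBRM, each edge cost would come from that edge's stored transfer
    function applied to the incoming mean/covariance estimate.
    """
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy belief graph: nodes are sampled belief states, weights are predicted costs.
g = {"start": [("a", 1.0), ("b", 4.0)],
     "a": [("b", 1.0), ("goal", 5.0)],
     "b": [("goal", 1.0)]}
costs = dijkstra(g, "start")
```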
102

Multi-parameter estimation in glacier models with adjoint and algorithmic differentiation

Davis, Andrew D. (Andrew Donaldson) January 2012 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2012. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 75-77). / The cryosphere comprises about 33 million km³ of ice, which corresponds to 70 meters of global mean sea level equivalent [30]. Simulating continental ice masses, such as the Antarctic or Greenland Ice Sheets, requires computational models capturing abrupt changes in ice sheet dynamics, which are still poorly understood. Input parameters, such as basal drag and topography, have large effects on the applied stress and flow fields but are very difficult, if not impossible, to observe directly. Computational methods are designed to aid in the development of ice sheet models, ideally identifying the relative importance of each parameter and formulating inverse methods to infer uncertain parameters and thus constrain ice sheet flow. Efficient computation of the tangent linear and adjoint models gives researchers easy access to model derivatives. The adjoint and tangent linear models enable efficient global sensitivity computation and parameter optimization for unknown or uncertain ice sheet properties, information used to identify model properties having large effects on sea level. The adjoint equations are not always easily obtained analytically and often require discretizing additional PDEs. Algorithmic differentiation (AD) decomposes the model into a composite of elementary operations (+, -, *, /, etc.), and a source-to-source transformation generates code for the Jacobian and its transpose for each operation. Derivatives computed using the tangent linear and adjoint models, with code generated by AD, are applied to parameter estimation and sensitivity analysis of simple glacier models. AD is applied to two examples: equations describing changes in borehole temperature over time, and instantaneous ice velocities.
Borehole model predictions and data are compared to infer paleotemperatures, geothermal heat flux, and physical ice properties. Inversion using adjoint methods and AD increases the control space, allowing inference for all uncertain parameters. The sensitivities of ice velocities to basal friction and basal topography are compared. The basal topography has significantly larger sensitivities, suggesting it plays a larger role in flow dynamics and future work should seek to invert for this parameter. / by Andrew D. Davis. / S.M.
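The decomposition into elementary operations described above can be illustrated with forward-mode algorithmic differentiation via a small dual-number class: each operation propagates a value and a derivative together. This is an illustrative toy, not the AD tooling used in the thesis:

```python
class Dual:
    """Forward-mode AD value: carries f and df/dx simultaneously,
    mirroring how AD decomposes a model into elementary operations."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule applied per elementary multiplication
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # toy "model": composite of elementary ops

x = Dual(2.0, 1.0)  # seed the derivative dx/dx = 1
y = f(x)
# y.val is f(2) and y.dot is f'(2) = 6*2 + 2
```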
103

Modeling flow encountering abrupt topography using hybridizable discontinuous Galerkin projection methods

Vo, Johnathan Hiep January 2017 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2017. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 85-89). / In this work, novel high-order hybridizable discontinuous Galerkin (HDG) projection methods are further developed for ocean dynamics and geophysical fluid predictions. We investigate the effects of the HDG stabilization parameter for both the momentum equation and tracer diffusion. We also correct our singularity-treatment algorithm, pinning down a numerically consistent and unique solution to the pressure Poisson equation with homogeneous Neumann boundary conditions everywhere along the boundary. Extensive numerical results using physically realistic ocean flows are presented to verify the HDG projection methods, including the formation of internal wave beams over a shallow but abrupt seamount, the generation of internal solitary waves from stratified oscillatory flow over steep topography, and the circulation of bottom gravity currents down a slope. Additionally, we investigate the implementation of open boundary conditions for finite element methods and present results in the context of our ocean simulations. Through this work we present the hybridizable discontinuous Galerkin projection methods as a viable and competitive alternative for large-scale, realistic ocean modeling. / by Johnathan Hiep Vo. / S.M.
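The pure-Neumann pressure Poisson problem mentioned above is singular: any constant can be added to a solution, so an extra condition is needed for uniqueness. A small 1D finite-difference sketch, assuming a zero-mean constraint as the pinning condition (the thesis's actual singularity treatment and discretization are HDG-based and may differ):

```python
import numpy as np

# 1D Poisson -u'' = f on [0,1] with homogeneous Neumann BCs at both ends.
# The discrete operator is singular (constants lie in its null space), so we
# append a zero-mean constraint to pin down a unique solution.
n = 50
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.cos(np.pi * x)          # compatible right-hand side (integral is zero)

A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
# Neumann BCs via ghost-point elimination at both boundaries
A[0, 0], A[0, 1] = 2.0, -2.0
A[-1, -1], A[-1, -2] = 2.0, -2.0
A /= h * h

# Augment with the constraint sum(u) = 0 and solve in the least-squares sense.
A_aug = np.vstack([A, np.ones(n)])
b_aug = np.concatenate([f, [0.0]])
u = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# Exact zero-mean solution of the continuous problem: u = cos(pi x) / pi^2
err = np.max(np.abs(u - np.cos(np.pi * x) / np.pi ** 2))
```

Without the appended constraint row, the solver would return an arbitrary member of a one-parameter family of solutions.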
104

Numerical simulations of perforated plate stabilized premixed flames with detailed chemistry

Kedia, Kushal Sharad January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 83-86). / The objective of this work is to develop a high-efficiency two-dimensional reactive flow solver to investigate perforated-plate stabilized laminar premixed flames. The developed code is used to examine the impact of the operating conditions and the perforated-plate design on the steady flame characteristics. It is also used to numerically investigate the response of these flames to imposed inlet velocity perturbations. The two-dimensional simulations are performed using a reduced chemical kinetics mechanism for methane-air combustion, consisting of 20 species and 79 reactions. Heat exchange is allowed between the gas mixture and the solid plate. The physical model is based on a zero-Mach-number formulation of the axisymmetric compressible conservation equations. The steady results suggest that the flame consumption speed, the flame structure, and the flame surface area depend significantly on the equivalence ratio, the mean inlet velocity, the distance between the perforated-plate holes, and the plate's thermal conductivity. In the case of an adiabatic plate, a conical flame is formed, anchored near the corner of the hole. When the heat exchange between the mixture and the plate is finite, the flame acquires a Gaussian shape, stabilizing at a stand-off distance that grows with the plate conductivity. The flame tip is negatively curved, i.e., concave with respect to the reactants. Downstream of the plate, the flame base is positively curved, i.e., convex with respect to the reactants, stabilizing above a stagnation region established between neighboring holes.
As the plate's thermal conductivity increases, the heat flux to the plate decreases, lowering its top surface temperature. As the equivalence ratio increases, the flame moves closer to the plate, raising its temperature and lowering the flame stand-off distance. As the mean inlet velocity increases, the flame stabilizes further downstream and the flame tip becomes sharper, raising the burning rate at that location. The curvature of the flame base depends on the distance between neighboring holes, and the flame there is characterized by a high concentration of intermediates, such as carbon monoxide. To investigate flame dynamics, linear transfer functions for low-amplitude mean inlet velocity oscillations are analyzed for different equivalence ratios, mean inlet velocities, plate thermal conductivities, and distances between adjacent holes. The oscillations of the heat exchange rate at the top of the burner surface play a critical role in driving the growth of the perturbations over a wide range of conditions, including resonance. The flame response to the perturbations at its base takes the form of consumption speed oscillations in this region. The flame stand-off distance increases/decreases when the flame-wall interaction strengthens/weakens, impacting the overall dynamics of the heat release. The convective lag between the perturbations and the flame base response governs the phase of the heat release rate oscillations. There is an additional convective lag between the perturbations at the flame base and the flame tip, which has a weaker impact on the heat release rate oscillations. At higher frequencies, the flame-wall interaction is weaker and the heat release oscillations are driven by the flame area oscillations. The response of the flame to higher-amplitude oscillations is used to gain further insight into these mechanisms. Key words: laminar premixed flames, perforated-plate stabilized flames, flame-wall interactions, flame consumption speed, stand-off distance.
/ by Kushal Sharad Kedia. / S.M.
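A linear transfer function like the ones analyzed above can be estimated from forced-response signals by taking the ratio of Fourier coefficients at the forcing frequency. A sketch on synthetic data, with an assumed gain and phase lag standing in for the convective-lag effects discussed in the abstract (not results from the thesis):

```python
import numpy as np

# Synthetic forced response: inlet velocity perturbation u'(t) and a
# hypothetical heat-release response q'(t) with a known gain and phase lag.
fs, T, f0 = 1000.0, 2.0, 25.0           # sample rate [Hz], duration [s], forcing freq [Hz]
t = np.arange(0.0, T, 1.0 / fs)
gain_true, phase_true = 1.8, -0.6       # assumed values for the sketch
u = np.sin(2 * np.pi * f0 * t)
q = gain_true * np.sin(2 * np.pi * f0 * t + phase_true)

# Transfer function at the forcing frequency: ratio of Fourier coefficients.
U, Q = np.fft.rfft(u), np.fft.rfft(q)
k = int(round(f0 * T))                  # FFT bin of the forcing frequency
H = Q[k] / U[k]
gain_est, phase_est = abs(H), np.angle(H)
```

Because the record contains an integer number of forcing periods, the estimate at bin k is leakage-free and recovers the assumed gain and phase essentially exactly.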
105

The Markov chain Monte Carlo approach to importance sampling in stochastic programming

Ustun, Berk (Tevfik Berk) January 2012 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2012. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 85-87). / Stochastic programming models are large-scale optimization problems that are used to facilitate decision-making under uncertainty. Optimization algorithms for such problems need to evaluate the expected future costs of current decisions, often referred to as the recourse function. In practice, this calculation is computationally difficult as it involves the evaluation of a multidimensional integral whose integrand is an optimization problem. Accordingly, the recourse function is estimated using quadrature rules or Monte Carlo methods. Although Monte Carlo methods present numerous computational benefits over quadrature rules, they require a large number of samples to produce accurate results when they are embedded in an optimization algorithm. We present an importance sampling framework for multistage stochastic programming that can produce accurate estimates of the recourse function using a fixed number of samples. Our framework uses Markov Chain Monte Carlo and Kernel Density Estimation algorithms to create a non-parametric importance sampling distribution that can form lower variance estimates of the recourse function. We demonstrate the increased accuracy and efficiency of our approach using numerical experiments in which we solve variants of the Newsvendor problem. Our results show that even a simple implementation of our framework produces highly accurate estimates of the optimal solution and optimal cost for stochastic programming models, especially those with increased variance, multimodal or rare-event distributions. / by Berk Ustun. / S.M.
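The variance reduction described above can be illustrated on a one-dimensional toy: estimating a recourse-like expectation dominated by rare high-demand events. The cost function, distributions, and proposal shift are invented for this sketch; the thesis builds its proposal non-parametrically with MCMC and kernel density estimation rather than with a fixed shifted Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E[cost(D)] where the cost is driven by rare high-demand events.
def cost(d):
    return np.where(d > 4.0, (d - 4.0) ** 2, 0.0)   # recourse-like penalty

n = 20000

# Plain Monte Carlo: D ~ N(0,1); the penalty region sits ~4 sigma out, so
# most runs see few or no nonzero samples.
d_mc = rng.normal(0.0, 1.0, n)
est_mc = cost(d_mc).mean()

# Importance sampling: propose from N(4,1), centered on the rare region,
# and reweight each sample by the density ratio phi(d) / phi(d - 4).
d_is = rng.normal(4.0, 1.0, n)
w = np.exp(0.5 * ((d_is - 4.0) ** 2 - d_is ** 2))
est_is = (w * cost(d_is)).mean()
# True value is (1 + 16) * Phi(-4) - 4 * phi(4), roughly 3.1e-6.
```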
106

Computational issues and related mathematics of an exponential annealing homotopy for conic optimization

Chen, Jeremy, S.M. Massachusetts Institute of Technology January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 105-106). / We present a further study and analysis of an exponential annealing based algorithm for convex optimization. We begin by developing a general framework for applying exponential annealing to conic optimization. We analyze the hit-and-run random walk from the perspective of convergence and develop (partially) an intuitive picture that views it as the limit of a sequence of finite state Markov chains. We then establish useful results that guide our sampling. Modifications are proposed that seek to raise the computational practicality of exponential annealing for convex optimization. In particular, inspired by interior-point methods, we propose modifying the hit-and-run random walk to bias iterates away from the boundary of the feasible region and show that this approach yields a substantial reduction in computational cost. We perform computational experiments for linear and semidefinite optimization problems. For linear optimization problems, we verify the correlation of phase count with the Renegar condition measure (described in [13]); for semidefinite optimization, we verify the correlation of phase count with a geometry measure (presented in [4]). / by Jeremy Chen. / S.M.
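The hit-and-run walk underlying the algorithm picks a random direction through the current point, computes the feasible chord, and jumps to a uniform point on it. A minimal sketch on a polytope {x : Ax ≤ b}, without the boundary-biasing modification proposed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def hit_and_run(A, b, x, n_steps):
    """Hit-and-run random walk on the polytope {x : A x <= b}.

    Each step: draw a uniformly random direction, intersect the line with
    the polytope to get a chord, then move to a uniform point on it.
    """
    for _ in range(n_steps):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        Ad = A @ d
        slack = b - A @ x
        # Feasible interval [t_lo, t_hi] for x + t*d: Ad * t <= slack.
        t_hi = np.min(slack[Ad > 1e-12] / Ad[Ad > 1e-12])
        t_lo = np.max(slack[Ad < -1e-12] / Ad[Ad < -1e-12])
        x = x + rng.uniform(t_lo, t_hi) * d
    return x

# Unit box [0,1]^2 written as a polytope, starting near a corner.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
x = hit_and_run(A, b, np.array([0.01, 0.01]), 500)
```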
107

Low rank matrix completion

Nan, Feng, S.M. Massachusetts Institute of Technology January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 75-76). / We consider the problem of recovering a low rank matrix given a sampling of its entries. Such problems are of considerable interest in a diverse set of fields including control, system identification, statistics and signal processing. Although the general low rank matrix completion problem is NP-hard, there exist several heuristic methods that solve the problem approximately by solving the convex relaxation of the original problem. One particularly popular method is to use nuclear norm (sum of singular values) to approximate the rank of the matrix and formulate the problem as a semidefinite program that can be solved efficiently. In this thesis, we propose a local completion algorithm that searches for possible completion in the neighborhood of each unspecified entry given the rank of the matrix. Unlike existing methods, this algorithm requires only local information of the matrix if the rank is known. Critical in all low rank matrix completion algorithms is the sampling density. The denser the matrix is sampled, the more likely it can be recovered accurately. We then propose a condensation process that increases the sampling density in a specific part of the matrix through elementary row and column re-ordering. Hence we can solve a sub-problem of the original low rank matrix completion problem and gain information on the rank of the matrix. Then the local algorithm is readily applicable to recover the entire matrix. We also explore the effect of additional sampling structures on the completion rate of the low rank matrix completion problems. In particular, we show that imposing regularity in the sampling process leads to slightly better completion rates. / (cont.) 
We also provide a new semidefinite formulation for a particular block sampling structure that reduces the constraint matrix size by a factor of 1.5. / by Feng Nan. / S.M.
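For intuition on recovering a low rank matrix from sampled entries, here is a generic sketch that alternates between a best rank-r approximation and re-imposing the observed entries. This is a standard hard-thresholding heuristic on invented data, not the local completion algorithm or the semidefinite formulation proposed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 ground truth, observed on a random mask covering ~60% of entries.
m, n, r = 30, 30, 2
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
mask = rng.random((m, n)) < 0.6

# Alternate between projecting onto the set of rank-r matrices (truncated
# SVD) and restoring the observed entries.
X = np.where(mask, M, 0.0)
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]        # best rank-r approximation
    X[mask] = M[mask]                       # keep observed entries fixed

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

At this sampling density the observed entries comfortably outnumber the degrees of freedom of a rank-2 matrix, which is why the iteration recovers the unobserved entries; sparser masks make recovery progressively harder, as the abstract notes.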
108

Mouse behavior recognition with the wisdom of crowd

Ni, Yuzhao January 2013 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 67-72). / In this thesis, we designed and implemented a crowdsourcing system to annotate mouse behaviors in videos. This involves the development of a novel clip-based video labeling tool that is more efficient than traditional labeling tools on crowdsourcing platforms, as well as the design of probabilistic inference algorithms that predict the true labels and the workers' expertise from multiple workers' responses. Our algorithms are shown to perform better than the majority-vote heuristic. We also carried out extensive experiments to determine the effectiveness of our labeling tool, inference algorithms, and the overall system. / by Yuzhao Ni. / S.M.
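The majority-vote baseline that the inference algorithms are compared against can be sketched in a few lines. The worker names and behavior labels below are invented for illustration; a probabilistic alternative would replace the raw counts with per-worker reliability weights:

```python
from collections import Counter

# Toy crowd responses: worker -> {clip_id: label}.
responses = {
    "w1": {"clip1": "groom", "clip2": "rest", "clip3": "walk"},
    "w2": {"clip1": "groom", "clip2": "rest", "clip3": "rest"},
    "w3": {"clip1": "walk",  "clip2": "rest", "clip3": "walk"},
}

def majority_vote(responses):
    """Baseline aggregation: most frequent label per clip."""
    votes = {}
    for labels in responses.values():
        for clip, lab in labels.items():
            votes.setdefault(clip, Counter())[lab] += 1
    return {clip: c.most_common(1)[0][0] for clip, c in votes.items()}

consensus = majority_vote(responses)
```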
109

Robust transportation network design under user equilibrium

Lu, Yun January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 59-63). / We address the problem of designing a transportation network in the presence of demand uncertainty, multiple origin-destination pairs, and a budget constraint on the overall construction cost, under the behavioral assumption that travelers optimize their own travel costs (i.e., the "user-equilibrium" condition). Under deterministic demand, we propose an exact integer optimization approach that leads to an optimization problem with a quadratic objective and linear constraints. As a result, the problem is efficiently solvable via commercial software when the costs are linear functions of traffic flows. We then use an iterative algorithm to address the case of nonlinear cost functions. While the problem is intractable under probabilistic assumptions on demand uncertainty, we extend the previous model and propose an iterative algorithm using a robust optimization approach that models demand uncertainty. We finally report extensive numerical results to illustrate that our approach leads to tractable solutions for large-scale networks. / by Yun Lu. / S.M.
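The user-equilibrium condition can be illustrated on the smallest possible network: two parallel links with hypothetical linear latency functions and unit demand. At equilibrium, flow splits until every used route has equal cost (Wardrop's principle); the numbers below are invented for the sketch:

```python
# Two parallel links with latencies c1(x) = 1 + 2x and c2(x) = 2 + x,
# sharing one unit of demand between them.
def c1(x): return 1.0 + 2.0 * x
def c2(x): return 2.0 + 1.0 * x

# Bisection on the flow x1 sent over link 1: find c1(x1) = c2(1 - x1).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if c1(mid) < c2(1.0 - mid):
        lo = mid      # link 1 is still cheaper: shift more flow onto it
    else:
        hi = mid
x1 = 0.5 * (lo + hi)
x2 = 1.0 - x1
# At equilibrium x1 = 2/3, x2 = 1/3, and both links cost 7/3.
```

Solving 1 + 2·x1 = 2 + (1 − x1) by hand gives x1 = 2/3, matching the bisection; general networks require equilibrium solvers rather than a scalar root find.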
110

Enhancements and computational evaluation of the hit-and-run random walk on polyhedra

Liang, Jiajie, S.M. Massachusetts Institute of Technology January 2006 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2006. / Includes bibliographical references (p. 55). / The symmetry function of a convex set offers much useful information about the set's probabilistic and geometric properties. The symmetry function is a measure of how symmetric the convex set is; for a point, it intuitively measures how symmetric the set is with respect to that point. We call a point of high symmetry value a deep point. A random walk is a procedure that starts from a particular point in R^n and at each iteration moves to a "neighboring" point according to some probability distribution that depends solely on the current point. The Hit-and-Run random walk on a convex set S picks a random line ℓ through the current point, and at the next iteration moves to a new point chosen uniformly on the chord ℓ ∩ S. In this thesis, we analyze and investigate the effectiveness of the Hit-and-Run random walk at computing a deep point in a convex body, given a randomly generated convex set. The effectiveness is evaluated in terms of the role of the starting point and the likelihood that the random walk will enter the zone of high symmetry. Additionally, some known probabilistic properties of the symmetry function are tested using the random walk, which also serves to verify the integrity of the code. / (cont.) The final portion of this thesis analyzes the behavioral properties of convex sets that have non-Euclidean rounding, which renders the random walk less efficient. The pre-conditioned Hit-and-Run random walk is therefore performed, and its performance is quantitatively presented in a power-law equation that predicts the preconditioning iterations required, given the dimension of the convex set, a starting point near a corner, and the width of that corner. / by Jiajie Liang. / S.M.
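For a unit box the symmetry function has a closed form: for each coordinate, compare the distance to the near and far facet and take the worst ratio. A small sketch of that special case (general convex bodies require an optimization or sampling approach like the one evaluated in the thesis):

```python
def symmetry_box(x):
    """Symmetry value of a point x in the unit box [0,1]^n: the largest
    alpha such that x + alpha * (x - S) is still contained in S.
    For a box this reduces to a per-coordinate near/far facet ratio."""
    return min(min(xi, 1.0 - xi) / max(xi, 1.0 - xi) for xi in x)

# The center is the unique point with symmetry value 1 (the deepest point);
# values shrink toward 0 as the point approaches the boundary.
center = symmetry_box([0.5, 0.5])
off_center = symmetry_box([0.25, 0.5])   # 0.25 / 0.75 = 1/3
```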
