21

Sensitivity Analysis of Convex Relaxations for Nonsmooth Global Optimization

Yuan, Yingwei January 2020 (has links)
Nonsmoothness appears in various applications in chemical engineering, including multi-stream heat exchangers, nonsmooth flash calculations, and process integration. Convex/concave relaxations of static and dynamic systems, which are used in deterministic methods for global optimization, may also exhibit nonsmoothness. This thesis presents several new theoretical results for nonsmooth sensitivity analysis, with an emphasis on convex relaxations. Firstly, the "compass difference" and established ODE results by Pang and Stewart are used to describe a correct subgradient for a nonsmooth dynamic system with two parameters; this sensitivity information can be computed using standard ODE solvers. Next, the compass difference is used to obtain a subgradient for the Tsoukalas-Mitsos convex relaxations of composite functions of two variables. Lastly, this thesis develops a new general subgradient result for Tsoukalas-Mitsos convex relaxations of composite functions. This result places no restriction on the dimensions of the input variables, and it yields the entire subdifferential of the Tsoukalas-Mitsos convex relaxations. Compared to Tsoukalas and Mitsos' earlier subdifferential results, it also does not require solving an additional dual optimization problem. The new subgradient results are extended to obtain directional derivatives for Tsoukalas-Mitsos convex relaxations. These subgradient and directional derivative results are computationally practical: the subgradients can be calculated by both the vector forward mode and the reverse mode of automatic differentiation (AD). A proof-of-concept implementation in Matlab is discussed. / Thesis / Master of Applied Science (MASc)
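
The thesis's own constructions (compass differences, Tsoukalas-Mitsos relaxations) are not reproduced here, but the flavor of subgradient propagation for convex relaxations can be sketched on the classical McCormick underestimator of a bilinear term, where a valid subgradient is simply the gradient of the active affine piece. A minimal, illustrative Python sketch; the function name and toy bounds are ours, not the thesis's:

```python
# Convex (lower) McCormick envelope of f(x, y) = x*y on [xL, xU] x [yL, yU],
# together with one valid subgradient: the gradient of the active affine piece.
def mccormick_sub(x, y, xL, xU, yL, yU):
    # Two affine underestimators whose pointwise max is the convex envelope.
    a = yL * x + xL * y - xL * yL   # gradient (yL, xL)
    b = yU * x + xU * y - xU * yU   # gradient (yU, xU)
    if a >= b:
        return a, (yL, xL)
    return b, (yU, xU)

value, subgrad = mccormick_sub(0.5, -0.25, 0.0, 1.0, -1.0, 1.0)
print(value, subgrad)  # relaxation value and a subgradient at (0.5, -0.25)
```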
22

Developing Fast and Accurate Water Models for Atomistic Molecular Dynamics Simulations

Xiong, Yeyue 15 September 2021 (has links)
Water models are of great importance for different fields of study such as fluid mechanics, nanomaterials, and biomolecular simulations. In this dissertation, we focus on water models applied in atomistic simulations, including those of biomolecules such as proteins and DNA. Despite water's simple structure and the countless studies carried out over the decades, the best water models are still far from perfect. Water models are normally divided into two types--explicit models and implicit models. My research is mainly focused on explicit models. Among explicit water models, fixed-charge n-point models are the most widely used in atomistic simulations, but they have known accuracy drawbacks. Increasing the number of point charges and adding electronic polarizability are two common strategies for improving accuracy. Both come at considerable computational cost, which weighs heavily against the modest accuracy improvements possible in practical simulations. A careful comparison between the two strategies shows that adding polarizability is the more favorable path. The optimal point charge approximation (OPCA) method is then applied along with a novel global optimization process, leading to a new polarizable water model, OPC3-pol, that reproduces bulk liquid properties of water accurately and runs at a speed comparable to 3- and 4-point non-polarizable water models. For practical use, OPC3-pol works with existing non-polarizable AMBER force fields for simulations of globular proteins or DNA. In addition, for simulations of intrinsically disordered proteins, OPC3-pol fixes the over-compactness problem of the previous generation of non-polarizable water models. / Doctor of Philosophy / With the rapid advancement of computer technologies, computer simulation has become increasingly popular in biochemistry research. Simulations of microscopic substances that are vital for living creatures, such as proteins and DNA, have brought more and more insight into their structures and functions. Because almost all such microscopic substances are immersed in water, whether in a human body, a plant, or a bacterium, accurately simulating water is crucial for the success of such simulations. My research is focused on developing accurate and fast water models that researchers can use in their biochemical simulations. One particular challenge is that water in nature is very flexible, and its properties can change drastically when its surroundings change. Many classical water models cannot correctly mimic this flexibility, and some more advanced water models that can mimic it cost several times more computing resources. Our latest water model, OPC3-pol, benefits from a new design: it accurately mimics this flexibility and runs as fast as a traditional rigid water model.
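
For context, a fixed-charge 3-point model of the kind discussed above is fully specified by just a few numbers: rigid geometry, partial charges, and one Lennard-Jones site. The sketch below shows the widely published TIP3P parameter set purely to illustrate the form of such models; OPC3-pol's own parameters (and its polarizability term) are given in the dissertation, not here.

```python
# Classic TIP3P parameters, shown only to illustrate the structure of a
# fixed-charge 3-point water model (not the OPC3-pol parameter set).
tip3p = {
    "r_OH": 0.9572,        # O-H bond length, angstroms
    "theta_HOH": 104.52,   # H-O-H angle, degrees
    "q_O": -0.834,         # oxygen partial charge, e
    "q_H": +0.417,         # hydrogen partial charge, e
    "sigma_O": 3.1506,     # LJ sigma on oxygen, angstroms
    "epsilon_O": 0.1521,   # LJ epsilon on oxygen, kcal/mol
}
assert abs(tip3p["q_O"] + 2 * tip3p["q_H"]) < 1e-9  # molecule is net neutral
```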
23

Computational Tools for Molecular Networks in Biological Systems

Zwolak, Jason W. 07 January 2005 (has links)
Theoretical molecular biologists try to understand the workings of cells through mathematics. Some theoreticians use systems of ordinary differential equations (ODEs) as the basis for mathematical modeling of molecular networks. This thesis develops algorithms for estimating molecular reaction rate constants within those mathematical models by fitting the models to experimental data. An additional step is taken to fit non-timecourse experimental data: transformations must be performed on the ODE solutions before the experimental and simulated data are similar, and therefore comparable. VTDIRECT, a deterministic direct-search method, is used for global estimation of rate constants, and ODRPACK, a trust-region Levenberg-Marquardt method, for local estimation. One such transformation of the ODE solutions determines their steady state values. A new algorithm was developed that finds all steady state solutions of the ODE system, provided the system has a special structure (e.g., the right-hand sides of the ODEs are rational functions). Also, since the rate constants in the models cannot be negative and may have other restrictions on their values, ODRPACK was modified to handle bound constraints. The new Fortran 95 version of ODRPACK is named ODRPACK95. / Ph. D.
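
The thesis works in Fortran with VTDIRECT and ODRPACK95; as a hedged sketch of the same fit-to-data idea in Python, consider a hypothetical one-parameter model A -> B with unknown rate constant k, including the bound constraint that k be nonnegative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical one-parameter model: A -> B with rate constant k,
# fit to noisy timecourse measurements of [A].
t_data = np.linspace(0.0, 5.0, 20)
k_true = 0.8
a_data = np.exp(-k_true * t_data) + 0.01 * np.random.default_rng(0).normal(size=20)

def residuals(params):
    (k,) = params
    sol = solve_ivp(lambda t, y: [-k * y[0]], (0.0, 5.0), [1.0],
                    t_eval=t_data, rtol=1e-8)
    return sol.y[0] - a_data

# Bound-constrained local fit, in the spirit of ODRPACK95's new feature.
fit = least_squares(residuals, x0=[0.1], bounds=(0.0, np.inf))
print(fit.x)  # estimated k, close to 0.8
```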
24

Design and Evaluation of a Data-distributed Massively Parallel Implementation of a Global Optimization Algorithm---DIRECT

He, Jian 12 January 2008 (has links)
The present work aims at an efficient, portable, and robust design of a data-distributed massively parallel DIRECT, the deterministic global optimization algorithm widely used in multidisciplinary engineering design, biological science, and physical science applications. The original algorithm is modified to adapt to different problem scales and optimization (exploration vs. exploitation) goals. Enhanced with a memory reduction technique, dynamic data structures are used to organize local data, handle unpredictable memory requirements, reduce the memory usage, and share the data across multiple processors. The parallel scheme employs a multilevel functional and data parallelism to boost concurrency and mitigate the data dependency, thus improving the load balancing and scalability. In addition, checkpointing features are integrated to provide fault tolerance and hot restarts. Important algorithm modifications and design considerations are discussed regarding data structures, parallel schemes, error handling, and portability. Using several benchmark functions and real-world applications, the present work is evaluated in terms of optimization effectiveness, data structure efficiency, memory usage, parallel performance, and checkpointing overhead. Modeling and analysis techniques are used to investigate the design effectiveness and performance sensitivity under various problem structures, parallel schemes, and system settings. Theoretical and experimental results are compared for two parallel clusters with different system scale and network connectivity. An analytical bounding model is constructed to measure the load balancing performance under different schemes. Additionally, linear regression models are used to characterize two major overhead sources---interprocessor communication and processor idleness, and also applied to the isoefficiency functions in scalability analysis. For a variety of high-dimensional problems and large scale systems, the data-distributed massively parallel design has achieved reasonable performance. The results of the performance study provide guidance for efficient problem and scheme configuration. More importantly, the generalized design considerations and analysis techniques are beneficial for transforming many global search algorithms to become effective large scale parallel optimization tools. / Ph. D.
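
For readers who want a feel for the serial algorithm that this design parallelizes, recent SciPy releases ship a DIRECT implementation. A minimal sketch; the six-hump camel function is a standard benchmark choice of ours, not one of the thesis's test problems:

```python
from scipy.optimize import direct  # available in recent SciPy releases

# Six-hump camel benchmark; global minimum is approximately -1.0316.
def camel(x):
    x1, x2 = x
    return ((4 - 2.1 * x1**2 + x1**4 / 3) * x1**2
            + x1 * x2 + (-4 + 4 * x2**2) * x2**2)

result = direct(camel, bounds=[(-3.0, 3.0), (-2.0, 2.0)], maxfun=2000)
print(result.x, result.fun)  # near-optimal point and value
```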
25

Global Optimization of the Nonconvex Containership Design Problem Using the Reformulation-Linearization Technique

Ganesan, Vikram 19 August 2001 (has links)
The containership design problem involves optimizing a nonconvex objective function over a design space restricted by a set of constraints defined in terms of nonconvex functions. Applying standard nonlinear optimization methods to such a problem can at best attain a local optimum that need not be a global optimum. This thesis investigates alternative modeling, approximation, and global optimization techniques for developing a multidisciplinary approach to the containership design problem. The problem involves five design variables which, in decreasing order of their importance in the model, are: design draft, depth at side, speed, overall length, and maximum beam. Five constraints are imposed on the design, viz., an equality constraint to enforce the balance between weight and displacement; a linear inequality constraint on the length-to-depth ratio, implied by the lightship weight formulation, for the design to be acceptable; an inequality constraint on the metacentric height to ensure that the design satisfies the Coast Guard wind heel criterion; an inequality constraint on the freeboard to ensure the minimum required by the Code of Federal Regulations (46 CFR 42); and an inequality constraint on the rolling period to ensure that the design satisfies the minimum required rolling period criterion. The objective function is the required freight rate, expressed in dollars per metric ton per nautical mile, needed to recover annualized construction and operational costs. The model also accommodates various practical issues in a manner that improves its realism. For example, it takes into account the discrete container stowage issue: the carrying capacity (number of containers) is expressed as a continuous function of the principal dimensions via a linear response surface fit, which in turn makes the objective function continuous. The weight-displacement balance is maintained by including draft as a design variable and imposing an equality constraint on weight and displacement, rather than introducing an internal loop to calculate draft at each iteration; this speeds up the optimization process. Also, the weight is formulated independently of the draft to ensure independence of the weight and the displacement, which simplifies the optimization process. The time for loading and unloading containers at a given port is a function of the number of cranes available; the number of cranes is formulated as a function of the length of the ship, and the resulting expression is made continuous through a linear response surface fit. To solve this problem, we design two approaches based on employing a sequence of polynomial programming approximations, each within two alternative branch-and-bound frameworks. In the first approach, we construct a polynomial programming approximation to the containership design problem using the Response Surface Methodology (RSM) and solve this model to global optimality using the software package BARON (Branch-and-Reduce Optimization Navigator; see Sahinidis, 1996), although the Reformulation-Linearization Technique (RLT)-based procedure of Sherali and Tuncbilek (1992, 1997) offers a viable alternative (BARON itself incorporates some elements of the latter approach). The resulting solution is refined by the application of a local search method. This procedure is integrated into two alternative branch-and-bound frameworks.
The motivation is that solving the nonconvex polynomial approximations is likely to yield solutions in the near vicinity of the true underlying global optimum, and hence a local search method initiated at such a solution has a greater prospect of detecting that global optimum. In the second approach, we utilize a continuous-space branch-and-bound procedure based on linear programming (LP) relaxations. These relaxations are generated through an approximation scheme that first uses RSM to derive polynomial approximations to the objective function and the constraints, and then applies the RLT to obtain an LP relaxation. The initial stage of this lower-bounding step generates a tight, nonconvex polynomial programming relaxation for the problem, and the subsequent step constructs an LP relaxation of the resulting polynomial program via a suitable RLT procedure. The underlying motivation for these two steps is to generate a tight outer approximation of the convex envelope of the objective function over the convex hull of the feasible region. The solution obtained using the polynomial approximations provides a lower bound, and a local search method applied to this solution computes an upper bound. This bounding step is then integrated into two alternative branch-and-bound frameworks. The node partitioning schemes are especially designed so that the gaps resulting from these two levels of approximation are driven to zero in the limit, thereby ensuring convergence to a (near) global optimum. A comparison of the containership design obtained from the proposed algorithmic approaches with that obtained from the nonlinear optimization methods of previous research exhibits a significant improvement in the design parameters, translating to significant annual cost savings. For a typical containership of the size addressed in a test case in this work (a gross weight of 90,000 metric tons, an annual transportation capacity of 99,000 containers corresponding to an annual deadweight of 1,188,000 metric tons, and 119,000 nautical miles logged annually), the improvement in the prescribed design translates to estimated annual savings of $1,862,784 (approximately $1.86 million) and an estimated 27% increase in the return on investment over the life of the ship. The main contribution of this research is a detailed formulation and a more precise model of the containership design problem, along with suitable response surface and global optimization methodologies, prescribing an improved modeling and algorithmic approach for this highly nonconvex problem. / Master of Science
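
As one concrete piece of the modeling, the linear response surface fit mentioned above (capacity as a continuous function of principal dimensions) is ordinary least squares. A hedged sketch with made-up sample designs; the numbers are illustrative only:

```python
import numpy as np

# Hypothetical samples: (length, beam, depth) designs and the integer
# container counts a discrete stowage calculation would return.
X = np.array([[180.0, 25.0, 15.0],
              [200.0, 28.0, 16.5],
              [220.0, 30.0, 18.0],
              [240.0, 32.0, 19.0],
              [260.0, 32.5, 20.0]])
n_teu = np.array([1500.0, 2100.0, 2800.0, 3400.0, 3900.0])

# Linear response surface n(L, B, D) ~ c0 + c1*L + c2*B + c3*D, fitted by
# least squares; the fitted surface makes the capacity (and hence the
# objective) a continuous function of the design variables.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, n_teu, rcond=None)
print(coef)      # fitted surface coefficients
print(A @ coef)  # smoothed (continuous) capacities at the sample designs
```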
26

Supervised Descent Method

Xiong, Xuehan 01 September 2015 (has links)
In this dissertation, we focus on solving Nonlinear Least Squares problems using a supervised approach. In particular, we developed the Supervised Descent Method (SDM), performed a thorough theoretical analysis, and demonstrated its effectiveness on optimizing analytic functions and on four real-world applications: Inverse Kinematics, Rigid Tracking, Face Alignment (frontal and multi-view), and 3D Object Pose Estimation. In Rigid Tracking, SDM was able to take advantage of more robust features, such as HoG and SIFT; such non-differentiable image features were out of reach for previous work, which relied on gradient-based methods for optimization. In Inverse Kinematics, where we minimize a non-convex function, SDM achieved significantly better convergence than gradient-based approaches. In Face Alignment, SDM achieved state-of-the-art results; moreover, it was extremely computationally efficient, making it applicable to many mobile applications. In addition, we provided a unified view of several popular sequential prediction methods, including SDM, reformulating them as sequences of function compositions. Finally, we suggested some future research directions for SDM and sequential prediction.
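
In outline, each SDM stage regresses desired parameter updates against features extracted at the current estimates, so training reduces to (regularized) linear least squares. A minimal sketch under that reading; the ridge term and feature handling are our illustrative choices, not the dissertation's exact formulation:

```python
import numpy as np

# One SDM training stage: learn a descent map R_k (bias absorbed) that
# regresses the desired updates x_star - x against features phi(x).
def train_stage(x, x_star, phi, lam=1e-3):
    F = np.stack([phi(xi) for xi in x])          # (n, m) feature matrix
    F = np.hstack([F, np.ones((len(F), 1))])     # absorb the bias b_k
    Y = x_star - x                               # desired parameter updates
    # Ridge-regularized least squares solve for the descent map.
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ Y)

def apply_stage(x, R, phi):
    F = np.stack([phi(xi) for xi in x])
    F = np.hstack([F, np.ones((len(F), 1))])
    return x + F @ R                             # one learned descent step

# Toy 1-D example; in face alignment, phi would be HoG/SIFT at landmarks.
rng = np.random.default_rng(0)
x_star = rng.uniform(-1, 1, size=(100, 1))
x0 = x_star + 0.3 * rng.normal(size=(100, 1))
phi = lambda xi: np.array([xi[0], xi[0] ** 2])
R = train_stage(x0, x_star, phi)
x1 = apply_stage(x0, R, phi)  # closer to x_star on average than x0
```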
27

NOVEL DENSE STEREO ALGORITHMS FOR HIGH-QUALITY DEPTH ESTIMATION FROM IMAGES

Wang, Liang 01 January 2012 (has links)
This dissertation addresses the problem of inferring scene depth information from a collection of calibrated images taken from different viewpoints via stereo matching. Although it has been heavily investigated for decades, depth from stereo remains a long-standing challenge and a popular research topic, for several reasons. First, to be of practical use for many real-time applications such as autonomous driving, accurate depth estimation in real time is of great importance and is one of the core challenges in stereo. Second, for applications such as 3D reconstruction and view synthesis, high-quality depth estimation is crucial to achieving photorealistic results; however, due to matching ambiguities, accurate dense depth estimates are difficult to achieve. Last but not least, most stereo algorithms rely on identifying corresponding points among images and only work effectively when scenes are Lambertian; for non-Lambertian surfaces, the "brightness constancy" assumption is no longer valid. This dissertation contributes three novel stereo algorithms motivated by the specific requirements and limitations imposed by different applications. In addressing high-speed depth estimation from images, we present a stereo algorithm that achieves high-quality results while maintaining real-time performance. We introduce an adaptive aggregation step in a dynamic-programming framework: matching costs are aggregated in the vertical direction using a computationally expensive weighting scheme based on color and distance proximity, and we utilize the vector processing capability and parallelism of commodity graphics hardware to speed up this process by over two orders of magnitude. In addressing high-accuracy depth estimation, we present a stereo model that makes use of constraints from points with known depths, the Ground Control Points (GCPs) of the stereo literature. Our formulation explicitly models the influence of GCPs in a Markov Random Field, and a novel regularization prior is naturally integrated into a global inference framework in a principled way using Bayes' rule. This probabilistic framework allows GCPs to be obtained from various modalities and provides a natural way to integrate information from various sensors. In addressing non-Lambertian reflectance, we introduce a new invariant for stereo correspondence which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions, BRDFs). This invariant can be used to formulate a rank constraint on stereo matching when the scene is observed under several lighting configurations in which only the lighting intensity varies.
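
The color-and-distance weighting used in the aggregation step is in the spirit of adaptive support weights. A hedged sketch; the gamma constants and window radius are illustrative choices, not the dissertation's settings:

```python
import numpy as np

# Adaptive support weight combining color similarity and spatial proximity.
def support_weight(color_p, color_q, dist_pq, gamma_c=10.0, gamma_d=14.0):
    delta_c = np.linalg.norm(color_p - color_q)  # color dissimilarity
    return np.exp(-(delta_c / gamma_c + dist_pq / gamma_d))

# Aggregate raw matching costs over a vertical window around row r,
# as in the dynamic-programming variant described above.
def aggregate_vertical(costs, colors, r, radius=8):
    rows = range(max(0, r - radius), min(len(costs), r + radius + 1))
    w = np.array([support_weight(colors[r], colors[i], abs(i - r)) for i in rows])
    c = np.array([costs[i] for i in rows])
    return (w * c).sum() / w.sum()  # normalized weighted cost
```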
28

Bayesian numerical analysis : global optimization and other applications

Fowkes, Jaroslav Mrazek January 2011 (has links)
We present a unifying framework for the global optimization of functions which are expensive to evaluate. The framework is based on a Bayesian interpretation of radial basis function interpolation which incorporates existing methods such as Kriging, Gaussian process regression and neural networks. This viewpoint enables the application of Bayesian decision theory to derive a sequential global optimization algorithm which can be extended to include existing algorithms of this type in the literature. By posing the optimization problem as a sequence of sampling decisions, we optimize a general cost function at each stage of the algorithm. An extension to multi-stage decision processes is also discussed. The key idea of the framework is to replace the underlying expensive function by a cheap surrogate approximation. This enables the use of existing branch and bound techniques to globally optimize the cost function. We present a rigorous analysis of the canonical branch and bound algorithm in this setting as well as newly developed algorithms for other domains including convex sets. In particular, by making use of Lipschitz continuity of the surrogate approximation, we develop an entirely new algorithm based on overlapping balls. An application of the framework to the integration of expensive functions over rectangular domains and spherical surfaces in low dimensions is also considered. To assess performance of the framework, we apply it to canonical examples from the literature as well as an industrial model problem from oil reservoir simulation.
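
A standard instance of the per-stage cost function in such Bayesian frameworks is expected improvement under a Gaussian posterior. A minimal sketch of that choice, which is one possibility rather than necessarily the cost function developed in the thesis:

```python
import numpy as np
from scipy.stats import norm

# Expected improvement at a candidate point whose surrogate posterior is
# N(mu, sigma^2), given the best observed value f_best (minimization).
def expected_improvement(mu, sigma, f_best):
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)  # degenerate (noise-free known) case
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

print(expected_improvement(mu=0.2, sigma=0.5, f_best=0.3))
```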
29

Employing GPUs in Global Optimization Problems

Hošala, Michal January 2014 (has links)
The global optimization problem -- i.e., the problem of finding the global extreme points of a given function on a restricted domain of values -- appears in many real-world applications. Improving the efficiency of this task can reduce application latency or provide a more precise result, since the task is usually solved by an approximative algorithm. This thesis focuses on the practical aspects of global optimization algorithms, especially in the domain of algorithmic trading data analysis. Successful CPU implementations of global optimization solvers already exist, but they are quite time-demanding. The main objective of this thesis is to design a global optimization (GO) solver that utilizes the raw computational power of GPU devices. Although GPUs have significantly more computational cores than CPUs, parallelizing a known serial algorithm is often quite challenging due to the specific execution model and the memory architecture constraints of existing GPU architectures. The thesis therefore explores multiple approaches to the problem and presents their experimental results.
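
The core opportunity such a solver exploits is that candidate evaluation in many GO methods is embarrassingly parallel. A hedged sketch of that pattern in vectorized NumPy; on a GPU one would swap in a library such as CuPy, and the Rastrigin benchmark is our illustrative choice, not the thesis's trading objective:

```python
import numpy as np  # swap in cupy for a GPU-resident version of this pattern

# Batch evaluation of an objective over many candidate points at once --
# the embarrassingly parallel core of many global optimization methods.
def rastrigin(X):  # X: (n, d) batch of candidate points
    return 10 * X.shape[1] + (X**2 - 10 * np.cos(2 * np.pi * X)).sum(axis=1)

rng = np.random.default_rng(0)
candidates = rng.uniform(-5.12, 5.12, size=(1_000_000, 4))
values = rastrigin(candidates)
print(candidates[values.argmin()], values.min())  # best candidate found
```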
30

Optimization of the compression/restoration chain for satellite images

Carlavan, Mikaël 10 June 2013 (has links)
The subject of this work is image coding and restoration in the context of satellite imaging. Despite recent developments in image restoration techniques and embedded compression algorithms, the reconstructed image still suffers from coding artifacts that make its quality difficult to evaluate. The objective of the thesis is to improve the quality of the final image by studying the optimal structure of decoding and restoration with respect to the properties of the acquisition and compression processes. More specifically, the aim of this work is to propose a reliable technique for the optimal decoding-deconvolution-denoising problem, with a view toward global optimization of the compression/restoration chain. The thesis is organized in three parts. The first part is a general introduction to the problem addressed in this work. We review the state of the art in restoration and compression techniques for satellite imaging, and we describe the imaging chain currently used by the French Space Agency (CNES), which serves as the reference throughout this work.
The second part is concerned with the global optimization of the satellite imaging chain. We propose an approach to estimate the theoretical distortion of the complete chain and present, for three different configurations of coding/restoration, an algorithm to perform its minimization. Our second contribution also focuses on the global chain but aims more at optimizing the visual quality of the final image. We present numerical methods to improve the quality of the reconstructed image and propose a novel imaging chain based on image quality assessments of these techniques. The last part of the thesis introduces a satellite imaging chain based on a new sampling theory. This approach is interesting in the context of satellite imaging because it transfers all the difficulties to the on-ground decoder. We recall the main theoretical results of this sampling technique, present a satellite imaging chain built on this framework, and propose an algorithm to solve the reconstruction problem. We conclude by comparing the proposed chain to the one currently used by the CNES.
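
As a small illustration of the restoration side of such a chain, classical Wiener deconvolution performs a simple joint deconvolution-denoising step. The sketch below is generic; the PSF and noise-to-signal ratio are illustrative placeholders, not CNES's actual instrument model:

```python
import numpy as np

# Classical Wiener deconvolution in the Fourier domain: one simple
# instance of the deconvolution-denoising step discussed above.
def wiener_deconvolve(image, psf, nsr=1e-2):
    H = np.fft.fft2(psf, s=image.shape)      # transfer function of the blur
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter, nsr = noise/signal
    return np.real(np.fft.ifft2(W * G))

blurred = np.random.default_rng(0).random((64, 64))  # stand-in observation
psf = np.ones((5, 5)) / 25.0                          # uniform blur kernel
restored = wiener_deconvolve(blurred, psf)
```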
