21

Um metodo do tipo lagrangiano aumentado com região de confiança / On augmented lagrangian methods with trust-region

Castelani, Emerson Vitor 13 August 2018 (has links)
Orientador: Jose Mario Martinez Perez / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Matematica, Estatistica e Computação Cientifica / Issue date: 2009 / Resumo (translated): When solving nonlinear programming problems with Augmented Lagrangian methods, a phenomenon called greediness may occur: the method seeks highly infeasible points with very small objective function values. This happens, in general, in the first iterations, so the penalty parameter must grow excessively, making the subproblems ill-conditioned and harming convergence. The purpose of this work is therefore to add adaptive box constraints (a trust region) to each subproblem at every outer iteration, so that the distance between consecutive outer iterates is controlled. The new method inhibits the greediness phenomenon. Convergence results, penalty parameter boundedness, and numerical examples are presented / Abstract: When we solve nonlinear programming problems by means of Augmented Lagrangian algorithms, a phenomenon called greediness may occur. Unconstrained minimizers attract the iterates at early stages of the calculations and, so, the penalty parameter needs to grow excessively, in such a way that ill-conditioning harms the overall convergence. The proposal of this work is thus to add an adaptive artificial box constraint (trust region) to the subproblem at every outer iteration, in such a way that the distance between consecutive outer iterates is controlled. The new method inhibits the greediness phenomenon. Convergence proofs and numerical examples are given / Doctorate / Optimization / Doctor of Applied Mathematics
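The box-restricted outer iteration described in the abstract can be sketched in a few lines. This is only a schematic reading of the idea, not the thesis's algorithm: the subproblem solver (SciPy's L-BFGS-B), the fixed box radius `delta`, the penalty growth factor, and the single equality constraint are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian_box(f, g, x0, rho=10.0, lam=0.0, delta=1.0, iters=20):
    """Augmented Lagrangian for min f(x) s.t. g(x) = 0, with a box
    (trust region) of radius delta around the previous outer iterate,
    which keeps consecutive outer iterates close and inhibits greediness."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        def L(z):
            return f(z) + lam * g(z) + 0.5 * rho * g(z) ** 2
        bounds = [(xi - delta, xi + delta) for xi in x]   # the box constraint
        x = minimize(L, x, method="L-BFGS-B", bounds=bounds).x
        lam = lam + rho * g(x)            # first-order multiplier update
        if abs(g(x)) > 1e-4:              # still infeasible: raise the penalty
            rho *= 10.0
    return x, lam

# toy problem: min (x0-2)^2 + (x1-2)^2  subject to  x0 + x1 = 1
f = lambda z: (z[0] - 2) ** 2 + (z[1] - 2) ** 2
g = lambda z: z[0] + z[1] - 1.0
x_star, lam_star = augmented_lagrangian_box(f, g, np.zeros(2))
```

On this convex toy problem the iterates converge to (0.5, 0.5) with multiplier 3, and the penalty parameter stops growing once the iterates are nearly feasible.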
22

Continuous steepest descent path for traversing non-convex regions

Beddiaf, Salah January 2016 (has links)
In this thesis, we investigate methods of finding a local minimum for unconstrained problems of non-convex functions of n variables by following the solution curve of a system of ordinary differential equations. The motivation for this is the fact that existing methods (e.g. those based on Newton methods with line search) sometimes terminate at a non-stationary point when applied to functions f(x) whose Hessian ∇²f(x) is not positive definite for all x. Even when such methods terminate at a stationary point, it could be a saddle point or a maximum rather than a minimum. The only method which makes intuitive sense in a non-convex region is the trust region approach, where we seek a step which minimises a quadratic model subject to a restriction on the two-norm of the step size. This gives a well-defined search direction, but at the expense of a costly evaluation. The algorithms derived in this thesis are gradient-based methods which require systems of equations to be solved at each step but which do not use a line search in the usual sense. Progress along the Continuous Steepest Descent Path (CSDP) is governed both by the decrease in the function value and by measures of accuracy of a local quadratic model. Numerical results on specially constructed test problems and on a number of standard test problems from CUTEr [38] show that the approaches we have considered are more promising than routines in the optimization toolbox of MATLAB [46], namely the trust region method and the quasi-Newton method. In particular, they perform well in comparison with the superficially similar gradient-flow method proposed by Behrman [7].
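The path being followed is the solution curve of dx/dt = -∇f(x). A minimal forward-Euler sketch on an assumed double-well test function shows the idea; the thesis's CSDP algorithms control progress adaptively rather than with the fixed step `h` used here.

```python
import numpy as np

def csdp_euler(grad, x0, h=0.01, tol=1e-6, max_steps=100_000):
    """Follow the continuous steepest descent path dx/dt = -grad f(x)
    with plain forward Euler steps until the gradient is near zero."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:       # stationary point reached
            break
        x = x - h * g
    return x

# non-convex double-well test function f(x, y) = (x^2 - 1)^2 + y^2,
# with minima at (+-1, 0) and a saddle at the origin
def dw_grad(z):
    return np.array([4.0 * z[0] * (z[0] ** 2 - 1.0), 2.0 * z[1]])

x_min = csdp_euler(dw_grad, np.array([2.0, 1.0]))
```

Starting from (2, 1), the path descends through the non-convex region into the basin of the minimiser (1, 0) rather than stalling at the saddle.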
23

Computationally Efficient and Robust Kinematic Calibration Methodologies and their Application to Industrial Robots

Messay-Kebede, Temesguen January 2014 (has links)
No description available.
24

Adaptive Sampling Methods for Stochastic Optimization

Daniel Andres Vasquez Carvajal (10631270) 08 December 2022 (has links)
This dissertation investigates the use of sampling methods for solving stochastic optimization problems using iterative algorithms. Two sampling paradigms are considered: (i) adaptive sampling, where, before each iterate update, the sample size for estimating the objective function and the gradient is adaptively chosen; and (ii) retrospective approximation (RA), where iterate updates are performed using a chosen fixed sample size for as long as progress is deemed statistically significant, at which time the sample size is increased. We investigate adaptive sampling within the context of a trust-region framework for solving stochastic optimization problems in $\mathbb{R}^d$, and retrospective approximation within the broader context of solving stochastic optimization problems on a Hilbert space. In the first part of the dissertation, we propose Adaptive Sampling Trust-Region Optimization (ASTRO), a class of derivative-based stochastic trust-region (TR) algorithms developed to solve smooth stochastic unconstrained optimization problems in $\mathbb{R}^{d}$ where the objective function and its gradient are observable only through a noisy oracle or using a large dataset. Efficiency in ASTRO stems from two key aspects: (i) adaptive sampling, which ensures that the objective function and its gradient are sampled only to the extent needed, so that small sample sizes are chosen when the iterates are far from a critical point and large sample sizes when they are near one; and (ii) quasi-Newton Hessian updates using BFGS. We prove three main results for ASTRO and for general stochastic trust-region methods that estimate function and gradient values adaptively, using sample sizes that are stopping times with respect to the sigma algebra of the generated observations. The first asserts strong consistency when the adaptive sample sizes have a mild logarithmic lower bound, assuming that the oracle errors are light-tailed.
The second and third results characterize the iteration and oracle complexities in terms of certain risk functions. Specifically, the second result asserts that the best achievable $\mathcal{O}(\epsilon^{-1})$ iteration complexity (of squared gradient norm) is attained when the total relative risk associated with the adaptive sample size sequence is finite; and the third result characterizes the corresponding oracle complexity in terms of the total generalized risk associated with the adaptive sample size sequence. We report encouraging numerical results in certain settings. In the second part of this dissertation, we consider the use of RA as an alternate adaptive sampling paradigm for solving smooth stochastic constrained optimization problems in infinite-dimensional Hilbert spaces. RA generates a sequence of subsampled deterministic infinite-dimensional problems that are approximately solved within a dynamic error tolerance; the bottleneck in RA becomes solving this sequence of problems efficiently. To this end, we propose a progressive subspace expansion (PSE) framework for solving smooth deterministic optimization problems in infinite-dimensional Hilbert spaces with a TR Sequential Quadratic Programming (SQP) solver. The infinite-dimensional optimization problem is discretized, and a sequence of finite-dimensional problems is solved with progressively increasing problem dimension. Additionally, (i) we solve this sequence of finite-dimensional problems only to the extent necessary, i.e., we spend just enough computational work to solve each problem within a dynamic error tolerance, and (ii) we use the solution of the current optimization problem as the initial guess for the subsequent problem. We prove two main results for PSE. The first asserts convergence to a first-order critical point of a subsequence of iterates generated by the PSE TR-SQP algorithm.
The second characterizes the relationship between the error tolerance and the problem dimension, and provides an oracle complexity result for the total amount of computational work incurred by PSE. This amount of computational work is closely connected to three quantities: the convergence rate of the finite-dimensional spaces to the infinite-dimensional space, the rate of increase of the cost of making oracle calls in finite-dimensional spaces, and the convergence rate of the solution method used. We also show encouraging numerical results on an optimal control problem supporting our theoretical findings.
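The "sample only to the extent needed" idea behind ASTRO can be illustrated with a simple norm test: keep drawing oracle samples until the standard error of the averaged gradient is small relative to its norm. Everything concrete below (the quadratic objective, noise level `sigma`, and constant `theta`) is an assumption for illustration, not the adaptive rule proved in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x, sigma=0.1):
    """Noisy oracle for grad f where f(x) = 0.5*||x||^2, so grad f(x) = x."""
    return x + sigma * rng.standard_normal(x.shape)

def adaptive_sample_grad(x, theta=0.5, n0=2, n_max=10_000):
    """Norm test: grow the sample until the standard error of the averaged
    gradient estimate is at most theta times the estimate's norm."""
    samples = [noisy_grad(x) for _ in range(n0)]
    while len(samples) < n_max:
        g_bar = np.mean(samples, axis=0)
        se = np.linalg.norm(np.std(samples, axis=0, ddof=1)) / np.sqrt(len(samples))
        if se <= theta * np.linalg.norm(g_bar):
            break
        samples.append(noisy_grad(x))
    return np.mean(samples, axis=0), len(samples)

# far from the minimiser a small sample suffices; near it, many draws are needed
g_far, n_far = adaptive_sample_grad(np.array([10.0, 10.0]))
g_near, n_near = adaptive_sample_grad(np.array([0.01, 0.01]))
```

The sample-size behaviour matches the description above: the test is satisfied immediately when the gradient dwarfs the noise, and only after many draws near a critical point.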
25

Inexact Newton Methods Applied to Under-Determined Systems

Simonis, Joseph P 04 May 2006 (has links)
Consider an under-determined system of nonlinear equations F(x)=0, F:R^m→R^n, where F is continuously differentiable and m > n. This system appears in a variety of applications, including parameter-dependent systems, dynamical systems with periodic solutions, and nonlinear eigenvalue problems. Robust, efficient numerical methods are often required for the solution of this system. Newton's method is an iterative scheme for solving the nonlinear system of equations F(x)=0, F:R^n→R^n. Simple to implement and theoretically sound, it is not, however, often practical in its pure form. Inexact Newton methods and globalized inexact Newton methods are computationally efficient variations of Newton's method commonly used on large-scale problems. Frequently, these variations are more robust than Newton's method. Trust region methods, thought of here as globalized exact Newton methods, are not as computationally efficient in the large-scale case, yet notably more robust than Newton's method in practice. The normal flow method is a generalization of Newton's method for solving the system F:R^m→R^n, m > n. Easy to implement, this method has a simple and useful local convergence theory; however, in its pure form, it is not well suited for solving large-scale problems. This dissertation presents new methods that improve the efficiency and robustness of the normal flow method in the large-scale case. These are developed in direct analogy with inexact-Newton, globalized inexact-Newton, and trust-region methods, with particular consideration of the associated convergence theory. Included are selected problems of interest simulated in MATLAB.
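The normal flow step itself, the minimum-norm solution of J(x) dx = -F(x), is easy to state in code. A toy sketch (far from the large-scale inexact variants the dissertation develops), assuming one equation in two unknowns:

```python
import numpy as np

def normal_flow(F, J, x0, tol=1e-10, max_iter=50):
    """Each step takes the minimum-norm solution of J(x) dx = -F(x),
    i.e. dx = -pinv(J(x)) @ F(x), which lstsq returns directly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = np.atleast_1d(F(x))
        if np.linalg.norm(Fx) < tol:
            break
        dx, *_ = np.linalg.lstsq(J(x), -Fx, rcond=None)
        x = x + dx
    return x

# one equation, two unknowns: the unit circle x^2 + y^2 = 1
F = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 1.0])
J = lambda z: np.array([[2.0 * z[0], 2.0 * z[1]]])
x_sol = normal_flow(F, J, np.array([2.0, 1.0]))
```

Because each step is normal to the level set, the iterates from (2, 1) converge to the nearest point of the circle along the ray through the origin.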
26

Algorithm Design and Analysis for Large-Scale Semidefinite Programming and Nonlinear Programming

Lu, Zhaosong 24 June 2005 (has links)
The limiting behavior of weighted paths associated with the semidefinite programming (SDP) map $X^{1/2}SX^{1/2}$ was studied, and some applications to error bound analysis and to the superlinear convergence of a class of primal-dual interior-point methods were provided. A new approach for solving large-scale, well-structured sparse SDPs via a saddle point mirror-prox algorithm with $\mathcal{O}(\epsilon^{-1})$ efficiency was developed, based on exploiting sparsity structure and reformulating SDPs as smooth convex-concave saddle point problems. An iterative solver-based long-step primal-dual infeasible path-following algorithm for convex quadratic programming (CQP) was developed. The search directions of this algorithm were computed by means of a preconditioned iterative linear solver. A uniform bound, depending only on the CQP data, on the number of iterations performed by the preconditioned iterative linear solver was established, and a polynomial bound on the number of iterations of the algorithm itself was also obtained. An efficient "nearly exact" method for solving large-scale "low-rank" trust region subproblems was proposed that completely avoids the computation of Cholesky or partial Cholesky factorizations. A computational study of this method was also provided by applying it to some large-scale nonlinear programming problems.
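The "nearly exact" flavour of trust region subproblem solver rests on the classical characterization: p = -(B + lam*I)^{-1} g for some lam >= max(0, -lambda_min(B)) with ||p|| <= delta. The bisection sketch below is a generic stand-in, not the thesis's method; it uses exactly the dense eigenvalue and factorization work that a low-rank method avoids, and it ignores the so-called hard case.

```python
import numpy as np

def tr_subproblem(g, B, delta, bisect_iters=100):
    """Solve min g.p + 0.5 p.B.p s.t. ||p|| <= delta 'nearly exactly':
    find lam >= max(0, -lambda_min(B)) with (B + lam*I) p(lam) = -g,
    taking the boundary solution ||p(lam)|| = delta when needed."""
    n = len(g)
    lam_min = np.linalg.eigvalsh(B)[0]
    if lam_min > 0:                        # try the unconstrained minimiser
        p = np.linalg.solve(B, -g)
        if np.linalg.norm(p) <= delta:
            return p
    p_of = lambda lam: np.linalg.solve(B + lam * np.eye(n), -g)
    lo = max(0.0, -lam_min) + 1e-12        # ||p(lam)|| decreases on (lo, inf)
    hi = lo + 1.0
    while np.linalg.norm(p_of(hi)) > delta:
        hi *= 2.0                          # grow until p(hi) is feasible
    for _ in range(bisect_iters):          # bisect for ||p(lam)|| = delta
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(p_of(mid)) > delta else (lo, mid)
    return p_of(hi)
```

Even for an indefinite B, the returned step is feasible and improves on the simple Cauchy (steepest-descent) point of the model.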
27

Optimization of force fields for molecular dynamics

Di Pierro, Michele 09 February 2015 (has links)
A technology for optimization of potential parameters from condensed phase simulations (POP) is discussed and illustrated. It is based on direct calculation of the derivatives of macroscopic observables with respect to the potential parameters. The derivatives are used in a local minimization scheme comparing simulated and experimental data. In particular, we show that the Newton Trust-Region protocol allows for accurate and robust optimization. POP is illustrated on a toy problem, alanine dipeptide, and applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides modeled with the usual functional form of a force field. We then apply the newly developed technology to study the liquid mixture of tert-butanol and water. We are able to obtain, after 4 iterations, the correct phase behavior and accurately predict the value of the Kirkwood-Buff (KB) integrals. We further illustrate that a potential determined solely by KB information, or by the pair correlation function, is not necessarily unique.
28

Progressive Validity Metamodel Trust Region Optimization

Thomson, Quinn Parker 26 February 2009 (has links)
The goal of this work was to develop metamodels for the MDO framework piMDO and to contribute new research in metamodeling strategies. The theory of existing metamodels is presented and implementation details are given. A new trust region scheme, metamodel trust region optimization (MTRO), was developed. This method uses a progressively increasing minimum level of model validity in order to reduce the number of sample points required by the optimization process. Higher levels of validity require denser point distributions, but the shrinking size of the region over the course of the optimization mitigates the increase in the number of points required. New metamodeling strategies include inherited optimal latin hypercube sampling, hybrid latin hypercube sampling, and kriging with BFGS. MTRO performs better than traditional trust region methods on single-discipline problems and is competitive with other MDO architectures when used with a CSSO algorithm. Advanced metamodeling methods proved to be inefficient in trust region methods.
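The metamodel trust region loop can be caricatured as: fit a metamodel to samples in the current region, move to the model's best polled point, and resize the region on success or failure. The sketch below substitutes a least-squares quadratic and random polling for the thesis's kriging and latin hypercube machinery; all constants are invented for the toy problem.

```python
import numpy as np

def quad_features(X):
    # full quadratic basis in 2-d: 1, x, y, x^2, x*y, y^2
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def metamodel_tr(f, x0, delta=2.0, iters=40, rng=None):
    """Fit a quadratic metamodel to samples in the current region, move to
    the model's best polled candidate, and resize the region on success."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        S = x + delta * rng.uniform(-1.0, 1.0, size=(20, 2))    # sample plan
        y = np.array([f(s) for s in S])
        coef, *_ = np.linalg.lstsq(quad_features(S), y, rcond=None)
        C = x + delta * rng.uniform(-1.0, 1.0, size=(500, 2))   # poll points
        x_new = C[np.argmin(quad_features(C) @ coef)]           # model optimum
        f_new = f(x_new)
        if f_new < fx:
            x, fx = x_new, f_new
            delta = min(2.0 * delta, 2.0)   # success: expand (capped)
        else:
            delta *= 0.5                    # failure: shrink the region
    return x

# smooth test function with minimum at (1, -2)
f = lambda z: (z[0] - 1.0) ** 2 + 2.0 * (z[1] + 2.0) ** 2
x_best = metamodel_tr(f, np.array([5.0, 5.0]))
```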
30

Derivative Free Optimization Methods: Application In Stirrer Configuration And Data Clustering

Akteke, Basak 01 July 2005 (has links) (PDF)
Recent developments show that derivative free methods are in high demand among researchers for solving optimization problems in various practical contexts. Although well-known optimization methods that employ derivative information can be very efficient, a derivative free method will be more efficient in cases where the objective function is nondifferentiable, or the derivative information is unavailable or unreliable. Derivative Free Optimization (DFO) was developed for solving small-dimensional problems (fewer than 100 variables) in which the computation of an objective function is relatively expensive and its derivatives are not available. Problems of this nature arise more and more in modern physical, chemical, and econometric measurements and in engineering applications where computer simulation is employed to evaluate the objective functions. In this thesis, we give an example of the implementation of DFO in an approach for optimizing stirrer configurations, combining a parametrized grid generator, a flow solver, and DFO. A derivative free method, namely DFO, is preferred because the gradient of the objective function with respect to the stirrer's design variables is not directly available; this nonlinear objective function is obtained from the flow field by the flow solver. We present and interpret numerical results of this implementation. Moreover, we contribute a survey of DFO research directions, together with an analysis and discussion of these. We also present a derivative free algorithm used within a clustering algorithm, in combination with non-smooth optimization techniques, to demonstrate the effectiveness of derivative free methods in computations. This algorithm is applied to data sets from various sources in public life and medicine. We compare various methods and their practical backgrounds, and conclude with a summary and outlook.
This work may serve as preparation for possible future research.
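As a tiny concrete example of the derivative free idea, a compass (direct) search needs only function values, which makes it usable on the nondifferentiable objectives mentioned above. This generic sketch is not the model-based DFO method used in the thesis.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=10_000):
    """Poll the 2n coordinate directions; move on the first improvement,
    halve the step when no direction improves. Needs only f-values."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = len(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        if step <= tol:
            break
        for d in directions:
            trial = x + step * d
            f_trial = f(trial)
            if f_trial < fx:
                x, fx = trial, f_trial
                break
        else:
            step *= 0.5                  # full poll failed: refine the mesh
    return x

# nondifferentiable objective with minimiser (3, -1)
f = lambda z: abs(z[0] - 3.0) + abs(z[1] + 1.0)
x_best = compass_search(f, np.zeros(2))
```

Despite the kinks in f, the search walks to the minimiser and then shrinks the step until the tolerance is met.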
