1 |
Constrained and unconstrained optimisation with trust region methods and quadratic local approximations. Walmag, Jérôme, 21 March 2011
This work deals with optimisation problems in which the numerical cost associated with the evaluation of the target function and/or of the constraints is large; the number of calls to these functions by the optimisation algorithm should therefore be kept as small as possible.
The first part of the work concerns globalisation by trust regions, in which the objective function and the constraints are replaced, within a certain trust region, by local approximations that are easier to use.
Different types of local approximations are introduced, but the main part of the work deals with quadratic approximations. The theoretical aspects of the global convergence of trust-region methods are also presented.
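The trust-region mechanics that this first part builds on can be sketched as follows. This is a generic illustration in standard notation, not code from the thesis; the function names are assumptions.

```python
import numpy as np

def quadratic_model(f0, g, B, p):
    """Quadratic local approximation m(p) of f(x + p), built from the value
    f0, the gradient g and a symmetric (approximate) Hessian B at x."""
    return f0 + g @ p + 0.5 * p @ (B @ p)

def reduction_ratio(f0, f_new, g, B, p):
    """Ratio of the actual reduction of f to the reduction predicted by the
    model; trust-region methods accept the step and resize the region
    according to this ratio."""
    predicted = f0 - quadratic_model(f0, g, B, p)
    actual = f0 - f_new
    return actual / predicted
```

When f itself is quadratic the model is exact and the ratio equals 1; values far from 1 signal a poor local approximation within the current region.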
One of the applications considered in this work is the parametric identification of a dynamical model with respect to experimental measurements. This identification can be expressed as an unconstrained optimisation problem. The practical implementation of the identification algorithm requires the derivative of the objective function, and hence the differentiation of the underlying model; the key factor here is the computational cost of that differentiation, which is reviewed in detail. An algorithm, named Trust, has been implemented: it is a quasi-Newton trust-region method that uses quadratic local approximations. The choice of the differentiation method is analysed in detail, in relation to its influence on the rate of convergence.
A new update strategy for the trust-region radius is also introduced. The trust-region radius is a parameter that measures the size of the trust region around the current iterate. The new approach relies on the identification and appropriate handling of so-called too successful iterations, which lead to a much larger reduction of the objective function than predicted by the local approximation. This approach yields a significant improvement in the performance of the algorithm.
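A radius-update rule that singles out too successful iterations might look like the following sketch. The thresholds and factors are illustrative placeholders, not the values used by the Trust algorithm.

```python
def update_radius(radius, ratio, eta1=0.1, eta2=0.9, eta_ts=2.0,
                  shrink=0.5, grow=2.0, radius_max=1e3):
    """Sketch of a trust-region radius update. A ratio below eta1 marks a
    failed step and shrinks the region; a ratio above eta_ts marks a 'too
    successful' iteration: the actual reduction far exceeds the predicted
    one, so the model cannot be trusted and the radius is held rather than
    enlarged. All parameter values here are hypothetical."""
    if ratio < eta1:            # unsuccessful iteration: shrink the region
        return shrink * radius
    if ratio > eta_ts:          # too successful: hold the radius cautiously
        return radius
    if ratio > eta2:            # very successful: enlarge the region
        return min(grow * radius, radius_max)
    return radius               # successful: leave the radius unchanged
```

A classical update would enlarge the radius for any ratio above eta2; treating ratios far above 1 with suspicion is the behaviour the new strategy targets.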
Constrained optimisation is then considered using sequential quadratic methods. A fully effective algorithm for the solution of convex quadratic sub-problems with quadratic constraints is introduced. This original method, named UVQCQP, uses an exact non-differentiable penalty function to address the constrained optimisation problem. The algorithm relies on a decomposition of the variable space into three orthogonal subspaces: a first subspace accounting for bound constraints, a second in which the objective function is continuously differentiable, and a third in which it exhibits slope discontinuities. The performance of this algorithm is further improved by the implementation of a fast mode that exploits second-order corrections.
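The exact non-differentiable penalty idea can be illustrated with an l1-type penalty. This is a generic sketch under standard assumptions, not the UVQCQP implementation; the names and the choice of penalty parameter are illustrative.

```python
def l1_exact_penalty(f, constraints, sigma):
    """Build phi(x) = f(x) + sigma * sum_i max(0, c_i(x)) for constraints
    written as c_i(x) <= 0. The max terms introduce the slope
    discontinuities ('kinks') that motivate a dedicated subspace in the
    decomposition; for sigma large enough, unconstrained minimisers of phi
    solve the constrained problem exactly."""
    def phi(x):
        violation = sum(max(0.0, c(x)) for c in constraints)
        return f(x) + sigma * violation
    return phi
```

For example, minimising x^2 subject to x >= 1 (that is, c(x) = 1 - x <= 0) with sigma = 10 gives a penalty function whose minimiser coincides with the constrained solution x = 1.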
Finally, the UVQCQP algorithm is applied within the framework of sequential quadratic programming with quadratic constraints: its advantages are demonstrated through several examples. The numerical tests carried out reveal very encouraging prospects.
|
2 |
Robust Parameter Inversion Using Stochastic Estimates. Munster, Drayton William, 10 January 2020
For parameter inversion problems governed by systems of partial differential equations, such as those arising in Diffuse Optical Tomography (DOT), even the cost of repeated objective function evaluation can be overwhelming. Despite the linear (in the state variable) nature of the DOT problem, the nonlinear parameter inversion process is dominated by the computational burden of solving a large linear system for each source and frequency. To compute the Jacobian for use in Newton-type methods, an adjoint solve is required for each detector and frequency. Since a three-dimensional tomography problem may have nearly 1,000 sources and detectors, the computational cost of an optimization routine is a large burden. While techniques from model order reduction can partially alleviate the computational cost, obtaining error bounds in parameter space is typically not feasible. In this work, we examine two different remedies based on stochastic estimates of the objective function.
In the first manuscript, we focus on maximizing the efficiency of using stochastic estimates by replacing our objective function with a surrogate objective function computed from a reduced order model (ROM). We use as few as a single sample to detect a misfit between the full-order and surrogate objective functions. Once a sufficiently large difference is detected, it is necessary to update the ROM to reduce the error. We propose a new technique for improving the ROM with very few large linear solves. Using this technique, we observe a reduction of up to 98% in the number of large linear solves for a three-dimensional tomography problem.
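The randomized-combination idea behind such stochastic estimates can be sketched as a Hutchinson-type estimator. The interface and names below are hypothetical, not the manuscript's code.

```python
import numpy as np

def stochastic_misfit(apply_full, apply_rom, n_sources, n_samples=1, seed=0):
    """Estimate the squared misfit between the full-order and reduced-order
    source-to-measurement maps from random +/-1 combinations of sources.
    Because the weights have zero mean and unit variance, the expectation
    of ||full(w) - rom(w)||^2 equals the exact squared Frobenius-norm
    misfit, so even a single sample (one extra full solve) gives an
    unbiased indicator of surrogate quality."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        w = rng.choice([-1.0, 1.0], size=n_sources)  # one randomized source
        diff = apply_full(w) - apply_rom(w)
        total += diff @ diff
    return total / n_samples
```

Once the estimate exceeds a tolerance, the surrogate objective is declared stale and the ROM is updated.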
In the second manuscript, we focus on establishing a robust algorithm. We propose a new trust-region framework that replaces objective function evaluations with stochastic estimates of the improvement factor and of the misfit between the model and objective function gradients. If these estimates satisfy a fixed multiplicative error bound with a high, but fixed, probability, we show that this framework converges almost surely to a stationary point of the objective function. We derive suitable bounds for the DOT problem and present results illustrating the robust nature of these estimates with only 10 samples per iteration.
/ Doctor of Philosophy /
For problems such as medical imaging, the process of reconstructing the state of a system from measurement data can be very expensive to compute. The ever-increasing need for high accuracy requires very large models to be used. Reducing the computational burden by replacing the model with a specially constructed smaller model is an established and effective technique. However, it can be difficult to determine how well the smaller model matches the original model. In this thesis, we examine two techniques for estimating the quality of a smaller model based on randomized combinations of sources and detectors.
The first technique focuses on reducing the computational cost as much as possible. With the equivalent of a single randomized source, we show that this estimate is an effective measure of the model quality. Coupled with a new technique for improving the smaller model, we demonstrate a highly efficient and robust method.
The second technique prioritizes robustness. The algorithm uses these randomized combinations to estimate how the observations change for different system states. If these estimates are accurate with high probability, we show that this leads to a method that always finds a minimum misfit between the predicted values and the observed data.
|
3 |
An Approach for the Adaptive Solution of Optimization Problems Governed by Partial Differential Equations with Uncertain Coefficients. Kouri, Drew, 05 September 2012
Using derivative-based numerical optimization routines to solve optimization problems governed by partial differential equations (PDEs) with uncertain coefficients is computationally expensive due to the large number of PDE solves required at each iteration. In this thesis, I present an adaptive stochastic collocation framework for the discretization and numerical solution of these PDE-constrained optimization problems. This adaptive approach is based on dimension-adaptive sparse grid interpolation and employs trust regions to manage the adapted stochastic collocation models. Furthermore, I prove the convergence of sparse grid collocation methods applied to these optimization problems, as well as the global convergence of the retrospective trust-region algorithm under weakened assumptions on gradient inexactness. In fact, if one can bound the error between actual and modeled gradients using reliable and efficient a posteriori error estimators, then the global convergence of the proposed algorithm follows. Moreover, I describe a high-performance implementation of my adaptive collocation and trust-region framework using the C++ programming language with the Message Passing Interface (MPI). Many PDE solves are required to accurately quantify the uncertainty in such optimization problems; it is therefore essential to choose appropriate inexpensive approximate models and large-scale nonlinear programming techniques throughout the optimization routine. Numerical results for the adaptive solution of these optimization problems are presented.
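The collocation idea at the heart of this framework, discretising an expectation of the PDE-constrained objective by quadrature in the uncertain parameter, can be sketched in one dimension. The fixed Gauss-Legendre rule below is only an illustrative stand-in for the dimension-adaptive sparse grids of the thesis.

```python
import numpy as np

def collocated_expectation(j, nodes, weights):
    """Approximate E[j(xi)] for an uncertain coefficient xi by evaluating
    the deterministic objective j at collocation nodes and summing with
    quadrature weights. In the PDE-constrained setting, each evaluation of
    j stands in for a full PDE solve."""
    return sum(w * j(x) for x, w in zip(nodes, weights))

# Gauss-Legendre rule rescaled so the weights form a probability measure
# for xi uniform on [-1, 1] (an illustrative choice of distribution).
nodes, weights = np.polynomial.legendre.leggauss(5)
weights = weights / 2.0
```

For j(xi) = xi^2, the five-node rule reproduces E[xi^2] = 1/3 exactly, since Gauss-Legendre quadrature with five nodes is exact for polynomials up to degree nine.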
|