1.
Efficient Solution of Optimization Problems with Constraints and/or Cost Functions Expensive to Evaluate
Kurtdere, Ahmet Gokhan, 01 January 2010 (PDF)
There are many optimization problems, motivated by engineering applications, whose constraints and/or cost functions are computationally expensive to evaluate. What is more, derivative information is usually unavailable, or available only at considerable cost. For that reason, classical optimization methods based on derivatives are not applicable. This study presents a framework, based on methods available in the literature, to overcome this important problem. First, a penalized model is constructed in which the violation of the constraints is added to the cost function. The model is optimized with the help of stochastic approximation algorithms until a point satisfying the constraints is obtained. Then, a set of sample points satisfying the constraints is obtained by taking advantage of sampling strategies based on direct-search algorithms. In this context, two search-direction estimation methods are applicable: one based on the convex hull of the sample point set, and one based on its estimated radius of curvature. The point set is used to create a barrier that imposes a large cost on points near the boundary. The aim is to converge to local optima along the most promising direction with the help of direct-search methods. As regards the evaluation of the cost function, there are two directions to follow: (a) gradient-based methods, and (b) non-gradient methods. In gradient-based methods, the gradient is approximated using so-called stochastic approximation algorithms. In the latter case, a sampling strategy based on direct-search algorithms is used. The study concludes by applying all these ideas to the solution of complicated test problems in which the cost and constraint functions are costly to evaluate.
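The first stage described in this abstract, adding constraint violations to the cost as a penalty and driving the penalized model down with a stochastic approximation algorithm, can be sketched as follows. This is a minimal illustration only: the SPSA-style two-evaluation gradient estimate, the quadratic penalty, and all parameter values are assumptions for the sketch, not the thesis's exact scheme.

```python
import numpy as np

def spsa_minimize(cost, constraints, x0, mu=10.0, a=0.2, c=0.1, iters=300, seed=1):
    """Minimize cost(x) penalized by violations of g_i(x) <= 0, using an
    SPSA-style scheme that needs only two function evaluations per iteration
    (illustrative parameter choices)."""
    rng = np.random.default_rng(seed)

    def penalized(x):
        # quadratic penalty on violated constraints g_i(x) <= 0
        return cost(x) + mu * sum(max(0.0, g(x)) ** 2 for g in constraints)

    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / (k + 10)                 # decaying step-size sequence
        ck = c / k ** 0.25                # decaying perturbation-size sequence
        delta = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher perturbation
        diff = penalized(x + ck * delta) - penalized(x - ck * delta)
        ghat = diff / (2.0 * ck) / delta  # simultaneous-perturbation gradient estimate
        x = x - ak * ghat
    return x
```

Because the gradient estimate costs two evaluations regardless of dimension, this fits the expensive-function setting the abstract describes.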
2.
Derivative-free optimization methods based on probabilistic and deterministic properties: complexity analysis and numerical relevance
Royer, Clément, 04 November 2016
Randomization has had a major impact on the latest developments in the field of numerical optimization, partly due to the surge of machine-learning applications. In this increasingly popular context, classical nonlinear programming algorithms, which rely on deterministic principles, have indeed been outperformed by variants incorporating randomness. The cost of these variants is usually lower than that of the traditional schemes; however, the theoretical guarantees of a deterministic algorithm may not carry over straightforwardly to the randomized setting. Complexity analysis is a useful tool in the latter case, as it provides estimates of the convergence speed of a given scheme, which in turn implies some form of convergence. This technique has also gained attention from the deterministic optimization community thanks to recent findings in the nonconvex case, as it brings supplementary insight into the behavior of an algorithm.
In this thesis, we investigate the practical enhancement of derivative-free optimization algorithms through the introduction of random elements, as well as the numerical impact of their complexity results. We focus on direct-search methods, one of the main classes of derivative-free algorithms, yet our analysis applies to a wide range of derivative-free methods. We propose probabilistic variants of the classical properties required to ensure convergence of the studied methods, and highlight the practical efficiency gain these variants induce, a gain mainly explained by their very low cost in function evaluations.
First-order concerns form the basis of our analysis, which we apply to unconstrained and linearly constrained problems. The gains observed in this setting prompt us to take second-order considerations into account as well. Using complexity properties of derivative-free schemes, we develop several frameworks in which second-order information is exploited. Both a deterministic and a probabilistic analysis can be performed on these schemes; the latter is an opportunity to introduce additional probabilistic properties, together with their impact on numerical efficiency and robustness.
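The appeal of the probabilistic variants discussed in this abstract is that polling a couple of random directions can be much cheaper in function evaluations than polling a full positive spanning set. A minimal direct-search sketch of that idea follows; the opposite-direction poll pair, the sufficient-decrease test, and the step-size update factors are assumed choices for illustration, not the thesis's exact algorithm.

```python
import numpy as np

def random_direct_search(f, x0, budget=2000, alpha=1.0, seed=0):
    """Direct search that polls one random unit direction and its opposite
    per iteration, instead of a full positive spanning set of n + 1 or 2n
    directions (a sketch of the probabilistic-descent idea)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    while evals < budget and alpha > 1e-9:
        v = rng.standard_normal(x.size)
        v /= np.linalg.norm(v)                   # random unit poll direction
        improved = False
        for d in (v, -v):                        # poll two opposite directions
            trial = x + alpha * d
            ftrial = f(trial)
            evals += 1
            if ftrial < fx - 1e-4 * alpha ** 2:  # sufficient-decrease test
                x, fx = trial, ftrial
                improved = True
                break
        alpha = 2.0 * alpha if improved else 0.5 * alpha  # expand / contract step
    return x, fx
```

Each iteration costs at most two evaluations, independent of the dimension, which is the efficiency gain the abstract attributes to the probabilistic properties.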
3.
Some Population Set-Based Methods for Unconstrained Global Optimization
Kaelo, Professor, 16 November 2006
Student Number: 0214677F
PhD thesis
School of Computational and Applied Mathematics
Faculty of Science
Many real-life problems are formulated as global optimization problems with continuous variables.
These problems are in most cases nonsmooth, nonconvex and often simulation-based, making gradient-based methods impossible to use for solving them. Therefore, efficient, reliable and derivative-free global optimization methods for solving such problems are needed.
In this thesis, we focus on improving the efficiency and reliability of some global optimization methods. In particular, we concentrate on improving some population set-based methods for unconstrained global optimization, mainly through hybridization. Hybridization has been widely recognized as one of the most attractive areas of unconstrained global optimization. Experiments have shown that, through hybridization, new methods can be formed that inherit the strengths of the original elements but not their weaknesses. We suggest a number of new hybridized population set-based methods based on differential evolution (DE), controlled random search (CRS2) and the real-coded genetic algorithm (GA).
We propose five new versions of DE. In the first version, we introduce a localization, called random localization, in the mutation phase of DE. In the second version, we propose a localization in the acceptance phase of DE. In the third version, we form a DE hybrid algorithm by probabilistically combining the point generation scheme of CRS2 with that of DE in the DE algorithm. The fourth and fifth versions are also DE hybrids; these versions hybridize the mutation of DE with the point generation rule of the electromagnetism-like (EM) algorithm. We also propose five new versions of CRS2. The first version modifies the point generation scheme of CRS2 by introducing a local mutation technique. In the second and third modifications, we probabilistically combine the point generation scheme of CRS2 with the linear interpolation scheme of a trust-region based method. The fourth version is a CRS hybrid that probabilistically combines the quadratic interpolation scheme with the linear interpolation scheme in CRS2. In the fifth version, we form a CRS2 hybrid algorithm by probabilistically combining the point generation scheme of CRS2 with that of DE in the CRS2 algorithm. Finally, we propose five new versions of the real-coded genetic algorithm (GA) with arithmetic crossover. In the first version of GA, we introduce a local technique. In the second version, we propose an integrated crossover rule that generates two children at a time using two different crossover rules. We introduce a local technique into the second version to obtain the third version. The fourth and fifth versions are based on the probabilistic adaptation of crossover rules.
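For orientation, the DE variants described above all start from the standard DE/rand/1/bin scheme, which can be sketched as follows. The population size, scaling factor F and crossover rate CR below are conventional illustrative values, and the thesis's modifications to the mutation and acceptance phases are not reproduced here.

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=200, seed=0):
    """Plain DE/rand/1/bin: mutate with a scaled difference of two random
    members, cross over coordinate-wise, accept greedily one-to-one."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)   # random initial population
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)    # mutation phase
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True               # keep >= 1 mutant coordinate
            trial = np.where(mask, mutant, pop[i])       # binomial crossover
            ftrial = f(trial)
            if ftrial <= fit[i]:                         # greedy acceptance phase
                pop[i], fit[i] = trial, ftrial
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

The "mutation phase" and "acceptance phase" lines mark the two places where, per the abstract, the first two DE variants insert their localization techniques.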
The efficiency and reliability of the new methods are evaluated through numerical experiments using a large test suite of both simple and difficult problems from the literature. The results indicate that the new hybrids are much better than their original counterparts in both reliability and efficiency. Therefore, the new hybrids proposed in this study offer an alternative to many currently available stochastic algorithms for solving global optimization problems in which gradient information is not readily available.