1 |
Geometry of convex sets arising from hyperbolic polynomials. Myklebust, Tor Gunnar Josefsson Jay, 29 August 2008
This thesis focuses on convex sets and convex cones defined using hyperbolic polynomials.
We first review some of the theory of convex sets in $\mathbb{R}^d$ in general. We then review some classical algebraic theorems concerning polynomials in a single variable, as well as present a few more modern results about them. We then discuss the theory of hyperbolic polynomials in several variables and their associated hyperbolicity cones. We survey various ways to build and decompose hyperbolic cones and we prove that every nontrivial hyperbolic cone is the intersection of its derivative cones. We conclude with a brief discussion of the set of extreme rays of a hyperbolic cone.
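For orientation, the central objects can be stated compactly; the following is the standard definition (due to Gårding), not a quotation from the thesis. A homogeneous polynomial $p$ is hyperbolic with respect to a direction $e \in \mathbb{R}^d$ if $p(e) \neq 0$ and, for every $x \in \mathbb{R}^d$, the univariate polynomial $t \mapsto p(x - te)$ has only real roots; the associated closed hyperbolicity cone is
$$\Lambda_+(p, e) = \{\, x \in \mathbb{R}^d : \text{all roots of } t \mapsto p(x - te) \text{ are nonnegative} \,\}.$$
For example, $p(x) = x_1 x_2 \cdots x_d$ is hyperbolic with respect to $e = (1, \dots, 1)$, with $\Lambda_+$ the nonnegative orthant, and $p(X) = \det X$ on symmetric matrices is hyperbolic with respect to $e = I$, with $\Lambda_+$ the cone of positive semidefinite matrices. The directional derivative $D_e p(x) = \langle \nabla p(x), e \rangle$ is again hyperbolic with respect to $e$ and satisfies $\Lambda_+(p, e) \subseteq \Lambda_+(D_e p, e)$; these are the derivative cones referred to above.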
|
2 |
Population-based heuristic algorithms for continuous and mixed discrete-continuous optimization problems. Liao, Tianjun, 28 June 2013
Continuous optimization problems are optimization problems where all variables have a domain that typically is a subset of the real numbers; mixed discrete-continuous optimization problems additionally have other types of variables, so that some variables are continuous and others are on an ordinal or categorical scale. Continuous and mixed discrete-continuous problems have a wide range of applications in disciplines such as computer science, mechanical or electrical engineering, economics and bioinformatics. These problems are also often hard to solve due to inherent difficulties such as a large number of variables, many local optima, or other complicating factors. Therefore, in this thesis our focus is on the design, engineering and configuration of high-performing heuristic optimization algorithms.

We tackle continuous and mixed discrete-continuous optimization problems with two classes of population-based heuristic algorithms: ant colony optimization (ACO) algorithms and evolution strategies. In a nutshell, the main contributions of this thesis are that (i) we advance the design and engineering of ACO algorithms to make them competitive or superior to recent state-of-the-art algorithms for continuous and mixed discrete-continuous optimization problems, (ii) we improve upon a specific state-of-the-art evolution strategy, the covariance matrix adaptation evolution strategy (CMA-ES), and (iii) we extend CMA-ES to tackle mixed discrete-continuous optimization problems.

In more detail, we propose a unified ant colony optimization framework for continuous optimization (UACOR). This framework synthesizes algorithmic components of two ACO algorithms from the literature and of an incremental ACO algorithm with local search for continuous optimization that we proposed during my doctoral research. The design of UACOR allows the use of automatic algorithm configuration techniques to automatically derive new, high-performing ACO algorithms for continuous optimization. We also propose iCMAES-ILS, a hybrid algorithm that loosely couples IPOP-CMA-ES, a CMA-ES variant that uses a restart scheme with an increasing population size, and a new iterated local search (ILS) algorithm for continuous optimization. The hybrid algorithm consists of an initial competition phase, in which IPOP-CMA-ES and the ILS algorithm compete for further deployment during a second phase. A cooperative aspect is implemented in the form of limited information exchange from IPOP-CMA-ES to the ILS algorithm during the initial phase. Experimental studies on recent benchmark function suites show that UACOR and iCMAES-ILS are competitive or superior to other state-of-the-art algorithms.

To tackle mixed discrete-continuous optimization problems, we extend ACOMV and propose CESMV, an ant colony optimization algorithm and a covariance matrix adaptation evolution strategy, respectively. In ACOMV and CESMV, the decision variables of an optimization problem can be declared as continuous, ordinal, or categorical, which allows the algorithms to treat them adequately. ACOMV and CESMV include three solution generation mechanisms: a continuous optimization mechanism, a continuous relaxation mechanism for ordinal variables, and a categorical optimization mechanism for categorical variables. Together, these mechanisms allow ACOMV and CESMV to tackle mixed-variable optimization problems. We also propose a set of artificial mixed-variable benchmark functions, which can simulate discrete variables as ordered or categorical. We use them to automatically tune ACOMV's and CESMV's parameters and to benchmark their performance. Finally, we test ACOMV and CESMV on various real-world continuous and mixed-variable engineering optimization problems. Comparisons with results from the literature demonstrate the effectiveness and robustness of ACOMV and CESMV on mixed-variable optimization problems.

Apart from these main contributions, during my doctoral research I made a number of additional contributions concerning (i) a note on bound-constraint handling for the CEC'05 benchmark set, (ii) computational results for an automatically tuned IPOP-CMA-ES on the CEC'05 benchmark set, and (iii) a study of artificial bee colonies for continuous optimization. These additional contributions can be found in the appendix of this thesis. / Doctorat en Sciences de l'ingénieur
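The three solution generation mechanisms can be illustrated with a small sketch. This is not code from the thesis; the archive representation, helper names, and the rounding-based relaxation are illustrative assumptions about how such mixed-variable mechanisms are typically realized:

```python
import random

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def dispersion(archive, i):
    """Standard deviation of coordinate i across the archive (floored)."""
    vals = [sol[i] for sol in archive]
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5 or 1e-9

def sample_mixed_solution(archive, var_specs, xi=0.85):
    """Generate one candidate solution from an archive of good solutions.

    var_specs[i] is one of:
      ("continuous", lo, hi), ("ordinal", levels), ("categorical", categories).
    Ordinal values are stored as indices into their level list.
    """
    guide = random.choice(archive)  # guiding solution (rank-biased in real ACO)
    candidate = []
    for i, spec in enumerate(var_specs):
        kind = spec[0]
        if kind == "continuous":
            # Continuous mechanism: Gaussian move around the guide's value.
            sigma = xi * dispersion(archive, i)
            candidate.append(clip(random.gauss(guide[i], sigma), spec[1], spec[2]))
        elif kind == "ordinal":
            # Continuous relaxation: sample on the index scale, round back.
            n_levels = len(spec[1])
            sigma = xi * dispersion(archive, i)
            candidate.append(round(clip(random.gauss(guide[i], sigma), 0, n_levels - 1)))
        else:
            # Categorical mechanism: favor categories frequent in the archive.
            cats = spec[1]
            weights = [1 + sum(sol[i] == c for sol in archive) for c in cats]
            candidate.append(random.choices(cats, weights=weights)[0])
    return candidate
```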
|
3 |
A Scaled Gradient Descent Method for Unconstrained Optimization Problems With A Priori Estimation of the Minimum Value. D'Alves, Curtis, January 2017
A scaled gradient descent method, competitive with applications of conjugate gradient, based on an a priori estimate of the minimum value / This research proposes a novel method of improving the gradient descent method in an effort to be competitive with applications of the conjugate gradient method while reducing computation per iteration. Iterative methods for unconstrained optimization have found widespread use in digital signal processing for large inverse problems, such as the use of conjugate gradient for parallel image reconstruction in MR imaging. In these problems, very good estimates of the minimum value of the objective function can be obtained by estimating the noise variance in the signal or by using additional measurements.
The method proposed uses an estimate of the minimum to develop a scaling for gradient descent at each iteration, thus avoiding the need for a computationally expensive line search. A sufficient condition for convergence is provided with proof, as well as an analysis of convergence rates for problems of varying conditioning. The method is compared against the gradient descent and conjugate gradient methods.
A method with a computationally inexpensive scaling factor is achieved that converges linearly for well-conditioned problems. The method is tested against gradient descent on difficult non-linear problems, but proves unsuccessful without augmentation by a line search. However, with line-search augmentation the method still outperforms gradient descent in iteration count. The method is also benchmarked against conjugate gradient for linear problems, where it achieves similar convergence for well-conditioned problems even without a line search. / Thesis / Master of Science (MSc)
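The scaling idea described here is closely related to the classical Polyak step size, which turns an estimate of the minimum value into a step length and so avoids a line search. A minimal sketch of that general idea (an illustration of the technique, not the thesis's exact method or proof conditions):

```python
import numpy as np

def scaled_gradient_descent(f, grad, x0, f_est, tol=1e-10, max_iter=10_000):
    """Gradient descent scaled by an a priori estimate f_est of the minimum
    value (a Polyak-style step; illustrative sketch only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gg = float(g @ g)
        gap = f(x) - f_est          # estimated distance to the optimal value
        if gap <= tol or gg <= tol:
            break
        x = x - (gap / gg) * g      # step length from the estimate: no line search
    return x

# Example: a quadratic whose minimum value is known to be 0.
A = np.diag([1.0, 2.0, 4.0])
f = lambda x: 0.5 * float(x @ A @ x)
grad = lambda x: A @ x
x_min = scaled_gradient_descent(f, grad, [3.0, -2.0, 1.0], f_est=0.0)
print(x_min)  # close to the origin
```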
|
4 |
Ant Colony Optimization for Continuous and Mixed-Variable Domains. Socha, Krzysztof, 09 May 2008
In this work, we present a way to extend Ant Colony Optimization (ACO) so that it can be applied to both continuous and mixed-variable optimization problems. We demonstrate, first, how ACO may be extended to continuous domains. We describe the proposed algorithm, discuss the different design decisions made, and position it among other metaheuristics.
Following this, we present the results of numerous simulations and tests. We compare the results obtained by the proposed algorithm on typical benchmark problems with those obtained by other methods used for tackling continuous optimization problems in the literature. Finally, we investigate how our algorithm performs on a real-world problem from the medical field: we use it to train a neural network for pattern classification in disease recognition.
Following an extensive analysis of the performance of ACO extended to continuous domains, we present how it may be further adapted to handle both continuous and discrete variables simultaneously. We thus introduce the first native mixed-variable version of an ACO algorithm. Then, we analyze and compare the performance of both continuous and mixed-variable
ACO algorithms on different benchmark problems from the literature. Through the research performed, we gain some insight into the relationship between the formulation of mixed-variable problems and the best methods to tackle them. Furthermore, we demonstrate that the performance of ACO on various real-world mixed-variable optimization problems from the mechanical engineering field is comparable to the state of the art.
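The continuous extension of ACO introduced in this line of work is usually presented as sampling new solutions from Gaussian kernels built on an archive of good solutions. The sketch below condenses that sampling step; the parameter names q and xi follow the common presentation, and the weighting details are a simplified reading rather than the thesis's precise specification:

```python
import math
import random

def acor_sample(archive, q=0.1, xi=0.85):
    """Sample one new solution from an archive sorted best-first.

    Assumes len(archive) >= 2; q controls how strongly better-ranked
    solutions are preferred, xi scales the search spread.
    """
    k = len(archive)
    # Rank-based weights: a Gaussian of the rank, peaked at the best solution.
    w = [math.exp(-j ** 2 / (2 * (q * k) ** 2)) for j in range(k)]
    j = random.choices(range(k), weights=w)[0]  # choose a guiding solution
    guide = archive[j]
    new = []
    for i in range(len(guide)):
        # Kernel width per coordinate: mean distance of the other archive
        # members to the guide, scaled by xi.
        sigma = xi * sum(abs(s[i] - guide[i]) for s in archive) / (k - 1)
        new.append(random.gauss(guide[i], sigma))
    return new

# Typical use: keep the k best solutions seen so far in `archive`,
# draw a batch with acor_sample, evaluate, and re-rank.
```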
|
5 |
Aplicações de computação paralela em otimização contínua / Applications of parallel computing in continuous optimization. Abrantes, Ricardo Luiz de Andrade, 22 February 2008
In this work we study some concepts of parallel programming, some ways of applying parallel computing in continuous optimization methods, and two methods that rely on optimization. The first method we present is called PUMA (Pointwise Unconstrained Minimization Approach); it retrieves optical constants and thicknesses of thin films from transmittance data. The retrieval problem is modeled as an inverse problem and solved with the aid of an optimization method. Through the parallelization of PUMA we managed to retrieve optical constants and thicknesses of thin films in structures with one and two superposed films. We describe the results obtained and discuss the performance of the parallel version and the quality of the retrievals. The second method studied, called PACKMOL, is used to build initial configurations of molecules for molecular dynamics simulations. The problem of creating an initial configuration of molecules is modeled as a packing problem and solved with the aid of an optimization method. We developed a parallel version of PACKMOL and show the performance gains obtained with the parallelization.
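Both applications parallelize many independent optimization subproblems. A generic sketch of that pattern, using Python's standard multiprocessing pool; the objective function and starting points below are placeholders, not the thesis's actual subproblems:

```python
import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize

def solve_one(start):
    """One independent unconstrained minimization (placeholder objective)."""
    rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
    res = minimize(rosen, start, method="BFGS")
    return res.x, res.fun

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    starts = [rng.uniform(-2, 2, size=2) for _ in range(32)]
    with Pool() as pool:                       # one worker per CPU core
        results = pool.map(solve_one, starts)  # independent solves in parallel
    best_x, best_f = min(results, key=lambda r: r[1])
    print(best_x, best_f)
```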
|
6 |
Modélisation dynamique des mécanismes de signalisation cellulaire induits par l'hormone folliculo-stimulante et l'angiotensine / Modelling cellular networks induced by FSH (follicle-stimulating hormone) and angiotensin II. Heitzler, Domitille, 07 January 2011
Cell signaling induced by seven-transmembrane receptors (7TMRs) controls the main human physiological functions. 7TMRs are targeted by drugs and initiate large and complex interaction networks responsible for physiological effects. In this thesis, we dynamically modeled the signaling network of the follicle-stimulating hormone (FSH) receptor, which has a critical role in reproduction, and that of the angiotensin receptor, a model 7TMR that regulates blood pressure, with the objective of understanding how these networks function and of predicting experimentally unreachable data. Our modeling is based on ordinary differential equations, with one variable per species and one parameter per kinetic rate constant. The missing parameters were determined by parameter optimization. This led us to develop an environment for comparing several optimization algorithms and to create a new, more efficient hybrid method well adapted to the parametrization of signaling networks.
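The modeling recipe described here (one ODE variable per species, one parameter per rate constant, unknown rates fitted by optimization) can be sketched on a toy two-species cascade; the reaction scheme and rate names below are invented for illustration, not the thesis's FSH or angiotensin models:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k1, k2):
    """Toy cascade: receptor species R activates effector E; one ODE per
    species, one parameter per kinetic rate constant."""
    r, e = y
    return [-k1 * r,           # R is consumed at rate k1
            k1 * r - k2 * e]   # E is produced from R, degraded at rate k2

def simulate(params, t_obs, y0=(1.0, 0.0)):
    sol = solve_ivp(rhs, (0, t_obs[-1]), y0, t_eval=t_obs, args=tuple(params))
    return sol.y[1]            # observable: the effector's time course

t_obs = np.linspace(0, 10, 20)
data = simulate([0.8, 0.3], t_obs) + 0.01 * np.random.default_rng(1).normal(size=20)

# Fit the unknown rate constants by nonlinear least squares.
fit = least_squares(lambda p: simulate(p, t_obs) - data, x0=[0.1, 0.1],
                    bounds=([0.0, 0.0], [10.0, 10.0]))
print(fit.x)  # recovered estimates of (k1, k2)
```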
|
7 |
Stochastic Black-Box Optimization and Benchmarking in Large Dimensions / Optimisation stochastique de problèmes en boîtes noires et benchmarking en grandes dimensions. Ait Elhara, Ouassim, 28 July 2017
Because of the generally high computational costs that come with large-scale problems, all the more so on real-world problems, the use of benchmarks is a common practice in algorithm design, algorithm tuning and algorithm choice/evaluation. The question is then what forms these real-world problems take. Answering this question is generally hard due to the variety of these problems and the tediousness of describing each of them. Instead, one can investigate the difficulties commonly encountered when solving continuous optimization problems. Once the difficulties are identified, one can construct relevant benchmark functions that reproduce them and allow assessing the ability of algorithms to solve them.

In the case of large-scale benchmarking, it would be natural and convenient to build on the work already done on smaller dimensions and extend it to larger ones. When doing so, we must take into account the added constraints that come with a large-scale scenario. We need to reproduce, as much as possible, the effects and properties of any part of the benchmark that is replaced or adapted for large scales, in order for the new benchmarks to remain relevant. It is common to classify the problems, and thus the benchmarks, according to the difficulties they present and the properties they possess. In a black-box scenario such information is supposed to be unknown to the algorithm; in a benchmarking setting, however, this classification becomes important and allows one to better identify and understand the shortcomings of a method, and thus makes it easier to improve it or to switch to a more efficient one (one needs to make sure the algorithms are not exploiting this knowledge when solving the problems). Hence the importance of identifying the difficulties and properties of the problems in a benchmarking suite and, in our case, of preserving them.

Another question that arises particularly when dealing with large-scale problems is the relevance of the decision variables. In a small-dimension problem, it is common to have all variables contribute a fair amount to the fitness value of the solution or, at least, to be in a scenario where all variables need to be optimized in order to reach high-quality solutions. This is not always the case in large scales: with an increasing number of variables, some of them become redundant, or groups of variables can be replaced with smaller groups, since it becomes increasingly difficult to find a minimalistic representation of a problem. This minimalistic representation is sometimes not even desired, for example when it makes the resulting problem more complex and the trade-off with the increase in the number of variables is not favorable, or when larger numbers of variables and different representations of the same features within a problem allow a better exploration. This encourages the design of both algorithms and benchmarks for this class of problems, especially algorithms that can take advantage of the low effective dimensionality of the problems or, in a complete black-box scenario, that cost little to test for it and to optimize assuming a small effective dimension.

In this thesis, we address three questions that generally arise in stochastic continuous black-box optimization and benchmarking in high dimensions:
1. How to design a cheap and yet efficient step-size adaptation mechanism for evolution strategies?
2. How to construct and generalize low effective dimension problems?
3. How to extend a low/medium dimension benchmark to large dimensions while remaining computationally reasonable, non-trivial and preserving the properties of the original problem?
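Question 2 concerns functions whose fitness depends only on a small subspace of the search space. A common construction (presented here as a generic sketch rather than the thesis's exact definition) maps the d-dimensional input through a fixed d_eff-by-d matrix and applies a base function in the small space:

```python
import numpy as np

def make_low_effective_dim(base_fn, d, d_eff, seed=0):
    """Wrap a d_eff-dimensional base function into a d-dimensional one whose
    value depends only on a fixed d_eff-dimensional subspace of the input."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d_eff, d)) / np.sqrt(d)  # fixed projection
    return lambda x: base_fn(A @ np.asarray(x, dtype=float))

sphere = lambda z: float(z @ z)
f = make_low_effective_dim(sphere, d=1000, d_eff=5)

# Moving x inside the 995-dimensional null space of A leaves f unchanged,
# so only 5 directions are "relevant" despite the 1000-dimensional domain.
print(f(np.zeros(1000)))  # 0.0
```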
|
8 |
Matrix Sketching in Optimization. Dexter, Gregory Paul, 19 April 2024
<p dir="ltr">Continuous optimization is a fundamental topic both in theoretical computer science and applications of machine learning. Meanwhile, an important idea in the development modern algorithms it the use of randomness to achieve empirical speedup and improved theoretical runtimes. Stochastic gradient descent (SGD) and matrix-multiplication time linear program solvers [1] are two important examples of such achievements. Matrix sketching and related ideas provide a theoretical framework for the behavior of random matrices and vectors that arise in these algorithms, thereby provide a natural way to better understand the behavior of such randomized algorithms. In this dissertation, we consider three general problems in this area.</p>
|