  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Surrogate-Assisted Evolutionary Algorithms / Les algorithmes évolutionnaires à la base de méta-modèles scalaires

Loshchilov, Ilya 08 January 2013 (has links)
Evolutionary Algorithms (EAs) have received a lot of attention regarding their potential to solve complex optimization problems using problem-specific variation operators. A search directed by a population of candidate solutions is quite robust with respect to moderate noise and to multi-modality of the optimized function, in contrast to some classical optimization methods such as quasi-Newton methods. The main limitation of EAs, the large number of function evaluations required, prevents their use on computationally expensive problems, where a single evaluation takes much longer than one second. The present thesis focuses on one evolutionary algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which has become a standard and powerful tool for continuous black-box optimization. We present several state-of-the-art algorithms, derived from CMA-ES, for solving single- and multi-objective black-box optimization problems.

First, in order to deal with expensive optimization, we propose to use comparison-based surrogate (approximation) models of the optimized function, which do not exploit the function values of candidate solutions but only their quality-based ranking. The resulting self-adaptive surrogate-assisted CMA-ES (saACM-ES) represents a tight coupling of statistical machine learning and CMA-ES, where the surrogate model is built by taking advantage of the function topology given by the covariance matrix adapted by CMA-ES. This preserves two key invariance properties of CMA-ES: invariance with respect to (i) monotonic transformations of the objective function and (ii) orthogonal transformations of the search space. For multi-objective optimization we propose two mono-surrogate approaches: (i) a mixed variant of One-Class Support Vector Machine (SVM) for dominated points and Regression SVM for non-dominated points, and (ii) Ranking SVM for preference learning of candidate solutions in the multi-objective space. We integrate these two approaches into the multi-objective CMA-ES (MO-CMA-ES) and discuss several aspects of surrogate-model exploitation in the multi-objective setting.

Second, we introduce and discuss new algorithms for single-objective, multi-objective and multi-modal optimization, developed to understand, explore and expand the frontiers of the Evolutionary Computation domain, and of CMA-ES in particular. We introduce a linear-time Adaptive Coordinate Descent method for non-linear optimization, which couples the CMA-like adaptation of the coordinate system with an adaptive coordinate-wise descent, without losing the initial simplicity of Coordinate Descent. For multi-modal optimization we propose to adaptively select the most suitable restart regime of CMA-ES and introduce the corresponding alternative restart strategies. For multi-objective optimization we analyze case studies in which the original parent-selection procedures of MO-CMA-ES are inefficient, and we introduce reward-based parent-selection strategies that focus on the comparative success of the generated solutions.
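To make the comparison-based surrogate idea concrete, here is a small, hedged sketch of an evolution strategy that pre-screens candidates with a rank-preserving surrogate before spending true evaluations. It is not the saACM-ES algorithm described above (no SVM ranking model, no CMA covariance adaptation); the function and parameter names are illustrative, and the surrogate is a simple nearest-neighbour ranking chosen only because it is invariant under monotonic transformations of the objective.

```python
import numpy as np

def surrogate_prescreened_es(f, x0, sigma=0.3, lam=20, mu=5, n_true_evals=200, seed=0):
    """Toy (mu, lambda)-ES in which a comparison-based surrogate (here a
    rank-preserving nearest-neighbour score) pre-screens candidates so that
    only the most promising ones receive true, expensive evaluations.
    Illustrative only; not the saACM-ES algorithm from the thesis."""
    rng = np.random.default_rng(seed)
    dim = len(x0)
    mean = np.asarray(x0, dtype=float)
    archive_x, archive_f = [], []          # true evaluations seen so far

    def surrogate_rank(cands):
        # Rank candidates by the fitness of their nearest archived neighbour.
        # Any monotonic transformation of f leaves these ranks unchanged.
        if not archive_x:
            return np.arange(len(cands))
        A = np.asarray(archive_x)
        fa = np.asarray(archive_f)
        scores = [fa[np.argmin(np.linalg.norm(A - c, axis=1))] for c in cands]
        return np.argsort(scores)          # best (lowest) surrogate score first

    evals = 0
    while evals < n_true_evals:
        cands = mean + sigma * rng.standard_normal((lam, dim))
        order = surrogate_rank(cands)
        chosen = cands[order[:mu]]         # spend true evaluations only on these
        fvals = np.array([f(c) for c in chosen])
        evals += mu
        archive_x.extend(chosen)
        archive_f.extend(fvals)
        top = chosen[np.argsort(fvals)[:max(1, mu // 2)]]
        mean = top.mean(axis=0)            # simple recombination of the best
    best = int(np.argmin(archive_f))
    return np.asarray(archive_x)[best], archive_f[best]

# Example: minimize the sphere function in 5-D under a tight evaluation budget.
x_best, f_best = surrogate_prescreened_es(lambda x: float(np.sum(x**2)), x0=np.ones(5))
```

Because only ranks are used, replacing f by any monotonic transformation of f leaves the pre-screening decisions unchanged, which is the invariance property the thesis insists on.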
22

On continuous maximum flow image segmentation algorithm / Segmentation d'images par l'algorithme des flot maximum continu

Marak, Laszlo 28 March 2012 (has links)
In recent years, with the advance of computing equipment and image acquisition techniques, the sizes, dimensions and content of acquired images have increased considerably. At the same time, the performance balance between classical single-processor architectures and parallel architectures has shifted decisively in favour of the latter, yet programming practice has remained largely unchanged, leading to a glaring lack of performance even on modern hardware. In this thesis we consider in depth one particular algorithm, the continuous maximum flow computation. We explain in detail why this algorithm is useful and important, and we propose efficient and portable implementations on various architectures, from single-processor machines to SMP and NUMA architectures, as well as on massively parallel GPGPU architectures. We also examine applications and evaluate the algorithm, in terms of performance and segmentation quality, on large images from recent problems in materials science and nano-scale biology.
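For readers unfamiliar with the continuous maximum flow formulation, the following rough sketch shows one common sequential discretization of an Appleton–Talbot style iteration on a 2-D image. It ignores staggered-grid subtleties and convergence criteria, is not one of the thesis's SMP/NUMA/GPGPU implementations, and the parameter names (tau, the seed masks, the edge-stopping metric in the example) are assumptions for illustration.

```python
import numpy as np

def continuous_max_flow_2d(g, source_mask, sink_mask, n_iter=2000, tau=0.2):
    """Rough sketch of a continuous maximum flow iteration in 2-D.
    g           : per-pixel metric (small values where the cut should pass)
    source_mask : boolean array, pixels clamped to potential 1 (object seeds)
    sink_mask   : boolean array, pixels clamped to potential 0 (background seeds)
    Returns the potential field P; thresholding P at 0.5 gives the segmentation."""
    P = np.zeros_like(g, dtype=float)
    P[source_mask] = 1.0
    Fx = np.zeros_like(P)                      # flow field components
    Fy = np.zeros_like(P)
    for _ in range(n_iter):
        # F <- F - tau * grad(P)   (forward differences)
        Fx[:, :-1] -= tau * (P[:, 1:] - P[:, :-1])
        Fy[:-1, :] -= tau * (P[1:, :] - P[:-1, :])
        # Project the flow onto the capacity constraint |F| <= g.
        norm = np.maximum(1.0, np.hypot(Fx, Fy) / np.maximum(g, 1e-12))
        Fx /= norm
        Fy /= norm
        # P <- P - tau * div(F)    (backward differences)
        div = np.zeros_like(P)
        div[:, :-1] += Fx[:, :-1]
        div[:, 1:]  -= Fx[:, :-1]
        div[:-1, :] += Fy[:-1, :]
        div[1:, :]  -= Fy[:-1, :]
        P -= tau * div
        # Re-impose the source/sink boundary conditions.
        P[source_mask] = 1.0
        P[sink_mask] = 0.0
    return P

# Toy example: a bright disc on a dark background, seeded inside and on the border.
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
g = 1.0 / (1.0 + 25.0 * np.hypot(*np.gradient(img)) ** 2)   # edge-stopping metric
src = np.zeros(img.shape, bool); src[30:34, 30:34] = True   # object seed
snk = np.zeros(img.shape, bool); snk[0, :] = True           # background seed
segmentation = continuous_max_flow_2d(g, src, snk) > 0.5
```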
23

Perfectionnement d'un algorithme adaptatif d'optimisation par essaim particulaire : application en génie médical et en électronique / Improvement of an adaptive particle swarm optimization algorithm: application to medical engineering and electronics

Cooren, Yann 27 November 2008 (has links)
Metaheuristics are a family of stochastic algorithms designed to solve difficult optimization problems. Used in many application domains, these methods have the advantage of being generally efficient on a large range of problems, without requiring the user to modify the basic structure of the algorithm. Among metaheuristics, Particle Swarm Optimization (PSO) is a class of algorithms proposed to solve continuous optimization problems. PSO algorithms are inspired by the social behavior of animals living in swarms, such as bird flocks or fish schools. 
The particles of a swarm communicate directly with one another throughout the search in order to build a solution to the considered problem, relying on their collective experience. Although known for their efficiency, metaheuristics have drawbacks that still put off some users, and parameter tuning is one of them. The performance of an algorithm fluctuates according to the values given to its parameters, so it is important, for each problem, to find the parameter set that yields the best performance. However, this task is tedious and time consuming, especially for novice users. To spare the user this tuning, much research has been devoted to so-called adaptive algorithms, in which the parameter values are no longer fixed but are modified according to the results collected during the search process. In this vein, Maurice Clerc proposed TRIBES, an adaptive, parameter-free, mono-objective PSO algorithm. TRIBES acts as a black box for which the user only has to define the problem to be solved and the stopping criterion. The first objective of this thesis is a global study of the behavior of TRIBES under several conditions, in order to determine the strengths and weaknesses of this adaptive algorithm. To correct some of these weaknesses, two new modules are added to TRIBES. First, a regular initialization phase is inserted to ensure, from the start of the algorithm, a good coverage of the search space by the particles. Second, a new displacement strategy, based on a hybridization with an estimation-of-distribution algorithm, is introduced to maintain diversity in the swarm throughout the process. The growing need for multiobjective methods has led designers to adapt their methods to the multiobjective case; the difficulty of this operation is that the objectives to be optimized are often conflicting. We designed MO-TRIBES, a multiobjective version of TRIBES. Finally, our algorithms are applied to the thresholding segmentation of medical images and to the sizing of analog circuit components.
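As a point of reference for the discussion above, here is a minimal global-best PSO update loop. TRIBES itself is parameter-free and adapts its swarm structure online, so this sketch should be read only as the classical baseline whose tuning burden TRIBES removes; the inertia and acceleration coefficients w, c1, c2 and the test function are illustrative.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO sketch with the classical inertia (w) and
    acceleration (c1, c2) coefficients. Not TRIBES: TRIBES adapts its swarm
    structure and displacement strategies online and exposes no parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T          # bounds: one (lo, hi) per dim
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))    # positions
    v = np.zeros_like(x)                                # velocities
    pbest_x = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = int(np.argmin(pbest_f))                         # index of the global best
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (pbest_x[g] - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest_x[improved], pbest_f[improved] = x[improved], fx[improved]
        g = int(np.argmin(pbest_f))
    return pbest_x[g], pbest_f[g]

# Example: a 10-D Rastrigin-like test function.
best_x, best_f = pso_minimize(
    lambda z: float(np.sum(z**2 - 10 * np.cos(2 * np.pi * z) + 10)),
    bounds=[(-5.12, 5.12)] * 10)
```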
24

Estudo de algoritmos para o problema de otimização de vazão de poços de petróleo

Vasconcelos, João Olavo Baião de 21 December 2011 (has links)
Petroleum engineering routinely involves a series of optimization problems in many contexts, such as defining efficient, optimized projects for the development of petroleum reserves. However, exploration and production (E&P) optimization problems are extremely difficult to solve, since they are often complex, highly nonlinear, subject to a large number of uncertainties, and computationally very expensive. Among them is the problem of determining the throughput distribution among the wells of a petroleum production platform that yields the greatest financial return for an E&P project, here named the Petroleum Well Throughput Optimization Problem (PWTOP). To address the PWTOP, several continuous optimization algorithms that can handle the linear constraints present in the problem were studied: Derivative-Free Optimization (DFO), Generating Set Search (GSS), and Differential Evolution (DE). DFO is a sequential algorithm, whereas GSS and DE are parallel algorithms. Two case studies representing synthetic petroleum fields are also presented. The results show how the studied algorithms behave on the PWTOP for the two case studies, comparing the optimized financial values obtained, the execution times, and the number of objective-function evaluations. It is concluded that, for the simpler case study, GSS gave the best result, while for the more complex case study, closer to real reservoirs, DE stood out.
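Of the three algorithms compared, Differential Evolution is the easiest to sketch. The fragment below is a generic DE/rand/1/bin loop with a static penalty for linear inequality constraints; it is not the dissertation's implementation, and the platform-capacity constraint, penalty weight and profit function in the usage example are invented for illustration.

```python
import numpy as np

def de_rand_1_bin(f, bounds, linear_A=None, linear_b=None,
                  pop_size=40, n_gen=300, F=0.8, CR=0.9, penalty=1e6, seed=0):
    """Minimal DE/rand/1/bin sketch for minimization with optional linear
    inequality constraints A x <= b handled by a static penalty.
    Illustrative only; not the DFO/GSS/DE implementations of the dissertation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)

    def penalized(x):
        val = f(x)
        if linear_A is not None:
            viol = np.maximum(np.asarray(linear_A) @ x - np.asarray(linear_b), 0.0)
            val += penalty * float(np.sum(viol))
        return val

    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([penalized(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            f_trial = penalized(trial)
            if f_trial <= fit[i]:                    # greedy replacement
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example: maximize a made-up concave "profit" of 3 well rates under a platform
# capacity sum(x) <= 10, by minimizing the negative profit.
profit = lambda x: float(np.sum(5 * np.sqrt(np.maximum(x, 0))))
x_best, _ = de_rand_1_bin(lambda x: -profit(x), bounds=[(0, 6)] * 3,
                          linear_A=[[1, 1, 1]], linear_b=[10])
```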
25

Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning Settings

Joseph, Ajin George January 2017 (has links) (PDF)
Optimization is a very important field with diverse applications in the physical, social and biological sciences and in various areas of engineering. It appears widely in machine learning, information retrieval, regression, estimation, operations research and a wide variety of computing domains. The subject is deeply studied both theoretically and experimentally, and several algorithms are available in the literature. These algorithms, which can be executed (sequentially or concurrently) on a computing machine, explore the space of input parameters to seek high-quality solutions to the optimization problem, with the search mostly guided by certain structural properties of the objective function. In certain situations, the setting might additionally demand the "absolute optimum" or solutions close to it, which makes the task even more challenging.

In this thesis, we propose an optimization algorithm which is "gradient-free", i.e., it does not employ any knowledge of the gradient or higher-order derivatives of the objective function, but rather utilizes the objective function values themselves to steer the search. The proposed algorithm is particularly effective in a black-box setting, where a closed-form expression of the objective function is unavailable and the gradient or higher-order derivatives are hard to compute or estimate. Our algorithm is inspired by the well-known cross-entropy (CE) method. The CE method is a model-based search method for continuous/discrete multi-extremal optimization problems where the objective function has minimal structure. The proposed method searches, in the statistical manifold of the parameters that identify the probability distribution/model defined over the input space, for the degenerate distribution concentrated on the global optima (assumed to be finite in number).

In the early part of the thesis, we propose a novel stochastic approximation version of the CE method for the unconstrained optimization problem, where the objective function is real-valued and deterministic. The basis of the algorithm is a stochastic process of model parameters which is probabilistically dependent on the past history, where we reuse all the previous samples obtained in the process up to the current instant based on discounted averaging. This approach can save the overall computational and storage cost. Our algorithm is incremental in nature and possesses attractive features such as stability, computational and storage efficiency and better accuracy. We further investigate, both theoretically and empirically, the asymptotic behaviour of the algorithm and find that it exhibits global optimum convergence for a particular class of objective functions.

Further, we extend the algorithm to the simulation/stochastic optimization problem. In stochastic optimization, the objective function has a stochastic character, and the underlying probability distribution is in most cases hard to comprehend and quantify. This yields a more challenging optimization problem, primarily because of the hardness of computing the objective function values for various input parameters with absolute certainty. In this case, one can only hope to obtain noise-corrupted objective function values for various input parameters. Settings of this kind arise where the objective function is evaluated using a continuously evolving dynamical system or through a simulation. 
We propose a multi-timescale stochastic approximation algorithm, in which we integrate an additional timescale to accommodate the noisy measurements and suppress the effects of the extraneous noise asymptotically. We find that if the objective function and the measurement noise are well behaved and the timescales are compatible, then our algorithm can generate high-quality solutions.

In the later part of the thesis, we propose algorithms for reinforcement learning / Markov decision processes (MDPs) using the optimization techniques developed in the earlier chapters. An MDP can be considered a generalized framework for modelling planning under uncertainty. We provide a novel algorithm for the problem of prediction in reinforcement learning, i.e., estimating the value function of a given stationary policy of a model-free MDP (with large state and action spaces) using the linear function approximation architecture. Here, the value function is defined as the long-run average of the discounted transition costs. The resource requirement of the proposed method, in terms of computational and storage cost, scales quadratically in the size of the feature set. The algorithm is an adaptation of the multi-timescale variant of the CE method proposed in the earlier part of the thesis for simulation optimization. We also provide both theoretical and empirical evidence to corroborate the credibility and effectiveness of the approach.

In the final part of the thesis, we consider a modified version of the control problem in a model-free MDP with large state and action spaces. The control problem most commonly addressed in the literature is to find an optimal policy which maximizes the value function, i.e., the long-run average of the discounted transition payoffs. Contemporary methods also presume access to a generative model/simulator of the MDP, with the hidden premise that observations of the system behaviour in the form of sample trajectories can be obtained with ease from the model. We consider a modified version, where the cost function to be optimized is a real-valued performance function (possibly non-convex) of the value function, and where the optimal policy must be sought without presuming access to the generative model. We propose a stochastic approximation algorithm for this particular control problem. The only information we presuppose to be available to the algorithm is a sample trajectory generated using an a priori chosen behaviour policy. The algorithm is data (sample-trajectory) efficient, stable and robust, as well as computationally and storage efficient. We provide a proof of convergence of our algorithm to a high-performing policy relative to the behaviour policy.
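For context, the classical batch cross-entropy method that the proposed stochastic-approximation variant builds on can be sketched in a few lines with a Gaussian model and a smoothed parameter update. The elite fraction and smoothing factor below are illustrative defaults, and none of the thesis's sample-reuse or multi-timescale machinery appears here.

```python
import numpy as np

def cross_entropy_minimize(f, dim, n_samples=100, elite_frac=0.1,
                           n_iter=100, alpha=0.7, seed=0):
    """Classical batch cross-entropy (CE) method sketch with a Gaussian model.
    The thesis develops a stochastic-approximation, sample-reusing variant of
    CE; this sketch only shows the basic model-update idea it builds on."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    std = np.ones(dim) * 5.0
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iter):
        samples = mean + std * rng.standard_normal((n_samples, dim))
        scores = np.array([f(s) for s in samples])
        elite = samples[np.argsort(scores)[:n_elite]]      # best (lowest) scores
        # Smoothed update of the Gaussian model toward the elite set.
        mean = alpha * elite.mean(axis=0) + (1 - alpha) * mean
        std = alpha * elite.std(axis=0) + (1 - alpha) * std
        if np.all(std < 1e-8):                              # model has degenerated
            break
    return mean, f(mean)

# Example: a deterministic 4-D quadratic; the model should concentrate near 2.
x_star, f_star = cross_entropy_minimize(lambda x: float(np.sum((x - 2.0) ** 2)), dim=4)
```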
26

Ant colony optimization for continuous and mixed-variable domains

Socha, Krzysztof 09 May 2008 (has links)
In this work, we present a way to extend Ant Colony Optimization (ACO) so that it can be applied to both continuous and mixed-variable optimization problems. We demonstrate, first, how ACO may be extended to continuous domains. We describe the proposed algorithm, discuss the different design decisions made, and position it among other metaheuristics.

Following this, we present the results of numerous simulations and tests. We compare the results obtained by the proposed algorithm on typical benchmark problems with those obtained by other methods used for tackling continuous optimization problems in the literature. Finally, we investigate how our algorithm performs on a real-world problem from the medical field: we use it to train a neural network for pattern classification in disease recognition.

Following an extensive analysis of the performance of ACO extended to continuous domains, we present how it may be further adapted to handle both continuous and discrete variables simultaneously. We thus introduce the first native mixed-variable version of an ACO algorithm. We then analyze and compare the performance of both the continuous and the mixed-variable ACO algorithms on different benchmark problems from the literature. Through the research performed, we gain some insight into the relationship between the formulation of mixed-variable problems and the best methods to tackle them. Furthermore, we demonstrate that the performance of ACO on various real-world mixed-variable optimization problems from the mechanical engineering field is comparable to the state of the art.
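A hedged sketch of the continuous-domain extension, in the spirit of ACO_R, is given below: a ranked solution archive replaces the discrete pheromone table, and each ant samples new coordinates from Gaussians centred on archive members. The parameter values and the test function are illustrative, not the settings studied in the thesis.

```python
import numpy as np

def aco_r(f, bounds, archive_size=10, n_ants=2, q=0.1, xi=0.85,
          n_iter=500, seed=0):
    """Sketch of ACO for continuous domains: a ranked solution archive plays
    the role of the pheromone model, and new solutions are sampled from
    per-dimension Gaussians centred on archive members. Illustrative defaults."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    k = archive_size
    # Initialize and rank the archive (best first).
    archive = rng.uniform(lo, hi, size=(k, dim))
    scores = np.array([f(s) for s in archive])
    order = np.argsort(scores)
    archive, scores = archive[order], scores[order]
    # Rank-based selection weights: better-ranked solutions weigh more.
    ranks = np.arange(1, k + 1)
    w = np.exp(-((ranks - 1) ** 2) / (2 * (q * k) ** 2)) / (q * k * np.sqrt(2 * np.pi))
    p = w / w.sum()
    for _ in range(n_iter):
        new_sols = []
        for _ in range(n_ants):
            j = rng.choice(k, p=p)                       # pick a guiding solution
            # Per-dimension spread: mean distance of the archive to solution j.
            sigma = xi * np.sum(np.abs(archive - archive[j]), axis=0) / (k - 1)
            new_sols.append(np.clip(rng.normal(archive[j], sigma), lo, hi))
        new_scores = np.array([f(s) for s in new_sols])
        # Merge and keep the best k solutions (archive update).
        archive = np.vstack([archive, new_sols])
        scores = np.concatenate([scores, new_scores])
        order = np.argsort(scores)[:k]
        archive, scores = archive[order], scores[order]
    return archive[0], scores[0]

# Example: 6-D sphere function.
best_x, best_f = aco_r(lambda x: float(np.sum(x ** 2)), bounds=[(-5, 5)] * 6)
```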
