  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Contribution à l'amélioration des techniques de la programmation génétique / Some contributions to improve Genetic Programming

El Gerari, Oussama 08 December 2011 (has links)
This thesis deals with improving genetic programming (GP) techniques, in particular GP performance under rich grammars, where the terminal set contains more symbols than are needed to represent optimal solutions. We first present the attribute selection problem, review the main existing approaches, and use a frequency-based terminal-weighting scheme to refine attribute selection. In the second part, we study another algorithm inspired by the evolutionary loop, differential evolution (DE), and investigate its performance when applied to linear genetic programming. We present and compare the performance of this technique on a set of classical GP benchmarks.
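The frequency-based terminal-weighting idea mentioned in this abstract can be sketched roughly as follows. This is a hypothetical illustration, not the thesis's code: the function name, the `(fitness, terminals)` representation of an individual, and the elite fraction are all assumptions. The sketch counts how often each terminal appears in the best individuals and normalizes the counts into selection weights.

```python
from collections import Counter

def terminal_weights(population, terminals, elite_frac=0.2):
    """Hypothetical frequency-based terminal weighting: count terminal
    occurrences in the fittest individuals (lower fitness is better) and
    normalise the counts to obtain sampling weights for attribute selection."""
    n_elite = max(1, int(elite_frac * len(population)))
    elite = sorted(population, key=lambda ind: ind[0])[:n_elite]
    counts = Counter(t for _, expr in elite for t in expr if t in terminals)
    total = sum(counts.values()) or 1
    return {t: counts[t] / total for t in terminals}

# Toy population: (fitness, list of terminals used by the individual)
pop = [(0.1, ["x0", "x0", "x1"]), (0.2, ["x0"]),
       (5.0, ["x2", "x2"]), (6.0, ["x2"])]
w = terminal_weights(pop, ["x0", "x1", "x2"], elite_frac=0.5)
```

Terminals that rarely occur in good individuals receive low weight and can be sampled less often (or pruned), which is one plausible way to cope with an oversized terminal set.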
12

Differential evolution algorithms for constrained global optimization

Kajee-Bagdadi, Zaakirah 04 April 2008 (has links)
In this thesis we propose four new methods for solving constrained global optimization problems. The first proposed algorithm is a differential evolution (DE) algorithm using penalty functions for constraint handling. The second algorithm is based on the first DE algorithm but also incorporates a filter set as a diversification mechanism. The third algorithm is also based on DE but includes an additional local refinement process in the form of the pattern search (PS) technique. The last algorithm incorporates both the filter set and PS into the DE algorithm for constrained global optimization. The superiority of feasible points (SFP) and the parameter free penalty (PFP) schemes are used as constraint handling mechanisms. The new algorithms were numerically tested using two sets of test problems and the results were compared with those of the genetic algorithm (GA). The comparison shows that the new algorithms outperformed GA. When the new methods are compared to each other, the last three methods performed better than the first method, i.e., the basic DE algorithm. The new algorithms show promising results with potential for further research. Keywords: constrained global optimization, differential evolution, pattern search, filter method, penalty function, superiority of feasible points, parameter free penalty.
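The first approach described in this abstract, DE with penalty-based constraint handling, can be sketched minimally as follows. This is not the thesis's implementation: the SFP and PFP schemes are replaced here by a simple static penalty, and all parameter values and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def penalized(f, g_list, x, mu=1e6):
    """Penalty-based fitness: objective plus a large multiple of the total
    constraint violation (each g(x) <= 0 denotes a satisfied constraint)."""
    violation = sum(max(0.0, g(x)) for g in g_list)
    return f(x) + mu * violation

def de_penalty(f, g_list, bounds, pop_size=30, gens=200, F=0.7, CR=0.9):
    """DE/rand/1/bin with greedy one-to-one replacement on penalized fitness."""
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([penalized(f, g_list, x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # guarantee at least one mutant gene
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            tf = penalized(f, g_list, trial)
            if tf <= fit[i]:                  # greedy replacement
                pop[i], fit[i] = trial, tf
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example: minimize x^2 + y^2 subject to x + y >= 1 (optimum at (0.5, 0.5))
x, fx = de_penalty(lambda v: float(v @ v),
                   [lambda v: 1.0 - v[0] - v[1]],
                   bounds=[(-5, 5), (-5, 5)])
```

The penalty multiplier `mu` is the usual weak point of static penalties, which is one motivation for the parameter-free and feasibility-based schemes the thesis actually studies.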
13

Rotationally Invariant Techniques for Handling Parameter Interactions in Evolutionary Multi-Objective Optimization

Iorio, Antony William, iantony@gmail.com January 2008 (has links)
In traditional optimization approaches the interaction of parameters associated with a problem is not a significant issue, but in the domain of Evolutionary Multi-Objective Optimization (EMOO) traditional genetic algorithm approaches have difficulties in optimizing problems with parameter interactions. Parameter interactions can be introduced when the search space is rotated. Genetic algorithms are referred to as being not rotationally invariant because their behavior changes depending on the orientation of the search space. Many empirical studies in single and multi-objective evolutionary optimization are done with respect to test problems which do not have parameter interactions. Such studies provide a favorably biased indication of genetic algorithm performance. This motivates the first aspect of our work; the improvement of the testing of EMOO algorithms with respect to the aforementioned difficulties that genetic algorithms experience in the presence of parameter interactions. To this end, we examine how EMOO algorithms can be assessed when problems are subject to an arbitrarily uniform degree of parameter interactions. We establish a theoretical basis for parameter interactions and how they can be measured. Furthermore, we ask the question of what difficulties a multi-objective genetic algorithm experiences on optimization problems exhibiting parameter interactions. We also ask how these difficulties can be overcome in order to efficiently find the Pareto-optimal front on such problems. Existing multi-objective test problems in the literature typically introduce parameter interactions by altering the fitness landscape, which is undesirable. We propose a new suite of test problems that exhibit parameter interactions through a rotation of the decision space, without altering the fitness landscape. In addition, we compare the performance of a number of recombination operators on these test problems.
The second aspect of this work is concerned with developing an efficient multi-objective optimization algorithm which works well on problems with parameter interactions. We investigate how an evolutionary algorithm can be made more efficient on multi-objective problems with parameter interactions by developing four novel rotationally invariant differential evolution approaches. We also ask whether the proposed approaches are competitive in comparison with a state-of-the-art EMOO algorithm. We propose several differential evolution approaches incorporating directional information from the multi-objective search space in order to accelerate and direct the search. Experimental results indicate that dramatic improvements in efficiency can be achieved by directing the search towards points which are more dominant and more diverse. We also address the important issue of diversity loss in rotationally invariant vector-wise differential evolution. Being able to generate diverse solutions is critically important in order to avoid stagnation. In order to address this issue, one of the directed approaches that we examine incorporates a novel sampling scheme around better individuals in the search space. This variant is able to perform exceptionally well on the test problems with much less computational cost and scales to very high decision space dimensions even in the presence of parameter interactions.
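The decision-space rotation used above to introduce parameter interactions without altering the fitness landscape can be sketched as follows. Function names are illustrative assumptions, not the thesis's test-suite code; the construction itself (composing a separable objective with an orthogonal matrix) is the standard one.

```python
import numpy as np

def random_rotation(dim, seed=0):
    """Random orthogonal matrix via QR decomposition of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q * np.sign(np.diag(r))   # sign fix keeps the distribution uniform

def rotated(problem, R):
    """Evaluate a separable objective in rotated coordinates: the set of
    fitness values is unchanged, but the decision variables now interact."""
    return lambda x: problem(R @ x)

elli = lambda x: float(np.arange(1, 6) @ (x * x))   # separable ellipsoid
R = random_rotation(5)
f = rotated(elli, R)                                 # non-separable variant
```

The rotated problem keeps the same global optimum value (f(0) = 0), but its principal axes no longer align with the coordinate axes, which is exactly what defeats coordinate-wise recombination operators while leaving rotationally invariant ones unaffected.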
14

Evolutionary Optimization Algorithms for Nonlinear Systems

Raj, Ashish 01 May 2013 (has links)
Many real-world problems in science and engineering can be treated as optimization problems with multiple objectives or criteria. The demand for fast and robust stochastic algorithms to cater to these optimization needs is very high. When the cost function for the problem is nonlinear and non-differentiable, direct search approaches are the methods of choice. Many such approaches use the greedy criterion, which accepts a new parameter vector only if it reduces the value of the cost function. This can result in fast convergence, but also in misconvergence, where vectors become trapped in local minima. Inherently parallel search techniques have more exploratory power. These techniques discourage premature convergence; consequently, some candidate solution vectors do not converge to the global minimum at any point in time but instead constantly explore the whole search space for other possible solutions. In this thesis, we concentrate on benchmarking three popular algorithms: the Real-valued Genetic Algorithm (RGA), Particle Swarm Optimization (PSO), and Differential Evolution (DE). The DE algorithm is found to outperform the other algorithms in convergence speed and in attaining low cost-function values. The DE algorithm is then selected and used to build a model for forecasting auroral oval boundaries during a solar storm event, which is compared against an established model by Feldstein and Starkov. As an extended study, the ability of DE is further put to the test on another nonlinear system, by using it to study and design phase-locked loop circuits. In particular, the algorithm is used to obtain circuit parameters when frequency steps are applied at the input at particular instants.
15

Opposition-Based Differential Evolution

Rahnamayan, Shahryar 25 April 2007 (has links)
Evolutionary algorithms (EAs) are well-established techniques for approaching problems that are difficult for classical optimization methods to solve. Problems with mixed variable types, many local optima, or non-differentiable or non-analytical functions are some examples that highlight the capabilities of evolutionary algorithms. Among the various kinds of evolutionary algorithms, differential evolution (DE) is well known for its effectiveness and robustness, and many comparative studies confirm that DE outperforms many other optimizers. Finding more accurate solutions in a shorter time for complex black-box problems is still the main goal of all evolutionary algorithms. The opposition concept, on the other hand, has a long history in philosophy, set theory, politics, sociology, and physics, but there had not previously been any opposition-based contribution to optimization. In this thesis, firstly, opposition-based optimization (OBO) is constituted. Secondly, its advantages are formally supported by mathematical proofs. Thirdly, opposition-based acceleration schemes, including opposition-based population initialization and generation jumping, are proposed. Fourthly, DE is selected as a parent algorithm to verify the acceleration effects of the proposed schemes. Finally, a comprehensive set of well-known complex benchmark functions is employed to experimentally compare and analyze the algorithms. Results confirm that opposition-based DE (ODE) performs better than its parent (DE) in terms of both convergence speed and solution quality. Although ODE has been compared with six other optimizers and outperforms them overall, the main claim of this thesis is not defeating DE, its numerous versions, or other optimizers, but introducing a new notion into nonlinear continuous optimization via innovative metaheuristics: the notion of opposition.
Furthermore, the experimental and mathematical results presented agree with each other and demonstrate that opposite points are more beneficial than purely random points for black-box problems; this fundamental insight can serve to accelerate other machine learning approaches as well (such as reinforcement learning and neural networks), and pure randomness could perhaps one day be replaced by a random-opposition model when there is no a priori knowledge about the solution or problem. Although all experiments conducted here use DE as the parent algorithm, the proposed schemes are defined at the population level and hence have the inherent potential to accelerate other DE extensions or even other population-based algorithms, such as genetic algorithms (GAs). Like many other newly introduced concepts, ODE and the proposed opposition-based schemes still require further study to fully unravel their benefits, weaknesses, and limitations.
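The opposition-based population initialization scheme mentioned in this abstract can be sketched as follows. The opposite of a point x in the box [a, b] is a + b − x; initialization evaluates a random population together with its opposite population and keeps the fitter half. Function names and the sphere test function are assumptions, not the thesis's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def opposite(pop, lo, hi):
    """The opposite of x in the box [lo, hi] is lo + hi - x."""
    return lo + hi - pop

def obi_init(f, lo, hi, pop_size, dim):
    """Opposition-based initialization: evaluate a uniform random population
    and its opposite, then keep the fitter pop_size of the 2*pop_size
    candidates (lower fitness is better)."""
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    both = np.vstack([pop, opposite(pop, lo, hi)])
    fit = np.array([f(x) for x in both])
    keep = np.argsort(fit)[:pop_size]
    return both[keep], fit[keep]

sphere = lambda x: float(x @ x)
lo, hi = np.full(10, -5.0), np.full(10, 5.0)
pop, fit = obi_init(sphere, lo, hi, pop_size=20, dim=10)
```

Generation jumping applies the same idea during the run: occasionally replace the population with the fitter union of the current individuals and their opposites, computed against the population's current bounding box.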
17

Differential evolution for constrained optimization problems / Evolução diferencial para problemas de otimização restrita

Eduardo Krempser da Silva 04 March 2009 (has links)
Optimization is a broad area of knowledge concerned with the need for better use of resources and activities, making it indispensable in the solution of many problems that arise from the study and formulation of real-world problems. Furthermore, the constraints that must be respected in each situation introduce an additional complication into optimization methodologies. Differential evolution, which in its original formulation applies only to unconstrained optimization problems in continuous spaces, also provides good results when applied to constrained optimization with discrete and continuous variables. This work presents the improvements necessary for differential evolution to be properly applied to this class of problems, and proposes a new combination of techniques for this application, as well as a mechanism for dynamically selecting the appropriate variant of the technique. The first proposal is a combination of differential evolution with an adaptive penalty technique (APM), and the second proposal concerns the dynamic selection of variants during the search process. Several computational experiments are carried out, confirming the competitiveness of the proposed algorithms.
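The dynamic selection of DE variants during the search can be sketched with a simple success-proportional rule. This is a hypothetical illustration, not the mechanism from the thesis: the class, the variant names, and the toy reward signal are all assumptions.

```python
import random

random.seed(5)

class VariantSelector:
    """Hypothetical dynamic variant selection: pick a DE mutation variant
    with probability proportional to its recent success count."""
    def __init__(self, variants):
        # Start with count 1 per variant (Laplace smoothing -> uniform start).
        self.success = {v: 1 for v in variants}

    def pick(self):
        names = list(self.success)
        weights = [self.success[v] for v in names]
        return random.choices(names, weights=weights)[0]

    def reward(self, variant, improved):
        if improved:
            self.success[variant] += 1

sel = VariantSelector(["rand/1/bin", "best/1/bin", "current-to-best/1"])
for _ in range(100):
    v = sel.pick()
    # Toy feedback: pretend only one variant ever improves the trial vector.
    sel.reward(v, improved=(v == "best/1/bin"))
```

In a real DE loop the reward would be whether the trial vector generated by the chosen variant replaced its parent, so the selector gradually concentrates effort on whichever variant suits the current search phase.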
18

Evolução diferencial para problemas de otimização com restrições lineares / Differential evolution for optimization problems with linear constraints

Araujo, Rodrigo Leppaus de 05 November 2016 (has links)
Metaheuristics have been used to solve optimization problems. In particular, we can highlight differential evolution (DE), which has been successfully applied in situations where the search space is continuous. Despite the advantages of these techniques, they require adjustments in order to deal with constraints, which commonly restrict the search space in real optimization problems. In this work, a modification of DE is proposed in order to handle the linear equality constraints of the problem. The proposed method, denoted here by DELEqC, generates an initial population of candidate solutions that is feasible with respect to the linear equality constraints, and generates new individuals without the standard crossover operator. The idea is to generate new solutions that are also feasible with respect to this kind of constraint. The proposed procedure for generating individuals and maintaining the feasibility of the population is straightforward when linear equality constraints are considered, but requires the use of slack variables when linear inequalities are present. If the optimization problem involves nonlinear constraints, they are handled here using an adaptive penalty method (APM) or by means of a selection scheme (DSS). The proposed procedure is applied to problems available in the literature and the results obtained are compared to those presented by other constraint-handling techniques. The analysis of results indicates that the proposal found solutions competitive with other techniques specific to the treatment of linear equality constraints, and better than those achieved by strategies commonly adopted in metaheuristics.
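The feasibility-preserving construction for linear equality constraints Ax = b can be sketched as follows. This is a minimal sketch of the idea behind DELEqC, not the thesis's implementation; the null-space parametrization below is one standard way to realize it, and the function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def feasible_population(A, b, pop_size, scale=1.0):
    """Sample points satisfying A @ x = b: a particular solution plus a
    random combination of null-space basis vectors of A."""
    x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # particular solution
    _, s, vt = np.linalg.svd(A)                 # rows of vt past rank(A)
    rank = int(np.sum(s > 1e-10))               # span the null space of A
    N = vt[rank:].T                             # shape: dim x (dim - rank)
    coeffs = rng.standard_normal((pop_size, N.shape[1])) * scale
    return x0 + coeffs @ N.T

# Example: x1 + x2 + x3 = 3
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])
pop = feasible_population(A, b, pop_size=25)
```

Because every individual satisfies Ax = b, a DE mutant v = x_r1 + F(x_r2 − x_r3) satisfies Av = b + F(b − b) = b, so mutation preserves feasibility without any repair step; standard coordinate-wise crossover would break this, which is consistent with the abstract's decision to drop it.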
19

Adaptive multi-population differential evolution for dynamic environments

Du Plessis, M.C. (Mathys Cornelius) 26 September 2012 (has links)
Dynamic optimisation problems are problems where the search space does not remain constant over time. Evolutionary algorithms aimed at static optimisation problems often fail to effectively optimise dynamic problems. The main reason for this is that the algorithms converge to a single optimum in the search space and then lack the diversity necessary to locate new optima once the environment changes. Many approaches to adapting traditional evolutionary algorithms to dynamic environments are available in the literature, but differential evolution (DE) has been investigated as a base algorithm by only a few researchers. This thesis reports on adaptations of existing DE-based optimisation algorithms for dynamic environments. A novel approach, which evolves DE sub-populations based on performance in order to discover optima in a dynamic environment earlier, is proposed. It is shown that this approach reduces the average error on a wide range of benchmark instances. A second approach, which is shown to improve the location of individual optima in the search space, is combined with the first approach to form a new DE-based algorithm for dynamic optimisation problems. The algorithm is further adapted to dynamically spawn and remove sub-populations, which is shown to be an effective strategy on benchmark problems where the number of optima is unknown or fluctuates over time. Finally, approaches to self-adapting DE control parameters are incorporated into the newly created algorithms. Experimental evidence is presented to show that, apart from reducing the number of parameters to fine-tune, employing self-adaptive control parameters also yields lower error values. / Thesis (PhD)--University of Pretoria, 2012. Computer Science.
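A much-simplified sketch of performance-based effort allocation between DE sub-populations follows. The actual algorithms in the thesis are considerably more involved (change detection, spawning and removal of sub-populations, self-adaptive parameters); everything below, including the extra-generation reward rule, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def de_generation(pop, fit, f, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation with greedy one-to-one replacement."""
    n, dim = pop.shape
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, pop[r1] + F * (pop[r2] - pop[r3]), pop[i])
        tf = f(trial)
        if tf <= fit[i]:
            pop[i], fit[i] = trial, tf
    return pop, fit

def competitive_step(subpops, fits, f):
    """Evolve every sub-population once, then grant one extra generation to
    the sub-population whose best fitness improved the most."""
    gains = []
    for k in range(len(subpops)):
        before = fits[k].min()
        subpops[k], fits[k] = de_generation(subpops[k], fits[k], f)
        gains.append(before - fits[k].min())
    winner = int(np.argmax(gains))
    subpops[winner], fits[winner] = de_generation(subpops[winner], fits[winner], f)
    return subpops, fits

sphere = lambda x: float(x @ x)
subpops = [-5 + 10 * rng.random((10, 2)) for _ in range(3)]
fits = [np.array([sphere(x) for x in p]) for p in subpops]
best_before = min(f.min() for f in fits)
for _ in range(20):
    subpops, fits = competitive_step(subpops, fits, sphere)
best_after = min(f.min() for f in fits)
```

In a dynamic setting the cached `fits` would additionally be re-evaluated whenever an environment change is detected, since stored fitness values become stale when the landscape moves.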
20

The Development of a Hybrid Optimization Algorithm for the Evaluation and Optimization of the Asynchronous Pulse Unit

Inclan, Eric 01 January 2014 (has links)
The effectiveness of an optimization algorithm can be reduced to its ability to navigate an objective function’s topology. Hybrid optimization algorithms combine various optimization algorithms using a single meta-heuristic so that the hybrid algorithm is more robust, computationally efficient, and/or accurate than the individual algorithms it is made of. This thesis proposes a novel meta-heuristic that uses search vectors to select the constituent algorithm that is appropriate for a given objective function. The hybrid is shown to perform competitively against several existing hybrid and non-hybrid optimization algorithms over a set of three hundred test cases. This thesis also proposes a general framework for evaluating the effectiveness of hybrid optimization algorithms. Finally, this thesis presents an improved Method of Characteristics Code with novel boundary conditions, which better characterizes pipelines than previous codes. This code is coupled with the hybrid optimization algorithm in order to optimize the operation of real-world piston pumps.
