1

Where is my inhaler? : A simulation and optimization study of the Quality Control on Symbicort Turbuhaler at AstraZeneca / Var är min inhalator? : En simulerings- och optimeringsstudie på kvalitetskontrollen av Symbicort Turbuhaler vid AstraZeneca

Haddad, Shirin, Nilsson, Marie January 2019 (has links)
Symbicort Turbuhaler is a medical device produced by the pharmaceutical company AstraZeneca for the treatment of asthma and symptoms of chronic obstructive pulmonary disease. The delivery reliability of the product depends on the performance of the whole supply chain, and as part of that chain the results from the Quality Control (QC) department are mandatory for releasing the produced batches to the market. The performance of QC is thus an important part of the supply chain, and in order to reduce the risk of supply problems and market shortage it is important to investigate whether it can be improved. The purpose of the thesis is to provide AstraZeneca with scientifically based data to identify sensitive parameters and readjust work procedures in order to improve the performance of QC. The goal of this thesis is to map out the flow of the QC Symbicort Turbuhaler operation and construct a model of it. The model is intended to be used to simulate and optimize different parameters, such as the inflow of batch samples, the utilization of the instrumentation and the staff workload. QC is modelled in a simulation software package, and the model is used to simulate and optimize different scenarios using discrete event simulation and an optimization technique based on evolution strategies. By reducing the number of analytical robots from 14 to 10 it is possible to maintain the existing average lead time; the reduction increases the utilization of the robots while the workload decreases for some of the staff. However, it is not possible to extend the durability of the system suitability test (SST) and still achieve the existing average lead time. The investigation of different parameters shows that adding a laboratory engineer at the high-performance liquid chromatography (HPLC) station has the best effect on lead time and overall equipment effectiveness, whereas removing a laboratory engineer from the Minispice robots has the worst. With the resources available today the lead times cannot be maintained in the long run if the inflow is 35 batch samples a week or more. Adding a laboratory engineer at the HPLC station and using an SST with a durability of 48 hours gives the best outcome in terms of average lead time and the number of batch samples with a lead time of less than 10 days.
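To make the modelling approach in this abstract concrete, the sketch below shows how a quality-control station can be modelled as a discrete event simulation in Python using the simpy library: batch samples arrive, queue for a limited pool of analytical robots, and lead times are collected. This is a minimal illustrative sketch only; the arrival rate, analysis time, single-station structure, and the use of simpy are assumptions, not details taken from the thesis or AstraZeneca's actual QC flow.

```python
import random
import simpy

def qc_model(n_robots=10, arrivals_per_week=30, sim_weeks=52, seed=6):
    """Hedged sketch of a discrete event simulation of a QC station: batch
    samples arrive, queue for a limited pool of analytical robots, and their
    lead times are recorded. All rates and times are illustrative placeholders."""
    random.seed(seed)
    env = simpy.Environment()
    robots = simpy.Resource(env, capacity=n_robots)
    lead_times = []

    def batch_sample(env):
        arrived = env.now
        with robots.request() as req:
            yield req                                        # wait for a free robot
            yield env.timeout(random.expovariate(1 / 30.0))  # assumed analysis time (hours)
        lead_times.append(env.now - arrived)

    def arrivals(env):
        while True:
            yield env.timeout(random.expovariate(arrivals_per_week / 168.0))
            env.process(batch_sample(env))

    env.process(arrivals(env))
    env.run(until=sim_weeks * 168)                           # simulate in hours
    return sum(lead_times) / len(lead_times)

if __name__ == "__main__":
    # compare average lead time for different robot counts
    for robots in (14, 10):
        print(robots, round(qc_model(n_robots=robots, arrivals_per_week=30), 1))
```

In a full study, this kind of model would be wrapped by an evolution-strategy optimizer that searches over parameters such as robot count and staffing, as the abstract describes.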
2

Using Niched Co-Evolution Strategies to Address Non-Uniqueness in Characterizing Sources of Contamination in a Water Distribution System

Drake, Kristen Leigh August 2011 (has links)
Threat management of water distribution systems is essential for protecting consumers. In a contamination event, different strategies may be implemented to protect public health, including flushing the system through opening hydrants or isolating the contaminant by manipulating valves. To select the most effective options for responding to a contamination threat, the location and loading profile of the source of the contaminant should be considered. These characteristics can be identified by utilizing water quality data from sensors that have been strategically placed in a water distribution system. A simulation-optimization approach is described here to solve the inverse problem of source characterization, by coupling an evolutionary computation-based search with a water distribution system model. The solution of this problem may reveal, however, that a set of non-unique sources exists, where sources with significantly different locations and loading patterns produce similar concentration profiles at sensors. The problem of non-uniqueness should be addressed to prevent the misidentification of a contaminant source and improve response planning. This paper aims to address the problem of non-uniqueness through the use of Niched Co-Evolution Strategies (NCES). NCES is an evolutionary algorithm designed to identify a specified number of alternative solutions that are maximally different in their decision vectors, which are source characteristics for the water distribution problem. NCES is applied to determine the extent of non-uniqueness in source characterization for a virtual city, Mesopolis, with a population of approximately 150,000 residents. Results indicate that NCES successfully identifies non-uniqueness in source characterization and provides alternative sources of contamination. The solutions found by NCES assist in making decisions about response actions. Once alternative sources are identified, each source can be modeled to determine where the vulnerable areas of the system are, indicating the areas where response actions should be implemented.
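As a rough illustration of the niching idea described above (finding alternative source characterizations that are maximally different in their decision vectors while still matching the sensor data), the following Python sketch runs several evolution-strategy niches whose fitness rewards distance from the other niches' best solutions. This is a generic niching sketch under stated assumptions, not the actual NCES algorithm or the Mesopolis model; the match_error function, parameter ranges, and all settings are hypothetical.

```python
import numpy as np

def niched_es(match_error, dim, n_niches=3, mu=5, lam=20, sigma=0.3,
              distance_weight=1.0, generations=200, seed=0):
    """Hedged sketch of a niched ES: each niche minimizes the sensor-match error
    while being pushed away in decision space from the other niches' best
    solutions, yielding mutually different alternative source hypotheses."""
    rng = np.random.default_rng(seed)
    niches = [rng.random((mu, dim)) for _ in range(n_niches)]  # decision vectors in [0, 1]
    best = [pop[0].copy() for pop in niches]
    for _ in range(generations):
        for k in range(n_niches):
            parents = niches[k][rng.integers(mu, size=lam)]
            offspring = np.clip(parents + sigma * rng.standard_normal((lam, dim)), 0, 1)
            others = [best[j] for j in range(n_niches) if j != k]
            def score(x):
                # reward distance to other niches' representatives (non-uniqueness)
                d = min(np.linalg.norm(x - o) for o in others) if others else 0.0
                return match_error(x) - distance_weight * d
            ranked = sorted(offspring, key=score)
            niches[k] = np.array(ranked[:mu])
            best[k] = niches[k][0]
    return best  # alternative, mutually distant source hypotheses

if __name__ == "__main__":
    # toy usage: the "sensor match error" is a quadratic around hidden source parameters
    true_src = np.array([0.2, 0.8])
    err = lambda x: float(np.sum((x - true_src) ** 2))
    for s in niched_es(err, dim=2):
        print(np.round(s, 3), round(err(s), 4))
```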
3

Modified Selection Mechanisms Designed to Help Evolution Strategies Cope with Noisy Response Surfaces

Gadiraju, Sriphani Raju 02 August 2003 (has links)
With the rise in the application of evolution strategies for simulation optimization, a better understanding of how these algorithms are affected by the stochastic output produced by simulation models is needed. At very high levels of stochastic variance in the output, evolution strategies in their standard form experience difficulty locating the optimum. The degradation of the performance of evolution strategies in the presence of very high levels of variation can be attributed to the decrease in the proportion of correctly selected solutions as parents from which offspring solutions are generated. The proportion of solutions correctly selected as parents can be increased by conducting additional replications for each solution. However, experimental evaluation suggests that a very high proportion of correctly selected solutions as parents is not required. A proportion of correctly selected solutions of around 0.75 seems sufficient for evolution strategies to perform adequately. Integrating statistical techniques into the algorithm?s selection process does help evolution strategies cope with high levels of noise. There are four categories of techniques: statistical ranking and selection techniques, multiple comparison procedures, clustering techniques, and other techniques. Experimental comparison of indifference zone selection procedure by Dudewicz and Dalal (1975), sequential procedure by Kim and Nelson (2001), Tukey?s Procedure, clustering procedure by Calsinki and Corsten (1985), and Scheffe?s procedure (1985) under similar conditions suggests that the sequential ranking and selection procedure by Kim and Nelson (2001) helps evolution strategies cope with noise using the smallest number of replications. However, all of the techniques required a rather large number of replications, which suggests that better methods are needed. Experimental results also indicate that a statistical procedure is especially required during the later generations when solutions are spaced closely together in the search space (response surface).
4

Optimum Design Of Double-layer Grid Systems: Comparison With Current Design Practice Using Real-life Industrial Applications

Aydincilar, Yilmaz 01 August 2010 (has links) (PDF)
Double-layer grid systems are three-dimensional pin-jointed structures which are generally used for covering roofs with large spans. In this study, the evolution strategies method is used to optimize space trusses. Evolution strategies are a type of evolutionary algorithm, which simulate biological evolution and natural selection to find the best solution to an optimization problem. In this method, an initial population is formed by various solutions of the design problem. This initial population then evolves through recombination, mutation and selection operators, which are adapted for the optimization of space trusses by modifying some parameters. The optimization routine continues for a certain number of generations, and the best design obtained in this process is accepted as the optimum solution. OFES, a design and optimization software developed for optimum design of steel frames, is modified in this study to handle space truss systems. Using this software, six design examples taken from real-life industrial applications, with element counts ranging from 792 to 4412, are studied. The structural systems defined in the examples are optimized for minimum weight in accordance with the design provisions imposed by the Turkish Specification, TS648. The optimization is performed by selecting member sizes and/or determining the elevation of the structure and/or setting the support conditions of the system. The results obtained are compared with those of FrameCAD, a software which is predominantly used for the design of such systems in current national design practice.
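As a hedged illustration of the evolution-strategy loop described above (population, recombination/mutation, selection over a fixed number of generations, applied to minimum-weight sizing from a discrete section set), the sketch below evolves vectors of section indices with a simple penalized-weight objective. The section table, member lengths, penalty, and (mu + lambda) scheme are assumptions for illustration, not the OFES implementation or the TS648 design checks.

```python
import numpy as np

def es_min_weight(section_areas, member_lengths, constraint_violation,
                  mu=10, lam=40, generations=300, seed=2):
    """Hedged sketch of a (mu + lambda)-ES for discrete sizing: each design is a
    vector of indices into a steel section table, and the objective is total
    weight plus a penalty for constraint violations (placeholder checks)."""
    rng = np.random.default_rng(seed)
    n_members, n_sections = len(member_lengths), len(section_areas)

    def weight(design):
        w = float(np.sum(section_areas[design] * member_lengths))
        return w * (1.0 + 10.0 * constraint_violation(design))  # penalized weight

    pop = rng.integers(n_sections, size=(mu, n_members))
    for _ in range(generations):
        parents = pop[rng.integers(mu, size=lam)]
        # mutation: occasionally shift a member one section up or down in the table
        steps = rng.integers(-1, 2, size=parents.shape) * (rng.random(parents.shape) < 0.2)
        offspring = np.clip(parents + steps, 0, n_sections - 1)
        pool = np.vstack([pop, offspring])                      # plus-selection
        pop = pool[np.argsort([weight(d) for d in pool])[:mu]]
    return pop[0], weight(pop[0])

if __name__ == "__main__":
    areas = np.array([10.0, 14.0, 19.0, 25.0, 33.0])            # hypothetical cm^2 values
    lengths = np.full(20, 3.0)                                  # 20 members, 3 m each
    viol = lambda d: max(0.0, 1.0 - float(np.mean(areas[d])) / 18.0)  # toy stress proxy
    best, w = es_min_weight(areas, lengths, viol)
    print(best, round(w, 2))
```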
5

Utilizing state-of-art NeuroES and GPGPU to optimize Mario AI

Lövgren, Hans January 2014 (has links)
Context. Reinforcement Learning (RL) is time consuming and requires a lot of computational power. There are mainly two approaches to improving RL efficiency: the theoretical mathematics and algorithmic approach, or the practical implementation approach. In this study the approaches are combined in an attempt to reduce time consumption. Objectives. We investigate whether modern hardware and software (GPGPU), combined with a state-of-the-art Evolution Strategy, CMA-Neuro-ES, can increase the efficiency of solving RL problems. Methods. Both an implementational and an experimental research method are used. The implementational research mainly involves developing and setting up an experimental framework in which to measure efficiency through benchmarking; the GPGPU/ES solution is later developed within this framework. Using the framework, experiments are conducted on a conventional sequential solution as well as on our own parallel GPGPU solution. Results. The results indicate that utilizing GPGPU and a state-of-the-art ES when attempting to solve RL problems can be more efficient in terms of time consumption than a conventional, sequential CPU approach. Conclusions. We conclude that our proposed solution requires additional work and research but already shows promise in this initial study. As the study focuses primarily on generating benchmark performance data from the experiments, it lacks data on RL efficiency and thus motivation for using our approach. However, we do conclude that the suggested GPGPU approach allows less time-consuming RL problem solving.
6

Stochastic Black-Box Optimization and Benchmarking in Large Dimensions / Optimisation stochastique de problèmes en boîtes noires et benchmarking en grandes dimensions

Ait Elhara, Ouassim 28 July 2017 (has links)
Because of the generally high computational costs that come with large-scale problems, especially real-world problems, the use of benchmarks is common practice in algorithm design, algorithm tuning and algorithm choice/evaluation. The question is then what forms these real-world problems take. Answering this question is generally hard because of the variety of these problems and the tediousness of describing each of them. Instead, one can investigate the difficulties commonly encountered when solving continuous optimization problems. Once the difficulties are identified, one can construct relevant benchmark functions that reproduce them and allow assessing the ability of algorithms to solve them. In the case of large-scale benchmarking, it would be natural and convenient to build on the work already done in smaller dimensions and extend it to larger ones. When doing so, we must take into account the added constraints that come with a large-scale scenario: we need to reproduce, as much as possible, the effects and properties of any part of the benchmark that has to be replaced or adapted for large scales, so that the new benchmarks remain relevant. It is common to classify the problems, and thus the benchmarks, according to the difficulties they present and the properties they possess. In a black-box scenario such information (difficulties, properties, ...) is assumed unknown to the algorithm; in a benchmarking setting, however, this classification becomes important, because it allows one to better identify and understand the shortcomings of a method and thus makes it easier to improve it, or alternatively to switch to a more efficient one (keeping in mind that one must check whether the algorithms exploit this knowledge when solving the problems). Hence the importance of identifying the difficulties and properties of the problems in a benchmarking suite and, in our case, preserving them. Another question that arises particularly when dealing with large-scale problems is the relevance of the decision variables. In a small-dimension problem it is common for all variables to contribute a fair amount to the fitness value of the solution or, at least, for all variables to need optimizing in order to reach high-quality solutions. This is, however, not always the case in large scales; with the increasing number of variables, some of them become redundant, or groups of variables can be replaced with smaller groups, since it becomes increasingly difficult to find a minimalistic representation of a problem. This minimalistic representation is sometimes not even desired, for example when it makes the resulting problem more complex and the trade-off with the increase in the number of variables is not favourable, or when larger numbers of variables and different representations of the same features within a problem allow better exploration. This encourages the design of both algorithms and benchmarks for this class of problems, especially if such algorithms can take advantage of the low effective dimensionality of the problems or, in a complete black-box scenario, can cheaply test for a low effective dimension and optimize assuming a small effective dimension. In this thesis, we address three questions that generally arise in stochastic continuous black-box optimization and benchmarking in high dimensions: 1. How to design a cheap and yet efficient step-size adaptation mechanism for evolution strategies? 2. How to construct and generalize low effective dimension problems? 3. How to extend a low/medium dimension benchmark to large dimensions while remaining computationally reasonable, non-trivial and preserving the properties of the original problem?
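One simple way to realize the low-effective-dimension construction discussed above is to evaluate a base benchmark function on a random projection of the full search space, so that only a few directions actually affect the value while the remaining directions are redundant. The sketch below is an assumed construction for illustration, not the exact generalization proposed in the thesis.

```python
import numpy as np

def low_effective_dim(base_f, dim, eff_dim, seed=3):
    """Hedged sketch of a low-effective-dimension test function: a base function
    is evaluated on a random projection of the dim-dimensional search space onto
    eff_dim directions, so only eff_dim linear combinations of the variables
    matter. Evaluation cost stays O(eff_dim * dim), reasonable in large dimension."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((eff_dim, dim)) / np.sqrt(dim)  # projection matrix
    return lambda x: base_f(B @ np.asarray(x))

if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z ** 2))
    f = low_effective_dim(sphere, dim=1000, eff_dim=5)
    x = np.random.default_rng(0).standard_normal(1000)
    print(round(f(x), 4))   # only a 5-dimensional subspace affects the value
```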
7

Evoluční algoritmy / Evolutionary Algorithms

Szöllösi, Tomáš January 2012 (has links)
The task of this thesis was to compare selected evolutionary algorithms with respect to their success and computational requirements. The paper discusses the basic principles and concepts of evolutionary algorithms used for optimization problems. The author programmed the selected evolutionary algorithms and subsequently tested them on various test functions with exactly specified input conditions. Finally, the algorithms were compared and the results obtained for the different settings were evaluated.
8

Optimal Wind Bracing Systems For Multi-storey Steel Buildings

Yildirim, Ilyas 01 August 2009 (has links) (PDF)
The major concern in the design of multi-storey buildings is that the structure has enough lateral stability to resist wind forces. There are different ways to limit the lateral drift. The first method is to use an unbraced frame with moment-resisting connections. The second is to use braced frames with moment-resisting connections. The third is to use pin-jointed connections instead of moment-resisting ones together with bracings. Finally, a braced frame with both moment-resisting and pin-jointed connections is a solution. There are many bracing models and the designer should choose an appropriate one. This thesis investigates optimal lateral bracing systems in steel structures. The method selects appropriate sections for beams, columns and bracings from a given steel section set and obtains a design with least weight. After obtaining the best designs in terms of weight, cost analyses of all structures are carried out so that the most economical model is found. For this purpose the evolution strategies optimization method is used, which is a member of the evolutionary algorithms family of search techniques. The thesis first introduces optimum design of steel frames and then explains the evolution strategies technique. This is followed by information about design loads and bracing systems, and by the cost analysis of the models. Finally, numerical examples are presented. Optimum designs of three different structures, comprising twelve different bracing models, are carried out. The calculations are carried out by a computer program (OPTSTEEL) which was recently developed to achieve size optimization design of skeletal structures.
9

Evolutionary Control of Autonomous Underwater Vehicles

Smart, Royce Raymond, roycesmart@hotmail.com January 2009 (has links)
The goal of Evolutionary Robotics (ER) is the development of automatic processes for the synthesis of robot control systems using evolutionary computation. The idea that it may be possible to synthesise robotic control systems using an automatic design process is appealing. However, ER is considerably more challenging and less automatic than its advocates would suggest. ER applies methods from the field of neuroevolution to evolve robot control systems. Neuroevolution is a machine learning algorithm that applies evolutionary computation to the design of Artificial Neural Networks (ANN). The aim of this thesis is to assay the practical characteristics of neuroevolution by performing bulk experiments on a set of Reinforcement Learning (RL) problems. This thesis was conducted with the view of applying neuroevolution to the design of neurocontrollers for small low-cost Autonomous Underwater Vehicles (AUV). A general approach to neuroevolution for RL problems is presented. An evolution strategy is selected to evolve ANN connection weights on the basis that it has shown competitive performance on continuous optimisation problems, is self-adaptive and can exploit dependencies between connection weights. Practical implementation issues are identified and discussed. A series of experiments is conducted on RL problems. These problems are representative of problems from the AUV domain, but manageable in terms of problem complexity and computational resources required. Results from these experiments are analysed to draw out practical characteristics of neuroevolution. Bulk experiments are conducted using the inverted pendulum problem. This popular control benchmark is inherently unstable, underactuated and non-linear: characteristics common to underwater vehicles. Two practical characteristics of neuroevolution are demonstrated: the importance of using randomly generated evaluation sets and the effect of evaluation noise on search performance. As part of these experiments, deficiencies in the benchmark are identified and modifications suggested. The problem of an underwater vehicle travelling to a goal in an obstacle-free environment is then studied. The vehicle is modelled as a Dubins car, which is a simplified model of the high-level kinematics of a torpedo-class underwater vehicle. Two further practical characteristics of neuroevolution are demonstrated: the importance of domain knowledge when formulating ANN inputs and how the fitness function defines the set of evolvable control policies. Paths generated by the evolved neurocontrollers are compared with known optimal solutions. A framework is presented to guide the practical application of neuroevolution to RL problems, covering a range of issues identified during the experiments conducted in this thesis. An assessment of neuroevolution concludes that it is far from automatic yet still has potential as a technique for solving reinforcement learning problems, although further research is required to better understand the process of evolutionary learning. The major contribution made by this thesis is a rigorous empirical study of the practical characteristics of neuroevolution as applied to RL problems. A critical, yet constructive, viewpoint is taken of neuroevolution. This viewpoint differs from much of the research undertaken in this field, which is often unjustifiably optimistic and tends to gloss over difficult practical issues.
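To illustrate the neuroevolution setup described above (evolving the connection weights of a small neural controller with an evolution strategy, evaluated on randomly generated episodes of a Dubins-car goal-reaching task), here is a compact Python sketch. The network size, fitness definition, simple non-adaptive (mu, lambda)-ES, and all parameters are assumptions for illustration; the thesis's actual controller, self-adaptive ES, and vehicle model are not reproduced here.

```python
import numpy as np

def dubins_step(state, turn_cmd, v=1.0, dt=0.1, max_turn=1.0):
    """Simplified Dubins-car kinematics: constant speed, bounded turn rate."""
    x, y, th = state
    th = th + np.clip(turn_cmd, -max_turn, max_turn) * dt
    return np.array([x + v * np.cos(th) * dt, y + v * np.sin(th) * dt, th])

def policy(weights, obs, hidden=6):
    """Tiny feedforward net (obs -> hidden -> 1) whose weights are evolved."""
    n_in = obs.size
    w1 = weights[:n_in * hidden].reshape(n_in, hidden)
    w2 = weights[n_in * hidden:n_in * hidden + hidden]
    return float(np.tanh(np.tanh(obs @ w1) @ w2))

def fitness(weights, rng, episodes=5, steps=120):
    """Average closeness to randomly generated goals, echoing the use of
    randomly generated evaluation sets; the task and reward are illustrative."""
    total = 0.0
    for _ in range(episodes):
        goal = rng.uniform(-5, 5, size=2)
        state = np.zeros(3)
        for _ in range(steps):
            rel = goal - state[:2]
            obs = np.array([rel[0], rel[1], np.cos(state[2]), np.sin(state[2])])
            state = dubins_step(state, policy(weights, obs))
        total -= float(np.linalg.norm(goal - state[:2]))      # closer is better
    return total / episodes

def neuroevolution(mu=5, lam=20, sigma=0.3, generations=50, seed=4):
    rng = np.random.default_rng(seed)
    dim = 4 * 6 + 6                                           # weights of the tiny net
    pop = rng.standard_normal((mu, dim)) * 0.5
    for _ in range(generations):
        offspring = pop[rng.integers(mu, size=lam)] + sigma * rng.standard_normal((lam, dim))
        scores = [fitness(w, rng) for w in offspring]
        pop = offspring[np.argsort(scores)[-mu:]]             # keep the best (maximization)
    return pop[-1]

if __name__ == "__main__":
    best = neuroevolution()
    print(round(fitness(best, np.random.default_rng(0)), 3))
```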
10

Markov chain Analysis of Evolution Strategies / Analyse Markovienne des Stratégies d'Evolution

Chotard, Alexandre 24 September 2015 (has links)
In this dissertation an analysis of Evolution Strategies (ESs) using the theory of Markov chains is conducted. Proofs of divergence or convergence of these algorithms are obtained, and tools to achieve such proofs are developed. ESs are so-called "black-box" stochastic optimization algorithms, i.e. information on the function to be optimized is limited to the values it associates to points. In particular, gradients are unavailable. Proofs of convergence or divergence of these algorithms can be obtained through the analysis of Markov chains underlying these algorithms. The proofs of log-linear convergence and of divergence obtained in this thesis in the context of a linear function with or without constraint are essential components for the proofs of convergence of ESs on wide classes of functions. This dissertation first gives an introduction to Markov chain theory, then a state of the art on ESs and on black-box continuous optimization, and presents already established links between ESs and Markov chains. The contributions of this thesis are then presented. o General mathematical tools that can be applied to a wider range of problems are developed. These tools make it easy to prove specific Markov chain properties (irreducibility, aperiodicity and the fact that compact sets are small sets for the Markov chain) on the Markov chains studied. Obtaining these properties without these tools is an ad hoc, tedious and technical process that can be very difficult. o Then different ESs are analyzed on different problems. We study a (1,λ)-ES using cumulative step-size adaptation on a linear function and prove the log-linear divergence of the step-size; we also study the variation of the logarithm of the step-size, from which we establish a necessary condition for the stability of the algorithm with respect to the dimension of the search space. Then we study an ES with constant step-size and with cumulative step-size adaptation on a linear function with a linear constraint, using resampling to handle unfeasible solutions. We prove that with constant step-size the algorithm diverges, while with cumulative step-size adaptation, depending on parameters of the problem and of the ES, the algorithm converges or diverges log-linearly. We then investigate how the convergence or divergence rate of the algorithm depends on the parameters of the problem and of the ES. Finally we study an ES with a sampling distribution that can be non-Gaussian and with constant step-size on a linear function with a linear constraint. We give sufficient conditions on the sampling distribution for the algorithm to diverge. We also show that different covariance matrices for the sampling distribution correspond to a change of norm of the search space, which implies that adapting the covariance matrix of the sampling distribution may allow an ES with cumulative step-size adaptation to successfully diverge on a linear function with any linear constraint. Finally, these results are summed up, discussed, and perspectives for future work are explored.
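As a concrete companion to the (1,λ)-ES with cumulative step-size adaptation analyzed in the thesis, the sketch below runs such an algorithm on a linear function and reports the growth of the log step-size, which is expected to be roughly linear in the number of iterations (log-linear divergence). The constants (cumulation factor, damping) are common default choices and an assumption here, not the exact parametrization studied in the thesis.

```python
import numpy as np

def one_lambda_csa(f, dim, lam=10, sigma=1.0, iterations=300, seed=5):
    """Hedged sketch of a (1, lambda)-ES with cumulative step-size adaptation:
    the best of lambda Gaussian offspring replaces the parent, an evolution
    path accumulates the selected steps, and the step size grows when the path
    is longer than expected under random selection."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    c = 4.0 / (dim + 4.0)                                    # cumulation factor
    d = 1.0 + np.sqrt(1.0 / dim)                             # damping
    chi_n = np.sqrt(dim) * (1 - 1/(4*dim) + 1/(21*dim**2))   # E||N(0, I)||
    path = np.zeros(dim)
    log_sigma = []
    for _ in range(iterations):
        z = rng.standard_normal((lam, dim))
        best = z[np.argmin([f(x + sigma * zi) for zi in z])]  # minimization
        x = x + sigma * best
        path = (1 - c) * path + np.sqrt(c * (2 - c)) * best
        sigma *= np.exp((c / d) * (np.linalg.norm(path) / chi_n - 1))
        log_sigma.append(np.log(sigma))
    return x, log_sigma

if __name__ == "__main__":
    linear = lambda x: float(x[0])          # linear function: minimize x_1
    x, logs = one_lambda_csa(linear, dim=10)
    print(round(logs[-1] - logs[0], 2))     # positive: the step size diverged log-linearly
```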
