31 |
GPU: the paradigm of parallel power for evolutionary computation. January 2005 (has links)
Fok Ka Ling. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 96-101). / Abstracts in English and Chinese. / Abstract --- p.1 / Acknowledgement --- p.iv / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Evolutionary Computation --- p.1 / Chapter 1.2 --- Graphics Processing Unit --- p.2 / Chapter 1.3 --- Objective --- p.3 / Chapter 1.4 --- Contribution --- p.4 / Chapter 1.5 --- Thesis Organization --- p.4 / Chapter 2 --- Evolutionary Computation --- p.6 / Chapter 2.1 --- Introduction --- p.6 / Chapter 2.2 --- General Framework --- p.7 / Chapter 2.3 --- Features of Evolutionary Algorithm --- p.8 / Chapter 2.3.1 --- Widely Applicable --- p.8 / Chapter 2.3.2 --- Parallelism --- p.9 / Chapter 2.3.3 --- Robust to Change --- p.9 / Chapter 2.4 --- Parallel and Distributed Evolutionary Algorithm --- p.9 / Chapter 2.4.1 --- Global Parallel Evolutionary Algorithms --- p.10 / Chapter 2.4.2 --- Fine-Grained Evolutionary Algorithms --- p.11 / Chapter 2.4.3 --- Island Distributed Evolutionary Algorithms --- p.12 / Chapter 2.5 --- Summary --- p.14 / Chapter 3 --- Graphics Processing Unit --- p.15 / Chapter 3.1 --- Introduction --- p.15 / Chapter 3.2 --- History of GPU --- p.16 / Chapter 3.2.1 --- First-Generation GPUs --- p.16 / Chapter 3.2.2 --- Second-Generation GPUs --- p.17 / Chapter 3.2.3 --- Third-Generation GPUs --- p.17 / Chapter 3.2.4 --- Fourth-Generation GPUs --- p.17 / Chapter 3.3 --- The Graphics Pipelining --- p.18 / Chapter 3.3.1 --- Standard Graphics Pipeline --- p.18 / Chapter 3.3.2 --- Programmable Graphics Pipeline --- p.18 / Chapter 3.3.3 --- Fragment Processors for Scientific Computation --- p.21 / Chapter 3.4 --- GPU-CPU Analogy --- p.23 / Chapter 3.4.1 --- Memory Architecture --- p.23 / Chapter 3.4.2 --- Processing Model --- p.24 / Chapter 3.5 --- Limitation of GPU --- p.24 / Chapter 3.5.1 --- Limited Input and Output --- p.24 / Chapter 3.5.2 --- Slow Data Readback --- p.24 / Chapter 3.5.3 --- No 
Random Number Generator --- p.25 / Chapter 3.6 --- Summary --- p.25 / Chapter 4 --- Evolutionary Programming on GPU --- p.26 / Chapter 4.1 --- Introduction --- p.26 / Chapter 4.2 --- Evolutionary Programming --- p.26 / Chapter 4.3 --- Data Organization --- p.29 / Chapter 4.4 --- Fitness Evaluation --- p.31 / Chapter 4.4.1 --- Introduction --- p.31 / Chapter 4.4.2 --- Different Forms of Fitness Function --- p.32 / Chapter 4.4.3 --- Parallel Fitness Function Evaluation using GPU --- p.33 / Chapter 4.5 --- Mutation --- p.34 / Chapter 4.5.1 --- Introduction --- p.34 / Chapter 4.5.2 --- Self Adaptive Mutation Operators --- p.36 / Chapter 4.5.3 --- Mutation on GPU --- p.37 / Chapter 4.6 --- Selection for Replacement --- p.39 / Chapter 4.6.1 --- Introduction --- p.39 / Chapter 4.6.2 --- Classification of Selection Operator --- p.39 / Chapter 4.6.3 --- q -Tournament Selection --- p.40 / Chapter 4.6.4 --- Median Searching --- p.41 / Chapter 4.6.5 --- Minimizing Data Transfer --- p.43 / Chapter 4.7 --- Experimental Results --- p.44 / Chapter 4.7.1 --- Visualization --- p.48 / Chapter 4.8 --- Summary --- p.49 / Chapter 5 --- Genetic Algorithm on GPU --- p.56 / Chapter 5.1 --- Introduction --- p.56 / Chapter 5.2 --- Canonical Genetic Algorithm --- p.57 / Chapter 5.2.1 --- Parent Selection --- p.57 / Chapter 5.2.2 --- Crossover and Mutation --- p.62 / Chapter 5.2.3 --- Replacement --- p.63 / Chapter 5.3 --- Experiment Results --- p.64 / Chapter 5.4 --- Summary --- p.66 / Chapter 6 --- Multi-Objective Genetic Algorithm --- p.70 / Chapter 6.1 --- Introduction --- p.70 / Chapter 6.2 --- Definitions --- p.71 / Chapter 6.2.1 --- General MOP --- p.71 / Chapter 6.2.2 --- Decision Variables --- p.71 / Chapter 6.2.3 --- Constraints --- p.71 / Chapter 6.2.4 --- Feasible Region --- p.72 / Chapter 6.2.5 --- Optimal Solution --- p.72 / Chapter 6.2.6 --- Pareto Optimum --- p.73 / Chapter 6.2.7 --- Pareto Front --- p.73 / Chapter 6.3 --- Multi-Objective Genetic Algorithm --- p.75 / Chapter 
6.3.1 --- Ranking --- p.76 / Chapter 6.3.2 --- Fitness Scaling --- p.77 / Chapter 6.3.3 --- Diversity Preservation --- p.77 / Chapter 6.4 --- A Niched and Elitism Multi-Objective Genetic Algorithm on GPU --- p.79 / Chapter 6.4.1 --- Objective Values Evaluation --- p.80 / Chapter 6.4.2 --- Pairwise Pareto Dominance and Pairwise Distance --- p.81 / Chapter 6.4.3 --- Fitness Assignment --- p.85 / Chapter 6.4.4 --- Embedded Archiving Replacement --- p.87 / Chapter 6.5 --- Experiment Result --- p.89 / Chapter 6.6 --- Summary --- p.90 / Chapter 7 --- Conclusion --- p.95 / Bibliography --- p.96
|
32 |
Discovering acyclic dependency relationships by evolutionary computation. / CUHK electronic theses & dissertations collection. January 2007 (has links)
Data mining algorithms discover knowledge from data. The knowledge is commonly expressed as dependency relationships in various forms, such as rules, decision trees and Bayesian Networks (BNs). Moreover, many real-world problems are multi-class problems, in which more than one of the variables in the data set is considered a class. However, most available rule learners were proposed for single-class problems only and would produce cyclic rules if applied to multi-class ones. In addition, most of them produce rules with conflicts, i.e. more than one rule classifies the same data items and different rules give different predictions. Similarly, existing decision tree learners cannot handle multi-class problems, and thus cannot detect and avoid cycles. In contrast, BNs represent acyclic dependency relationships among variables, but they can handle discrete values only. They cannot manage continuous, interval and ordinal values and cannot represent higher-order relationships. Consequently, BNs have higher network complexity and lower understandability when they are used for such problems. / This thesis has studied in depth the discovery of dependency relationships in various forms by Evolutionary Computation (EC). Through analysis of the reasons behind the disadvantages of rules, decision trees and BNs, and of their learners, we have proposed a sequence of EAs, a novel functional dependency network (FDN) and two techniques for dependency relationship learning and for multi-class problems: a multi-population Genetic Programming (GP) using a backward-chaining procedure and a GP employing a co-operating scoring stage, both for acyclic rule learning; the dependency network with functions, which can manage all kinds of values and represent any kind of relationship among variables; and the flexible and robust MDLGP to learn the novel dependency network and BNs.
Based on the FDN, we have further developed techniques to learn conflict-free rules and acyclic decision trees for multi-class problems. A new self-organizing map (SOM) with expanding force, for clustering and for data visualization in data preprocessing, is also given in the appendix. / Shum Wing Ho. / "May 2007." / Adviser: Kwong-Sak Leung. / Source: Dissertation Abstracts International, Volume: 69-01, Section: B, page: 0436. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 221-240). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
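The cyclic-rule problem this thesis targets is easy to demonstrate concretely. The sketch below uses a hypothetical rule format (each rule maps a set of antecedent variables to one consequent variable; this is an illustration, not the thesis's own representation): it builds the variable-level dependency graph of a rule set and detects cycles by depth-first search.

```python
# Hedged sketch: detecting cyclic dependencies among learned rules.
# Hypothetical rule format: (antecedent_variables, consequent_variable).
def has_cycle(rules):
    # Build the variable-level dependency graph: antecedent -> consequent.
    graph = {}
    for antecedents, consequent in rules:
        for a in antecedents:
            graph.setdefault(a, set()).add(consequent)

    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, ()):
            c = color.get(w, WHITE)
            if c == GRAY:
                return True          # back edge found: the rule set is cyclic
            if c == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in list(color))
```

A multi-class rule learner that never introduces a back edge in this graph, as the proposed GP variants aim to do, would always pass this check.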
|
33 |
Exploring the Modularity and Structure of Robots Evolved in Multiple Environments. Cappelle, Collin, 01 January 2019 (has links)
Traditional techniques for the design of robots require human engineers to plan every aspect of the system, from body to controller. In contrast, the field of evolutionary robotics uses evolutionary algorithms to create optimized morphologies and neural controllers with minimal human intervention. In order to expand the capability of an evolved agent, it must be exposed to a variety of conditions and environments.
This thesis investigates the design and benefits of virtual robots which can reflect the structure and modularity in the world around them. I show that when a robot’s morphology and controller enable it to perceive each environment as a collection of independent components, rather than a monolithic entity, evolution only needs to optimize on a subset of environments in order to maintain performance in the overall larger environmental space. I explore previously unused methods in evolutionary robotics to aid in the evolution of modularity, including using morphological and neurological cost.
I utilize a tree morphology which makes my results generalizable to other morphologies while also allowing in-depth theoretical analysis of the properties relevant to modularity in embodied agents. In order to better frame the question of modularity in an embodied context, I provide novel definitions of morphological and neurological modularity as well as create the sub-goal interference metric, which measures how much independence a robot exhibits with regard to environmental stimulus.
My work extends beyond evolutionary robotics: it can be applied to the optimization of embodied systems in general and provides insight into the evolution of form in biological organisms.
|
34 |
Application of evolutionary algorithms to engineering design. Hayward, Kevin, January 2008 (has links)
The efficiency of the mechanical design process can be improved by the use of evolutionary algorithms, which provide a convenient and robust method of searching for appropriate design solutions. Difficult non-linear problems are often encountered during the mechanical engineering design process, and solutions to these problems often involve computationally intensive simulations. Evolutionary algorithms tuned to work with a small number of solution iterations can be used to automate the search for optimal solutions to such problems. An evolutionary algorithm was designed to give reliable results after a few thousand iterations; additionally, scalability and ease of application to varied problems were considered. The convergence velocity of the algorithm was improved considerably by altering its mutation-based parameters; much of this performance gain can be attributed to making the mutation magnitude and the minimum mutation rates self-adaptive. Three motorsport-based design problems were simulated and the evolutionary algorithm was applied to search for appropriate solutions. The first two, a racing-line generator and a suspension kinematics simulation, were used to highlight properties of the evolutionary algorithm: reliability; solution representation; determining variable/performance relationships; and handling multiple objectives. The last problem was the lap-time simulation of a Formula SAE vehicle, solved with 32 variables including a number of major conceptual differences. The solution to this optimisation was found to be significantly better than the 2004 UWA Motorsport vehicle, which finished 2nd in the 2005 US competition: a simulated comparison showed the optimised vehicle would score 62 more points (out of 675) in the dynamic events of the Formula SAE competition. Notably, the optimised vehicle had a different conceptual design to the actual UWA vehicle.
These results can be used to improve the design of future Formula SAE vehicles. The evolutionary algorithm developed here can be used as an automated search procedure for problems where evaluating candidate solutions is computationally intensive.
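The self-adaptive mutation scheme credited with much of the convergence gain can be sketched roughly as follows. This is a minimal sketch under assumptions: the log-normal step-size update and (mu + lambda) truncation selection are standard evolution-strategy choices, not necessarily the thesis's exact formulation, and all parameter values are illustrative.

```python
import math
import random

# Hedged sketch of an EA with self-adaptive mutation: each individual
# carries its own step size sigma, which is itself mutated (log-normally,
# a standard evolution-strategy choice) before perturbing the design
# variables. A floor on sigma plays the role of a minimum mutation rate
# so the search never stalls completely.
def evolve(fitness, dim, pop_size=20, generations=200, sigma_min=1e-4):
    tau = 1.0 / math.sqrt(dim)
    pop = [([random.uniform(-5, 5) for _ in range(dim)], 1.0)
           for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for x, sigma in pop:
            # Self-adaptation: mutate the step size first, with a floor.
            s = max(sigma * math.exp(tau * random.gauss(0, 1)), sigma_min)
            child = [xi + s * random.gauss(0, 1) for xi in x]
            offspring.append((child, s))
        # (mu + lambda) truncation selection on fitness (minimisation).
        pop = sorted(pop + offspring,
                     key=lambda ind: fitness(ind[0]))[:pop_size]
    return pop[0]

# Usage: minimise the sphere function in 3 dimensions.
best_x, best_sigma = evolve(lambda v: sum(xi * xi for xi in v), dim=3)
```

Because the step size travels with the individual, successful magnitudes are inherited, which is the mechanism the abstract credits for the improved convergence velocity.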
|
35 |
Segmentation et évolution pour la planification : le système Divide-And-Evolve. Bibai, Jacques, 08 October 2010 (has links) (PDF)
DAEX is a metaheuristic designed to improve the plan quality and the scalability of an encapsulated planning system. It is based on a state-decomposition strategy, driven by an evolutionary algorithm, which benefits from the use of a classical planning heuristic to maintain an ordering of atoms within the individuals. The proof of concept is achieved by embedding the domain-independent satisficing YAHSP planner and using the critical-path h1 heuristic. Experiments with the resulting algorithm are performed on a selection of IPC benchmarks from classical, cost-based and temporal domains. Under the experimental conditions of the IPC, and in particular with a universal parameter setting common to all domains, DAEYAHSP is compared to the best planner for each type of domain. Results show that DAEYAHSP performs very well on both coverage and quality metrics. It is particularly noticeable that DAEX substantially improves plan quality compared to YAHSP, which is known to provide largely suboptimal solutions, making it competitive with state-of-the-art planners. This article gives a full account of the algorithm, reports on the experiments and provides some insights into the algorithm's behavior.
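The state-decomposition idea can be illustrated with a toy stand-in for the embedded planner. In the sketch below, an individual is a sequence of intermediate states between the initial state and the goal; all names and costs are illustrative, and a quadratic sub-plan cost merely mimics the fact that short subproblems are disproportionately easier for a satisficing planner, which is what rewards a good decomposition.

```python
import random

# Hedged sketch of the Divide-and-Evolve scheme: fitness of an individual
# (a list of intermediate states in a toy 1-D state space) is the total
# cost of the sub-plans the "embedded planner" produces for each
# consecutive pair of states. The quadratic cost is a toy stand-in for
# YAHSP, not a real planner call.
def subplan_cost(a, b):
    return (b - a) ** 2

def evolve_decomposition(start=0.0, goal=10.0, k=3, pop_size=30, gens=200):
    def fit(ind):
        states = [start] + sorted(ind) + [goal]
        return sum(subplan_cost(a, b) for a, b in zip(states, states[1:]))

    pop = [[random.uniform(start, goal) for _ in range(k)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fit)
        elite = pop[:pop_size // 2]          # elitist truncation selection
        pop = elite + [[x + random.gauss(0, 0.3) for x in ind]
                       for ind in elite]     # Gaussian mutation of states
    best = min(pop, key=fit)
    return sorted(best), fit(best)
```

With the quadratic cost, the optimum is an evenly spaced decomposition (intermediate states near 2.5, 5.0 and 7.5 for this toy instance), which is the behaviour the evolutionary layer of DAEX is meant to discover.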
|
36 |
Probabilistic Darwin Machines: A new approach to develop Evolutionary Object Detection Systems. Baró i Solé, Xavier, 03 April 2009 (has links)
Ever since computers were invented, we have wondered whether they might perform some of the quotidian tasks of humans. One of the most studied, and still least understood, problems is the capacity to learn from our experiences and to generalize the knowledge we acquire. One of these tasks, unconscious for people yet attracting scientific interest from the beginning, is what is known as pattern recognition. The creation of models that represent the world around us helps us recognize objects in our environment, predict situations, identify behaviors... All this information allows us to adapt to and interact with our environment.
The capacity of individuals to adapt to their environment has even been related to the number of patterns they are capable of identifying. When we speak about pattern recognition in the field of Computer Vision, we refer to the ability to identify objects using the information contained in one or more images. Despite the progress of recent years, and the fact that we are now able to obtain "useful" results in real environments, we are still very far from having a system with the same capacity for abstraction and the same robustness as the human visual system. In this thesis, the face detector of Viola & Jones is studied as the paradigmatic and most extended approach to the object detection problem. Firstly, we analyze the way objects are described using comparisons of illumination values in adjacent zones of the images, and how this information is later organized to create more complex structures. As a result of this study, two weak points are identified in this family of methods: the first concerns the description of the objects, and the second is a limitation of the learning algorithm, which hampers the use of better descriptors. Describing objects using Haar-like features limits the extracted information to connected regions of the object. If we want to compare distant zones, large contiguous regions must be used, which causes the obtained values to depend more on the average lighting of the object than on the regions we actually want to compare. With the goal of using this type of non-local information, we introduce the Dissociated Dipoles into the object detection scheme. The problem with this type of descriptor is that the great cardinality of the feature set makes the use of Adaboost as the learning algorithm unfeasible.
The reason is that during the learning process an exhaustive search is made over the space of hypotheses, and since that space is enormous, the time required for learning becomes prohibitive. Although we studied this phenomenon in the Viola & Jones approach, it is a general problem for most approaches, where the learning method imposes a limitation on the descriptors that can be used, and therefore on the quality of the object description. In order to remove this limitation, we introduce evolutionary methods into the Adaboost algorithm and study the effects of this modification on the learning ability. Our experiments conclude that not only does it remain able to learn, but its convergence speed is not significantly altered. This new Adaboost with evolutionary strategies opens the door to feature sets of arbitrary cardinality, which allows us to investigate new ways to describe our objects, such as the Dissociated Dipoles. We first compare the learning ability of this evolutionary Adaboost using Haar-like features and Dissociated Dipoles; from the results of this comparison we conclude that both types of descriptors have similar representation power, although depending on the problem to which they are applied, one adapts slightly better than the other. With the aim of obtaining a descriptor that shares the strong points of both Haar-like features and Dissociated Dipoles, we propose a new type of feature, the Weighted Dissociated Dipoles, which combines the robustness of the structure detectors present in the Haar-like features with the Dissociated Dipoles' ability to use non-local information.
In the experiments we carried out, this new feature set obtains better results on all the problems where it was compared with Haar-like features and Dissociated Dipoles. In order to test the performance of each method, and to compare the different methods, we use a set of public databases covering face detection, text detection, pedestrian detection and car detection. In addition, our methods are tested on a traffic sign detection problem, over large databases containing both road and urban scenes.
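The central modification, replacing Adaboost's exhaustive scan of the hypothesis space with an evolutionary search for each round's weak learner, can be sketched as follows. Decision stumps over plain feature vectors stand in for the Haar-like or dipole features, and the whole setup is an illustration of the idea rather than the thesis's implementation.

```python
import random

# Hedged sketch: one AdaBoost-style round where the best weak learner
# (a threshold stump on one feature) is found by a small evolutionary
# search instead of an exhaustive scan. X is a list of feature vectors,
# y the labels in {-1, +1}, w the per-sample boosting weights.
def evo_stump(X, y, w, pop_size=30, generations=40):
    n_feat = len(X[0])

    def weighted_error(stump):
        f, t, s = stump          # feature index, threshold, output sign
        return sum(wi for xi, yi, wi in zip(X, y, w)
                   if (s if xi[f] > t else -s) != yi)

    # Random initial population of stumps.
    pop = [(random.randrange(n_feat), random.uniform(-1, 1),
            random.choice((-1, 1))) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=weighted_error)
        parents = pop[:pop_size // 2]            # elitist selection
        children = [(f, t + random.gauss(0, 0.1), s)  # mutate threshold
                    for f, t, s in parents]
        pop = parents + children
    return min(pop, key=weighted_error)
```

Because the population is small and fixed, the cost per boosting round no longer grows with the cardinality of the feature set, which is what makes very large descriptor families such as Dissociated Dipoles tractable.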
|
37 |
Robust non-linear control through neuroevolutionGomez, Faustino John, January 2003 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2003. / Vita. Includes bibliographical references. Available also from UMI Company.
|
38 |
Robust non-linear control through neuroevolutionGomez, Faustino John 28 August 2008 (has links)
Not available / text
|
39 |
Agent-Based Modelling of Stress and Productivity Performance in the Workplace. Page, Matthew, 23 August 2013 (has links)
The ill effects of stress due to fatigue significantly impact the welfare of individuals and, consequently, overall corporate productivity. This study introduces a simplified model of stress in the workplace using agent-based simulation, and represents a novel contribution to the field of evolutionary computation. Agents are encoded initially using a String Representation and later expanded to multi-state Binary Decision Automata (BDA) to choose between work on a base task, a special project, or rest. Training occurs by agents inaccurately mimicking the behaviour of highly productive mentors. Stress accumulates through working long hours, thereby decreasing an agent's productivity performance. The lowest-productivity agents are fired or retrained. The String representation demonstrated near-average performance, attributed to the normally distributed tasks assigned to the string. The BDA representation was found to be highly adaptive, responding robustly to parameter changes. By reducing the number of simplifications in the model, a more accurate representation of the real world can be achieved.
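The stress/productivity dynamics described above might be sketched like this; all constants, and the linear stress penalty on output, are illustrative choices, not the thesis's calibrated model.

```python
# Hedged sketch of the workplace stress dynamics: working raises stress,
# stress lowers effective output, rest recovers. Constants are
# illustrative, not the thesis's calibrated values.
class Agent:
    def __init__(self):
        self.stress = 0.0
        self.output = 0.0

    def step(self, action):
        if action == "rest":
            self.stress = max(0.0, self.stress - 0.2)   # recovery
        else:                      # "base" task or "special" project
            gain = 1.5 if action == "special" else 1.0
            # Accumulated stress reduces effective productivity.
            self.output += gain * max(0.0, 1.0 - self.stress)
            self.stress = min(1.0, self.stress + 0.1)

def simulate(policy, steps=40):
    agent = Agent()
    for t in range(steps):
        agent.step(policy(t))
    return agent.output
```

Comparing an always-work policy against a simple work/rest cycle already exhibits the fatigue penalty the model is built around: the agent that never rests saturates at maximum stress and stops producing.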
|
40 |
On the Pareto-Following Variation Operator for fast converging Multiobjective Evolutionary Algorithms. Talukder, A. K. M. K. A., January 2008 (has links)
The focus of this research is to provide an efficient approach to dealing with computationally expensive Multiobjective Optimization Problems (MOPs). Typically, approximation or surrogate-based techniques are adopted to reduce the computational cost. In such cases, the original expensive objective function is replaced by a cheaper mathematical model that mimics its input-output (i.e. design variable to objective value) relationship. However, it is difficult to model an exact substitute for the targeted objective function. Furthermore, if this kind of approach is used in an evolutionary search, the number of function evaluations does not actually decrease (each original function evaluation is simply replaced by a surrogate evaluation), and if a large number of individuals are considered, the surrogate model fails to offer a smaller computational cost. / To tackle this problem, we have reformulated the concept of surrogate modeling in a different way, more suitable for the Multiobjective Evolutionary Algorithm (MOEA) paradigm. In our approach, we do not approximate the objective function; rather, we model the input-output behavior of the underlying MOEA itself. The model identifies the search path (in both the design-variable and objective spaces), and from this trajectory it is guaranteed to generate non-dominated solutions (especially during the initial iterations of the underlying MOEA, with respect to the current solutions) for the next iterations. The MOEA can therefore avoid re-evaluating dominated solutions and save a large amount of the computational cost due to expensive function evaluations. We have designed our approximation model as a variation operator that follows the trajectory of the fronts and can be "plugged in" to any MOEA in which non-domination-based selection is used. Hence it is termed the "Pareto-Following Variation Operator (PFVO)".
This approach also provides the added advantage that we can still use the original objective function, so the search procedure remains robust and suitable, especially for dynamic problems. / We have integrated the model into three baseline MOEAs: the "Non-dominated Sorting Genetic Algorithm II (NSGA-II)", the "Strength Pareto Evolutionary Algorithm II (SPEA-II)" and the recently proposed "Regularity Model-Based Estimation of Distribution Algorithm (RM-MEDA)". We have also conducted an exhaustive simulation study using several benchmark MOPs. Detailed performance and statistical analysis reveals promising results. As an extension, we have implemented our idea for dynamic MOPs, and have also integrated PFVO into diffusion-based/cellular MOEAs in a distributed/Grid environment. Most experimental results and analyses reveal that PFVO can be used as a performance-enhancement tool for any kind of MOEA.
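The non-domination test at the heart of any MOEA that PFVO can plug into can be sketched in a few lines; minimisation of all objectives is assumed, and this is the simple O(n^2) filter, not NSGA-II's fast non-dominated sort.

```python
# Hedged sketch: Pareto dominance and a non-dominated filter
# (minimisation of all objectives assumed).
def dominates(a, b):
    # a dominates b if it is no worse in every objective and
    # strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    # Keep each point that no other point dominates.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Skipping the expensive objective evaluation for solutions this filter would discard is precisely the saving PFVO aims for.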
|