  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

A Risk-based Optimization Modeling Framework for Mitigating Fire Events for Water and Fire Response Infrastructures

Kanta, Lufthansa Rahman 2009 December 1900 (has links)
The purpose of this dissertation is to address the risk and consequences of, and effective mitigation strategies for, urban fire events involving two critical infrastructures: water distribution and emergency services. Water systems have been identified as one of the United States' critical infrastructures and are vulnerable to various threats caused by natural disasters or malevolent actions. The primary goals of urban water distribution systems are reliable delivery of water during normal and emergency conditions (such as fires), ensuring this water is of acceptable quality, and accomplishing these tasks in a cost-effective manner. Due to the interdependency of water systems with other critical infrastructures, e.g., energy, public health, and emergency services (including fire response), water systems planning and management offers numerous challenges to water utilities and affiliated decision makers. The dissertation is divided into three major sections, each of which presents and demonstrates a methodological innovation applied to the above problem. First, a risk-based dynamic programming modeling approach is developed to identify the critical components of a water distribution system during fire events under three failure scenarios: (1) accidental failure due to soil-pipe interaction, (2) accidental failure due to seismic activity, and (3) intentional failure or malevolent attack. Second, a novel evolutionary-computation-based multi-objective optimization technique, the Non-dominated Sorting Evolution Strategy (NSES), is developed for systematic generation of optimal mitigation strategies for urban fire events in water distribution systems with three competing objectives: (1) minimizing fire damage, (2) minimizing water quality deficiencies, and (3) minimizing the cost of mitigation.
Third, a stochastic modeling approach is developed to assess urban fire risk for the coupled water distribution and fire response systems, including probabilistic expressions for building ignition, water distribution system (WDS) failure, and wind direction. Urban fire consequences are evaluated in terms of the number of people displaced and the cost of property damage. To reduce the assessed urban fire risk, the NSES multi-objective approach is used to generate Pareto-optimal solutions that express the tradeoffs between the risk reduction, mitigation cost, and water quality objectives. The new methodologies are demonstrated through successful application to a realistic case study in water systems planning and management.
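The non-dominated sorting step at the heart of a method like NSES can be sketched as follows (an illustrative minimum, not the dissertation's implementation; the objective values below are hypothetical mitigation-strategy scores):

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every
    (minimized) objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(objectives):
    """Return indices of non-dominated solutions.
    objectives: list of tuples, e.g. (fire_damage, quality_deficiency, cost)."""
    front = []
    for i, a in enumerate(objectives):
        if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i):
            front.append(i)
    return front

# Hypothetical mitigation strategies scored on the three objectives:
scores = [(3.0, 2.0, 5.0), (2.0, 3.0, 4.0), (4.0, 4.0, 6.0), (1.0, 5.0, 7.0)]
front = pareto_front(scores)  # solutions 0, 1 and 3 are non-dominated
```

Here solutions 0, 1 and 3 form the Pareto front; an evolution strategy such as NSES maintains such fronts across generations and breeds new candidates from them.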
132

ACE-Model: A Conceptual Evolutionary Model For Evolutionary Computation And Artificial Life

Dukkipati, Ambedkar 03 1900 (has links)
A Darwinian evolutionary system, one satisfying the abstract conditions of reproduction with heritable variation in a finite world, giving rise to natural selection, encompasses a complex and subtle system of interrelated theories, whose substantive transplantation to any artificial medium, be it a mathematical or a computational model, is far from easy. There are two motives for bringing Darwinian evolution into computational frameworks: one is to understand Darwinian evolution itself, and the other is to view Darwinian evolution, which carries out a controlled adaptive-stochastic search in the space of all possible DNA sequences for the emergence and improvement of living beings on our planet, as an optimization process that can be simulated in appropriate frameworks to solve otherwise intractable problems. The first motive led to the emerging field of study commonly referred to as Artificial Life, and the second gave rise to Evolutionary Computation, which is speculated to be the only practical path to the development of ontogenetic machine intelligence. This thesis touches upon all of the above aspects. Natural selection is the central concept of Darwinian evolution, and hence capturing natural selection in computational frameworks that maintain the spirit of Darwinian evolution, in the conventional, terrestrial and biological sense, is essential. Naive models of evolution define natural selection as a process that brings differential reproductive capabilities to the organisms of a population; hence, most evolutionary simulations in Artificial Life and Evolutionary Computation implement selection by differential reproduction: the fittest members of the population are reproduced preferentially at the expense of the less fit members.
Formal models in evolutionary biology often subdivide selection into components called 'episodes of selection' to capture the different complex mechanisms by which Darwinian evolution can occur in nature. In this thesis we introduce the concept of 'episodes of selection' into computational frameworks of Darwinian evolution by means of A Conceptual Evolutionary model (ACE-model). The ACE-model is proposed to be simple, yet it captures the essential features of modern evolutionary perspectives in an evolutionary computation framework. It is rich enough to offer an abstract and structural framework for evolutionary computation and can serve as a basic model for evolutionary algorithms. It captures selection in two episodes, in two phases of the evolutionary cycle, and it offers various parameters by which evolutionary algorithms can control selection mechanisms. In this thesis we propose two evolutionary algorithms based on the ACE-model, namely the Malthus evolutionary algorithm and the Malthus-Spencer evolutionary algorithm, and we discuss the relevance of the parameters offered by the ACE-model through simulation studies. As an application of the ACE-model to artificial life, we study misconceptions involved in defining fitness in evolutionary biology, and we also discuss the importance of introducing the fitness landscape into theories of Darwinian evolution. Another important and independent contribution of this thesis is a mathematical abstraction of the evolutionary process: the process is characterized by evolutionary criteria and an evolutionary mechanism, formalized with classical mathematical tools. Even though the model is at a premature stage for developing a full theory, we derive convergence criteria for the evolutionary process based on it.
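The idea of two episodes of selection in two phases of the evolutionary cycle can be illustrated with a minimal loop (a sketch only; the real-valued encoding, toy fitness landscape and parameter values below are invented for illustration, not the ACE-model's algorithms):

```python
import math
import random

def evolve(pop, fitness, survival_rate=0.5, generations=50, seed=0):
    """Minimal EA with two 'episodes of selection':
    (1) viability selection - only a fraction of the population survives;
    (2) fecundity selection - survivors reproduce in proportion to fitness."""
    rng = random.Random(seed)
    n = len(pop)
    for _ in range(generations):
        # Episode 1: viability selection (truncation survival).
        survivors = sorted(pop, key=fitness, reverse=True)[:max(1, int(survival_rate * n))]
        # Episode 2: fecundity selection (fitness-proportionate reproduction,
        # exponentiated so the weights stay positive) with Gaussian mutation.
        weights = [math.exp(fitness(s)) for s in survivors]
        pop = [rng.choices(survivors, weights)[0] + rng.gauss(0, 0.1)
               for _ in range(n)]
    return max(pop, key=fitness)

# Toy landscape: maximize -(x - 2)^2, optimum at x = 2.
init = random.Random(1)
best = evolve([init.uniform(-5, 5) for _ in range(30)],
              lambda x: -(x - 2) ** 2)
```

Tuning `survival_rate` and the reproduction weights independently is exactly the kind of per-episode control of selection pressure that the ACE-model parameterizes.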
133

Genetic Programming Based Multicategory Pattern Classification

Kishore, Krishna J 03 1900 (has links)
Nature has created complex biological structures that exhibit intelligent behaviour through an evolutionary process; thus, intelligence and evolution are intimately connected. This has inspired evolutionary computation (EC), which simulates the evolutionary process to develop powerful techniques such as genetic algorithms (GAs), genetic programming (GP), evolution strategies (ES) and evolutionary programming (EP) to solve real-world problems in learning, control, optimization and classification. GP discovers the relationship among data and expresses it as a LISP S-expression, i.e., a computer program. Thus the goal of program discovery as a solution for a problem is addressed by GP in the framework of evolutionary computation. In this thesis, we address for the first time the problem of applying GP to multicategory pattern classification. In supervised pattern classification, an input vector of m dimensions is mapped onto one of n classes. It has a number of application areas such as remote sensing, medical diagnosis, etc. A supervised classifier is developed using a training set that contains representative samples of the various classes present in the application. Supervised classification has previously been done with the maximum likelihood classifier (MLC), neural networks and fuzzy logic. The major considerations in applying GP to pattern classification are listed below: (i) GP-based techniques are data-distribution-free, i.e., no a priori knowledge is needed about the statistical distribution of the data, nor does any assumption such as a normal distribution need to be made, as in MLC. (ii) GP can operate directly on the data in its original form. (iii) GP can detect the underlying but unknown relationship that exists among the data and express it as a mathematical LISP S-expression; the generated LISP S-expressions can be used directly in the application environment.
(iv) GP can either discover the most important discriminating features of a class during evolution, or it requires only minor post-processing of the LISP S-expression to reveal the discriminant features. In a neural network, the knowledge learned about the data distributions is embedded in the interconnection weights, and a considerable amount of post-processing of the weights is required to understand the decision of the neural network. In 2-category pattern classification, a single GP expression is evolved as a discriminant function. The output of the GP expression can be +1 for samples of one class and -1 for samples of the other class. When the GP paradigm is applied to an n-class problem, the following questions arise: Q1. As a typical GP expression returns a value (+1 or -1) for a 2-class problem, how does one apply GP to the n-class pattern classification problem? Q2. What should the fitness function be during evolution of the GP expressions? Q3. How does the choice of function set affect the performance of GP-based classification? Q4. How should training sets be created for evaluating fitness during the evolution of GP classifier expressions? Q5. How does one improve learning of the underlying data distributions in a GP framework? Q6. How should conflict resolution be handled before assigning a class to the input feature vector? Q7. How does GP compare with other classifiers on an n-class pattern classification problem? The research described here seeks to answer these questions. We show that GP can be applied to an n-category pattern classification problem by treating it as n 2-class problems. The suitability of this approach is demonstrated on a real-world problem based on remotely sensed satellite images and on Fisher's Iris data set. In a 2-class problem, simple thresholding is sufficient for a discriminant function to divide the feature space into two regions.
This means that one genetic programming classifier expression (GPCE) is sufficient to say whether or not a given input feature vector belongs to that class; i.e., the GP expression returns a value (+1 or -1). As the n-class problem is formulated as n 2-class problems, n GPCEs are evolved; hence, n GPCE-specific training sets are needed to evolve these n GPCEs. For the sake of illustration, consider a 5-class pattern classification problem. Let nj be the number of samples that belong to class j, and Nj be the number of samples that do not belong to class j (j = 1, ..., 5). Thus:

N1 = n2 + n3 + n4 + n5
N2 = n1 + n3 + n4 + n5
N3 = n1 + n2 + n4 + n5
N4 = n1 + n2 + n3 + n5
N5 = n1 + n2 + n3 + n4

When the five-class problem is formulated as five 2-class problems, we need five GPCEs as discriminant functions to resolve between n1 and N1, n2 and N2, n3 and N3, n4 and N4, and lastly n5 and N5. Each of these five 2-class problems is handled as a separate 2-class problem with simple thresholding. Thus, GPCE#1 resolves between samples of class#1 and the remaining n - 1 classes. A training set is needed to evaluate the fitness of a GPCE during its evolution. If we create the training set directly, it leads to skewness (as n1 < N1). To overcome the skewness, an interleaved data format is proposed for the training set of a GPCE. For example, in the training set of GPCE#1, samples of class#1 are placed alternately between samples of the remaining n - 1 classes. Thus, the interleaved data format is an artifact to create a balanced training set. Conventionally, all the samples of a training set are fed to evaluate the fitness of every member of the population in each generation. We call this "global" learning, as GP tries to learn the entire training set at every stage of the evolution. We have introduced incremental learning to simplify the task of learning for the GP paradigm: a subset of the training set is fed, and the size of the subset is gradually increased over time to cover the entire training data.
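Construction of a GPCE-specific training set in the interleaved data format can be sketched as follows (the class labels and sample names are placeholders; the thesis works with real feature vectors):

```python
def interleave(class_samples, target):
    """Build a GPCE-specific training set in the interleaved data format:
    samples of the target class (desired output +1) are repeated and placed
    alternately between samples of the remaining classes (desired output -1),
    so the two labels are balanced.
    class_samples: dict class label -> list of samples."""
    others = [x for lbl, xs in class_samples.items() if lbl != target for x in xs]
    own = class_samples[target]
    out = []
    for i, neg in enumerate(others):
        out.append((own[i % len(own)], +1))  # target-class sample, repeated cyclically
        out.append((neg, -1))                # one sample from another class
    return out

data = {1: ['a1', 'a2'], 2: ['b1', 'b2'], 3: ['c1', 'c2']}
train = interleave(data, target=1)
labels = [y for _, y in train]  # as many +1 as -1 labels
```

The repetition of target-class samples is what removes the n1 versus N1 skewness described above.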
The basic motivation for incremental learning is to improve learning during evolution, as it is easier to learn a smaller task and then progress from a smaller task to a bigger one. Experimental results are presented to show that the interleaved data format and incremental learning improve the performance of the GP classifier. We also show that GPCEs evolved with an arithmetic function set are able to track variation in the input better than GPCEs evolved with function sets containing logical and nonlinear elements. Hence, we have used the arithmetic function set, incremental learning and the interleaved data format to evolve GPCEs in our simulations. As each GPCE is trained to recognize samples belonging to its own class and to reject samples belonging to other classes, a strength of association measure is associated with each GPCE to indicate the degree to which it can recognize samples belonging to its own class. The strength of association measures are used for assigning a class to an input feature vector. To reduce misclassification of samples, we also show how heuristic rules can be generated in the GP framework, unlike in either the MLC or the neural network classifier. We have also studied the scalability and generalizing ability of the GP classifier by varying the number of classes, and we analyse its performance on the well-known Iris data set. We compare the performance of classification rules generated from the GP classifier with those generated from a neural network classifier, the C4.5 method and a fuzzy classifier for the Iris data set. We show that the performance of GP is comparable to the other classifiers on the Iris data set, and we note that the classification rules can be generated with very little post-processing and are very similar to the rules generated from the neural network and C4.5.
Incremental learning influences the number of generations available for GP to learn the data distribution of the classes whose desired output d is -1 in the interleaved data format. This is because the samples belonging to the true class (desired output d is +1) are placed alternately between samples belonging to the other classes, i.e., they are repeated to balance the training set in the interleaved data format. For example, in the evolution of the GPCE for class#1, the fitness function can be fed initially with samples of class#2 and subsequently with the samples of class#3, class#4 and class#5. So in the evaluation of the fitness function, the samples of class#5 will not be present when the samples of class#2 are present in the initial stages; however, in the later stages of evolution, when samples of class#5 are fed, the fitness function will utilize the samples of both class#2 and class#5. As learning in evolutionary computation is guided by the evaluation of the fitness function, GPCE#1 gets fewer generations to learn how to reject data of class#5 than data of class#2, because the termination criterion (i.e., the maximum number of generations) is defined a priori. Clearly, there are (n-1)! ways of ordering the samples of the classes whose d is -1 in the interleaved data format. Hence a heuristic is presented to determine a suitable order in which to feed the data of the different classes for GPCEs evolved with incremental learning and the interleaved data format. The heuristic computes an overlap index for each class, based on its spatial spread and the distribution of data in the region of overlap with respect to the other classes in each feature, and determines the order in which the classes whose desired output d is -1 should be placed in each GPCE-specific training set for the interleaved data format.
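The thesis's overlap index combines spatial spread with the density of data in the overlap region; a crude stand-in (fraction of one class's samples that fall inside another class's per-feature min-max range, and this exact formula is an assumption of this sketch, not the thesis's definition) is enough to show how such an index can order the d = -1 classes:

```python
def overlap_index(samples_a, samples_b):
    """Fraction of class-A samples falling inside class B's min-max range,
    averaged over features.  A simplistic stand-in for the thesis's index."""
    n_feat = len(samples_a[0])
    total = 0.0
    for f in range(n_feat):
        lo = min(x[f] for x in samples_b)
        hi = max(x[f] for x in samples_b)
        total += sum(1 for x in samples_a if lo <= x[f] <= hi) / len(samples_a)
    return total / n_feat

def feeding_order(class_samples, target):
    """Order the d = -1 classes so that classes overlapping the target class
    more are fed earlier, giving GP more generations to learn to reject them."""
    others = [lbl for lbl in class_samples if lbl != target]
    return sorted(others,
                  key=lambda lbl: overlap_index(class_samples[lbl],
                                                class_samples[target]),
                  reverse=True)

classes = {1: [(0.0, 0.0), (1.0, 1.0)],
           2: [(0.5, 0.5), (1.5, 1.5)],   # overlaps class 1
           3: [(5.0, 5.0), (6.0, 6.0)]}   # far from class 1
order = feeding_order(classes, target=1)   # class 2 is fed before class 3
```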
This ensures that GP gets more generations to learn about the data distribution of a class with a higher overlap index than one with a lower overlap index. The ability of the GP classifier to learn the data distributions depends on the number of classes and the spatial spread of the data. As the number of classes increases, the GP classifier finds it harder to resolve between classes, so there is a need to partition the feature space and identify subspaces with a reduced number of classes. The basic objective is to divide the feature space into subspaces, and hence the data set containing representative samples of the n classes into subdata sets corresponding to those subspaces, so that some of the subdata sets/spaces contain data belonging to only p classes (p < n). The GP classifier is then evolved independently for each subdata set/space. This results in localized learning, as the GP classifier has to learn the data distribution in only a subspace of the feature space rather than in the entire feature space. By integrating the GP classifier with feature space partitioning (FSP), we improve classification accuracy through localized learning. Although serial computers have increased steadily in performance, parallel implementation remains of interest for any computationally intensive task, since it leads to faster execution than a serial implementation. As fitness evaluation, the selection strategy and population structures are used to evolve a solution in GP, there is scope for a parallel implementation of the GP classifier. We have studied distributed GP and massively parallel GP for our approach to GP-based multicategory pattern classification, and we present experimental results for distributed GP with the Message Passing Interface on an IBM SP2 to highlight the speedup that can be achieved over the serial implementation of GP.
We also show how data parallelism can be used to further speed up fitness evaluation and hence the execution of the GP paradigm for multicategory pattern classification. We conclude that GP can be applied to n-category pattern classification, and that its potential lies in its simplicity and its scope for parallel implementation. The GP classifier developed in this thesis can be regarded as an addition to the earlier statistical, neural and fuzzy approaches to multicategory pattern classification.
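The overall decision procedure, n thresholded GPCEs with strength-of-association conflict resolution, can be sketched as follows (illustrative scaffolding only: the evolved discriminants are LISP S-expressions in the thesis, whereas the linear stand-ins and strength values below are hypothetical):

```python
def classify(x, gpces, strengths):
    """n-class decision from n binary GPCEs: each GPCE claims the sample if
    its expression thresholds to +1; among claiming GPCEs, the one with the
    highest strength of association wins.
    gpces: list of callables x -> real; strengths: list of floats."""
    claims = [j for j, g in enumerate(gpces) if g(x) > 0.0]  # threshold at 0
    if not claims:
        return None  # rejected by every GPCE
    return max(claims, key=lambda j: strengths[j])

# Hypothetical 3-class example with linear stand-in discriminants:
gpces = [lambda x: 1.0 - x, lambda x: x - 0.5, lambda x: x - 2.0]
strengths = [0.9, 0.7, 0.8]
winner = classify(0.8, gpces, strengths)  # GPCEs 0 and 1 both claim; 0 wins
```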
134

EVOLVING CONTACT NETWORKS TO ANALYZE EPIDEMIC BEHAVIOUR AND STUDYING THE EFFECTS OF VACCINATION

Shiller, Elisabeth 09 January 2013 (has links)
Epidemic models help researchers understand and predict the nature of a potential epidemic. This study analyzes and improves network evolution technology that evolves contact networks so that simulated epidemics on a network mimic a specified epidemic pattern. The evolutionary algorithm incorporates the novel recentering-restarting algorithm, which is adopted into the optimizer to allow efficient search of the space of networks, and implements the toggle-delete representation, which allows a broader search of the solution space. A diffusion-character-based method is then used to analyze the contact networks. A comparison of simulated epidemics that result from changing patient zero for a single contact network is performed; it is found that the location of patient zero is important for the behaviour of an epidemic. The social fabric representation is introduced and then tested for parameter choices. The response to vaccination strategies (including ring vaccination) is then tested by incorporating them into the epidemic simulations. / Ontario Graduate Scholarship (OGS), Natural Sciences and Engineering Research Council of Canada (NSERC)
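The kind of epidemic simulation used to score an evolved contact network, and the patient-zero comparison, can be sketched as a simple SIR process on a graph (illustrative only: the adjacency, the transmission probability and the one-step infectious period are assumptions of this sketch, not the thesis's simulator):

```python
import random

def simulate_sir(adj, patient_zero, p_infect=0.3, seed=0):
    """One SIR epidemic on a contact network: each time step, every infected
    node infects each susceptible neighbour with probability p_infect, then
    recovers.  Returns the epidemic profile (new infections per time step).
    adj: dict node -> list of neighbour nodes."""
    rng = random.Random(seed)
    state = {v: 'S' for v in adj}
    state[patient_zero] = 'I'
    profile = [1]
    while 'I' in state.values():
        new = []
        for v, s in state.items():
            if s == 'I':
                for u in adj[v]:
                    if state[u] == 'S' and rng.random() < p_infect:
                        new.append(u)
        for v, s in list(state.items()):
            if s == 'I':
                state[v] = 'R'   # infectious for exactly one step
        for u in new:
            state[u] = 'I'
        profile.append(len(set(new)))
    return profile
```

On a three-node line network with certain transmission, starting the epidemic at an end node gives the profile [1, 1, 1, 0], while starting at the centre gives [1, 2, 0]: a tiny instance of the patient-zero effect reported above.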
135

Exploring conceptual geodynamic models : numerical method and application to tectonics and fluid flow

Wijns, Christopher P. January 2005 (has links)
Geodynamic modelling, via computer simulations, offers an easily controllable method for investigating the behaviour of an Earth system and providing feedback to conceptual models of geological evolution. However, most available computer codes have been developed for engineering or hydrological applications, where strains are small and post-failure deformation is not studied. Such codes cannot simultaneously model large deformation and porous fluid flow. To remedy this situation in the face of tectonic modelling, a numerical approach was developed to incorporate porous fluid flow into an existing high-deformation code called Ellipsis. The resulting software, with these twin capabilities, simulates the evolution of highly deformed tectonic regimes where fluid flow is important, such as in mineral provinces. A realistic description of deformation depends on the accurate characterisation of material properties and the laws governing material behaviour. Aside from the development of appropriate physics, it can be a difficult task to find a set of model parameters, including material properties and initial geometries, that can reproduce some conceptual target. In this context, an interactive system for the rapid exploration of model parameter space, and for the evaluation of all model results, replaces the traditional but time-consuming approach of finding a result via trial and error. The visualisation of all solutions in such a search of parameter space, through simple graphical tools, adds a new degree of understanding to the effects of variations in the parameters, the importance of each parameter in controlling a solution, and the degree of coverage of the parameter space. Two final applications of the software code and interactive parameter search illustrate the power of numerical modelling within the feedback loop to field observations. 
In the first example, vertical rheological contrasts between the upper and lower crust, most easily related to thermal profiles and mineralogy, exert a greater control over the mode of crustal extension than any other parameters. A weak lower crust promotes large fault spacing with high displacements, often overriding initial close fault spacing, to lead eventually to metamorphic core complex formation. In the second case, specifically tied to the history of compressional orogenies in northern Nevada, exploration of model parameters shows that the natural reactivation of early normal faults in the Proterozoic basement, regardless of basement topography or rheological contrasts, would explain the subsequent elevation and gravitationally-induced thrusting of sedimentary layers over the Carlin gold trend, providing pathways and ponding sites for mineral-bearing fluids.
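The batch analogue of such a parameter-space exploration can be sketched as a grid sweep that keeps every result for side-by-side evaluation, rather than trial-and-error one run at a time (the parameter names, values and stand-in model below are invented for illustration; the thesis's system is interactive and visual):

```python
import itertools

def sweep(model, param_grid):
    """Run a conceptual model at every point of a parameter grid and keep
    all outcomes for joint inspection.
    model: callable(**params) -> outcome label;
    param_grid: dict name -> list of values."""
    names = list(param_grid)
    results = {}
    for combo in itertools.product(*(param_grid[n] for n in names)):
        results[combo] = model(**dict(zip(names, combo)))
    return results

# Toy stand-in model: a core complex forms only when the lower crust is weak.
grid = {'lower_crust_strength': [1, 5, 10], 'fault_spacing_km': [5, 20]}
out = sweep(lambda lower_crust_strength, fault_spacing_km:
            'core complex' if lower_crust_strength <= 1 else 'distributed',
            grid)
```

Inspecting `out` as a whole shows at a glance which parameter controls the outcome, the point made above about visualising all solutions of a search of parameter space.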
136

Evolutionary Control of Autonomous Underwater Vehicles

Smart, Royce Raymond, roycesmart@hotmail.com January 2009 (has links)
The goal of Evolutionary Robotics (ER) is the development of automatic processes for the synthesis of robot control systems using evolutionary computation. The idea that it may be possible to synthesise robotic control systems using an automatic design process is appealing; however, ER is considerably more challenging and less automatic than its advocates would suggest. ER applies methods from the field of neuroevolution to evolve robot control systems. Neuroevolution is a machine learning technique that applies evolutionary computation to the design of Artificial Neural Networks (ANN). The aim of this thesis is to assay the practical characteristics of neuroevolution by performing bulk experiments on a set of Reinforcement Learning (RL) problems. This thesis was conducted with a view to applying neuroevolution to the design of neurocontrollers for small low-cost Autonomous Underwater Vehicles (AUV). A general approach to neuroevolution for RL problems is presented. The evolutionary algorithm is selected to evolve ANN connection weights on the basis that it has shown competitive performance on continuous optimisation problems, is self-adaptive and can exploit dependencies between connection weights. Practical implementation issues are identified and discussed. A series of experiments is conducted on RL problems that are representative of the AUV domain but manageable in terms of problem complexity and the computational resources required. Results from these experiments are analysed to draw out practical characteristics of neuroevolution. Bulk experiments are conducted using the inverted pendulum problem, a popular control benchmark that is inherently unstable, underactuated and non-linear: characteristics common to underwater vehicles. Two practical characteristics of neuroevolution are demonstrated: the importance of using randomly generated evaluation sets, and the effect of evaluation noise on search performance.
As part of these experiments, deficiencies in the benchmark are identified and modifications suggested. The problem of an underwater vehicle travelling to a goal in an obstacle-free environment is studied. The vehicle is modelled as a Dubins car, a simplified model of the high-level kinematics of a torpedo-class underwater vehicle. Two further practical characteristics of neuroevolution are demonstrated: the importance of domain knowledge when formulating ANN inputs, and how the fitness function defines the set of evolvable control policies. Paths generated by the evolved neurocontrollers are compared with known optimal solutions. A framework is presented to guide the practical application of neuroevolution to RL problems, covering a range of issues identified during the experiments conducted in this thesis. An assessment of neuroevolution concludes that it is far from automatic yet still has potential as a technique for solving reinforcement learning problems, although further research is required to better understand the process of evolutionary learning. The major contribution made by this thesis is a rigorous empirical study of the practical characteristics of neuroevolution as applied to RL problems. A critical, yet constructive, viewpoint is taken of neuroevolution. This viewpoint differs from much of the research undertaken in this field, which is often unjustifiably optimistic and tends to gloss over difficult practical issues.
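The basic neuroevolution loop, evolutionary search over a flat vector of ANN connection weights against a fitness evaluated on (possibly noisy) episodes, can be sketched as follows (a simple elitist hill-climbing ES for illustration, not the self-adaptive strategy the thesis selects; the toy fitness below is invented):

```python
import random

def es_neuroevolution(fitness, dim, generations=100, offspring=20, sigma=0.3, seed=0):
    """(1+lambda)-style evolution of a flat weight vector.  fitness takes
    the weight vector plus an rng, so each evaluation can use a freshly
    generated evaluation set, echoing the finding above that randomly
    generated evaluation sets matter."""
    rng = random.Random(seed)
    best = [rng.gauss(0, 1) for _ in range(dim)]
    best_f = fitness(best, rng)
    for _ in range(generations):
        for _ in range(offspring):
            child = [w + rng.gauss(0, sigma) for w in best]
            f = fitness(child, rng)
            if f > best_f:
                best, best_f = child, f
    return best, best_f

# Toy stand-in for an episodic RL evaluation: negative squared error to a
# hidden target weight vector (the rng argument is available for noise).
target = [0.5, -0.3]
fit = lambda w, rng: -sum((wi - ti) ** 2 for wi, ti in zip(w, target))
best_w, best_f = es_neuroevolution(fit, dim=2)
```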
137

Emergence and evolution of computational habitats

Tulai, Alexander F. January 1900 (has links)
Thesis (Ph. D.)--Carleton University, 2004. / Includes bibliographical references (p. 189-201) and index. Also available in electronic format on the Internet.
138

Nonparametric evolutionary clustering

Xu, Tianbing. January 2009 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Computer Science, 2009. / Includes bibliographical references.
139

Geração e Simplificação da Base de Conhecimento de um Sistema Híbrido Fuzzy-Genético. / Generation and Simplification of the Knowledge Base of a Hybrid Fuzzy-Genetic System.

Leandro da Costa Moraes Leite 17 December 2009 (has links)
Generation and Simplification of the Knowledge Base of a Hybrid Fuzzy-Genetic System proposes a methodology for developing the knowledge base of fuzzy systems using evolutionary computation techniques. The evolved fuzzy systems are evaluated against two distinct criteria: performance and interpretability. A fuzzy-logic-based methodology for multiobjective problem analysis was also developed for this purpose and incorporated into the fitness evaluation process of the GAs. The evolved fuzzy systems were assessed through computational simulations, and the results were compared with those obtained by other methods in different types of applications. The proposed methodology demonstrated that the evolved fuzzy systems combine good performance with good interpretability of their knowledge base, making them viable for use in the design of real-world systems.
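The two-criterion evaluation described above can be sketched with a GA over candidate rule bases (a sketch under strong assumptions: rule bases are encoded here simply as bit masks over a pool of candidate rules, interpretability is reduced to rule count, and the objectives are aggregated by a plain weighted sum, whereas the thesis aggregates them with its fuzzy-logic-based multiobjective analysis):

```python
import random

def evolve_rulebase(evaluate_accuracy, max_rules=12, generations=80, seed=0):
    """GA sketch: score each candidate fuzzy rule base on performance and on
    interpretability (fewer active rules), then evolve with elitism and
    per-bit mutation.  evaluate_accuracy: callable(mask) -> [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(max_rules)] for _ in range(30)]

    def score(mask):
        perf = evaluate_accuracy(mask)
        interp = 1.0 - sum(mask) / max_rules  # fewer rules -> more interpretable
        return 0.7 * perf + 0.3 * interp      # weights are arbitrary choices

    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        elite = pop[:10]
        pop = elite + [[b if rng.random() > 0.1 else not b
                        for b in rng.choice(elite)]
                       for _ in range(20)]
    return max(pop, key=score)

# Toy accuracy model: rules 0-3 are useful, the rest add nothing.
best = evolve_rulebase(lambda m: sum(m[:4]) / 4.0)
```

The interpretability term steers the search toward keeping the four useful rules while pruning the superfluous ones, the performance/interpretability trade-off the abstract describes.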
140

Computação evolucionária para indução de regras de autômatos celulares multidimensionais / Evolutionary computation for rule induction in multidimensional cellular automata

Weinert, Wagner Rodrigo 10 2011 (has links)
A cellular automaton is a discrete dynamic system that evolves through the iteration of rules: the values of the system's variables change as a function of their current values. Cellular automata can be applied to solve a variety of problems, and the task of finding a transition rule that solves a given problem can be generalized as a problem of rule induction for cellular automata. Several approaches based on evolutionary computation techniques have been applied to this problem; however, they are restricted to specific applications, and no generic methodology exists that can be applied to a wide range of problems. The main contribution of this work is a generic methodology for rule induction for cellular automata. To achieve this objective, the research was carried out in four steps. In the first step we evaluated the performance of several dynamic behavior forecasting parameters calculated as a function of the transition rule. The results indicated that these parameters must be used with care, since they may yield valid but unsatisfactory solutions; they also require reference values that, for most real problems, are not available. In the second step we proposed a new method for forecasting dynamic behavior that considers both the transition rule and the initial configuration of the cellular automaton, using the qualitative dynamic behavior classes described by Wolfram as reference. The method proved efficient for null-behavior rules. Since simulating the dynamics of a system can have a high computational cost, in the third step we developed an architecture based on the concept of hardware/software co-design to reduce processing time. This architecture evolves cellular automata using reconfigurable logic and decreased processing time by a factor of several hundred, but restrictions of the model, such as the limited number of logic cells and the need for hardware reprogramming, made its use impractical. Considering these restrictions, in the fourth step we developed a new parallel architecture based on the master-slave paradigm, in which a master process runs the evolutionary algorithm and a set of slave processes divides the task of validating the obtained rules. The system runs on a cluster of 120 processing cores connected by an Ethernet network. A co-evolutionary strategy based on an island model enabled the search for solutions with better fitness values, and the generic system, implemented on this parallel environment, was able to solve the problems addressed. An analysis of task distribution among several processors emphasized the benefits of parallel processing, and the experiments also indicated a set of reference evolutionary parameters that can be used to configure the system. The contributions of this work are both theoretical, through the evaluation of the forecasting parameters and the different methods for predicting dynamic behavior, and methodological, through the development of two distinct processing architectures.
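Rule induction for cellular automata can be shown in miniature with a one-dimensional binary automaton and a (1+1) evolutionary hill climber (a deliberately tiny instance of the generic induction problem described above, not the thesis's parallel co-evolutionary system; the hidden rule and example sizes are chosen for illustration):

```python
import random

def step(cells, rule):
    """One synchronous update of a binary 1-D cellular automaton with
    periodic boundaries; rule is an 8-entry lookup table indexed by the
    (left, centre, right) neighbourhood."""
    n = len(cells)
    return [rule[4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]]
            for i in range(n)]

def induce_rule(examples, generations=500, seed=0):
    """Search the 256 elementary rules for one whose single-step behaviour
    matches the given (before, after) example pairs, scoring candidates by
    per-cell agreement."""
    rng = random.Random(seed)

    def fitness(rule):
        return sum(x == y for a, b in examples
                   for x, y in zip(step(a, rule), b))

    best = [rng.randint(0, 1) for _ in range(8)]
    for _ in range(generations):
        child = [b ^ (rng.random() < 0.2) for b in best]  # per-bit mutation
        if fitness(child) >= fitness(best):
            best = child
    return best

# Recover a hidden rule (elementary rule 110) from observed transitions.
hidden = [0, 1, 1, 1, 0, 1, 1, 0]
gen = random.Random(1)
configs = [[gen.randint(0, 1) for _ in range(16)] for _ in range(5)]
examples = [(a, step(a, hidden)) for a in configs]
learned = induce_rule(examples)
```

The induced table reproduces every observed transition; scaling this idea to multidimensional automata and expensive fitness evaluations is what motivates the parallel master-slave architecture described above.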
