231 |
A contribution towards real-time forecasting of algal blooms in drinking water reservoirs by means of artificial neural networks and evolutionary algorithms. Welk, Amber Lee January 2008 (has links)
Historical water quality databases from two South Australian drinking water reservoirs were used, in conjunction with various computational modelling methods, for the ordination, clustering and forecasting of complex ecological data. The techniques used throughout the study were: Kohonen artificial neural networks (KANN) for data categorisation and the discovery of patterns and relationships; recurrent supervised artificial neural networks (RANN) for knowledge discovery and forecasting of algal dynamics; and hybrid evolutionary algorithms (HEA) for rule-set discovery and optimisation for forecasting algal dynamics. These methods were combined to provide an integrated approach to the analysis of algal populations, including interactions within the algal community and with other water quality factors, resulting in improved understanding and forecasting of algal dynamics. The project initially focussed on KANN for the patternising and classification of the historical data to reveal links between the physical, chemical and biological components of the reservoirs. This offered some understanding of the system and of the relationships being considered for the construction of the forecasting models. Specific investigations were performed to examine past conditions and the impacts of different management regimes, as well as to discover sets of conditions that correspond with specific algal functional groups. RANN were then used to build models for forecasting both Chl-a and the main nuisance species, Anabaena, up to 7 days in advance. This method also provided sensitivity analyses to demonstrate the relationship between input and output variables by plotting the reaction of the output to variations in the inputs. Initially one year from the data set was selected for testing a model, as per the split-sample technique. To further test the models, several years were later selected for testing to ensure the models were useful under changed conditions and that test results were not misleading regarding the models' true capabilities. RANN were first used to create reservoir-specific, or ad hoc, models. Later, the models were trained with the merged data sets of both reservoirs to create one model that could be applied to either reservoir. Another method of forecasting, HEA, was then trialled and compared to RANN. HEA was found to be equal or superior to RANN in predictive power, also allowed sensitivity analysis, and provided an explicit, portable rule set. The HEA rule sets were initially tested on selected years of data; however, to fully demonstrate the models' potential, a process for k-fold cross-validation was developed to test the rule set on all years of data. To further extend the applicability of the HEA rule set, the idea of rule-based agents for specific lake ecosystem categories was examined. The generality of a rule-based agent means that, after successful validation on several lakes from one category, the agent could be applied to other water bodies from within that category that had not been involved in the training process. The ultimate test of the rule-based agent for the warm monomictic and eutrophic lake ecosystem category was its application to a real-time monitoring and forecasting situation. The agent was fed with online, real-time data from a reservoir that belonged to the same ecosystem category but was not used in the training process. These preliminary experiments showed promising results.
It can be concluded that the concept of rule-based agents will facilitate real-time forecasting of algal blooms in drinking water reservoirs, provided that on-line monitoring of the relevant variables has been implemented. Contributions of this research include: (1) insight into the capabilities of three kinds of computational modelling techniques applied to complex water quality data; (2) novel applications of KANN, including the division of data into separate management periods for comparison of management efficiency; (3) qualitative and quantitative elucidation of relationships between water quality parameters; (4) research toward the development of a forecasting tool for algal abundance 7 days in advance that could be generic for a particular lake ecosystem category and implemented in real time; and (5) a thorough testing method for such models (k-fold cross-validation). / http://proxy.library.adelaide.edu.au/login?url=http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1331584 / Thesis (Ph.D.) -- University of Adelaide, School of Earth and Environmental Sciences, 2008
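The year-by-year k-fold testing described above can be sketched briefly. This is not the thesis code: the pandas DataFrame layout, the 'chl_a' and 'year' column names, the 7-day target shift and the scikit-learn regressor are all illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor  # stand-in for any regressor

    def make_model():
        # Any scikit-learn-style regressor could be used here.
        return RandomForestRegressor(n_estimators=200, random_state=0)

    def year_fold_validation(df, feature_cols, target_col="chl_a", horizon=7):
        # df: pandas DataFrame of daily water-quality records with 'date' and 'year' columns.
        # One fold per year: train on all other years, test on the held-out year.
        df = df.sort_values("date")
        # Shift the target 7 days ahead so the model forecasts one week in advance.
        df = df.assign(y_future=df[target_col].shift(-horizon)).dropna(subset=["y_future"])
        scores = {}
        for test_year in sorted(df["year"].unique()):
            train, test = df[df["year"] != test_year], df[df["year"] == test_year]
            model = make_model().fit(train[feature_cols], train["y_future"])
            pred = model.predict(test[feature_cols])
            scores[test_year] = float(np.sqrt(np.mean((pred - test["y_future"]) ** 2)))
        return scores  # RMSE per held-out year, as in the year-wise k-fold scheme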
|
232 |
A neuro-evolutionary multiagent approach to multi-linked inverted pendulum control. Sills, Stephen 29 May 2012 (has links)
Recent work has shown that humanoid robots with spinal columns, instead of rigid torsos, benefit from both better balance and an increased ability to absorb external impact. Similarly, snake robots have shown promise as a viable option for exploration in confined spaces with limited human access, such as during power plant maintenance. Both spines and snakes, as well as hyper-redundant manipulators, can be simplified to a model of a system with multiple links. The multi-link inverted pendulum is a well-known benchmark problem in control systems due to its ability to accommodate varying model complexity. Such a system is useful for testing new learning algorithms or laying the foundation for autonomous control of more complex devices, such as robotic spines and multi-segmented arms, which currently use traditional control methods or are operated by humans. It is often easy to view these systems as single-agent learners due to the high level of interaction among the segments. However, as the number of links in the system increases, the system becomes harder to control.
This work replaces the centralized learner with a team of coevolved agents. The use of a multiagent approach allows for control of larger systems. The addition of transfer learning not only increases the learning rate, but also enables the training of larger teams which were previously infeasible due to extended training times.
The results presented support these claims by examining neuro-evolutionary control of 3-, 6-, and 12-link systems with nominal conditions as well as with sensor noise, actuator noise, and the addition of more complex physics. / Graduation date: 2012
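A minimal sketch of the multiagent idea summarised above: one small neuro-controller per link, with the team sharing a single fitness score from the pendulum simulation. The simulator hook, network sizes and mutation scheme are illustrative placeholders, not the thesis implementation.

    import numpy as np

    class LinkController:
        # A tiny two-layer network controlling one link.
        def __init__(self, n_inputs, n_hidden=8, rng=None):
            rng = rng or np.random.default_rng()
            self.w1 = rng.normal(0.0, 0.5, (n_inputs, n_hidden))
            self.w2 = rng.normal(0.0, 0.5, (n_hidden, 1))

        def act(self, obs):
            # Maps the link's local observation to a torque command in [-1, 1].
            return float(np.tanh(np.tanh(obs @ self.w1) @ self.w2))

        def mutate(self, sigma=0.05, rng=None):
            rng = rng or np.random.default_rng()
            child = LinkController(self.w1.shape[0], self.w1.shape[1], rng)
            child.w1 = self.w1 + rng.normal(0.0, sigma, self.w1.shape)
            child.w2 = self.w2 + rng.normal(0.0, sigma, self.w2.shape)
            return child

    def evaluate_team(team, simulate_episode):
        # simulate_episode(policies) is an assumed hook that runs the multi-link
        # pendulum with one policy per link and returns, e.g., the time the
        # assembly stayed upright; the shared score is used when coevolving
        # each link's own population.
        return simulate_episode([c.act for c in team])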
|
233 |
Design and implementation of a multi-criteria hybrid method for the classification of biological data using evolutionary algorithms and neural networks. Σκρεπετός, Δημήτριος 09 October 2014 (has links)
Hard classification problems in bioinformatics, such as the prediction of microRNA genes and the prediction of protein-protein interactions (PPIs), demand powerful classifiers that achieve good prediction accuracy, handle missing values, are interpretable, and do not suffer from the class-imbalance problem. One widely used classifier is the neural network, which, however, requires specification of its architecture and other parameters, while its training algorithms usually converge to local minima. For these reasons, we propose a multi-objective evolutionary method that uses evolutionary algorithms to optimise many of the aforementioned performance criteria of a neural network, and also to find an optimised architecture and a global minimum for its synaptic weights. The resulting population is then used in its entirety as an ensemble classifier to perform the classification.
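A hedged sketch of the final step described above, assuming the evolutionary run has already produced a population of trained networks exposing a predict() method returning non-negative integer class labels; the Pareto filter and voting scheme are illustrative, not the thesis code.

    import numpy as np

    def pareto_front(scores):
        # Indices of networks not dominated on a set of maximised objectives
        # (e.g. accuracy, interpretability, balanced class performance).
        scores = np.asarray(scores, dtype=float)
        front = []
        for i in range(len(scores)):
            dominated = any(
                np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i])
                for j in range(len(scores)) if j != i
            )
            if not dominated:
                front.append(i)
        return front

    def ensemble_predict(population, X):
        # Every evolved network votes; the majority label wins per sample.
        votes = np.stack([net.predict(X) for net in population])  # (n_nets, n_samples)
        return np.array([np.bincount(votes[:, i].astype(int)).argmax()
                         for i in range(votes.shape[1])])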
|
234 |
Memetic and evolutionary algorithms in numerical optimization and nonlinear dynamics. Πεταλάς, Ιωάννης 18 September 2008 (has links)
The main objective of the thesis was the study of evolutionary algorithms. In the first part, memetic algorithms are introduced. Memetic algorithms are hybrid schemes that combine evolutionary algorithms with local search methods. The memetic algorithms were compared to evolutionary algorithms on a variety of global optimization problems and showed better performance. In the second part, problems from nonlinear dynamics were studied: the estimation of the stability region of conservative maps, the detection of resonances, and the computation of periodic orbits. The results were satisfactory.
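A minimal sketch of the memetic scheme described above (an evolutionary loop with a short local search applied to each offspring); the objective handling, step sizes and population settings are illustrative only.

    import numpy as np

    def local_search(x, f, step=0.1, iters=20, rng=None):
        # Simple stochastic hill climbing around a candidate solution.
        rng = rng or np.random.default_rng()
        best, best_f = x, f(x)
        for _ in range(iters):
            cand = best + rng.normal(0.0, step, size=best.shape)
            if (cf := f(cand)) < best_f:
                best, best_f = cand, cf
        return best

    def memetic_optimize(f, dim, pop_size=30, generations=200, rng=None):
        rng = rng or np.random.default_rng()
        pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
        for _ in range(generations):
            fitness = np.array([f(x) for x in pop])
            parents = pop[np.argsort(fitness)[: pop_size // 2]]       # truncation selection
            children = parents + rng.normal(0.0, 0.3, parents.shape)  # Gaussian mutation
            # The "memetic" step: refine each offspring with local search.
            children = np.array([local_search(c, f, rng=rng) for c in children])
            pop = np.vstack([parents, children])
        return pop[np.argmin([f(x) for x in pop])]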
|
235 |
Surrogate-Assisted Evolutionary Algorithms. Loshchilov, Ilya 08 January 2013 (links) (PDF)
Evolutionary Algorithms (EAs) have been widely studied for their ability to solve complex optimisation problems using variation operators tailored to specific problems. A search driven by a population of solutions offers good robustness with respect to moderate noise and to multi-modality of the optimised function, in contrast to other classical optimisation methods such as quasi-Newton methods. The main limitation of EAs, the large number of objective-function evaluations, nevertheless penalises their use for optimising functions that are expensive to evaluate. This thesis focuses on one evolutionary algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), known as a powerful algorithm for black-box continuous optimisation. We present the state of the art of algorithms derived from CMA-ES for solving single- and multi-objective optimisation problems in the black-box scenario. A first contribution, aimed at the optimisation of expensive functions, concerns scalar approximation of the objective function. The learned meta-model respects the ordering of solutions (induced by their objective values); it is therefore invariant under monotonic transformations of the objective function. The resulting algorithm, saACM-ES, tightly couples the optimisation performed by CMA-ES with the statistical learning of adaptive meta-models; in particular, the meta-models rely on the covariance matrix adapted by CMA-ES. saACM-ES thus preserves the two key invariance properties of CMA-ES: invariance (i) with respect to monotonic transformations of the objective function, and (ii) with respect to orthogonal transformations of the search space. The approach is extended to multi-objective optimisation by proposing two types of (scalar) meta-models. The first rests on a characterisation of the current Pareto front (using a mixed variant of One-Class Support Vector Machines for dominated points and Regression SVMs for non-dominated points). The second rests on learning the ordering (Pareto rank) of solutions. Both approaches are integrated into CMA-ES for multi-objective optimisation (MO-CMA-ES), and we discuss several aspects of exploiting meta-models in the multi-objective setting. A second contribution concerns the design of new algorithms for single-objective, multi-objective and multi-modal optimisation, developed to understand, explore and push the boundaries of evolutionary algorithms, and of CMA-ES in particular. Specifically, the coordinate-system adaptation proposed by CMA-ES is coupled with an adaptive coordinate-descent method. An adaptive restart strategy for CMA-ES is proposed for multi-modal optimisation. Finally, selection strategies suited to multi-objective optimisation and remedying difficulties encountered by MO-CMA-ES are proposed.
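A generic, hedged illustration of the surrogate-assisted idea discussed above: a cheap model pre-ranks candidates so that only the most promising are evaluated on the expensive objective. This is not saACM-ES itself, which couples a rank-based surrogate with CMA-ES; the regressor choice, array layout and budget here are assumptions for illustration.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def surrogate_assisted_step(f_expensive, archive_X, archive_y, candidates, budget=5):
        # archive_X, archive_y: points already evaluated on the expensive objective.
        # candidates: 2-D array of new candidate solutions (e.g. an EA offspring batch).
        surrogate = GradientBoostingRegressor().fit(archive_X, archive_y)
        # Rank candidates by predicted objective value (minimisation) and evaluate
        # only the few the surrogate ranks best, preserving the evaluation budget.
        order = np.argsort(surrogate.predict(candidates))
        chosen = candidates[order[:budget]]
        new_y = np.array([f_expensive(x) for x in chosen])
        # Return the enlarged archive, ready for the next generation's surrogate fit.
        return np.vstack([archive_X, chosen]), np.concatenate([archive_y, new_y])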
|
236 |
A Comparative Study Of Tree Encodings For Evolutionary Computing. Saka, Esin 01 July 2005 (links) (PDF)
One of the most important factors in the success of evolutionary algorithms (EAs) on trees is their representation. The representation should exhibit efficiency, locality and heritability to enable effective evolutionary computing. Neville proposed three different methods for encoding labeled trees. The first is similar to Prüfer's encoding. In 2001 it was reported that Prüfer numbers are a poor representation of spanning trees for evolutionary search, since they have low locality for random trees. In this thesis Neville's other two encodings, namely Neville branch numbers and Neville leaf numbers, are studied. Their properties and the algorithms for encoding and decoding them are also examined with respect to their performance in EAs. Optimal algorithms with time and space complexity O(n), where n is the number of nodes, are given for encoding and decoding Neville branch numbers. The localities of Neville's encodings are investigated. It is shown that, although the locality of Neville branch and leaf numbers is perfect for star-type trees, it is low for random trees. Neville branch and Neville leaf numbers are compared with other encodings in EAs and SA on four problems: the 'onemax tree problem', the 'degree-constrained minimum spanning tree problem', the 'all spanning trees problem' and the 'all degree-constrained spanning trees problem'. It is shown that neither Neville nor Prüfer encodings are suitable for EAs; these encodings are suitable only for tree enumeration and degree computation. Algorithms that are optimal in time and space for the 'all spanning trees problem' (ASTP) on complete graphs are given using the Neville branch encoding. The computed time and space complexities for solving the ASTP of complete graphs are O(n^(n-2)) and O(n) if trees are only enumerated, and O(n^(n-1)) and O(n) if all spanning trees are printed, respectively, where n is the number of nodes. Similarly, the 'all degree-constrained spanning trees problem' of a complete graph is solvable in O(n^(n-1)) time and O(n) space.
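For concreteness, a hedged sketch of the classical Prüfer encoding and decoding of labeled trees referred to above. Neville's branch and leaf numbers are related but distinct schemes, and the thesis's O(n) algorithms are not reproduced here; this heap-based sketch runs in O(n log n).

    import heapq

    def prufer_encode(edges):
        # edges: list of (u, v) pairs over labels 0..n-1 describing a labeled tree.
        n = len(edges) + 1
        adj = {i: set() for i in range(n)}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        leaves = [i for i in range(n) if len(adj[i]) == 1]
        heapq.heapify(leaves)
        seq = []
        for _ in range(n - 2):
            leaf = heapq.heappop(leaves)        # smallest-labelled leaf
            neighbour = adj[leaf].pop()
            seq.append(neighbour)
            adj[neighbour].discard(leaf)
            if len(adj[neighbour]) == 1:
                heapq.heappush(leaves, neighbour)
        return seq                               # n - 2 numbers identify the tree

    def prufer_decode(seq):
        n = len(seq) + 2
        degree = [1] * n
        for s in seq:
            degree[s] += 1
        leaves = [i for i in range(n) if degree[i] == 1]
        heapq.heapify(leaves)
        edges = []
        for s in seq:
            leaf = heapq.heappop(leaves)
            edges.append((leaf, s))
            degree[s] -= 1
            if degree[s] == 1:
                heapq.heappush(leaves, s)
        edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
        return edges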
|
237 |
Optimum Design Of Pin-jointed 3-D Dome Structures Using Global Optimization Techniques. Sarac, Yavuz 01 November 2005 (links) (PDF)
Difficult gradient calculations, convergence to a local optimum without adequately exploring the design space, excessive dependence on the starting solution, and a lack of capabilities to treat discrete and mixed design variables are the main drawbacks of conventional optimization techniques. Consequently, evolutionary optimization methods have received significant interest amongst researchers in the optimization area. Genetic algorithms (GAs) and simulated annealing (SA) are the main representatives of evolutionary optimization methods. These techniques emerged as powerful and modern strategies to deal efficiently with the difficulties encountered in conventional techniques, and have therefore rightly attracted substantial interest and popularity. The underlying concepts of these techniques, and thus their algorithmic models, have been devised by establishing analogies between the optimization task and events occurring in nature: Darwin's survival-of-the-fittest theory is mimicked by GAs, while the annealing process of physical systems is employed in SA.
On the other hand, dome structures are among the most preferred types of structures for covering large unobstructed areas. Domes are of special interest in the sense that they enclose a maximum amount of space with a minimum surface, a feature that provides economy in terms of the consumption of constructional materials. Merging these two concepts therefore makes it possible to obtain optimum designs of dome structures.
This thesis is concerned with the use of GAs and SA in the optimum structural design of dome structures, ranging from relatively simple problems to problems of increased complexity. Firstly, both techniques are investigated in terms of their practicality and applicability to the problems of interest. Then numerous test problems taken from real-life conditions are studied to compare the success of the proposed GA and SA techniques with other discrete and continuous optimization methods. The results are discussed in detail to reach recommendations contributing to a more efficient use of the techniques in the optimum structural design of pin-jointed 3-D dome structures.
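A generic, hedged sketch of the simulated annealing scheme referred to above, applied to discrete member sizing from a section catalogue; the objective function, neighbourhood move and cooling schedule are illustrative, not the thesis formulation.

    import math, random

    def simulated_annealing(objective, n_vars, n_sections, t0=100.0, cooling=0.97, iters=5000):
        # Each design variable is an index into a catalogue of standard sections;
        # objective() is assumed to return structural weight plus constraint penalties.
        current = [random.randrange(n_sections) for _ in range(n_vars)]
        best, best_f = current[:], objective(current)
        current_f, t = best_f, t0
        for _ in range(iters):
            cand = current[:]
            cand[random.randrange(n_vars)] = random.randrange(n_sections)  # perturb one member group
            cand_f = objective(cand)
            # Accept improvements always; accept worse designs with Boltzmann probability.
            if cand_f < current_f or random.random() < math.exp((current_f - cand_f) / t):
                current, current_f = cand, cand_f
                if current_f < best_f:
                    best, best_f = current[:], current_f
            t *= cooling  # geometric cooling of the "temperature"
        return best, best_f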
|
238 |
Evolutionary algorithms and frequent itemset mining for analyzing epileptic oscillations. Smart, Otis Lkuwamy 28 March 2007
This research presents engineering tools that address an important area impacting many persons worldwide: epilepsy. Over 60 million people are affected by epilepsy, a neurological disorder characterized by recurrent seizures that occur suddenly. Surgery and anti-epileptic drugs (AEDs) are common therapies for epilepsy patients. However, only persons with seizures that originate in an unambiguous, focal portion of the brain are candidates for surgery, while AEDs can lead to very adverse side-effects. Although medical devices based upon focal cooling, drug infusion or electrical stimulation are viable alternatives for therapy, a reliable method to automatically pinpoint dysfunctional brain tissue and direct these devices is needed. This research introduces a method to effectively localize epileptic networks, or connectivity between dysfunctional brain regions, to guide where to insert electrodes in the brain for therapeutic devices, surgery, or further investigation. The method uses an evolutionary algorithm (EA) and frequent itemset mining (FIM) to detect and cluster frequent concentrations of epileptic neuronal action potentials within human intracranial electroencephalogram (EEG) recordings. In an experiment applying the method to seven patients with neocortical epilepsy (a total of 35 seizures), the approach reliably identifies the seizure onset zone in six of the subjects (a total of 31 seizures). Hopefully, this research will lead to better control of seizures and an improved quality of life for the millions of persons affected by epilepsy.
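A hedged sketch of the frequent-itemset step described above: each detection window is treated as a transaction listing the channels where concentrated spiking was flagged, and Apriori-style counting finds channel groups that recur together. The transaction encoding, channel names and support threshold are illustrative, not the thesis pipeline.

    from itertools import combinations

    def frequent_itemsets(transactions, min_support=0.3):
        # transactions: list of frozensets of channel labels flagged per window.
        n = len(transactions)
        items = sorted({ch for t in transactions for ch in t})
        frequent, k_sets = {}, [frozenset([i]) for i in items]
        while k_sets:
            counts = {s: sum(1 for t in transactions if s <= t) for s in k_sets}
            level = {s: c / n for s, c in counts.items() if c / n >= min_support}
            frequent.update(level)
            # Candidate generation: unite surviving sets that differ by exactly one item;
            # infrequent candidates are pruned by the support count in the next pass.
            survivors = list(level)
            k_sets = list({a | b for a, b in combinations(survivors, 2)
                           if len(a | b) == len(a) + 1})
        return frequent  # itemset -> support fraction

    # Example with hypothetical channel labels:
    windows = [frozenset({"LT2", "LT3"}), frozenset({"LT2", "LT3", "LF1"}), frozenset({"LT3"})]
    print(frequent_itemsets(windows, min_support=0.6))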
|
239 |
A multi-objective stochastic approach to combinatorial technology space exploration. Patel, Chirag B. 18 May 2009 (links)
Several techniques were studied to select and prioritize technologies for a complex system. Based on the findings, a method called Pareto Optimization and Selection of Technologies (POST) was formulated to efficiently explore the combinatorial technology space. A knapsack problem was selected as a benchmark problem to test-run the various algorithms and techniques of POST. A Monte Carlo simulation using surrogate models was used for uncertainty quantification. Concepts from graph theory were used to model and analyze compatibility constraints among technologies. A probabilistic Pareto optimization, based on the concepts of the Strength Pareto Evolutionary Algorithm II (SPEA2), was formulated for Pareto optimization in an uncertain objective space. As a result, multiple Pareto hyper-surfaces were obtained in a multi-dimensional objective space, each hyper-surface representing a specific probability level. These Pareto layers enabled the probabilistic comparison of various non-dominated technology combinations. POST was implemented on a technology exploration problem for a 300-passenger commercial aircraft. The problem had 29 identified technologies with uncertainties in their impacts on the system; the distributions for these uncertainties were defined using beta distributions. Surrogate system models in the form of Response Surface Equations (RSEs) were used to map the technology impacts onto the system responses. The computational complexity of the technology graph was evaluated, and it was decided to use an evolutionary algorithm for the probabilistic Pareto optimization. The dimensionality of the objective space was reduced using a dominance-structure-preserving approach, and the probabilistic Pareto optimization was implemented with the reduced number of objectives. Most of the technologies were found to be active on the Pareto layers. These layers were exported to a dynamic visualization environment enabled by the statistical analysis and visualization software JMP. The technology combinations on these Pareto layers are explored using various visualization tools and one combination is selected. The main outcome of this research is a method, based on a consistent analytical foundation, to create a dynamic tradeoff environment in which decision makers can interactively explore and select technology combinations.
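A hedged sketch of the probabilistic Pareto layering described above: Monte Carlo draws of the uncertain objectives are scored, each technology combination's Pareto probability is taken as the fraction of draws in which it is non-dominated, and bucketing by probability gives layered fronts. Array shapes and probability levels are illustrative assumptions, not the POST implementation.

    import numpy as np

    def non_dominated_mask(obj):
        # obj: (n_alternatives, n_objectives) array, all objectives to be maximised.
        n = len(obj)
        mask = np.ones(n, dtype=bool)
        for i in range(n):
            dominated = np.any(np.all(obj >= obj[i], axis=1) & np.any(obj > obj[i], axis=1))
            mask[i] = not dominated
        return mask

    def pareto_probability(samples):
        # samples: (n_draws, n_alternatives, n_objectives) Monte Carlo outcomes.
        hits = np.array([non_dominated_mask(draw) for draw in samples])
        return hits.mean(axis=0)   # probability each alternative lies on the front

    def pareto_layers(prob, levels=(0.9, 0.7, 0.5)):
        # Bucket alternatives into layered fronts by their Pareto probability.
        return {p: np.flatnonzero(prob >= p) for p in levels}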
|
240 |
Complex Co-evolutionary Systems Approach to the Management of Sustainable Grasslands - A case study in Mexico. Martinez-Garcia, Alejandro Nicolas Unknown Date (has links)
The complex co-evolutionary systems approach (CCeSA) provides a well-suited framework for analysing agricultural systems, serving as a bridge between the physical and socioeconomic sciences and allowing for the explanation of phenomena and for the use of metaphors for thinking and action. By studying agricultural systems as self-generated, hierarchical, complex co-evolutionary farming systems (CCeFSs), one can investigate the interconnections between the elements that constitute CCeFSs, along with the relationships between CCeFSs and other systems, as a fundamental step towards understanding sustainability as an emergent property of the system. CCeFSs are defined as human activity systems emerging from the purposes, gestalt, mental models, history and weltanschauung of the farm manager, and from his dynamic co-evolution with the environment while managing the resources at hand to achieve his own multiple, dynamic, semi-structured, and often incommensurable and conflicting purposes, while performing above thresholds for failure and retaining enough flexibility to co-evolve dynamically with the system's changing biophysical and socioeconomic environment over a given future period. Fitness and flexibility are essential features of sustainable CCeFSs because they describe the systems' dynamic capacity to explore and exploit their dynamic phase space while co-evolving with it. This implies that a sustainable CCeFS is conceived as a set of dynamic, co-evolutionary processes, contrasting with the standard view of sustainability as an equilibrium or steady state. Achieving sustainable CCeFSs is a semi-structured, constrained, multi-objective and dynamic optimisation management problem with an intractable search space, which can be solved within CCeSA with the help of a multi-objective co-evolutionary optimisation tool. Carnico-ICSPEA2, a co-evolutionary navigator (CoEvoNav) used as a CCeSA tool for harnessing the complexity of the CCeFS of interest and its environment towards sustainability, is introduced. The software was designed by its end-user, the farm manager and author of this thesis, as an aid for the analysis and optimisation of the San Francisco ranch, a beef cattle enterprise running on temperate pastures and fodder crops in the Central Plateau of Mexico. By combining a non-linear simulator and a multi-objective evolutionary algorithm within a deterministic and stochastic framework, the CoEvoNav imitates the co-evolutionary pattern of the CCeFS of interest. As such, the software was used by the farm manager to navigate through his CCeFS's co-evolutionary phase space towards achieving sustainability at the farm level. The ultimate goal was to enhance the farm manager's decision-making process and co-evolutionary skills through an increased understanding of his system, the co-evolutionary process between his mental models, the CCeFS and the CoEvoNav, and the continuous discovery of new, improved sets of heuristics. An overview of the methodological, theoretical and philosophical framework of the thesis is given, together with a survey of the Mexican economy, its agricultural sector, and a statistical review of the Mexican beef industry. Concepts such as modern agriculture, the reductionist approach to agricultural research, models, the system's environment, sustainability, conventional and sustainable agriculture, complexity, evolution, simulators, and multi-objective optimisation tools are extensively reviewed.
Issues concerning the impossibility of predicting the long-term future behaviour of CCeFSs, along with the use of simulators as decision-support tools in the quest for sustainable CCeFSs, are discussed. The rationale behind the simulator used for this study, along with that of the multi-objective evolutionary tools that make up Carnico-ICSPEA2, is explained. As a case study, the thesis describes the San Francisco ranch, its key on-farm sustainability indicators in the form of objective functions, constraints and decision variables, and the semi-structured, multi-objective, dynamic, constrained management problem posed by the farm manager's planned introduction of a herd of bulls for fattening, as a way to increase the fitness of his CCeFS through better management of the system's feed surpluses, and the acquisition of a new pick-up truck. The tested scenario and the experimental design for the simulations are presented as well. Results from using the CoEvoNav as the farm manager's extended phenotype to solve his multi-objective optimisation problem are described, along with the implications for the management and sustainability of the CCeFS. Finally, the approach and tools developed are evaluated, and the progress made in relation to methodological, theoretical, philosophical and conceptual notions is reviewed, along with some future topics for research.
|