  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Evolutionary Control of Autonomous Underwater Vehicles

Smart, Royce Raymond, roycesmart@hotmail.com January 2009 (has links)
The goal of Evolutionary Robotics (ER) is the development of automatic processes for the synthesis of robot control systems using evolutionary computation. The idea that it may be possible to synthesise robotic control systems using an automatic design process is appealing. However, ER is considerably more challenging and less automatic than its advocates would suggest. ER applies methods from the field of neuroevolution to evolve robot control systems. Neuroevolution is a machine learning technique that applies evolutionary computation to the design of Artificial Neural Networks (ANNs). The aim of this thesis is to assay the practical characteristics of neuroevolution by performing bulk experiments on a set of Reinforcement Learning (RL) problems. This thesis was conducted with a view to applying neuroevolution to the design of neurocontrollers for small low-cost Autonomous Underwater Vehicles (AUVs).

A general approach to neuroevolution for RL problems is presented. The evolutionary algorithm is selected to evolve ANN connection weights on the basis that it has shown competitive performance on continuous optimisation problems, is self-adaptive and can exploit dependencies between connection weights. Practical implementation issues are identified and discussed. A series of experiments is conducted on RL problems that are representative of the AUV domain but manageable in terms of problem complexity and the computational resources required. Results from these experiments are analysed to draw out practical characteristics of neuroevolution.

Bulk experiments are conducted using the inverted pendulum problem. This popular control benchmark is inherently unstable, underactuated and non-linear: characteristics common to underwater vehicles. Two practical characteristics of neuroevolution are demonstrated: the importance of using randomly generated evaluation sets and the effect of evaluation noise on search performance. As part of these experiments, deficiencies in the benchmark are identified and modifications suggested.

The problem of an underwater vehicle travelling to a goal in an obstacle-free environment is then studied. The vehicle is modelled as a Dubins car, a simplified model of the high-level kinematics of a torpedo-class underwater vehicle. Two further practical characteristics of neuroevolution are demonstrated: the importance of domain knowledge when formulating ANN inputs, and how the fitness function defines the set of evolvable control policies. Paths generated by the evolved neurocontrollers are compared with known optimal solutions.

A framework is presented to guide the practical application of neuroevolution to RL problems, covering a range of issues identified during the experiments conducted in this thesis. An assessment of neuroevolution concludes that it is far from automatic yet still has potential as a technique for solving reinforcement learning problems, although further research is required to better understand the process of evolutionary learning. The major contribution of this thesis is a rigorous empirical study of the practical characteristics of neuroevolution as applied to RL problems. A critical, yet constructive, viewpoint is taken of neuroevolution. This viewpoint differs from much of the research undertaken in this field, which is often unjustifiably optimistic and tends to gloss over difficult practical issues.
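The core loop the abstract describes, evolving ANN connection weights against randomly generated evaluation sets on an inverted-pendulum benchmark, can be sketched as follows. This is an illustrative sketch only, not the thesis's implementation: the network shape, the crude cart-pole dynamics, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(w, obs):
    # Tiny feed-forward net: 4 inputs -> 4 tanh hidden units -> 1 output in [-1, 1].
    W1, b1, W2 = w[:16].reshape(4, 4), w[16:20], w[20:24]
    return np.tanh(np.tanh(obs @ W1 + b1) @ W2)

def episode(w, start):
    # Very crude inverted-pendulum-on-a-cart integration; fitness is steps survived.
    x, xdot, th, thdot = start
    for t in range(500):
        f = 10.0 * policy(w, np.array([x, xdot, th, thdot]))
        thacc = 9.8 * np.sin(th) - 0.5 * np.cos(th) * f
        x, xdot = x + 0.02 * xdot, xdot + 0.02 * f
        th, thdot = th + 0.02 * thdot, thdot + 0.02 * thacc
        if abs(th) > 0.7 or abs(x) > 2.4:   # pole fell or cart left the track
            return t
    return 500

def fitness(w, eval_set):
    # Average over a whole evaluation set rather than a single fixed start state.
    return np.mean([episode(w, s) for s in eval_set])

mu, lam, sigma = 5, 20, 0.3
pop = [rng.normal(0.0, 1.0, 24) for _ in range(lam)]
for gen in range(5):
    # A fresh, randomly generated evaluation set each generation -- the
    # practice the thesis argues is important for reliable evaluation.
    eval_set = rng.uniform(-0.05, 0.05, size=(5, 4))
    parents = sorted(pop, key=lambda w: -fitness(w, eval_set))[:mu]
    pop = [parents[i % mu] + rng.normal(0.0, sigma, 24) for i in range(lam)]
best = parents[0]
```

Because each generation sees a different evaluation set, fitness values are noisy between generations, which is exactly the evaluation-noise effect the experiments measure.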
22

Neuroevolução aplicada no treinamento de redes neurais convolucionais para aprender estratégias específicas do jogo Go / Neuroevolution Applied to Training Convolutional Neural Networks to Learn Specific Strategies of the Game of Go

Sakurai, Rafael Guimarães January 2017 (has links)
Advisor: Prof. Dr. Fabrício Olivetti de França / Master's dissertation - Universidade Federal do ABC, Graduate Program in Computer Science, 2017. / Go is a board game that draws a lot of attention in the Artificial Intelligence area, because it is a complex problem to solve and requires different strategies to reach a good skill level. Until 2015, all of the best Go programs had to start the match with an advantage in order to beat a professional player; at the end of 2015, the AlphaGo program became the first, and so far the only, program capable of beating a professional player without such an advantage, combining deep convolutional neural networks with Monte Carlo tree search. The main objective of this dissertation is to create an intelligent Go agent that decides its next moves based on the current board state and on prediction models built for three specific strategies of the game.

For this purpose, two hypotheses were tested: i) whether it is possible to specialise intelligent agents to learn partial strategies of the game of Go; ii) whether the combination of these strategies allows the construction of an intelligent agent for the game. For the first hypothesis, an agent was trained to position stones so as to expand its territory, learning first from matches against a heuristic player and later from the best trained agents; this agent learned to generalise the strategy against individuals trained at different stages, and also learned to capture stones. Two further agents were trained on problem-solving tasks, with the objective of learning the specific strategies of capturing and defending stones. In both trainings it was possible to observe that the knowledge needed to solve a problem was propagated to subsequent generations of individuals, although the level of learning remained low due to the limited amount of training.

For the second hypothesis, an agent was trained to decide which of the three specific strategies to use according to the current state of the board. This agent, playing against other individuals in the population, evolved towards choosing better strategies, allowing territory domination and the capture and defence of stones. The agents were built with Convolutional Neural Networks, without any prior knowledge of how to play Go, and trained with neuroevolution. The results show the agents evolving to learn distinct strategies and behaviours in a segmented way. The skill level of the resulting agent is still far from that of a professional player, but there remain options for improvement, such as better parameterisation and a reformulated fitness function. These results open new possibilities for building intelligent agents for complex games.
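The second hypothesis, a selector that delegates each move to one of three specialised strategy networks, can be illustrated schematically. Everything here is hypothetical: the dissertation uses evolved convolutional networks, while this sketch stands in single random linear layers over a flattened 9x9 board just to show the control flow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three per-strategy move-scoring networks (territory expansion, capture,
# defence), reduced here to random linear layers. Names and shapes are assumed.
strategies = {name: rng.normal(0, 0.1, (81, 81))
              for name in ("expand", "capture", "defend")}

# The strategy-selector network: given the board, it scores the three
# strategies and delegates the move to the winner.
selector = rng.normal(0, 0.1, (81, 3))

def choose_move(board):
    flat = board.ravel()
    name = list(strategies)[int(np.argmax(flat @ selector))]
    scores = flat @ strategies[name]
    scores[flat != 0] = -np.inf          # only empty intersections are legal
    return name, int(np.argmax(scores))

board = np.zeros((9, 9))
board[4, 4] = 1.0                        # one stone already on the board
name, move = choose_move(board)
```

In the dissertation both the specialists and the selector would be evolved with neuroevolution rather than drawn at random; the sketch only shows how a selector composes pre-trained partial strategies into one agent.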
23

Neuroevolucão de um controlador neural e dinâmico para um robô móvel omnidirecional de quatro rodas / Neuroevolved dynamic controller for a four-wheeled omnidirectional mobile robot

Domingos, Ruan Michel Martins 01 November 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This work proposes a hierarchical control architecture to deal with the trajectory-tracking problem for an autonomous omnidirectional wheeled mobile robot. A traditional velocity controller and an intelligent decision-making neural-network controller address the problem together, taking into account the robot's kinematic and dynamic models. A neuroevolution technique evolves a neurocontroller functionally attached to a resolved-acceleration PI/PD controller. The resulting control strategy is shown in simulation studies to reduce trajectory-tracking errors. The combination of traditional and intelligent control produced very promising results even on trajectories that did not belong to the original training set.
24

Maximalizace výpočetní síly neuroevolucí / Maximizing Computational Power by Neuroevolution

Matzner, Filip January 2016 (has links)
Echo state networks are a special type of recurrent neural network. Recent papers have stated that echo state networks maximize their computational performance at the transition between order and chaos, the so-called edge of chaos. This work confirms that statement in a comprehensive set of experiments. The best-performing echo state network is then compared to a network evolved via neuroevolution. The evolved network outperforms the best echo state network; however, the evolution consumes significant computational resources. Combining the best of both worlds, the simplicity of echo state networks and the performance of evolved networks, a new model called locally connected echo state networks is proposed. The results of this thesis may influence future designs of echo state networks and the efficiency of their implementation. Furthermore, the findings may improve the understanding of biological brain tissue.
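The echo-state idea, a fixed random reservoir tuned near the edge of chaos with only a linear readout trained, can be sketched as below. This is a generic illustration under assumed constants (reservoir size, sparsity, spectral radius, toy memory task), not the thesis's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100                                   # reservoir size (assumed)

# Random sparse reservoir, rescaled so its spectral radius sits just below 1,
# i.e. near the edge of chaos that the thesis investigates.
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)

def run_reservoir(u):
    # Drive the fixed reservoir with the input sequence; collect its states.
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x)
    return np.array(states)

# Only the linear readout is trained (ridge regression). The toy task, recalling
# the input from three steps earlier, requires the reservoir to carry memory.
u = rng.uniform(0.0, 0.5, 300)
target = np.concatenate([np.zeros(3), u[:-3]])
X, y = run_reservoir(u)[50:], target[50:]  # discard a 50-step washout
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ W_out
```

Neuroevolution, by contrast, would evolve `W` itself instead of leaving it random, which is the trade-off between simplicity and performance the thesis examines.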
25

Evoluční návrh konvolučních neuronových sítí / Evolutionary Design of Convolutional Neural Networks

Piňos, Michal January 2020 (has links)
The aim of this work is to design and implement a program for the automated design of convolutional neural networks (CNNs) using evolutionary computing techniques. From a practical point of view, this approach reduces the role of the human factor in the design of CNN architectures and thus eliminates the tedious and laborious process of manual design. The work utilizes a special form of genetic programming, called Cartesian genetic programming, which encodes candidate solutions as graphs. This technique enables the user to parameterize the CNN search process and focus on architectures that are interesting in terms of the computational units used, accuracy, or number of parameters. The proposed approach was tested on the standardized CIFAR-10 dataset, which is often used by researchers to compare the performance of their CNNs. The experiments performed showed that this approach has both research and practical potential, and the implemented program opens up new possibilities in automated CNN design.
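The Cartesian genetic programming encoding mentioned above can be shown in miniature. In the thesis the graph nodes would be CNN building blocks (convolutions, pooling, etc.); this sketch substitutes toy arithmetic functions, and all names and sizes are assumptions.

```python
import random

random.seed(3)

# CGP genome: a fixed-length list of node genes (function id, input a, input b)
# plus one output gene. A node may read the program inputs or any earlier
# node, which yields a feed-forward graph.
FUNCS = [lambda a, b: a + b, lambda a, b: a * b, lambda a, b: max(a, b)]
N_IN, N_NODES = 2, 6

def random_genome():
    nodes = [(random.randrange(len(FUNCS)),
              random.randrange(N_IN + i),
              random.randrange(N_IN + i)) for i in range(N_NODES)]
    return nodes, random.randrange(N_IN + N_NODES)

def evaluate(genome, inputs):
    # Decode the graph by computing node values in order; unused nodes are
    # simply never read by the output, so the genome can carry neutral genes.
    nodes, out = genome
    vals = list(inputs)
    for f, a, b in nodes:
        vals.append(FUNCS[f](vals[a], vals[b]))
    return vals[out]

def mutate(genome):
    # Point mutation: redraw one randomly chosen node gene.
    nodes, out = genome
    i = random.randrange(N_NODES)
    nodes = list(nodes)
    nodes[i] = (random.randrange(len(FUNCS)),
                random.randrange(N_IN + i),
                random.randrange(N_IN + i))
    return nodes, out

g = random_genome()
value = evaluate(g, (1.5, 2.0))
```

For CNN search, each node gene would instead select a layer type and its parameters, and `evaluate` would build and train the network, with validation accuracy and parameter count as fitness.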
26

Umělá inteligence v real-time strategiích / Artificial Intelligence for Real-time Strategy Games

Kurňavová, Simona January 2021 (has links)
Real-time strategy games are an exciting area of research, as creating a game AI poses many challenges - from managing a single unit to completing the overall objective of the game. This thesis explores possible solutions to this task using genetic programming and neuroevolution. It presents and compares findings and differences between the models. Both methods performed reasonably well, but genetic programming was found to be somewhat more effective in both performance and results.
27

Neuronové sítě a genetické algoritmy / Neural Networks and Genetic Algorithms

Karásek, Štěpán January 2016 (has links)
This thesis deals with neural networks and genetic algorithms and the possible ways of combining them. The theoretical part describes genetic algorithms and neural networks, and presents the possible combinations and existing algorithms. The practical part describes the implementation of the NEAT algorithm and the experiments performed; a combination with differential evolution is also proposed and tested. Finally, NEAT is compared with backpropagation (for feed-forward neural networks) and backpropagation through time (for recurrent neural networks), the standard algorithms for training neural networks. The comparison focuses on learning speed, the quality of the network response, and their dependence on network size.
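The combination proposed above, using differential evolution to train neural-network weights, can be sketched on a toy problem. This is a generic DE/rand/1/bin illustration on XOR with an assumed fixed topology, not the thesis's NEAT-based implementation (NEAT also evolves the topology itself).

```python
import numpy as np

rng = np.random.default_rng(4)

def net(w, x):
    # Fixed-topology net: 2 inputs -> 3 tanh hidden -> 1 output (13 weights).
    W1, b1 = w[:6].reshape(2, 3), w[6:9]
    W2, b2 = w[9:12], w[12]
    return np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2)

def loss(w, X, y):
    return np.mean((net(w, X) - y) ** 2)

# XOR as the learning target
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

# DE/rand/1/bin applied to the flattened weight vector.
NP, F, CR, D = 30, 0.7, 0.9, 13
pop = rng.normal(0, 1, (NP, D))
for gen in range(200):
    for i in range(NP):
        # Donor vector from three random individuals (for brevity, i itself
        # is not excluded from the draw, a common simplification).
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(D) < CR
        trial = np.where(cross, mutant, pop[i])
        # Greedy one-to-one selection: keep the trial only if it is no worse.
        if loss(trial, X, y) <= loss(pop[i], X, y):
            pop[i] = trial
best = min(pop, key=lambda w: loss(w, X, y))
```

DE's difference-vector mutation makes it a natural fit for the continuous weight space, which is presumably why the thesis pairs it with NEAT's structural search.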
28

Developmental Encodings in Neuroevolution - No Free Lunch but a Peak at the Menu is Allowed

Manthri, Bala Kiran, Tanneeru, Sai Kiran January 2021 (has links)
Alongside deep learning, neuroevolution is considered one of the most promising methods to train and optimize neural networks. Neuroevolution uses genetic algorithms to train the controller of an agent performing various tasks. Traditionally, the controller of an agent is encoded in a genome that is translated directly into the controller's neural network: all weights and connections are described by their corresponding elements in the agent's genome. This is called direct encoding, meaning that a single change in the genome directly causes a change in the brain. Over time, other forms of encoding have been developed, such as indirect and developmental encodings. This paper concentrates on developmental encodings and how they could improve neuroevolution. The No Free Lunch theorem states that no optimization method outperforms all others across all problems. This does not mean that particular genetic encodings cannot outperform other methods on specific neuroevolutionary tasks; however, we do not know in advance which tasks these might be. Therefore, a range of different tasks is tested here using different encodings, in the hope of finding the task domains in which developmental encodings perform best.
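The contrast between direct and indirect/developmental encodings can be made concrete with a toy decoder. The generative rule below is an arbitrary illustrative choice, not the encodings compared in the paper.

```python
import numpy as np

# Direct encoding: one gene per connection, so genome length grows as n^2
# with network size, and one gene change alters exactly one weight.
def direct_decode(genome, n):
    return np.asarray(genome, dtype=float).reshape(n, n)

# Developmental/indirect encoding: a handful of genes parameterise a rule
# that *generates* every weight from the coordinates of the pre- and
# post-synaptic neurons, so the same 4-gene genome can describe arbitrarily
# large networks, and one gene change reshapes a whole weight pattern.
def developmental_decode(genome, n):
    a, b, c, d = genome
    xs = np.linspace(-1.0, 1.0, n)
    pre, post = np.meshgrid(xs, xs, indexing="ij")
    return a * np.sin(b * pre) + c * np.cos(d * post)

genes = [1.0, 3.0, 0.5, 2.0]
W_small = developmental_decode(genes, 10)     # a 10-neuron brain
W_big = developmental_decode(genes, 200)      # same genome, much bigger brain
```

The asymmetry in the mutation's footprint, one weight versus a global pattern, is exactly what makes the two encodings suit different task domains, which is the question the paper's experiments probe.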
29

A Neural-Network-Based Controller for Missed-Thrust Interplanetary Trajectory Design

Paul A Witsberger (12462006) 26 April 2022 (has links)
The missed-thrust problem is a modern challenge in the field of mission design. While some methods exist to quantify its effects, there is still room for improvement in algorithms that can fully anticipate and plan for a realistic set of missed-thrust events. The present work investigates the use of machine learning techniques to provide a robust controller for a low-thrust spacecraft. The spacecraft's thrust vector is provided by a neural network controller which guides the spacecraft to the target along a trajectory that is robust to missed thrust, and the controller does not need to re-optimize any trajectories if it veers off its nominal course. The algorithms used to train the controller to account for missed thrust are supervised learning and neuroevolution. Supervised learning entails showing a neural network many examples of what inputs and outputs should look like, with the network learning over time to duplicate the patterns it has seen. Neuroevolution involves testing many neural networks on a problem and using the principles of biological evolution and survival of the fittest to produce increasingly competitive networks. Preliminary results show that a controller designed with these methods alone gives mixed results, but performance can be greatly boosted if the controller's output is used as an initial guess for an optimizer. With an optimizer, the success rate ranges from around 60% to 96% depending on the problem.

Additionally, this work analyzes a novel hyperbolic rendezvous strategy originally conceived by Dr. Buzz Aldrin. Instead of rendezvousing on the outbound leg of a hyperbolic orbit (traveling away from Earth), the spacecraft performs the rendezvous on the inbound leg (traveling towards Earth). This allows a relatively low delta-v abort option for the spacecraft to return to Earth if a problem arises during rendezvous. Previous work on hyperbolic rendezvous has always assumed rendezvous on the outbound leg, because the total delta-v (total propellant) required for the insertion alone is minimal with that strategy. However, I show that when an abort maneuver is taken into consideration, inserting on the inbound leg both requires less delta-v overall and provides an abort window up to a full day longer.
30

Evolution Through The Search For Novelty

Lehman, Joel 01 January 2012 (has links)
I present a new approach to evolutionary search called novelty search, wherein only behavioral novelty is rewarded, thereby abstracting evolution as a search for novel forms. This new approach contrasts with the traditional approach of rewarding progress towards the objective through an objective function. Although they are designed to light a path to the objective, objective functions can instead deceive search into converging to dead ends called local optima. As a significant problem in evolutionary computation, deception has inspired many techniques designed to mitigate it. However, nearly all such methods are still ultimately susceptible to deceptive local optima because they still measure progress with respect to the objective, which this dissertation will show is often a broken compass. Furthermore, although novelty search completely abandons the objective, it counterintuitively often outperforms methods that search directly for the objective in deceptive tasks, and can induce evolutionary dynamics closer in spirit to natural evolution.

The main contributions are to (1) introduce novelty search, an example of an effective search method that is not guided by actively measuring or encouraging objective progress; (2) validate novelty search by applying it to biped locomotion; (3) demonstrate novelty search's benefits for evolvability (i.e. the ability of an organism to further evolve) in a variety of domains; (4) introduce an extension of novelty search called minimal criteria novelty search that brings a new abstraction of natural evolution to evolutionary computation (i.e. evolution as a search for many ways of meeting the minimal criteria of life); (5) present a second extension of novelty search called novelty search with local competition that instead abstracts evolution as a process driven towards diversity, with competition playing a subservient role; and (6) evolve a diversity of functional virtual creatures in a single run as a culminating application of novelty search with local competition. Overall, these contributions establish novelty search as an important new research direction for the field of evolutionary computation.
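The core novelty-search loop, scoring individuals by sparseness in behavior space rather than by objective progress, can be sketched as follows. The behavior characterization and all constants below are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def behavior(genome):
    # Behavior characterization: here simply a 2-D "end point" derived from
    # the genome (a stand-in for, e.g., where an agent finishes in a maze).
    return np.tanh(genome[:2] + 0.1 * genome[2:])

def novelty(b, archive, k=5):
    # Sparseness: mean distance to the k nearest neighbors in behavior space.
    # The objective is never consulted anywhere in this loop.
    if not archive:
        return float("inf")
    d = sorted(np.linalg.norm(b - a) for a in archive)
    return float(np.mean(d[:k]))

archive, pop = [], [rng.normal(0, 1, 4) for _ in range(20)]
for gen in range(30):
    behaviors = [behavior(g) for g in pop]
    scores = [novelty(b, archive) for b in behaviors]
    order = np.argsort(scores)[::-1]          # most novel first
    # The most novel behaviors enter the permanent archive, so revisiting an
    # old behavior stops being rewarded; the most novel genomes reproduce.
    archive.extend(behaviors[i] for i in order[:3])
    parents = [pop[i] for i in order[:10]]
    pop = [parents[i % 10] + rng.normal(0, 0.2, 4) for i in range(20)]
```

The local-competition extension described in contribution (5) would add a second score comparing each individual's objective performance only against its behavioral neighbors, instead of abandoning the objective entirely.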
