11 |
Incremental Evolutionary Methods for Automatic Programming of Robot Controllers. Petrovic, Pavel. January 2007
The aim of the main work in this thesis is to investigate incremental evolution methods for designing a suitable behavior arbitration mechanism for behavior-based (BB) robot controllers for autonomous mobile robots performing tasks of higher complexity. The challenge of designing effective controllers for autonomous mobile robots has been studied intensely for a few decades. Control theory studies the fundamental control principles of robotic systems. However, technological progress allows, and the needs of advanced manufacturing, service, entertainment, educational, and mission tasks require, features beyond the scope of standard functionality and basic control. Artificial Intelligence has traditionally approached the design of robotic systems from a high-level, top-down perspective: given a working robotic device, how can it be equipped with an intelligent controller? Later approaches advocated bottom-up, layered, incremental controller and robot building for better robustness, modifiability, and control (Behavior-Based Robotics, BBR). Still, the complexity of programming such systems often requires the manual work of engineers. Automatic methods might lead to systems that perform tasks on demand without the need for an expert robot programmer. In addition, a robot programmer cannot predict all the situations that can arise in robotic applications; automatic programming methods may make robotic products flexible and adaptable with respect to the task performed. One possible approach to the automatic design of robot controllers is Evolutionary Robotics (ER). Most experiments in ER have achieved successful learning of the target task, but the tasks have been of limited complexity. This work is a marriage of the incremental idea from BBR with automatic programming of controllers using ER.
Incremental evolution allows automatic programming of robots for more complex tasks by providing gentle and easy-to-understand support from expert knowledge: division of the target task into sub-tasks. We analyze different types of incrementality, devise a new controller architecture, implement an original simulator compatible with the hardware, and test it on various incremental evolution tasks with real robots. We build up our experimental field through studies of experimental and educational robotics systems, evolutionary design, distributed computation that provides the required processing power, and robotics applications. University research is tightly coupled with education; combining robotics research with educational applications is both a useful consequence and a way of satisfying the necessary condition of an underlying application domain on which the research can both reflect and base itself.
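As an illustration of the division-into-sub-tasks idea, the sketch below runs a minimal (1+1) evolutionary scheme on a bit string standing in for a controller genome, switching to the next sub-task's fitness function once the current one is solved. The encoding, the sub-task targets, and all constants are invented for the example; this is not the thesis's controller or simulator.

```python
import random

# An illustrative sketch (not the thesis's code): incremental evolution as
# a sequence of sub-task fitness functions. A (1+1) evolutionary scheme on
# a bit string stands in for controller evolution; the run advances to the
# next sub-task once the current one is solved.

def make_fitness(target):
    """Fitness for one sub-task: number of bits matching its target."""
    return lambda genome: sum(g == t for g, t in zip(genome, target))

def incremental_evolution(subtasks, length, steps, rng):
    genome = [rng.randint(0, 1) for _ in range(length)]
    stage = 0
    for _ in range(steps):
        fitness = make_fitness(subtasks[stage])
        # Mutate each bit independently with probability 1/length.
        child = [1 - g if rng.random() < 1.0 / length else g for g in genome]
        if fitness(child) >= fitness(genome):
            genome = child
        if fitness(genome) == length and stage < len(subtasks) - 1:
            stage += 1          # current sub-task solved: harder one next
    return genome, stage

rng = random.Random(0)
# The target task, divided into three increasingly different sub-task goals.
subtasks = [[0] * 16, [0] * 8 + [1] * 8, [1] * 16]
genome, stage = incremental_evolution(subtasks, length=16, steps=5000, rng=rng)
```

The same schedule structure applies when fitness comes from robot trials rather than bit matching: the division of the target task is exactly the piece of expert knowledge the abstract refers to.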
|
13 |
Embodied Evolution of Learning Ability. Elfwing, Stefan. January 2007
Embodied evolution is a methodology for evolutionary robotics that mimics the distributed, asynchronous, and autonomous properties of biological evolution. Evaluation, selection, and reproduction are carried out through cooperation and competition among the robots, without any need for human intervention. An embodied evolution framework is therefore well suited to studying adaptive learning mechanisms for artificial agents that share the same fundamental constraints as biological agents: self-preservation and self-reproduction. The main goal of the research in this thesis has been to develop a framework for performing embodied evolution with a limited number of robots, by utilizing time-sharing of subpopulations of virtual agents inside each robot. The framework integrates reproduction as a directed autonomous behavior, and allows basic behaviors for survival to be learned by reinforcement learning. The purpose of the evolution is to evolve the learning ability of the agents by optimizing meta-properties of reinforcement learning, such as the selection of basic behaviors, meta-parameters that modulate the efficiency of the learning, and additional, richer reward signals that guide the learning in the form of shaping rewards. The realization of the embodied evolution framework has been a cumulative research process in three steps: 1) investigation of the learning of a cooperative mating behavior for directed autonomous reproduction; 2) development of an embodied evolution framework, in which the selection of pre-learned basic behaviors and the optimization of battery recharging are evolved; and 3) development of an embodied evolution framework that includes meta-learning of basic reinforcement learning behaviors for survival, and in which the individuals are evaluated by an implicit and biologically inspired fitness function that promotes reproductive ability.
The proposed embodied evolution methods have been validated in a simulation environment of the Cyber Rodent robot, a robotic platform developed for embodied evolution purposes. The evolutionarily obtained solutions have also been transferred to the real robotic platform. The evolutionary approach to meta-learning has also been applied to the automatic design of task hierarchies in hierarchical reinforcement learning, and to co-evolving meta-parameters and potential-based shaping rewards to accelerate reinforcement learning, both in finding initial solutions and in converging to robust policies.
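The potential-based shaping rewards mentioned above have a well-known defining form, F(s, s') = gamma * phi(s') - phi(s). The sketch below, with invented potential values, shows that the discounted shaping terms telescope along any trajectory, which is why such shaping can accelerate learning without changing the optimal policy.

```python
# Illustrative sketch (not code from the thesis): a potential-based shaping
# reward F(s, s') = gamma * phi(s') - phi(s). Summed with discounting along
# a trajectory, it telescopes to an endpoint-only quantity, so it changes
# returns by a state-independent offset and leaves the optimal policy
# unchanged. The potential values below are invented for the example.

gamma = 0.9
phi = {0: 0.0, 1: 2.0, 2: 5.0, 3: 10.0}   # hypothetical potential per state

def shaping(s, s_next):
    return gamma * phi[s_next] - phi[s]

trajectory = [0, 1, 2, 3]                  # states visited in order
total = sum(gamma ** t * shaping(s, s_next)
            for t, (s, s_next) in enumerate(zip(trajectory, trajectory[1:])))

# Telescoping: the discounted sum depends only on the endpoint potentials.
expected = gamma ** (len(trajectory) - 1) * phi[trajectory[-1]] - phi[trajectory[0]]
```

Co-evolving the shaping reward then amounts to searching over phi (and meta-parameters such as gamma) rather than over arbitrary reward functions.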
|
14 |
Developing Box-Pushing Behaviours Using Evolutionary Robotics. Van Lierde, Boris. January 2011
The context of this report and the IRIDIA laboratory are described in the preface. Evolutionary Robotics and the box-pushing task are presented in the introduction. The building of a test system supporting Evolutionary Robotics experiments is then detailed. This system is made of a robot simulator and a Genetic Algorithm. It is used to explore the possibility of evolving box-pushing behaviours. The bootstrapping problem is explained, and a novel approach for dealing with it is proposed, with results presented. Finally, ideas for extending this approach are presented in the conclusion.
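A minimal sketch of the kind of Genetic Algorithm such a test system couples to a simulator: generational replacement with tournament selection, one-point crossover, and bit-flip mutation. The toy fitness (count of 1-bits) stands in for a simulated box-pushing score; nothing here is taken from the report's actual implementation.

```python
import random

# Minimal generational GA (an illustrative sketch, not the report's code).
# In the real system, fitness would come from running a box-pushing trial
# in the simulator; a toy bit-counting surrogate stands in here.

def evolve(fitness, genome_len=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)            # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with a small per-bit probability.
            child = [1 - g if rng.random() < 0.02 else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(lambda g: sum(g))
```

Replacing the lambda with a call into the simulator turns this skeleton into the experimental loop the report describes.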
|
15 |
Evolution of Neural Controllers for Robot Teams. Aronsson, Claes. January 2002
This dissertation evaluates evolutionary methods for evolving cooperative teams of robots. Cooperative robotics is a challenging research area in the field of artificial intelligence. Individual autonomous robots may, by cooperating, enhance their performance compared to what they can achieve separately. The challenge of cooperative robotics is that performance relies on interactions between robots. These interactions are not always fully understood, which makes the design process for hardware and software systems complex. Robotic soccer, such as the RoboCup competitions, offers an unpredictable, dynamic environment for competing robot teams that encourages research into these complexities. Instead of trying to solve these problems by designing and implementing the behavior by hand, the robots can learn how to behave through evolutionary methods. For this reason, this dissertation evaluates the evolution of neural controllers for a team of two robots in a competitive soccer environment. The idea is that evolutionary methods may be a solution to the complexities of creating cooperative robots. The methods used in the experiments are influenced by research on evolutionary algorithms with single autonomous robots and on robotic soccer. The results show that robot teams can evolve a form of cooperative behavior from simple reactive behaviors by relying on self-adaptation, with little supervision and human interference.
|
16 |
Towards navigation without sensory inputs: modelling Hesslow's simulation hypothesis in artificial cognitive agents. Montebelli, Alberto. January 2004
In recent years, growing interest in Cognitive Science has been directed to the cognitive role of an agent's ability to predict the consequences of its actions without actual engagement with the environment. This dissertation reports the creation of an experimental model of Hesslow's simulation hypothesis, based on a simulated adaptive agent and the methods of evolutionary robotics, within the general perspective of radical connectionism. A hierarchical architecture consisting of a mixture of (recurrent) experts is investigated in order to test its ability to produce an 'inner world', a functional stand-in for the agent's interactions with its environment. Such a mock world is expected to be rich enough to sustain 'blind navigation', that is, navigation based solely on the agent's own internal predictions. The results exhibit the system's vivid internal dynamics, its critical sensitivity to a large number of parameters and, finally, a discrepancy with the declared goal of blind navigation. However, given the dynamical complexity of the system, further analysis and testing appear necessary.
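The notion of blind navigation can be sketched as closed-loop prediction: after the first observation, the agent's forward model is fed its own outputs instead of sensor readings. The linear world and model below are invented stand-ins (the dissertation uses a mixture of recurrent experts), but they show both the mechanism and why small model errors compound, consistent with the reported discrepancy.

```python
# Illustrative sketch of 'blind navigation' under the simulation hypothesis
# (not the dissertation's model): predictions feed back in as input once
# sensing stops. The dynamics and the model error are invented.

def world_step(x, a):
    return 0.9 * x + a                 # true (hypothetical) dynamics

def model_step(x, a, err=0.0):
    return (0.9 + err) * x + a         # the agent's internal forward model

def blind_rollout(x0, actions, err=0.0):
    """Roll out the internal model closed-loop: each prediction becomes
    the next input, with no sensory correction."""
    x, traj = x0, []
    for a in actions:
        x = model_step(x, a, err)
        traj.append(x)
    return traj

actions = [1.0, 0.5, -0.2, 0.0, 0.3]
true_traj, x = [], 1.0
for a in actions:
    x = world_step(x, a)
    true_traj.append(x)

exact = blind_rollout(1.0, actions, err=0.0)   # perfect model: matches world
noisy = blind_rollout(1.0, actions, err=0.05)  # model error compounds
```

With an exact model the internal trajectory reproduces the world; with even a small error, the discrepancy at the end of the rollout exceeds the discrepancy at its start, which is what makes sustained blind navigation hard.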
|
17 |
Trust and reputation for formation and evolution of multi-robot teams. Pippin, Charles Everett. 13 January 2014
Agents in most types of societies use information about potential partners to determine whether to form mutually beneficial partnerships. When this information is used in the decision to form a partnership, we say that one agent trusts another; when agents work together for mutual benefit in a partnership, we refer to this as a form of cooperation. Current multi-robot teams typically have the team's goals either explicitly or implicitly encoded into each robot's utility function and are expected to cooperate and perform as designed. However, there are many situations in which robots may not be interested in full cooperation, or may not be capable of performing as expected. In addition, the control strategy for robots may be fixed, with no mechanism for modifying the team structure if teammate performance deteriorates. This dissertation investigates the application of trust to multi-robot teams. This research also addresses the problem of how cooperation can be enabled through the use of incentive mechanisms. We posit a framework wherein robot teams may be formed dynamically using models of trust. These models are used to improve the team's performance through evolution of the team dynamics. In this context, robots learn online which of their peers are capable and trustworthy, and dynamically adjust their teaming strategy.
We apply this framework to multi-robot task allocation and patrolling domains and show that performance is improved when this approach is used on teams that may have poorly performing or untrustworthy members. The contributions of this dissertation include algorithms for applying performance characteristics of individual robots to task allocation, methods for monitoring performance of robot team members, and a framework for modeling trust of robot team members. This work also includes experimental results gathered using simulations and on a team of indoor mobile robots to show that the use of a trust model can improve performance on multi-robot teams in the patrolling task.
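One common way to model trust from observed teammate performance, shown here as an assumption rather than the dissertation's actual formulation, is a Beta-reputation estimate, trust = (successes + 1) / (successes + failures + 2), which can then bias task allocation toward trustworthy peers.

```python
# Illustrative sketch (an assumed model, not necessarily the dissertation's):
# Beta-reputation trust from observed task outcomes, used to allocate a
# task to the currently most trusted teammate.

class TrustModel:
    def __init__(self):
        self.stats = {}                     # peer -> [successes, failures]

    def observe(self, peer, success):
        s, f = self.stats.setdefault(peer, [0, 0])
        self.stats[peer] = [s + success, f + (not success)]

    def trust(self, peer):
        s, f = self.stats.get(peer, [0, 0])
        return (s + 1) / (s + f + 2)        # mean of a Beta(1,1)-prior posterior

    def allocate(self, peers):
        """Assign the task to the currently most trusted peer."""
        return max(peers, key=self.trust)

tm = TrustModel()
for outcome in [True, True, True, False]:   # invented patrol outcomes for r1
    tm.observe("r1", outcome)
for outcome in [False, False, True]:        # invented patrol outcomes for r2
    tm.observe("r2", outcome)
```

An unobserved peer gets the prior value 0.5, so newcomers are neither trusted nor distrusted until performance data accumulates.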
|
18 |
A Regulatory Theory of Cortical Organization and its Applications to Robotics. Thangavelautham, Jekanthan. 05 March 2010
Fundamental aspects of biologically inspired regulatory mechanisms are considered in a robotics context, using artificial neural-network control systems. In organisms, regulatory mechanisms control the expression of genes and the adaptation of form and behavior. Traditional neural-network control architectures assume networks of neurons that are fixed and interconnected by wires. These architectures tend to be specified by a designer and face several limitations that reduce scalability and tractability for tasks with larger search spaces. The traditional way to overcome these limitations with fixed network topologies is to provide more supervision from a designer. As shown, however, more supervision does not guarantee improvement during training, particularly when incorrect assumptions are made about little-known task domains. Biological organisms often require no such external intervention (more supervision) and have self-organized through adaptation. Artificial neural tissue (ANT) addresses the limitations of current neural-network architectures by modeling both wired interactions between neurons and wireless interactions through the use of chemical diffusion fields. An evolutionary (Darwinian) selection process is used to 'breed' ANT controllers for the task at hand, and the framework facilitates the emergence of creative solutions, since only a system goal function and a generic set of basis behaviours need to be defined. Regulatory mechanisms form dynamically within ANT through the superposition of chemical diffusion fields from multiple sources and are used to select neuronal groups. Regulation drives competition and cooperation among neuronal groups and results in areas of specialization forming within the tissue. These regulatory mechanisms are also shown to increase tractability, without requiring more supervision, using a new statistical theory developed to predict the performance characteristics of fixed network topologies.
Simulations also confirm the significance of regulatory mechanisms in solving certain tasks found intractable for fixed network topologies. The framework also shows a general improvement in training performance over existing fixed-topology neural-network controllers for several robotic and control tasks. ANT controllers evolved in a low-fidelity simulation environment have been demonstrated on hardware for a number of tasks using groups of mobile robots, and have given insight into self-organizing systems. Evidence of sparse activity and of decentralized, distributed functionality within ANT controller solutions is consistent with observations from neurobiology.
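The selection of neuronal groups by superposed diffusion fields can be sketched as follows: each emitting source contributes a decaying concentration field, the fields are summed, and neurons where the total concentration exceeds a threshold form the active group. Positions, the Gaussian falloff, and the threshold are all invented for the illustration; this is not the ANT implementation.

```python
import math

# Illustrative sketch of neuronal-group selection by superposed diffusion
# fields, in the spirit of the ANT description above (all constants are
# invented; this is not the thesis's code).

def field(source, x, decay=1.0):
    """Concentration at x from one emitting source (Gaussian falloff)."""
    return math.exp(-decay * (x - source) ** 2)

def select_group(neurons, sources, threshold=0.8):
    """A neuron joins the active group when the summed field is high enough."""
    return [n for n in neurons
            if sum(field(s, n) for s in sources) >= threshold]

neurons = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]   # 1-D neuron positions
sources = [1.0, 1.5]                             # two neurons currently emitting
active = select_group(neurons, sources)
```

Because the fields superpose, nearby sources reinforce each other and recruit a contiguous patch of neurons, which is the specialization-by-regulation effect the abstract describes.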
|
19 |
How do multiple behaviours affect the process of competitive co-evolution? An experimental study. Roxell, Anders. January 2006
In evolutionary robotics there has been research on the pursuit problem with different numbers of predators and prey: (i) one predator and one prey, (ii) many predators against one prey, and (iii) many predators against many prey. However, these experiments involve only food chains with two populations (two trophic levels). This dissertation uses three trophic levels to investigate whether individuals in the middle trophic level perform as well as or better than those evolved in a two-trophic-level environment.

The investigation was done in a simulator called YAKS. A statistical analysis was conducted to evaluate the results. The results indicated that a robot with two tasks becomes better at hunting and evading than robots with one task (either hunt or evade). Robots from the middle trophic level that move in the same direction as their camera faces were the best predators and prey. This dissertation is a step towards more complex and animal-like behaviours in robots.
|
20 |
Abordagem evolucionária com idades para construção de conhecimento aplicado à robótica móvel / An evolutionary approach with ages to knowledge building applied to mobile autonomous robotics. Schneider, Andre Marcelo. January 2006
This work presents and discusses a novel strategy for the problem of learning rules with Classifier Systems, applied to mobile robotics using a NOMAD 200 robot. The strategy is based on the theories of Genetic Algorithms and Classifier Systems, which are the paradigms at the core of the architecture implemented for robot control. The distinguishing feature of this approach is its inspiration in Genetic Algorithms with Ages, which allows the use and control of a variable-size population. The system was designed around the physical characteristics of the NOMAD 200 robot and is composed of modules for memory management, reproduction, population control, and execution. The memory is a base of production rules; the reproduction module incorporates a traditional GA with selection, crossover, and mutation operators; the population control module allows a variable-size population through usability and similarity indices relating the rules to the situations the robot confronts; finally, the execution module is responsible for the robot's interaction with the environment, reading the sensors, applying actions through the actuators and, when necessary, activating safety functions to preserve the physical integrity of the robot.
To support the proposal, it was validated through several experiments performed both in simulated environments and in a real environment with a NOMAD 200 robot, in different scenarios ranging from sparse environments to labyrinths with obstacles and mutually orthogonal walls. The results and corresponding data analysis are presented for each experiment. The behavior of the population was analyzed carefully, observing its growth and average age, as well as the events occurring during the learning process, to confirm the properties claimed for this approach. The main contribution of this work is the use of "AGE" and "USABILITY" in a Classifier System. Usability replaces the strength attribute and the corresponding calculations of the traditional CS in the rule-selection process, simplifying the implementation; it can also be used as a fitness value, so that conventional selection techniques can be applied. Age is responsible for preserving or eliminating individuals from the population through penalty and reward strategies, making it possible to maintain a variable-size population of rules, to preserve genetic diversity and avoid homogenization of the population, and to relieve the system designer of defining these parameters.
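A much-simplified sketch of the age mechanism described above, with invented constants and a toy quality score in place of the usability index: each rule ages every generation, reward rejuvenates it, penalty ages it further, and rules past a lifespan are removed, so the population size varies on its own rather than being fixed by the designer.

```python
import random

# Simplified illustration of age-based population control (constants and
# the toy per-rule quality score are invented; this is not the thesis's
# Classifier System).

LIFESPAN = 5

def step_population(rules, rng):
    survivors = []
    for rule in rules:
        rule["age"] += 1                       # everyone grows older
        if rng.random() < rule["quality"]:     # rule was useful: reward
            rule["age"] = max(0, rule["age"] - 2)
        else:                                  # rule misfired: penalty
            rule["age"] += 1
        if rule["age"] <= LIFESPAN:            # too old: eliminated
            survivors.append(rule)
    return survivors

rng = random.Random(3)
rules = [{"age": 0, "quality": q} for q in [0.9, 0.9, 0.5, 0.1, 0.1]]
for _ in range(20):
    rules = step_population(rules, rng)
```

Over repeated generations, consistently rewarded rules stay young and survive, while poorly performing rules age out, which is the preserve-or-eliminate dynamic the abstract attributes to the age attribute.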
|