1

Localization Performance Improvement of a Low-Resolution Robotic System using an Electro-Permanent Magnetic Interface and an Ensemble Kalman Filter

Martin, Jacob Ryan 17 October 2022 (has links)
As the United States is on the cusp of returning astronauts to the Moon, it becomes increasingly apparent that the assembly of structures in space will have to rely upon robots to perform the construction process. With a focus on sustaining a presence on the Moon's surface in such a harsh and unforgiving environment, demonstrating the robustness of autonomous assembly and the capabilities of robotic manipulators is necessary. Current robotic assembly on Earth consists mainly of inspection tasks or work in highly controlled environments, and always with a human in the loop to step in and fix issues if a problem occurs. To remove the human element, the robot system must also account for safety. Thus, system risk can easily overwhelm project costs. This thesis proposes a combination of hardware and state estimation solutions to improve the feasibility of low-fidelity, low-resolution robots for precision assembly tasks. Doing so reduces the risk to mission success, as the hardware becomes easier to replace or repair. The hardware modifications implement an electro-permanent magnet interface with alignment features to reduce the fidelity needed for the robot end effector. On the state estimation side, an Ensemble Kalman Filter is implemented, along with a scaling system to prevent FASER Lab hardware from becoming stuck due to hardware limitations. Overall, the three modifications reduced the test robot's autonomous convergence error by 98.5%, bettering the system sufficiently to make an autonomous assembly process feasible. / Master of Science / With the dawn of a new space age nearly upon us, one of the most important aspects of working in space will be robotic assembly, whether on the surface of other planetary bodies like the Moon or in zero gravity, in order to keep astronauts safe and to reduce spacecraft launch costs. Both settings have their own difficult problems to deal with, and performing any actions in those locations comes with a significant amount of risk. To reduce extreme risk, you can spend more money to over-protect the robots, or reduce the consequences of the risk. This thesis describes a way to reduce the impact of risks to a mission by checking whether inexpensive robots can be adapted and modified to perform construction actions similar to those of a much more expensive robot. It does this by using specialized hardware and software to better align the robot to where it needs to go without people needing to step in and help it. The experiments showed a 98.5% improvement over the system without any of the modifications and validated that the low-cost robot could be improved sufficiently to be useful.
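For readers unfamiliar with the state estimator named in this abstract, the following is a minimal sketch of one stochastic Ensemble Kalman Filter measurement update. It assumes a generic state and measurement model; the function and variable names are illustrative and do not come from the thesis.

```python
import numpy as np

def enkf_update(ensemble, measurement, h, R, rng):
    """One stochastic Ensemble Kalman Filter measurement update.

    ensemble    : (N, n) array of state samples, already propagated through the dynamics
    measurement : (m,) observed vector
    h           : callable mapping one state (n,) to a predicted measurement (m,)
    R           : (m, m) measurement-noise covariance
    """
    N = ensemble.shape[0]
    Y = np.array([h(x) for x in ensemble])        # predicted measurement per member, (N, m)
    x_mean, y_mean = ensemble.mean(0), Y.mean(0)
    X = ensemble - x_mean                          # state anomalies
    Yd = Y - y_mean                                # measurement anomalies
    # Cross- and innovation covariances estimated from the ensemble
    Pxy = X.T @ Yd / (N - 1)
    Pyy = Yd.T @ Yd / (N - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
    # Each member assimilates a perturbed copy of the measurement
    perturbed = measurement + rng.multivariate_normal(np.zeros(len(measurement)), R, N)
    return ensemble + (perturbed - Y) @ K.T
```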
2

Modeling Autonomous Agents' Behavior Using Neuro-Immune Networks

Meshref, Hossam 22 August 2002 (has links)
Autonomous robots are expected to interact with their dynamic, changing environments. This interaction requires a certain level of behavior-based intelligence, which facilitates the dynamic adaptation of the robot's behavior to its surrounding environment. Much research on biological information processing systems has been aimed at modeling the behavior of an autonomous robot. The Artificial Immune System (AIS) provides a new paradigm suited to dynamic problems involving unknown environments rather than static problems. The immune system has features such as memory, tolerance, and diversity that can be used in engineering applications. The immune system has an important feature called meta-dynamics, in which new species of antibodies are produced continuously from the bone marrow. If the B-Cell (robot) cannot deal with the current situation, new behaviors (antibodies) should be generated by the meta-dynamics function. This behavior should be incorporated into the existing immune system to gain immunity against new environmental changes. We decided to use a feed-forward Artificial Neural Network (ANN) to simulate this problem and to build the AIS memory. Many researchers have tried to tackle different points in mimicking the biological immune system, but no one previously has proposed such an acquired memory. This contribution is made as a "proof of concept" for the field of biological immune system simulation and as a start of further research efforts in this direction. Many applications can potentially use our designed Neuro-Immune Network (NIN), especially in the area of autonomous robotics. We demonstrated the use of the designed NIN to control a robot arm in an unknown environment. As the system encounters new cases, it increases its ability to deal with both old and newly encountered situations. This novel technique can be applied to many robotics applications in industry, where autonomous robots are required to adapt their behavior in response to environmental changes. Regarding future work, the use of VLSI neural networks to enhance the speed of the system for real-time applications can be investigated, along with possible methods of design and implementation of a similar VLSI chip for the AIN. / Ph. D.
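As a rough illustration of the idea of a feed-forward ANN serving as an acquired "antibody" memory, here is a minimal sketch of a tiny network mapping antigen-like environment features to behavior scores. This is a generic one-hidden-layer network trained by gradient descent, not the architecture from the thesis; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class AntibodyMemory:
    """Tiny feed-forward net: antigen features (environment) -> antibody (behavior) scores."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.1):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1)      # hidden activations
        return self.h @ self.W2            # behavior scores

    def train_step(self, x, target):
        y = self.forward(x)
        err = y - target                   # gradient of 0.5 * squared error
        # One step of backpropagation / gradient descent
        grad_W2 = np.outer(self.h, err)
        grad_W1 = np.outer(x, (err @ self.W2.T) * (1 - self.h ** 2))
        self.W2 -= self.lr * grad_W2
        self.W1 -= self.lr * grad_W1
        return float((err ** 2).mean())
```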
3

Towards Improving and Extending Traditional Robot Autonomy with Human Guided Machine Learning

Cesar-Tondreau, Brian 05 October 2022 (has links)
Traditional autonomy among robotic and other artificial agents was accomplished via automated planning methods that found a viable sequence of actions which, if executed by an agent, would result in the successful completion of the given task(s). However, many tasks that we would like robotic agents to perform involve goals that are complex, poorly defined, or hard to specify. Furthermore, significant amounts of data or computation are required for agents to reach reasonable performance. As a result, autonomous systems still rely on human operators to play a supervisory role to ensure that robotic operations are completed quickly and successfully. The presented work aims to improve traditional methods of robot autonomy by developing an intuitive means for human operators to adapt and mold the behaviors and decision making of autonomous agents, allowing those agents to leverage the flexibility and expertise of human end users. Specifically, this work shows the results of three machine learning-based approaches for modifying and extending established robot navigation behaviors and skills through human demonstration. Our first project combines imitation learning with classical navigation software to achieve long-horizon planning and navigation that follows navigation rules specified by a human user. We show that this method can adapt a robot's navigation behavior to become more like that of a human demonstrator. Moreover, for a minimal amount of demonstration data, we find that this approach outperforms recent baselines in both navigation success rate and trajectory similarity to the demonstrator. In the second project, we introduce a method of communicating complex skills over a short-horizon task. Specifically, we explore using imitation learning to teach a robot the complex skill needed to safely navigate through negative obstacles in simulation. We find that the proposed method can imitate complex navigation behaviors and generalize to novel environments in simulation with minimal demonstration. Furthermore, we find that this method compares favorably to a classical motion planning algorithm modified to assign traversal cost based on the terrain slope local to the robot's current pose. Finally, we demonstrate a practical implementation of the second approach in a real-world environment. We show that the proposed method results in a policy that can generalize across differently shaped obstacles and across simulation and reality. Moreover, we show that the proposed method still outperforms the classical motion planning algorithm when tasked to navigate negative obstacles in the real world. / Doctor of Philosophy / With the rapid advancement of computing power and the growing technical literacy of the general public, the tasks that robots should be able to accomplish have multiplied. Robots can, however, be limited by the human ability to effectively convey how tasks should be performed. For example, autonomous robot navigation is typically delegated to a path planning software suite that generates feasible and obstacle-free trajectories through a cluttered environment. While these modules can be modified to meet task-specific constraints and user preferences, current modification procedures require substantial effort on the part of an expert roboticist with a great deal of technical training. The desired tasks and skills are difficult to effectively convey in a machine-legible format.
These tasks often require technical expertise in multiple mechatronic disciplines and hours of hand tuning that the typical end user does not have. In this dissertation, we examine methods that directly leverage human users to teach robots how to perform tasks that are generally difficult to specify programmatically. We focus on methods that allow human users to extend established robot navigation behaviors and skills by demonstrating their own preferred approaches. We evaluate the performance of our proposed approaches in terms of navigation success rate, adherence to the demonstrated behavior, and the ability to apply what they have learned to novel environments. Moreover, we show that our approaches compare favorably to recent machine learning-based approaches to autonomous navigation and to classical navigation techniques with respect to these metrics.
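To make the learning-from-demonstration idea concrete, here is a minimal behavior-cloning sketch: fit a policy to (observation, expert action) pairs by least squares and query it online. This is a generic baseline under simplifying assumptions (a linear policy, synthetic data), not the imitation-learning architecture used in the dissertation.

```python
import numpy as np

def fit_policy(observations, expert_actions):
    """Behavior cloning by least squares.
    observations  : (T, d) array of observation features from demonstrations
    expert_actions: (T, 2) array of expert (linear, angular) velocity commands
    """
    X = np.hstack([observations, np.ones((len(observations), 1))])  # add bias column
    W, *_ = np.linalg.lstsq(X, expert_actions, rcond=None)
    return W

def policy(W, obs):
    """Predicted (linear, angular) command for a new observation."""
    return np.append(obs, 1.0) @ W

# Usage with synthetic stand-in data: 200 demo steps, 10-dimensional observations.
rng = np.random.default_rng(1)
obs = rng.normal(size=(200, 10))
acts = obs[:, :2] * 0.5          # stand-in "expert" commands
W = fit_policy(obs, acts)
print(policy(W, obs[0]))
```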
4

Bayesian Approach to Action Selection and Focus of Attention: Application to Autonomous Robot Programming / Approche Bayésienne pour la Sélection de l'Action et la Focalisation de l'Attention. Application à la Programmation de Robots Autonomes.

Chagas E Cavalcante Koike, Carla Maria 14 November 2005 (has links) (PDF)
Autonomous sensorimotor systems, placed in dynamic environments, must continually answer the ultimate question: how should motor commands be controlled from sensory inputs? Answering this question is a very complex problem, mainly because of the enormous quantity of information that must be processed while respecting several restrictions: real-time constraints, limited memory space, and limited data-processing capacity. Another major challenge is dealing with the incomplete and imprecise information usually present in dynamic environments. This thesis addresses the problem posed by the control of autonomous sensorimotor systems and proposes a sequence of hypotheses and simplifications. These hypotheses and simplifications are defined within a precise and rigorous mathematical framework called Bayesian programming, an extension of Bayesian networks. The sequence proceeds in five stages: the use of internal states; the first-order Markov and stationarity assumptions together with Bayesian filters; the exploitation of partial independence between state variables; the addition of a behavior-selection mechanism; and the focusing of attention guided by the behavioral intention. The description of each step is followed by an analysis of its memory requirements, computational complexity, and modeling difficulty. We also present in-depth discussions concerning robot programming on the one hand and cognitive systems on the other. Finally, we describe the application of this programming framework to a mobile robot.
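As a pointer to the Bayesian filters mentioned above, the following is a minimal sketch of one predict/update step of a discrete Bayesian filter under the first-order Markov assumption. The discrete state space and the variable names are illustrative, not taken from the thesis.

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    """One predict/update step of a discrete Bayesian filter.

    belief     : (S,) current P(state)
    transition : (S, S) P(next state | state), rows sum to 1
    likelihood : (S,) P(observation | next state) for the observation just received
    """
    predicted = belief @ transition     # prediction under the first-order Markov assumption
    posterior = predicted * likelihood  # incorporate the sensor reading
    return posterior / posterior.sum()  # normalise to a proper distribution
```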
5

Robot Planning Based On Learned Affordances

Cakmak, Maya 01 August 2007 (has links) (PDF)
This thesis studies how an autonomous robot can learn affordances from its interactions with the environment and use these affordances in planning. It is based on a new formalization of the concept which proposes that affordances are relations that pertain to the interactions of an agent with its environment. The robot interacts with environments containing different objects by executing its atomic actions and learns the different effects it can create, as well as the invariants of the environments that afford creating that effect with a certain action. This provides the robot with the ability to predict the consequences of its future interactions and to deliberatively plan action sequences to achieve a goal. The study shows that the concept of affordances provides a common framework for studying reactive control, deliberation and adaptation in autonomous robots. It also provides solutions to the major problems in robot planning, by grounding the planning operators in the low-level interactions of the robot.
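To illustrate how learned affordances can serve as planning operators, here is a minimal sketch of forward search over predicted effects. The representation (hashable states, an action-conditioned effect predictor) is a simplification chosen for the example, not the formalization used in the thesis.

```python
from collections import deque

def plan(start, goal, actions, predict):
    """Breadth-first forward search using learned effect predictions as planning operators.

    start, goal : hashable state descriptions (e.g. tuples of perceived features)
    actions     : iterable of atomic actions
    predict     : callable (action, state) -> predicted next state, or None when the
                  environment does not afford that action in that state
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, seq = frontier.popleft()
        if state == goal:
            return seq                          # sequence of actions reaching the goal
        for a in actions:
            nxt = predict(a, state)             # predicted effect of executing `a` in `state`
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, seq + [a]))
    return None                                 # goal not reachable under the learned model
```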
6

Cooperative Robotics : A Survey

Bergfeldt, Niklas January 2000 (has links)
This dissertation aims to present a structured overview of the state-of-the-art in cooperative robotics research. As we illustrate in this dissertation, there are several interesting aspects that draw attention to the field, among which 'Life Sciences' and 'Applied AI' are emphasized. We analyse the key concepts and main research issues within the field, and discuss its relations to other disciplines, including cognitive science, biology, artificial life and engineering. In particular it can be noted that the study of collective robot behaviour has drawn much inspiration from studies of animal behaviour. In this dissertation we also analyse one of the most attractive research areas within cooperative robotics today, namely RoboCup. Finally, we present a hierarchy of levels and mechanisms of cooperation in robots and animals, which we illustrate with examples and discussions.
7

Towards Learning Affordances: Detection Of Relevant Features And Characteristics For Reachability

Eren, Selda 01 March 2006 (has links) (PDF)
In this thesis, we reviewed the affordance concept for autonomous robot control and proposed that invariant features of objects that support a specific affordance can be learned. We used a physics-based robot simulator to study the reachability affordance on the simulated KURT3D robot model. We proposed that, through training, the values of each feature can be split into strips, which can then be used to detect the relevant features and their characteristics. Our analysis showed that it is possible to achieve higher prediction accuracy on the affordance support of novel objects by using only the relevant features. This is an important gain, since failures can have high costs in robotics and better prediction accuracy is desired.
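One plausible reading of the "strips" idea is sketched below: split each feature's observed values into equal-width intervals and score how well those intervals separate affordance-supporting from non-supporting samples. This is an interpretation for illustration only, not the exact procedure from the thesis.

```python
import numpy as np

def strip_relevance(values, labels, n_strips=10):
    """Score one feature by splitting its values into equal-width strips.

    values : (T,) numpy array of one feature across training samples
    labels : (T,) numpy array of 0/1 affordance support
    Returns (strip edges, per-strip positive rate, relevance score in [0, 1]).
    """
    edges = np.linspace(values.min(), values.max(), n_strips + 1)
    idx = np.clip(np.digitize(values, edges) - 1, 0, n_strips - 1)
    pos_rate = np.array([labels[idx == s].mean() if np.any(idx == s) else 0.5
                         for s in range(n_strips)])
    # A feature is more relevant when its strips are close to purely positive or negative.
    relevance = float(np.mean(np.abs(pos_rate - 0.5)) * 2)
    return edges, pos_rate, relevance
```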
8

Decision-making algorithms for autonomous robots / Algorithmes de prise de décision stratégique pour robots autonomes

Hofer, Ludovic 27 November 2017 (has links)
The autonomy of robots relies heavily on their ability to make decisions based on the information provided by their sensors. In this dissertation, decision-making in robotics is modeled as a Markov decision process with continuous state and action spaces. This choice allows uncertainty in the results of the actions chosen by the robots to be modeled. The new learning algorithms proposed in this thesis focus on producing policies that can be used online at a low computational cost. They are applied to real-world problems in the context of the RoboCup, an international robotic competition held annually. In those problems, humanoid robots have to choose either the direction and power of kicks in order to maximize the probability of scoring a goal, or the parameters of a walk engine to move towards a kickable position.
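As a toy illustration of acting in a continuous action space at low online cost, the sketch below samples candidate kicks and keeps the one a learned model rates highest. The sampling scheme and the `score_prob` model are assumptions for the example, not the algorithms developed in the thesis.

```python
import numpy as np

def select_kick(score_prob, n_samples=256, rng=None):
    """Pick a kick (direction, power) from a continuous action space.

    score_prob : callable (direction_rad, power) -> estimated P(goal),
                 assumed to have been learned offline
    """
    rng = rng or np.random.default_rng()
    directions = rng.uniform(-np.pi, np.pi, n_samples)   # candidate kick directions
    powers = rng.uniform(0.0, 1.0, n_samples)             # candidate kick powers
    scores = np.array([score_prob(d, p) for d, p in zip(directions, powers)])
    best = int(np.argmax(scores))
    return directions[best], powers[best]
```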
9

Autonomous learning of multiple skills through intrinsic motivations : a study with computational embodied models

Santucci, Vieri Giuliano January 2016 (has links)
Developing artificial agents able to autonomously discover new goals, to select them, and to learn the related skills is an important challenge for robotics. This becomes even more crucial if we want robots to interact with real environments, where they have to face many unpredictable problems and where it is not clear which skills will be the most suitable to solve them. The ability to learn and store multiple skills in order to use them when required is one of the main characteristics of biological agents: forming ample repertoires of actions is important to widen the possibility for an agent to better adapt to different environments and to improve its chances of survival and reproduction. Moreover, humans and other mammals explore the environment and learn new skills not only on the basis of reward-related stimuli but also on the basis of novel or unexpected neutral stimuli. The mechanisms related to this kind of learning process have been studied under the heading of "Intrinsic Motivations" (IMs), and in the last decades the concept of IMs has been used in developmental and autonomous robotics to foster an artificial curiosity that can improve the autonomy and versatility of artificial agents. In the research presented in this thesis I focus on the development of open-ended learning robots able to autonomously discover interesting events in the environment and autonomously learn the skills necessary to reproduce those events. In particular, this research focuses on the role that IMs can play in fostering those processes and in improving the autonomy and versatility of artificial agents. Taking inspiration from recent and past research in this field, I tackle some of the interesting open challenges related to IMs and to the implementation of intrinsically motivated robots. I first focus on the neurophysiology underlying IM learning signals, and in particular on the relations between IMs and phasic dopamine (DA). With the support of a first computational model, I propose a new hypothesis that addresses the dispute over the nature and the functions of phasic DA activations: reconciling two contrasting theories in the literature and taking into account the different experimental data, I suggest that phasic DA can be considered a reinforcement prediction error learning signal determined by both unexpected changes in the environment (temporary, intrinsic reinforcements) and biological rewards (permanent, extrinsic reinforcements). The results obtained with my computational model support the presented hypothesis, showing how such a learning signal can serve two important functions: driving both the discovery and acquisition of novel actions and the maximisation of rewards. Moreover, those results provide a first example of the power of IMs to guide artificial agents in the cumulative learning of complex behaviours that would not be learnt simply by providing a direct reward for the final tasks. In a second work, I move on to investigate the issues related to the implementation of IM signals in robots. Since the literature still lacks a specific analysis of which IM signal is best suited to drive skill acquisition, I compare, in a robotic setup, different typologies of IMs as well as the different mechanisms used to implement them.
The results provide two important contributions: 1) they show how IM signals based on the competence of the system are able to provide better guidance for skill acquisition than signals based on the knowledge of the agent; 2) they identify a proper mechanism to generate a competence-based IM signal, showing that the stronger the link between the IM signal and the competence of the system, the better the performance. Following the aim of widening the autonomy and versatility of artificial agents, in a third work I focus on improving the control architecture of the robot. I build a new 3-level architecture that allows the system to select the goals to pursue, to search for the best way to achieve them, and to acquire the related skills. I implement this architecture in a simulated iCub robot and test it in a 3D experimental scenario where the agent has to learn, on the basis of IMs, a reaching task in which it is not clear which arm of the robot is the most suitable to reach the different targets. The performance of the system is compared to that of my previous 2-level architecture, where tasks and computational resources are associated at design time. The better performance of the system endowed with the new 3-level architecture highlights the importance of developing robots with different levels of autonomy, and in particular with both the high level of goal selection and the low level of motor control. Finally, I focus on a crucial issue for autonomous robotics: the development of a system that is able not only to select its own goals, but also to discover them through interaction with the environment. In the last work I present GRAIL, a Goal-discovering Robotic Architecture for Intrinsically-motivated Learning. Building on the insights provided by my previous research, GRAIL is a 4-level hierarchical architecture that for the first time assembles into a single system the different features necessary for the development of truly autonomous robots. GRAIL is able to autonomously 1) discover new goals, 2) create and store representations of the events associated with those goals, 3) select the goal to pursue, 4) select the computational resources to learn to achieve the desired goal, and 5) self-generate its own learning signals on the basis of the achievement of the selected goals. I implement GRAIL in a simulated iCub and test it in three different 3D experimental setups, comparing its performance to my previous systems, showing its capacity to generate new goals in unknown scenarios, and testing its ability to cope with stochastic environments. The experiments highlight, on the one hand, the importance of an appropriate hierarchical architecture for supporting the development of autonomous robots, and on the other hand, how IMs (together with goals) can play a crucial role in the autonomous learning of multiple skills.
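For orientation, a common formulation of a competence-based intrinsic motivation signal is sketched below: the reward for practising a goal is the recent improvement in the success rate for that goal, and the agent preferentially selects goals where competence is growing. The class, window size, and selection rule are illustrative assumptions, not the exact GRAIL equations.

```python
import numpy as np

class CompetenceBasedIM:
    """Competence-based intrinsic motivation over a fixed set of goals (illustrative sketch)."""
    def __init__(self, n_goals, window=20):
        self.history = [[] for _ in range(n_goals)]   # per-goal success/failure records
        self.window = window

    def record(self, goal, success):
        self.history[goal].append(float(success))

    def intrinsic_reward(self, goal):
        h = self.history[goal]
        if len(h) < 2 * self.window:
            return 1.0                                 # optimistic default: explore new goals
        recent = np.mean(h[-self.window:])
        older = np.mean(h[-2 * self.window:-self.window])
        return max(recent - older, 0.0)                # improvement in competence

    def select_goal(self, eps=0.1, rng=None):
        rng = rng or np.random.default_rng()
        if rng.random() < eps:                         # occasional random goal for exploration
            return int(rng.integers(len(self.history)))
        rewards = [self.intrinsic_reward(g) for g in range(len(self.history))]
        return int(np.argmax(rewards))
```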
