1

Dynamic movement primitives and reinforcement learning for adapting a learned skill

Lundell, Jens January 2016
Traditionally, robots have been preprogrammed to execute specific tasks. This approach works well in industrial settings where robots have to execute highly accurate movements, such as when welding. However, preprogramming a robot is also expensive, error prone and time consuming, because every feature of the task has to be considered. In some cases, where a robot has to execute complex tasks such as playing the ball-in-a-cup game, preprogramming it might even be impossible due to unknown features of the task. With all this in mind, this thesis examines the possibility of combining a modern learning framework, Learning from Demonstrations (LfD), with subsequent Reinforcement Learning (RL): a robot is first taught how to play the ball-in-a-cup game by demonstrating the movement, and then improves this skill by itself. The skill the robot has to learn is demonstrated with kinesthetic teaching, modelled as a dynamic movement primitive, and subsequently improved with the RL algorithm Policy learning by Weighting Exploration with the Returns (PoWER). Experiments performed on the industrial robot KUKA LWR4+ showed that robots are capable of successfully learning a complex skill such as playing the ball-in-a-cup game.
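To make the pipeline in this abstract concrete, here is a minimal sketch of a discrete dynamic movement primitive together with a PoWER-style update, in Python. This is illustrative only: the gains, the basis-function placement and the simplified return-weighted update are assumptions made here, not the formulation used in the thesis.

```python
import numpy as np

class DMP:
    """Minimal discrete DMP (Ijspeert-style) for one degree of freedom."""
    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centres
        self.h = 1.0 / np.gradient(self.c) ** 2                 # basis widths
        self.w = np.zeros(n_basis)                              # forcing weights

    def _psi(self, x):
        return np.exp(-self.h * (x - self.c) ** 2)

    def fit(self, y, dt):
        """Fit the forcing term to one demonstrated trajectory y(t)."""
        self.y0, self.g, self.tau = y[0], y[-1], dt * (len(y) - 1)
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(len(y)) * dt / self.tau)
        f_target = (self.tau ** 2 * ydd
                    - self.alpha_z * (self.beta_z * (self.g - y) - self.tau * yd))
        s = x * (self.g - self.y0)                 # phase-dependent scaling
        psi = self._psi(x[:, None])                # shape (T, n_basis)
        self.w = ((psi * (s * f_target)[:, None]).sum(0)
                  / ((psi * (s ** 2)[:, None]).sum(0) + 1e-10))

    def rollout(self, dt=0.01):
        """Integrate the DMP forward from y0 towards the goal g."""
        y, v, x, traj = self.y0, 0.0, 1.0, []
        while x > 1e-3:
            psi = self._psi(x)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (self.g - self.y0)
            vd = (self.alpha_z * (self.beta_z * (self.g - y) - v) + f) / self.tau
            y += dt * v / self.tau
            v += dt * vd
            x -= dt * self.alpha_x * x / self.tau
            traj.append(y)
        return np.array(traj)

def power_update(w, rollouts):
    """One simplified PoWER-style update. `rollouts` is a list of (eps, R)
    pairs: eps is the Gaussian exploration added to w for that episode and
    R its non-negative return (e.g. 1 if the ball landed in the cup)."""
    num = sum(R * eps for eps, R in rollouts)
    den = sum(R for _, R in rollouts) + 1e-10
    return w + num / den
```

In the scheme the abstract describes, `fit` would be run once on the kinesthetic demonstration and `power_update` would then iterate on the forcing-term weights until the rollout succeeds.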
2

Implementation Of A Closed-loop Action Generation System On A Humanoid Robot Through Learning By Demonstration

Tunaoglu, Doruk 01 September 2010
In this thesis, the action learning and generation problem on a humanoid robot is studied. Our aim is to realize action learning, generation and recognition in one system, and our source of inspiration is the mirror neuron hypothesis, which suggests that action learning, generation and recognition share the same neural circuitry. Dynamic Movement Primitives, an efficient action learning and generation approach, are modified in order to fulfill this aim. The system we developed (1) can learn from multiple demonstrations, (2) can generalize to different conditions, (3) generates actions in a closed-loop and online fashion and (4) can be used for online action recognition. These claims are supported by experiments, and the applicability of the developed system in the real world is demonstrated by implementing it on a humanoid robot.
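As a sketch of two of the claims above (not the authors' exact modification of DMPs), the snippet below reuses the hypothetical `DMP` class sketched under the previous entry: weights fitted to several demonstrations are simply averaged, and each control cycle is driven by the measured state rather than the internally integrated one, which is what makes the generation closed-loop.

```python
import numpy as np

def fit_multiple(dmp_cls, demos, dt):
    """Fit one DMP per demonstration and average the learned parameters.
    A crude stand-in for principled multi-demonstration learning."""
    dmps = []
    for y in demos:
        d = dmp_cls()
        d.fit(y, dt)
        dmps.append(d)
    merged = dmp_cls()
    merged.y0 = np.mean([d.y0 for d in dmps])
    merged.g = np.mean([d.g for d in dmps])
    merged.tau = np.mean([d.tau for d in dmps])
    merged.w = np.mean([d.w for d in dmps], axis=0)
    return merged

def closed_loop_step(dmp, y_meas, v, x, dt):
    """One online control cycle driven by the *measured* position y_meas,
    so perturbations are absorbed instead of accumulating as error."""
    psi = dmp._psi(x)
    f = (psi @ dmp.w) / (psi.sum() + 1e-10) * x * (dmp.g - dmp.y0)
    vd = (dmp.alpha_z * (dmp.beta_z * (dmp.g - y_meas) - v) + f) / dmp.tau
    v_next = v + dt * vd
    y_cmd = y_meas + dt * v_next / dmp.tau   # position command for this cycle
    x_next = x - dt * dmp.alpha_x * x / dmp.tau
    return y_cmd, v_next, x_next
```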
3

Anticipation of Human Movements: Analyzing Human Action and Intention: An Experimental Serious Game Approach

Kurt, Ugur Halis January 2018
What is the difference between intention and action? To start answering this complex question, we have created a serious game that allows us to capture a large quantity of experimental data and study human behavior. In the game, users catch flies, presented to the left or to the right of the screen, by dragging the tongue of a frog across a touchscreen monitor. The movement of interest has a predefined starting point (the frog) and necessarily transits through a via-point (a narrow corridor) before it proceeds in the chosen left/right direction. Meanwhile, the game collects data about the movement performed by the player. This work focuses on the analysis of such movements. We try to find criteria that allow us to predict, as early as possible, the direction (left or right) chosen by the player. This is done by analyzing kinematic information (e.g. trajectory, velocity profile). Processing such data with the dynamic movement primitives approach also yields further criteria that support a classification of human movement. Our preliminary results show that, considered individually, participants tend to create and use stereotypical behaviors that can be used to predict the subject's intention to reach in one direction or the other early after the onset of the movement.
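A minimal version of such an early left/right predictor could look like the sketch below; the feature set, sampling rate and 300 ms window are assumptions made here for illustration, not the study's actual protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def early_features(traj, fs=60.0, window_s=0.3):
    """traj: (T, 2) array of touch positions; only the first window is used."""
    n = max(2, int(fs * window_s))
    p = traj[:n]
    v = np.gradient(p, 1.0 / fs, axis=0)      # velocity profile in the window
    return np.array([
        v[-1, 0],                              # lateral velocity at window end
        p[-1, 0] - p[0, 0],                    # net lateral displacement
        v[:, 0].mean(),                        # mean lateral velocity
        np.linalg.norm(v, axis=1).max(),       # peak speed in the window
    ])

def fit_predictor(trajs, labels):
    """labels: 0 = left, 1 = right, one per trajectory."""
    X = np.stack([early_features(t) for t in trajs])
    return LogisticRegression().fit(X, labels)

# Usage: p_right = clf.predict_proba(early_features(new_traj)[None, :])[0, 1]
```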
4

Control of a Multiple Degree-of-Freedom Arm With Functional Electrical Stimulation Using a Reduced Set of Command Inputs

Cornwell, Andrew Stevens 30 January 2012
No description available.
5

Implementation of an action library: Implementation of a Manipulation Action library for UR3e Robot Arm

Lundborg, Fredrik January 2024
This thesis aims to parameterize and generalize functions for robotic use. The goal is to simplify the usage of robot arms. The thesis explores robot functionalities within the theme of simple cooking tasks; the functions explored are cutting objects, stirring bowls, and pick-and-place. An environment where objects can be moved around is created, and each individual task can be described as a function with parameters. In addition, DMPs (Dynamic Movement Primitives) are incorporated into the functions for future use in mimicking human motion when performing tasks. Because the actions are parameterized and general in their definitions, they are robust and easy to use in an environment where objects are not always located in the same positions. The incorporation of the DMPs adds to the generality of the functions: the same setup can be used without modification for objects of different sizes, and trajectories can be supplied as inputs for robot execution.
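A parameterized action library in this spirit might look like the sketch below. The `robot` interface, the function names and all numeric defaults are invented here for illustration; they are not the thesis's actual UR3e API.

```python
import numpy as np

def pick_and_place(robot, grasp_pose, place_pose, dmp=None, approach=0.10):
    """Grasp an object at grasp_pose (6-D: xyz + rpy) and put it at place_pose."""
    robot.move_to(grasp_pose + np.array([0, 0, approach, 0, 0, 0]))  # from above
    robot.move_to(grasp_pose)
    robot.close_gripper()
    if dmp is not None:
        # Re-target a demonstrated (multi-dimensional) DMP to the current start
        # and goal, so one human demonstration generalises to arbitrary poses.
        dmp.y0, dmp.g = grasp_pose, place_pose
        for waypoint in dmp.rollout(dt=0.01):
            robot.servo_to(waypoint)
    else:
        robot.move_to(place_pose)
    robot.open_gripper()

def stir(robot, bowl_center, radius=0.05, turns=3, height=0.02):
    """Circular stirring motion around bowl_center; lengths in metres."""
    for theta in np.linspace(0.0, 2 * np.pi * turns, 120 * turns):
        offset = np.array([radius * np.cos(theta),
                           radius * np.sin(theta), height, 0, 0, 0])
        robot.servo_to(bowl_center + offset)
```

Because the object poses enter only as parameters, the same two functions cover any table layout, which is the robustness property the abstract describes.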
6

Movement Prediction for Human-Robot Collaboration: From Simple Gesture to Whole-Body Movement

Dermy, Oriane 17 December 2018
This thesis lies at the intersection between machine learning and humanoid robotics, under the theme of human-robot interaction and within the cobotics (collaborative robotics) field. It focuses on prediction for non-verbal human-robot interactions, with an emphasis on gestural interaction. The prediction of the intention, understanding, and reproduction of gestures are therefore central topics of this thesis. First, the robot learns gestures by demonstration: a user grabs its arm and makes it perform the gestures to be learned several times. The robot must then be able to reproduce these different movements while generalizing them to adapt them to the situation. To do so, using its proprioceptive sensors, it interprets the perceived signals to understand the user's movement in order to generate similar ones later on. Second, the robot learns to recognize the intention of the human partner based on the gestures that the human initiates. The robot can then perform gestures adapted to the situation and corresponding to the user's expectations. This requires the robot to understand the user's gestures. To this end, different perceptual modalities have been explored. Using proprioceptive sensors, the robot feels the user's gestures through its own body: it is then a question of physical human-robot interaction. Using visual sensors, the robot interprets the movement of the user's head. Finally, using external sensors, the robot recognizes and predicts the user's whole-body movement; in that case, the user wears sensors (in our case, a wearable motion-tracking suit by XSens) that transmit his posture to the robot. In addition, the coupling of these modalities was studied. From a methodological point of view, the learning and recognition of time series (gestures) have been central to this thesis. In that respect, two approaches have been developed. The first is based on the statistical modeling of movement primitives (corresponding to gestures): ProMPs. The second adds Deep Learning to the first, by using auto-encoders to model whole-body gestures containing a lot of information while allowing prediction in soft real time. Various issues were taken into account regarding the creation and development of our methods; these revolve around the prediction of trajectory durations, the reduction of the cognitive and motor load imposed on the user, and the need for speed (soft real time) and accuracy in predictions.
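To illustrate the first of the two approaches (ProMPs), here is a minimal sketch of fitting a probabilistic movement primitive to a set of demonstrated gestures and conditioning it on the first observed samples of a new gesture to predict the remainder. The basis choices and noise levels are assumptions, and the auto-encoder stage used in the thesis for whole-body data is omitted.

```python
import numpy as np

def basis(ts, n_basis=15, width=0.02):
    """Normalised Gaussian basis functions over normalised time ts in [0, 1]."""
    c = np.linspace(0, 1, n_basis)
    phi = np.exp(-(ts[:, None] - c[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def fit_promp(demos, n_basis=15):
    """demos: list of 1-D trajectories, time-normalised to [0, 1].
    Returns the Gaussian prior N(mu_w, Sigma_w) over basis weights."""
    W = []
    for y in demos:
        phi = basis(np.linspace(0, 1, len(y)), n_basis)
        W.append(np.linalg.lstsq(phi, y, rcond=None)[0])
    W = np.array(W)
    return W.mean(axis=0), np.cov(W.T) + 1e-6 * np.eye(n_basis)

def condition(mu_w, Sigma_w, ts_obs, y_obs, sigma_y=1e-4):
    """Condition the weight prior on early observations (Gaussian update)."""
    Phi = basis(ts_obs, len(mu_w))
    S = Phi @ Sigma_w @ Phi.T + sigma_y * np.eye(len(ts_obs))
    K = Sigma_w @ Phi.T @ np.linalg.inv(S)
    return mu_w + K @ (y_obs - Phi @ mu_w), Sigma_w - K @ Phi @ Sigma_w

# Predict the rest of a partially observed gesture:
# mu_w, Sigma_w = fit_promp(training_demos)
# mu_post, _ = condition(mu_w, Sigma_w, ts_observed, y_observed)
# y_pred = basis(np.linspace(0, 1, 100), len(mu_w)) @ mu_post
```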
