1

A Developmental Grasp Learning Scheme For Humanoid Robots

Bozcuoglu, Asil Kaan, 01 September 2012
While an infant is learning to grasp, two key processes drive successful development. In the first, infants use an intuitive approach: the hand is moved toward the object to create an initial contact regardless of the object's properties, and the contact is followed by a tactile grasping phase in which the hand encloses the object. This intuitive grasping behavior leads to a grasping mechanism that utilizes visual input and incorporates it into the grasp plan. The second process is scaffolding: guidance in which a caregiver states how to accomplish the task or modifies the infant's behavior by intervening. From about 9 months of age, infants attend to such guidance and understand that it indicates important features of an object. This supervision mechanism plays an important role in learning how to grasp certain objects properly. To simulate these behavioral findings, a reaching controller and a tactile grasping controller were implemented on the iCub humanoid robot, allowing it to reach an object from different directions and to close its fingers around the object. Building on these, a human-like grasp-learning scheme for the iCub is proposed. The first step is an unsupervised learning phase in which the robot experiments with how to grasp objects; the second is a supervised learning phase in which a caregiver corrects the end-effector's position when the robot is mistaken. Experiments with two different grasping styles show that the proposed methodology achieves a better learning rate than a scaffolding-only learning mechanism.
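The two-phase structure of the scheme can be illustrated with a minimal, runnable sketch. Everything here, the one-dimensional "approach angle" world, the success test, and the caregiver model, is an illustrative assumption, not the thesis implementation:

```python
import random

def try_grasp(angle, best_angle, tolerance=0.3):
    """Grasp succeeds when the approach angle is close enough to the
    (unknown to the robot) best angle for the object."""
    return abs(angle - best_angle) < tolerance

def learn_to_grasp(best_angle, n_unsup=30, n_sup=10):
    estimate, successes = None, []
    # Phase 1: unsupervised exploration -- random approach angles.
    for _ in range(n_unsup):
        angle = random.uniform(-3.14, 3.14)
        if try_grasp(angle, best_angle):
            successes.append(angle)
    if successes:
        estimate = sum(successes) / len(successes)
    # Phase 2: scaffolding -- a caregiver nudges the end-effector
    # toward the correct pose whenever the robot fails.
    for _ in range(n_sup):
        angle = estimate if estimate is not None else random.uniform(-3.14, 3.14)
        if not try_grasp(angle, best_angle):
            angle += 0.5 * (best_angle - angle)   # caregiver correction
        estimate = angle
    return estimate

print(learn_to_grasp(best_angle=1.0))
```

The point of the combination is visible in the sketch: exploration alone converges slowly, while the caregiver's corrections in the second phase pull the estimate toward the right pose in a handful of trials.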
2

FPCA Based Human-like Trajectory Generating

Dai, Wei, 01 January 2013
This thesis presents a new method for generating human-like upper-limb and hand motion, based on Functional Principal Component Analysis (FPCA) and Quadratic Programming. The human-like motion generation problem is formulated as minimizing the difference between the dynamic profile of the optimal trajectory and those of known trajectory types. Statistical analysis is applied to pre-captured human motion records so that the problem can be solved in a low-dimensional space. A novel PCA/FPCA hybrid motion-recognition method is proposed and applied to human grasping data to demonstrate its advantage in human motion recognition. A human grasping hierarchy is also proposed. The proposed motion generation method learns motion kernels from human demonstration; issues in acquiring these kernels are also discussed. The trajectory planning method applies different weights to the extracted motion kernels to approximate the kinematic constraints of the task. Multiple evaluation measures illustrate the quality of the generated optimal human-like trajectory relative to real human motion records.
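The pipeline can be sketched in a few lines: extract "motion kernels" from demonstrations, then weight them to meet task constraints. In this hedged sketch the demonstrations are synthetic and a simple least-squares step stands in for the thesis's quadratic program over dynamic profiles:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                                   # samples per trajectory
t = np.linspace(0.0, 1.0, T)
# Synthetic demonstrations: minimum-jerk-like profiles with variation.
demos = np.stack([(10*t**3 - 15*t**4 + 6*t**5) * rng.uniform(0.8, 1.2)
                  + 0.05 * rng.standard_normal(T) for _ in range(20)])

mean = demos.mean(axis=0)
# Principal components of the centered demonstrations = motion kernels.
_, _, Vt = np.linalg.svd(demos - mean, full_matrices=False)
kernels = Vt[:3]                          # keep the first 3 kernels

# Task constraint: the trajectory must start at 0.0 and end at 1.2.
# Solve for kernel weights w that satisfy the endpoint constraints
# (a simplified stand-in for the QP over dynamic profiles).
C = kernels[:, [0, -1]].T                 # kernel values at the endpoints
target = np.array([0.0, 1.2]) - mean[[0, -1]]
w, *_ = np.linalg.lstsq(C, target, rcond=None)

trajectory = mean + w @ kernels
print(trajectory[0], trajectory[-1])      # ~0.0 and ~1.2
```

Because the generated trajectory lives in the span of kernels learned from human data, it inherits the human-like shape of the demonstrations while meeting the task constraints.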
3

Novel approach for representing, generalising, and quantifying periodic gaits

Lin, Hsiu-Chin, January 2015
Our goal is to introduce a novel method for representing, generalising, and comparing gaits, particularly walking gaits. Human walking gaits result from complex, interdependent factors, including variations arising from embodiment, environment, and task, which makes techniques based on average-template frameworks suboptimal for systematic analysis or corrective interventions. The proposed work devises methodologies for representing gaits and gait transitions such that the underlying optimal policies, free of the inter-personal variations due to task and embodiment, may be recovered. Our approach builds upon (i) work in the domain of null-space policy recovery and (ii) previous work on generalisation for point-to-point movements. The problem is formalised using a walking phase model, and the null-space learning method is used to generalise a consistent policy from multiple observations with rich variations. Once recovered, the underlying policies (mapped to different gait phases) can serve as reference guidelines for quantifying and identifying pathological gaits while remaining robust against inter-personal and task variations. To validate our methods, we demonstrated their robustness on simulated sagittal 2-link gait data with multiple ground-truth constraints and policies. Pathological gait identification was then tested on real-world human gait data with induced gait abnormality, with the proposed method showing significantly greater robustness to variations in speed and embodiment than template-based methods. Future work will extend this to kinetic features and higher degrees of freedom.
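A toy sketch of the null-space policy recovery idea: each observation is a constrained projection of one underlying policy, and the consistent policy is recovered by least squares across observations with varying constraints. The linear policy, phase features, and known per-trial projections below are simplifying assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
W_true = np.array([[0.5, 1.0, -0.3],
                   [-0.2, 0.4, 0.8]])       # true policy weights, 2 joints

def feat(x):
    return np.array([1.0, np.sin(x), np.cos(x)])   # gait-phase features

rows, rhs = [], []
for _ in range(200):
    x = rng.uniform(0.0, 2.0 * np.pi)       # gait phase
    n = rng.standard_normal(2)
    n /= np.linalg.norm(n)
    P = np.eye(2) - np.outer(n, n)          # per-trial null-space projection
    u = P @ (W_true @ feat(x))              # observed, constrained action
    rows.append(P @ np.kron(feat(x), np.eye(2)))   # (2, 6) design block
    rhs.append(u)

A, b = np.vstack(rows), np.concatenate(rhs)
w_vec, *_ = np.linalg.lstsq(A, b, rcond=None)
W_hat = w_vec.reshape(3, 2).T               # undo column-stacked vectorisation
print(np.round(W_hat, 3))                   # recovers W_true
```

Although no single constrained observation reveals the full policy, the rich variation across trials makes the underlying policy identifiable, which is what lets the recovered policies serve as variation-robust reference guidelines.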
4

Learning socio-communicative behaviors of a humanoid robot by demonstration

Nguyen, Duc-Canh, 22 October 2018
A socially assistive robot (SAR) is meant to engage people in situated interaction such as monitoring physical exercise, neuropsychological rehabilitation, or cognitive training. While the interactive behavioral policies of such systems are mostly hand-scripted, we discuss here key features of the training of multimodal interactive behaviors in the framework of the SOMBRERO project.

In our work, we used learning by demonstration to provide the robot with the skills needed to perform collaborative tasks in human-centered environments. Learning interaction by demonstration involves three main steps: (1) collect representative interactive behaviors from human coaches; (2) build comprehensive models of these overt behaviors while taking into account a priori knowledge (task and user models, etc.); and (3) provide the target robot with appropriate gesture controllers to execute the desired behaviors.

Multimodal HRI (Human-Robot Interaction) models are largely inspired by Human-Human Interaction (HHI) behaviors. Transferring HHI behaviors to HRI models faces several issues: (1) adapting human behaviors to the robot's interactive capabilities, given its physical limitations and impoverished perception, action, and reasoning capabilities; (2) the drastic changes in human partners' behavior when facing robots or virtual agents; (3) the modeling of joint interactive behaviors; and (4) the validation of the robotic behaviors by human partners until they are perceived as adequate and meaningful.

In this thesis, we study and make progress on these four challenges. In particular, we address the first two (transfer from HHI to HRI) by adapting the scenario and using immersive teleoperation. In addition, we use Recurrent Neural Networks to model multimodal interactive behaviors (speech, gaze, arm movements, head motion, backchannels); these recent techniques surpass traditional methods (Hidden Markov Models, Dynamic Bayesian Networks, etc.) in both accuracy and coordination between modalities. Finally, we build and evaluate a proof-of-concept autonomous robot equipped with the learned models.
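A minimal sketch of the recurrent modeling idea: a single LSTM maps a stream of perceived partner features to the robot's multimodal behavior outputs. The feature dimensions and modality names are assumptions for illustration, not the SOMBRERO models themselves:

```python
import torch
import torch.nn as nn

class InteractiveBehaviorRNN(nn.Module):
    def __init__(self, in_dim=16, hidden=64, out_dims=(3, 2, 1)):
        # out_dims: e.g. gaze target (3), head motion (2), backchannel prob (1)
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in out_dims)

    def forward(self, features):
        h, _ = self.rnn(features)                 # (batch, time, hidden)
        return [head(h) for head in self.heads]   # one output per modality

model = InteractiveBehaviorRNN()
partner_features = torch.randn(4, 100, 16)        # 4 sequences, 100 frames
gaze, head, backchannel = model(partner_features)
print(gaze.shape, head.shape, backchannel.shape)
```

Sharing one recurrent state across the output heads is one way such a model can capture coordination between modalities, which is precisely where the thesis reports gains over HMM- and DBN-style baselines.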
5

Task transparency in learning by demonstration: gaze, pointing, and dialog

dePalma, Nicholas Brian, 07 July 2010
This body of work explores an emerging aspect of human-robot interaction: transparency. Work in socially guided machine learning has shown that highly interactive robotic behaviors yield better performance and shorter training times than less interactive ones. While other work explores transparency in learning by demonstration using non-verbal cues to point out the importance of, or user preferences toward, particular behaviors, my work extends this argument by offering cues to the robot's internal task representation. I show that task transparency, the ability to connect with and discuss the task in a fluent way, encourages the user to shape and correct the learned goal in ways that may be impossible with other present-day learning-by-demonstration methods. Additionally, some participants are shown to prefer task-transparent robots that appear capable of "introspection", in which the learned goal can be modified by methods other than demonstration alone.
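An illustrative sketch of what a task-transparent interaction loop might look like: the robot exposes its learned goal so the user can query it and correct it through dialog rather than only through new demonstrations. The goal representation and dialog interface here are assumptions for illustration, not the thesis system:

```python
class LearnedGoal:
    def __init__(self):
        self.features = {}                  # feature -> believed importance

    def update_from_demo(self, demo_features):
        """Ordinary learning by demonstration: blend in observed evidence."""
        for name, value in demo_features.items():
            old = self.features.get(name, 0.5)
            self.features[name] = 0.8 * old + 0.2 * value

    def explain(self):
        """Introspection: report what the robot currently believes matters."""
        return sorted(self.features.items(), key=lambda kv: -kv[1])

    def correct(self, feature, importance):
        """Dialog-based correction, bypassing further demonstrations."""
        self.features[feature] = importance

goal = LearnedGoal()
goal.update_from_demo({"color": 1.0, "shape": 0.1})
print(goal.explain())                       # user asks: "what matters here?"
goal.correct("shape", 0.9)                  # user says: "shape matters too"
print(goal.explain())
```

The `correct` path is the crux of the argument: once the task representation is discussable, a user can repair a misgeneralized goal in one utterance instead of re-demonstrating the whole task.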
