11

Modélisation Visuelle d'un Objet Inconnu par un Robot Humanoïde Autonome / Visual Modeling of an Unknown Object by an Autonomous Humanoid Robot

Foissotte, Torea 03 December 2010 (has links)
This work addresses the problem of autonomously constructing the 3D model of an unknown object using a humanoid robot. More specifically, we consider an HRP-2, guided by vision, evolving in a known and possibly cluttered environment. Our method considers the available visual information, the constraints on the robot body, and the model of the environment in order to generate pertinent postures and the necessary motions around the object. Our two solutions to the Next-Best-View problem are based on a specific posture generator, where a posture is computed by solving an optimization problem. The first solution is a local approach in which an original rendering algorithm is designed to be included directly in the posture generator; the rendering algorithm can display complex 3D shapes while taking self-occlusions into account. The second solution seeks more global solutions by decoupling the problem into two steps: (i) find the best sensor pose while satisfying a reduced set of constraints on the humanoid, and (ii) generate a whole-body posture with the posture generator. The first step relies on global sampling and BOBYQA, a derivative-free optimization method, to converge toward pertinent viewpoints in non-convex feasible configuration spaces. Our approach is tested in real conditions using a coherent architecture that includes various software components specific to humanoid robots. This experiment integrates ongoing work on motion planning, motion control, and visual processing, paving the way for fully autonomous 3D object reconstruction.
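As a rough illustration of the two-step scheme described above (global sampling followed by derivative-free local refinement), here is a minimal Python sketch. The visibility-gain function is a hypothetical placeholder, and SciPy's Nelder-Mead stands in for the BOBYQA solver used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def expected_info_gain(pose):
    """Hypothetical placeholder: score a candidate sensor pose
    (x, y, z, yaw) by how much unknown object surface it would
    reveal. The thesis evaluates this by rendering the partially
    built model; here we fake a smooth multimodal landscape."""
    x, y, z, yaw = pose
    return np.sin(3 * x) * np.cos(2 * y) + 0.5 * np.cos(z + yaw)

def next_best_view(bounds, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    # Step 1: global sampling over the (possibly non-convex) feasible set.
    candidates = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    best = max(candidates, key=expected_info_gain)
    # Step 2: derivative-free local refinement (BOBYQA in the thesis;
    # Nelder-Mead used here as a stand-in available in SciPy).
    res = minimize(lambda p: -expected_info_gain(p), best, method="Nelder-Mead")
    return res.x  # sensor pose handed to the whole-body posture generator

pose = next_best_view(bounds=[(-1, 1), (-1, 1), (0.5, 1.5), (-np.pi, np.pi)])
```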
12

Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde / Cognitive approach for representing the haptic physical human-humanoid interaction

Bussy, Antoine 10 October 2013 (has links)
Robots are very close to arriving in our homes. But before they do, they must master physical interaction with humans in a safe and efficient way. Such capacities are essential for them to live among us and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, we focus on endowing the biped humanoid robot HRP-2 with the capacity to perform haptic joint actions with humans. First, we study how human dyads collaborate to transport a cumbersome object. From this study, we define a global model of motion primitives that we use to implement a proactive behavior on the HRP-2 robot, so that it can perform the same task with a human. Then, we assess the performance of our proactive control scheme through user studies. Finally, we outline several potential extensions to this work: self-stabilization of a humanoid through physical interaction, generalization of the motion-primitive model to other collaborative tasks, and the addition of vision to haptic joint actions.
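The abstract does not detail the primitive model, but a minimal sketch of the kind of force-driven primitive switching it implies might look as follows; the primitive set and the hysteresis thresholds are purely illustrative assumptions.

```python
from enum import Enum

class Primitive(Enum):
    STOP = 0
    WALK_FORWARD = 1
    WALK_BACKWARD = 2

# Hypothetical hysteresis thresholds (N) on the force the robot senses
# along the carrying direction through the co-manipulated object.
F_GO, F_STOP = 8.0, 3.0

def next_primitive(current, fx):
    """Proactively switch walking primitive from the measured
    interaction force fx, instead of passively resisting it."""
    if current is Primitive.STOP and fx > F_GO:
        return Primitive.WALK_FORWARD
    if current is Primitive.STOP and fx < -F_GO:
        return Primitive.WALK_BACKWARD
    if current is not Primitive.STOP and abs(fx) < F_STOP:
        return Primitive.STOP
    return current
```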
13

Teaching an Old Robot New Tricks: Learning Novel Tasks via Interaction with People and Things

Marjanovic, Matthew J. 20 June 2003 (has links)
As AI has begun to reach out beyond its symbolic, objectivist roots into the embodied, experientialist realm, many projects are exploring different aspects of creating machines which interact with and respond to the world as humans do. Techniques for visual processing, object recognition, emotional response, gesture production and recognition, etc., are necessary components of a complete humanoid robot. However, most projects invariably concentrate on developing a few of these individual components, neglecting the issue of how all of these pieces would eventually fit together. The focus of the work in this dissertation is on creating a framework into which such specific competencies can be embedded, in such a way that they can interact with each other and build layers of new functionality. To be of any practical value, such a framework must satisfy the real-world constraints of functioning in real-time with noisy sensors and actuators. The humanoid robot Cog provides an unapologetically adequate platform from which to take on such a challenge. This work makes three contributions to embodied AI. First, it offers a general-purpose architecture for developing behavior-based systems distributed over networks of PCs. Second, it provides a motor-control system that simulates several biological features which impact the development of motor behavior. Third, it develops a framework for a system which enables a robot to learn new behaviors via interacting with itself and the outside world. A few basic functional modules are built into this framework, enough to demonstrate the robot learning some very simple behaviors taught by a human trainer. A primary motivation for this project is the notion that it is practically impossible to build an "intelligent" machine unless it is designed partly to build itself. This work is a proof-of-concept of such an approach to integrating multiple perceptual and motor systems into a complete learning agent.
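A minimal sketch of the port-based message passing such a distributed behavior-based architecture requires; all names here are hypothetical, and the dissertation's actual framework is considerably richer.

```python
import queue
import threading

class Port:
    """Minimal one-way connection between two behavior modules."""
    def __init__(self):
        self._q = queue.Queue()
    def send(self, msg):
        self._q.put(msg)
    def recv(self, timeout=None):
        return self._q.get(timeout=timeout)

class Module(threading.Thread):
    """A behavior module running concurrently, standing in for a
    process on one PC of the network: it reads from an input port,
    transforms the data, and publishes to an output port."""
    def __init__(self, inp, out, transform):
        super().__init__(daemon=True)
        self.inp, self.out, self.transform = inp, out, transform
    def run(self):
        while True:
            self.out.send(self.transform(self.inp.recv()))
```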
14

Action Recognition Through Action Generation

Akgun, Baris 01 August 2010 (has links) (PDF)
This thesis investigates how a robot can use action generation mechanisms to recognize the action of an observed actor in an on-line manner, i.e., before the completion of the action. Towards this end, Dynamic Movement Primitives (DMPs), an action generation method proposed for imitation, are modified to recognize the actions of an actor. Specifically, a human actor performed three different reaching actions toward two different objects. Three DMPs, each corresponding to a different reaching action, were trained using this data. The proposed method used an object-centered coordinate system to define the variables for the action, eliminating the difference between the actor and the robot. During testing, the robot simulated action trajectories with its learned DMPs and compared the resulting trajectories against the observed one. The error between the simulated and the observed trajectories was integrated into a recognition signal, over which recognition was done. The proposed method was applied on the iCub humanoid robot platform using an active motion capture device for sensing. The results showed that the system was able to recognize actions with high accuracy as they unfold in time. Moreover, the feasibility of the approach is demonstrated in an interactive game between the robot and a human.
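A minimal sketch of DMP-based on-line recognition, using one common DMP formulation (Ijspeert-style transformation and canonical systems); the gains, the forcing-term scaling, and the error measure are assumptions rather than the thesis's exact design.

```python
import numpy as np

def dmp_rollout(f, y0, g, T, dt=0.01, alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
    """Integrate one Dynamic Movement Primitive:
        tau*x' = -alpha_x * x                                (canonical phase)
        tau*z' = alpha_z*(beta_z*(g - y) - z) + f(x)*x*(g - y0)
        tau*y' = z
    f is the learned forcing term (phase -> scalar)."""
    y, z, x, tau = y0, 0.0, 1.0, T
    traj = []
    for _ in range(int(T / dt)):
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f(x) * x * (g - y0))
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)
        traj.append(y)
    return np.array(traj)

def recognize(observed, simulated):
    """On-line recognition: compare the observed trajectory prefix
    against each DMP's simulated trajectory (same sampling rate
    assumed) and return the index of the closest one. Can be queried
    before the action completes."""
    n = min(len(observed), min(len(s) for s in simulated))
    errors = [np.sum((observed[:n] - s[:n]) ** 2) for s in simulated]
    return int(np.argmin(errors))
```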
15

Implementation Of A Closed-loop Action Generation System On A Humanoid Robot Through Learning By Demonstration

Tunaoglu, Doruk 01 September 2010 (has links) (PDF)
In this thesis, the action learning and generation problem on a humanoid robot is studied. Our aim is to realize action learning, generation, and recognition in one system; our inspiration is the mirror-neuron hypothesis, which suggests that action learning, generation, and recognition share the same neural circuitry. Dynamic Movement Primitives, an efficient action learning and generation approach, are modified to fulfill this aim. The system we developed (1) can learn from multiple demonstrations, (2) can generalize to different conditions, (3) generates actions in a closed-loop and online fashion, and (4) can be used for online action recognition. These claims are supported by experiments, and the applicability of the developed system in the real world is demonstrated by implementing it on a humanoid robot.
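One standard way to realize point (1), learning from multiple demonstrations, is to regress the DMP forcing term on radial basis functions over the canonical phase, stacking all demonstrations into a single least-squares problem; the basis placement and widths below are assumptions for illustration.

```python
import numpy as np

def fit_forcing_term(phases, targets, n_basis=20):
    """Fit forcing-term weights from several stacked demonstrations.
    phases:  concatenated canonical-phase values x from all demos, shape (N,)
    targets: corresponding desired forcing values f_target(x), shape (N,)
    Returns f(x) = (sum_i psi_i(x) w_i / sum_i psi_i(x)) * x."""
    # Centers spread exponentially in phase, matching the exponential
    # decay of the canonical system (an assumption in this sketch).
    centers = np.exp(-8.0 * np.linspace(0.0, 1.0, n_basis))
    widths = 1.0 / (np.gradient(centers) ** 2 + 1e-8)

    def psi(x):
        x = np.atleast_1d(x)
        return np.exp(-widths * (x[:, None] - centers) ** 2)

    P = psi(phases)
    A = P * phases[:, None] / (P.sum(axis=1, keepdims=True) + 1e-10)
    w, *_ = np.linalg.lstsq(A, targets, rcond=None)  # all demos in one solve

    def f(x):
        p = psi(x)
        return (p @ w) / (p.sum(axis=1) + 1e-10) * np.atleast_1d(x)
    return f
```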
16

Localization using natural landmarks off-field for robot soccer

He, Yuchen 28 April 2014 (has links)
Localization is an important problem that must be solved for a robot to estimate its location from observation and odometry updates. Relying on artificial landmarks such as the lines, circles, and goalposts of the robot soccer domain, current robot localization requires prior knowledge, suffers from uncertainty due to partial observation, and is thus less generalizable than human beings, who refer to their surroundings for complementary information. To improve the certainty of the localization model, we propose a framework that recognizes orientation by actively using natural landmarks from the off-field surroundings, extracting these visual features from raw images. Our approach involves identifying visual features and natural landmarks, training with localization information to understand the surroundings, and predicting based on feature matching. This approach can increase the precision of robot orientation and improve localization accuracy by eliminating uncertain hypotheses; it is also a general approach that can be extended and applied to other localization problems.
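A minimal sketch of orientation estimation by matching off-field features against a trained landmark database. The thesis does not name a descriptor; OpenCV's ORB is used here as a stand-in, and the database structure is a hypothetical simplification.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_orientation(frame, landmark_db):
    """Match off-field natural landmarks against a database of
    (descriptors, bearing) pairs built during training; each match
    votes for the robot orientation it implies. landmark_db is a
    list of (ref_descriptors, bearing_rad) tuples, an assumed layout
    for illustration."""
    _, desc = orb.detectAndCompute(frame, None)
    if desc is None:
        return None
    votes = []
    for ref_desc, bearing in landmark_db:
        matches = matcher.match(desc, ref_desc)
        votes += [bearing] * len(matches)
    # Circular mean of the votes gives the orientation hypothesis.
    if not votes:
        return None
    return float(np.arctan2(np.mean(np.sin(votes)), np.mean(np.cos(votes))))
```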
17

Metrics to evaluate human teaching engagement from a robot's point of view

Novanda, Ori January 2017 (has links)
This thesis was motivated by a study of how robots can be taught by humans, with an emphasis on allowing persons without programming skills to teach robots. The focus was to investigate what criteria could or should be used by a robot to evaluate whether a human teacher is (or potentially could be) a good teacher in robot learning by demonstration; in effect, choosing the teacher who can maximize the benefit to the robot when learning by imitation/demonstration. The study approached this topic by taking a technology snapshot in time, to see whether a representative example of research-laboratory robot technology is capable of assessing teaching quality. With this snapshot, the study evaluated how humans observe teaching quality, in order to establish measurement metrics that can be transferred as rules or algorithms beneficial from a robot's point of view. To evaluate teaching quality, the study looked at the teacher-student relationship from a human-human interaction perspective. Two factors were considered important in defining a good teacher: engagement and immediacy. The study reviewed further literature on the detailed elements of engagement and immediacy, and also explored physical effort as a possible metric for measuring a teacher's level of engagement. An investigatory experiment evaluated which modality participants prefer when teaching a robot that can be taught by voice, gesture demonstration, or physical manipulation. The findings suggested that the participants had no preference in terms of human effort for completing the task; however, there was a significant difference in enjoyment across input modalities and a marginal difference in the robot's perceived ability to imitate. A main experiment then studied the detailed elements that might be used by a robot in identifying a 'good' teacher. It was conducted in two sub-experiments: the first recorded the teacher's activities, and the second analysed how humans evaluate the perception of engagement when assessing another human teaching a robot. The results suggested that when humans teach a robot (human-robot interaction), human evaluators also look for some of the immediacy cues that occur in human-human interaction when evaluating engagement.
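The thesis explores physical effort as a candidate engagement metric; one simple way to quantify it from recorded joint torques is sketched below. The formula is an illustrative assumption, not the metric established by the thesis.

```python
import numpy as np

def physical_effort(torques, dt):
    """Illustrative effort proxy: time integral of the squared
    joint-torque norm over a teaching session.
    torques: (T, n_joints) array sampled every dt seconds."""
    return float(np.sum(np.square(torques)) * dt)

# e.g., comparing two teachers' sessions recorded at 100 Hz:
# effort_a = physical_effort(np.array(session_a_torques), dt=0.01)
```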
18

  • Fusion d'informations multi-capteurs pour la commande du robot humanoïde NAO / Multi-sensor information fusion: application for the humanoid NAO robot

Nguyen, Thanh Long 05 April 2017 (has links)
Motivated by the important role of robotics in human life, this research investigates how multi-sensor fusion can improve the reliability of the humanoid robot NAO. We propose two scenarios: color detection and object recognition. In both cases, a camera of the robot is used in combination with external cameras to increase reliability under non-ideal working conditions. For color detection, the NAO robot is asked to find an object whose color is described in human terms such as red, yellow, or brown. The main problem to be solved is how the robot can recognize colors as well as human perception does. To that end, we propose a Sugeno fuzzy system to decide the color of a detected target. For simplicity, the chosen targets are colored balls, so the Hough transform is employed to extract the average pixel values of the detected ball; these values are then used as inputs to the fuzzy system. The membership functions and inference rules of the system are constructed based on human perceptual evaluation. The output of the fuzzy system is a numerical value indicating a color name. Additionally, a threshold value is introduced to define the decision zone for each color: if the fuzzy output falls into the color interval constructed by the threshold, that color is taken as the output of the system. This works well under ideal conditions, but not in an environment with uncertainties and imprecisions such as light variation, limited sensor quality, or similarity among colors, all of which affect the robot's detection. Moreover, the threshold value leads to a compromise between uncertainty and reliability: if it is small, decisions are more reliable, but the number of uncertain cases increases, and vice versa. Since experimental validation favors a small threshold, handling uncertainty becomes all the more important. We therefore add more 2D cameras to the detection system of the NAO robot. Each camera applies the same method as described above, and their decisions are fused using the theory of belief functions to resolve ambiguities. The threshold value is taken into account when constructing mass values from the Sugeno fuzzy output of each camera. Dempster-Shafer's rule of combination and the maximum of pignistic probability are chosen for the decision. In our experiments, the detection rate of the fusion system is clearly better than that of each individual camera.
We extend this method to colored object recognition, using heterogeneous 2D and 3D cameras. The objects are learned beforehand during a training phase and, to challenge uncertainty and imprecision, are deliberately similar in many respects: geometric form, surface, and color. The recognition system uses two 2D cameras (one of NAO's and one IP camera) plus a 3D camera to take advantage of depth information. For each camera, we extract feature descriptors of the objects (SURF for 2D data, SHOT for 3D data), rich in information characterizing the object models. Based on their correspondence to trained models stored in the learning base, each feature point of the detected object votes for one or several classes, i.e., a hypothesis in the power set, and a mass function is constructed after a normalization step. Again, Dempster-Shafer's rule of combination and the maximum of pignistic probability are employed to make the final decision. Across three experiments, the recognition rate of the fusion system is much better than that of each individual camera, confirming the benefits of multi-sensor fusion for the robot's reliability.
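The fusion step rests on Dempster's rule of combination and the pignistic transformation, both standard constructions in belief-function theory; a minimal sketch over a small color frame of discernment follows, with illustrative mass values.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster-Shafer rule of combination. Masses are dicts mapping
    frozenset hypotheses to belief mass; mass assigned to conflicting
    (empty-intersection) pairs is removed and the rest renormalized."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

def pignistic(m):
    """Pignistic transformation: spread each mass evenly over the
    singletons of its hypothesis; decide by maximum probability."""
    bet = {}
    for h, v in m.items():
        for elem in h:
            bet[elem] = bet.get(elem, 0.0) + v / len(h)
    return bet

# Two cameras voting on a ball's color (illustrative masses):
RED, ORANGE = frozenset({"red"}), frozenset({"orange"})
cam_nao = {RED: 0.6, RED | ORANGE: 0.4}            # uncertain between the two
cam_ext = {RED: 0.7, ORANGE: 0.2, RED | ORANGE: 0.1}
print(pignistic(dempster_combine(cam_nao, cam_ext)))
```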
19

Impact Force Reduction Using Variable Stiffness with an Optimal Approach for Jumping Robots

Calderon Chavez, Juan Manuel 22 February 2017 (has links)
Running, jumping and walking are physical activities that are performed by humans in a simple and efficient way. However, these types of movements are difficult for humanoid robots to perform. Humans perform these activities without difficulty thanks to their ability to absorb the ground impact force, an ability based on varying muscle stiffness. The principal objective of this dissertation is to study vertical jumps in order to reduce the impact force in the landing phase of the jump motion of humanoid robots. Additionally, the impact force reduction is applied to an arm-oriented movement with the objective of preserving the integrity of a falling humanoid robot. This dissertation focuses on vertical jump motions by designing, implementing and testing variable stiffness control strategies based on computed-torque control, while tracking desired trajectories calculated using the Zero Moment Point (ZMP) and Center of Mass (CoM) conditions. Variable stiffness is used to reduce the impact force during the landing phase. The variable stiffness approach was previously presented by Pratt et al. in [1], where they proposed that full stiffness is not always required. In this dissertation, variable stiffness is implemented without integrating any springs or dampers: all the actuators in the robot are DC motors, and lower stiffness is achieved by adjusting the PID gain values of each motor's controller. Two approaches to generate variable stiffness are proposed. The first is based on optimal control theory, where a linear quadratic regulator (LQR) is used to calculate the gain values of the PID controller. The second is based on fuzzy logic and calculates the proportional gain (KP) of the PID controller. Both approaches compute the PID gains so as to allow the DC motor positions to deviate from the target positions during the landing phase: as a motor moves away from its target position, the robot's CoM shifts to a lower position, reducing the impact force. The fuzzy approach uses an estimate of the impact velocity and a specified desired soft-landing level at the moment of impact to calculate the P gain of the PID controller. The optimal approach uses the mathematical model of the motor and a weighting factor that scales the Q matrix of the LQR to calculate the new PID values. A one-legged robot is used to verify the jump motion in this research, and repeatability experiments were successfully performed with both the optimal control and the fuzzy logic methods. The results are evaluated and compared according to the impact force reduction and the robot balance during the landing phase, with the impact force calculated from the displacement of the CoM during landing. Both methods accomplish the impact force reduction; however, robot balance shows a considerable improvement with the optimal control approach compared to the fuzzy logic method. In addition, the optimal variable stiffness method was successfully implemented and tested on falling robots: the robot's integrity is preserved by applying the optimal variable stiffness control to reduce the impact force on the arm joints, shoulders and elbows.
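A minimal sketch of the optimal approach's core computation: deriving feedback gains from an LQR whose Q matrix is scaled by a softness factor. The motor model, parameter values, and scaling scheme are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(J, b, Kt, softness):
    """State-feedback gains for a simplified DC-motor joint model with
    state x = [position, velocity]. 'softness' scales the Q matrix:
    a smaller position weight lets the joint yield on impact, lowering
    its effective stiffness during the landing phase."""
    A = np.array([[0.0, 1.0], [0.0, -b / J]])   # inertia J, damping b
    B = np.array([[0.0], [Kt / J]])              # torque constant Kt
    Q = np.diag([1.0 / softness, 0.1])           # soften position tracking
    R = np.array([[1.0]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)           # u = -K x

K_stiff = lqr_gain(J=0.01, b=0.1, Kt=0.05, softness=1.0)    # flight/stance
K_soft = lqr_gain(J=0.01, b=0.1, Kt=0.05, softness=20.0)    # landing phase
```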
20

Concise Modeling of Humanoid Dynamics / Kortfattad Modellering av Humanoiddynamik

Joachimbauer, Florian January 2017 (has links)
Simulation of mechanical systems such as walking robots is an essential part of developing new and more capable solutions in robotics. The increasing complexity of methods and technologies is a key challenge for common languages, creating a need for flexible and scalable languages. The thesis concludes that an equation-based tool using the Euler-Lagrange formalism can simplify the modeling and simulation cycle, and can minimize development effort if the tool supports derivatives. Regretfully, equation-based tools with this ability are not commonly used for the simulation of humanoid robots.

The research in this thesis compares equation-based tools to commonly used tools. The implementation uses the Euler-Lagrange method to model and simulate nonlinear mechanical systems. The focus of this work is the comparison of different tools through the stepwise development of a humanoid robot based on the principle of passive walking. Additionally, each developed model is given an informal argument for its stability. To test the thesis statement, the equation-based tool Acumen is evaluated against a commonly used tool, MATLAB.

Based on the achieved results, it can be concluded that the use of equation-based tools with the Euler-Lagrange formalism is convenient and scalable for humanoid robots, and that the development process is significantly simplified by the advantages of such tools. Given the experimental nature of Acumen, further research could investigate its applicability to different mechanical systems as well as other techniques.
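A minimal sketch of the Euler-Lagrange workflow the thesis advocates, shown here in Python/SymPy rather than in Acumen or MATLAB: the equation of motion of a single pendulum link, the simplest building block of a passive walker, derived symbolically.

```python
import sympy as sp
from sympy.physics.mechanics import dynamicsymbols, LagrangesMethod

# Single pendulum link: a point mass m on a massless rod of length l.
m, g, l = sp.symbols('m g l', positive=True)
q = dynamicsymbols('q')       # joint angle
qd = dynamicsymbols('q', 1)   # joint angular velocity

# Lagrangian L = T - V.
T = sp.Rational(1, 2) * m * (l * qd) ** 2
V = -m * g * l * sp.cos(q)    # potential, pivot taken as origin
lagrangian = T - V

lm = LagrangesMethod(lagrangian, [q])
eom = lm.form_lagranges_equations()
print(sp.simplify(eom))       # m*l**2*q'' + m*g*l*sin(q) = 0
```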
