1 |
Biomimetic motion synthesis for synthetic humanoids. Hale, Joshua G. January 2003 (has links)
No description available.
|
2 |
Mental imagery in humanoid robots. Seepanomwan, Kristsana. January 2016 (has links)
Mental imagery presents humans with the opportunity to predict prospective happenings based on their own intended actions, to reminisce about occurrences from the past, and to reproduce the perceptual experience. This cognitive capability is essential for human survival in an unfolding and changing world. By means of internal representation, mental imagery offers other cognitive functions (e.g., decision making, planning) the possibility to assess information on objects or events that are not currently being perceived. Furthermore, there is evidence to suggest that humans are able to employ this ability from the early stages of infancy. Although the future employment of humanoid robots appears promising, comprehensive research on mental imagery in these robots is lacking; working within a human environment requires more than a set of pre-programmed actions. This thesis aims to investigate the use of mental imagery in humanoid robots, where it could serve the demands of their cognitive skills as it does in humans. Based on empirical data and neuro-imaging studies of mental imagery, the thesis proposes a novel neurorobotic framework intended to enable humanoid robots to exploit mental imagery. Through a series of experiments on mental rotation and tool use, the results of this study confirm this potential. Chapters 5 and 6 detail experiments on mental rotation that investigate a bio-constrained neural network framework accounting for mental rotation processes. They are based on neural mechanisms involving not only visual imagery but also affordance encoding, motor simulation, and the anticipation of the visual consequences of actions. The proposed model is in agreement with theoretical and empirical research on mental rotation. The models were validated with both a simulated and a physical humanoid robot (iCub) engaged in solving a typical mental rotation task.
The results show that the model is able to solve a typical mental rotation task and, in agreement with data from psychology experiments, response times are linearly dependent on the angular disparity between the objects. Furthermore, the experiments in Chapter 6 propose a novel neurorobotic model whose macro-architecture is constrained by knowledge of the brain; it encompasses a rather general mental rotation mechanism and incorporates a biologically plausible decision-making mechanism. The new model is tested on the humanoid robot iCub in tasks requiring it to mentally rotate 2D geometrical images appearing on a computer screen. The results show that the robot has an enhanced capacity to generalize mental rotation to new objects and exhibits the possible effects of overt wrist movements on mental rotation. These results indicate that the model represents a further step in the identification of the embodied neural mechanisms that might underlie mental rotation in humans, and they might also give hints for enhancing robots' planning capabilities. In Chapter 7, the primary purpose of the experiment on tool use development through computational modelling is to demonstrate that the developmental characteristics of tool use identified in human infants can be attributed to intrinsic motivations. Through the processes of sensorimotor learning and rewarding mechanisms, intrinsic motivations play a key role as the driving force that leads infants to exhibit exploratory behaviours, i.e., play. Sensorimotor learning permits the emergence of other cognitive functions, i.e., affordances, mental imagery and problem-solving. The experiment also thoroughly tests two candidate mechanisms that might underlie the ability of infants to use a tool: overt movements and mental imagery.
By means of reinforcement learning and sensorimotor learning, knowledge of how to use a tool might emerge through random movements or trial and error, which might accidentally reveal a solution (a sequence of actions) to a given tool-use task. On the other hand, mental imagery can be used to replace the outcome of overt movements in the process of self-determined reward. Instead of determining a reward from physical interactions, mental imagery allows the robot to evaluate the consequences of actions, in mind, before performing movements to solve a given tool-use task. Collectively, therefore, the case of mental imagery in humanoid robots was systematically addressed by means of a number of neurorobotic models across two categories of spatial problem-solving tasks: mental rotation and tool use. Mental rotation evidently involves the employment of mental imagery, and this thesis confirms the potential for its exploitation by humanoid robots. Additionally, the studies on tool use demonstrate that the key components assumed and included in the experiments on mental rotation, namely affordances and mental imagery, can be acquired by robots through the processes of sensorimotor learning.
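The contrast drawn above, between rewards discovered through overt trial and error and rewards predicted through mental imagery, can be sketched in toy form. Nothing below is from the thesis: the one-dimensional task, the function names, and the perfect forward model are illustrative assumptions (a robot would use a learned, imperfect model in place of the environment itself).

```python
def make_task(goal=2):
    """Hypothetical 1-D tool-use task: shift a tool to a goal position."""
    def step(state, action):            # action in {-1, +1}
        nxt = state + action
        return nxt, (1.0 if nxt == goal else 0.0)
    return step

def rollout_reward(step, state, actions):
    """Overt execution: reward accumulated through (simulated) physical interaction."""
    total = 0.0
    for a in actions:
        state, r = step(state, a)
        total += r
    return total

def imagine_best(forward_model, state, candidates):
    """Mental imagery: score candidate action sequences on an internal
    forward model *before* moving, then pick the most rewarding one."""
    return max(candidates, key=lambda seq: rollout_reward(forward_model, state, seq))
```

For instance, `imagine_best(make_task(), 0, [[1, 1], [-1, -1], [1, -1]])` selects `[1, 1]` without any physical movement, which is the role mental imagery plays in the tool-use experiments described above.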
|
3 |
Natural, Efficient Walking for Compliant Humanoid Robots. Griffin, Robert James. 02 November 2017 (has links)
Bipedal robots offer a uniquely flexible platform capable of navigating complex, human-centric environments. This makes them ideally suited for a variety of missions, including disaster response and relief, emergency scenarios, or exoskeleton systems for individuals with disabilities. This, however, requires significant advances in humanoid locomotion and control, as these platforms are still slow, unnatural, inefficient, and relatively unstable. The work of this dissertation advances the state of the art with the aim of increasing the robustness and efficiency of these bipedal walking platforms. We present a series of control improvements to enable reliable, robust, natural bipedal locomotion, validated on a variety of bipedal robots in both hardware and simulation experiments.
A key part of reliable walking involves maximizing the robot's control authority. We first present the development of a model predictive controller that both controls the ground reaction forces and performs step adjustment for walking stabilization using a mixed-integer quadratic program. This is the first model predictive controller to include step rotation in the optimization and to leverage the capabilities of the time-varying divergent component of motion for navigating rough terrain. We also analyze the potential capabilities of model predictive controllers for the control of bipedal walking.
As an alternative to standard trajectory optimization-based model predictive controls, we present several optimization-based control schemes that leverage more traditional bipedal walking control approaches by embedding a proportional feedback controller into a quadratic program. This controller is capable of combining multiple feedback mechanisms: ground reaction feedback (the "ankle strategy"), angular momentum (the "hip strategy"), swing foot speed up, and step adjustment. This allows the robot to effectively shift its weight, pitch its torso, and adjust its feet to retain balance, while considering environmental constraints, when available.
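In one dimension, embedding a proportional feedback law in a constrained quadratic program collapses to a closed form, which makes the idea easy to sketch. This is an illustrative reconstruction rather than the dissertation's controller: only the "ankle strategy" term is shown, and the gain, bounds, and function name are assumptions.

```python
def cop_from_ankle_strategy(icp, icp_ref, cop_nominal, foot_min, foot_max, k=3.0):
    """Proportional feedback on the instantaneous capture point (ICP) error,
    mapped to a desired center of pressure (CoP).  With a single support
    foot in 1-D, the constrained QP degenerates to clipping the desired
    CoP to the support polygon (the foot length)."""
    cop_desired = cop_nominal + k * (icp - icp_ref)
    return min(max(cop_desired, foot_min), foot_max)
```

When the desired CoP saturates at the foot boundary, the ankle strategy alone can no longer stabilize the robot, which is exactly when the other mechanisms listed above (angular momentum, swing-foot speed-up, step adjustment) must take over.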
To enable the robot to walk with straightened legs, we present a strategy that ensures the dynamic plans are kinematically and dynamically feasible to execute with straight legs. The effects of timing on dynamic plans are typically ignored, with the result that executing them may require significantly bending the legs. This algorithm modifies the step timings to ensure the plan can be executed without bending the legs beyond a certain angle, while leaving the desired footsteps unmodified. To then achieve walking with straight legs, we present a novel approach for indirectly controlling the center of mass height through the leg angles. This avoids complicated height planning techniques, which are both computationally expensive and often not general enough to consider variable terrain, by effectively biasing the solution of the whole-body controller towards straighter legs. To incorporate the toe-off motion that is essential to both natural and straight-leg walking, we also present a strategy for toe-off control that allows it to be an emergent behavior of the whole-body controller.
The proposed approach was demonstrated through a series of simulation and experimental results on a variety of platforms. Model predictive control for step adjustment and rough terrain is illustrated in simulation, while the other step adjustment strategies and straight-leg walking approaches are presented recovering from external disturbances and walking over a variety of terrains in hardware experiments. We discuss many of the practical considerations and limitations encountered when porting simulation-based controller development to hardware platforms. Using the presented approaches, we also demonstrated an important concept: using whole-body control frameworks, not every desired motion need be directly commanded. Many of these motions, such as toe-off, may simply be emergent behaviors that result from attempting to satisfy other objectives, such as desired reaction forces. We also showed that optimization is a very powerful tool for walking control, able to determine both stabilizing inputs and joint torques. / Ph. D. / Bipedal robots offer a uniquely flexible platform capable of navigating the complex, human-centric environment that we live in. This makes them ideally suited for a variety of missions, including disaster response and relief, emergency scenarios, or exoskeleton systems for individuals with disabilities. This, however, requires significant advances in humanoid locomotion and control, as they are still slow, unnatural, inefficient, and relatively unstable. The work of this dissertation aims to increase the robustness and efficiency of these bipedal walking platforms.
To increase the overall stability of the robot while walking, we aimed to develop new control schemes that incorporate more of the same balance strategies used by people. These include the adjustment of ground reaction forces (the “ankle strategy”, shifting weight), angular momentum (the “hip strategy”, pitching the torso and windmilling the arms), swing foot speed up, and step adjustment. Using these approaches, the robot is able to walk much more stably.
With the ability to use human-like control strategies, the next step is to develop appropriate methods to allow the robot to walk with straighter legs. Without correct step timing, it may be necessary at times to significantly bend the knees to take the specified step. We develop an approach to adjust the step timing to decrease the required knee bend of the robot. We then present an approach for indirectly controlling the robot height through the knee angles. This avoids traditional complicated height planning techniques that are both computationally hard and not general enough to consider complex terrain. To incorporate the toe-off motion that is essential to both natural and straight-leg walking, we also present a new strategy for toe-off that allows it to emerge naturally from the controller.
We demonstrate the proposed approach through a series of simulation and experimental results on several robots and in several environments. We discuss many of the practical considerations and limitations encountered when porting simulation-based controller development to hardware platforms. Using the presented approaches, we also demonstrated an important concept: using whole-body control frameworks, not every desired motion need be directly commanded. Many of these motions, such as toe-off, may simply be emergent behaviors that result from attempting to satisfy other objectives, such as desired reaction forces. We also showed that optimization is a very powerful tool for walking control, able to determine both stabilizing inputs and joint torques.
|
4 |
Humanoid robots walking with soft soles. Pajon, Adrien. 01 December 2017 (has links)
When unexpected changes of the ground surface occur while walking, the human central nervous system needs to apply appropriate control actions to assure dynamic stability. Many studies in the motor control field have investigated the mechanisms of such postural control and have widely described how center of mass (COM) trajectories, step patterns and muscle activity adapt to avoid loss of balance. Measurements we conducted show that when stepping onto soft ground, participants actively modulated the ground reaction forces (GRF) under the supporting foot in order to exploit the elastic and compliant properties of the surface to dampen the impact and, likely, to dissipate the mechanical energy accumulated during the 'fall' onto the new compliant surface. In order to control the feet-ground interaction of humanoid robots more efficiently during walking, we propose adding outer soft (i.e. compliant) soles to the feet. They absorb impacts and limit the effects of ground unevenness during locomotion on rough terrains. However, they introduce passive degrees of freedom (deformations under the feet) that complexify the tasks of state estimation and overall robot stabilization. To address this problem, we devised a new walking pattern generator (WPG) based on a minimization of the energy consumption that provides the necessary parameters to be used jointly with a sole deformation estimator based on a finite element model (FEM) of the soft sole, so as to take the sole deformation into account during the motion.
Such FEM computation is costly in time and inhibits online reactivity. Hence, we developed a control loop that stabilizes humanoid robots when walking with soft soles on flat and uneven terrain. Our closed-loop controller minimizes the errors on the center of mass (COM) and the zero-moment point (ZMP) with an admittance control of the feet based on a simple deformation estimator. We demonstrate its effectiveness in real experiments on the HRP-4 humanoid walking on gravel.
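An admittance law of the kind mentioned above can be sketched as a single control tick: the foot position is adjusted in proportion to the error between measured and desired contact force. This is a minimal illustration, not the thesis's controller; the sign convention, gain, time step, and function name are all assumptions.

```python
def foot_admittance_step(z_offset, f_measured, f_desired,
                         gain=1e-4, dt=0.005, z_min=-0.02, z_max=0.02):
    """One tick of a vertical foot admittance law: when the measured
    normal force exceeds the desired one, retract the foot (raise the
    offset); when it is too low, extend it.  The offset is clamped so
    the stabilizer cannot command unreasonable motions on a soft sole."""
    z_offset += gain * (f_measured - f_desired) * dt
    return min(max(z_offset, z_min), z_max)
```

Run at the control rate, such a law lets the foot "give" under excess load, which is what makes the compliant sole tractable despite its unmeasured passive deformation.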
|
5 |
Cognitive-Developmental Learning for a Humanoid Robot: A Caregiver's Gift. Arsenio, Artur Miguel. 26 September 2004 (has links)
The goal of this work is to build a cognitive system for the humanoid robot, Cog, that exploits human caregivers as catalysts to perceive and learn about actions, objects, scenes, people, and the robot itself. This thesis addresses a broad spectrum of machine learning problems across several categorization levels. Actions by embodied agents are used to automatically generate training data for the learning mechanisms, so that the robot develops categorization autonomously. Taking inspiration from the human brain, a framework of algorithms and methodologies was implemented to emulate different cognitive capabilities on the humanoid robot Cog. This framework is effectively applied to a collection of AI, computer vision, and signal processing problems. Cognitive capabilities of the humanoid robot are developmentally created, starting from infant-like abilities for detecting, segmenting, and recognizing percepts over multiple sensing modalities. Human caregivers provide a helping hand for communicating such information to the robot. This is done by actions that create meaningful events (by changing the world in which the robot is situated) thus inducing the "compliant perception" of objects from these human-robot interactions. Self-exploration of the world extends the robot's knowledge concerning object properties. This thesis argues for enculturating humanoid robots using infant development as a metaphor for building a humanoid robot's cognitive abilities. A human caregiver redesigns a humanoid's brain by teaching the humanoid robot as she would teach a child, using children's learning aids such as books, drawing boards, or other cognitive artifacts. Multi-modal object properties are learned using these tools and inserted into several recognition schemes, which are then applied to developmentally acquire new object representations.
The humanoid robot therefore sees the world through the caregiver's eyes. Building an artificial humanoid robot's brain, even at an infant's cognitive level, has been a long quest which still lies only in the realm of our imagination. Our efforts towards such a dimly imaginable task are developed according to two alternate and complementary views: cognitive and developmental.
|
6 |
Motion planning and perception: integration on humanoid robots. Nakhaei, Alireza. 24 September 2009 (has links)
Chapter 1 is essentially a brief general introduction that sets out the general context of motion planning, presents the organization of the document, and highlights some of its key points: a humanoid robot, a non-static environment, perception through artificial vision, and representation of the environment by occupancy grids. In Chapter 2, after a thorough literature review, the author proposes taking environment landmarks into account from the path-planning phase onward, so as to make the execution of motions more robust when the environment evolves between the time planning is performed and the time the robot moves (where "evolves" means either improved knowledge through map updates or an actual change in the environment itself). The concept is described and a formalization is proposed. Chapter 3 examines planning in dynamic environments in detail. The many existing methods are first analyzed and clearly presented. The choice made here is to describe the environment as decomposed into cells, themselves grouping voxels, the atomic elements of the representation. Since the environment changes, the author proposes re-evaluating the precomputed plan based on reliable detection of the region of the environment that may have changed. The approach is validated experimentally on one of the LAAS robotic platforms with good localization capabilities: the mobile manipulator Jido, currently more capable in this respect than the HRP-2 humanoid, was used. These experiments give consistent indications of the effectiveness of the chosen approach. Note also that planning relies on a bounding box of the humanoid rather than on a richer (multi-degree-of-freedom) representation.
Chapter 4, by contrast, addresses planning for the humanoid considered in its full complexity: all of the robot's degrees of freedom are taken into account. The author proposes refinements of existing methods, in particular in the way kinematic redundancy is exploited. The approach is well described and includes an optimization phase for the robot's global posture. Examples illustrate the discussion and provide comparisons with other methods. Chapter 5 addresses how to model the environment, perception here being by artificial vision, with the humanoid responsible for carrying out this perception itself as it advances through the environment. This is therefore a next-best-view problem: finding the view that best enriches the robot's knowledge of its environment. The approach again uses a bounding box of the humanoid rather than its full representation; it will be interesting to see in the future what taking the head or torso degrees of freedom into account could contribute to this problem. Chapter 6 describes the integration of all this work on the HRP-2 platform at LAAS-CNRS, an important part of any roboticist's work. / This thesis starts by proposing a new framework for motion planning using stochastic maps, such as occupancy-grid maps. In autonomous robotics applications, the robot's map of the environment is typically constructed online, using techniques from SLAM. These methods can construct a dense map of the environment, or a sparse map that contains a set of identifiable landmarks.
In this situation, path planning would be performed using the dense map, and the path would be executed in a sensor-based fashion, using feedback control to track the reference path based on sensor information regarding landmark position. Maximum-likelihood estimation techniques are used to model the sensing process as well as to estimate the most likely nominal path that will be followed by the robot during execution of the plan. The proposed approach is potentially a practical way to plan under the specific sorts of uncertainty confronted by a humanoid robot. The next chapter presents methods for constructing free paths in dynamic environments. The chapter begins with a comprehensive review of past methods, ranging from modifying sampling-based methods for the dynamic obstacle problem, to methods that were specifically designed for this problem. The thesis proposes to adapt a method reported originally by Leven et al. so that it can be used to plan paths for humanoid robots in dynamic environments. The basic idea of this method is to construct a mapping from voxels in a discretized representation of the workspace to vertices and arcs in a configuration space network built using sampling-based planning methods. When an obstacle intersects a voxel in the workspace, the corresponding nodes and arcs in the configuration space roadmap are marked as invalid. The part of the network that remains comprises the set of valid candidate paths. The specific approach described here extends previous work by imposing a two-level hierarchical structure on the representation of the workspace. The methods described in Chapters 2 and 3 essentially deal with low-dimensional problems (e.g., moving a bounding box). The reduction in dimensionality is essential, since the path planning problem confronted in these chapters is complicated by uncertainty and dynamic obstacles, respectively.
Chapter 4 addresses the problem of planning the full motion of a humanoid robot (whole-body task planning). The approach presented here is essentially a four-step approach. First, multiple viable goal configurations are generated using a local task solver, and these are used in a classical path planning approach with one initial condition and multiple goals. This classical problem is solved using an RRT-based method. Once a path is found, optimization methods are applied to the goal posture. Finally, classic path optimization algorithms are applied to the solution path and posture optimization. The fifth chapter describes algorithms for building a representation of the environment using stereo vision as the sensing modality. Such algorithms are necessary components of the autonomous system proposed in the first chapter of the thesis. A simple occupancy-grid based method is proposed, in which each voxel in the grid is assigned a number indicating the probability that it is occupied. The representation is updated during execution based on values received from the sensing system. The sensor model used is a simple Gaussian observation model in which measured distance is assumed to be true distance plus additive Gaussian noise. Sequential Bayes updating is then used to incrementally update occupancy values as new measurements are received. Finally, chapter 6 provides some details about the overall system architecture, and in particular, about those components of the architecture that have been taken from existing software (and therefore, do not themselves represent contributions of the thesis). Several software systems are described, including GIK, WorldModelGrid3D, HppDynamicObstacle, and GenoM.
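The sequential Bayes occupancy update described above has a standard log-odds form, sketched below. The structure (additive log-odds updates per measurement) is the usual formulation for occupancy grids; the function names and the inverse-sensor-model value used in the example are illustrative, not taken from the thesis.

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def prob(l):
    """Convert log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def update_voxel(l_prior, p_hit):
    """Sequential Bayes update of one voxel in log-odds form: each new
    measurement adds the log-odds of its inverse sensor model, so
    repeated consistent measurements drive the occupancy toward 0 or 1."""
    return l_prior + log_odds(p_hit)
```

Starting from the uninformed prior p = 0.5 (log-odds 0), a single measurement with inverse-sensor probability 0.7 yields exactly p = 0.7, and repeated hits accumulate toward certainty, which is why the additive form is preferred over multiplying probabilities directly.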
|
7 |
A Walking Controller for Humanoid Robots using Virtual Force. Jagtap, Vinayak V. 23 November 2019 (has links)
Current state-of-the-art walking controllers for humanoid robots use simple models, such as the Linear Inverted Pendulum Mode (LIPM), to approximate the Center of Mass (CoM) dynamics of a robot. These models are then used to generate CoM trajectories that keep the robot balanced while walking. Such controllers need prior knowledge of foot placements, which is generated by a walking pattern generator. While the robot is walking, any change in the goal position leads to aborting the existing foot placement plan and re-planning footsteps, followed by CoM trajectory generation. This thesis proposes a tightly coupled walking pattern generator and a reactive balancing controller to plan and execute one step at a time. Walking is an emergent behavior of such a controller, achieved by applying a virtual force in the direction of the goal. This virtual force, along with external forces acting on the robot, is used to compute the desired CoM acceleration and the footstep parameters for only the next step. The step location is selected based on the capture point, which is the point on the ground at which the robot should step to stay balanced. Because each footstep location is derived as needed based on the capture point, it is not necessary to compute a complete set of footsteps. Experiments show that this approach allows for simpler inputs, results in faster operation, and is inherently immune to external perturbing forces and other reaction forces from the environment. Experiments are performed on Boston Dynamics' Atlas robot and NASA's Valkyrie R5 robot in simulation, and on Atlas hardware.
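For the LIPM, the capture point invoked above has a well-known closed form: the CoM position offset by the CoM velocity divided by the pendulum's natural frequency. The formula is standard; the function name and the example parameter values are assumptions for illustration.

```python
import math

def capture_point(com_pos, com_vel, com_height, g=9.81):
    """Instantaneous capture point of the linear inverted pendulum:
    the point on the ground where the robot should step to come to rest.
    The natural frequency is omega = sqrt(g / z)."""
    omega = math.sqrt(g / com_height)
    return com_pos + com_vel / omega
```

At rest the capture point coincides with the CoM ground projection; any forward velocity pushes it ahead of the CoM, which is what makes it a natural one-step-at-a-time foot placement target for the controller described above.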
|
9 |
Robot-mediated interviews: a robotic intermediary for facilitating communication with children. Wood, Luke Jai. January 2015 (has links)
Robots have been used in a variety of education, therapy and entertainment contexts. This thesis introduces the novel application of humanoid robots for Robot-Mediated Interviews (RMIs). In the initial stages of this research it was necessary to first establish, as a baseline, whether children would respond to a robot in an interview setting; the first study therefore compared how children responded to a robot and a human in an interview setting. Following this successful initial investigation, the second study expanded on this research by examining how children would respond to questions of different types and difficulty from a robot compared to a human interviewer. Building on these studies, the third study investigated how a RMI approach would work for children with special needs. Following the positive results from the three studies indicating that a RMI approach may have some potential, three separate user panel sessions were organised with user groups that have expertise in working with children and for whom the system would be potentially useful in their daily work. The panel sessions were designed to gather feedback on the previous studies and outline a set of requirements to make a RMI system feasible for real-world users. The feedback and requirements from the user groups were considered and implemented in the system before conducting a final field trial of the system with a potential real-world user. The results of the studies in this research reveal that the children generally interacted with KASPAR in a manner very similar to how they interacted with a human interviewer, regardless of question type or difficulty. The feedback gathered from experts working with children suggested that the three most important and desirable features of a RMI system were: reliability, flexibility and ease of use. The feedback from the experts also indicated that a RMI system would most likely be used with children with special needs.
The final field trial with 10 children and a potential real-world user illustrated that a RMI system could potentially be used effectively outside of a research context, with all of the children in the trial responding to the robot. Feedback from the educational psychologist testing the system suggests that a RMI approach could have real-world applications if the system were developed further.
|
10 |
Human Motion Transfer on Humanoid Robot. Montecillo Puente, Francisco Javier. 26 August 2010 (has links)
The aim of this thesis is to transfer human motion to a humanoid robot online. In the first part of this work, the human motion recorded by a motion capture system is analyzed to extract salient features that are to be transferred to the humanoid robot. We introduce the humanoid normalized model as the set of motion properties. In the second part of this work, the robot motion that includes the human motion features is computed using inverse kinematics with priority. In order to transfer the motion properties, a stack of tasks is predefined; each motion property in the humanoid normalized model corresponds to one target in the stack of tasks. We propose a framework to transfer human motion online, as close as possible to the human performance, for the upper body. Finally, we study the problem of transferring feet motion.
In this study, the motion of the feet is analyzed to extract Euclidean trajectories adapted to the robot. Moreover, the trajectory of the center of mass which ensures that the robot does not fall is calculated from the feet positions and the inverted pendulum model of the robot. Using this result, it is possible to achieve complete imitation of upper-body movements, including feet motion.
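The inverted pendulum model used above to derive a CoM trajectory from the feet positions can be sketched by forward-integrating its dynamics, xddot = (g/z)(x - p), with the ZMP p held at the stance foot. This is a generic illustration under assumed parameter values, not the thesis's implementation.

```python
def lip_rollout(x, xdot, zmp, com_height=0.8, g=9.81, dt=0.005, steps=200):
    """Euler integration of the linear inverted pendulum
    xddot = (g / z) * (x - p), with the ZMP p fixed at the stance foot.
    Returns the CoM position and velocity after `steps` ticks."""
    w2 = g / com_height
    for _ in range(steps):
        xddot = w2 * (x - zmp)
        xdot += xddot * dt
        x += xdot * dt
    return x, xdot
```

With the CoM at rest directly over the foot the state stays put, while any offset from the ZMP grows exponentially; a CoM trajectory is chosen so that this divergence carries the robot toward the next footstep instead of toward a fall.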
|