21 |
Implementation Of A Closed-loop Action Generation System On A Humanoid Robot Through Learning By Demonstration. Tunaoglu, Doruk, 01 September 2010.
In this thesis the action learning and generation problem on a humanoid robot is studied. Our aim is to realize action learning, generation and recognition in one system, and our inspiration is the mirror neuron hypothesis, which suggests that action learning, generation and recognition share the same neural circuitry. Dynamic Movement Primitives, an efficient action learning and generation approach, are modified in order to fulfill this aim. The system we developed (1) can learn from multiple demonstrations, (2) can generalize to different conditions, (3) generates actions in a closed-loop and online fashion, and (4) can be used for online action recognition. These claims are supported by experiments, and the applicability of the developed system in the real world is demonstrated by implementing it on a humanoid robot.
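The thesis builds on and modifies Dynamic Movement Primitives (DMPs). As a rough, self-contained illustration of the standard discrete DMP formulation that such work starts from, here is a minimal one-dimensional Python sketch; the gains, basis-function heuristics and class name are illustrative assumptions, and unlike the thesis's system this vanilla version learns from a single demonstration and does not perform recognition.

```python
import numpy as np

# Minimal 1-D discrete Dynamic Movement Primitive.  Gains, basis-function
# heuristics and the class itself are illustrative, not taken from the thesis.
class DMP1D:
    def __init__(self, n_basis=20, alpha_s=4.0, K=100.0, D=20.0):
        self.alpha_s, self.K, self.D = alpha_s, K, D
        self.c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))  # basis centres in phase s
        self.h = n_basis ** 1.5 / self.c                            # basis widths (heuristic)
        self.w = np.zeros(n_basis)

    def _forcing(self, s):
        psi = np.exp(-self.h * (s - self.c) ** 2)
        return s * (psi @ self.w) / (psi.sum() + 1e-10)

    def learn(self, x_demo, dt):
        """Fit the forcing term so the DMP reproduces one demonstration."""
        x = np.asarray(x_demo, dtype=float)
        T = len(x) * dt
        self.x0, self.g, self.tau = x[0], x[-1], T
        xd = np.gradient(x, dt)
        xdd = np.gradient(xd, dt)
        s = np.exp(-self.alpha_s * np.arange(len(x)) * dt / T)      # canonical phase
        f_target = (T**2 * xdd + self.D * T * xd - self.K * (self.g - x)) / (self.g - self.x0)
        psi = np.exp(-self.h * (s[:, None] - self.c) ** 2)          # shape (T, n_basis)
        # Locally weighted regression: one weight per basis function.
        self.w = (psi * s[:, None] * f_target[:, None]).sum(0) / \
                 ((psi * s[:, None] ** 2).sum(0) + 1e-10)

    def rollout(self, dt, goal=None):
        """Generate a movement; the goal (and state) can be changed online."""
        g = self.g if goal is None else goal
        x, v, s, traj = self.x0, 0.0, 1.0, []
        while s > 0.01:
            f = self._forcing(s)
            vdot = (self.K * (g - x) - self.D * v + (g - self.x0) * f) / self.tau
            v += vdot * dt
            x += (v / self.tau) * dt
            s += (-self.alpha_s * s / self.tau) * dt
            traj.append(x)
        return np.array(traj)

# Example: learn a reaching motion from one demonstration, then generalize it.
demo = np.sin(np.linspace(0.0, np.pi / 2, 200))   # 2 s demonstration at 100 Hz
dmp = DMP1D()
dmp.learn(demo, dt=0.01)
reproduced = dmp.rollout(dt=0.01)                 # reproduces the demonstration
generalized = dmp.rollout(dt=0.01, goal=1.5)      # same shape, new goal
```

Because the state is re-integrated at every step, the rollout can be perturbed or given a new goal online, which is the sense in which DMP-style generation is closed-loop.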
|
22 |
Localization using natural landmarks off-field for robot soccer. He, Yuchen, 28 April 2014.
Localization is an important problem that must be solved for a robot to estimate its location from observation and odometry updates. Relying on artificial landmarks such as the lines, circles, and goalposts in the robot soccer domain, current robot localization requires prior knowledge and suffers from uncertainty due to partial observation, and is thus less generalizable than human localization, which draws on the surroundings for complementary information. To improve the certainty of the localization model, we propose a framework that recognizes orientation by actively using natural landmarks from the off-field surroundings, extracting these visual features from raw images. Our approach involves identifying visual features and natural landmarks, training with localization information to understand the surroundings, and predicting orientation by matching features. This approach can increase the precision of robot orientation and improve localization accuracy by eliminating uncertain hypotheses; it is also a general approach that can be extended to other localization problems.
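The abstract does not specify the feature pipeline, so the following Python sketch is only a hypothetical illustration of the core idea of matching off-field visual features against reference views with known orientations; ORB features, the brute-force matcher and the voting scheme are stand-ins, not the thesis's actual method.

```python
import cv2

# Hypothetical sketch: a small database of off-field reference views taken at
# known robot orientations is matched against the current frame, and the best
# supported orientation wins.  ORB is only a stand-in local feature here.
orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def build_database(reference_images, orientations_deg):
    """Store descriptors of off-field views captured at known orientations."""
    db = []
    for img, theta in zip(reference_images, orientations_deg):
        _, des = orb.detectAndCompute(img, None)
        if des is not None:
            db.append((theta, des))
    return db

def predict_orientation(query_image, db, max_distance=40):
    """Vote for the reference orientation with the most good matches."""
    _, des_q = orb.detectAndCompute(query_image, None)
    if des_q is None:
        return None
    votes = {}
    for theta, des_ref in db:
        matches = matcher.match(des_q, des_ref)
        good = [m for m in matches if m.distance < max_distance]
        votes[theta] = len(good)
    best = max(votes, key=votes.get)
    return best if votes[best] > 0 else None
```

In the spirit of the framework above, the predicted orientation could then be used to discard localization hypotheses that disagree with it.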
|
23 |
Metrics to evaluate human teaching engagement from a robot's point of view. Novanda, Ori, January 2017.
This thesis was motivated by the question of how robots can be taught by humans, with an emphasis on allowing people without programming skills to teach robots. Its focus was to investigate what criteria could or should be used by a robot to evaluate whether a human teacher is (or potentially could be) a good teacher in robot learning by demonstration; in effect, choosing the teacher that maximizes the benefit to the robot of learning by imitation/demonstration. The study approached this topic by taking a technology snapshot in time to see whether a representative example of research-laboratory robot technology is capable of assessing teaching quality. With this snapshot, the study evaluated how humans observe teaching quality, in order to establish measurement metrics that can be transferred as rules or algorithms beneficial from a robot's point of view. To evaluate teaching quality, the study looked at the teacher-student relationship from a human-human interaction perspective. Two factors were considered important in defining a good teacher: engagement and immediacy. The literature on the detailed elements of engagement and immediacy was reviewed, and physical effort was examined as a possible metric for measuring a teacher's level of engagement. An investigatory experiment evaluated which modality participants prefer when teaching a robot that can be taught by voice, gesture demonstration, or physical manipulation. The findings suggested that participants had no preference in terms of human effort for completing the task; however, there was a significant difference in enjoyment across input modalities and a marginal difference in the robot's perceived ability to imitate. A main experiment then studied the detailed elements a robot might use to identify a 'good' teacher. It was conducted in two parts: the first recorded the teachers' activities, and the second analysed how humans evaluate perceived engagement when assessing another human teaching a robot. The results suggested that, when evaluating engagement in human teaching of a robot, human evaluators also look for immediacy cues familiar from human-human interaction.
|
24 |
Fusion d'informations multi-capteurs pour la commande du robot humanoïde NAO / Multi-sensor information fusion: application for the humanoid NAO robot. Nguyen, Thanh Long, 05 April 2017.
This thesis shows how the perception of a NAO humanoid robot can be improved using multi-sensor fusion. Two scenarios are proposed: color detection and colored-object recognition. In both cases, the robot's own camera is combined with external cameras to increase detection reliability, since the experiments take place in an uncontrolled environment. For color detection, the user asks the NAO robot to find an object whose color is described in linguistic terms such as red, yellow, or brown. The main problem to solve is how the robot can recognize colors as human perception does. To do so, we propose a Sugeno fuzzy system to decide the color of a detected target. For simplicity, the chosen targets are colored balls, so the Hough transform is used to extract the average pixel values of each detected ball; these values are the inputs of the fuzzy system. The membership functions and inference rules of the system are constructed from human perceptual evaluation, and the output of the fuzzy system is a numerical value indicating a color name. A threshold value is introduced to define the decision zone for each color: if the fuzzy output falls into the interval constructed around a color, that color is taken as the output of the system. This works well under ideal conditions, but not in an environment with uncertainties and imprecision such as light variation, sensor quality, or similarity among colors, all of which affect the robot's detection. The threshold also introduces a compromise between uncertainty and reliability: if it is small, decisions are more reliable, but the number of uncertain cases increases, and vice versa. Since experimental validation favours a small threshold, handling uncertainty becomes all the more important. We therefore add further 2D cameras to the NAO robot's detection system. Each camera applies the same method as above, and their decisions are fused using the theory of belief functions to resolve ambiguities; the threshold value is taken into account when constructing mass values from each camera's Sugeno fuzzy output, and Dempster-Shafer's rule of combination with the maximum of pignistic probability is used for the decision. In our experiments, the detection rate of the fusion system is clearly better than that of each individual camera.
We extend this process to colored-object recognition with heterogeneous 2D and 3D cameras. The objects are learned beforehand during a training phase and, to stress uncertainty and imprecision, they look similar in many respects: geometrical form, surface, color, and so on. The recognition system uses two 2D cameras, one on NAO and one IP camera, plus a 3D camera to take advantage of depth information. For each camera, we extract feature descriptors of the objects (SURF for the 2D data and SHOT for the 3D data). Based on the correspondence to trained models stored in the learning base, each feature of a detected object votes for one or several classes, i.e. a hypothesis in the power set, and a mass function is constructed after a normalization step. Dempster-Shafer's rule of combination and the maximum of pignistic probability again make the final decision. Across three experiments, the recognition rate of the fusion system is much better than that of each individual camera, confirming the benefits of multi-sensor fusion for the robot's decision-making and reliability.
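To make the fusion step concrete, here is a minimal Python sketch of Dempster-Shafer combination followed by a decision by maximum of pignistic probability, as described in the abstract. The frame of discernment and the per-camera mass values are invented for illustration; in the thesis they would come from each camera's Sugeno fuzzy output and the decision threshold.

```python
# Toy frame of discernment: the ball is either red or yellow.
RED, YELLOW = frozenset({"red"}), frozenset({"yellow"})
UNSURE = RED | YELLOW

def dempster_combine(m1, m2):
    """Dempster's rule for mass functions given as {frozenset: mass}."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass falling on the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def pignistic(m):
    """Spread each focal element's mass evenly over its singletons."""
    betp = {}
    for focal, mass in m.items():
        for element in focal:
            betp[element] = betp.get(element, 0.0) + mass / len(focal)
    return betp

# Invented per-camera masses (NAO's camera and one external camera).
mass_nao = {RED: 0.6, UNSURE: 0.4}
mass_ext = {RED: 0.5, YELLOW: 0.2, UNSURE: 0.3}

fused = dempster_combine(mass_nao, mass_ext)
betp = pignistic(fused)
decision = max(betp, key=betp.get)           # -> "red"
print(fused, betp, decision)
```

Adding further cameras simply means folding each additional mass function in with the same combination rule before taking the pignistic maximum.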
|
25 |
Impact Force Reduction Using Variable Stiffness with an Optimal Approach for Jumping Robots. Calderon Chavez, Juan Manuel, 22 February 2017.
Running, jumping and walking are physical activities that humans perform simply and efficiently. However, these kinds of movements are difficult for humanoid robots to perform. Humans carry out these activities without difficulty thanks to their ability to absorb the ground impact force, an ability based on varying muscle stiffness.
The principal objective of this dissertation is to study vertical jumps in order to reduce the impact force in the landing phase of a humanoid robot's jump. In addition, the impact force reduction is applied to an arm-oriented movement with the objective of preserving the integrity of a falling humanoid robot.
This dissertation focuses on researching vertical jump motions by designing, implementing and testing variable stiffness control strategies based on Computed-Torque Control while tracking desired trajectories calculated from the Zero Moment Point (ZMP) and Center of Mass (CoM) conditions. The variable stiffness method is used to reduce the impact force during the landing phase. The variable stiffness approach was previously presented by Pratt et al. in [1], where they proposed that full stiffness is not always required. In this dissertation, the variable stiffness capability is implemented without integrating any springs or dampers: all the actuators in the robot are DC motors, and lower stiffness is achieved by designing the PID gain values of each motor's controller. This research proposes two approaches to generate variable stiffness. The first is based on optimal control theory, where a linear quadratic regulator is used to calculate the gain values of the PID controller. The second is based on fuzzy logic and calculates the proportional gain (KP) of the PID controller. Both approaches rest on the idea of computing PID gains that allow the DC motor positions to deviate from their target positions during the landing phase: as a motor moves away from its target position, the robot's CoM shifts towards a lower position, reducing the impact force. The fuzzy approach uses an estimate of the impact velocity and a specified desired soft-landing level at the moment of impact to calculate the P gain of the PID controller. The optimal approach uses the mathematical model of the motor and a factor that affects the Q matrix of the Linear Quadratic Regulator (LQR) to calculate the new PID values.
A one-legged robot is used to verify the jump motion in this research. Repeatability experiments were also successfully performed with both the optimal control and the fuzzy logic methods. The results are evaluated and compared in terms of impact force reduction and robot balance during the landing phase, with the impact force calculated from the displacement of the CoM during landing. Both methods accomplish the impact force reduction; however, robot balance shows a considerable improvement with the optimal control approach compared to the fuzzy logic method. In addition, the optimal variable stiffness method was successfully implemented and tested on falling robots: the robot's integrity is preserved by applying the optimal variable stiffness control to reduce the impact force on the arm joints, shoulders and elbows.
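As a rough illustration of the optimal approach described above (an LQR computing position/velocity gains from a motor model, with a factor scaling the Q matrix), here is a hypothetical Python sketch. The motor constants, the Q/R weights, the "softness" factor and the mapping of the LQR gains onto PD-like joint gains are all assumptions made for illustration, not the dissertation's actual model or values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Simplified DC-motor position model (inductance neglected); all constants
# below are invented for the sketch.
J, b_f, Kt, Ke, R = 1e-4, 1e-5, 0.05, 0.05, 2.0   # inertia, friction, torque/back-EMF consts, resistance

A = np.array([[0.0, 1.0],
              [0.0, -(b_f + Kt * Ke / R) / J]])    # states: [position, velocity]
B = np.array([[0.0],
              [Kt / (J * R)]])                      # input: motor voltage

def lqr_gains(softness):
    """Larger 'softness' -> smaller Q -> lower effective joint stiffness."""
    Q = np.diag([1.0, 0.01]) / softness
    Rw = np.array([[1e-3]])
    P = solve_continuous_are(A, B, Q, Rw)
    K = np.linalg.solve(Rw, B.T @ P)               # optimal state feedback u = -K x
    kp, kd = K[0, 0], K[0, 1]                      # interpreted as PD-like gains
    return kp, kd

print("flight phase  kp, kd:", lqr_gains(softness=1.0))
print("landing phase kp, kd:", lqr_gains(softness=20.0))
```

Shrinking Q at touchdown yields smaller gains, i.e. a more compliant joint that lets the motor deviate from its target and the CoM sink, which is the mechanism by which the impact force is reduced.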
|
26 |
Concise Modeling of Humanoid Dynamics / Kortfattad Modellering av Humanoiddynamik. Joachimbauer, Florian, January 2017.
Simulation of mechanical systems such as walking robots is an essential part of developing new and more applicable solutions in robotics. The increasing complexity of methods and technologies is a key challenge for common modelling languages, creating a need for flexible and scalable languages. The thesis concludes that an equation-based tool using the Euler-Lagrange formalism can simplify the modelling and simulation cycle, and can minimize development effort if the tool supports derivatives. Regretfully, equation-based tools with this ability are not commonly used for the simulation of humanoid robots. The research in this thesis compares equation-based tools with commonly used tools. The implementation uses the Euler-Lagrange method to model and simulate nonlinear mechanical systems. The focus of the work is the comparison of different tools through the stepwise development of a humanoid robot model based on the principle of passive walking; in addition, an informal argument for the stability of each developed model is given. To examine the thesis statement, the equation-based tool Acumen is evaluated against a commonly used tool, MATLAB. Based on the results achieved, it can be concluded that using equation-based tools with the Euler-Lagrange formalism is convenient and scalable for humanoid robots, and that the development process is significantly simplified by the advantages of such tools. Due to the experimental nature of Acumen, further research could investigate its possibilities for different mechanical systems as well as other techniques.
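To make the Euler-Lagrange workflow concrete, here is a small, self-contained derivation of a single pendulum's equation of motion (the simplest building block of a passive walker). The thesis itself works in Acumen and MATLAB; SymPy is used below only so the sketch runs on its own.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

# Derive the pendulum equation of motion from its Lagrangian L = T - V
# via d/dt(dL/d(theta')) - dL/d(theta) = 0.
t = sp.symbols("t")
m, l, g = sp.symbols("m l g", positive=True)
theta = sp.Function("theta")

x = l * sp.sin(theta(t))                  # point-mass position
y = -l * sp.cos(theta(t))
T = sp.Rational(1, 2) * m * (x.diff(t) ** 2 + y.diff(t) ** 2)   # kinetic energy
V = m * g * y                                                    # potential energy
L = sp.simplify(T - V)

eom = euler_equations(L, [theta(t)], t)[0]
print(sp.simplify(eom))
# -> Eq(-g*l*m*sin(theta(t)) - l**2*m*Derivative(theta(t), (t, 2)), 0)
```

The appeal of the equation-based style argued for in the thesis is that only the energies need to be written down; the tool's support for derivatives produces the equations of motion, whether for one pendulum or a multi-link walker.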
|
27 |
Řídicí jednotka pro humanoidní robot / Control Unit for Humanoid Robot. Florián, Tomáš, January 2009.
The main goal of this project is to understand the commercially sold Robonova-I humanoid robot and to design new improvements. The thesis is divided into five main chapters. The first (Robonova-I) concerns the Robonova-I kit, its assembly and its control possibilities; it deals in detail with the servo motors used and their operation, and compares them. The next chapter concerns the design of the control unit, together with a description of the processor architecture and the control possibilities of the demonstration board. The third presents the software implementation of the methods and commands that control the unit via the demonstration board. The chapter "The Design of the Printed Circuit Board" describes the features of the designed printed circuit board. The last chapter describes the final program for controlling the robot from a computer and some of the algorithms used by the robot's microprocessor.
|
28 |
Investigating the Social Influence of Different Humanoid Robots. Thunberg, Sofia, January 2017.
The aim of this thesis was to investigate the social influence of two humanoid robots, NAO and Pepper. The research questions were whether there is a difference in human social acceptance, in social influence, and in influence on human decision-making between NAO and Pepper. To answer these questions, an experiment using the Wizard of Oz method was conducted with 36 participants, 18 in each group, who interacted with either NAO or Pepper. Afterwards, the participants answered two questionnaires, NARS and GODSPEED, and took part in an additional interview. The results showed a significant difference on GODSPEED, indicating that NAO has a greater social influence on the participants than Pepper; the result for NARS was not significant. The decisions made during the experiment indicated that humans follow NAO more than Pepper, a result that was further explained and made more understandable by the interviews. For future studies it would be interesting to test the scenario with a larger sample and with a more natural Wizard of Oz design.
|
29 |
An Open Source Platform for Controlling the MANOI AT01 Humanoid Robot and Estimating its Center of Mass. Al-Faisali, Nihad, 06 June 2014.
No description available.
|
30 |
Bipedal Walking for a Full Size Humanoid Robot Utilizing Sinusoidal Feet Trajectories and Its Energy Consumption. Han, Jea-Kweon, 30 May 2012.
This research effort aims to develop a series of full-sized humanoid robots, and to research a simple but reliable bipedal walking method.
Since the debut of Wabot from Waseda University in 1973, several full-sized humanoid robots that can walk and run have been developed around the world. Although various humanoid robots have successfully demonstrated their capabilities, bipedal walking remains one of the main technical challenges that robotics researchers are attempting to solve. It is still challenging because most bipedal walking methods, including ZMP (Zero Moment Point) based methods, require not only fast sensor feedback but also fast and precise control of the actuators. For this reason, only a small number of research groups have the ability to create full-sized humanoid robots that can walk and run.
However, if we consider this problem from a different standpoint, the development of a full-sized humanoid robot can be simplified as long as the bipedal walking method is easily formulated. Therefore, this research focuses on developing a simple but reliable bipedal walking method. It then presents the designs of two versions of a new class of super lightweight (less than 13 kg), full-sized (taller than 1.4 m) humanoid robots called CHARLI-L (Cognitive Humanoid Autonomous Robot with Learning Intelligence – Lightweight) and CHARLI-2. These robots have unique designs compared to other full-sized humanoid robots. CHARLI-L utilizes spring-assisted parallel four-bar linkages with synchronized actuation to achieve the goals of light weight and low cost. Based on the experience and lessons learned from CHARLI-L, CHARLI-2 uses gear-train reduction mechanisms instead of parallel four-bar linkages to increase actuation torque at the joints while further reducing weight.
Both robots successfully demonstrated untethered bipedal locomotion using an intuitive walking method with sinusoidal foot movement, based on the ZMP method. Motion capture tests using six high-speed infrared cameras validate the proposed bipedal walking method. Additionally, the total power and energy consumption during walking is calculated from measured actuator currents.
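The abstract does not give the exact trajectory equations; purely as an illustration of a sinusoidal foot-trajectory gait of the kind described, here is a hypothetical Python sketch. The step length, step height, lateral sway amplitude and timing are invented values, not CHARLI's gait parameters.

```python
import numpy as np

# One step: the swing foot advances along a half cosine and lifts on a sine
# bump, while the hip sways laterally on a sine toward the support foot so
# the projected CoM/ZMP stays over the support polygon.
def swing_foot(t, T_step, step_length=0.20, step_height=0.04):
    """Swing-foot forward (x) and vertical (z) position for phase t in [0, T_step]."""
    phase = np.clip(t / T_step, 0.0, 1.0)
    x = step_length * 0.5 * (1.0 - np.cos(np.pi * phase))   # 0 -> step_length
    z = step_height * np.sin(np.pi * phase)                 # lift, then lower
    return x, z

def hip_sway(t, T_step, sway_amplitude=0.03, support_is_left=True):
    """Lateral hip offset toward the support foot (positive = left)."""
    sign = 1.0 if support_is_left else -1.0
    return sign * sway_amplitude * np.sin(np.pi * np.clip(t / T_step, 0.0, 1.0))

# Sample one 0.5 s step at 100 Hz.
ts = np.linspace(0.0, 0.5, 51)
trajectory = [(swing_foot(t, 0.5), hip_sway(t, 0.5)) for t in ts]
```

Joint commands would then be obtained from these Cartesian targets by inverse kinematics, with the sway amplitude tuned so the ZMP remains inside the support foot throughout the step.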
|