1

自動產生具多樣化運動的虛擬人物動畫 / Generating Humanoid Animation with Versatile Motions in a Virtual Environment

黃培智, Huang, Pei-Zhi Unknown Date (has links)
Research on global path planning and navigation strategies for mobile robots has been well studied in the robotics literature. Since the problem can usually be modeled as searching for a collision-free path in a 2D workspace, very efficient and complete algorithms can be employed. However, enabling a humanoid robot to move autonomously in a real-life environment remains a challenging problem. Unlike traditional wheeled robots, legged robots such as humanoids have the advanced ability to step over an object or stride over a deep gap with versatile locomotions. In this thesis, we propose a motion planning system capable of generating both global and local motions for a humanoid robot in a layered environment cluttered with obstacles and deep narrow gaps. The planner generates a gross motion plan that takes multiple locomotion modes, the humanoid's geometric properties, and its striding ability into consideration; a plan that satisfies these constraints is then realized by a local planner, which determines the most efficient footsteps and locomotion over uneven terrain. If the local planner fails, the failure is fed back to the global planner so that alternative paths can be considered. Experiments show that our system can efficiently generate humanoid motions that reach the goal in a real-life environment. The system can also be applied to a real humanoid robot to provide a high-level control mechanism.
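The global-to-local feedback loop described in this abstract can be sketched as a coarse grid planner whose paths are vetted by a local feasibility check, with rejected cells fed back to the global planner as constraints. This is an illustrative sketch only; the grid representation, costs, and feasibility test are assumptions, not the planner from the thesis:

```python
from heapq import heappush, heappop

def astar(grid, start, goal, banned):
    """Coarse global planner: A* over a 2D occupancy grid (0 = free)."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), start, [start])]
    seen = set()
    while frontier:
        _, cur, path = heappop(frontier)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in banned):
                heappush(frontier, (len(path) + h(nxt), nxt, path + [nxt]))
    return None  # no collision-free path remains

def plan_with_feedback(grid, start, goal, locally_feasible):
    """Replan globally whenever the local planner rejects part of the path."""
    banned = set()
    while True:
        path = astar(grid, start, goal, banned)
        if path is None:
            return None  # no alternative route remains
        bad = next((p for p in path if not locally_feasible(p)), None)
        if bad is None:
            return path  # every cell passed the local check
        banned.add(bad)  # feed the failure back to the global planner
```

In this sketch the local planner is abstracted to a per-cell predicate; in the thesis it instead computes footsteps and locomotion over the terrain, but the feedback structure is the same.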
2

Biomimetic motion synthesis for synthetic humanoids

Hale, Joshua G. January 2003 (has links)
No description available.
3

DESIGN OF HUMANOID NECK-MOVEMENT AND EYE-EXPRESSION MECHANISMS

Navarrete Ortiz de Lanzagorta, Ana January 2012 (has links)
This project aims to design and construct a 3D CAD model of a humanoid robot head, that is, the mechanisms that simulate the motions of the neck, the eyes, and the eyelids. The project was developed in collaboration with the Cognition and Interaction Laboratory at the University of Skövde. The literature review found that most humanoid robots on the market can perform neck movements. The problem is that today's neck motions are not as smooth as those of a human neck, and the movements of facial details, such as the eyes and the mouth, are less developed. Only robots created for research on human-robot interaction allow for facial expressions; however, the rest of the bodies of such robots are not as well developed as the face. The conclusion is that no humanoid robot presents both a fully expressive face and a well-developed body. This project presents new mechanical concepts for providing smooth humanoid neck motions as well as for showing expressions on the robot's face. Three parts of the humanoid head were investigated: the neck, the eyes, and the eyelids. By examining the mechanical concepts in use today, two types of mechanisms were found: parallel and serial. For the neck, the serial mechanism was chosen because the motion obtained is smoother. The eyes and the eyelids were also designed with a serial mechanism due to the limited space in the head. The three parts were built in a 3D CAD program in order to test the entire head mechanism. The result is a head mechanism that enables smooth motion of the neck and provides enough degrees of freedom to simulate feelings through eye expressions.
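A serial mechanism like the one chosen for the neck composes one rotation per joint, so the head orientation is simply a product of rotation matrices. A minimal forward-kinematics sketch, with an assumed yaw-pitch-roll joint order that is not taken from the project:

```python
import math

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_z(t):  # yaw: turning the head left/right
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_y(t):  # pitch: nodding
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_x(t):  # roll: tilting the head sideways
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def neck_orientation(yaw, pitch, roll):
    """Serial chain: each joint's rotation composes with the previous one."""
    return mat_mul(mat_mul(rot_z(yaw), rot_y(pitch)), rot_x(roll))
```

A parallel mechanism, by contrast, would constrain the same end pose through several linked chains, which is why the two options trade smoothness against packaging differently.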
4

A Dual-SLIP Model For Dynamic Walking In A Humanoid Over Uneven Terrain

Liu, Yiping January 2015 (has links)
No description available.
5

A Walking Controller for Humanoid Robots using Virtual Force

Jagtap, Vinayak V. 23 November 2019 (has links)
Current state-of-the-art walking controllers for humanoid robots use simple models, such as the Linear Inverted Pendulum Mode (LIPM), to approximate the Center of Mass (CoM) dynamics of a robot. These models are then used to generate CoM trajectories that keep the robot balanced while walking. Such controllers need prior information about foot placements, which is generated by a walking pattern generator. While the robot is walking, any change in the goal position means aborting the existing foot placement plan and re-planning footsteps, followed by CoM trajectory generation. This thesis proposes a tightly coupled walking pattern generator and reactive balancing controller that plan and execute one step at a time. Walking is an emergent behavior of this controller, achieved by applying a virtual force in the direction of the goal. This virtual force, along with the external forces acting on the robot, is used to compute the desired CoM acceleration and the footstep parameters for only the next step. The step location is selected based on the capture point, the point on the ground at which the robot should step to stay balanced. Because each footstep location is derived as needed from the capture point, it is not necessary to compute a complete set of footsteps. Experiments show that this approach allows for simpler inputs, results in faster operation, and is inherently immune to external perturbations and other reaction forces from the environment. Experiments are performed on Boston Dynamics' Atlas robot and NASA's Valkyrie R5 robot in simulation, and on Atlas hardware.
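The capture point used for step selection has a standard closed form under the LIPM: x_cp = x + ẋ/ω, where ω = √(g/z) is the pendulum's natural frequency. The sketch below computes it and shows the one-step-ahead selection driven by a virtual force; the gain and control period are illustrative assumptions, not values from the thesis:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def capture_point(com_pos, com_vel, com_height):
    """Instantaneous capture point under the LIPM: where the robot should
    step to bring itself to rest (shown here in one dimension)."""
    omega = math.sqrt(G / com_height)  # LIPM natural frequency
    return com_pos + com_vel / omega

def next_step(com_pos, com_vel, com_height, goal, force_gain=0.5, dt=0.1):
    """Pick only the next footstep: a virtual force toward the goal sets
    the desired CoM acceleration, and the step lands at the resulting
    capture point."""
    accel = force_gain * (goal - com_pos)  # virtual force -> CoM acceleration
    new_vel = com_vel + accel * dt         # velocity after one control period
    return capture_point(com_pos, new_vel, com_height)
```

Because the capture point is recomputed from the current state at every step, perturbations are absorbed automatically: a push simply shifts the next step location rather than invalidating a precomputed footstep plan.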
7

Bipedal Robotic Walking on Flat-Ground, Up-Slope and Rough Terrain with Human-Inspired Hybrid Zero Dynamics

Nadubettu Yadukumar, Shishir 14 March 2013 (has links)
The thesis shows how to achieve bipedal robotic walking on flat ground, up-slope, and rough terrain by using Human-Inspired Control. We begin by considering human walking data and find outputs (or virtual constraints) that, when calculated from the human data, are described by simple functions of time (termed canonical walking functions). Formally, we construct a torque controller, through model inversion, that drives the outputs of the robot to the outputs of the human as represented by the canonical walking function; while these functions fit the human data well, they do not a priori guarantee robotic walking (due to the physical differences between humans and robots). An optimization problem is presented that determines the best fit of the canonical walking function to the human data while guaranteeing walking for a specific bipedal robot; in addition, constraints can be added that guarantee physically realizable walking. We consider a physical bipedal robot, AMBER, and, exploiting a special property of its motors, namely low leakage inductance, we approximate the motor model and translate the formal controllers that satisfy the constraints into an efficient voltage-based controller that can be implemented directly on AMBER. The end result is walking on flat ground and up-slope that is not just human-like but also remarkably robust. Having obtained walking on specific well-defined terrains separately, rough-terrain walking is achieved by dynamically changing the extended canonical walking function (ECWF) that the robot's outputs should track at every step. The state of the robot after every non-stance foot strike is actively sensed, and a new ECWF is constructed to ensure that the hybrid zero dynamics are respected in the next step. Finally, the technique developed is tried on different terrains in simulation and on AMBER, showing how the walking gait morphs depending on the terrain.
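In the human-inspired control literature, the canonical walking function is the time response of a linearly damped mass-spring system. The parameterization below is an assumption drawn from that literature, not copied from the thesis; the sketch evaluates the function and fits its constant offset to data by least squares:

```python
import math

def cwf(t, a1, a2, a3, a4, a5):
    """Canonical walking function: the time response of a linearly damped
    mass-spring system, i.e., a decaying oscillation about an offset a5.
    (This specific form is an assumption from the human-inspired control
    literature, not taken from the thesis.)"""
    return math.exp(-a4 * t) * (a1 * math.cos(a2 * t)
                                + a3 * math.sin(a2 * t)) + a5

def fit_offset(times, values, a1, a2, a3, a4):
    """Least-squares fit of the offset a5 given the oscillatory parameters:
    with the transient fixed, the optimal a5 is the mean residual."""
    residuals = [y - cwf(t, a1, a2, a3, a4, 0.0)
                 for t, y in zip(times, values)]
    return sum(residuals) / len(residuals)
```

The full optimization in the thesis fits all parameters at once, subject to constraints that guarantee walking for the robot; fitting only the offset keeps the sketch closed-form.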
8

How human are the Crakers? : A study about human identity in Margaret Atwood’s Oryx and Crake

Karlsson, Paola January 2011 (has links)
This essay addresses the subject of humanity in Oryx and Crake by Margaret Atwood. The aim of the thesis is to argue that the Crakers developed into human beings with the help of their teachers. This was done by examining different aspects of humanity, such as human identity, language, religion, and life and death, and how these traits of humanity were developed. The development of the Crakers' identities is also discussed with regard to teachers, teaching, and the relation between power and knowledge, that is, how the Crakers' teachers helped them or tried to prevent them from growing into humans. The relation between power and knowledge shows how the teacher holds power over his pupils, since he decides what he will teach them. The results reveal that, through teaching and acquiring traits that are known to be human, the Crakers became as human as they could without having been born human.
9

Mental imagery in humanoid robots

Seepanomwan, Kristsana January 2016 (has links)
Mental imagery gives humans the ability to predict prospective happenings based on their own intended actions, to recall occurrences from the past, and to reproduce the perceptual experience. This cognitive capability is essential for human survival in an unfolding and changing world. By means of internal representation, mental imagery offers other cognitive functions (e.g., decision making, planning) the possibility to assess information on objects or events that are not currently perceived. Furthermore, there is evidence to suggest that humans are able to employ this ability from the early stages of infancy. Although the future employment of humanoid robots appears promising, comprehensive research on mental imagery in these robots is lacking: working within a human environment requires more than a set of pre-programmed actions. This thesis aims to investigate the use of mental imagery in humanoid robots, where it could serve the demands of their cognitive skills as it does in humans. Based on empirical data and neuro-imaging studies of mental imagery, the thesis proposes a novel neurorobotic framework that enables humanoid robots to exploit mental imagery. Through a series of experiments on mental rotation and tool use, the results of this study confirm this potential. Chapters 5 and 6 detail experiments on mental rotation that investigate a bio-constrained neural network framework accounting for mental rotation processes. They are based on neural mechanisms involving not only visual imagery but also affordance encoding, motor simulation, and the anticipation of the visual consequences of actions. The proposed model is in agreement with the theoretical and empirical research on mental rotation. The models were validated with both a simulated and a physical humanoid robot (iCub) engaged in solving a typical mental rotation task.
The results show that the model is able to solve a typical mental rotation task and, in agreement with data from psychology experiments, that response times depend linearly on the angular disparity between the objects. Furthermore, the experiments in chapter 6 propose a novel neurorobotic model whose macro-architecture is constrained by knowledge of the brain; it encompasses a rather general mental rotation mechanism and incorporates a biologically plausible decision-making mechanism. The new model is tested on the humanoid robot iCub in tasks requiring it to mentally rotate 2D geometrical images appearing on a computer screen. The results show that the robot has an enhanced capacity to generalize mental rotation to new objects, and they reveal the possible effects of overt wrist movements on mental rotation. These results indicate that the model represents a further step in the identification of the embodied neural mechanisms that might underlie mental rotation in humans, and they might also give hints for enhancing robots' planning capabilities. In chapter 7, the primary purpose of the experiment on tool use development, conducted through computational modelling, is to demonstrate that developmental characteristics of tool use identified in human infants can be attributed to intrinsic motivations. Through the processes of sensorimotor learning and rewarding mechanisms, intrinsic motivations play a key role as the driving force behind infants' exploratory behaviours, i.e., play. Sensorimotor learning permits the emergence of other cognitive functions, i.e., affordances, mental imagery, and problem-solving. The experiment also tests two candidate mechanisms that might underlie infants' ability to use a tool: overt movements and mental imagery.
By means of reinforcement learning and sensorimotor learning, knowledge of how to use a tool can emerge through random movements or trial and error, which may accidentally reveal a solution (a sequence of actions) to a given tool-use task. Mental imagery, on the other hand, was used to replace the outcome of overt movements in the process of self-determined rewards: instead of deriving a reward from physical interactions, mental imagery allows the robot to evaluate the consequences of actions in mind before performing movements to solve a given tool-use task. Collectively, then, the case of mental imagery in humanoid robots was systematically addressed by means of a number of neurorobotic models and two categories of spatial problem-solving tasks: mental rotation and tool use. Mental rotation evidently involves the employment of mental imagery, and this thesis confirms the potential for its exploitation by humanoid robots. Additionally, the studies on tool use demonstrate that the key components assumed and included in the experiments on mental rotation, namely affordances and mental imagery, can be acquired by robots through sensorimotor learning.
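The linear dependence of response time on angular disparity reported in this abstract can be illustrated with a toy model that rotates an internal image in fixed increments until it matches the stimulus, so that the step count grows linearly with the disparity. The 5-degree increment and matching tolerance are illustrative assumptions, not parameters of the thesis model:

```python
import math

def rotate(points, angle):
    """Rotate a list of 2D points about the origin."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def matches(a, b, tol=1e-6):
    """True when every point of a lies within tol of its counterpart in b."""
    return all(math.hypot(ax - bx, ay - by) < tol
               for (ax, ay), (bx, by) in zip(a, b))

def mental_rotation_steps(shape, target_angle, step=math.radians(5)):
    """Rotate the internal image in fixed increments until it matches the
    rotated stimulus; the step count stands in for response time."""
    target = rotate(shape, target_angle)
    current = list(shape)
    for steps in range(500):  # step budget guards against non-termination
        if matches(current, target):
            return steps
        current = rotate(current, step)
    raise ValueError("no match within the step budget")
```

Doubling the angular disparity doubles the step count, reproducing the linear response-time signature that the robot experiments compare against human data.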
10

Towards a Human-like Robot for Medical Simulation

Thayer, Nicholas D. 05 October 2011 (has links)
Medical mannequins provide the first hands-on training for nurses and doctors and help eliminate human mistakes that would otherwise take place with a real person. The closer the mannequin is to mimicking a human being, the more effective the training; thus, additional features such as movable limbs and eyes, vision processing, and realistic social interaction provide a more fulfilling learning experience. A humanoid robot with a 23-degree-of-freedom (DOF) hand was developed that is capable of performing complex dexterous tasks such as typing on a keyboard. A single-DOF elbow and a two-DOF shoulder were designed and optimized to maintain human form while being able to dynamically lift common household items. A 6-DOF neck and a 13-DOF face with a highly expressive silicone skin-motor arrangement were also developed. The face is capable of talking and making several expressions and is used to train the student to pick up on emotional cues such as eye contact and body language during the interview stage. A pair of 3-DOF legs and a torso allow the humanoid to either lie down or sit up. An algorithm was developed that activates only the necessary areas of code in order to increase the control cycle rate, which greatly improves the vision-tracking capabilities of the eyes. The simulator was tested at Carilion Clinic in Roanoke, VA with several of the medical staff, and their feedback is provided in this document. / Master of Science