11

Design and Implementation of a Scalable Real-Time Motor Controller Architecture for Humanoid Robots and Exoskeletons

Shah, Shriya 24 August 2017
Embedded systems for humanoid robots are required to be reliable, low-cost, scalable, and robust. Most applications involving humanoid robots require efficient force control of Series Elastic Actuators (SEAs). These control loops often carry precise timing requirements due to the safety-critical nature of the underlying hardware, and the motor controller needs to run fast and interface with several sensors. Commercially available motor controllers generally do not satisfy all of the requirements for speed, reliability, ease of use, and small size. This work presents a custom motor controller that can be used for real-time force control of SEAs on humanoid robots and exoskeletons. Emphasis has been placed on designing a system that is scalable, easy to use, and robust. The hardware and software architecture for control is presented along with results obtained on THOR, a novel humanoid robot based on Series Elastic Actuators. / Master of Science
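As a concrete illustration of the kind of force-control loop this abstract refers to, here is a minimal Python sketch of the textbook series-elastic-actuator idea: output force is estimated from spring deflection and regulated with a PD loop at a fixed rate. The names, gains, and stiffness below are illustrative placeholders, not THOR's actual controller.

```python
SPRING_K = 1000.0   # spring stiffness [N/m]; illustrative value
KP, KD = 50.0, 0.5  # PD gains; illustrative values
DT = 0.001          # control period [s]; a 1 kHz loop

def sea_force_step(f_desired, motor_pos, joint_pos, prev_error):
    """One control tick: return a motor command and the error to carry over."""
    # In an SEA, output force is proportional to spring deflection.
    f_measured = SPRING_K * (motor_pos - joint_pos)
    error = f_desired - f_measured
    d_error = (error - prev_error) / DT
    return KP * error + KD * d_error, error
```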
12

Design of a Humanoid Robot for Disaster Response

Lee, Bryce Kenji Tim-Sung 21 April 2014
This study focuses on the design and implementation of a humanoid robot for disaster response. In particular, this thesis investigates the lower-body design in detail, with the upper body discussed at a higher level. The Tactical Hazardous Operations Robot (THOR) was designed to compete in the DARPA Robotics Challenge, where it must complete tasks based on first-responder operations. These tasks, ranging from traversing rough terrain to driving a utility vehicle, call for a versatile platform in a human-sized form factor. A physical experiment replicating the proposed tasks generated a set of joint ranges of motion (RoM). Desired limb lengths were determined by comparing existing robots, the test subject from that experiment, and an average human. Simulations using the desired RoM and limb lengths were used to calculate baseline joint torques. Based on the resulting design constraints, THOR is a 34-degree-of-freedom humanoid that stands 1.78 [m] tall and weighs 65 [kg]. The 12 lower-body joints are driven by series elastic linear actuators, with multiple joints actuated in parallel. The parallel actuation mimics the human body, where multiple muscles pull on the same joint cooperatively. The legs retain high joint torques throughout their large RoM, with some joints achieving torques as high as 289 [Nm]. The upper body uses traditional rotary actuators to drive the waist, arms, and head. The proprioceptive sensor selection was informed by past experience with humanoid platforms, and the perception sensors were selected to match the competition. / Master of Science
13

Development and Characterization of an Interprocess Communications Interface and Controller for Bipedal Robots

Burton, James David 18 January 2016
As robotic systems grow in complexity, they inevitably undergo a process of specialization whereby they separate into an array of interconnected subsystems and individual processes. In order to function as a unified system, these processes rely heavily on interprocess communications (IPC) to transfer information between subsystems and various execution loops. This thesis presents the design, implementation, and validation of the Valor ROS Controller, a hybrid IPC interface layer and robot controller. The Valor ROS Controller connects the motion control system developed by Team VALOR for the DARPA Robotics Challenge (DRC), implemented with the internally created Bifrost IPC, to the high-level software developed by Team ViGIR, which uses the Robot Operating System (ROS) IPC framework. The Valor ROS Controller also acts as a robot controller designed to run on THOR and ESCHER, and is configurable to use different control modes and controller implementations. By combining an IPC interface layer with controllers, the Valor ROS Controller enabled Team VALOR to use Team ViGIR's software capabilities at the DRC Finals. In addition to the qualitative validation of Team VALOR competing at the DRC Finals, this thesis studies the efficiency of the Valor ROS Controller by quantifying its computational resource utilization, message pathway latency, and joint controller tracking. Another contribution of this thesis is the quantification of the end-effector pose error incurred by whole-body motions. This phenomenon had been observed on both THOR and ESCHER as one of their arms moves through a trajectory; however, it had never been studied in depth on either robot. The results demonstrate that the Valor ROS Controller uses computational resources appropriately and has message latencies on the order of 50 ms. The results also indicate several avenues for improving arm tracking in Team VALOR's system. Whole-body motions account for approximately 5 cm of the end-effector pose error observed on hardware when an arm is at near-full extension. / Master of Science
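As an aside on how a message-pathway latency figure like the ~50 ms above might be quantified, here is a hypothetical sketch: stamp each message at the sending layer, subtract on receipt, and aggregate the samples. The message layout and function names are illustrative, not taken from the Valor ROS Controller; the stamp-based approach also assumes both endpoints share a monotonic clock (i.e., the same host).

```python
import time
from statistics import mean, quantiles

latencies_ms = []

def on_send(payload):
    # Stamp the message at the sending layer.
    return {"stamp": time.monotonic(), "payload": payload}

def on_receive(msg):
    # Subtract the send stamp on receipt to get one-way latency.
    latencies_ms.append((time.monotonic() - msg["stamp"]) * 1e3)

for i in range(1000):
    # In a real system the message would cross an IPC boundary here.
    on_receive(on_send(i))

print(f"mean={mean(latencies_ms):.3f} ms  "
      f"p95={quantiles(latencies_ms, n=20)[18]:.3f} ms")
```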
14

Evolution of grasping behaviour in anthropomorphic robotic arms with embodied neural controllers

Massera, Gianluca January 2012
The work reported in this thesis focuses on synthesising, through an automatic design process based on artificial evolution, neural controllers for anthropomorphic robots that are able to manipulate objects. The use of Evolutionary Robotics makes it possible to reduce the characteristics and parameters specified by the designer to a minimum, and the robot's skills evolve as it interacts with the environment. The primary objective of these experiments is to investigate whether neural controllers that regulate the state of the motors on the basis of current and previously experienced sensor states (i.e. without relying on an inverse model) can enable the robots to solve such complex tasks. Another objective is to investigate whether the Evolutionary Robotics approach can be successfully applied to scenarios significantly more complex than those to which it is typically applied, in terms of the complexity of the robot's morphology, the size of the neural controller, and the complexity of the task. The obtained results indicate that skills such as reaching, grasping, and discriminating among objects can be accomplished without the need to learn precise inverse internal models of the arm/hand structure. This supports the hypothesis that the human central nervous system (CNS) does not necessarily have internal models of the limbs (not excluding the possibility that it possesses such models for other purposes), but can act by shifting the equilibrium points/cycles of the underlying musculoskeletal system. Consequently, the resulting controllers for such fundamental skills would be less complex, and the learning of more complex behaviours would be easier to design because the underlying controller of the arm/hand structure is simpler. Moreover, the obtained results show how the evolved robots exploit sensory-motor coordination in order to accomplish their tasks.
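A toy sketch of the evolutionary loop this kind of Evolutionary Robotics work relies on, under the simplifying assumption of a plain elitist mutation scheme; the fitness function below is a stand-in, since real experiments score reaching and grasping behaviour in a physics simulation.

```python
import random

N_WEIGHTS, POP_SIZE, N_GENERATIONS, SIGMA = 200, 50, 100, 0.1

def evaluate(weights):
    # Placeholder fitness: a real run would drive the arm/hand model
    # with a neural network using these weights and score the behaviour.
    return -sum(w * w for w in weights)

# Population of neural-controller weight vectors.
population = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP_SIZE)]

for gen in range(N_GENERATIONS):
    ranked = sorted(population, key=evaluate, reverse=True)
    elite = ranked[: POP_SIZE // 5]                    # keep the top 20%
    # Refill the population with mutated copies of the elite.
    population = elite + [
        [w + random.gauss(0, SIGMA) for w in random.choice(elite)]
        for _ in range(POP_SIZE - len(elite))
    ]
```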
15

Expressive Collaborative Music Performance via Machine Learning

Xia, Guangyu 01 August 2016
Techniques from Artificial Intelligence and Human-Computer Interaction have empowered computer music systems with the ability to perform with humans across a wide spectrum of applications. However, musical interaction between humans and machines is still far less musical than interaction between humans, since most systems lack any representation or capability of musical expression. This thesis contributes various techniques, especially machine-learning algorithms, to create artificial musicians that perform expressively and collaboratively with humans. The current system focuses on three aspects of expression in human-computer collaborative performance: 1) expressive timing and dynamics, 2) basic improvisation techniques, and 3) facial and body gestures. Timing and dynamics are the two most fundamental aspects of musical expression and the main focus of this thesis. We model the expression of different musicians as co-evolving time series. Based on this representation, we develop a set of algorithms, including a sophisticated spectral learning method, to discover regularities of expressive musical interaction from rehearsals. Given a learned model, an artificial performer generates its own musical expression by interacting with a human performer on a predefined score. The results show that, with a small number of rehearsals, we can successfully apply machine learning to generate more expressive and human-like collaborative performance than a baseline automatic-accompaniment algorithm. This is the first application of spectral learning in the field of music. Beyond expressive timing and dynamics, we consider some basic improvisation techniques in which musicians have the freedom to interpret pitches and rhythms. We developed a model that trains a different set of parameters for each individual measure and focuses on predicting the number of chords and the number of notes per chord. Given the model's prediction, an improvised score is decoded using nearest-neighbor search, which selects the training example whose parameters are closest to the estimate. Our results show that this model generates more musical, interactive, and natural collaborative improvisation than a reasonable baseline based on mean estimation. Although not conventionally considered to be “music,” body and facial movements are also important aspects of musical expression. We study body and facial expression using a humanoid saxophonist robot. We contribute the first algorithm that enables a robot to perform an accompaniment for a musician and react to the human performance with gestural and facial expression. The current system uses rule-based performance-motion mapping and separates robot motions into three groups: finger motions, body movements, and eyebrow movements. We also conduct the first subjective evaluation of the joint effect of automatic accompaniment and robot expression. Our results show that robot embodiment and expression enable more musical, interactive, and engaging human-computer collaborative performance.
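A hedged sketch of the nearest-neighbor decoding step described above: given the model's per-measure parameter estimate, select the training measure whose parameters lie closest and reuse its score fragment. The data layout and names are illustrative, not the thesis implementation.

```python
def nearest_neighbor_decode(estimate, training_measures):
    """estimate: parameter vector, e.g. (n_chords, notes_per_chord).
    training_measures: list of (parameter_vector, score_fragment) pairs."""
    def dist(params):
        # Squared Euclidean distance between parameter vectors.
        return sum((a - b) ** 2 for a, b in zip(estimate, params))
    _, fragment = min(training_measures, key=lambda m: dist(m[0]))
    return fragment

# Example with made-up data: the model predicts (3 chords, 2 notes/chord).
training = [((2, 1), "fragment A"), ((3, 2), "fragment B"), ((5, 4), "fragment C")]
print(nearest_neighbor_decode((3, 2), training))  # -> "fragment B"
```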
16

Biologically Inspired Legs and Novel Flow Control Valve Toward a New Approach for Accessible Wearable Robotics

Moffat, Shannon Marija 18 April 2019
The Humanoid Walking Robot (HWR) is a research platform for the study of legged and wearable robots actuated with Hydro Muscles. The fluid-operated HWR is representative of a class of biologically inspired, and in some aspects highly biomimetic, robotic musculoskeletal appendages that show certain advantages over more conventional artificial limbs and braces for physical therapy/rehabilitation, assistance with daily living, and augmentation. The HWR closely mimics the structure and function of the human body, including the skeleton, ligaments, tendons, and muscles, and can produce close-to-human movements even under simplified control laws. One of the main drawbacks of this approach is the lack of an appropriate fluid-flow management system in the form of affordable, lightweight, compact, good-quality valves suitable for robotics applications. To resolve this shortcoming, the Compact Robotic Flow Control Valve (CRFC Valve) is introduced and successfully proof-of-concept tested. The HWR equipped with the CRFC Valve has the potential to be a highly energy-efficient, lightweight, controllable, affordable, and customizable solution that can resolve single-muscle action.
17

Visual Modeling of an Unknown Object by an Autonomous Humanoid Robot

Foissotte, Torea 03 December 2010
This work addresses the problem of autonomously constructing the 3D model of an unknown object using a humanoid robot. More specifically, we consider an HRP-2, guided by vision, operating in a known and possibly cluttered environment. Our method considers the available visual information, the constraints on the robot body, and the model of the environment in order to generate pertinent postures and the necessary motions around the object. Our two solutions to the Next-Best-View problem are based on a specific posture generator, in which a posture is computed by solving an optimization problem. The first solution is a local approach in which an original rendering algorithm is designed specifically so that it can be included directly in the posture generator. The rendering algorithm can display complex 3D shapes while taking self-occlusions into account. The second solution seeks more global solutions by decoupling the problem into two steps: (i) find the best sensor pose while satisfying a reduced set of constraints on the humanoid, and (ii) generate a whole-body posture with the posture generator. The first step relies on global sampling and BOBYQA, a derivative-free optimization method, to converge toward pertinent viewpoints in non-convex feasible configuration spaces. Our approach is tested in real conditions using a coherent architecture that includes various complex software components tailored to the specificities of the humanoid robot. This experiment integrates ongoing work on motion planning, motion control, and visual processing, to allow the completion of the 3D object reconstruction in future work.
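A sketch of step (i) of the decoupled approach, assuming the common sample-then-refine pattern: candidate sensor poses are sampled globally, and the best seeds are polished with a derivative-free local optimizer. Powell's method stands in for BOBYQA here, and the objective is a placeholder for the thesis's visibility/constraint criterion.

```python
import numpy as np
from scipy.optimize import minimize

def neg_information_gain(pose):
    # Placeholder objective: a real one scores how much unseen surface
    # of the object the sensor would observe from this pose, subject to
    # reachability constraints. Minimizing it maximizes expected gain.
    return float(np.sum((pose - np.array([1.0, 0.5, 1.2])) ** 2))

rng = np.random.default_rng(0)
samples = rng.uniform(-2.0, 2.0, size=(100, 3))           # global sampling
best_seeds = sorted(samples, key=neg_information_gain)[:5]

# Derivative-free local refinement of the most promising seeds.
refined = [minimize(neg_information_gain, seed, method="Powell")
           for seed in best_seeds]
best_pose = min(refined, key=lambda r: r.fun).x
# Step (ii) would hand best_pose to the whole-body posture generator.
```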
18

Cognitive approach for representing the haptic physical human-humanoid interaction

Bussy, Antoine 10 October 2013
Robots are very close to arriving in our homes. But before they do, they must master physical interaction with humans in a safe and efficient way. Such capacities are essential for them to live among us and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, we focus on endowing the biped humanoid robot HRP-2 with the capacity to perform haptic joint actions with humans. First, we study how human dyads collaborate to transport a cumbersome object. From this study, we derive a global model of motion primitives that we use to implement a proactive behavior on the HRP-2 robot, so that it can perform the same task with a human. We then assess the performance of our proactive control scheme through user studies. Finally, we outline several potential extensions to our work: the self-stabilization of a humanoid through physical interaction, the generalization of the motion-primitives model to other collaborative tasks, and the addition of vision to haptic joint actions.
19

Teaching an Old Robot New Tricks: Learning Novel Tasks via Interaction with People and Things

Marjanovic, Matthew J. 20 June 2003
As AI has begun to reach out beyond its symbolic, objectivist roots into the embodied, experientialist realm, many projects are exploring different aspects of creating machines which interact with and respond to the world as humans do. Techniques for visual processing, object recognition, emotional response, gesture production and recognition, etc., are necessary components of a complete humanoid robot. However, most projects invariably concentrate on developing a few of these individual components, neglecting the issue of how all of these pieces would eventually fit together. The focus of the work in this dissertation is on creating a framework into which such specific competencies can be embedded, in such a way that they can interact with each other and build up layers of new functionality. To be of any practical value, such a framework must satisfy the real-world constraints of functioning in real time with noisy sensors and actuators. The humanoid robot Cog provides an unapologetically adequate platform from which to take on such a challenge. This work makes three contributions to embodied AI. First, it offers a general-purpose architecture for developing behavior-based systems distributed over networks of PCs. Second, it provides a motor-control system that simulates several biological features which impact the development of motor behavior. Third, it develops a framework for a system that enables a robot to learn new behaviors via interaction with itself and the outside world. A few basic functional modules are built into this framework, enough to demonstrate the robot learning some very simple behaviors taught by a human trainer. A primary motivation for this project is the notion that it is practically impossible to build an "intelligent" machine unless it is designed partly to build itself. This work is a proof of concept of such an approach to integrating multiple perceptual and motor systems into a complete learning agent.
20

Action Recognition Through Action Generation

Akgun, Baris 01 August 2010
This thesis investigates how a robot can use action-generation mechanisms to recognize the action of an observed actor in an on-line manner, i.e., before the action is completed. Towards this end, Dynamic Movement Primitives (DMPs), an action-generation method proposed for imitation, are modified to recognize the actions of an actor. Specifically, a human actor performed three different reaching actions to two different objects. Three DMPs, each corresponding to a different reaching action, were trained using this data. The proposed method used an object-centered coordinate system to define the variables for the action, eliminating the difference between the actor and the robot. During testing, the robot simulated action trajectories with its learned DMPs and compared the resulting trajectories against the observed one. The error between the simulated and observed trajectories was integrated into a recognition signal, on which recognition was performed. The proposed method was applied on the iCub humanoid robot platform using an active motion-capture device for sensing. The results showed that the system was able to recognize actions with high accuracy as they unfolded in time. Moreover, the feasibility of the approach is demonstrated in an interactive game between the robot and a human.
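A minimal sketch of the recognition rule the abstract describes: each learned DMP simulates its own trajectory, the accumulated distance to the observed partial trajectory serves as the recognition signal, and the action with the smallest signal wins. Trajectories are plain arrays here; a real system would roll the DMPs forward online as observations arrive.

```python
import numpy as np

def recognize(observed, simulated_by_action):
    """observed: (t, d) array — the partial trajectory seen so far.
    simulated_by_action: dict name -> (T, d) array with T >= t,
    one DMP-generated trajectory per known action."""
    t = len(observed)
    # Integrate the pointwise error over the observed prefix.
    signals = {
        name: float(np.sum(np.linalg.norm(sim[:t] - observed, axis=1)))
        for name, sim in simulated_by_action.items()
    }
    # The action whose simulation tracks the observation best wins.
    return min(signals, key=signals.get), signals
```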
