51

Expressive Collaborative Music Performance via Machine Learning

Xia, Guangyu 01 August 2016 (has links)
Techniques of Artificial Intelligence and Human-Computer Interaction have empowered computer music systems with the ability to perform with humans via a wide spectrum of applications. However, musical interaction between humans and machines is still far less musical than the interaction between humans, since most systems lack any representation or capability of musical expression. This thesis contributes various techniques, especially machine-learning algorithms, to create artificial musicians that perform expressively and collaboratively with humans. The current system focuses on three aspects of expression in human-computer collaborative performance: 1) expressive timing and dynamics, 2) basic improvisation techniques, and 3) facial and body gestures. Timing and dynamics are the two most fundamental aspects of musical expression and also the main focus of this thesis. We model the expression of different musicians as co-evolving time series. Based on this representation, we develop a set of algorithms, including a sophisticated spectral learning method, to discover regularities of expressive musical interaction from rehearsals. Given a learned model, an artificial performer generates its own musical expression by interacting with a human performer following a predefined score. The results show that, with a small number of rehearsals, we can successfully apply machine learning to generate more expressive and human-like collaborative performance than the baseline automatic accompaniment algorithm. This is the first application of spectral learning in the field of music. Besides expressive timing and dynamics, we consider some basic improvisation techniques where musicians have the freedom to interpret pitches and rhythms. We develop a model that trains a different set of parameters for each individual measure and focuses on predicting the number of chords and the number of notes per chord.
Given the model prediction, an improvised score is decoded using nearest-neighbor search, which selects the training example whose parameters are closest to the estimate. Our results show that our model generates more musical, interactive, and natural collaborative improvisation than a reasonable baseline based on mean estimation. Although not conventionally considered to be “music,” body and facial movements are also important aspects of musical expression. We study body and facial expressions using a humanoid saxophonist robot. We contribute the first algorithm to enable a robot to perform an accompaniment for a musician and react to human performance with gestural and facial expression. The current system uses rule-based performance-motion mapping and separates robot motions into three groups: finger motions, body movements, and eyebrow movements. We also conduct the first subjective evaluation of the joint effect of automatic accompaniment and robot expression. Our results show that robot embodiment and expression enable more musical, interactive, and engaging human-computer collaborative performance.
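The nearest-neighbor decoding step described above can be sketched as follows. This is a minimal illustration, not the thesis's actual system: the per-measure parameters (number of chords, mean notes per chord) and measure labels are invented placeholders.

```python
import numpy as np

def nearest_neighbor_decode(predicted, train_feats, train_scores):
    # pick the rehearsal measure whose parameter vector is closest to the model estimate
    dists = np.linalg.norm(train_feats - predicted, axis=1)
    return train_scores[int(np.argmin(dists))]

# toy per-measure parameters: (number of chords, mean notes per chord)
train_feats = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])
train_scores = ["measure_A", "measure_B", "measure_C"]

choice = nearest_neighbor_decode(np.array([3.2, 1.8]), train_feats, train_scores)
print(choice)  # the training measure closest to the predicted parameters
```

Decoding by copying the closest training example guarantees the output is always a playable, human-produced measure, at the cost of never interpolating between examples.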
52

Génération de mouvement en robotique mobile et humanoïde / Generation of motion in mobile and humanoid robotics

Saurel, Guilhem 03 October 2017 (has links)
La génération de mouvements de locomotion en robotique mobile est étudiée dans le monde académique depuis plusieurs décennies. La théorie concernant la modélisation et le contrôle des robots à roues est largement mature. Cependant, la mise en œuvre effective de ces modèles dans des conditions réelles demande des études complémentaires. Dans cette thèse, nous présentons trois projets mettant en œuvre trois différents types de robots mobiles. Nous débutons dans chaque cas par une analyse sur les qualités recherchées d’un mouvement dans un contexte particulier, qu’il soit artistique ou industriel, et terminons par la présentation des architectures algorithmiques et logicielles mises en œuvre, notamment dans le cadre d’expositions de plusieurs mois, où le public est invité à partager l’espace d’évolution des robots. La réalisation de ces projets montre que certains choix technologiques semblant insignifiants au moment de la conception des robots sont déterminants dans les dernières étapes de la production. On peut extrapoler cette remarque depuis ces robots mobiles à deux ou trois degrés de liberté vers des robots humanoïdes pouvant en avoir plusieurs dizaines. La stratégie classique qui consiste à concevoir, dans un premier temps, l’architecture mécatronique des robots humanoïdes, pour se poser ensuite la question de leur contrôle, atteint ses limites, comme le montrent par exemple la consommation énergétique et la difficulté d’obtenir des mouvements de marche dynamique sur ces robots, pourtant conçus dans le but de marcher. Dans une perspective globale de conception des robots marcheurs, nous proposons un système de codesign, où il est possible d’optimiser simultanément la conception mécanique et les contrôleurs d’un robot. / Generation of locomotion motions in mobile robotics has been studied in the academic world for several decades. The theory concerning the modeling and control of wheeled robots is largely mature.
However, the actual implementation of these models in real conditions requires further studies. In this thesis, we present three projects using three different types of mobile robots. In each case, we begin with an analysis of the qualities required of a motion in a particular context, whether artistic or industrial, and end with a presentation of the algorithmic and software architectures implemented, particularly in the context of exhibitions lasting several months, where the public is invited to share the robots' operating space. The realization of these projects shows that some technological choices that seem insignificant when the robots are designed prove decisive in the final stages of production. One can extrapolate this remark from these mobile robots with two or three degrees of freedom to humanoid robots that can have several dozen. The classical strategy of first designing the mechatronic architecture of humanoid robots and only then addressing the question of their control has reached its limits, as illustrated, for example, by their energy consumption and the difficulty of obtaining dynamic walking motions on these robots, even though they were designed for the purpose of walking. From a global perspective on robot design, we propose a codesign system in which the mechanical design and the controllers of a robot can be optimized simultaneously. This system is first tested on various examples as a proof of concept. It is then applied to a comparison of rigid and elastic actuators on different biped robots, to a study of the impact of head stabilization on the overall stabilization of the body, and finally to the design of a semi-passive walker prototype.
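The codesign idea — searching over a mechanical parameter and a controller parameter jointly rather than sequentially — can be illustrated with a toy model. Everything here is assumed for illustration (a single joint holding a posture under gravity, a parallel spring of stiffness k, a proportional gain kp); none of it is taken from the thesis.

```python
import numpy as np

# Toy model (all numbers assumed): one joint holding angle theta_star under gravity,
# with a parallel spring of stiffness k and a proportional controller of gain kp.
m, g, l = 1.0, 9.81, 0.3      # link mass [kg], gravity, COM distance [m]
theta_star = 0.8              # target joint angle [rad]

def cost(k, kp):
    # static motor torque: gravity load minus what the spring supplies
    tau_motor = m * g * l * np.sin(theta_star) - k * theta_star
    err = abs(tau_motor) / kp                  # steady-state tracking-error proxy
    return tau_motor ** 2 + 100.0 * err ** 2   # motor effort plus tracking penalty

# Codesign: grid-search mechanics (k) and control (kp) together
ks = np.linspace(0.0, 10.0, 101)
kps = np.linspace(5.0, 50.0, 10)
best_cost, best_k, best_kp = min((cost(k, kp), k, kp) for k in ks for kp in kps)
print(f"k = {best_k:.2f}, kp = {best_kp:.1f}")
```

The joint search picks a stiffness that cancels most of the gravity load, so the motor only supplies a small residual torque — exactly the kind of outcome a design-first, control-later workflow can miss.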
53

Biologically Inspired Legs and Novel Flow Control Valve Toward a New Approach for Accessible Wearable Robotics

Moffat, Shannon Marija 18 April 2019 (has links)
The Humanoid Walking Robot (HWR) is a research platform for the study of legged and wearable robots actuated with Hydro Muscles. The fluid-operated HWR is representative of a class of biologically inspired, and in some aspects highly biomimetic, robotic musculoskeletal appendages that show certain advantages over more conventional artificial limbs and braces for physical therapy/rehabilitation, assistance with daily living, and augmentation. The HWR closely mimics the structure and function of the human body, including the skeleton, ligaments, tendons, and muscles, and can emulate close-to-human movements even when subjected to simplified control laws. One of the main drawbacks of this approach is the lack of an appropriate fluid-flow management support system, in the form of affordable, lightweight, compact, good-quality valves suitable for robotics applications. To resolve this shortcoming, the Compact Robotic Flow Control Valve (CRFC Valve) is introduced and successfully proof-of-concept tested. Equipped with the CRFC Valve, the HWR has the potential to be a highly energy-efficient, lightweight, controllable, affordable, and customizable solution that can resolve single-muscle action.
54

Modelisation Visuelle d'un Objet Inconnu par un Robot Humanoide Autonome / Visual Modeling of an Unknown Object by an Autonomous Humanoid Robot

Foissotte, Torea 03 December 2010 (has links)
Ce travail est focalisé sur le problème de la construction autonome du modèle 3D d'un objet inconnu en utilisant un robot humanoïde. Plus particulièrement, nous considérons un HRP-2 guidé par la vision au sein d'un environnement connu qui peut contenir des obstacles. Notre méthode considère les informations visuelles disponibles, les contraintes sur le corps du robot ainsi que le modèle de l'environnement dans le but de générer des postures adéquates et les mouvements nécessaires autour de l'objet. Le problème de sélection de vue ("Next-Best-View") est abordé en se basant sur un générateur de postures qui calcule une configuration par la résolution d'un problème d'optimisation. Une première solution est une approche locale où un algorithme de rendu original a été conçu afin d'être inclus directement dans le générateur de postures. Une deuxième solution augmente la robustesse aux minima locaux en décomposant le problème en 2 étapes : (i) trouver la pose du capteur tout en satisfaisant un ensemble de contraintes réduit, et (ii) calculer la configuration complète du robot avec le générateur de postures. La première étape repose sur des méthodes d'optimisation globale et locale (BOBYQA) afin de converger vers des points de vue pertinents dans des espaces de configuration admissibles non convexes. Notre approche est testée en conditions réelles par le biais d'une architecture cohérente qui inclut différents composants logiciels spécifiques à l'usage d'un humanoïde. Ces expériences intègrent des travaux de recherche en cours en planification de mouvements, contrôle de mouvements et traitement d'image, qui pourront permettre de construire de façon autonome le modèle 3D d'un objet.
/ This work addresses the problem of autonomously constructing the 3D model of an unknown object using a humanoid robot. More specifically, we consider an HRP-2 evolving in a known, possibly cluttered environment, guided by vision. Our method considers the visual information available, the constraints on the robot body, and the model of the environment in order to generate pertinent postures and the necessary motions around the object. Our two solutions to the Next-Best-View problem are based on a specific posture generator, where a posture is computed by solving an optimization problem. The first solution is a local approach to the problem, where an original rendering algorithm is specifically designed to be directly included in the posture generator. The rendering algorithm can display complex 3D shapes while taking into account self-occlusions. The second solution seeks more global solutions by decoupling the problem into two steps: (i) find the best sensor pose while satisfying a reduced set of constraints on the humanoid, and (ii) generate a whole-body posture with the posture generator. The first step relies on global sampling and BOBYQA, a derivative-free optimization method, to converge toward pertinent viewpoints in non-convex feasible configuration spaces. Our approach is tested in real conditions using a coherent architecture that includes various complex software components accounting for the specificities of the humanoid robot. This experiment integrates ongoing work on motion planning, motion control, and visual processing, to allow the completion of the 3D object reconstruction in future work.
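Step (i) of the decoupled approach — globally sampling sensor poses and scoring each by how much unobserved surface it would reveal — can be sketched in miniature. The point cloud, the thresholded-dot-product visibility proxy, and the sampling scheme below are all stand-in assumptions; the thesis uses an actual rendering algorithm with self-occlusions and BOBYQA refinement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the object's still-unobserved surface: unit normals of points
unseen = rng.normal(size=(200, 3))
unseen /= np.linalg.norm(unseen, axis=1, keepdims=True)

def gain(view_dir):
    # crude visibility proxy: count unseen normals facing the candidate viewpoint
    view_dir = view_dir / np.linalg.norm(view_dir)
    return int(np.sum(unseen @ view_dir > 0.5))

# step (i): global sampling of candidate sensor directions around the object
candidates = rng.normal(size=(500, 3))
best_view = max(candidates, key=gain)
print("information gain of best view:", gain(best_view))
# step (ii) would pass best_view to the whole-body posture generator
```

Sampling globally before refining locally is what lets the method escape the local minima that a purely gradient-driven viewpoint search would fall into in a non-convex feasible space.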
55

Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde / Cognitive approach for representing the haptic physical human-humanoid interaction

Bussy, Antoine 10 October 2013 (has links)
Les robots sont tout près d'arriver chez nous. Mais avant cela, ils doivent acquérir la capacité d'interagir physiquement avec les humains, de manière sûre et efficace. De telles capacités sont indispensables pour qu'ils puissent vivre parmi nous, et nous assister dans diverses tâches quotidiennes, comme porter un meuble. Dans cette thèse, nous avons pour but de doter le robot humanoïde bipède HRP-2 de la capacité à effectuer des actions haptiques en commun avec l'homme. Dans un premier temps, nous étudions comment des dyades humaines collaborent pour transporter un objet encombrant. De cette étude, nous extrayons un modèle global de primitives de mouvement que nous utilisons pour implémenter un comportement proactif sur le robot HRP-2, afin qu'il puisse effectuer la même tâche avec un humain. Puis nous évaluons les performances de ce schéma de contrôle proactif au cours de tests utilisateurs. Finalement, nous exposons diverses pistes d'évolution de notre travail : la stabilisation d'un humanoïde à travers l'interaction physique, la généralisation du modèle de primitives de mouvements à d'autres tâches collaboratives et l'inclusion de la vision dans des tâches collaboratives haptiques. / Robots are very close to arriving in our homes. But before they do, they must master physical interaction with humans, in a safe and efficient way. Such capacities are essential for them to live among us and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, we focus on endowing the biped humanoid robot HRP-2 with the capacity to perform haptic joint actions with humans. First, we study how human dyads collaborate to transport a cumbersome object. From this study, we define a global motion-primitives model that we use to implement a proactive behavior on the HRP-2 robot, so that it can perform the same task with a human. Then, we assess the performance of our proactive control scheme through user studies.
Finally, we discuss several potential extensions of our work: self-stabilization of a humanoid through physical interaction, generalization of the motion-primitives model to other collaborative tasks, and the addition of vision to haptic joint actions.
56

Contact force sensing from motion tracking / Capture de forces de contact par capture de mouvement

Pham, Tu-Hoa 09 December 2016 (has links)
Le sens du toucher joue un rôle fondamental dans la façon dont nous percevons notre environnement, nous déplaçons, et interagissons délibérément avec d'autres objets ou êtres vivants. Ainsi, les forces de contact informent à la fois sur l'action réalisée et sa motivation. Néanmoins, l'utilisation de capteurs de force traditionnels est coûteuse, lourde, et intrusive. Dans cette thèse, nous examinons la perception haptique par la capture de mouvement. Ce problème est difficile du fait qu'un mouvement donné peut généralement être causé par une infinité de distributions de forces possibles, en multi-contact. Dans ce type de situations, l'optimisation sous contraintes physiques seule ne permet que de calculer des distributions de forces plausibles, plutôt que fidèles à celles appliquées en réalité. D'un autre côté, les méthodes d'apprentissage de type « boîte noire » pour la modélisation de structures cinématiquement et dynamiquement complexes sont sujettes à des limitations en termes de capacité de généralisation. Nous proposons une formulation du problème de la distribution de forces exploitant ces deux approches ensemble plutôt que séparément. Nous capturons ainsi la variabilité dans la façon dont on contrôle instinctivement les forces de contact tout en nous assurant de leur compatibilité avec le mouvement observé. Nous présentons notre approche à la fois pour la manipulation et les interactions corps complet avec l'environnement. Nous validons systématiquement nos résultats avec des mesures de référence et fournissons des données exhaustives pour encourager et évaluer les travaux futurs sur ce nouveau sujet. / The human sense of touch is of fundamental importance in the way we perceive our environment, move ourselves, and purposefully interact with other objects or beings. Thus, contact forces are informative on both the realized task and the underlying intent. However, monitoring them with force transducers is a costly, cumbersome and intrusive process.
In this thesis, we investigate the capture of haptic information from motion tracking. This is a challenging problem, as a given motion can generally be caused by an infinity of possible force distributions in multi-contact. In such scenarios, physics-based optimization alone may only capture force distributions that are physically compatible with a given motion, rather than those really applied. In contrast, machine learning techniques for the black-box modelling of kinematically and dynamically complex structures are often prone to generalization issues. We propose a formulation of the force distribution problem utilizing both approaches jointly rather than separately. We thus capture the variability in the way humans instinctively regulate contact forces while also ensuring their compatibility with the observed motion. We present our approach on both manipulation and whole-body interaction with the environment. We consistently back our findings with ground-truth measurements and provide extensive datasets to encourage and serve as benchmarks for future research on this new topic.
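The interplay between a learned force prior and physics-based consistency can be sketched in its simplest form: start from a prior guess of how the load is split across contacts, then apply the minimum-norm correction that makes the forces consistent with the motion observed. All numbers (mass, acceleration, prior split) are invented placeholders, and the thesis's actual formulation is considerably richer than this two-contact, sum-of-forces-only toy.

```python
import numpy as np

m = 60.0                                 # subject mass [kg] (assumed)
g = np.array([0.0, 0.0, -9.81])
a = np.array([0.0, 0.0, 0.2])            # CoM acceleration from motion tracking

f_net = m * (a - g)                      # net contact force required by the dynamics

# learned prior (placeholder numbers): how this subject tends to split the load
f_prior = np.array([[0.0, 0.0, 350.0],   # left foot
                    [0.0, 0.0, 250.0]])  # right foot

# minimum-norm correction of the prior so the forces match the observed motion
residual = f_net - f_prior.sum(axis=0)
f = f_prior + residual / len(f_prior)

print(f.sum(axis=0))  # now equals f_net: physically consistent
```

Physics alone would accept any split summing to f_net; the prior picks the split a human would plausibly produce, which is exactly the gap the thesis identifies between plausible and faithful force distributions.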
57

Action learning experiments using spiking neural networks and humanoid robots

de Azambuja, Ricardo January 2018 (has links)
The way our brain works is still an open question, but one thing seems to be clear: biological neural systems are computationally powerful, robust, and noisy. Natural nervous systems are able to control limbs in different scenarios with high precision. As neural networks in living beings communicate through spikes, modern neuromorphic systems try to mimic them by using spike-based neuron models. This thesis focuses on the advancement of neurorobotics, or brain-inspired robotic arm controllers based on artificial neural network architectures. The architecture chosen to implement these controllers was the spiking-neuron version of the Reservoir Computing framework, called Liquid State Machines. The main goal is to explore the possibility of using brain-inspired neural networks to control a robot by demonstration. Moreover, it aims to achieve systems that are robust to environmental noise and internal structure destruction, presenting a graceful degradation. As validation, a series of action-learning experiments is presented in which simulated robotic arms are controlled. The investigation starts with a two-degrees-of-freedom arm and moves to the research version of the Rethink Robotics Inc. collaborative humanoid robot Baxter. Moreover, a proof-of-concept experiment is also carried out using the real Baxter robot. The results show that Liquid State Machines, when endowed with an extra external feedback loop, can also be employed to control humanoid robotic arms more complex than a simple planar two-degrees-of-freedom one. Additionally, the new parallel architecture presented here was able to withstand noise and internal destruction better than a simple use of multiple columns, while also presenting a graceful degradation behaviour.
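The reservoir-computing principle behind Liquid State Machines — a fixed random recurrent network whose states are read out by a trained linear layer — can be sketched with a rate-based echo state network. This is a deliberate simplification: the thesis uses spiking neurons and an external feedback loop, whereas this stand-in replaces spikes with tanh units, and the ramp-to-sine task is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100  # reservoir size

# Fixed random recurrent weights, scaled for a spectral radius near 0.9
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N)) * 0.9
W_in = rng.normal(size=N)

def run_reservoir(u):
    # drive the reservoir with input sequence u and record its states
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# train only the linear readout: map a ramp input to a sine "joint trajectory"
u = np.linspace(0.0, 1.0, 200)
target = np.sin(2.0 * np.pi * u)
X = run_reservoir(u)
W_out, *_ = np.linalg.lstsq(X, target, rcond=None)
mse = np.mean((X @ W_out - target) ** 2)
print("training MSE:", mse)
```

Only W_out is learned; the recurrent weights stay random and fixed. That separation is what makes reservoir approaches attractive for learning arm trajectories by demonstration, and it is also why damage to part of the reservoir tends to degrade performance gracefully rather than catastrophically.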
58

Recognizing Engagement Behaviors in Human-Robot Interaction

Ponsler, Brett 17 January 2011 (has links)
Based on analysis of human-human interactions, we have developed an initial model of engagement for human-robot interaction which includes the concept of connection events, consisting of: directed gaze, mutual facial gaze, conversational adjacency pairs, and backchannels. We implemented the model in the open source Robot Operating System and conducted a human-robot interaction experiment to evaluate it.
59

Applications of the Virtual Holonomic Constraints Approach : Analysis of Human Motor Patterns and Passive Walking Gaits

Mettin, Uwe January 2008 (has links)
<p>In the field of robotics there is great interest in developing strategies and algorithms to reproduce human-like behavior. One can think of human-like machines that may replace humans in hazardous working areas, perform enduring assembly tasks, serve the elderly and handicapped, etc. The main challenges in the development of such robots are, first, to construct sophisticated electro-mechanical humanoids and, second, to plan and control human-like motor patterns.</p><p>A promising idea for motion planning and control is to reparameterize any somewhat coordinated motion in terms of virtual holonomic constraints, i.e. the trajectories of all degrees of freedom of the mechanical system are described by geometric relations among the generalized coordinates. Imposing such virtual holonomic constraints on the system dynamics makes it possible to generate synchronized motor patterns by feedback control. In fact, there exist consistent geometric relations in ordinary human movements that can be used advantageously. In this thesis the virtual constraints approach is extended to a wider and more rigorous use for analyzing, planning, and reproducing human-like motions, based on mathematical tools previously utilized for very particular control problems.</p><p>It is often the case that some desired motions cannot be achieved by the robot due to limitations in available actuation power. This constraint raises the question of how to modify the mechanical design in order to achieve better performance. An underactuated planar two-link robot is used to demonstrate that springs can complement the actuation in parallel to an ordinary motor. Motion planning is carried out for the original robot dynamics while the springs are treated as part of the control action, with a torque profile suited to the preplanned trajectory.</p><p>Another issue discussed in this thesis is finding stable and unstable (hybrid) limit cycles for passive dynamic walking robots without integrating the full set of differential equations. Such a procedure is demonstrated for the compass-gait biped by means of optimization with a reduced number of initial conditions and parameters to search over. The properties of virtual constraints and reduced dynamics are exploited to solve this problem.</p>
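The core reparameterization idea — expressing every joint trajectory as a geometric function of one path variable — can be sketched as follows. The polynomial constraint and the two-link setup are assumptions chosen for illustration, not the thesis's identified human motor patterns.

```python
import numpy as np

# Virtual holonomic constraint sketch: the two joint angles of a planar two-link
# are synchronized through one path variable theta, with q1 = theta, q2 = phi(theta).
phi_coeffs = np.array([0.1, -0.5, 0.3])   # assumed polynomial constraint phi

def constrained_posture(theta, dtheta):
    q1, dq1 = theta, dtheta
    q2 = np.polyval(phi_coeffs, theta)                        # geometric relation
    dq2 = np.polyval(np.polyder(phi_coeffs), theta) * dtheta  # chain rule
    return (q1, q2), (dq1, dq2)

# sweeping theta yields a synchronized motor pattern for both degrees of freedom
for theta in np.linspace(0.0, 1.0, 5):
    print(constrained_posture(theta, dtheta=1.0))
```

Once the constraint phi is imposed by feedback, the closed-loop motion is governed by the single variable theta, which is what reduces limit-cycle search and gait analysis to a low-dimensional problem.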
60

Teaching an Old Robot New Tricks: Learning Novel Tasks via Interaction with People and Things

Marjanovic, Matthew J. 20 June 2003 (has links)
As AI has begun to reach out beyond its symbolic, objectivist roots into the embodied, experientialist realm, many projects are exploring different aspects of creating machines which interact with and respond to the world as humans do. Techniques for visual processing, object recognition, emotional response, gesture production and recognition, etc., are necessary components of a complete humanoid robot. However, most projects invariably concentrate on developing a few of these individual components, neglecting the issue of how all of these pieces would eventually fit together. The focus of the work in this dissertation is on creating a framework into which such specific competencies can be embedded, in such a way that they can interact with each other and build layers of new functionality. To be of any practical value, such a framework must satisfy the real-world constraints of functioning in real-time with noisy sensors and actuators. The humanoid robot Cog provides an unapologetically adequate platform from which to take on such a challenge. This work makes three contributions to embodied AI. First, it offers a general-purpose architecture for developing behavior-based systems distributed over networks of PCs. Second, it provides a motor-control system that simulates several biological features which impact the development of motor behavior. Third, it develops a framework for a system which enables a robot to learn new behaviors via interacting with itself and the outside world. A few basic functional modules are built into this framework, enough to demonstrate the robot learning some very simple behaviors taught by a human trainer. A primary motivation for this project is the notion that it is practically impossible to build an "intelligent" machine unless it is designed partly to build itself. This work is a proof-of-concept of such an approach to integrating multiple perceptual and motor systems into a complete learning agent.
