201

Segmentation et reconnaissance des gestes pour l'interaction homme-robot cognitive / Gesture Segmentation and Recognition for Cognitive Human-Robot Interaction

Simao, Miguel 17 December 2018 (has links)
This thesis presents a human-robot interaction (HRI) framework for classifying large vocabularies of static and dynamic hand gestures captured with wearable sensors. Static and dynamic gestures are classified separately thanks to a segmentation process. Experimental tests on the UC2017 hand-gesture dataset showed high classification accuracy. For online frame-by-frame classification on raw, incomplete data, Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) deep models outperformed static models trained on hand-crafted features, at the cost of training and inference time. Online classification of dynamic gestures also enables successful predictive (early) classification. Rejection of out-of-vocabulary gestures is handled through semi-supervised learning of a network in the Auxiliary Conditional Generative Adversarial Network framework; the proposed network achieved high accuracy in rejecting untrained gestures from the UC2018 DualMyo dataset.
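The online, frame-by-frame classification described in this abstract lends itself to a small illustration. The sketch below is a hedged approximation, not the thesis's actual UC2017 models: the network size, the 20-dimensional sensor input, the 24-class vocabulary, and the 0.9 confidence threshold for an early (predictive) decision are assumptions made for the example.

```python
# Hypothetical sketch of frame-by-frame gesture classification with an LSTM.
# All dimensions and thresholds are illustrative assumptions.
import torch
import torch.nn as nn

class FrameLSTMClassifier(nn.Module):
    def __init__(self, n_features=20, hidden=128, n_classes=24):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def step(self, frame, state=None):
        # frame: one raw sensor sample; the recurrent state carries the gesture so far
        out, state = self.lstm(frame.view(1, 1, -1), state)
        probs = self.head(out[:, -1]).softmax(dim=-1)
        return probs, state

model = FrameLSTMClassifier()
state = None
for frame in torch.randn(50, 20):        # stand-in for a streaming gesture recording
    probs, state = model.step(frame, state)
    if probs.max() > 0.9:                # predictive decision before the gesture ends
        print("gesture class", int(probs.argmax()))
        break
```

The recurrent state is what allows a decision before the gesture is complete, which is the predictive behaviour the abstract refers to.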
202

Arquitetura de controle inteligente para interação humano-robô / Control architecture for human-robot interaction

Alves, Silas Franco dos Reis 01 April 2016 (has links)
Assuming that robots will coexist with humans in the near future, the need for intelligent control architectures suited to human-robot interaction becomes evident. This research therefore developed a behavioural Intelligent Control Architecture Organization whose main purpose is to let the robot interact with people intuitively and to foster collaboration between people and robots. To this end, a synthetic emotional module, based on the two-dimensional (Circumplex) theory of emotion, adapts the robot's behaviours, implemented as motor schemas, and communicates its internal state intelligibly. This Organization supported the deployment of the control architecture in an assistive healthcare application, a case study of assistive social robots as an auxiliary tool for special education. The experiments showed that the developed control architecture meets the application requirements defined with the consulted specialists. The thesis thus contributes the design of a control architecture able to act upon a subjective, belief-based appraisal of emotions, the development of a low-cost mobile robot, and the special-education case study.
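As a rough illustration of how a two-dimensional (valence-arousal) emotional state could modulate motor-schema behaviours, consider the sketch below. The appraisal rule, decay constant, schema names, and gain mapping are invented for the example; they are not the architecture developed in the thesis.

```python
# Illustrative only: a synthetic valence-arousal state weighting two motor schemas.
import numpy as np

class EmotionModule:
    def __init__(self, decay=0.95):
        self.valence, self.arousal = 0.0, 0.0        # both kept in [-1, 1]
        self.decay = decay

    def update(self, reward, stimulus_intensity):
        # Simple appraisal: pleasant events raise valence, intense stimuli raise arousal.
        self.valence = float(np.clip(self.decay * self.valence + reward, -1, 1))
        self.arousal = float(np.clip(self.decay * self.arousal + stimulus_intensity, -1, 1))

    def behavior_gains(self):
        # Map the emotional state to weights for two motor schemas.
        return {"approach_person": max(0.0, 0.5 + 0.5 * self.valence),
                "avoid_obstacle":  max(0.2, 0.5 + 0.5 * self.arousal)}

emotion = EmotionModule()
emotion.update(reward=0.3, stimulus_intensity=0.1)
gains = emotion.behavior_gains()
approach = np.array([0.4, 0.1])                      # stand-in motor-schema output vectors
avoid = np.array([-0.2, 0.3])
command = gains["approach_person"] * approach + gains["avoid_obstacle"] * avoid
print(gains, command)
```

The appeal of the two-dimensional representation is that one compact internal state both adapts behaviour and can be communicated legibly to the user, which is the dual role the abstract describes.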
203

Reconhecimento visual de gestos para imitação e correção de movimentos em fisioterapia guiada por robô / Visual gesture recognition for mimicking and correcting movements in robot-guided physiotherapy

Ricardo Fibe Gambirasio 16 November 2015 (has links)
This dissertation develops a robotic system to guide patients through physiotherapy sessions. The proposed system uses the humanoid robot NAO and analyses patients' movements to guide, correct, and motivate them during a session. First, the system learns a correct physiotherapy exercise by observing a physiotherapist perform it; second, it demonstrates the exercise so that the patient can reproduce it; and finally, it corrects any mistakes the patient makes during the exercise. The correct exercise is captured with a Kinect sensor and divided into a sequence of spatio-temporal states using k-means clustering. Those states compose a finite state machine used to verify whether the patient's movements are correct. Each transition between states corresponds to a partial movement of the learned exercise; if the patient executes a partial movement incorrectly, the system suggests a correction, stays in the same state, and asks the patient to try again. The system was tested with multiple patients undergoing physiotherapy for motor impairments and achieved high precision and recall across all partial movements. The emotional impact of the treatment was also measured, through before-and-after questionnaires and through software that recognises emotions from video recorded during the sessions, showing a positive effect that could help motivate physiotherapy patients and support their recovery.
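The pipeline in this abstract — cluster the demonstrated exercise into states with k-means, then verify the patient against the resulting finite state machine — can be sketched as follows. The 12-dimensional frames, the number of states, and the distance threshold are placeholders, not the values used in the dissertation.

```python
# Hedged sketch: k-means states from a demonstrated exercise, then FSM-style checking.
import numpy as np
from sklearn.cluster import KMeans

def learn_state_sequence(demo_frames, n_states=5):
    """Cluster demonstration frames and keep the order in which states first appear."""
    km = KMeans(n_clusters=n_states, n_init=10).fit(demo_frames)
    labels = km.labels_
    order = [labels[0]] + [l for i, l in enumerate(labels[1:], 1) if l != labels[i - 1]]
    return km.cluster_centers_, order

def follow_exercise(patient_frames, centers, order, tol=0.5):
    """Advance through the state machine only when the patient reaches the next state."""
    step = 0
    for frame in patient_frames:
        if step == len(order):
            break
        if np.linalg.norm(frame - centers[order[step]]) < tol:   # partial movement done
            step += 1
    return step == len(order)      # True only if every partial movement was completed

demo = np.random.rand(200, 12)     # stand-in for Kinect joint coordinates of one exercise
centers, order = learn_state_sequence(demo)
print(follow_exercise(demo, centers, order))
```

A patient who fails a partial movement simply never triggers the transition; that is where the real system interjects with a correction and asks for another attempt.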
205

Programmation d'un robot par des non-experts / End-user Robot Programming in Cobotic Environments

Liang, Ying Siu 12 June 2019 (has links)
The increasing presence of robots in industry has not gone unnoticed. Cobots (collaborative robots) are revolutionising industries by allowing robots to work in close collaboration with humans. Large industrial players have incorporated them into their production lines, but smaller companies hesitate due to high initial costs and a lack of programming expertise. This thesis introduces a framework that combines two disciplines, Programming by Demonstration and Automated Planning, to let users without programming knowledge program a robot. The user builds the robot's knowledge base by teaching it new actions by demonstration and associates their semantic meaning so that the robot can reason about them. The robot adopts goal-oriented behaviour using automated planning techniques, with users teaching action models expressed in a symbolic planning language. The thesis presents preliminary user experiments with a Baxter Research Robot to evaluate the approach: qualitative studies of users' understanding of the symbolic planning language and of the usability of the framework's programming process showed that users with little to no programming experience can adopt the language and use the framework. It further presents a Programming by Demonstration system for organisation tasks, including a goal-inference model that accelerates programming by predicting the user's intended product configuration.
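To make the combination of Programming by Demonstration and Automated Planning concrete, here is a heavily simplified sketch: each taught action is stored as a STRIPS-style model (preconditions, add and delete effects), and a naive depth-limited forward search chains them toward a goal. The predicates, action names, and the search itself are invented for the example; they are not the symbolic language or planner used in the thesis.

```python
# Toy STRIPS-style action models and a naive depth-limited forward planner.
class Action:
    def __init__(self, name, pre, add, delete):
        self.name, self.pre, self.add, self.delete = name, set(pre), set(add), set(delete)

    def applicable(self, state):
        return self.pre <= state

    def apply(self, state):
        return (state - self.delete) | self.add

def plan(state, goal, actions, depth=5):
    if goal <= state:
        return []
    if depth == 0:
        return None
    for a in actions:
        if a.applicable(state):
            rest = plan(a.apply(state), goal, actions, depth - 1)
            if rest is not None:
                return [a.name] + rest
    return None

# Two actions a user might have taught by demonstration (names are hypothetical).
pick = Action("pick(part)", {"at(part, table)", "hand_empty"},
              {"holding(part)"}, {"at(part, table)", "hand_empty"})
place = Action("place(part, tray)", {"holding(part)"},
               {"at(part, tray)", "hand_empty"}, {"holding(part)"})
print(plan({"at(part, table)", "hand_empty"}, {"at(part, tray)"}, [pick, place]))
# -> ['pick(part)', 'place(part, tray)']
```

Demonstration fills in the bodies of such actions (the motions), while the symbolic model is what lets the robot reorder and reuse them to reach goals it was never explicitly shown.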
206

Indoor Navigation for Mobile Robots : Control and Representations

Althaus, Philipp January 2003 (has links)
This thesis deals with various aspects of indoor navigation for mobile robots. For a system that moves around in a household or office environment, two major problems must be tackled. First, an appropriate control scheme has to be designed in order to navigate the platform. Second, the form of representations of the environment must be chosen. Behaviour based approaches have become the dominant methodologies for designing control schemes for robot navigation. One of them is the dynamical systems approach, which is based on the mathematical theory of nonlinear dynamics. It provides a sound theoretical framework for both behaviour design and behaviour coordination. In the work presented in this thesis, the approach has been used for the first time to construct a navigation system for realistic tasks in large-scale real-world environments. In particular, the coordination scheme was exploited in order to combine continuous sensory signals and discrete events for decision making processes. In addition, this coordination framework assures a continuous control signal at all times and permits the robot to deal with unexpected events. In order to act in the real world, the control system makes use of representations of the environment. On the one hand, local geometrical representations parameterise the behaviours. On the other hand, context information and a predefined world model enable the coordination scheme to switch between subtasks. These representations constitute symbols, on the basis of which the system makes decisions. These symbols must be anchored in the real world, requiring the capability of relating to sensory data. A general framework for these anchoring processes in hybrid deliberative architectures is proposed. A distinction of anchoring on two different levels of abstraction reduces the complexity of the problem significantly. A topological map was chosen as a world model. Through the advanced behaviour coordination system and a proper choice of representations, the complexity of this map can be kept at a minimum. This allows the development of simple algorithms for automatic map acquisition. When the robot is guided through the environment, it creates such a map of the area online. The resulting map is precise enough for subsequent use in navigation. In addition, initial studies on navigation in human-robot interaction tasks are presented. These kinds of tasks pose different constraints on a robotic system than, for example, delivery missions. It is shown that the methods developed in this thesis can easily be applied to interactive navigation. Results show a personal robot maintaining formations with a group of persons during social interaction. Keywords: mobile robots, robot navigation, indoor navigation, behaviour based robotics, hybrid deliberative systems, dynamical systems approach, topological maps, symbol anchoring, autonomous mapping, human-robot interaction
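The dynamical systems approach mentioned in this abstract designs each behaviour as a contribution to a differential equation over a behavioural variable such as the heading direction. The toy sketch below superposes a target attractor and an obstacle repeller; the gains, obstacle model, and Euler integration are illustrative, not the behaviours implemented in the thesis.

```python
# Toy behaviour dynamics over the heading angle phi: attractor + repeller contributions.
import numpy as np

def heading_rate(phi, psi_target, psi_obstacle, lam=1.0, beta=2.0, sigma=0.5):
    attract = -lam * np.sin(phi - psi_target)                # pulls the heading toward the target
    d = phi - psi_obstacle
    repel = beta * d * np.exp(-d ** 2 / (2 * sigma ** 2))    # pushes it away from the obstacle
    return attract + repel

phi, dt = 0.0, 0.01
for _ in range(2000):                                        # simple Euler integration
    phi += dt * heading_rate(phi, psi_target=1.2, psi_obstacle=0.6)
print(round(phi, 2))   # settles at a fixed point shifted away from the obstacle direction
```

Because the commanded heading is always the state of a smooth differential equation, the control signal stays continuous even when behaviours are switched on and off, which is the property the abstract emphasises.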
207

Mission Specialist Human-Robot Interaction in Micro Unmanned Aerial Systems

Peschel, Joshua Michael August 2012 (has links)
This research investigated the Mission Specialist role in micro unmanned aerial systems (mUAS) and was informed by human-robot interaction (HRI) and technology findings, resulting in the design of an interface that increased the individual performance of 26 untrained CBRN (chemical, biological, radiological, nuclear) responders during two field studies, and yielded formative observations for HRI in mUAS. Findings from the HRI literature suggested a Mission Specialist requires a role-specific interface that shares visual common ground with the Pilot role and allows active control of the unmanned aerial vehicle (UAV) payload camera. Current interaction technology prohibits this as responders view the same interface as the Pilot and give verbal directions for navigation and payload control. A review of interaction principles resulted in a synthesis of five design guidelines and a system architecture that were used to implement a Mission Specialist interface on an Apple iPad. The Shared Roles Model was used to model the mUAS human-robot team using three formal role descriptions synthesized from the literature (Flight Director, Pilot, and Mission Specialist). The Mission Specialist interface was evaluated through two separate field studies involving 26 CBRN experts who did not have mUAS experience. The studies consisted of 52 mission trials to surveil, evaluate, and capture imagery of a chemical train derailment incident staged at Disaster City. Results from the experimental study showed that when a Mission Specialist was able to actively control the UAV payload camera and verbally coordinate with the Pilot, greater role empowerment (confidence, comfort, and perceived best individual and team performance) was reported by a majority of participants for similar tasks; thus, a role-specific interface is preferred and should be used by untrained responders instead of viewing the same interface as the Pilot in mUAS. Formative observations made during this research suggested: i) establishing common ground in mUAS is both verbal and visual, ii) type of coordination (active or passive) preferred by the Mission Specialist is affected by command-level experience and perceived responsibility for the robot, and iii) a separate Pilot role is necessary regardless of preferred coordination type in mUAS. This research is of importance to HRI and CBRN researchers and practitioners, as well as those in the fields of robotics, human-computer interaction, and artificial intelligence, because it found that a human Pilot role is necessary for assistance and understanding, and that there are hidden dependencies in the human-robot team that affect Mission Specialist performance.
209

Methodology for creating human-centered robots : design and system integration of a compliant mobile base

Wong, Pius Duc-min 30 July 2012 (has links)
Robots have growing potential to enter the daily lives of people at home, at work, and in cities, for a variety of service, care, and entertainment tasks. However, several challenges currently prevent widespread production and use of such human-centered robots. The goal of this thesis was first to help overcome one of these broad challenges: the lack of basic safety in human-robot physical interactions. Whole-body compliant control algorithms had been previously simulated that could allow safer movement of complex robots, such as humanoids, but no such robots had yet been documented to actually implement these algorithms. Therefore a wheeled humanoid robot "Dreamer" was developed to implement the algorithms and explore additional concepts in human-safe robotics. The lower mobile base part of Dreamer, dubbed "Trikey," is the focus of this work. Trikey was iteratively developed, undergoing cycles of concept generation, design, modeling, fabrication, integration, testing, and refinement. Test results showed that Trikey and Dreamer safely performed movements under whole-body compliant control, which is a novel achievement. Dreamer will be a platform for future research and education in new human-friendly traits and behaviors. Finally, this thesis attempts to address a second broad challenge to advancing the field: the lack of standard design methodology for human-centered robots. Based on the experience of building Trikey and Dreamer, a set of consistent design guidelines and metrics for the field are suggested. They account for the complex nature of such systems, which must address safety, performance, user-friendliness, and the capability for intelligent behavior.
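As a rough illustration of the compliant-control idea behind this abstract (not Dreamer's or Trikey's actual whole-body controller), the sketch below implements a virtual spring-damper in task space mapped to joint torques through the Jacobian transpose; the gains, Jacobian, and dimensions are placeholders.

```python
# Hedged sketch of task-space compliance: a virtual spring-damper mapped via J^T.
import numpy as np

def compliant_torques(x, dx, x_des, jacobian, kp=80.0, kd=12.0):
    force = kp * (x_des - x) - kd * dx     # virtual spring-damper force in task space
    return jacobian.T @ force              # joint torques realising that force

J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])            # placeholder 2x3 Jacobian (2-D task, 3 joints)
tau = compliant_torques(x=np.array([0.30, 0.10]), dx=np.zeros(2),
                        x_des=np.array([0.35, 0.10]), jacobian=J)
print(tau)
```

Because the commanded torques are bounded by the virtual stiffness and damping rather than by a stiff position loop, an unexpected push from a person deflects the robot instead of being fought, which is the basic safety property compliant control targets.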
210

A Formal Approach to Social Learning: Exploring Language Acquisition Through Imitation

Cederborg, Thomas 10 December 2013 (has links) (PDF)
The topic of this thesis is learning through social interaction, consisting of experiments that focus on word acquisition through imitation, and a formalism aiming to provide stronger theoretical foundations. The formalism is designed to encompass essentially any situation where a learner tries to figure out what a teacher wants it to do by interaction or observation. It groups learners that are interpreting a broad range of information sources under the same theoretical framework. A teacher's demonstration, its eye gaze during a reproduction attempt, and a teacher's speech comment are all treated as the same type of information source. They can all tell the imitator what the demonstrator wants it to do, and they need to be interpreted in some way. By including them all under the same framework, the formalism can describe any agent that is trying to figure out what a human wants it to do. This allows us to see parallels between existing research, and it provides a framing that makes new avenues of research visible. The concept of informed preferences is introduced to deal with cases such as "the teacher would like the learner to perform an action, but if it knew the consequences of that action, it would prefer another action" or "the teacher is very happy with the end result after the learner has cleaned the apartment, but if it knew that the cleaning produced a lot of noise that disturbed the neighbors, it would not like the cleaning strategy". The success of a learner is judged according to the informed teacher's opinion of what would be best for the uninformed version. A series of simplified setups is also introduced, showing how a toy-world setup can be reduced to a crisply defined inference problem with a mathematically defined success criterion (any learner-architecture/setup pair has a numerical success value). An example experiment is presented in which a learner concurrently estimates the task and what the evaluative comments of a teacher mean. This experiment shows how the idea of learning to interpret information sources can be used in practice. The first of the learning-from-demonstration experiments presented investigates a learner, specifically an imitator, that can learn an unknown number of tasks from unlabeled demonstrations. The imitator has access to a set of demonstrations, but it must infer the number of tasks and determine which demonstration belongs to which task (there are no symbols or labels attached to the demonstrations). The demonstrator is attempting to teach the imitator a rule where the task to perform depends on the 2D position of an object. The object's 2D position is set at a random location within four different, well separated rectangles, each location indicating that a specific task should be performed. Three different coordinate systems were available, and each task was defined in one of them (for example, "move the hand to the object and then draw a circle around it"). To deal with this setup, a local version of Gaussian Mixture Regression (GMR) was used, called Incremental Local Online Gaussian Mixture Regression (ILO-GMR): a small, fixed number of Gaussians is fitted to local data to inform the policy, and then new local points are gathered. Three other experiments extend the types of contexts to include the actions of another human, making the investigation of language learning possible (a word is learnt by imitating how the demonstrator responds to someone uttering the word).
The robot is presented with a setup containing two humans: a demonstrator (who performs hand movements) and an interactant (who might perform some form of communicative act). The interactant's behavior is treated as part of the context, and the demonstrator's behavior is assumed to be an appropriate response to this extended context. Two experiments explore the simultaneous learning of linguistic and non-linguistic tasks (one demonstration could show the appropriate response to an interactant's speech utterance and another the appropriate response to an object position). The imitator is not given access to any symbolic information about which word or hand sign was spoken, and must infer how many words were spoken, how many times linguistic information was present, and which demonstrations were responses to which word. Another experiment explores more advanced types of linguistic conventions and demonstrator actions (simple word-order grammar in the interactant's communicative acts, and the imitation of internal cognitive operations performed by the demonstrator as a response). Since a single general imitation-learning mechanism can deal with the acquisition of all the different types of tasks, it opens up the possibility that there might not be a need for a separate language-acquisition system. Being able to learn a language is certainly very useful when growing up in a linguistic community, but this selection pressure cannot be used to explain how the linguistic community arose in the first place. It will be argued that a general imitation-learning mechanism is both useful in the absence of language and will result in language given certain conditions, such as shared intentionality and the ability to infer the intentions and mental states of others (all of which can be useful to develop in the absence of language). It will also be argued that the general tendency to adopt normative rules is a central ingredient for language (not sufficient, and not necessary when adopting an already established language, but certainly very conducive for a community establishing linguistic conventions).
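The ILO-GMR method mentioned in this abstract builds on Gaussian Mixture Regression. The sketch below shows plain GMR on toy data: a mixture model fitted over joint (context, action) samples, with the action predicted as the blend of conditional means given the context. The data, dimensions, and number of components are invented for the example, and the thesis's variant additionally fits the mixture locally and online around the query point.

```python
# Plain Gaussian Mixture Regression on toy data (illustrative; not ILO-GMR itself).
import numpy as np
from sklearn.mixture import GaussianMixture

def gmr_predict(gmm, x, d_in):
    """Conditional mean E[y | x] from a GMM fitted on concatenated (x, y) samples."""
    h, ys = [], []
    for k in range(gmm.n_components):
        mu_x, mu_y = gmm.means_[k, :d_in], gmm.means_[k, d_in:]
        sxx = gmm.covariances_[k][:d_in, :d_in]
        sxy = gmm.covariances_[k][:d_in, d_in:]
        diff = x - mu_x
        # responsibility of component k for the query input x
        h.append(gmm.weights_[k] * np.exp(-0.5 * diff @ np.linalg.solve(sxx, diff))
                 / np.sqrt(np.linalg.det(2 * np.pi * sxx)))
        ys.append(mu_y + sxy.T @ np.linalg.solve(sxx, diff))   # component's conditional mean
    h = np.array(h) / np.sum(h)
    return sum(w * y for w, y in zip(h, ys))

x = np.linspace(0, 2 * np.pi, 300)                 # toy 1-D context
y = np.sin(x) + 0.05 * np.random.randn(300)        # toy 1-D action
gmm = GaussianMixture(n_components=6, covariance_type="full").fit(np.c_[x, y])
print(gmr_predict(gmm, np.array([1.5]), d_in=1))   # close to sin(1.5) ~ 1.0
```

Fitting only a small, fixed number of Gaussians to the points nearest the query, as ILO-GMR is described as doing, keeps this regression cheap enough to run incrementally while new demonstrations arrive.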
