171

Motor interference and behaviour adaptation in human-humanoid interactions

Shen, Qiming January 2013 (has links)
This thesis proposes and experimentally demonstrates an approach that enables a humanoid robot to adapt its behaviour to match a human’s behaviour in real-time human-humanoid interaction. The approach uses the information distance synchrony detection method, a novel measure of behaviour synchrony between two agents, as the core of the behaviour adaptation mechanism that guides the humanoid robot in changing its behaviour during the interaction. Participant feedback indicated that applying this behaviour adaptation mechanism could facilitate human-humanoid interaction. The thesis also investigates motor interference, which may be adopted as a metric to quantify the social competence of a robot. The results from two experiments indicated that both the participants’ beliefs about the engagement of the robot and the use of rhythmic music might affect the elicitation of motor interference effects. Based on these findings, and on recent research supporting the importance of other features in eliciting interference effects, it can be hypothesized that the overall perception of a humanoid robot as a social entity, rather than any individual feature of the robot, is critical to eliciting motor interference in a human observer’s behaviour. In this thesis, the term ‘overall perception’ refers to the human observer’s overall perception of the robot in terms of appearance, behaviour, the observer’s beliefs, and environmental features that may affect the perception. Moreover, the motor coordination investigation found that humans tended to synchronize themselves with a humanoid robot without being instructed to do so. This finding, together with the behaviour adaptation mechanism, may support the feasibility of bi-directional motor coordination in human-humanoid interaction.
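The abstract does not spell out the formulas behind the information distance synchrony detection method, but a common entropy-based formulation of information distance can illustrate the idea. The sketch below is an illustration under stated assumptions, not the thesis implementation: the discretised behaviour streams, the normalised-distance formula, and the noise level in the example are all choices made here for demonstration.

```python
# Minimal sketch: entropy-based information distance between two
# discretised behaviour streams (low distance = high synchrony).
# Not the thesis's implementation; streams and parameters are illustrative.
import numpy as np

def entropy(symbols):
    """Empirical Shannon entropy (bits) of a 1-D symbol sequence."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def joint_entropy(x, y):
    """Empirical joint entropy of two aligned symbol sequences."""
    pairs = np.stack([x, y], axis=1)
    _, counts = np.unique(pairs, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_distance(x, y):
    """Normalised information distance: 0 = identical, 1 = independent."""
    hx, hy, hxy = entropy(x), entropy(y), joint_entropy(x, y)
    if hxy == 0:
        return 0.0
    return (2 * hxy - hx - hy) / hxy

# Example: two joint-angle streams discretised into 8 symbols; the "robot"
# copies the "human" with 10% of symbols corrupted (imperfect imitation).
rng = np.random.default_rng(0)
human = rng.integers(0, 8, size=500)
robot = human.copy()
noise_idx = rng.choice(500, size=50, replace=False)
robot[noise_idx] = rng.integers(0, 8, size=50)
print(information_distance(human, robot))  # low value -> high synchrony
```

In an adaptation mechanism of the kind described, such a distance could be computed over a sliding window and used as the feedback signal that drives the robot's behaviour change.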
172

Cognitive Interactive Robot Learning

Fonooni, Benjamin January 2014 (has links)
Building general-purpose autonomous robots that suit a wide range of user-specified applications requires a leap from today's task-specific machines to more flexible and general ones. To achieve this goal, one should move from traditional preprogrammed robots to learning robots that can easily acquire new skills. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or robot tutor, are among the most popular learning techniques. Showing the robot how to perform a task is often more natural and intuitive than figuring out how to modify a complex control program. However, teaching robots new skills such that they can reproduce them under any circumstances, at the right time and in an appropriate way, requires a good understanding of all the challenges in the field. Studies of imitation learning in humans and animals show that several cognitive abilities are engaged to learn new skills correctly. The most remarkable are the ability to direct attention to the important aspects of a demonstration and the ability to adapt observed actions to the agent's own body. Moreover, a clear understanding of the demonstrator's intentions and an ability to generalize to new situations are essential. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the robot's repertoire. The goal of this thesis is to develop methods for learning from demonstration that mainly focus on understanding the tutor's intentions and on recognizing which elements of a demonstration need the robot's attention. An architecture containing the cognitive functions required for learning and reproducing high-level aspects of demonstrations is proposed. Several learning methods for directing the robot's attention and identifying relevant information are introduced. The architecture integrates motor actions with concepts, objects and environmental states to ensure correct reproduction of skills. Another major contribution of this thesis is a set of methods for resolving ambiguities in demonstrations where the tutor's intentions are not clearly expressed and several demonstrations are required to infer intentions correctly. The provided solution is inspired by human memory models and priming mechanisms that give the robot clues that increase the probability of inferring intentions correctly. In addition to robot learning, the developed techniques are applied to a shared control system based on visual-servoing-guided behaviors and priming mechanisms. The architecture and learning methods are applied and evaluated in several real-world scenarios that require a clear understanding of the intentions in the demonstrations. Finally, the developed learning methods are compared, and the conditions under which each of them has better applicability are discussed.
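As a rough illustration of how priming clues could bias intent inference across ambiguous demonstrations, the sketch below accumulates evidence for candidate intentions over several demonstrations and boosts candidates that were recently primed. The candidate intentions, cue sets, scoring rule and boost factor are hypothetical; this is not the thesis's actual architecture.

```python
# Hypothetical sketch of priming-biased intent inference across several
# ambiguous demonstrations. All names and values are illustrative.
from collections import defaultdict

def infer_intent(demonstrations, candidates, primed=None, boost=2.0):
    """Accumulate evidence for each candidate intention across demonstrations.

    demonstrations: list of sets of observed cues (e.g. {"red", "cylinder"})
    candidates:     dict mapping intention name -> set of cues it explains
    primed:         set of intentions recently activated (priming clue)
    """
    primed = primed or set()
    scores = defaultdict(float)
    for demo in demonstrations:
        for intent, cues in candidates.items():
            overlap = len(demo & cues) / max(len(cues), 1)  # cue coverage
            prior = boost if intent in primed else 1.0       # priming bias
            scores[intent] += prior * overlap
    return max(scores, key=scores.get), dict(scores)

candidates = {
    "sort-by-colour": {"red", "green", "grouped-by-colour"},
    "sort-by-shape":  {"cylinder", "cube", "grouped-by-shape"},
}
demos = [{"red", "cylinder"}, {"green", "cube", "grouped-by-colour"}]
best, scores = infer_intent(demos, candidates, primed={"sort-by-colour"})
print(best, scores)   # priming tips the balance when demonstrations are ambiguous
```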
173

Deep Regression Models and Computer Vision Applications for Multiperson Human-Robot Interaction

Lathuiliere, Stéphane 22 May 2018 (has links)
In order to interact with humans, robots need to perform basic perception tasks such as face detection, human pose estimation or speech recognition. However, in order to have a natural interaction with humans, the robot needs to model high-level concepts such as speech turns, focus of attention or interactions between participants in a conversation. In this manuscript, we follow a top-down approach. On the one hand, we present two high-level methods that model collective human behaviors. We propose a model able to recognize activities that are performed jointly by different groups of people, such as queueing or talking. Our approach handles the general case where several group activities can occur simultaneously and in sequence. On the other hand, we introduce a novel neural-network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy in the context of human-robot interaction. The robot is able to learn to focus its attention on groups of people from its own audio-visual experiences. Second, we study in detail deep learning approaches for regression problems. Regression problems are crucial in the context of human-robot interaction in order to obtain reliable information about the head and body poses or the age of the persons facing the robot. Consequently, these contributions are very general and can be applied in many different contexts. First, we propose to couple a Gaussian mixture of linear inverse regressions with a convolutional neural network. Second, we introduce a Gaussian-uniform mixture model in order to make the training algorithm more robust to noisy annotations. Finally, we perform a large-scale study to measure the impact of several architecture choices and extract practical recommendations for using deep learning approaches in regression tasks. For each of these contributions, a strong experimental validation has been performed with real-time experiments on the NAO robot or on large and diverse data sets.
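The Gaussian-uniform mixture mentioned above lends itself to a compact illustration: a per-sample negative log-likelihood in which an outlier-absorbing uniform component bounds the penalty for noisy annotations. The sketch below is a simplified stand-in; the fixed mixing weight, variance and target range are illustrative assumptions, not the thesis's parameterisation.

```python
# Minimal sketch of a Gaussian-uniform mixture loss for regression with
# noisy annotations. Fixed sigma, inlier probability and target range are
# illustrative assumptions.
import numpy as np

def gaussian_uniform_nll(pred, target, sigma=1.0, inlier_prob=0.9,
                         target_range=180.0):
    """Per-sample negative log-likelihood under
    p(y | x) = pi * N(y; pred, sigma^2) + (1 - pi) * U(y; range).

    Outliers (annotation noise) are absorbed by the uniform component,
    so they no longer dominate the gradient as with a plain L2 loss.
    """
    gauss = (np.exp(-0.5 * ((target - pred) / sigma) ** 2)
             / (sigma * np.sqrt(2 * np.pi)))
    uniform = 1.0 / target_range
    return -np.log(inlier_prob * gauss + (1 - inlier_prob) * uniform)

preds = np.array([10.0, 12.0, 9.0])
targets = np.array([11.0, 90.0, 8.5])        # second annotation is an outlier
print(gaussian_uniform_nll(preds, targets))  # outlier's loss stays bounded
```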
174

Human-like Crawling for Humanoid Robots : Gait Evaluation on the NAO robot

Aspernäs, Andreas January 2018 (has links)
Human-robot interaction (HRI) is the study of how we as humans interact and communicate with robots, and one of its subfields works on how we can improve the collaboration between humans and robots. We need robots that are more user-friendly and easier to understand, and a key aspect of this is human-like movement and behavior. This project targets a specific set of motions, locomotion, and tests them on the humanoid NAO robot. A human-like crawling gait was developed for the NAO robot and compared to the built-in walking gait through three kinds of experiments: the first to compare the speed of the two gaits, the second to estimate their stability, and the third to examine how long they can operate, by measuring the power consumption and the temperatures in the joints. The results showed that the robot was significantly slower when crawling compared to walking, and that when stationary the robot was more stable standing than on all fours. The power consumption remained essentially the same, but the crawling gait ended up having a shorter operational time due to a higher temperature increase in the joints. While the crawling gait has the benefit of a lower profile than the walking gait, and could therefore more easily pass under low-hanging obstacles, it has major issues that need to be addressed to become a viable solution. These are therefore important factors to consider when developing gaits and designing robots, and they motivate further research to try to solve these problems.
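The operational-time comparison suggests a simple back-of-the-envelope calculation: if the hottest joint heats roughly linearly during a gait, the time until a protective temperature limit is reached can be extrapolated from a short log. The sketch below assumes a linear temperature rise and a made-up 75 °C limit; it is not the measurement procedure used in the thesis, and the logged values are invented.

```python
# Sketch: estimate a gait's operational time from logged joint temperatures
# by extrapolating a linear rise to an assumed shutdown limit.
import numpy as np

def estimated_operation_time(times_s, temps_c, limit_c=75.0):
    """Fit a linear temperature rise and extrapolate to the joint's limit."""
    slope, intercept = np.polyfit(times_s, temps_c, 1)  # deg C per second
    if slope <= 0:
        return float("inf")   # temperature not rising: no thermal limit hit
    return (limit_c - intercept) / slope

t = np.arange(0, 300, 10.0)          # 5 minutes of logging
crawl_temps = 35.0 + 0.08 * t        # assumed faster heating while crawling
walk_temps = 35.0 + 0.03 * t         # assumed slower heating while walking
print(estimated_operation_time(t, crawl_temps))  # shorter for crawling
print(estimated_operation_time(t, walk_temps))
```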
175

Human Robot Interaction for Autonomous Systems in Industrial Environments

Chadalavada, Ravi Teja January 2016 (has links)
The upcoming new generation of autonomous vehicles for transporting materials in industrial environments will be more versatile, flexible and efficient than traditional Automatic Guided Vehicles (AGVs), which simply follow pre-defined paths. However, freely navigating vehicles can appear unpredictable to human workers and thus cause stress and render joint use of the available space inefficient. This work addresses the problem of providing information about a service robot's intentions to the humans sharing its environment. The overall goal is to make humans feel safer and more comfortable, even when they are in close vicinity of the robot. A spatial Augmented Reality (AR) system for robot intention communication, which projects proxemic information onto the shared floor space, was developed by equipping a robotic forklift with an LED projector. This helps visualize internal state information and intentions on the shared floor space. The robot's ability to communicate its intentions was evaluated in realistic situations where test subjects met the robotic forklift. A Likert-scale-based evaluation, which also included comparisons to human-human intention communication, was performed. The results show that adding even simple information, such as the trajectory and the space to be occupied by the robot in the near future, effectively improves the human response to the robot. This kind of synergistic human-robot interaction in a work environment is expected to increase the robot's acceptability in industry.
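A core geometric step in projecting intentions onto shared floor space is mapping floor coordinates to projector pixels. The sketch below shows one way to do this with a planar homography, assuming OpenCV is available; the calibration correspondences and the example path are invented values, not taken from the thesis.

```python
# Sketch: map floor coordinates (metres, robot frame) to projector pixels
# via a homography, so a planned trajectory can be drawn on the floor.
# Calibration points and path values are made up for illustration.
import numpy as np
import cv2

# Four floor points and where they should land in the projector image
floor_pts = np.float32([[0.5, -0.5], [0.5, 0.5], [2.0, 0.5], [2.0, -0.5]])
pixel_pts = np.float32([[100, 700], [700, 700], [600, 200], [200, 200]])
H = cv2.getPerspectiveTransform(floor_pts, pixel_pts)

def project_path_to_pixels(path_xy):
    """Map an (N, 2) array of floor coordinates to projector pixel coordinates."""
    pts = np.float32(path_xy).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# The robot's intended trajectory for the next few seconds
path = [(0.6, 0.0), (1.0, 0.1), (1.5, 0.2), (1.9, 0.2)]
print(project_path_to_pixels(path))  # pixel coordinates of the projected intent
```

In a real setup the floor-to-projector mapping would be calibrated once at installation and reused for every projected trajectory.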
176

Integration of cognitive and robotic systems through an ontology to model the perception of the environment

Helio Azevedo 01 August 2018 (has links)
The use of robots in modern society is a reality. Only a few decades have passed from their beginnings, restricted to manufacturing operations such as painting and welding, to the start of their use in homes. Social robotics is a research area that aims to develop models so that the direct interaction of robots with humans occurs naturally. One of the factors that hinders the rapid evolution of social robotics is the difficulty of integrating cognitive and robotic systems, mainly due to the volume and complexity of the information produced by a chaotic world full of sensory data. In addition, the existence of multiple robot configurations, with different architectures and interfaces, makes it difficult to verify and reproduce the experiments performed by different research groups. This thesis contributes to the evolution of social robotics by defining an architecture, called the Cognitive Model Development Environment (CMDE), that simplifies the connection between cognitive and robotic systems. This connection is formalized with an ontology, called OntPercept, which models the perception of the environment from the sensory information captured by the sensors present in the robotic agent. In recent years, several ontologies have been proposed for robotic applications, but they are not generic enough to fully address the needs of the robotics and automation fields. The formalization offered by OntPercept facilitates the development, reproduction and comparison of experiments associated with social robotics. The proposed system is validated with support from the Robot House Simulator (RHS), which provides an environment where the robotic agent and the human character can interact socially with increasing levels of cognitive processing. The CMDE proposal enables the use of any cognitive system; in particular, the experiment designed to validate this research uses Soar as the cognitive architecture. Together, the CMDE architecture, the OntPercept ontology and the RHS simulator, all freely available on GitHub, establish a complete environment for the development of experiments involving cognitive systems aimed at social robotics.
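OntPercept's actual vocabulary is not given in the abstract, so the following sketch only illustrates the general idea of turning sensor detections into ontology individuals that a cognitive system can query, using the owlready2 library. The class and property names, the ontology IRI and the confidence value are hypothetical.

```python
# Hypothetical sketch of modelling perception events in OWL with owlready2.
# Class/property names and IRI are illustrative, not OntPercept's vocabulary.
from owlready2 import Thing, ObjectProperty, DataProperty, get_ontology

onto = get_ontology("http://example.org/percept-sketch.owl")

with onto:
    class PerceivedObject(Thing): pass
    class PerceptionEvent(Thing): pass
    class detectedObject(ObjectProperty):
        domain = [PerceptionEvent]
        range = [PerceivedObject]
    class confidence(DataProperty):
        domain = [PerceptionEvent]
        range = [float]

# A camera detection becomes an individual the cognitive system can query
cup = PerceivedObject("cup_01")
event = PerceptionEvent("evt_0001", detectedObject=[cup], confidence=[0.87])
print(list(onto.individuals()))
onto.save(file="percept-sketch.owl")  # persist for later queries
```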
177

How Humans Adapt to a Robot Recipient : An Interaction Analysis Perspective on Human-Robot Interaction

Pelikan, Hannah January 2015 (has links)
This thesis investigates human-robot interaction using an Interaction Analysis methodology. Asking how humans manage the interaction with a robot, the study focuses on humans and how they adapt to the robot's limited conversational and interactional capabilities. As Conversation Analytic research suggests that humans always adjust their actions to a specific recipient, the author expected to find this also in interaction with an artificial communicative partner. For this purpose, a conventional robot was programmed to play a charade game with human participants. The humans' interaction with the robot was filmed and analysed within an interaction analytic framework. The study suggests that humans adapt their recipient design as their assumptions about the conversational partner change. Starting off with different conversational expectations, participants adapt turn design (word selection, turn size, loudness and prosody) first and turn-taking in a second step. Adaptation to the robot is deployed as a means to accomplish a successful interaction. The detailed study of the human perspective in this interaction can yield conclusions about how robots could be improved to facilitate the interaction. As humans adjust to the interactional limitations with varying speed and ease, the limitations to which adaptation is most difficult should be addressed first.
178

Human-Telepresence Robot Proxemics Interaction : An ethnographic approach to non-verbal communication

Bang, GiHoon January 2018 (has links)
This research aims to find the distinct and crucial factors needed to design a better robot by exploring the meaning of movement. The researcher conducted six weeks of iterative work to collect data using an ethnographic method. Through the collected data, the researcher examined the interactions between a telepresence robot and human beings in an authentic environment and analyzed them based on proxemics theory. The research observed that the robot was given social space when it approached the participants with pauses between movements. Furthermore, the research introduces the notion of a proxemics pivot: the part of the robot that people perceive as a reference point when they adjust the proximity between the robot and themselves. The proxemics pivot was considered “a face” and was attributed social properties; the other parts of the robot did not receive the same consideration.
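The proxemics analysis can be related to code through Hall's classic distance zones: given the position of a proxemics pivot, the zone a person occupies follows from a simple distance threshold. The sketch below uses Hall's commonly cited zone boundaries, not measurements from this study, and assumes for illustration that the pivot is the robot's face-like screen.

```python
# Sketch: classify the distance between a person and the robot's assumed
# "proxemics pivot" into Hall's proxemic zones. Zone boundaries are Hall's
# commonly cited values, not measurements from this ethnographic study.
import math

ZONES = [                    # (upper bound in metres, zone name)
    (0.45, "intimate"),
    (1.2, "personal"),
    (3.6, "social"),
    (float("inf"), "public"),
]

def proxemic_zone(person_xy, pivot_xy):
    """Return the distance and Hall's proxemic zone relative to the pivot."""
    dist = math.dist(person_xy, pivot_xy)
    for upper, name in ZONES:
        if dist <= upper:
            return dist, name

# Example: a person stands 1.0 m from the robot's face-like pivot
print(proxemic_zone((1.0, 0.0), (0.0, 0.0)))   # -> (1.0, 'personal')
```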
179

Multi-objective Intent-based Path Planning for Robots for Static and Dynamic Environments

Shaikh, Meher Talat 18 June 2020 (has links)
This dissertation models human intent for a robot navigation task that is managed by a human and undertaken by a robot in a dynamic, multi-objective environment. Intent is expressed by a human through a user interface and then translated into a robot trajectory that satisfies a set of human-specified objectives and constraints. For a goal-based robot navigation task in a dynamic environment, intent includes expectations about a path in terms of the objectives and constraints to be met. If the planned path drifts from the human's intent as the environment changes, a new path needs to be planned. The intent framework has four elements: (a) a mathematical representation of human intent within a multi-objective optimization problem; (b) the design of an interactive graphical user interface that enables a human to communicate intent to the robot and then monitor intent execution; (c) the integration and adoption of fast online path-planning algorithms that generate solutions/trajectories conforming to the given intent; and (d) the design of metric-based triggers that give a human the opportunity to correct or adapt a planned path to keep it aligned with intent as the environment changes. Key contributions of the dissertation are: (i) the design and evaluation of different user interfaces for expressing intent, (ii) the use of two metrics, cosine similarity and intent threshold margin, that help quantify intent, and (iii) the application of these metrics in path (re)planning to detect intent mismatches for a robot navigating in a dynamic environment. A set of user studies, including both controlled laboratory experiments and Amazon Mechanical Turk studies, was conducted to evaluate each of these dissertation components.
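The cosine-similarity metric lends itself to a compact illustration of how an intent-mismatch trigger might work: represent the human's intent and the currently planned path as vectors over the objectives and flag a replan when their similarity drops below a threshold. The objective names, vector values and threshold below are illustrative assumptions, not the dissertation's actual parameterisation.

```python
# Sketch: cosine-similarity trigger for detecting intent mismatch between
# the human's objective weights and the planned path's objective profile.
# Objectives, values and threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def needs_replanning(intent_weights, path_profile, threshold=0.95):
    """True if the planned path no longer matches the intent closely enough."""
    return cosine_similarity(intent_weights, path_profile) < threshold

# Intent: prioritise safety and energy over speed (weights per objective)
intent = [0.6, 0.3, 0.1]            # [safety, energy, speed]
path_before = [0.58, 0.32, 0.10]    # planned path's normalised objective profile
path_after = [0.30, 0.30, 0.40]     # environment changed; the path drifted
print(needs_replanning(intent, path_before))  # False -> keep executing
print(needs_replanning(intent, path_after))   # True  -> trigger replanning
```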
180

Emergent Social Interactions between a Hospital Patient and a Service Robot : A Research Through Design inquiry into the social dynamics of the interaction framework hospital patient, service robot, caregiver

Bucuroiu, Denisa Maria January 2021 (has links)
The following documents a research-through-design inquiry into how the socialities of a hospital environment are disrupted or improved by implementing a service robot. The robot, introduced as support for excessive workloads, represents a new intermediary between a patient and a caregiver. Robotic work routines appear better, more efficient, and more affordable; yet, apart from the ethical and inclusivity considerations in this dialogue, the social values hidden in traditional workflows are of equal importance. This thesis attempts to generate constructive design research about the emergent social norms and social dynamics caused by the implementation of service robots. The lessons learned are presented in a final research discussion. When applied further, the knowledge found common ground with a rehabilitation robot developed by Blue Ocean Robotics.
