41

Initial steps toward human augmented mapping

Topp, Elin Anna January 2006 (has links)
With progress in research and product development, humans and robots are coming ever closer together, and the idea of a personalised general-purpose service robot is no longer far-fetched. Crucial for such a service robot is the ability to navigate its working environment, which must be assumed to be an arbitrary domestic or office-like environment shared with human users and bystanders. Methods developed in the field of simultaneous localisation and mapping make it possible for mobile robots to explore and map an unknown environment while remaining localised with respect to their starting point and surroundings. These approaches, however, do not consider the representation of the environment that humans use to refer to particular places. Robotic maps are often metric representations of features obtained from sensory data, whereas humans represent environments in a more topological, in fact partially hierarchical, way. Especially for communication between a user and her personal robot, it is thus necessary to provide a link between the robotic map and the human understanding of the robot's workspace. The term Human Augmented Mapping denotes a framework that integrates a robotic map with human concepts, thereby facilitating communication about the environment. By assuming an interactive setting for the map acquisition process, the user can influence the process significantly: personal preferences can become part of the environment representation the robot acquires. The mapping process itself also benefits, since in an interactive setting the robot can ask for information and resolve ambiguities with the user's help.
Thus, a scenario of a "guided tour", in which a user asks the robot to follow her while she presents the surroundings, is assumed as the starting point for a system integrating robotic mapping, interaction, and human environment representations. Based on results from robotics research, psychology, human-robot interaction, and cognitive science, a general architecture for Human Augmented Mapping is presented. This architecture combines a hierarchically organised robotic mapping approach with interaction abilities through a high-level environment model. An initial system design and implementation combining a tracking-and-following approach with a mapping system is described. Observations from a pilot study in which this initial system was used successfully are reported; they support the assumptions about the usefulness of the environment model as the link between robotic and human representations.
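As a rough illustration of the link the framework provides, the two-layer idea — metric nodes from the robotic map attached to human place labels gathered during a guided tour — can be sketched as a small data structure. All names and the centroid-based lookup below are hypothetical illustrations, not the thesis' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class MetricNode:
    """A pose in the robot's metric map (x, y in metres)."""
    x: float
    y: float

@dataclass
class Region:
    """A human-level concept (e.g. "kitchen") grouping metric nodes."""
    label: str
    nodes: list = field(default_factory=list)

class AugmentedMap:
    """Hypothetical two-layer map: metric nodes below, labelled regions above."""
    def __init__(self):
        self.regions = {}

    def add_label(self, label, node):
        # During a guided tour the user names the current place;
        # the current metric node is attached to that human concept.
        self.regions.setdefault(label, Region(label)).nodes.append(node)

    def locate(self, label):
        # Resolve a spoken place name back to metric coordinates (centroid).
        nodes = self.regions[label].nodes
        return (sum(n.x for n in nodes) / len(nodes),
                sum(n.y for n in nodes) / len(nodes))

m = AugmentedMap()
m.add_label("kitchen", MetricNode(1.0, 2.0))
m.add_label("kitchen", MetricNode(3.0, 4.0))
print(m.locate("kitchen"))  # (2.0, 3.0)
```

The point of the sketch is the direction of the mapping: the robot keeps its metric representation, and the human label is an index into it, so a command like "go to the kitchen" can be grounded without the user ever seeing coordinates.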
42

Using Augmented Virtuality to Improve Human-Robot Interactions

Nielsen, Curtis W. 03 February 2006 (has links) (PDF)
Mobile robots can be used in situations and environments that are distant from an operator. To control a robot effectively, an operator requires an understanding of the environment and situation around the robot. Since the robot is at a remote distance and cannot be directly observed, the information necessary for the operator to develop awareness of the robot's situation comes from the user interface, and the usefulness of the interface depends on how information from the remote environment is presented. Conventional interfaces for interacting with mobile robots typically present information in a multi-windowed display, where different sets of information appear in different windows. These disjoint sets of information require significant cognitive processing on the part of the operator to interpret and understand. To reduce this cognitive effort, requirements and technology for a three-dimensional augmented virtuality interface are presented. The 3D interface is designed to combine multiple sets of information into a single correlated window, which can reduce the cognitive processing required to interpret and understand the information in comparison to a conventional (2D) interface. The usefulness of the 3D interface is validated, in comparison to a prototype of conventional 2D interfaces, through a series of navigation- and exploration-based user studies. The user studies reveal that operators are able to drive the robot, build maps, find and identify items, and finish tasks faster with the 3D interface than with the 2D interface. Moreover, operators have fewer collisions, avoid walls better, and use a pan-tilt-zoom camera more with the 3D interface than with the 2D interface. Performance with the 3D interface is also more tolerant to network delay and distracting sets of information.
Finally, principles for presenting multiple sets of information to a robot operator are presented. The principles are used to discuss and illustrate possible extensions of the 3D interface to other domains.
43

A Customizable Socially Interactive Robot with Wireless Health Monitoring Capability

Hornfeck, Kenneth B. 20 April 2011 (has links)
No description available.
44

Effect of a human-teacher vs. a robot-teacher on human learning: a pilot study

Smith, Melissa A. B. 01 August 2011 (has links)
Studies of the dynamics of human-robot interactions have increased within the past decade as robots become more integrated into the daily lives of humans. However, much of the research into learning and robotics has focused on methods that allow robots to learn from humans, and very little has been done on how and what, if anything, humans could learn from programmed robots. A between-subjects experiment was conducted comparing two groups: one in which participants learned a simple pick-and-place block task via a video of a human teacher, and one in which participants learned the same pick-and-place block task via a video of a robotic teacher. After being taught the task, the participants performed a 15-minute distracter task and were then timed in their reconstruction of the block configuration. An exit survey asking about their level of comfort learning from robot and computer entities was given upon completion. Results showed no significant difference in the rebuild scores of the two groups, but a marginally significant difference in the rebuild times. Exit survey results, research implications, and future work are discussed.
45

Investigation Of Tactile Displays For Robot To Human Communication

Barber, Daniel 01 January 2012 (has links)
Improvements in autonomous systems technology and a growing demand within military operations are spurring a revolution in Human-Robot Interaction (HRI). These mixed-initiative human-robot teams are enabled by Multi-Modal Communication (MMC), which supports redundancy and levels of communication that are more robust than single-mode interaction (Bischoff & Graefe, 2002; Partan & Marler, 1999). Tactile communication via vibrotactile displays is an emerging technology, potentially beneficial to advancing HRI. Incorporating tactile displays within MMC requires developing messages equivalent in communication power to the speech and visual signals used in the military. Toward that end, two experiments investigated the feasibility of a tactile language using a lexicon of standardized tactons (tactile icons) within a sentence structure for robot-to-human communication. Experiment one evaluated tactons from the literature with standardized parameters, grouped into categories (directional, dynamic, and static) based on the nature and meaning of the patterns, to inform the design of a tactile syntax. Directional tactons performed better than non-directional tactons, so a syntax for experiment two composed of a non-directional tacton followed by a directional tacton was more likely to perform better than chance. Experiment two tested this syntax structure using equally performing tactons identified in experiment one, revealing participants' ability to interpret tactile sentences better than chance, with or without the presence of an independent, concurrent task. This finding advanced the state of the art in tactile displays from one- to two-word phrases, facilitating inclusion of the tactile modality within MMC for HRI.
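As an illustration of the two-word structure, a tactile sentence composed of a non-directional action tacton followed by a directional tacton could be modelled as below. The frequencies, durations, and tactor indices are invented for this sketch and are not the standardized tactons used in the experiments:

```python
# Hypothetical tacton lexicon. Each non-directional tacton is a vibration
# pattern given as (frequency in Hz, duration in ms) pairs; each directional
# tacton is a sequence of tactor indices on a torso belt.
ACTION_TACTONS = {
    "advance": [(250, 200), (0, 100), (250, 200)],  # pulsed pattern
    "halt":    [(250, 600)],                        # single long burst
}
DIRECTION_TACTONS = {
    "north": [0],              # front tactor only
    "east":  [2],
    "sweep-right": [0, 1, 2, 3],
}

def compose_sentence(action, direction):
    """Build a two-word tactile phrase: action tacton, then directional tacton."""
    return {"pattern": ACTION_TACTONS[action],
            "tactors": DIRECTION_TACTONS[direction]}

msg = compose_sentence("halt", "north")
print(msg["tactors"])  # [0]
```

The sentence structure matters more than the particular patterns: a fixed word order (action, then direction) is what lets a receiver decode a two-tacton phrase without an explicit separator.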
46

Applying The Appraisal Theory Of Emotion to Human-Agent Interaction

Pepe, Aaron 01 January 2007 (has links)
Autonomous robots are increasingly used in everyday life: cleaning our floors, entertaining us, and supplementing soldiers on the battlefield. As emotion is a key ingredient in how we interact with others, it is important that our emotional interaction with these new entities be understood. This dissertation proposes using the appraisal theory of emotion (Roseman, Scherer, Schorr, & Johnstone, 2001) to investigate how we understand and evaluate situations involving this new breed of robot. The research involves two studies. In the first, participants interacted with a live dog, a robotic dog, or a non-anthropomorphic robot to accomplish a set of tasks. The appraisals of motive-consistent / motive-inconsistent (the task was performed correctly/incorrectly) and high / low perceived control (the teammate was well trained/not well trained) were manipulated to show the practicality of using appraisal theory as a basis for human-robot interaction studies. Robot form was investigated for its influence on the emotions experienced, as was the influence of high and low control on the experience of positive emotions caused by another. Results show that a live human-robot interaction test bed is a valid way to influence participants' appraisals. Manipulation checks of motive-consistent / motive-inconsistent, high / low perceived control, and the proper appraisal of cause were significant. Form influenced both the positive and negative emotions experienced: the more lifelike agents were rated higher in positive emotions and lower in negative emotions. The emotion gratitude was greater under conditions of low control when the entities performed correctly, suggesting that more experiments should be conducted investigating agent-caused motive-conducive events. A second study had participants evaluate their reaction to a hypothetical story.
In this story they interacted with either a human, a robotic dog, or a robot to complete a task. These three agent types and high/low perceived control were manipulated, with all stories ending successfully. Results indicated that gratitude and appreciation are sensitive to the manipulation of agent type. It is suggested that, based on the results of these studies, the emotion gratitude should be added to Roseman et al.'s (2001) appraisal theory to describe the emotion felt during low-control, motive-consistent, other-caused events. These studies have also shown that the appraisal theory of emotion is useful in the study of human-robot and human-animal interactions.
47

Aplicação de um robô humanoide autônomo por meio de reconhecimento de imagem e voz em sessões pedagógicas interativas / Application of an autonomous humanoid robot by image and voice recognition in interactive pedagogical sessions

Tozadore, Daniel Carnieto 03 March 2016 (has links)
A Robótica Educacional consiste na utilização de robôs para aplicação prática dos conteúdos teóricos discutidos em sala de aula. Porém, os robôs mais usados apresentam uma carência de interação com os usuários, a qual pode ser melhorada com a inserção de robôs humanoides. Esta dissertação tem como objetivo a combinação de técnicas de visão computacional, robótica social e reconhecimento e síntese de fala para a construção de um sistema interativo que auxilie em sessões pedagógicas por meio de um robô humanoide. Diferentes conteúdos podem ser abordados pelos robôs de forma autônoma. Sua aplicação visa o uso do sistema como ferramenta de auxílio no ensino de matemática para crianças. Para uma primeira abordagem, o sistema foi treinado para interagir com crianças e reconhecer figuras geométricas 3D. O esquema proposto é baseado em módulos, no qual cada módulo é responsável por uma função específica e contém um grupo de funcionalidades. No total são 4 módulos: Módulo Central, Módulo de Diálogo, Módulo de Visão e Módulo Motor. O robô escolhido é o humanoide NAO. Para visão computacional, foram comparados a rede LEGION e o sistema VOCUS2 para detecção de objetos e SVM e MLP para classificação de imagens. O reconhecedor de fala Google Speech Recognition e o sintetizador de voz do NAOqi API são empregados para interações sonoras. Também foi conduzido um estudo de interação, por meio da técnica de Mágico-de-Oz, para analisar o comportamento das crianças e adequar os métodos para melhores resultados da aplicação. Testes do sistema completo mostraram que pequenas calibrações são suficientes para uma sessão de interação com poucos erros. Os resultados mostraram que crianças que tiveram contato com uma maior interatividade com o robô se sentiram mais engajadas e confortáveis nas interações, tanto nos experimentos quanto no estudo em casa para as próximas sessões, comparadas às crianças que tiveram contato com menor nível de interatividade. 
Intercalar comportamentos desafiadores e comportamentos incentivadores do robô trouxe melhores resultados na interação com as crianças do que um comportamento constante. / Educational Robotics is a growing area that uses robots for the practical application of theoretical concepts discussed in class. However, the most commonly used robots offer little interaction with users, which can be improved by introducing humanoid robots. This dissertation combines computer vision techniques, social robotics, and speech recognition and synthesis to build an interactive system that leads educational sessions through a humanoid robot. The system can be trained with different content to be addressed autonomously to users by the robot; its application here is as a support tool for teaching mathematics to children. As a first approach, the system was trained to interact with children and recognize 3D geometric figures. The proposed scheme is module-based, with each module responsible for a specific function and containing a group of features for that purpose. In total there are four modules: a Central Module, a Dialog Module, a Vision Module, and a Motor Module. The chosen robot is the humanoid NAO. For the Vision Module, the LEGION network and the VOCUS2 system were compared for object detection, and SVM and MLP for image classification. Google Speech Recognition and the NAOqi API voice synthesizer are used for spoken interaction. An interaction study was also conducted, using the Wizard-of-Oz technique, to analyze the children's behavior and adapt the methods for better results. Tests of the full system showed that small calibrations are sufficient for an interaction session with few errors. Children who experienced a greater degree of interactivity with the robot felt more engaged and comfortable during the interactions, both in the experiments and when studying at home for the next sessions, compared to children exposed to a lower level of interactivity.
Alternating challenging and encouraging robot behaviors produced better interaction results with the children than a constant behavior.
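The four-module organisation might be sketched as a central coordinator routing events to registered modules. The event strings and handler behaviour below are hypothetical stand-ins for the actual Dialog, Vision, and Motor implementations:

```python
class DialogModule:
    """Stand-in for the speech side (recognition + synthesis)."""
    def handle(self, event):
        return f"say: {event}"

class VisionModule:
    """Stand-in for object detection and image classification."""
    def handle(self, event):
        return "cube" if event == "image:3d-shape" else "unknown"

class MotorModule:
    """Stand-in for the robot's motion commands."""
    def handle(self, event):
        return f"moving: {event}"

class CentralModule:
    """Coordinator: routes each session event to the responsible module."""
    def __init__(self):
        self.modules = {}

    def register(self, name, module):
        self.modules[name] = module

    def dispatch(self, name, event):
        return self.modules[name].handle(event)

hub = CentralModule()
hub.register("dialog", DialogModule())
hub.register("vision", VisionModule())
hub.register("motor", MotorModule())
print(hub.dispatch("vision", "image:3d-shape"))  # cube
```

A hub-and-spoke layout like this keeps each capability replaceable in isolation, which matches the abstract's point that the vision backend (LEGION vs. VOCUS2, SVM vs. MLP) could be swapped without touching the rest of the session logic.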
49

Wie kommt die Robotik zum Sozialen? Epistemische Praktiken der Sozialrobotik / How does robotics get to the social? Epistemic practices of social robotics

Bischof, Andreas 01 March 2017 (has links) (PDF)
Numerous research projects devote substantial financial and human resources to getting robots out of the factory halls and into everyday settings such as hospitals, kindergartens, and private homes. Their designers face a non-trivial challenge: they must translate the ambivalences and contingencies of everyday interaction into the discrete language of machines. How they meet this challenge, which patterns and solutions they draw on, and which implications for the use of social robots are thereby laid down, is the subject of this book. In search of an answer to what makes robots social, Andreas Bischof visited and ethnographically studied research laboratories and conferences in Europe and North America. Key results of the study include a typology of research goals in social robotics, an epistemic genealogy of the idea of the robot in everyday worlds, a reconstruction of how social-robotics development relates to 'real' everyday worlds, and an analysis of three genres of epistemic practices that engineers employ to make robots social.
50

Human-in-the-loop control for cooperative human-robot tasks

Chipalkatty, Rahul 29 March 2012 (has links)
Even with the advance of autonomous robotics and automation, many automated tasks still require human intervention or guidance to mediate uncertainties in the environment or to execute complexities of a task that autonomous robots are not yet equipped to handle. Robot controllers are therefore needed that utilize the strengths of both autonomous agents, adept at handling lower-level control tasks, and humans, superior at handling higher-level cognitive tasks. To address this need, we develop a control-theoretic framework that incorporates user commands such that user intention is preserved while an automated task is carried out by the controller. This approach is novel in that system-theoretic tools allow for analytic guarantees of feasibility and convergence to goal states, which naturally lead to varying levels of autonomy. We develop a model predictive controller that takes human input, infers human intent, then applies a control that minimizes deviations from the intended human control while ensuring that the lower-level automated task is completed. This control framework is evaluated in a human operator study involving a shared control task with human guidance of a mobile robot for navigation. These theoretical and experimental results lay the foundation for applying this method of human-robot cooperative control to actual human-robot tasks. Specifically, the control is applied to an Urban Search and Rescue robot task where shared control of a quadruped rescue robot is needed to ensure static stability during human-guided leg placements on uneven terrain. The framework is also extended to a multiple-user, multiple-agent system in which human operators control multiple agents such that the agents maintain a formation while the operators manipulate the shape of the formation. User studies are also conducted to evaluate the control in multiple-operator scenarios.
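As a toy illustration of the core idea — stay as close as possible to the human's command while keeping the automated task feasible — consider a one-dimensional integrator with a receding-horizon reachability constraint. This is a simplified sketch under invented dynamics, not the controller developed in the thesis:

```python
U_MAX = 1.0  # assumed actuator limit for this sketch

def reachable(x, goal, steps):
    """Can a unit integrator (x' = x + u, |u| <= U_MAX) reach goal in `steps`?"""
    return abs(goal - x) <= U_MAX * steps

def shared_control(u_human, x, goal, horizon):
    """Return the admissible input closest to the human's command such that
    the goal remains reachable over the rest of the horizon."""
    candidates = sorted((u / 10 for u in range(-10, 11)),
                        key=lambda u: abs(u - u_human))
    for u in candidates:
        if reachable(x + u, goal, horizon - 1):
            return u  # closest feasible input to the inferred human intent
    raise ValueError("no feasible input over this horizon")

# With a generous horizon the human's command passes through unchanged;
# as the deadline tightens, the controller overrides only as much as needed.
print(shared_control(-0.3, 0.0, 5.0, 10))  # -0.3
print(shared_control(-0.3, 0.0, 5.0, 6))   # 0.0
```

The two calls show the "varying levels of autonomy" behaviour the abstract describes: the same human input is honoured when the task constraint is slack and minimally corrected when it binds.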
