  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Modeling Human Learning in Games

Alghamdi, Norah K. 12 1900 (has links)
Human-robot interaction is an important and broad area of study. To achieve successful interaction, we have to study human decision-making rules. This work investigates human learning rules in games in the presence of intelligent decision makers. In particular, we analyze human behavior in a congestion game. The game models traffic in a simple scenario where multiple vehicles share two roads. Ten vehicles are controlled by the human player, who decides how to distribute them between the two roads; a hundred simulated players each control one vehicle. The game is repeated for many rounds, allowing the players to adapt and formulate a strategy, and after each round the road costs and visual assistance are shown to the human player. The goal of all players is to minimize the total congestion experienced by the vehicles they control. To demonstrate our results, we first built a human-player simulator using the Fictitious Play and Regret Matching algorithms. We then showed the passivity property of these algorithms after adjusting the passivity condition to suit a discrete-time formulation. Next, we conducted the experiment online to allow players to participate. A similar analysis was done on the collected data to study the passivity of the human decision-making rule. We observe different performances with different types of virtual players; however, in all cases, the human decision rule satisfied the passivity condition. This result implies that human behavior can be modeled as passive, and systems can be designed to use these results to influence human behavior and reach desirable outcomes.
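The Regret Matching learner mentioned in the abstract can be sketched concretely. The following is a minimal illustration, not the thesis's actual implementation: one regret-matching update for a player choosing between two roads, with a hypothetical toy congestion model for the costs.

```python
import random

def regret_matching_step(regrets, costs, last_action):
    """One regret-matching update for a player choosing between two roads.

    regrets[a] accumulates how much better road a would have been than the
    road actually taken; the next road is drawn in proportion to positive regret.
    """
    for a in range(2):
        regrets[a] += costs[last_action] - costs[a]
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0.0:
        return random.randrange(2)      # no positive regret yet: choose uniformly
    return 0 if random.random() < positive[0] / total else 1

# Hypothetical repeated play: road costs grow with the share of traffic on them.
regrets = [0.0, 0.0]
action = 0
for _ in range(50):
    traffic = [0.7, 0.3] if action == 0 else [0.3, 0.7]   # toy congestion model
    action = regret_matching_step(regrets, traffic, action)
```

With costs like these, the learner quickly shifts probability mass toward the less congested road, which is the adaptive behavior the thesis compares against human play.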
22

People Detection based on Points Tracked by an Omnidirectional Camera and Interaction Distance for Service Robots System / サービスロボットシステムのための全方位カメラによるトラッキング可能特徴点とインタラクション距離情報を用いた人物検出

Tasaki, Tsuyoshi 24 September 2013 (has links)
Kyoto University / 0048 / Doctor of Informatics (new doctoral program), Degree No. Ko 17926 (Joho No. 508) / Graduate School of Informatics, Department of Intelligence Science and Technology, Kyoto University / Examining committee: Prof. Hiroshi G. Okuno, Prof. Tatsuya Kawahara, Prof. Yuichi Nakamura, Prof. Atsushi Igarashi / Doctor of Informatics / Kyoto University / DFAM
23

Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control

Khan Mohd, Tauheed 06 September 2019 (has links)
No description available.
24

Initial steps toward human augmented mapping

Topp, Elin Anna January 2006 (has links)
With progress in research and product development, humans and robots are getting closer to each other, and the idea of a personalised general service robot is not too far-fetched. Crucial for such a service robot is the ability to navigate in its working environment, which must be assumed to be an arbitrary domestic or office-like environment shared with human users and bystanders. With methods developed and investigated in the field of simultaneous localisation and mapping, it has become possible for mobile robots to explore and map an unknown environment while staying localised with respect to their starting point and the surroundings. These approaches, though, do not consider the representation of the environment that humans use to refer to particular places. Robotic maps are often metric representations of features obtained from sensory data, whereas humans represent environments in a more topological, in fact partially hierarchical, way. Especially for communication between a user and her personal robot, it is thus necessary to provide a link between the robotic map and the human understanding of the robot's workspace. The term Human Augmented Mapping is used for a framework that integrates a robotic map with human concepts, thereby facilitating communication about the environment. By assuming an interactive setting for the map acquisition process, the user can influence the process significantly: personal preferences can be made part of the environment representation that the robot acquires. Advantages also become obvious for the mapping process itself, since in an interactive setting the robot can ask for information and resolve ambiguities with the help of the user.
Thus, a scenario of a "guided tour", in which a user asks a robot to follow while she presents the surroundings, is assumed as the starting point for a system integrating robotic mapping, interaction, and human environment representations. Based on results from robotics research, psychology, human-robot interaction, and cognitive science, a general architecture for a Human Augmented Mapping system is presented. This architecture combines a hierarchically organised robotic mapping approach with interaction abilities through a high-level environment model. An initial system design and implementation that combines a tracking-and-following approach with a mapping system is described. Observations from a pilot study in which this initial system was used successfully are reported; they support the assumptions about the usefulness of the environment model as the link between robotic and human representations. / QC 20101125
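The hybrid map described above, a metric robot map augmented with human place concepts, can be sketched as a graph of labeled places anchored to metric poses. This is an illustrative data structure with invented names, not the thesis's actual architecture.

```python
from dataclasses import dataclass, field

@dataclass
class Place:
    name: str                       # human concept, e.g. "kitchen"
    pose: tuple                     # anchor (x, y) in the robot's metric map
    neighbors: list = field(default_factory=list)

def add_place(places, name, pose):
    """Register a human-labeled place at a metric pose."""
    places[name] = Place(name, pose)

def connect(places, a, b):
    """Topological link between two places, as taught during a guided tour."""
    places[a].neighbors.append(b)
    places[b].neighbors.append(a)

# A user on a "guided tour" labels locations; the robot stores both views:
# the metric anchor for navigation and the topological link for dialogue.
places = {}
add_place(places, "kitchen", (1.0, 2.5))
add_place(places, "hallway", (4.0, 2.5))
connect(places, "kitchen", "hallway")
```

The point of the structure is that a command like "go to the kitchen" can be resolved through the human-level label while path execution still uses the metric pose.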
25

Effect of a human-teacher vs. a robot-teacher on human learning: a pilot study

Smith, Melissa A. B. 01 August 2011 (has links)
Studies of the dynamics of human-robot interactions have increased within the past decade as robots become more integrated into the daily lives of humans. However, much of the research into learning and robotics has focused on methods that would allow robots to learn from humans, and very little has been done on how and what, if anything, humans could learn from programmed robots. A between-subjects experiment was conducted comparing two groups: one in which participants learned a simple pick-and-place block task from a video of a human teacher, and one in which participants learned the same task from a video of a robotic teacher. After being taught the task, the participants performed a 15-minute distractor task and then were timed in their reconstruction of the block configuration. An exit survey asking about their level of comfort learning from robot and computer entities was given upon completion. Results showed no significant difference in the rebuild scores of the two groups, but a marginally significant difference in the rebuild times. Exit survey results, research implications, and future work are discussed.
26

Applying the Appraisal Theory of Emotion to Human-Agent Interaction

Pepe, Aaron 01 January 2007 (has links)
Autonomous robots are increasingly being used in everyday life: cleaning our floors, entertaining us, and supplementing soldiers on the battlefield. As emotion is a key ingredient in how we interact with others, it is important that our emotional interaction with these new entities be understood. This dissertation proposes using the appraisal theory of emotion (Roseman, Scherer, Schorr, & Johnstone, 2001) to investigate how we understand and evaluate situations involving this new breed of robot. This research involves two studies. In the first, an experimental method was used in which participants interacted with a live dog, a robotic dog, or a non-anthropomorphic robot to attempt to accomplish a set of tasks. The appraisals of motive-consistent / motive-inconsistent (the task was performed correctly/incorrectly) and high / low perceived control (the teammate was well trained/not well trained) were manipulated to show the practicality of using appraisal theory as a basis for human-robot interaction studies. Robot form was investigated for its influence on the emotions experienced. Finally, the influence of high and low control on the experience of positive emotions caused by another was investigated. Results show that a human-robot live interaction test bed is a valid way to influence participants' appraisals. Manipulation checks of motive-consistent / motive-inconsistent, high / low perceived control, and the proper appraisal of cause were significant. Form was shown to influence both the positive and negative emotions experienced: the more lifelike agents were rated higher in positive emotions and lower in negative emotions. The emotion gratitude was shown to be greater under conditions of low control when the entities performed correctly, suggesting that more experiments should be conducted investigating agent-caused motive-conducive events. A second study was performed with participants evaluating their reaction to a hypothetical story.
In this story they interacted with either a human, a robotic dog, or a robot to complete a task. These three agent types and high/low perceived control were manipulated, with all stories ending successfully. Results indicated that gratitude and appreciation are sensitive to the manipulation of agent type. It is suggested that, based on the results of these studies, the emotion gratitude should be added to Roseman et al.'s (2001) appraisal theory to describe the emotion felt during low-control, motive-consistent, other-caused events. These studies have also shown that the appraisal theory of emotion is useful in the study of human-robot and human-animal interactions.
27

A Customizable Socially Interactive Robot with Wireless Health Monitoring Capability

Hornfeck, Kenneth B. 20 April 2011 (has links)
No description available.
28

Adaptive Communication Interfaces for Human-Robot Collaboration

Christie, Benjamin Alexander 07 May 2024 (has links)
Robots can use a collection of auditory, visual, or haptic interfaces to convey information to human collaborators. The way these interfaces select signals typically depends on the task that the human is trying to complete: for instance, a haptic wristband may vibrate when the human is moving quickly and stop when the user is stationary. But people interpret the same signals in different ways, so what one user finds intuitive another may not understand. In the absence of task knowledge, conveying signals is even more difficult: without knowing what the human wants to do, how should the robot select signals that help them accomplish their task? When paired with the seemingly infinite ways that humans can interpret signals, designing an optimal interface for all users seems impossible. This thesis presents an information-theoretic approach to communication in task-agnostic settings: a unified algorithmic formalism for learning co-adaptive interfaces from scratch without task knowledge. The resulting approach is user-specific and not tied to any interface modality. The method is further improved by introducing symmetrical properties using priors on communication. Although we cannot anticipate how a human will interpret signals, we can anticipate interface properties that humans may like. By integrating these functional priors into the aforementioned learning scheme, we achieve performance far better than baselines that have access to task knowledge. The results presented here indicate that users subjectively prefer interfaces generated by the presented learning scheme, which also enables better performance and more efficient interactions. / Master of Science / This thesis presents a novel interface for robot-to-human communication that personalizes to the current user without task knowledge or an interpretative model of the human. Suppose that you are trying to find the location of buried treasure in a sandbox.
You don't know the location of the treasure, but a robotic assistant does. Unfortunately, the only way the assistant can communicate the position of the treasure to you is through two LEDs of varying intensity, and neither you nor the robot has a mutually understood interpretation of those signals. Without knowing the robot's convention for communication, how should you interpret its signals? There are infinitely many viable interpretations: perhaps a brighter signal means that the treasure is towards the center of the sandbox, or something else entirely. The robot has a similar problem: how should it interpret your behavior? Without knowing what you want to do with the hidden information (i.e., your task) or how you behave (i.e., your interpretative model), there are infinitely many pairs of the two that fit your behavior. This work presents an interface optimizer that maximizes the correlation between the human's behavior and the hidden information. Testing with real humans indicates that this learning scheme can produce useful communicative mappings without knowing the users' tasks or their interpretative models. Furthermore, we recognize that humans have common biases in their interpretation of the world (leading to biases in their interpretations of robot communication). Although we cannot assume how a specific user will interpret an interface's signals, we can assume user-friendly interface designs that most humans find intuitive. We leverage these biases to further improve the aforementioned learning scheme across several user studies. As such, the findings presented in this thesis have a direct impact on human-robot co-adaptation in task-agnostic settings.
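The optimizer's criterion, maximizing the statistical dependence between the human's behavior and the hidden information, can be illustrated with a toy version: among candidate signal mappings, keep the one whose induced human responses track the hidden value most closely. All names and data here are hypothetical, and plain Pearson correlation stands in for whatever dependence measure the thesis actually optimizes.

```python
def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def best_mapping(candidates, hidden, responses):
    """Pick the signal mapping whose logged human responses correlate most
    strongly (in absolute value) with the hidden information."""
    return max(candidates, key=lambda m: abs(pearson(hidden, responses[m])))

# Toy logs: hidden treasure positions and the human's guesses under two mappings.
hidden = [0.1, 0.5, 0.9, 0.3]
responses = {
    "brightness-is-distance": [0.2, 0.4, 0.8, 0.35],   # tracks the hidden value
    "random-blink": [0.9, 0.1, 0.4, 0.6],              # mostly uninformative
}
chosen = best_mapping(responses, hidden, responses)
```

The mapping whose guesses co-vary with the hidden position wins, with no model of the user's interpretation needed, which is the spirit of the task-agnostic scheme described above.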
29

Inferring the Human's Objective in Human Robot Interaction

Hoegerman, Joshua Thomas 03 May 2024 (has links)
This thesis discusses the use of Bayesian inference over the human's objective in Human-Robot Interaction; more specifically, it focuses on adapting methods to better use the available information when inferring the human's objective in reward-learning and communicative shared-autonomy settings. To accomplish this, we first examine state-of-the-art methods for Bayesian Inverse Reinforcement Learning, exploring the strengths and weaknesses of current approaches. We then explore alternative methods, borrowing approaches from the statistics community to improve the sampling process over the human's belief. After this, I move to a discussion of shared autonomy in the presence and absence of communication. These differences are explored in our method for inference in an environment where the human is aware of the robot's intention, and we show how this awareness can dramatically improve the robot's ability to cooperate and infer the human's objective. In total, I conclude that using these methods to better infer the human's objective significantly improves the performance and cohesion between the human and robot agents in these settings. / Master of Science / This thesis discusses methods that allow robots to better understand human actions so that they can learn from and work with those humans. We focus on two areas of inferring the human's objective. The first is learning what the human prioritizes when completing certain tasks, making the best use of the information inherent in the environment to learn those priorities so that a robot can replicate the given task.
The second body of work concerns shared autonomy, where we have the robot infer what task a human is going to do, and thus better assist with that goal, by using communicative interfaces to alter the information dynamic the robot uses to infer the human's intent. Collectively, the thesis argues that current inference methods for Human-Robot Interaction can be improved by progressing the inference to better approximate the human's internal model in a given setting.
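The Bayesian inference over the human's objective that both parts rely on can be sketched in a few lines: maintain a posterior over candidate goals and update it after each observed human action. The Boltzmann-style likelihood and the goal labels below are illustrative assumptions, not the thesis's actual models.

```python
import math

def infer_goal(goals, actions, likelihood, prior=None):
    """Bayesian update over candidate human goals given observed actions.

    likelihood(action, goal) -> P(action | goal); goals is a list of labels.
    Returns the normalized posterior after conditioning on each action in turn.
    """
    if prior is None:
        prior = {g: 1.0 / len(goals) for g in goals}
    post = dict(prior)
    for a in actions:
        post = {g: post[g] * likelihood(a, g) for g in goals}
        z = sum(post.values())
        post = {g: p / z for g, p in post.items()}
    return post

# Hypothetical example: the human moves left or right; under a Boltzmann-style
# model, actions matching the goal's direction are exponentially more likely.
def boltz(action, goal, beta=2.0):
    reward = 1.0 if action == goal else 0.0
    return math.exp(beta * reward) / (math.exp(beta) + math.exp(0.0))

post = infer_goal(["left", "right"], ["left", "left", "right"], boltz)
```

Even with one inconsistent observation, the posterior concentrates on the goal that explains most of the behavior, which is the property shared-autonomy assistance builds on.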
30

Wie kommt die Robotik zum Sozialen? Epistemische Praktiken der Sozialrobotik. [How Does Robotics Get to the Social? Epistemic Practices of Social Robotics]

Bischof, Andreas 01 March 2017 (has links) (PDF)
Numerous research projects invest substantial financial and human resources in getting robots to leave the factory floor and become part of everyday settings such as hospitals, kindergartens, and private homes. The designers face a non-trivial challenge: they must translate the ambivalences and contingencies of everyday interaction into the discrete language of machines. How they meet this challenge, which patterns and solutions they draw on, and which implications for the use of social robots are thereby laid down is the subject of this book. In search of an answer to what makes robots social, Andreas Bischof visited and ethnographically studied research laboratories and conferences in Europe and North America. Key results of this study include a typology of research goals in social robotics, an epistemic genealogy of the idea of the robot in everyday worlds, a reconstruction of how social-robotics development refers to 'real' everyday worlds, and an analysis of three genres of epistemic practices that engineers employ to make robots social.
