61

Applying the Appraisal Theory of Emotion to Human-Agent Interaction

Pepe, Aaron 01 January 2007 (has links)
Autonomous robots are increasingly being used in everyday life: cleaning our floors, entertaining us, and supplementing soldiers on the battlefield. As emotion is a key ingredient in how we interact with others, it is important that our emotional interaction with these new entities be understood. This dissertation proposes using the appraisal theory of emotion (Roseman, Scherer, Schorr, & Johnstone, 2001) to investigate how we understand and evaluate situations involving this new breed of robot. This research involves two studies. In the first study, an experimental method was used in which participants interacted with a live dog, a robotic dog, or a non-anthropomorphic robot to attempt to accomplish a set of tasks. The appraisals of motive-consistent/motive-inconsistent (the task was performed correctly/incorrectly) and high/low perceived control (the teammate was well trained/not well trained) were manipulated to show the practicality of using appraisal theory as a basis for human-robot interaction studies. Robot form was investigated for its influence on the emotions experienced. Finally, the influence of high and low control on the experience of positive emotions caused by another was investigated. Results show that a live human-robot interaction test bed is a valid way to influence participants' appraisals. Manipulation checks of motive-consistent/motive-inconsistent, high/low perceived control, and the proper appraisal of cause were significant. Form was shown to influence both the positive and negative emotions experienced: the more lifelike agents were rated higher in positive emotions and lower in negative emotions. The emotion gratitude was greater during conditions of low control when the entities performed correctly, suggesting that more experiments should be conducted investigating agent-caused motive-conducive events. A second study was performed with participants evaluating their reaction to a hypothetical story in which they interacted with either a human, a robotic dog, or a robot to complete a task. These three agent types and high/low perceived control were manipulated, with all stories ending successfully. Results indicated that gratitude and appreciation are sensitive to the manipulation of agent type. Based on the results of these studies, it is suggested that the emotion gratitude be added to Roseman et al.'s (2001) appraisal theory to describe the emotion felt during low-control, motive-consistent, other-caused events. These studies have also shown that the appraisal theory of emotion is useful in the study of human-robot and human-animal interactions.
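The appraisal-to-emotion mapping this abstract proposes extending can be made concrete with a small illustrative sketch (not from the dissertation; the dimension names and emotion labels are simplified stand-ins for Roseman-style appraisal theory, and the first table entry encodes the gratitude addition argued for above):

```python
# Illustrative sketch: Roseman-style appraisal dimensions mapped to discrete
# emotions. Labels are simplified placeholders, not the theory's full table.
from typing import NamedTuple

class Appraisal(NamedTuple):
    motive_consistent: bool  # did the event help (True) or hinder the goal?
    high_control: bool       # did the appraiser perceive control potential?
    cause: str               # "self", "other", or "circumstance"

EMOTION_TABLE = {
    Appraisal(True,  False, "other"): "gratitude",   # the proposed addition
    Appraisal(True,  True,  "other"): "liking",
    Appraisal(False, True,  "other"): "anger",
    Appraisal(False, False, "other"): "dislike",
    Appraisal(True,  False, "circumstance"): "relief",
    Appraisal(False, False, "circumstance"): "sadness",
}

def appraise(event: Appraisal) -> str:
    """Return the predicted emotion for an appraised event."""
    return EMOTION_TABLE.get(event, "unmapped")

# A well-trained teammate (other-caused) completes the task while the
# participant has low perceived control -> gratitude.
print(appraise(Appraisal(True, False, "other")))  # gratitude
```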
62

A Customizable Socially Interactive Robot with Wireless Health Monitoring Capability

Hornfeck, Kenneth B. 20 April 2011 (has links)
No description available.
63

Programming by demonstration for dual-arm manipulation

Mudgal, Karan Chaitanya 28 May 2024 (has links)
Motivated by challenges operators face with manual control tasks, including fatigue and workload management, this research explores the adoption of a semi-autonomous control method to improve work-environment quality and task metrics in controlled situations. Building upon the success of Programming by Demonstration (PbD) for single-arm industrial robotic applications, we extend these techniques to dual-arm robotic control. We present a semi-autonomous approach that allows users to supervise tasks while delegating control to the system, alleviating the stress and fatigue associated with manual control operations. This research compares manual and semi-autonomous control in a human-robot team, focusing quantitatively on user performance and qualitatively on trust in the system. Participants controlled a dual-arm robotic system from a remote cockpit, monitoring progress through a graphical user interface (GUI) and camera views. Semi-autonomous control employs PbD with selectable 'motion primitives'. Trials involved a modified pick-and-place task, and the results demonstrate a significantly higher success rate across all metrics with semi-autonomous control. This study highlights the applicability of PbD as a semi-autonomous control method in human-robot teams, reducing workload stress and enhancing task performance. Integrating sensors for dynamic environment analysis to create motion feedback mechanisms could further enhance user trust and system adaptability. Ultimately, this research suggests implementing semi-autonomous control for dual-arm robotic systems, offering faster onboarding for new operators and increased operational flexibility while minimizing user stress and fatigue.
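A minimal sketch of the "selectable motion primitive" idea described in this abstract (an assumption-laden illustration, not the thesis code): demonstrated trajectories are stored in a library, and the operator composes tasks by selecting and replaying primitives instead of teleoperating each arm directly. All class and primitive names here are hypothetical.

```python
# Sketch: a primitive library recorded from demonstrations, replayed on demand.
import numpy as np

class PrimitiveLibrary:
    def __init__(self):
        self._lib = {}  # name -> (T, dof) joint-space trajectory

    def record(self, name: str, demo: np.ndarray):
        """Store one demonstrated trajectory under a primitive name."""
        self._lib[name] = np.asarray(demo, dtype=float)

    def replay(self, name: str, n_steps: int) -> np.ndarray:
        """Time-rescale a stored primitive to n_steps by linear interpolation."""
        traj = self._lib[name]
        t_old = np.linspace(0.0, 1.0, len(traj))
        t_new = np.linspace(0.0, 1.0, n_steps)
        return np.column_stack(
            [np.interp(t_new, t_old, traj[:, j]) for j in range(traj.shape[1])]
        )

# Operator-level usage: select primitives per arm; the system executes them
# while the human only supervises progress through the GUI.
lib = PrimitiveLibrary()
lib.record("reach_left", np.random.rand(50, 7))  # stand-in for a recorded demo
lib.record("grasp_left", np.random.rand(30, 7))
plan = [lib.replay("reach_left", 100), lib.replay("grasp_left", 60)]
```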
64

Design and Control of an Ergonomic Wearable Full-Wrist Exoskeleton for Pathological Tremor Alleviation

Wang, Jiamin 31 January 2023 (has links)
Activities of daily living (ADL) such as writing, eating, and object manipulation are challenging for patients suffering from pathological tremors. Pathological tremors are involuntary, rhythmic, and oscillatory movements that manifest in the limbs, the head, and other body parts. Among the existing treatments, mechanical loading through wearable rehabilitation devices is popular for being non-invasive and innocuous to the human body. In particular, a few exoskeletons have been developed to actively mitigate pathological tremors in the forearm. While these forearm exoskeletons can effectively suppress tremors, they still require significant improvements in ergonomics before they can be used in ADL applications. The ergonomics of an exoskeleton can be improved via design and motion control pertaining to human biomechanics, which leads to better efficiency, comfort, and safety for the user. The wrist is a complicated biomechanical joint with two coupled degrees of freedom (DOF) pivotal to human manipulation capabilities. Existing exoskeletons either do not provide tremor suppression in all wrist DOFs or can be restrictive to natural wrist movement. This motivates us to explore a better exoskeleton solution for wrist tremor suppression. We propose TAWE, a wearable exoskeleton that alleviates pathological tremors in all wrist DOFs. The design adopts a 6-DOF rigid linkage mechanism to ensure unconstrained natural wrist movements, and wearability features that require neither extremely tight binding nor precise positioning, for convenient ADL applications. When TAWE is worn by the user, a closed kinematic chain is formed between the exoskeleton and the forearm. We analyze the coupled multibody dynamics of the human-exoskeleton system, which reveals several robotic control problems: (i) the first problem is the identification of the unknown wrist kinematics within the closed kinematic chain. We realize real-time wrist kinematic identification (WKI) based on a novel ellipsoidal joint model that describes the coupled wrist kinematics, and a sparsity-promoting Extended Kalman Filter for efficient real-time regression; (ii) the second problem is exoskeleton motion control for tremor suppression. We design a robust adaptive controller (IO-RAC) based on model reference adaptive control and inverse optimal robust control theories, which can identify the unknown model inertia and load and provide stable tracking control under disturbance; (iii) the third problem is the estimation of voluntary movement from tremorous motion data for the motion planning of the exoskeleton. We develop a lightweight, data-driven voluntary movement estimator (SVR-VME) based on least-squares support vector regression, which can estimate voluntary movements with real-time signal adaptability and significantly reduced time delay. Simulations and experiments are carried out to test the individual performance of the robotic control algorithms proposed in this study, as well as their combined real-time performance when integrated into the full exoskeleton control system. We also manufacture a prototype of TAWE, which helps us validate the proposed solutions for tremor-alleviation exoskeletons. Overall, the design of TAWE meets expectations in its compliance with natural wrist movement and simple wearability. The exoskeleton control system can execute stably in real time, identify unknown system kinematics and dynamics, estimate voluntary movements, and suppress tremors in the wrist. The results also indicate a few limitations in the current approaches, which require further investigation and improvement. Finally, the proposed exoskeleton control solutions are developed from generic formulations, which can be applied not only to TAWE but also to other rehabilitation exoskeletons. / Doctor of Philosophy / Activities of daily living (ADL) such as writing, eating, and object manipulation are challenging for patients suffering from pathological tremors, which affect millions of people worldwide. Tremors are involuntary, rhythmic, and oscillatory movements. In recent years, rehabilitation exoskeletons have been developed as non-invasive solutions for pathological tremor alleviation. The wrist is pivotal to human manipulation capabilities. Existing exoskeletons either do not provide tremor suppression in all wrist movements or can be restrictive to natural wrist movements. To explore a better solution with improved performance and ergonomics, we propose TAWE, a wearable exoskeleton that provides tremor alleviation across full wrist motions. TAWE adopts a high-degree-of-freedom mechanism to ensure unconstrained natural wrist movements, and wearability features for convenient ADL applications. The coupled dynamics between the forearm and TAWE leads to several robotic control problems. We propose novel real-time robotic control solutions for the identification of unknown wrist kinematics, robust adaptive exoskeleton control for tremor suppression, and voluntary movement estimation for motion planning. Simulations and experiments validate the TAWE prototype and its exoskeleton control framework for tremor alleviation, and reveal limitations in the current approaches that require further investigation and improvement. Finally, the proposed exoskeleton control solutions are developed from generic formulations, which can be applied not only to TAWE but also to other rehabilitation exoskeletons.
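A schematic sketch of the idea behind the sparsity-promoting Extended Kalman Filter used here for wrist kinematic identification (an assumed form, not the dissertation's implementation): a standard EKF measurement update followed by a soft-threshold step that drives small parameter estimates toward zero. The threshold value and model functions are placeholders.

```python
# Sketch: EKF measurement update with a soft-threshold sparsity step.
import numpy as np

def ekf_sparse_update(x, P, z, h, H_jac, R, tau=1e-3):
    """x: (n,) parameter estimate      P: (n, n) estimate covariance
    z: (m,) measurement             h: measurement model, h(x) -> (m,)
    H_jac: Jacobian of h at x       R: (m, m) measurement noise covariance
    """
    H = H_jac(x)                            # (m, n) linearization
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - h(x))                  # state correction
    P = (np.eye(len(x)) - K @ H) @ P        # covariance correction
    # Sparsity promotion: shrink coefficients that hover near zero.
    x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
    return x, P
```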
65

Adaptive Communication Interfaces for Human-Robot Collaboration

Christie, Benjamin Alexander 07 May 2024 (has links)
Robots can use a collection of auditory, visual, or haptic interfaces to convey information to human collaborators. The way these interfaces select signals typically depends on the task the human is trying to complete: for instance, a haptic wristband may vibrate when the human is moving quickly and stop when the user is stationary. But people interpret the same signals in different ways, so what one user finds intuitive another may not understand. In the absence of task knowledge, conveying signals is even more difficult: without knowing what the human wants to do, how should the robot select signals that help them accomplish their task? When paired with the seemingly infinite ways that humans can interpret signals, designing an optimal interface for all users seems impossible. This thesis presents an information-theoretic approach to communication in task-agnostic settings: a unified algorithmic formalism for learning co-adaptive interfaces from scratch, without task knowledge. The resulting approach is user-specific and not tied to any interface modality. This method is further improved by introducing symmetrical properties via priors on communication. Although we cannot anticipate how a human will interpret signals, we can anticipate interface properties that humans may like. By integrating these functional priors into the learning scheme, we achieve performance far better than baselines that have access to task knowledge. The results indicate that users subjectively prefer interfaces generated by the presented learning scheme, which also enables better performance and more efficient interactions. / Master of Science / This thesis presents a novel interface for robot-to-human communication that personalizes to the current user without either task knowledge or an interpretative model of the human. Suppose that you are trying to find the location of buried treasure in a sandbox. You don't know the location of the treasure, but a robotic assistant does. Unfortunately, the only way the assistant can communicate the position of the treasure to you is through two LEDs of varying intensity, and neither you nor the robot has a mutually understood interpretation of those signals. Without knowing the robot's convention for communication, how should you interpret the robot's signals? There are infinitely many viable interpretations: perhaps a brighter signal means that the treasure is towards the center of the sandbox, or something else entirely. The robot has a similar problem: how should it interpret your behavior? Without knowing what you want to do with the hidden information (i.e., your task) or how you behave (i.e., your interpretative model), there are infinitely many pairs of task and interpretation that fit your behavior. This work presents an interface optimizer that maximizes the correlation between the human's behavior and the hidden information. Testing with real humans indicates that this learning scheme can produce useful communicative mappings without knowing the users' tasks or their interpretative models. Furthermore, we recognize that humans have common biases in their interpretation of the world (leading to biases in their interpretations of robot communication). Although we cannot assume how a specific user will interpret an interface's signal, we can assume user-friendly interface designs that most humans find intuitive. We leverage these biases to further improve the learning scheme across several user studies. As such, the findings presented in this thesis have a direct impact on human-robot co-adaptation in task-agnostic settings.
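A toy sketch of the correlation objective described above (an assumed one-dimensional form, not the thesis implementation): the interface maps a hidden state to a signal through a weight W, and we hill-climb W to maximize the correlation between the hidden state and the human's observed response. The simulated user and step sizes are invented for illustration.

```python
# Sketch: hill-climbing an interface mapping to maximize behavior-state correlation.
import numpy as np

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.corrcoef(a, b)[0, 1])

def improve_interface(W, thetas, human_response, n_trials=200, step=0.1):
    """Random perturbation hill climbing on the signal mapping W (1D case)."""
    rng = np.random.default_rng(0)
    best = correlation(thetas, human_response(W * thetas))
    for _ in range(n_trials):
        W_new = W + step * rng.standard_normal()
        score = correlation(thetas, human_response(W_new * thetas))
        if score > best:
            W, best = W_new, score
    return W, best

# Simulated user who responds noisily but monotonically to signal intensity.
thetas = np.linspace(0, 1, 50)  # hidden states (e.g., treasure position)
user = lambda s: s + 0.1 * np.random.default_rng(1).standard_normal(len(s))
W_opt, score = improve_interface(W=0.5, thetas=thetas, human_response=user)
```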
66

Inferring the Human's Objective in Human-Robot Interaction

Hoegerman, Joshua Thomas 03 May 2024 (has links)
This thesis discusses the use of Bayesian inference to infer the human's objective in human-robot interaction; more specifically, it focuses on adapting methods to better utilize the available information when inferring the human's objective in reward learning and communicative shared autonomy settings. To accomplish this, we first examine state-of-the-art methods for Bayesian Inverse Reinforcement Learning, exploring the strengths and weaknesses of current approaches. We then explore alternative methods, borrowing approaches from the statistics community to improve the sampling process over the human's belief. I then move to a discussion of the shared autonomy setting in the presence and absence of communication. These differences are explored in our method for inferring in an environment where the human is aware of the robot's intention, and how this awareness can be used to dramatically improve the robot's ability to cooperate and infer the human's objective. In total, I conclude that using these methods to better infer the human's objective significantly improves the performance and cohesion between the human and robot agents in these settings. / Master of Science / This thesis discusses the use of various methods to allow robots to better understand human actions so that they can learn from and work with those humans. In this work we focus on two areas of inferring the human's objective. The first is learning what the human prioritizes when completing certain tasks, making the best use of the information inherent in the environment to learn those priorities so that a robot can replicate the given task. The second surrounds shared autonomy, where we have the robot better infer what task a human is going to do, and thus better assist with that goal, by using communicative interfaces to alter the information dynamic the robot uses to infer human intent. Collectively, the thesis argues that current inference methods for human-robot interaction can be improved through the further progression of inference techniques that better approximate the human's internal model in a given setting.
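A compact sketch of the kind of Bayesian inference at the core of this thesis (a generic Metropolis-Hastings sampler over reward weights with a Boltzmann-rational choice likelihood; the rationality constant, feature function, and step sizes are assumptions, not the author's exact sampler):

```python
# Sketch: Metropolis-Hastings sampling from P(w | demonstrations) for
# Bayesian Inverse Reinforcement Learning with a Boltzmann-rational human.
import numpy as np

def boltzmann_loglik(w, demos, features, beta=5.0):
    """Log-likelihood of the human's action choices under reward weights w.
    demos: iterable of (state, chosen_index, candidate_actions)."""
    ll = 0.0
    for state, chosen, options in demos:
        utils = beta * np.array([features(state, a) @ w for a in options])
        m = utils.max()  # numerically stable log-softmax
        ll += utils[chosen] - m - np.log(np.sum(np.exp(utils - m)))
    return ll

def mh_posterior(demos, features, dim, n_samples=5000, step=0.1):
    """Random-walk Metropolis-Hastings over reward weights (flat prior)."""
    rng = np.random.default_rng(0)
    w = np.zeros(dim)
    ll = boltzmann_loglik(w, demos, features)
    samples = []
    for _ in range(n_samples):
        w_new = w + step * rng.standard_normal(dim)
        ll_new = boltzmann_loglik(w_new, demos, features)
        if np.log(rng.random()) < ll_new - ll:  # symmetric proposal
            w, ll = w_new, ll_new
        samples.append(w.copy())
    return np.array(samples)
```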
67

Cooperative human-robot search in a partially-known environment using multiple UAVs

Chourey, Shivam 28 August 2020 (has links)
This thesis details a system developed with the objective of conducting a cooperative search operation in a partially-known environment, with a human operator and two Unmanned Aerial Vehicles (UAVs) carrying nadir and front-facing on-board cameras. The system uses two phases of flight operations. The first phase is aimed at gathering the latest overhead images of the environment using a UAV's nadir camera. These images are used to generate and update representations of the environment, including a 3D reconstruction, a mosaic image, an occupancy image, and a network graph. During the second phase, a human operator marks multiple areas of interest for closer inspection on the mosaic generated in the previous step, displayed via a UI. These areas are used by the path planner as visitation goals. The two-step path planner operates on the network graph, using weighted-A* planning and a solution to the Travelling Salesman Problem to compute an optimal visitation plan. This visitation plan is then converted into mission waypoints for a second UAV, which are communicated through a navigation module over a MAVLink connection. The second UAV, flying at low altitude, executes the mission plan and streams live video from its front-facing camera to a ground station over a wireless network. The human operator views the video on the ground station and uses it to locate the target object, culminating the mission. / Master of Science / This thesis details work focused on developing a system capable of conducting a search operation in an environment where prior information has become outdated, while allowing a human operator and multiple robots to cooperate in the search. The system's operation is divided into two phases of flight. The first focuses on gathering current information using a camera-equipped unmanned aircraft, while the second utilizes the human operator's instinct to select areas of interest for close inspection. The system uses the data acquired in the first phase to generate a detailed map of the target environment. In the second phase, the human operator marks areas of interest by drawing over this map, guiding the search operation. The path planner generates an optimal visitation plan, which is executed by a second unmanned aircraft. That aircraft streams live video to a ground station over a wireless network, which the human operator uses to detect the target object's location, concluding the search operation.
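A sketch of the two-step planner described in this abstract: weighted A* computes pairwise path costs on the network graph, and an exhaustive Travelling Salesman pass orders the operator-marked inspection goals. The graph representation, heuristic, and inflation weight here are placeholders for the thesis's mosaic-derived graph, not its actual code.

```python
# Sketch: weighted A* for pairwise costs, then brute-force TSP ordering.
import heapq
from itertools import permutations

def weighted_astar(graph, start, goal, h, w=1.5):
    """graph: {node: [(neighbor, cost), ...]}; h: heuristic(node, goal).
    Returns the (inflated-heuristic) path cost from start to goal."""
    frontier = [(w * h(start, goal), 0.0, start)]
    best_g = {start: 0.0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + w * h(nbr, goal), g2, nbr))
    return float("inf")

def visitation_plan(graph, home, goals, h):
    """Exhaustive TSP over the (small) set of operator-marked goals."""
    cost = {(a, b): weighted_astar(graph, a, b, h)
            for a in [home] + goals for b in goals if a != b}
    best_order, best_cost = None, float("inf")
    for order in permutations(goals):
        total, prev = 0.0, home
        for g in order:
            total += cost[(prev, g)]
            prev = g
        if total < best_cost:
            best_order, best_cost = order, total
    return best_order, best_cost
```

Brute-force ordering is reasonable here because the operator marks only a handful of goals; for larger goal sets a heuristic TSP solver would replace the permutation loop.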
68

  • Application of an autonomous humanoid robot by image and voice recognition in interactive pedagogical sessions

Tozadore, Daniel Carnieto 03 March 2016 (has links)
Educational robotics uses robots for the practical application of theoretical concepts discussed in class. However, the most commonly used robots present a lack of interaction with users, which can be improved by introducing humanoid robots. This dissertation combines computer vision techniques, social robotics, and speech recognition and synthesis to build an interactive system that assists pedagogical sessions through a humanoid robot. The system can be trained on different content to be presented autonomously to users by the robot. Its application covers the use of the system as a tool for teaching mathematics to children. For a first approach, the system was trained to interact with children and recognize 3D geometric figures. The proposed scheme is based on modules, where each module is responsible for a specific function and contains a group of features for that purpose. There are four modules in total: the Central Module, Dialog Module, Vision Module, and Motor Module. The chosen robot is the humanoid NAO. For the Vision Module, the LEGION network and the VOCUS2 system were compared for object detection, and SVM and MLP for image classification. The Google Speech Recognition recognizer and the NAOqi API voice synthesizer are used for spoken interaction. An interaction study was also conducted using the Wizard-of-Oz technique to analyze the children's behavior and adapt the methods for better application results. Tests of the full system showed that small calibrations are sufficient for an interaction session with few errors. Children who experienced a higher degree of interactivity with the robot felt more engaged and comfortable during the interactions, both in the experiments and when studying at home for subsequent sessions, compared to children exposed to a lower level of interactivity. Alternating the robot's challenging and encouraging behaviors brought better results in the interaction with children than a constant behavior.
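A minimal sketch of the classifier comparison mentioned in this abstract (SVM vs. MLP for recognizing 3D geometric figures), using scikit-learn stand-ins; the placeholder features, class labels, and hyperparameters are assumptions, not the dissertation's actual data or settings.

```python
# Sketch: comparing SVM and MLP classifiers on placeholder image descriptors.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: feature vectors extracted from detected figure regions.
X = np.random.rand(300, 64)             # stand-in image descriptors
y = np.random.randint(0, 4, size=300)   # e.g., cube / sphere / cone / cylinder

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```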
70

Human-Robot Interaction for Multi-Robot Systems

Lewis, Bennie 01 January 2014 (has links)
Designing an effective human-robot interaction paradigm is particularly important for complex tasks such as multi-robot manipulation that require the human and robot to work together in a tightly coupled fashion. Although increasing the number of robots can expand the area that the robots can cover within a bounded period of time, a poor human-robot interface will ultimately compromise the performance of the team of robots. However, introducing a human operator to the team of robots does not automatically improve performance, due to the difficulty of teleoperating mobile robots with manipulators. The human operator's concentration is divided not only among multiple robots but also between controlling each robot's base and arm. This complexity substantially increases the potential neglect time, since the operator's inability to effectively attend to each robot during a critical phase of the task leads to a significant degradation in task performance. There are several proven paradigms for increasing the efficacy of human-robot interaction: 1) multimodal interfaces in which the user controls the robots using voice and gesture; 2) configurable interfaces which allow the user to create new commands by demonstrating them; 3) adaptive interfaces which reduce the operator's workload as necessary by increasing robot autonomy. This dissertation presents an evaluation of the relative benefits of different types of user interfaces for multi-robot systems composed of robots with wheeled bases and three-degree-of-freedom arms. It describes a design for constructing low-cost multi-robot manipulation systems from off-the-shelf parts. User expertise was measured along three axes (navigation, manipulation, and coordination), and participants who performed above threshold on two out of three dimensions on a calibration task were rated as expert. Our experiments reveal that the relative expertise of the user was the key determinant of the best-performing interface paradigm for that user, indicating that good user modeling is essential for designing a human-robot interaction system that will be used for an extended period of time. The contributions of the dissertation include: 1) a model for detecting operator distraction from robot motion trajectories; 2) adjustable autonomy paradigms for reducing operator workload; 3) a method for creating coordinated multi-robot behaviors from demonstrations with a single robot; 4) a user modeling approach for identifying expert-novice differences from short teleoperation traces.
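The expertise-calibration rule stated in this abstract (expert if above threshold on two of the three axes) can be sketched directly; the threshold values below are invented placeholders, not the dissertation's calibrated cutoffs.

```python
# Sketch: rating a participant expert when >= 2 of 3 axis scores clear threshold.
AXES = ("navigation", "manipulation", "coordination")

def rate_expertise(scores: dict, thresholds: dict) -> str:
    """Return 'expert' when at least two of the three axes exceed threshold."""
    passed = sum(scores[a] > thresholds[a] for a in AXES)
    return "expert" if passed >= 2 else "novice"

thresholds = {"navigation": 0.7, "manipulation": 0.6, "coordination": 0.65}
print(rate_expertise({"navigation": 0.9, "manipulation": 0.4,
                      "coordination": 0.8}, thresholds))  # expert
```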
