11

Human Robot Interaction Solutions for Intuitive Industrial Robot Programming

Akan, Batu January 2012 (has links)
Over the past few decades the use of industrial robots has increased the efficiency as well as the competitiveness of many companies. Despite this, robot automation investments are in many cases considered technically challenging. In addition, for most small and medium-sized enterprises (SMEs) the process is associated with high costs. Because of their continuously changing product lines, reprogramming costs are likely to exceed installation costs by a large margin. Furthermore, traditional programming methods for industrial robots are too complex for an inexperienced robot programmer, so assistance from a robot programming expert is often needed. We hypothesize that in order to make industrial robots more common within the SME sector, the robots should be reprogrammable by technicians or manufacturing engineers rather than by robot programming experts. In this thesis we propose a high-level natural language framework for interacting with industrial robots through an instructional programming environment for the user. The ultimate goal of this thesis is to bring robot programming to a stage where it is as easy as working together with a colleague. In this thesis we mainly address two issues. The first is to make interaction with a robot easier and more natural through a multimodal framework. The proposed language architecture makes it possible to manipulate, pick, or place objects in a scene through high-level commands. Interaction with simple voice commands and gestures enables the manufacturing engineer to focus on the task itself rather than on programming issues of the robot. This approach shifts the focus of industrial robot programming from the coordinate-based programming paradigm, which currently dominates the field, to an object-based programming scheme. The second issue addressed is a general framework for implementing multimodal interfaces.
There have been numerous efforts to implement multimodal interfaces for computers and robots, but there is no general standard framework for developing them. The general framework proposed in this thesis is designed to perform natural language understanding, multimodal integration and semantic analysis with an incremental pipeline and includes a novel multimodal grammar language, which is used for multimodal presentation and semantic meaning generation. / robot colleague project
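The object-based programming idea above — referring to scene objects by their properties rather than by coordinates — can be sketched as follows. This is an illustrative Python sketch, not the thesis's framework; all names (`SceneObject`, `interpret`) and the toy scene are assumptions.

```python
# Hypothetical sketch of object-based command interpretation: a spoken
# command names scene objects by property, and the system resolves the
# coordinates internally, hidden from the user.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    color: str
    position: tuple  # (x, y, z) known to the robot, invisible to the user

def interpret(command: str, scene: list) -> dict:
    """Map a high-level command like 'pick the red block' to a robot action."""
    tokens = command.lower().split()
    action = tokens[0]                       # e.g. 'pick' or 'place'
    matches = [o for o in scene if o.color in tokens or o.name in tokens]
    if not matches:
        raise ValueError(f"no scene object matches: {command!r}")
    target = matches[0]
    return {"action": action, "target": target.name, "goal": target.position}

scene = [SceneObject("block", "red", (0.4, 0.1, 0.0)),
         SceneObject("tube", "blue", (0.2, 0.3, 0.0))]
print(interpret("pick the red block", scene))
```

The point of the sketch is the shift of burden: the user supplies an object description, and coordinate-level detail stays inside the system.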
12

An exploratory study of health professionals' attitudes about robotic telepresence technology

Kristoffersson, Annica, Coradeschi, Silvia, Loutfi, Amy, Severinson Eklundh, Kerstin January 2011 (has links)
This article presents the results from a video-based evaluation study of a social robotic telepresence solution for the elderly. The evaluated system is a mobile teleoperated robot called Giraff that allows caregivers to virtually enter a home and conduct a natural visit just as if they were physically there. The evaluation focuses on the perspectives of primary healthcare organizations and collects feedback from different categories of health professionals. The evaluation included 150 participants and yielded unexpected results with respect to the acceptance of the Giraff system. In particular, greater exposure to technology did not necessarily increase acceptance, and large variances occurred between the categories of health professionals. In addition to outlining the results, this study provides a number of indications with respect to increasing the acceptance of technology for the elderly. / The final version of this article can be read at http://www.tandfonline.com/doi/pdf/10.1080/15228835.2011.639509
13

Human and Robot Interaction based on safety zones in a shared work environment

Augustsson, Svante January 2013 (has links)
The work explores the possibility of increasing automation along a production line by introducing robots without reducing the safety of the operator. The introduction of a robot to a workstation often demands a redesign of the workstation and, traditionally, the introduction of physical safety solutions that limit access to the work area and to objects on the production line. This work aims to find a general solution that can be used not only in the construction industry but also in other types of industries, to allow for increased Human and Robot Interaction (HRI) without physical safety solutions. A concept solution of a dynamic and flexible robot cell is presented to allow for HRI based on safety zones in a shared work environment. The concept is based on one robot and the use of a 3D camera system that allows virtual safety zones to be designed and used to control the HRI. When an operator approaches the robot's work area and triggers a safety zone, the robot stops its work and moves away from the operator. Based on the safety requirements and the triggered zones, the robot will continue to work in a new area or wait until the operator leaves the work area and then continue with the interrupted work task. This allows the operator and the robot to work together, where the operator's location controls the robot's workspace. Testing and validation of the presented concept showed that the desired functionality could be obtained. It also revealed limitations of the equipment and the system used during the tests, and raised additional safety aspects for HRI. Of the detected limitations, the most crucial for the up-time of the production line is the camera system's need for a relatively dust-free environment and good, constant lighting. For the safety of the system, the limitation lies in the size and placement of the safety zones in combination with disturbance from surrounding equipment.
The presented concept has proven to work and can be applied not only in the construction industry but in all industries with manufacturing alongside production lines with large components.
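The zone-triggering behavior described in this abstract — the operator's position selects which work areas remain available to the robot — can be sketched as follows. This is a hedged illustration, not the thesis implementation; the rectangular zones, coordinates, and function names are assumptions.

```python
# Illustrative sketch: virtual safety zones as axis-aligned rectangles in
# the shared floor plane. The operator's tracked position (e.g. from a 3D
# camera) determines which paired work areas the robot may keep using.

def zone_triggered(op_pos, zone):
    """zone = (xmin, ymin, xmax, ymax); op_pos = (x, y)."""
    x, y = op_pos
    xmin, ymin, xmax, ymax = zone
    return xmin <= x <= xmax and ymin <= y <= ymax

def allowed_areas(op_pos, zones, areas):
    """Return the work areas whose paired safety zone is NOT triggered."""
    return [a for z, a in zip(zones, areas) if not zone_triggered(op_pos, z)]

zones = [(0, 0, 2, 2), (3, 0, 5, 2)]   # one safety zone per work area
areas = ["area_A", "area_B"]
print(allowed_areas((1.0, 1.0), zones, areas))  # operator stands inside zone A
```

In the thesis's terms, an empty result would correspond to the robot waiting until the operator leaves before resuming the interrupted task.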
14

Adapting the Laban Effort System to Design Affect-Communicating Locomotion Path for a Flying Robot

Sharma, Megha 20 September 2013 (has links)
People and animals use various kinds of motion in a multitude of ways to communicate their ideas and affective states, such as their moods or emotions. Further, people attribute affect and personalities to the movements of even abstract entities based solely on the style of their motions; e.g., the movement of a geometric shape (how it moves about) can be interpreted as shy, aggressive, etc. In this thesis, we investigated how flying robots can leverage this locomotion-style communication channel for communicating their states to people. One problem in leveraging this style of communication in robot design is that there are no guidelines or tools that Human-Robot Interaction (HRI) designers can use to author affect-communicating locomotion paths for flying robots. Therefore, we propose to adapt the Laban Effort System (LES), a standard method for interpreting human motion commonly used in the performing arts, to develop a set of guidelines that HRI designers can use to author affective locomotion paths for flying robots. We further validate our proposed approach by conducting a small design workshop with a group of interaction designers, who were asked to design robotic behaviors using our design method. We conclude this thesis with an original adaptation of LES to the locomotion path of a flying robot, and a set of design guidelines that interaction designers can use to build affective locomotion paths for a flying robot.
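One way to picture an LES-style mapping is as a table from the four Laban Effort factors (Time, Space, Weight, Flow) to flight-path parameters. The factor-to-parameter pairing and all numeric values below are assumptions for illustration only — they are not the guidelines developed in the thesis.

```python
# Illustrative sketch: each Laban Effort factor has two poles, and each
# chosen pole contributes one locomotion-path parameter for a flying robot.
LABAN_TO_PATH = {
    "time":   {"sudden": {"speed": 2.0},     "sustained": {"speed": 0.5}},
    "space":  {"direct": {"curvature": 0.0}, "indirect":  {"curvature": 0.8}},
    "weight": {"strong": {"accel": 3.0},     "light":     {"accel": 0.8}},
    "flow":   {"bound":  {"jitter": 0.0},    "free":      {"jitter": 0.3}},
}

def path_params(efforts: dict) -> dict:
    """Combine chosen effort poles into one parameter set for a path planner."""
    params = {}
    for factor, pole in efforts.items():
        params.update(LABAN_TO_PATH[factor][pole])
    return params

# e.g. an 'aggressive' affective state: sudden, direct, strong, bound
print(path_params({"time": "sudden", "space": "direct",
                   "weight": "strong", "flow": "bound"}))
```

The design value of such a table is that an interaction designer reasons in effort vocabulary ("sudden, direct") while the planner receives concrete numeric parameters.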
16

Decisional issues during human-robot joint action

Devin, Sandra 03 November 2017 (has links) (PDF)
In the future, robots will become our companions and co-workers. They will gradually appear in our environment, to help elderly or disabled people or to perform repetitive or unsafe tasks. However, we are still far from a truly autonomous robot that can act in a natural, efficient and safe manner with humans. To endow robots with the capacity to act naturally with humans, it is important to first study how humans act together. Consequently, this manuscript starts with a state of the art on joint action in psychology and philosophy before presenting the application of the principles gained from this study to human-robot joint action. We then describe the supervision module for human-robot interaction developed during the thesis. Part of the work presented in this manuscript concerns the management of what we call a shared plan. Here, a shared plan is a partially ordered set of actions to be performed by humans and/or the robot for the purpose of achieving a given goal. First, we present how the robot estimates the beliefs of its human partners concerning the shared plan (called mental states) and how it takes these mental states into account during shared plan execution. This allows it to communicate in a clever way about potential divergences between the robot's and the humans' beliefs. Second, we present the abstraction of shared plans and the postponing of some decisions. Indeed, in previous work, the robot took all decisions at planning time (who should perform which action, which object to use, etc.), which could be perceived as unnatural by the human during execution, as it imposes one solution in preference to any other. This work endows the robot with the capacity to identify which decisions can be postponed to execution time and to take the right decision according to the human's behavior, in order to obtain fluent and natural robot behavior.
The complete system of shared plan management has been evaluated in simulation and with real robots in the context of a user study. Thereafter, we present our work concerning the non-verbal communication needed for human-robot joint action. This work focuses on how to manage the robot's head, which allows it to transmit information concerning the robot's activity and what it understands of the human's actions, as well as coordination signals. Finally, we present how to mix planning and learning in order to make the robot more efficient in its decision process. The idea, inspired by neuroscience studies, is to limit the use of planning (which is adapted to the human-aware context but costly) by letting the learning module make the choices when the robot is in a "known" situation. The first results obtained demonstrate the potential interest of the proposed solution.
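A shared plan as described above — a partially ordered set of actions with some decisions left open until execution — can be sketched as a small data structure. This is a minimal illustration under assumed names (`Action`, `ready_actions`, `assign_at_execution`), not the thesis's supervision module.

```python
# Sketch: a shared plan is a partially ordered set of actions; the choice
# of performer is a decision postponed to execution time.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    agents: set                               # candidate performers
    after: set = field(default_factory=set)   # names of required predecessors

def ready_actions(plan, done):
    """Actions whose predecessors have all been completed."""
    return [a for a in plan if a.name not in done and a.after <= done]

def assign_at_execution(action, present_agents):
    """Postponed decision: pick a performer among those currently available."""
    candidates = action.agents & present_agents
    return sorted(candidates)[0] if candidates else None

plan = [Action("fetch_part", {"human", "robot"}),
        Action("hold_part",  {"human"},            after={"fetch_part"}),
        Action("screw",      {"robot"},            after={"fetch_part", "hold_part"})]

done = {"fetch_part"}
for a in ready_actions(plan, done):
    print(a.name, "->", assign_at_execution(a, {"human", "robot"}))
```

Keeping `agents` as a set rather than a single name is what lets the decision "who performs this?" stay open until the robot observes which partners are actually present and active.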
17

Fast upper body pose estimation for human-robot interaction

Burke, Michael Glen January 2015 (has links)
This work describes an upper body pose tracker that finds a 3D pose estimate using video sequences obtained from a monocular camera, with applications in human-robot interaction in mind. A novel mixture of Ornstein-Uhlenbeck processes model, trained in a reduced dimensional subspace and designed for analytical tractability, is introduced. This model acts as a collection of mean-reverting random walks that pull towards more commonly observed poses. Pose tracking using this model can be Rao-Blackwellised, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. The model is used within a recursive Bayesian framework to provide reliable estimates of upper body pose when only a subset of body joints can be detected. Model training data can be extended through a retargeting process, and better pose coverage obtained through the use of Poisson disk sampling in the model training stage. Results on a number of test datasets show that the proposed approach provides pose estimation accuracy comparable with the state of the art in real time (30 fps) and can be extended to the multiple user case. As a motivating example, this work also introduces a pantomimic gesture recognition interface. Traditional approaches to gesture recognition for robot control make use of predefined codebooks of gestures, which are mapped directly to the robot behaviours they are intended to elicit. These gesture codewords are typically recognised using algorithms trained on multiple recordings of people performing the predefined gestures. Obtaining these recordings can be expensive and time consuming, and the codebook of gestures may not be particularly intuitive. 
This thesis presents arguments that pantomimic gestures, which mimic the intended robot behaviours directly, are potentially more intuitive, and proposes a transfer learning approach to recognition, where human hand gestures are mapped to recordings of robot behaviour by extracting temporal and spatial features that are inherently present in both pantomimed actions and robot behaviours. A Bayesian bias compensation scheme is introduced to compensate for potential classification bias in features. Results from a quadrotor behaviour selection problem show that good classification accuracy can be obtained when human hand gestures are recognised using behaviour recordings, and that classification using these behaviour recordings is more robust than using human hand recordings when users are allowed complete freedom over their choice of input gestures.
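The mean-reverting idea behind the pose model above can be illustrated with a discrete Ornstein-Uhlenbeck step: the latent pose is pulled back toward a commonly observed mean pose while noise perturbs it. This is a hedged sketch of the general OU process, not the thesis's trained mixture model; the parameter values are placeholders.

```python
# Sketch: one Euler-Maruyama step of dx = theta*(mu - x)*dt + sigma*dW,
# applied to a low-dimensional latent pose vector. Repeated steps behave
# as a mean-reverting random walk that drifts toward the mean pose mu.
import random

def ou_step(x, mu, theta=0.5, sigma=0.1, dt=1.0 / 30):  # dt for 30 fps frames
    return [xi + theta * (mi - xi) * dt + sigma * random.gauss(0.0, dt ** 0.5)
            for xi, mi in zip(x, mu)]

random.seed(0)
mu = [0.0, 0.0, 0.0]     # mean pose in a reduced-dimensional subspace
x = [1.0, -1.0, 0.5]     # current latent pose estimate
for _ in range(300):     # ten simulated seconds
    x = ou_step(x, mu)
print([round(v, 2) for v in x])  # pulled close to mu
```

The attraction toward `mu` is what encodes the prior that some poses are more common than others; in the thesis this prior is what stabilizes tracking when only a subset of joints is detected.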
18

Use of Vocal Prosody to Express Emotions in Robotic Speech

Crumpton, Joe 14 August 2015 (has links)
Vocal prosody (pitch, timing, loudness, etc.) and its use to convey emotions are essential components of speech communication between humans. The objective of this dissertation research was to determine the efficacy of using varying vocal prosody in robotic speech to convey emotion. Two pilot studies and two experiments were performed to address the shortcomings of previous HRI research in this area. The pilot studies were used to determine a set of vocal prosody modification values for a female voice model using the MARY speech synthesizer to convey the emotions anger, fear, happiness, and sadness. Experiment 1 validated that participants perceived these emotions, along with a neutral vocal prosody, at rates significantly higher than chance. Four of the vocal prosodies (anger, fear, neutral, and sadness) were recognized at rates approaching the recognition rate (60%) of emotions in person-to-person speech. During Experiment 2, the robot led participants through a creativity test while making statements using one of the validated emotional vocal prosodies. The ratings of the robot's positive qualities and the creativity scores by the participant group that heard non-negative vocal prosodies (happiness, neutral) did not significantly differ from the ratings and scores of the participant group that heard the negative vocal prosodies (anger, fear, sadness). Therefore, Experiment 2 failed to show that the use of emotional vocal prosody in a robot's speech influenced the participants' appraisal of the robot or their performance on this specific task. At this time robot designers and programmers should not expect that vocal prosody alone will have a significant impact on the acceptability or the quality of human-robot interactions. Further research is required to show that multi-modal (vocal prosody along with facial expressions, body language, or linguistic content) expressions of emotions by robots will be effective at improving human-robot interactions.
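One common way to represent per-emotion prosody modifications like those determined in the pilot studies is as relative multipliers over a voice's baseline pitch, rate, and volume. The dissertation's actual modification values for the MARY synthesizer are not reproduced here; every number below is a placeholder assumption for illustration.

```python
# Illustrative sketch: per-emotion prosody modifications as multipliers
# applied to a baseline voice configuration. Values are placeholders.
PROSODY = {
    "anger":     {"pitch": 0.90, "rate": 1.2, "volume": 1.3},
    "fear":      {"pitch": 1.20, "rate": 1.3, "volume": 0.9},
    "happiness": {"pitch": 1.15, "rate": 1.1, "volume": 1.1},
    "sadness":   {"pitch": 0.85, "rate": 0.8, "volume": 0.8},
    "neutral":   {"pitch": 1.00, "rate": 1.0, "volume": 1.0},
}

def apply_prosody(base_pitch_hz, base_rate_wpm, emotion):
    """Scale a voice's baseline settings by the chosen emotion's multipliers."""
    p = PROSODY[emotion]
    return {"pitch_hz": base_pitch_hz * p["pitch"],
            "rate_wpm": base_rate_wpm * p["rate"],
            "volume":   p["volume"]}

print(apply_prosody(210.0, 160, "sadness"))
```

The multiplier form makes the same emotion profile reusable across different base voices, which is one plausible reason to calibrate modifications rather than absolute values.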
19

The Impact Of Mental Transformation Training Across Levels Of Automation On Spatial Awareness In Human-robot Interaction

Rehfeld, Sherri 01 January 2006 (has links)
One of the problems affecting robot operators' spatial awareness involves their ability to infer a robot's location based on the views from on-board cameras and other electro-optic systems. To understand the vehicle's location, operators typically need to translate images from a vehicle's camera into some other coordinates, such as a location on a map. This translation requires operators to relate the view by mentally rotating it along a number of axes, a task that is both attention-demanding and workload-intensive, and one that is likely affected by individual differences in operator spatial abilities. Because building and maintaining spatial awareness is attention-demanding and workload-intensive, any variable that changes operator workload and attention should be investigated for its effects on operator spatial awareness. One of these variables is the use of automation (i.e., assigning functions to the robot). According to Malleable Attentional Resource Theory (MART), variation in workload across levels of automation affects an operator's attentional capacity to process critical cues like those that enable an operator to understand the robot's past, current, and future location. The study reported here focused on performance aspects of human-robot interaction involving ground robots (i.e., unmanned ground vehicles, or UGVs) during reconnaissance tasks. In particular, this study examined how differences in operator spatial ability and in operator workload and attention interacted to affect spatial awareness during human-robot interaction (HRI). Operator spatial abilities were systematically manipulated through the use of mental transformation training. Additionally, operator workload and attention were manipulated via the use of three different levels of automation (i.e., manual control, decision support, and full automation). 
Operator spatial awareness was measured by the size of the errors made by the operators when they were tasked to infer the robot's location from on-board camera views at three different points in a sequence of robot movements through a simulated military operations in urban terrain (MOUT) environment. The results showed that mental transformation training increased two areas of spatial ability, namely mental rotation and spatial visualization. Further, spatial ability in these two areas predicted performance in vehicle localization during the reconnaissance task. Finally, assistive automation showed a benefit with respect to operator workload, situation awareness, and subsequently performance. Together, the results of the study have implications with respect to the design of robots, function allocation between robots and operators, and training for spatial ability. Future research should investigate the interactive effects on operator spatial awareness of spatial ability, spatial ability training, and other variables affecting operator workload and attention.
20

The Perception And Measurement Of Human-robot Trust

Schaefer, Kristin 01 January 2013 (has links)
As robots penetrate further into everyday environments, trust in these robots becomes a crucial issue. The purpose of this work was to create and validate a reliable scale that could measure changes in an individual's trust in a robot. Assessment of current trust theory identified measurable antecedents specific to the human, the robot, and the environment. Six experiments supported the development of the 40-item trust scale. Scale development included the creation of a 172-item pool. Two experiments identified the robot features and perceived functional characteristics that were related to the classification of a machine as a robot for this item pool. Item-pool reduction techniques and subject matter expert (SME) content validation were used to reduce the scale to 40 items. The two final experiments were then conducted to validate the scale. The finalized 40-item pre-post interaction trust scale was designed to measure trust perceptions specific to HRI. The scale measures trust on a 0-100% rating scale and provides a percentage trust score. A 14-item sub-scale of this final version, recommended by SMEs, may be sufficient for some HRI tasks; the implications of this proposition are discussed.
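The pre-post scoring scheme described above can be sketched in a few lines: each item is rated on a 0-100% scale and the scale yields an overall percentage trust score, taken here as the mean of item ratings — an assumption for illustration, since the abstract does not state the aggregation rule.

```python
# Sketch of a percentage trust score from 40 per-item 0-100 ratings,
# administered before and after an interaction to measure change in trust.
def trust_score(ratings):
    """Percentage trust score as the mean of per-item 0-100 ratings."""
    if not ratings:
        raise ValueError("no item ratings")
    bad = [r for r in ratings if not 0 <= r <= 100]
    if bad:
        raise ValueError(f"ratings out of range: {bad}")
    return sum(ratings) / len(ratings)

pre  = [60] * 40                    # hypothetical pre-interaction ratings
post = [70] * 30 + [80] * 10        # hypothetical post-interaction ratings
print(trust_score(pre), trust_score(post))  # change in trust across interaction
```

The same function would apply unchanged to a 14-item sub-scale, since the score is a mean over whatever items are administered.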
