101 |
A Graphical Language for LTL Motion and Mission Planning. January 2013 (has links)
abstract: Linear Temporal Logic (LTL) is gaining popularity as a high-level specification language for robot motion planning due to its expressive power and the scalability of LTL control synthesis algorithms. The formalism, however, requires expert knowledge, which makes it inaccessible to non-expert users. This thesis introduces a graphical specification environment for creating high-level motion plans to control robots in the field by converting a visual representation of the motion/task plan into an LTL specification. The visual interface is built on the Android tablet platform and provides functionality to create task plans through a set of well-defined gestures and on-screen controls. It uses the notion of waypoints to describe the motion plan quickly and efficiently, and it enables a variety of complex LTL specifications to be expressed succinctly and intuitively by users without knowledge of LTL. This opens avenues for its use by personnel in military, warehouse management, and search-and-rescue missions. The thesis describes the construction of LTL specifications for various robot-navigation scenarios using the visual interface and leverages existing LTL-based motion planners to carry out the task plan on a robot. / Dissertation/Thesis / M.S. Computer Science 2013
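The thesis does not reproduce its translation code in this abstract, but the core idea of compiling a waypoint plan into LTL can be sketched briefly: a sequential visit of waypoints becomes a chain of nested "eventually" (F) operators, and regions to avoid become "always" (G) safety constraints. The function and proposition names below are illustrative assumptions, not the thesis's implementation:

```python
def waypoints_to_ltl(waypoints, avoid=()):
    """Compile an ordered waypoint list into an LTL specification string.

    A sequential visit w1, w2, ..., wn becomes the nested formula
    F(w1 & F(w2 & ... F(wn))), optionally conjoined with a safety
    constraint G(!r) for each region r that must always be avoided.
    """
    # Build the nested "eventually" chain from the last waypoint inward.
    formula = waypoints[-1]
    for wp in reversed(waypoints[:-1]):
        formula = f"{wp} & F({formula})"
    formula = f"F({formula})"
    # Conjoin "always avoid" safety requirements.
    for region in avoid:
        formula += f" & G(!{region})"
    return formula

print(waypoints_to_ltl(["w1", "w2", "w3"], avoid=["obstacle"]))
# F(w1 & F(w2 & F(w3))) & G(!obstacle)
```

A planner-specific backend would then hand such a formula to an off-the-shelf LTL synthesis tool; the string syntax shown here is the common ASCII notation, though individual planners differ.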
|
102 |
Closed-form Inverse Kinematic Solution for Anthropomorphic Motion in Redundant Robot Arms. January 2013 (has links)
abstract: As robots increasingly migrate out of factories and research laboratories and into our everyday lives, they must move and act in environments designed for humans. For this reason, anthropomorphic movement is of utmost importance. The objective of this thesis is to solve the inverse kinematics problem of redundant robot arms in a way that results in anthropomorphic configurations. The swivel angle of the elbow was used as the human arm motion parameter for the robot arm to mimic. The swivel angle is defined as the rotation angle of the plane formed by the upper and lower arm around a virtual axis connecting the shoulder and wrist joints. Using kinematic data recorded from human subjects during everyday tasks, a linear sensorimotor transformation model was validated and used to estimate the swivel angle given the desired end-effector position. Defining the desired swivel angle resolves the kinematic redundancy of the robot arm. The proposed method was tested with an anthropomorphic redundant robot arm, and the computed motion profiles were compared to those of the human subjects. The thesis shows that the method computes anthropomorphic configurations for the robot arm even if the robot arm has different link lengths than the human arm and starts its motion from random configurations. / Dissertation/Thesis / M.S.Tech Mechanical Engineering 2013
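The swivel-angle definition in the abstract can be made concrete with a little vector algebra: take the shoulder-wrist line as the rotation axis, project the shoulder-to-elbow vector into the plane orthogonal to that axis, and measure its signed angle from a projected reference direction. The sketch below uses pure Python; the downward-gravity reference vector is an assumption (conventions vary across papers), not necessarily the one used in the thesis:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def unit(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def swivel_angle(shoulder, elbow, wrist, reference=(0.0, 0.0, -1.0)):
    """Rotation of the arm plane (shoulder-elbow-wrist) about the
    shoulder-wrist axis, measured from a reference direction
    (here: straight down, a common but not universal convention)."""
    axis = unit(sub(wrist, shoulder))
    # Component of the shoulder->elbow vector orthogonal to the axis.
    se = sub(elbow, shoulder)
    u = unit(sub(se, tuple(dot(se, axis) * x for x in axis)))
    # Reference direction projected into the same orthogonal plane.
    r = unit(sub(reference, tuple(dot(reference, axis) * x for x in axis)))
    # Signed angle between the two in-plane directions.
    return math.atan2(dot(cross(r, u), axis), dot(r, u))
```

With the elbow hanging straight below the shoulder-wrist line, this returns 0; rotating the elbow out of that plane changes the angle while the hand position stays fixed, which is exactly the redundant degree of freedom the thesis constrains.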
|
103 |
Chat, Connect, Collapse: A Critique on the Anthropomorphization of Chatbots in Search for Emotional Intimacy. Cheng, Alexandra. 01 January 2018 (has links)
This thesis is a critique of the ease with which humans anthropomorphize chatbots, assigning human characteristics to entities that fundamentally can never understand the human experience. It further explores the consequences of this tendency for our society's socio-cultural fabric, representations of the self, and identity formation in terms of communication and the essence of humanity.
|
104 |
An Evaluation of Gaze and EEG-Based Control of a Mobile Robot. Khan, Mubasher Hassan; Laique, Tayyab. January 2011 (has links)
Context: Patients with conditions such as locked-in syndrome or motor neuron disease are paralyzed and need special care. To reduce the cost of that care, systems need to be designed in which human involvement is minimal and affected people can perform their daily activities independently. To assess the feasibility and robustness of combinations of input modalities, navigation of a mobile robot (Spinosaurus) is controlled by a combination of eye-gaze tracking and other input modalities.
Objectives: Our aim is to control the robot using EEG brain signals and eye-gaze tracking simultaneously. Different combinations of input modalities are used to control the robot and turret movement, in order to find out which mapping of control technique to control command is most effective.
Methods: The method includes developing the interface and control software. An experiment involving 15 participants was conducted to evaluate control of the mobile robot using a combination of an eye tracker and other input modalities. Subjects were required to drive the mobile robot from a starting point to a goal along a pre-defined path. At the end of the experiment, a sense-of-presence questionnaire was distributed among the participants to collect their feedback. Finally, a qualitative pilot study was performed to find out how a low-cost commercial EEG headset, the Emotiv EPOC™, can be used for motion control of a mobile robot.
Results: Our study showed that the mouse/keyboard combination was the most effective for controlling the robot motion and the turret-mounted camera. In the experimental evaluation, the keyboard/eye-tracker combination improved performance by 9%. 86% of participants found that the turret-mounted camera was useful and provided great assistance in robot navigation. Our qualitative pilot study of the Emotiv EPOC™ demonstrated different ways to train the headset for different actions.
Conclusions: In this study, we concluded that different combinations of control techniques can be used to control devices such as a mobile robot or a powered wheelchair. Gaze-based control was found to be comparable to using a mouse and keyboard; EEG-based control required substantial training time and was difficult to train reliably. Our pilot study suggested that using facial expressions is an efficient and effective way to train the Emotiv EPOC™.
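The study's central variable is how input modalities are mapped to control commands. A mapping of that kind is, in software terms, just a dispatch table keyed by the modality combination. The sketch below is an invented illustration of the structure being compared (keyboard/mouse versus keyboard/eye-tracker); none of the event or command names are taken from the thesis:

```python
# Illustrative modality-to-command mapping, in the spirit of the
# combinations compared in the study. All names are invented.
COMMAND_MAPS = {
    # Baseline: keyboard drives the robot, mouse aims the turret camera.
    ("keyboard", "mouse"): {
        "key_up": "drive_forward",
        "key_down": "drive_backward",
        "mouse_move": "pan_turret",
    },
    # Hybrid: keyboard drives, the gaze point aims the turret camera.
    ("keyboard", "eye_tracker"): {
        "key_up": "drive_forward",
        "key_down": "drive_backward",
        "gaze_point": "pan_turret",
    },
}

def dispatch(modalities, event):
    """Translate a raw input event into a robot command, or None
    if the active modality combination does not handle the event."""
    return COMMAND_MAPS.get(modalities, {}).get(event)

print(dispatch(("keyboard", "eye_tracker"), "gaze_point"))  # pan_turret
```

Keeping the mapping in data rather than code is what makes it cheap to swap combinations between experimental conditions, which is presumably why such designs are common in multimodal-control studies.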
|
105 |
Unified Incremental Multimodal Interface for Human-Robot Interaction. Ameri Ekhtiarabadi, Afshin. January 2011 (has links)
Face-to-face human communication is a multimodal and incremental process. Humans employ different information channels (modalities) for their communication. Since some of these modalities are more error-prone for specific types of data, multimodal communication can benefit from the strengths of each modality and thereby reduce ambiguities during the interaction. Such interfaces can be applied to intelligent robots that operate in close relation with humans. With this approach, robots can communicate with their human colleagues in the same way the colleagues communicate with each other, leading to easier and more robust human-robot interaction (HRI). In this work we suggest a new method for implementing multimodal interfaces in the HRI domain and present the method deployed on an industrial robot. We show that operating the system is made easier by using this interface. / Robot Colleague
|
106 |
A retro-projected robotic head for social human-robot interaction. Delaunay, Frédéric C. January 2016 (has links)
As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Until recently, robotic faces and heads belonged to one of the following categories: virtual, mechatronic, or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF technology (Retro-Projected Animated Faces): a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, along with driving software designed with openness and portability to other hybrid robotic platforms in mind. The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze-direction control, facial expressions and blushing, and the first demonstration of an interactive video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, an R-PAF demonstrator and experimental platform, has demonstrated robustness in extended controlled and uncontrolled settings. The iterative hardware and facial design, details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, and improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations presents the first study of human performance in reading robotic gaze and another first on users' ethnic preferences toward a robot face.
|
107 |
The development of a human-robot interface for industrial collaborative system. Tang, Gilbert. January 2016 (has links)
Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, a number of manufacturing applications involve complex tasks and inconsistent components that will prohibit fully automated solutions for the foreseeable future. Breakthroughs in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution for an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot performs simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of human hands. Robots in such a system operate as "intelligent assistants". In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface requires effective ways of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals that operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in close proximity. The system is developed in conjunction with a small-scale collaborative robot system integrated from off-the-shelf components.
The system should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI was developed using a combination of hardware integration and software development. The software and control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.
|
108 |
Environment based on emotion recognition for human-robot interaction. Caetano Mazzoni Ranieri. 09 August 2016 (has links)
In computer science, the study of emotions has been driven by the construction of interactive environments, especially in the context of mobile devices. Research in human-robot interaction has explored emotions to provide natural interaction experiences with social robots. One aspect to be investigated is practical approaches that exploit changes in the personality of an artificial system driven by changes in an emotional state inferred from the user. This work proposes an environment for emotion-based human-robot interaction, with emotions recognized through facial expression analysis, for the Android platform. The system consisted of a virtual agent embedded in an application, which used information from an emotion recognizer to adapt its interaction strategy, alternating between two pre-defined discrete paradigms.
In the experiments performed, the proposed approach tended to produce more empathy than a control condition; however, this result was observed only in sufficiently long interactions.
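The adaptation loop described above, an agent switching between two pre-defined discrete paradigms based on the recognized emotion, can be sketched as a small state machine. The emotion labels, confidence threshold, and paradigm names below are illustrative assumptions, not the thesis's actual design:

```python
# Sketch of an emotion-driven strategy switch: the agent holds one of two
# discrete interaction paradigms and switches only on a confident estimate
# from the emotion recognizer. All names and thresholds are invented.

NEGATIVE = {"anger", "sadness", "fear", "disgust"}

class AdaptiveAgent:
    def __init__(self):
        self.paradigm = "task_oriented"  # default interaction strategy

    def update(self, emotion, confidence, threshold=0.6):
        """Switch paradigms only when the recognizer is confident enough;
        otherwise keep the current strategy to avoid jittery behaviour."""
        if confidence < threshold:
            return self.paradigm
        self.paradigm = "empathetic" if emotion in NEGATIVE else "task_oriented"
        return self.paradigm

agent = AdaptiveAgent()
print(agent.update("sadness", 0.8))    # empathetic
print(agent.update("happiness", 0.9))  # task_oriented
print(agent.update("anger", 0.3))      # task_oriented (low confidence)
```

The confidence gate is one plausible way to keep a per-frame facial-expression classifier from flipping the agent's personality on noise; a real system might instead smooth the emotion estimate over a time window.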
|
109 |
Blockchain, Smart Contracts and Cryptocurrencies in Robotics: Use Cases, Economics, and Human-Robot Interaction. Cardenas, Irvin Steve. 18 December 2020 (has links)
No description available.
|
110 |
ZERROR: Provoking ethical discussions of humanoid robots through speculative animation. Krzewska, Weronika. January 2021 (has links)
Robotics engineers' ongoing quest to create human-like robots has raised profound questions about the lack of attention to their ethical implications. The rapid progress and growth of humanoid robots is expected to have a significant impact on society and human psychology in the near future. Interaction design is a multidisciplinary field in which designers are often encouraged to engage in important conversations and find solutions to complex problems. Animators, in turn, often use animated videos as metaphors to reflect on important matters in our cultural and societal spheres. This study investigates the use of animation in speculative design settings as a material to bridge two communities, animators and roboticists, to foster ethical behaviour and influence future technology. The main result of the design process is a concept for a mobile platform that stimulates discussion of the ethical considerations of human relationships with humanoid robots through speculative animation. Moreover, the interactive platform supports imagination, creativity, and learning among users.
|