181 |
Behaviour-Aware Motion Planning for Autonomous Vehicles Incorporating Human Driving Style. Lazarov, Kristiyan; Mirzai, Badi. January 2019
This paper proposes a model to ensure safe and realistic human-robot interaction for an autonomous vehicle interacting with a human-driven vehicle by incorporating the driving style of the human driver. The interaction is modeled as a game in which both agents try to maximize future rewards. The driving style is captured through the role the human driver takes in the game, reflecting the fact that humans with different driving styles reason differently. The solution of the game is obtained using a numerical approximation and is used by the autonomous vehicle to plan optimally ahead. The model is validated via simulations on a safety-critical scenario, where realistic driving style-dependent behaviour emerges naturally.
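As a rough illustration of the game-theoretic planning described above, the sketch below sets up a one-step leader-follower game over a small grid of candidate accelerations, with a single style parameter weighting progress against safety in the human's reward. The reward functions, action grid, and leader-follower structure are illustrative assumptions, not the thesis's actual formulation.

```python
# Minimal sketch (not the thesis's actual model): a one-step leader-follower
# game on a merge-like scenario. The autonomous vehicle (AV) picks an
# acceleration anticipating the human's best response, where the human's
# reward is weighted by an assumed driving-style parameter.
import numpy as np

ACTIONS = np.linspace(-3.0, 3.0, 13)          # candidate accelerations [m/s^2]

def human_reward(a_h, a_av, style):
    progress = a_h                             # aggressive drivers value progress
    safety = -(a_h + a_av) ** 2                # penalize closing speed (toy proxy)
    return style * progress + (1.0 - style) * safety

def av_reward(a_av, a_h):
    return a_av - 0.5 * (a_av + a_h) ** 2      # progress minus a safety penalty

def plan(style):
    """Enumerate the action grid: for each AV action, predict the human's
    best response under the given style, then pick the AV action that
    maximizes the AV's reward against that response."""
    best_a, best_val = None, -np.inf
    for a_av in ACTIONS:
        a_h = max(ACTIONS, key=lambda a: human_reward(a, a_av, style))
        val = av_reward(a_av, a_h)
        if val > best_val:
            best_a, best_val = a_av, val
    return best_a

print("AV action vs. cautious driver:  ", plan(style=0.2))
print("AV action vs. aggressive driver:", plan(style=0.9))
```

With the style parameter set per driver, the same enumeration-based approximation yields noticeably different autonomous-vehicle actions, which mirrors the kind of style-dependent behaviour the abstract reports.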
|
182 |
Designing an interface for a teleoperated vehicle which uses two cameras for navigation. Rudqwist, Lucas. January 2018
The Swedish fire department has wanted a robot that can be sent into situations where it is too dangerous to send in firefighters. A teleoperated vehicle is being developed for exactly this purpose. This thesis builds on research previously conducted within Human-Robot Interaction and interface design for teleoperated vehicles. In this study, a prototype was developed to simulate the experience of driving a teleoperated vehicle. It visualised the intended operator interface and simulated the operating experience. The development followed a User-Centered Design process and was evaluated by users. After the final evaluation, a design proposal based on previous research and user feedback was presented. The study discusses the issues discovered when designing an interface for a teleoperated vehicle that uses two cameras for maneuvering. One challenge was how to fully utilize the two video feeds and create an interplay between them. The evaluations showed that users could keep better focus with one larger, designated main feed and the second one placed where it can easily be glanced at. Simplicity and where to display sensor data were also shown to be important aspects to consider when trying to lower the mental load on the operator. Further modifications to the vehicle and the interface have to be made to increase the operator's awareness and confidence when maneuvering the vehicle. / The Swedish fire service has been in need of a robot that can be used in situations where it is too risky to send in firefighters. A remote-controlled vehicle is being developed for exactly this purpose. This work is based on research previously conducted within Human-Computer Interaction and interface design for remote-controlled vehicles. In this study, a prototype was developed to simulate the feeling of driving a remote-controlled vehicle. It visualised the intended interface for the operator and simulated the driving experience. The development followed a user-centered design process and was evaluated with the help of users. After the final evaluation, a design proposal based on previous research and user feedback was presented. The study discusses the problems that arise when designing an interface for a remote-controlled vehicle that uses two cameras for maneuvering. One challenge was how to fully utilize the two camera views and create an interplay between them. The evaluations showed that users could keep better focus with one larger, dedicated camera view and a smaller secondary view that can easily be glanced at. Simplicity and where sensor data is placed also proved to be important aspects for reducing the mental strain on the operator. Further modifications to the vehicle and the interface need to be made to increase the operator's awareness and confidence when maneuvering.
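The layout the evaluations favoured, one large designated main feed with the secondary feed placed where it can be glanced at, can be sketched as a simple frame-composition step; the frame sizes, inset scale, and placement below are assumptions for illustration only, not the thesis's implementation.

```python
# Minimal sketch of the favoured layout: one large designated main feed with
# the secondary camera inset in a corner where it can be glanced at.
# Frame sizes, inset scale, and placement are illustrative assumptions.
import numpy as np

def compose(main_feed, second_feed, inset_scale=0.3, margin=10):
    """Return a single display frame: main feed full size, second feed
    shrunk by nearest-neighbour sampling and placed in the bottom-right."""
    out = main_feed.copy()
    H, W = main_feed.shape[:2]
    h, w = int(H * inset_scale), int(W * inset_scale)
    ys = np.linspace(0, second_feed.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, second_feed.shape[1] - 1, w).astype(int)
    inset = second_feed[ys][:, xs]
    out[H - h - margin:H - margin, W - w - margin:W - margin] = inset
    return out

# Example with synthetic frames standing in for the two video feeds.
main = np.full((480, 640, 3), 40, dtype=np.uint8)
rear = np.full((480, 640, 3), 200, dtype=np.uint8)
frame = compose(main, rear)
print(frame.shape)   # (480, 640, 3): one frame the operator interface can draw
```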
|
183 |
In the Eyes of the Beheld? : Investigating people's understanding of the visual capabilities of autonomous vehicles. Pettersson, Max. January 2022
Autonomous vehicles are complex, technologically opaque, and can vary greatly in what perceptual capabilities they are endowed with. Because of this, it is reasonable to expect people to have difficulties in accurately inferring what an autonomous vehicle has and has not seen, and also how it will act, in a traffic situation. To facilitate effective interaction in traffic, autonomous vehicles should therefore be developed with people's assumptions in mind, and design efforts should be made to communicate the vehicles' relevant perceptual beliefs. For such efforts to be effective, however, they need to be grounded in empirical data on what assumptions people make about autonomous vehicles' perceptual capabilities. Using a novel method, the present study aims to contribute to this by investigating how people's understanding of the visual capabilities of autonomous vehicles compares to their understanding of those of human drivers with respect to (Q1) what the vehicle/driver can and cannot see in various traffic situations, (Q2) how certain they are of Q1, and (Q3) the level of agreement in their judgement of Q1. Additionally, we examine whether (Q4) there is a correlation between individual differences in anthropomorphizing and Q1. The results indicate that people generally believe autonomous vehicles and human drivers have the same perceptual capabilities, and that they therefore are subject to similar limitations. The results also indicate that people are equally certain of their beliefs in both cases, strongly agree in both cases, and that individual differences in anthropomorphizing are not associated with these beliefs. Implications for the development of autonomous vehicles and future research are discussed.
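A minimal sketch of the kind of analysis questions Q1-Q4 imply is given below, run on made-up placeholder data: the per-item proportion of "can see" judgements, majority agreement, mean certainty, and the correlation between an anthropomorphism score and those judgements. The actual study's instruments and statistics may differ.

```python
# Minimal sketch of the kind of analysis Q1-Q4 imply, on made-up placeholder
# data: proportion judging "can see" per traffic item, agreement, certainty,
# and the correlation between anthropomorphism scores and those judgements.
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_items = 40, 12
judgements = rng.integers(0, 2, size=(n_participants, n_items))   # 1 = "can see" (Q1)
certainty = rng.uniform(1, 7, size=(n_participants, n_items))     # Likert 1-7   (Q2)
anthro = rng.uniform(1, 5, size=n_participants)                    # anthropomorphism score (Q4)

p_can_see = judgements.mean(axis=0)                 # per-item proportion (Q1)
agreement = np.maximum(p_can_see, 1 - p_can_see)    # majority agreement per item (Q3)
mean_certainty = certainty.mean()                   # overall certainty (Q2)

# Q4: correlate each participant's anthropomorphism score with how often
# they judged the vehicle able to see.
r = np.corrcoef(anthro, judgements.mean(axis=1))[0, 1]

print(f"mean agreement: {agreement.mean():.2f}")
print(f"mean certainty: {mean_certainty:.2f}")
print(f"anthropomorphism vs. 'can see' rate: r = {r:.2f}")
```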
|
184 |
Human-robot Interaction For Multi-robot Systems. Lewis, Bennie. 01 January 2014
Designing an effective human-robot interaction paradigm is particularly important for complex tasks such as multi-robot manipulation that require the human and robot to work together in a tightly coupled fashion. Although increasing the number of robots can expand the area that the robots can cover within a bounded period of time, a poor human-robot interface will ultimately compromise the performance of the team of robots. However, introducing a human operator to the team of robots does not automatically improve performance, due to the difficulty of teleoperating mobile robots with manipulators. The human operator's concentration is divided not only among multiple robots but also between controlling each robot's base and arm. This complexity substantially increases the potential neglect time, since the operator's inability to effectively attend to each robot during a critical phase of the task leads to a significant degradation in task performance. There are several proven paradigms for increasing the efficacy of human-robot interaction: 1) multimodal interfaces in which the user controls the robots using voice and gesture; 2) configurable interfaces which allow the user to create new commands by demonstrating them; 3) adaptive interfaces which reduce the operator's workload as necessary through increasing robot autonomy. This dissertation presents an evaluation of the relative benefits of different types of user interfaces for multi-robot systems composed of robots with wheeled bases and three-degree-of-freedom arms. It describes a design for constructing low-cost multi-robot manipulation systems from off-the-shelf parts. User expertise was measured along three axes (navigation, manipulation, and coordination), and participants who performed above threshold on two out of three dimensions on a calibration task were rated as expert. Our experiments reveal that the relative expertise of the user was the key determinant of the best performing interface paradigm for that user, indicating that good user modeling is essential for designing a human-robot interaction system that will be used for an extended period of time. The contributions of the dissertation include: 1) a model for detecting operator distraction from robot motion trajectories; 2) adjustable autonomy paradigms for reducing operator workload; 3) a method for creating coordinated multi-robot behaviors from demonstrations with a single robot; 4) a user modeling approach for identifying expert-novice differences from short teleoperation traces.
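The expertise rule described above (above threshold on two of the three axes) is simple enough to sketch directly; the threshold values here are assumptions, not those used in the dissertation.

```python
# Minimal sketch of the expertise rule described above: a participant is
# rated expert if they score above threshold on at least two of the three
# axes (navigation, manipulation, coordination). Thresholds are assumptions.
THRESHOLDS = {"navigation": 0.7, "manipulation": 0.6, "coordination": 0.65}

def classify_operator(scores):
    """scores: dict of normalized calibration-task scores per axis."""
    above = sum(scores[axis] > cut for axis, cut in THRESHOLDS.items())
    return "expert" if above >= 2 else "novice"

print(classify_operator({"navigation": 0.8, "manipulation": 0.5, "coordination": 0.7}))  # expert
print(classify_operator({"navigation": 0.4, "manipulation": 0.7, "coordination": 0.3}))  # novice
```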
|
185 |
Multi-Human Management of a Hub-Based Colony: Efficiency and Robustness in the Cooperative Best M-of-N Task. Grosh, John Rolfes. 01 June 2019
Swarm robotics is an emerging field that is expected to provide robust solutions to spatially distributed problems. Human operators will often be required to guide a swarm in the fulfillment of a mission. Occasionally, large tasks may require multiple spatial swarms to cooperate in their completion. We hypothesize that when latency, bandwidth, operator dropout, and communication noise are significant factors, human organizations that promote individual initiative perform more effectively and resiliently than hierarchies in the cooperative best-m-of-n task. Simulations automating the behavior of hub-based swarm robotic agents and groups of human operators are used to evaluate this hypothesis. To make the comparison between teams and hierarchies meaningful, we explore parameter values determining how simulated human operators behave in each organization in order to optimize the performance of the respective organizations. We show that simulation results generally support the hypothesis with respect to the effect of latency and bandwidth on organizational performance.
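The comparison can be caricatured with a toy simulation: operators commit swarms to one of N sites under dropout, with hierarchies paying an extra latency penalty on the information the leader acts on. The dynamics below are entirely invented for illustration and are far simpler than the hub-based colony simulations the thesis uses.

```python
# Invented toy comparison: flat team vs. hierarchy committing swarms to one
# of N sites under dropout, with latency degrading the leader's information.
import random

def run(org, n_ops=4, n_sites=5, latency=3, dropout=0.2, noise=0.15, trials=2000):
    """Average fraction of the best site's quality achieved per committed swarm."""
    total, count = 0.0, 0
    for _ in range(trials):
        quality = [random.random() for _ in range(n_sites)]
        best = max(quality)
        active = [op for op in range(n_ops) if random.random() > dropout]
        if not active:
            continue
        if org == "team":
            # flat team: each active operator commits its swarm to its own noisy estimate
            picks = []
            for op in active:
                est = [q + random.gauss(0, noise) for q in quality]
                picks.append(max(range(n_sites), key=lambda s: est[s]))
        else:
            # hierarchy: one leader decides for everyone, acting on information
            # further degraded by the extra communication latency
            stale = [q + random.gauss(0, noise * (1 + 0.5 * latency)) for q in quality]
            picks = [max(range(n_sites), key=lambda s: stale[s])] * len(active)
        total += sum(quality[p] / best for p in picks) / len(picks)
        count += 1
    return total / count

for org in ("team", "hierarchy"):
    print(org, round(run(org), 3))
```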
|
186 |
Knock on Wood : Does Material Choice Change the Social Perception of Robots? / Ta i trä : Påverkar val av material den sociala uppfattningen av robotar? Björklund, Linnea. January 2018
This paper aims to understand whether there is a difference in how socially interactive robots are perceived based on the material they are constructed from. Two studies to that end were performed: a pilot in a live setting and a main one online. Participants were asked to rate three versions of the same robot design: one built out of wood, one out of plastic, and one covered in fur. This was then used in two studies to ascertain the participants' perception of competence, warmth, and discomfort and the differences between the three materials. Statistically significant differences were found between the materials regarding the perception of warmth and discomfort. / This thesis examines whether there is a difference in how socially interactive robots are perceived based on the material they are made of. Two studies were conducted to find this out: a pilot study carried out in person, and the main study carried out online. Participants were asked to rate three versions of the same robot design, one built in wood, one in plastic, and one covered in fur. These were then used in two studies to assess participants' perception of the robots' competence, warmth, and discomfort, as well as the differences in these between the three materials. Statistically significant differences were found in the perception of warmth and discomfort.
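A sketch of how such a between-materials comparison might be tested is shown below, on placeholder ratings; the study's actual scales and choice of test may differ.

```python
# Minimal sketch, on placeholder ratings, of testing whether perceived
# warmth differs between the three materials. The real study's scales and
# choice of test may differ.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
warmth = {
    "wood":    rng.normal(4.2, 0.8, 30),   # placeholder 1-5 scale ratings
    "plastic": rng.normal(3.6, 0.8, 30),
    "fur":     rng.normal(4.5, 0.8, 30),
}

stat, p = kruskal(warmth["wood"], warmth["plastic"], warmth["fur"])
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.4f}")
# Repeating the same test for competence and discomfort ratings would give
# the per-dimension comparisons reported in the studies.
```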
|
187 |
Pose Imitation Constraints For Kinematic Structures. Glebys T Gonzalez. 09 February 2023
The usage of robots has increased in different areas of society and human work, including medicine, transportation, education, space exploration, and the service industry. This phenomenon has generated a sudden enthusiasm to develop more intelligent robots that are better equipped to perform tasks as well as humans do. Such jobs require human involvement as operators or teammates, since robots struggle with automation in everyday settings. Soon, the role of humans will go far beyond users or stakeholders and include those responsible for training such robots. A popular form of teaching is to allow robots to mimic human behavior. This method is intuitive and natural and does not require specialized knowledge of robotics. While there are other methods for robots to complete tasks effectively, collaborative tasks require mutual understanding and coordination that is best achieved by mimicking human motion. This mimicking problem has been tackled through skill imitation, which reproduces human-like motion during a task shown by a trainer. Skill imitation builds on faithfully replicating the human pose and requires two steps. In the first step, an expert's demonstration is captured and pre-processed, and motion features are obtained; in the second step, a learning algorithm is used to optimize for the task. The learning algorithms are often paired with traditional control systems to transfer the demonstration to the robot successfully. However, this methodology currently faces a generalization issue, as most solutions are formulated for specific robots or tasks. The lack of generalization presents a problem, especially as the frequency at which robots are replaced and improved in collaborative environments is much higher than in traditional manufacturing. Like humans, we expect robots to have more than one skill and the same skills to be completed by more than one type of robot. Thus, we address this issue by proposing a human motion imitation framework that can be efficiently computed and generalized for different kinematic structures (e.g., different robots).

This framework is developed by training an algorithm to augment collaborative demonstrations, facilitating the generalization to unseen scenarios. Later, we create a model for pose imitation that converts human motion to a flexible constraint space. This space can be directly mapped to different kinematic structures by specifying a correspondence between the main human joints (i.e., shoulder, elbow, wrist) and robot joints. This model permits having an unlimited number of robotic links between two assigned human joints, allowing different robots to mimic the demonstrated task and human pose. Finally, we incorporate the constraint model into a reward that informs a Reinforcement Learning algorithm during optimization. We tested the proposed methodology in different collaborative scenarios. Thereafter, we assessed the task success rate, pose imitation accuracy, the occlusion that the robot produces in the environment, the number of collisions, and finally, the learning efficiency of the algorithm.

The results show that the proposed framework creates effective collaboration in different robots and tasks.
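A minimal sketch of the constraint-space idea described above follows: the demonstrated pose is reduced to unit directions between key human joints, a robot with any number of links in between is scored by how well its corresponding spans align with those directions, and that score is blended into the reinforcement-learning reward. The weighting and the toy geometry are illustrative assumptions.

```python
# Minimal sketch of the idea described above: the demonstrated human pose is
# reduced to direction constraints between key joints (shoulder-elbow,
# elbow-wrist), and a robot with any number of links in between is scored by
# how well its corresponding segments align with those directions. The
# weighting and the task reward are illustrative assumptions.
import numpy as np

def unit(v):
    return v / (np.linalg.norm(v) + 1e-9)

def pose_constraints(shoulder, elbow, wrist):
    """Human pose -> constraint space: unit directions of the two segments."""
    return [unit(elbow - shoulder), unit(wrist - elbow)]

def imitation_reward(robot_joints, assignment, constraints):
    """assignment maps each human segment to a (start, end) pair of robot
    joint indices; any number of robot links may lie between them."""
    score = 0.0
    for (i, j), c in zip(assignment, constraints):
        seg = unit(robot_joints[j] - robot_joints[i])
        score += float(np.dot(seg, c))          # 1 = aligned, -1 = opposite
    return score / len(constraints)

def total_reward(task_reward, robot_joints, assignment, constraints, w=0.5):
    return (1 - w) * task_reward + w * imitation_reward(robot_joints, assignment, constraints)

# Toy example: a 5-joint robot arm imitating a human reach.
human = [np.array([0.0, 0, 1.4]), np.array([0.2, 0, 1.1]), np.array([0.5, 0, 1.0])]
constraints = pose_constraints(*human)
robot = np.array([[0, 0, 1.4], [0.1, 0, 1.3], [0.2, 0, 1.1],
                  [0.35, 0, 1.05], [0.5, 0, 1.0]], float)
assignment = [(0, 2), (2, 4)]                   # shoulder->elbow, elbow->wrist spans
print(round(total_reward(task_reward=0.8, robot_joints=robot,
                         assignment=assignment, constraints=constraints), 3))
```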
|
188 |
Perspective Control: Technology to Solve the Multiple Feeds Problem in Sensor Systems. Morison, Alexander M. 25 October 2010
No description available.
|
189 |
Haptic-Enabled Robotic Arms to Achieve Handshakes in the Metaverse. Mohd Faisal. 26 September 2022
Humans are social by nature, and the physical distancing due to COVID has converted many of our daily interactions into virtual ones. Among the negative consequences of this is the loss of an element essential to humans' well-being: physical touch. With more interactions shifting towards the digital world of the metaverse, we want to provide individuals with the means to include physical touch in their interactions. We explore the potential of Digital Twin technology to help reduce this impact on humans. We provide a definition of the concept of Robo Twin and explain its role in mediating human interactions. In addition, we survey research works related to the Digital Twin's physical representation, with a focus on under-actuated Digital Twin robotic arms. In this thesis, we first provide findings from the literature to support researchers' decisions in the adoption and use of designs and implementations of Digital Twin robotic arms, and to inform future research on current challenges and gaps in existing work.
Subsequently, we design and implement two right-handed under-actuated Digital Twin robotic arms to mediate the physical interaction between two individuals by allowing them to perform a handshake while physically distanced. This experiment served as a proof of concept for our proposed idea of the Robo Twin. The findings are very promising, as our evaluation shows that participants are highly interested in using our system to shake hands with their loved ones when they are physically separated. With this Robo Twin Arm system, we also find, from the quantitative handshake data collected during the experiment, a correlation between handshake characteristics and the gender and/or personality traits of the participants. Moreover, this work is a step towards the design and development of under-actuated Digital Twin robotic arms and ways to enhance the overall user experience with such a system.
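One way such a handshake could be rendered haptically is with a simple spring-damper (impedance) model of the remote partner's grip; the gains, update rate, and exchanged signal below are assumptions for illustration, not the system's actual implementation.

```python
# Minimal sketch of rendering a remote partner's grip locally with a
# spring-damper (impedance) model. Gains, sample rate, and the exchanged
# signal are illustrative assumptions, not the thesis's implementation.
DT = 0.001            # 1 kHz haptic update rate
K, B = 300.0, 2.0     # virtual stiffness [N/m] and damping [N*s/m]

def grip_force(remote_closure, local_closure, local_velocity):
    """Force commanded to the local arm's gripper so the user feels the
    remote partner's hand closing (positive = squeeze)."""
    return K * (remote_closure - local_closure) - B * local_velocity

# Toy loop: the remote hand closes gradually; the local gripper follows.
x, v = 0.0, 0.0       # local closure [m] and its velocity
m = 0.5               # effective gripper mass [kg]
for step in range(2000):
    remote = min(0.03, 0.00003 * step)          # remote closure ramps to 3 cm
    f = grip_force(remote, x, v)
    v += (f / m) * DT                           # integrate simple dynamics
    x += v * DT
print(f"final closure: {x*100:.1f} cm, final force: {f:.2f} N")
```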
|
190 |
Human-Robot Interaction with Pose Estimation and Dual-Arm Manipulation Using Artificial Intelligence. Ren, Hailin. 16 April 2020
This dissertation focuses on applying artificial intelligence techniques to human-robot interaction, which involves human pose estimation and dual-arm robotic manipulation. The motivating application behind this work is autonomous victim extraction in disaster scenarios using a conceptual design of a Semi-Autonomous Victim Extraction Robot (SAVER). SAVER is equipped with an advanced sensing system and two powerful robotic manipulators as well as a head and neck stabilization system to achieve autonomous, safe, and effective victim extraction, thereby reducing the potential risk to field medical providers. This dissertation formulates the autonomous victim extraction process using a dual-arm robotic manipulation system for human-robot interaction. Following the general process of Human-Robot Interaction (HRI), which includes perception, control, and decision-making, this research applies machine learning techniques to human pose estimation, robotic manipulator modeling, and dual-arm robotic manipulation, respectively. In human pose estimation, an efficient parallel ensemble-based neural network is developed to provide real-time human pose estimation on 2D RGB images. A 13-limb, 14-joint skeleton model is used in this perception neural network, and each ensemble of the network is designed for a specific limb detection. The parallel structure offers two main benefits: (1) the parallel ensemble architecture and multiple Graphics Processing Units (GPUs) make distributed computation possible, and (2) each individual ensemble can be deployed independently, making the processing more efficient when only some specific limbs need to be detected for a task. Precise robotic manipulator modeling simplifies controller design and improves trajectory-following performance. Traditional system modeling relies on first principles, simplifying assumptions, and prior knowledge. Any imperfection in the above could lead to an analytical model that is different from the real system. Machine learning techniques have been applied in this field to pursue faster computation and more accurate estimation. However, a large dataset is always needed for these techniques, while obtaining the data from the real system can be costly in terms of both time and maintenance. In this research, a series of different Generative Adversarial Networks (GANs) are proposed to efficiently identify the inverse kinematics and inverse dynamics of robotic manipulators. One four-Degree-of-Freedom (DOF) robotic manipulator and one six-DOF robotic manipulator are used with different dataset sizes to evaluate the performance of the proposed GANs. The general methods can also be adapted to other systems whose datasets are too limited for general machine learning techniques. In dual-arm robotic manipulation, basic behaviors such as reaching, pushing objects, and picking objects up are learned using Reinforcement Learning. A teacher-student advising framework is proposed to learn a single neural network that controls dual-arm robotic manipulators using previous knowledge of controlling a single robotic manipulator. Simulation and experimental results demonstrate the efficiency of the proposed framework compared to learning from scratch. Another concern in robotic manipulation is safety constraints. A variable-reward hierarchical reinforcement learning framework is proposed to address sparse rewards and tasks with constraints.
A task of picking up two objects and placing them at target positions while keeping them at a fixed distance within a threshold is used to evaluate the performance of the proposed method. Comparisons to other state-of-the-art methods are also presented. Finally, all three proposed components are integrated into a single system. Experimental evaluation with a full-size manikin was performed to validate the concept of applying artificial intelligence techniques to autonomous victim extraction using a dual-arm robotic manipulation system. / Doctor of Philosophy / Using mobile robots for autonomous victim extraction in disaster scenarios reduces the potential risk to field medical providers. This dissertation focuses on applying artificial intelligence techniques to this human-robot interaction task, involving pose estimation and dual-arm manipulation for victim extraction. This work is based on a design of a Semi-Autonomous Victim Extraction Robot (SAVER). SAVER is equipped with an advanced sensing system, two powerful robotic manipulators, and a head and neck stabilization system attached to an embedded declining stretcher to achieve autonomous, safe, and effective victim extraction. Therefore, the overall research in this dissertation addresses: human pose estimation, robotic manipulator modeling, and dual-arm robotic manipulation for human pose adjustment. To accurately estimate the human pose for real-time applications, the dissertation proposes a neural network that can take advantage of multiple Graphics Processing Units (GPUs). Considering the cost of data collection, the dissertation proposes novel machine learning techniques to obtain the inverse dynamic model and the inverse kinematic model of the robotic manipulators using limited collected data. Applying safety constraints is another requirement when robots interact with humans. This dissertation proposes reinforcement learning techniques to efficiently train a dual-arm manipulation system not only to perform basic behaviors, such as reaching, pushing objects, and picking up and placing objects, but also to take safety constraints into consideration when performing tasks. Finally, the three components mentioned above are integrated together as a complete system. Experimental validation and results are discussed at the end of this dissertation.
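The deployment idea behind the parallel per-limb ensembles, one small network per limb, run only for the limbs a task needs and placed across available GPUs, can be sketched as follows; the tiny placeholder architectures are not the dissertation's networks.

```python
# Minimal sketch of the deployment idea behind the per-limb ensembles: one
# small network per limb, run only for the limbs a task needs, each placed
# on its own device when several GPUs are available. The tiny architectures
# here are placeholders, not the dissertation's networks.
import torch
import torch.nn as nn

LIMBS = ["left_forearm", "right_forearm", "left_shin", "right_shin"]  # subset of the 13 limbs

def make_limb_detector():
    # placeholder head: RGB image -> a coarse heatmap for one limb
    return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))

n_gpus = torch.cuda.device_count()
devices = [torch.device(f"cuda:{i % n_gpus}") if n_gpus else torch.device("cpu")
           for i in range(len(LIMBS))]
ensembles = {limb: make_limb_detector().to(dev).eval()
             for limb, dev in zip(LIMBS, devices)}

@torch.no_grad()
def detect(image, wanted_limbs):
    """Run only the ensembles for the limbs the task needs."""
    out = {}
    for limb in wanted_limbs:
        dev = next(ensembles[limb].parameters()).device
        out[limb] = ensembles[limb](image.to(dev)).cpu()
    return out

frame = torch.rand(1, 3, 128, 128)               # stand-in for an RGB frame
heatmaps = detect(frame, ["left_forearm", "right_forearm"])
print({k: tuple(v.shape) for k, v in heatmaps.items()})
```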
|