291
Multimodal Learning Companions - Yun, Hae Seon, 20 December 2024 (has links)
Technologies such as sensors can help in understanding learners' progress and states (e.g., boredom, giving-up behaviour), and these detected states can be used to build a support system that acts as a companion. To this end, this dissertation investigates three research questions: 1) How can multimodal sensor data, such as physiological and embedded sensor data, be used to design learning companions that provide learners with an awareness of their states? 2) How can learning companions be designed for different modality interfaces, such as screen-based agents and embodied robots, to investigate various means of providing effective advice to learners? 3) How can non-technical users be supported in designing and using multimodal learning companions in their various use cases? To answer these research questions, a design-based research (DBR) methodology was utilized, considering both theory and practice. The derived design considerations were employed to guide the design of the learning companions as well as of the platform for designing multimodal learning companions. The findings from this dissertation reveal an association between changes in physiological sensor values and the arousal of emotion, which is also endorsed by prior studies. It was also found that using sensor devices such as mobile and wearable devices, together with Facial Expression Recognition (FER), can add to the methods for detecting learners' states. Furthermore, designing a learning companion requires consideration of the different modalities of the involved technology, in addition to appropriate design of the application scenarios. It is also necessary to integrate the stakeholders (e.g., teachers) into the design process while also considering the data privacy of the target users (e.g., students). The dissertation employs DBR to investigate real-life educational issues, considering both theories and practical constraints. Even though the studies conducted are limited, involving only small sample sizes that lack generalizability, some authentic educational needs were derived, and the corresponding solutions were devised and tested in this dissertation.
292
Timing multimodal turn-taking in human-robot cooperative activity - Chao, Crystal, 27 May 2016 (has links)
Turn-taking is a fundamental process that governs social interaction. When humans interact, they naturally take initiative and relinquish control to each other using verbal and nonverbal behavior in a coordinated manner. In contrast, existing approaches for controlling a robot's social behavior do not explicitly model turn-taking, resulting in interaction breakdowns that confuse or frustrate the human and detract from the dyad's cooperative goals. They also lack generality, relying on scripted behavior control that must be designed for each new domain. This thesis seeks to enable robots to cooperate fluently with humans by automatically controlling the timing of multimodal turn-taking. Based on our empirical studies of interaction phenomena, we develop a computational turn-taking model that accounts for multimodal information flow and resource usage in interaction. This model is implemented within a novel behavior generation architecture called CADENCE, the Control Architecture for the Dynamics of Embodied Natural Coordination and Engagement, that controls a robot's speech, gesture, gaze, and manipulation. CADENCE controls turn-taking using a timed Petri net (TPN) representation that integrates resource exchange, interruptible modality execution, and modeling of the human user. We demonstrate progressive developments of CADENCE through multiple domains of autonomous interaction encompassing situated dialogue and collaborative manipulation. We also iteratively evaluate improvements in the system using quantitative metrics of task success, fluency, and balance of control.
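The abstract above does not reproduce CADENCE's actual net, so the following is only a toy illustration of the timed-Petri-net idea it describes: the speaking floor is treated as a shared resource token, and timed transitions pass it between interactants. All place names, transition names, and delays are invented for illustration.

```python
import time

# Minimal timed Petri net: places hold tokens, transitions consume and
# produce them after a delay. Names and delays are illustrative only.
class TimedPetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)              # place -> token count
        self.transitions = {}                     # name -> (inputs, outputs, delay)

    def add_transition(self, name, inputs, outputs, delay=0.0):
        self.transitions[name] = (inputs, outputs, delay)

    def enabled(self, name):
        inputs, _, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs, delay = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        time.sleep(delay)                         # stand-in for timed execution
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# The floor is a single token; taking it blocks the other party's turn.
net = TimedPetriNet({"floor_free": 1, "robot_waiting": 1, "human_waiting": 1})
net.add_transition("robot_takes_floor", ["floor_free", "robot_waiting"],
                   ["robot_speaking"], delay=0.2)
net.add_transition("robot_yields_floor", ["robot_speaking"],
                   ["floor_free", "robot_waiting"], delay=1.0)

net.fire("robot_takes_floor")
print(net.marking)   # the robot now holds the floor token
```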
293
Adaptive Behavioural Strategies in Human-Robot Interaction in the Context of Medical Therapy - Tasevski, Jovica, 10 September 2018 (has links)
This doctoral dissertation considers selected aspects of the research problem of specification, design, and implementation of conversational robots as assistive tools in therapy for children with cerebral palsy. The dissertation makes the following contributions: (i) It proposes a general architecture for conversational agents that allows flexible integration of software modules implementing different functionalities. (ii) It introduces and implements an adaptive behavioural strategy that the robot applies in interaction with children. (iii) The proposed dialogue strategy is applied and evaluated in interaction between children and the robot MARKO in realistic therapeutic settings. (iv) Finally, the dissertation proposes an approach to automatic detection of critical changes in human-machine interaction, based on the notion of normalized interactional entropy.
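The abstract does not spell out the entropy formulation; a standard choice is to divide the Shannon entropy of a windowed dialogue-event distribution by log n so the score lies in [0, 1], and to flag abrupt shifts of that score as critical changes. A sketch under that assumption:

```python
import math
from collections import Counter

def normalized_entropy(events):
    """Shannon entropy of the event distribution, scaled to [0, 1] by log2(n)."""
    counts = Counter(events)
    n = len(counts)
    if n <= 1:
        return 0.0                       # a single event type carries no entropy
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(n)

# Balanced turn exchange -> high normalized entropy; a stalled, one-sided
# dialogue -> low. A sharp change between windows could be flagged as critical.
print(normalized_entropy(["question", "answer", "question", "answer"]))  # 1.0
print(normalized_entropy(["silence", "silence", "silence", "silence"]))  # 0.0
```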
294
Brain-computer interfaces for inducing brain plasticity and motor learning: implications for brain-injury rehabilitation - Babalola, Karolyn Olatubosun, 08 July 2011 (has links)
The goal of this investigation was to explore the efficacy of implementing a rehabilitation robot controlled by a noninvasive brain-computer interface (BCI) to influence brain plasticity and facilitate motor learning. The motivation of this project stemmed from the need to address the population of stroke survivors who have few or no options for therapy.
A stroke occurs every 40 seconds in the United States, and stroke is the leading cause of long-term disability [1-3]. In a country where the elderly population is growing at an astounding rate, one in six persons above the age of 55 is at risk of having a stroke. Internationally, the rates of strokes and stroke-induced disabilities are comparable to those of the United States [1, 4-6]. Approximately half of all stroke survivors suffer from immediate unilateral paralysis or weakness, 30-60% of whom never regain function [1, 6-9]. Many individuals who survive stroke will be forced to seek institutional care or long-term assistance.
Clinicians have typically implemented stroke rehabilitative treatment using active training techniques such as constraint-induced movement therapy (CIMT) and robotic therapy [10-12]. Such techniques restore motor activity by forcing the movement of weakened limbs. This active engagement of the weakened limb stimulates neural pathways and activates the motor cortex, thus inducing brain plasticity and motor learning. Several studies have demonstrated that active training does in fact affect the way the brain restores itself and leads to faster rehabilitation [10, 13-15]. In addition, studies involving mental practice, another form of rehabilitation, have shown that mental imagery directly stimulates the brain but is not effective unless implemented as a supplement to active training [16, 17]. Only stroke survivors retaining residual motor ability are able to undergo active rehabilitative training; the current selection of therapies has overlooked the significant population of stroke survivors suffering from severe control loss or complete paralysis [6, 10].
A BCI is a system or device that detects minute changes in brain signals to facilitate communication or control. In this investigation, the BCI was implemented through an electroencephalograph (EEG) device. EEG devices detect electrical brain signals, transmitted through the scalp, that correspond with imagined motor activity. Within the BCI, a linear transformation algorithm converted EEG spectral features into control commands for an upper-limb rehabilitative robot, thus implementing a closed-loop feedback-control training system. The concept of the BCI-robot system implemented in this investigation may provide an alternative to current therapies by demonstrating the results of bypassing motor activity and using brain signals to facilitate robotic therapy.
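As a purely illustrative sketch of that pipeline (not code from the dissertation): band power is extracted from EEG in the mu and beta sub-bands of the 5-35 Hz range the study used, and a linear map turns the features into a robot velocity command. The sampling rate, band edges, and weights here are assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 256                                    # sampling rate in Hz (assumed)
eeg = np.random.randn(8, FS * 2)            # 8 channels, 2 s of synthetic EEG

def bandpower(signals, fs, lo, hi):
    """Mean spectral power per channel within [lo, hi] Hz."""
    f, psd = welch(signals, fs=fs, nperseg=fs)
    band = (f >= lo) & (f <= hi)
    return psd[:, band].mean(axis=1)

# Mu (8-12 Hz) and beta (18-26 Hz) power attenuate during imagined movement
# (event-related desynchronization), making them usable control features.
features = np.concatenate([bandpower(eeg, FS, 8, 12), bandpower(eeg, FS, 18, 26)])

w = np.full(features.size, -0.01)           # weights; fit per user in practice
b = 0.5                                     # bias term (assumed)
velocity_command = float(w @ features + b)  # command sent to the rehab robot
print(velocity_command)
```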
In this study, 24 able-bodied volunteers were divided into two groups: one group trained to use sensorimotor rhythms (SMRs), produced by imagining motor activity, to control the movement of a robot, while the other group performed the 'guided-imagery' task of watching the robot move without controlling it. This investigation looked for contrasts between the two groups showing that the training involved in controlling the BCI-robot system had an effect on brain plasticity and motor learning.
To analyze brain plasticity and motor learning, EEG data corresponding to imagined arm movement and motor learning were acquired before, during, and after training. Features extracted from the EEG data consisted of frequencies in the 5-35 Hz range, which produced amplitude fluctuations that were measurably significant during reaching. Motor learning data consisted of arm displacement measures (error) produced during a motor adaptation task performed daily by all subjects.
The results of the brain plasticity analysis showed persistent reductions in beta activity for subjects in the BCI group. The analysis also showed that subjects in the non-BCI group had significant reductions in mu activity; however, these results were likely due to the different EEG caps used in each stage of the study. These results were promising but require further investigation.
The motor learning data showed that the BCI group outperformed the non-BCI group in all measures of motor learning. These findings were significant because this was the first time a BCI had been applied to a motor learning protocol, and they suggested that BCI training influenced the speed at which subjects adapted to a motor learning task. Additional findings suggested that BCI subjects in the 40-and-over age group had greater decreases in error after the learning phase of motor assessment. These findings suggest that BCI could have positive long-term effects on individuals who are more likely to suffer a stroke and could possibly benefit chronic stroke patients.
In addition to exploring the effects of BCI training on brain plasticity and motor learning, this investigation sought to determine whether the EEG features produced during guided imagery could differentiate between reaching directions. While the analysis presented in this project produced classification accuracies no greater than ~77%, it formed the basis of future studies that would incorporate different pattern recognition techniques.
The results of this study show the potential for developing new rehabilitation therapies and motor learning protocols that incorporate BCI.
295
Social Dimensions of Robotic versus Virtual Embodiment, Presence and Influence - Thellman, Sam, January 2016 (has links)
Robots and virtual agents are growing rapidly in behavioural sophistication and complexity. They are becoming better learners and teachers, cooperators and communicators, workers and companions. These artefacts – whose behaviours are neither always readily understood by human intuition nor comprehensibly explained in terms of mechanism – will have to interact socially. Moving beyond artificial rational systems to artificial social systems means engaging with fundamental questions about agenthood, sociality, intelligence, and the relationship between mind and body. It also means revising our theories about these things while continuously assessing the social sufficiency of existing artificial social agents. This thesis presents an empirical study investigating the social influence of physical versus virtual embodiment on people's decisions in the context of a bargaining task. The results indicate that agent embodiment affected neither the social influence of the agent nor the extent to which it was perceived as a social actor. However, participants' perception of the agent as a social actor did influence their decisions. This suggests that experimental results from studies comparing different robot embodiments should not be over-generalised beyond the particular task domain in which the studied interactions took place.
296
User Interface for ARTable and Microsoft HoloLens - Bambušek, Daniel, January 2018 (has links)
This thesis focuses on the usability of the Microsoft HoloLens augmented-reality headset in a prototype workspace for human-robot collaboration, the "ARTable". The use of the headset is demonstrated by a purpose-built user interface that helps users understand the ARTable system better and faster. It makes it possible to visualize learned programs spatially without having to run the robot itself. The user is guided through each program by 3D animation and by the device's voice, which helps them form a clear idea of what would happen if the program were run directly on the robot. The implemented solution also makes it possible to guide the user interactively through the entire process of programming the robot. Among other things, the headset can display valuable spatial information, such as the robot's vision, by highlighting the objects that the robot has detected.
297
Reimagining Human-Machine Interactions through Trust-Based Feedback - Kumar Akash (8862785), 17 June 2020 (has links)
Intelligent machines, and more broadly, intelligent systems, are becoming increasingly common in the everyday lives of humans. Nonetheless, despite significant advancements in automation, human supervision and intervention are still essential in almost all sectors, ranging from manufacturing and transportation to disaster management and healthcare. These intelligent machines interact and collaborate with humans in a way that demands a greater level of trust between human and machine. While a lack of trust can lead to a human's disuse of automation, over-trust can result in a human trusting a faulty autonomous system, which could have negative consequences for the human. Therefore, human trust should be calibrated to optimize these human-machine interactions. This calibration can be achieved by designing human-aware automation that can infer human behavior and respond accordingly in real time.

In this dissertation, I present a probabilistic framework to model and calibrate a human's trust and workload dynamics during his/her interaction with an intelligent decision-aid system. More specifically, I develop multiple quantitative models of human trust, ranging from a classical state-space model to a classification model based on machine-learning techniques. Both models are parameterized using data collected through human-subject experiments. Thereafter, I present a probabilistic dynamic model that captures the dynamics of human trust along with human workload. This model is used to synthesize optimal control policies that improve context-specific performance objectives by varying automation transparency based on estimates of the human's state. I also analyze the coupled interactions between human trust and workload to strengthen the model framework. Finally, I validate the optimal control policies using closed-loop human-subject experiments. The proposed framework provides a foundation toward widespread design and implementation of real-time adaptive automation, based on human states, for use in human-machine interactions.
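The dissertation's identified model parameters are not given in this abstract, so purely as an illustration of the "classical state-space model" idea: a one-dimensional linear trust state driven by the machine's recent performance, corrected by noisy trust measurements with a Kalman filter. All numbers below are invented.

```python
import numpy as np

A = np.array([[0.9]])    # trust carries over between interactions
B = np.array([[0.1]])    # effect of the machine's last performance
C = np.array([[1.0]])    # trust observed indirectly, e.g. via self-report
Q, R = 0.01, 0.05        # process / measurement noise variances

x, P = np.array([[0.5]]), np.array([[1.0]])   # initial trust estimate

def kalman_step(x, P, u, y):
    # Predict: trust evolves with the automation's recent performance u.
    x = A @ x + B * u
    P = A @ P @ A.T + Q
    # Update: correct the prediction with the observed trust proxy y.
    K = P @ C.T / (C @ P @ C.T + R)
    x = x + K * (y - (C @ x).item())
    P = (np.eye(1) - K @ C) @ P
    return x, P

# Two successes followed by a failure of the automation:
for performance, report in [(1.0, 0.7), (1.0, 0.8), (0.0, 0.4)]:
    x, P = kalman_step(x, P, performance, report)
    print(f"estimated trust: {x.item():.2f}")
```

A controller can then act on the estimate, for example raising automation transparency when estimated trust drifts away from the level the task warrants.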
298
Learning Continuous Human-Robot Interactions from Human-Human Demonstrations - Vogt, David, 02 March 2018 (has links)
This dissertation develops a data-driven machine-learning approach for human-robot interaction based on human-human demonstrations. During a training phase, the movements of two interaction partners are recorded via motion capture and learned in a two-person interaction model. At runtime, the model is used both to recognize the movements of the human interaction partner and to generate adapted robot movements. The capability of the approach is evaluated in three complex applications, each requiring continuous motion coordination between human and robot. The result of the dissertation is a learning method that enables intuitive, goal-directed, and safe collaboration with robots.
299
Towards Socially Intelligent Robots in Human Centered Environment - Pandey, Amit Kumar, 20 June 2012 (has links)
Soon, robots will no longer work in isolation but alongside us. They are gradually entering our everyday lives to cooperate, assist, help, serve, learn, teach, and even play with humans. In this context, it is important that the human not be the one who must compromise because of the robot's presence. To achieve this, beyond basic safety requirements, robots should take into account various factors, ranging from human effort, comfort, preferences, and desires to social norms, in their planning and decision-making strategies. They should behave, navigate, manipulate, interact, and learn in a way that is expected, accepted, and understandable by us, the humans. This thesis begins by exploring and identifying the basic yet key ingredients of such socio-cognitive intelligence. It then develops generic frameworks and concepts from an HRI perspective to address these additional challenges and to elevate the robot's capabilities towards social intelligence.
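To illustrate how such human-centric factors can enter a planner's objective, here is a hedged sketch; the weights and cost terms are invented for illustration and are not the thesis's frameworks:

```python
import math

def path_cost(path, human_xy, human_heading, w=(0.2, 0.5, 0.3)):
    """Score a candidate robot path by effort, human comfort, and visibility."""
    effort = comfort = visibility = 0.0
    for x, y in path:
        effort += 1.0                                  # unit effort per step
        d = math.hypot(x - human_xy[0], y - human_xy[1])
        comfort += 1.0 / max(d, 0.1)                   # penalize close approaches
        bearing = math.atan2(y - human_xy[1], x - human_xy[0])
        rel = (bearing - human_heading + math.pi) % (2 * math.pi) - math.pi
        if abs(rel) > math.pi / 2:                     # step is behind the human
            visibility += 1.0
    return w[0] * effort + w[1] * comfort + w[2] * visibility

# Choose the candidate that stays visible and keeps a comfortable distance.
candidates = [[(0, 1), (1, 1), (2, 1)], [(0, 2), (1, 2), (2, 2)]]
best = min(candidates, key=lambda p: path_cost(p, human_xy=(1, 0), human_heading=1.57))
print(best)
```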
300
Design, analysis, and simulation of a humanoid robotic arm applied to catching - Yesmunt, Garrett Scot, January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / There have been many endeavors to design humanoid robots that have human characteristics such as dexterity, autonomy, and intelligence. Humanoid robots are intended to cooperate with humans and perform useful work that humans can perform. The main advantage of humanoid robots over other machines is that they are flexible and multi-purpose. In this thesis, a human-like robotic arm is designed and used in a task typically performed by humans, namely, catching a ball. The robotic arm was designed to closely resemble a human arm, based on anthropometric studies. Rigid multibody dynamics software was used to create a virtual model of the robotic arm, perform experiments, and collect data. The inverse kinematics of the robotic arm was solved using a Newton-Raphson numerical method with a numerically calculated Jacobian. The system was validated by testing its ability to find a kinematic solution for the catch position and successfully catch the ball within the robot's workspace. The tests were conducted by throwing the ball such that its path intersected different target points within the robot's workspace. The method used for determining the catch location consists of finding the intersection of the ball's trajectory with a virtual catch plane. The hand orientation was set so that the normal vector to the palm of the hand was parallel to the trajectory of the ball at the intersection point, while a vector perpendicular to this normal vector remained in a constant orientation during the catch.
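A minimal sketch of that Newton-Raphson scheme with a numerically estimated Jacobian, shown here on a simplified planar two-link arm rather than the thesis's anthropomorphic arm; the link lengths, tolerances, and target point are assumed:

```python
import numpy as np

L_UPPER, L_FORE = 0.3, 0.25          # link lengths in meters (assumed)

def fk(q):
    """Forward kinematics: joint angles -> planar end-effector position."""
    return np.array([L_UPPER * np.cos(q[0]) + L_FORE * np.cos(q[0] + q[1]),
                     L_UPPER * np.sin(q[0]) + L_FORE * np.sin(q[0] + q[1])])

def numerical_jacobian(q, eps=1e-6):
    """Central-difference estimate of the 2x2 kinematic Jacobian."""
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

def ik_newton(target, q=np.zeros(2), tol=1e-6, max_iter=100):
    """Newton-Raphson iteration toward a joint solution for the target point."""
    for _ in range(max_iter):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        # Pseudo-inverse keeps the step well-defined near singular poses.
        q = q + np.linalg.pinv(numerical_jacobian(q)) @ err
    return q

q_catch = ik_newton(np.array([0.35, 0.20]))    # catch point on the virtual plane
print(q_catch, fk(q_catch))
```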
It was found that this catch orientation approach was reliable within a 0.35 x 0.4 meter window in the robot's workspace. For all tests within this window, the robotic arm successfully caught and dropped the ball in a bin. Also, for the tests within this window, the maximum position and orientation (Euler angle) tracking errors were 13.6 mm and 4.3 degrees, respectively. The average position and orientation tracking errors were 3.5 mm and 0.3 degrees, respectively. The work presented in this study can be applied to humanoid robots in industrial assembly lines and hazardous environment recovery tasks, amongst other applications.