21 |
Teaching mobile robots to use spatial words. Dobnik, Simon. January 2009.
The meaning of spatial words can only be evaluated by establishing a reference to the properties of the environment in which the word is used. For example, in order to evaluate what is to the left of something, or how fast is fast in a given context, we need to evaluate properties such as the position of objects in the scene, their typical function and behaviour, the size of the scene, and the perspective from which the scene is viewed. Rather than encoding the semantic rules that define spatial expressions by hand, we developed a system in which such rules are learned from descriptions produced by human commentators and from the information that a mobile robot has about itself and its environment. We concentrate on two scenarios and the words used in them. In the first scenario, the robot is moving in an enclosed space and the descriptions refer to its motion ('You're going forward slowly' and 'Now you're turning right'). In the second scenario, the robot is static in an enclosed space that contains full-size objects such as desks, chairs and walls. Here we are primarily interested in prepositional phrases that describe relationships between objects ('The chair is to the left of you' and 'The table is further away than the chair'). The perspective can be varied by changing the location of the robot. Following the learning stage, which is performed offline, the system is able to use this domain-specific knowledge to generate new descriptions in new environments, or to 'understand' these expressions by providing feedback to the user, either linguistically or by performing motion actions. If a robot can be taught to 'understand' and use such expressions in a manner that seems natural to a human observer, then we can be reasonably sure that we have captured at least something important about their semantics. Two kinds of evaluation were performed. First, the performance of the machine-learning classifiers was evaluated on independent test sets using 10-fold cross-validation.
Classifier performance (accuracy, the kappa coefficient (κ), and ROC and precision-recall graphs) is compared across (a) the machine-learning algorithms used to build the classifiers, (b) the conditions under which the learning datasets were created, and (c) the method by which the data was structured into examples or instances for learning. Second, with some additional knowledge required to build a simple dialogue interface, the classifiers were tested live against human evaluators in a new environment. The results show that the system is able to learn the semantics of spatial expressions from low-level robotic data. For example, a group of human evaluators judged that the live system generated a correct description of motion in 93.47% of cases (averaged over four categories) and a correct description of an object relation in 59.28% of cases.
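The evaluation metrics named above (the kappa coefficient and k-fold cross-validation) are standard; as an illustration only, not the thesis's actual evaluation code and with the data layout assumed, they can be sketched as:

```python
from collections import Counter

def cohens_kappa(gold, pred):
    """Agreement between gold labels and classifier predictions,
    corrected for chance agreement (undefined if chance agreement is 1)."""
    n = len(gold)
    observed = sum(g == p for g, p in zip(gold, pred)) / n
    gold_freq, pred_freq = Counter(gold), Counter(pred)
    # Chance agreement: both sides pick the same label independently.
    expected = sum(gold_freq[l] * pred_freq[l] for l in gold_freq) / (n * n)
    return (observed - expected) / (1 - expected)

def ten_fold_indices(n_examples, k=10):
    """Yield (train, held_out) index lists for k-fold cross-validation."""
    folds = [list(range(i, n_examples, k)) for i in range(k)]
    for i, held_out in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, held_out
```

Perfect agreement gives κ = 1, while agreement no better than chance gives κ = 0, which is why the thesis reports κ alongside raw accuracy.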

Reading with Robots: A Platform to Promote Cognitive Exercise through Identification and Discussion of Creative Metaphor in Books. Parde, Natalie. 08 1900.
Maintaining cognitive health is often a pressing concern for aging adults, and given the world's shifting age demographics, it is impractical to assume that older adults will be able to rely on individualized human support for doing so. Recently, interest has turned toward technology as an alternative. Companion robots offer an attractive vehicle for facilitating cognitive exercise, but the language technologies guiding their interactions are still nascent; in elder-focused human-robot systems proposed to date, interactions have been limited to motion or buttons and canned speech. The incapacity of these systems to autonomously participate in conversational discourse limits their ability to engage users at a cognitively meaningful level.
I addressed this limitation by developing a platform for human-robot book discussions, designed to promote cognitive exercise by encouraging users to consider the authors' underlying intentions in employing creative metaphors. The choice of book discussions as the backdrop for these conversations has an empirical basis in neuroscience and social science research finding that reading often, even in late adulthood, is correlated with a decreased likelihood of exhibiting symptoms of cognitive decline. The more targeted focus on novel metaphors within those conversations stems from prior work showing that processing novel metaphors is a cognitively challenging task for young adults, and even more so for older adults with and without dementia.
A central contribution arising from the work was the creation of the first computational method for modeling metaphor novelty in word pairs. I show that the method outperforms baseline strategies as well as a standard metaphor detection approach, and additionally discover that incorporating a sentence-based classifier as a preliminary filtering step when applying the model to new books results in a better final set of scored word pairs. I trained and evaluated my methods using new, large corpora from two sources, and I release those corpora to the research community. In developing the corpora, an additional contribution was the discovery that training a supervised regression model to automatically aggregate the crowdsourced annotations outperformed existing label-aggregation strategies. Finally, I show that automatically generated questions adhering to the Questioning the Author strategy are comparable to human-generated questions in terms of naturalness, sensibility, and question depth, and score slightly higher in terms of clarity. I close by presenting findings from a usability evaluation in which users engaged in thirty-minute book discussions with a robot using the platform, showing that users find the platform likeable and engaging.
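The label-aggregation finding, that a trained regressor over an annotation set can beat plain averaging of crowd ratings, can be illustrated with a toy sketch. The synthetic data and feature choices here are assumptions for illustration, not the dissertation's setup:

```python
import numpy as np

# Toy setup (assumed): 200 word pairs, each rated for metaphor novelty
# by 5 crowd workers on a 0-3 scale; 'reference' is a trusted score.
rng = np.random.default_rng(0)
reference = rng.uniform(0, 3, size=200)
crowd = reference[:, None] + rng.normal(0, 0.7, size=(200, 5))

# Summarize each item's annotation set as features for a regressor.
X = np.column_stack([
    crowd.mean(axis=1),        # the usual averaging baseline
    np.median(crowd, axis=1),  # robust to outlier annotators
    crowd.std(axis=1),         # annotator disagreement
    crowd.min(axis=1),
    crowd.max(axis=1),
    np.ones(len(crowd)),       # intercept
])

# Fit a linear aggregator by least squares and compare to plain averaging.
w, *_ = np.linalg.lstsq(X, reference, rcond=None)
mse_model = ((X @ w - reference) ** 2).mean()
mse_mean = ((crowd.mean(axis=1) - reference) ** 2).mean()
# In-sample, the fitted aggregator's error cannot exceed the mean baseline's,
# because the plain mean is one point in the model's hypothesis space.
```

The dissertation's aggregator is a supervised regression model in this spirit, though its features and training data are its own.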

Affective Workload Allocation System for Multi-Human Multi-Robot Teams. Wonse Jo. 17 May 2024.
<p>Human multi-robot systems constitute a relatively new area of research that focuses on the interaction and collaboration between humans and multiple robots. Well-designed systems can enable a team of humans and robots to work together effectively on complex and sophisticated tasks such as exploration, monitoring, and search-and-rescue operations. This dissertation introduces an affective workload allocation system capable of adaptively allocating workload in real time while considering the conditions and work performance of human operators in multi-human multi-robot teams. The proposed system is composed of three main parts, illustrated through a surveillance scenario involving multiple human operators and a multi-robot system. The first part is a framework for an adaptive multi-human multi-robot system that allows real-time measurement and communication between heterogeneous sensors and multi-robot systems. The second part is an algorithm that monitors operators' affective states in real time, using machine learning techniques to estimate affective state from multimodal data consisting of physiological and behavioral signals. The third part is a deep reinforcement learning-based workload allocation algorithm. For the first part, we developed a robot operating system (ROS)-based affective monitoring framework that enables communication among multiple wearable biosensors, behavioral monitoring devices, and multi-robot systems using the real-time operating system feature of ROS. We validated the sub-interfaces of the affective monitoring framework by connecting them to a robot simulation and by using the framework to create a dataset. The dataset included various visual and physiological data categorized by cognitive load level. The target cognitive load was induced by a closed-circuit television (CCTV) monitoring task in the surveillance scenario with multi-robot systems.
Furthermore, for the second part of the system, we developed a deep learning-based affective prediction algorithm that estimates cognitive states from the physiological and behavioral data captured by the wearable biosensors and behavior-monitoring devices. For the third part, we developed a deep reinforcement learning-based workload allocation algorithm that allocates optimal workloads based on a human operator's performance. The algorithm takes an operator's cognitive load, measured both objectively and subjectively, as input, and uses the operator task-performance model we developed from the empirical findings of extensive user experiments to allocate optimal workloads to human operators. We validated the proposed system through a within-subjects user study on a generalized surveillance scenario involving multiple humans and multiple robots in a team. The multi-human multi-robot surveillance environment included the affective monitoring framework, to read sensor data, and the affective prediction algorithm, to predict human cognitive load in real time. We investigated optimal methods for affective workload allocation by comparing the proposed approach against the other allocation strategies used in the user experiments, and demonstrated the effectiveness and performance of the proposed system. Moreover, we found that both subjective and objective measurement of an operator's cognitive load, and a process for seeking consent to workload transitions, must be included in the workload allocation system to improve the team performance of multi-human multi-robot teams.</p>
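As a rough illustration of the allocation idea, here is a hand-written greedy stand-in for the dissertation's deep-RL policy, with a consent step for workload transitions. All function names, load values, and the cost model are assumptions for the sketch:

```python
def allocate_feeds(n_feeds, cognitive_load):
    """Greedy workload balancer (illustrative stand-in for the RL policy):
    assign each robot feed to the operator whose projected load is lowest.
    cognitive_load: dict operator -> estimated load in [0, 1], as would
    come from the affective prediction step."""
    projected = dict(cognitive_load)
    assignment = {op: [] for op in cognitive_load}
    for feed in range(n_feeds):
        op = min(projected, key=projected.get)
        assignment[op].append(feed)
        # Assumed cost model: an extra feed weighs more on an already
        # loaded operator.
        projected[op] += 0.1 * (1 + cognitive_load[op])
    return assignment

def confirm_transition(old, new, consent):
    """The dissertation found workload transitions should be consented to;
    keep the old assignment for any operator who declines."""
    return {op: new[op] if consent.get(op, False) else old[op] for op in old}
```

The actual system learns this mapping with deep reinforcement learning from cognitive-load inputs and a task-performance model; the greedy rule above only shows the shape of the decision being learned.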

Wie kommt die Robotik zum Sozialen? Epistemische Praktiken der Sozialrobotik [How does robotics arrive at the social? Epistemic practices of social robotics]. Bischof, Andreas. 15 July 2016.
In numerous research projects, substantial financial and human resources are devoted to getting robots out of the factory halls and into everyday settings such as hospitals, kindergartens, and private homes. Their designers face a non-trivial challenge: they must translate the ambivalences and contingencies of everyday interaction into the discrete language of machines. How they meet this challenge, which patterns and solutions they draw on, and which implications for the use of social robots are laid down in the process are the subject of this book. In search of an answer to what makes robots social, Andreas Bischof visited and ethnographically studied research laboratories and conferences in Europe and North America. Key results of the study include a typology of research goals in social robotics, an epistemic genealogy of the idea of the robot in everyday worlds, a reconstruction of how social-robotics development refers to 'real' everyday worlds, and an analysis of three genres of epistemic practices that engineers employ to make robots social.
Contents:
INTRODUCTION
1. WHAT IS SOCIAL ROBOTICS?
1.1 Making robots & robotics work
1.2 Three problem dimensions of social robotics
1.3 The state of research in social robotics
1.4 Problem statement: social robotics as a "wicked problem"
2. RESEARCHING, TECHNIZING, AND DESIGNING
2.1 Science as (social) practice
2.2 Technization and complexity reduction in technology
2.3 Design, technology, use: technology between the contexts of production and effect
2.4 Social robotics as problem-solving action
3. METHODOLOGY AND METHODS OF THE STUDY
3.1 Grounded theory as research style
3.2 Ethnography and narrative expert interviews
3.3 Analysis methods and generalization
3.4 Summary
4. THE ROBOT AS A UNIVERSAL TOOL
4.1 Robots as fictional apparatuses
4.2 Robotics as a promise of solutions
4.3 Computer science between science and design
4.4 Conclusion: the legacy of the universal tool
5. RESEARCH AND DEVELOPMENT GOALS OF SOCIAL ROBOTICS
5.1 Conditions of project-based research
5.2 Dimensions and types of social-robotics goals
5.3 Describing the types via the distribution of cases
5.4 Co-construction of the application in case examples
5.5 Conclusion: types of sociality in development goals
6. EPISTEMIC PRACTICES AND INSTRUMENTS OF SOCIAL ROBOTICS
6.1 Practices of laboratizing the social
6.2 Everyday and implicit heuristics
6.3 Staging practices
6.4 Conclusion: interplays of generating and observing
7. CONCLUSION
7.1 The phenomenon structure of social robotics
7.2 Development as a complexity pendulum
7.3 A methodological proposal for the development process

Bi-Directional Coaching through Sparse Human-Robot Interactions. Mythra Varun Balakuntala Srinivasa Mur. 15 June 2023.
<p>Robots have become increasingly common in various sectors, such as manufacturing, healthcare, and service industries. With the growing demand for automation and the expectation of interactive and assistive capabilities, robots must learn to adapt to unpredictable environments the way humans can. This necessitates learning methods that can effectively enable robots to collaborate with humans, learn from them, and provide guidance. Human experts commonly teach their collaborators to perform tasks via a few demonstrations, often followed by episodes of coaching that refine the trainee's performance during practice. Adopting a similar interaction-driven approach to teaching robots is highly intuitive and enables task experts to teach robots directly. Learning from Demonstration (LfD) is a popular method for robots to learn tasks by observing human demonstrations. However, for contact-rich tasks such as cleaning, cutting, or writing, LfD alone is insufficient to achieve good performance. Further, LfD methods focus on achieving the observed goals while ignoring whether the actions taken maximize efficiency. By contrast, we recognize that leveraging the human social-learning strategies of practice and coaching in conjunction enables learning tasks with improved performance and efficacy. To address the deficiencies of learning from demonstration, we propose a Coaching by Demonstration (CbD) framework that integrates LfD-based practice with sparse coaching interactions from a human expert.</p>
<p><br></p>
<p>The LfD-based practice in CbD was implemented as an end-to-end off-policy reinforcement learning (RL) agent with the action space and rewards inferred from the demonstration. By modeling the reward as a similarity network trained on expert demonstrations, we eliminate the need for designing task-specific engineered rewards. Representation learning was leveraged to create a novel state feature that captures the interaction markers necessary for performing contact-rich skills. This LfD-based practice was combined with coaching, where the human expert can improve or correct the objectives through a series of interactions. The dynamics of interaction in coaching are formalized using a partially observable Markov decision process. The robot aims to learn the true objectives by observing the corrective feedback from the human expert. We provide an approximate solution by reducing this to a policy parameter update using the KL divergence between the RL policy and a Gaussian approximation based on coaching. The proposed framework was evaluated on a dataset of 10 contact-rich tasks from the assembly (peg insertion), service (cleaning, writing, peeling), and medical (cricothyroidotomy, sonography) domains. Compared to behavioral cloning and reinforcement learning baselines, CbD demonstrates improved performance and efficiency.</p>
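The coaching step, reducing the objective correction to a KL-driven policy parameter update, can be sketched for a one-dimensional Gaussian policy. This is a simplification for illustration, not the dissertation's formulation; the fixed step size and the fixed policy variance are assumptions:

```python
import math

def kl_gaussian(mu_p, sig_p, mu_q, sig_q):
    """KL(N(mu_p, sig_p^2) || N(mu_q, sig_q^2)) for 1-D Gaussians."""
    return (math.log(sig_q / sig_p)
            + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sig_q ** 2) - 0.5)

def coach_update(mu_policy, sig_policy, mu_coach, sig_coach, lr=0.1, steps=50):
    """Nudge the policy mean toward the coaching Gaussian by gradient
    descent on the KL divergence (the variance is held fixed here)."""
    for _ in range(steps):
        grad = (mu_policy - mu_coach) / sig_coach ** 2  # d KL / d mu_p
        mu_policy -= lr * grad
    return mu_policy
```

Each step shrinks the gap between the policy mean and the coaching mean, so the KL divergence decreases monotonically toward its minimum at the coached objective.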
<p><br></p>
<p>During the learning process, the demonstrations and coaching feedback imbue the robot with expert knowledge of the task. To exploit this expertise, we develop a reverse coaching model in which the robot draws on knowledge from demonstrations and coaching corrections to provide guided feedback that improves human trainees' performance. Providing feedback adapted to an individual trainee's "style" is vital to coaching. To this end, we propose representing style as objectives in the task null space. Unsupervised clustering of the null-space trajectories using Gaussian mixture models allows the robot to learn different styles of executing the same skill. Given the coaching corrections and the database of style clusters, a style-conditioned RL agent was developed to coach human trainees' execution using virtual fixtures. The reverse coaching model was evaluated on two tasks, a simulated incision and obstacle avoidance, through a haptic teleoperation interface. The model improves human trainees' accuracy and completion time compared to a baseline without corrective feedback. Thus, by taking advantage of different human social-learning strategies, human-robot collaboration can be realized in human-centric environments.</p>
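The style-clustering step can be illustrated with a toy sketch. The dissertation clusters null-space trajectories with Gaussian mixture models; this stand-in uses plain k-means on made-up 2-D trajectory features, so it only shows the grouping idea, not the actual model:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster trajectory feature vectors into k 'styles'.
    (A k-means stand-in for the dissertation's Gaussian mixture models.)"""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers at data points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Recompute each center as the mean of its group.
        centers = [
            [sum(dim) / len(g) for dim in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups
```

A mixture model would additionally give soft memberships and per-style covariances, which is what lets the style-conditioned agent weight its feedback; the hard clusters above are the simplest version of that structure.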