About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).
Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
51

Affective Workload Allocation System For Multi-human Multi-robot Teams

Wonse Jo (13119627) 17 May 2024 (has links)
Human multi-robot systems constitute a relatively new area of research that focuses on the interaction and collaboration between humans and multiple robots. Well-designed systems can enable a team of humans and robots to work together effectively on complex and sophisticated tasks such as exploration, monitoring, and search-and-rescue operations. This dissertation introduces an affective workload allocation system capable of adaptively allocating workload in real time while considering the conditions and work performance of human operators in multi-human multi-robot teams. The proposed system is composed of three parts, illustrated with a surveillance scenario involving multiple human operators and a multi-robot system. The first part is a framework for an adaptive multi-human multi-robot system that allows real-time measurement and communication between heterogeneous sensors and multi-robot systems. The second part is an algorithm for real-time monitoring of humans' affective states using machine learning, estimating the affective state from multimodal data consisting of physiological and behavioral signals. The third part is a deep reinforcement learning-based workload allocation algorithm. For the first part, we developed a Robot Operating System (ROS)-based affective monitoring framework that enables communication among multiple wearable biosensors, behavioral monitoring devices, and multi-robot systems using the real-time operating system features of ROS. We validated the framework's sub-interfaces by connecting it to a robot simulation and by using it to create a dataset of visual and physiological data categorized by cognitive load level; the targeted cognitive load was induced by a closed-circuit television (CCTV) monitoring task in the surveillance scenario with multi-robot systems. For the second part, we developed a deep learning-based affective prediction algorithm that uses the physiological and behavioral data captured by the wearable biosensors and behavior-monitoring devices to estimate cognitive states. For the third part, we developed a deep reinforcement learning-based workload allocation algorithm that allocates optimal workloads based on a human operator's performance. The algorithm takes an operator's cognitive load, obtained from objective and subjective measurements, as input and uses the operator's task performance model, developed from the empirical findings of extensive user experiments, to allocate optimal workloads to human operators. We validated the proposed system through within-subjects experiments in a generalized surveillance scenario involving multiple humans and multiple robots in a team. The multi-human multi-robot surveillance environment included the affective monitoring framework and the affective prediction algorithm to read sensor data and predict human cognitive load in real time. We investigated optimal methods for affective workload allocation by comparing the proposed approach with other allocation strategies used in the user experiments, demonstrating the effectiveness and performance of the proposed system. Moreover, we found that both subjective and objective measurement of an operator's cognitive load, and a process for seeking consent for workload transitions, must be included in the workload allocation system to improve the team performance of multi-human multi-robot teams.
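As a rough illustration of what a ROS-based affective monitoring node of the kind described above might look like, the minimal sketch below subscribes to hypothetical biosensor and behaviour topics and publishes a fused cognitive-load estimate. The topic names, message types, and the placeholder predict() function are assumptions for illustration, not the dissertation's actual implementation.

```python
import rospy
from std_msgs.msg import Float32, Float32MultiArray

class AffectiveMonitor:
    """Fuses physiological and behavioral feature streams into one estimate."""

    def __init__(self):
        self.physio = None    # latest physiological feature vector
        self.behavior = None  # latest behavioral feature vector
        # Publisher is created first so the callbacks can always use it.
        self.pub = rospy.Publisher("/operator/cognitive_load", Float32, queue_size=1)
        rospy.Subscriber("/biosensor/features", Float32MultiArray, self.on_physio)
        rospy.Subscriber("/behavior/features", Float32MultiArray, self.on_behavior)

    def on_physio(self, msg):
        self.physio = list(msg.data)
        self.publish()

    def on_behavior(self, msg):
        self.behavior = list(msg.data)
        self.publish()

    def predict(self, physio, behavior):
        # Placeholder for the learned affective-state model; here a plain mean.
        feats = physio + behavior
        return sum(feats) / max(len(feats), 1)

    def publish(self):
        if self.physio is None or self.behavior is None:
            return  # wait until both modalities have arrived
        self.pub.publish(Float32(data=self.predict(self.physio, self.behavior)))

if __name__ == "__main__":
    rospy.init_node("affective_monitor")
    AffectiveMonitor()
    rospy.spin()
```

In a real deployment the predict() stub would be replaced by the learned affective-state model, with per-sensor synchronisation added before fusion.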
52

Who Knows Best? Self- versus Friend Robot Customisation with ChatGPT : A study investigating self- and friend-customisation of socially assistive robots acting as a health coach.

Göransson, Marcus January 2024 (has links)
When socially assistive robots (SARs) are used, it is important that their personality is personalised to suit the user. This work investigated how the customisation of the personality of a SAR health coach is perceived when it is done by the users themselves or by their friends via ChatGPT. The research question was: how is personalised dialogue for a social robot perceived when generated via ChatGPT by users and by their friends? The study used a mixed-methods approach in which sixteen participants tested both their own and their friend's personalised version, and the qualitative data were analysed using thematic analysis. The results showed that it does not matter who customises the SAR: neither the self- nor the friend-customised version was more persuasive, and when customising the personality, participants described what they or their friend preferred. Nevertheless, the individual's preferences remain important.
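To make the customisation idea concrete, the sketch below shows one plausible way a self- or friend-written description could be turned into a personalised health-coach persona via the OpenAI chat API. The model name, prompt wording, and build_persona() helper are hypothetical and are not taken from the study.

```python
from openai import OpenAI

client = OpenAI()  # requires the openai package and an OPENAI_API_KEY in the environment

def build_persona(description: str) -> str:
    """Ask the model to write a persona for a SAR health coach tailored to the
    person described (by themselves or by a friend)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You design personalities for a social robot health coach."},
            {"role": "user",
             "content": f"Write a short persona and coaching style suited to this "
                        f"person: {description}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(build_persona("Prefers gentle encouragement and concrete weekly goals."))
```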
53

Can You Read My Mind? : A Participatory Design Study of How a Humanoid Robot Can Communicate Its Intent and Awareness

Thunberg, Sofia January 2019 (has links)
Communication between humans and interactive robots benefits when people have a clear mental model of the robot's intent and awareness. The aim of this thesis was to investigate how human-robot interaction is affected by manipulating the robot's social cues. The research questions were: how do social cues affect mental models of the Pepper robot, and how can a participatory design method be used to investigate how the Pepper robot could communicate intent and awareness? The hypothesis for the second question was that nonverbal cues would be preferred over verbal cues. An existing standard platform was used, SoftBank's Pepper, together with state-of-the-art tasks from the RoboCup@Home challenge. The rule book and observations from the 2018 competition were thematically coded, and the resulting themes were turned into eight scenarios. A participatory design method called PICTIVE was used in a design study in which five student participants went through three phases (label, sketch, and interview) to create a design for how the robot should communicate intent and awareness. PICTIVE proved a suitable way to elicit many design ideas, although not all scenarios were well suited to the task. The design study supported mediating physical attributes as a way to alter the mental model of a humanoid robot and reach common ground. It did not confirm the hypothesis that nonverbal cues would be preferred over verbal cues, though it did show that verbal cues alone would not be enough; this needs to be further tested in live interactions.
54

Signalling of Cyclin O complexes through eIF2α phosphorylation

Ortet Cortada, Laura 04 June 2010 (has links)
We have identified a novel cyclin, called Cyclin O, which is able to bind and activate Cdk2 in response to intrinsic apoptotic stimuli. We have focused on the study of Cyclin Oα and Cyclin Oβ, alternatively spliced products of the gene. Upon treatment with different stress stimuli, transfected Cyclin Oα accumulates in dense cytoplasmic aggregations compatible with stress granules (SGs). Furthermore, we have seen that Cyclin Oβ and a point mutant of the N-terminal part of the protein constitutively localize to SGs. Although both the alpha and beta isoforms are proapoptotic, only Cyclin Oα can bind and activate Cdk2. On the other hand, we have demonstrated that Cyclin O is upregulated by endoplasmic reticulum (ER) stress and is necessary for ER stress-induced apoptosis. Cyclin O specifically activates the PERK pathway and interacts with the PERK inhibitor protein p58IPK. Moreover, Cyclin O participates in the activation of other eIF2α kinases. We have also observed that a pool of Cyclin O is located in active mitochondria, suggesting a function of the protein linked to oxidative metabolism.
55

Learning Continuous Human-Robot Interactions from Human-Human Demonstrations

Vogt, David 02 March 2018 (has links) (PDF)
This dissertation develops a data-driven method for machine learning of human-robot interactions from human-human demonstrations. During a training phase, the movements of two interaction partners are captured via motion capture and learned in a two-person interaction model. At runtime, the model is used both to recognise the movements of the human interaction partner and to generate adapted robot movements. The performance of the approach is evaluated in three complex applications, each requiring continuous motion coordination between human and robot. The result of the dissertation is a learning method that enables intuitive, goal-directed, and safe collaboration with robots.
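As a greatly simplified, hypothetical illustration of learning robot behaviour from human-human demonstrations, the sketch below matches the observed motion of one partner against recorded demonstrations and returns the other partner's corresponding motion to drive the robot. The nearest-neighbour lookup merely stands in for the thesis's actual interaction model and is an assumption for this sketch.

```python
import numpy as np

class PairedDemonstrations:
    """Stores synchronised motion-capture features of two interaction partners."""

    def __init__(self, human_a: np.ndarray, human_b: np.ndarray):
        # human_a, human_b: (T, D) arrays of time-aligned motion features.
        assert human_a.shape == human_b.shape
        self.human_a = human_a
        self.human_b = human_b

    def robot_response(self, observed_a_frame: np.ndarray) -> np.ndarray:
        """Return partner B's frame at the demonstration instant most similar
        to the currently observed frame of partner A."""
        dists = np.linalg.norm(self.human_a - observed_a_frame, axis=1)
        return self.human_b[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    demos = PairedDemonstrations(rng.normal(size=(500, 6)), rng.normal(size=(500, 6)))
    print(demos.robot_response(rng.normal(size=6)))
```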
56

Robot Gaze Behaviour for Handling Confrontational Scenarios / Blickbeteendet hos en robot för att hantera konfrontationsscenarier

Gorgis, Paul January 2021 (has links)
In everyday communication, humans rely on eye gaze as an important communication tool. As technology evolves, social robots are expected to become more widely adopted in society, and since they interact with humans they should likewise use eye gaze to raise the quality of the interaction and improve how humans perceive them. Previous studies have shown that robots with human-like gaze behaviour improve interactants' task performance and their perception of the robot. However, social robots must also be able to behave and respond appropriately when humans act inappropriately; failing to do so may normalise bad behaviour, even towards other humans. Additionally, with the recent progress of wearable eye-tracking technology, there is interest in how this technology can help transfer human gaze behaviour to a robot. This thesis investigates how the eye gaze behaviour of a human can be modelled on the Furhat robot so that it behaves more human-like in a confrontational scenario, and how a robot using the developed human-like gaze model compares to a robot using a believable heuristic gaze model. We created a pipeline that involved selecting scenarios, conducting role-plays of these scenarios between actors to collect gaze data, extracting and processing that data, and deriving the probability distributions used by the human-like model. The model used frequencies to determine gaze target and head rotation, while gamma distributions were used to sample gaze length. We then ran an online video study with the two robot conditions in which participants rated one of the robots by filling out a questionnaire. While no statistically significant difference was found, the human-like condition scored higher on anthropomorphism/human-likeness and composure, whereas the heuristic condition scored higher on expertise and extroversion. As such, the human-like model did not yield a large enough benefit in robot perception to justify choosing it. Still, we suggest that the pipeline used in this thesis may help HRI researchers perform gaze studies and provide a foundation for further development.
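The following sketch illustrates the kind of data-driven gaze model described above, with gaze targets drawn from observed frequencies and gaze durations sampled from fitted gamma distributions. The target set and the (shape, scale) parameters are assumed values for illustration, not the thesis's fitted numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Relative frequencies of where the robot looks (assumed values).
targets = ["interactant_face", "away_left", "away_right", "down"]
freqs = np.array([0.6, 0.15, 0.15, 0.1])

# Per-target gamma parameters (shape k, scale theta) for gaze length in seconds.
gamma_params = {
    "interactant_face": (2.0, 1.2),
    "away_left": (1.5, 0.6),
    "away_right": (1.5, 0.6),
    "down": (1.2, 0.5),
}

def next_gaze():
    """Sample the next gaze target and how long to hold it."""
    target = rng.choice(targets, p=freqs)
    shape, scale = gamma_params[str(target)]
    duration = rng.gamma(shape, scale)
    return str(target), duration

if __name__ == "__main__":
    for _ in range(5):
        t, d = next_gaze()
        print(f"look at {t} for {d:.2f} s")
```

In a full system the sampled target would additionally be mapped to head rotation, as the model described above does.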
57

Augmenting communication channels toward the evolution of autonomous construction sites

Winqvist, David January 2016 (has links)
Context: Over the last century, we have been building infrastructure at a faster pace than ever before. At the same time, costs for labour and for construction sectors such as road and house building are increasing, which creates room for autonomous machines. The development of infrastructure is accomplished through highly efficient and productive construction machinery that is progressively modernised to shape society, and to increase the pace of development both cars and industry are becoming more and more automated. Volvo Construction Equipment is exploring the autonomous vehicle space; the new machines complement human work with efficiency, reliability, and durability. There is, however, a question of trust between human workers and autonomous machines, and this thesis investigates methods for developing trust through communication systems with autonomous machines.
Objectives: To create recommendations and solutions for products that build trust between humans and automated machines on a construction site.
Method: The outcome is reached through a case-study exploration with validated learning, meaning that learnings are incorporated through prototype iterations.
Results: The results evaluate how trust could be developed between humans and autonomous machinery at a construction site and how communication methods between these parties could be implemented while maintaining high levels of efficiency and safety.
Conclusion: The findings of this thesis indicate that trust is developed over time with reliable systems that provide colleagues with up-to-date information available at any time. The results can be introduced in both today's and tomorrow's construction sites at various levels of advanced technology. / Some parts have been removed due to confidentiality. / ME310 Design Innovation at Stanford University
58

Toward Building A Social Robot With An Emotion-based Internal Control

Marpaung, Andreas 01 January 2004 (has links)
In this thesis, we aim to model some aspects of the functional role of emotions in an autonomous embodied agent. We begin by describing our robotic prototype, Cherry, a robot tasked with being a tour guide and an office assistant for the Computer Science Department at the University of Central Florida. Cherry did not have a formal emotion representation of internal states, but did have the ability to express emotions through her multimodal interface. The thesis presents the results of a survey we performed via our social informatics approach, which found that: (1) the idea of having emotions in a robot was warmly accepted by Cherry's users, and (2) the intended users were pleased with our initial interface design and functionalities. Guided by these results, we transferred our previous code to a human-height and more robust robot, Petra, the PeopleBot™, on which we began to build a formal emotion mechanism and representation for internal states corresponding to the external expressions of Cherry's interface. We describe our overall three-layered architecture and propose a design of the sensory-motor level (the first layer of the architecture) inspired by the Multilevel Process Theory of Emotion on one hand and by hybrid robotic architectures on the other. The sensory-motor level receives and processes incoming stimuli with fuzzy logic and produces emotion-like states without any further willful planning or learning. We also discuss how Petra has been equipped with sonar and vision for obstacle avoidance, as well as vision for face recognition, which are used when she roams the hallway to engage in social interactions with humans. We hope that the sensory-motor level in Petra can serve as a foundation for further work on modeling the three-layered architecture of the Emotion State Generator.
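As a hedged illustration of a sensory-motor layer of this kind, the sketch below maps raw stimuli through simple fuzzy membership functions to emotion-like intensities without any planning or learning. The membership shapes, thresholds, and emotion labels are assumptions for the sketch, not Petra's actual rules.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership of x over the support (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def emotion_like_state(sonar_distance_m, faces_detected):
    """Map obstacle proximity and social presence to crude emotion intensities."""
    fear = tri(sonar_distance_m, 0.0, 0.2, 1.0)    # close obstacles -> fear-like state
    interest = min(1.0, faces_detected / 3.0)      # more faces -> interest-like state
    calm = max(0.0, 1.0 - max(fear, interest))     # residual intensity
    return {"fear": fear, "interest": interest, "calm": calm}

if __name__ == "__main__":
    print(emotion_like_state(sonar_distance_m=0.3, faces_detected=1))
```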
59

Preferences for Mental Capacities in Robots : Investigating Preferences for Mental Capacities in Robots Across Different Application Domains

Nääs, Hilda January 2024 (has links)
This study investigates whether preferences for mental capacities in robots vary across different application domains and identifies influential factors, both in individuals' characteristics and in attributes specific to each robot domain. Employing a between-subjects design, participants (N=271) completed a survey collecting both quantitative and qualitative data on preferences for 12 mental capacities across six robot types, each situated in a specific application domain (medicine, defense, household, social, education, customer service). Half of the mental capacities align with each of the two dimensions (experience and agency) of the two-dimensional model of mind (Gray et al., 2007; McMurtrie, 2023). Key findings reveal a general preference for high agency and low experience across all application domains, with exceptions being a preference for lower agency in the cleaning robot and for higher experience in the companion robot. Qualitative analysis indicates a desire for objective and logical robots that function without emotions while demonstrating empathy for human emotions. Additionally, gender and educational background emerged as factors influencing the preference for lower experience abilities in robots. While previous research has mainly focused on the attribution of mental capacities to technical agents, this study provides insights into human preferences and the factors affecting them. These insights can guide responsible and ethics-driven development and design of robot technology within the field of human-robot interaction.
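A minimal sketch of how ratings of individual capacities could be aggregated into the two mind-perception dimensions used in the study is shown below; the capacity lists and the 1-to-7 scale are assumptions for illustration rather than the study's exact instrument.

```python
# Assumed capacity groupings in the spirit of the two-dimensional model of mind.
EXPERIENCE = ["hunger", "fear", "pain", "pleasure", "consciousness", "desire"]
AGENCY = ["self_control", "morality", "memory", "planning", "communication", "thought"]

def dimension_scores(ratings: dict) -> dict:
    """Average per-capacity preference ratings (e.g. 1-7) into the two dimensions."""
    experience = sum(ratings[c] for c in EXPERIENCE) / len(EXPERIENCE)
    agency = sum(ratings[c] for c in AGENCY) / len(AGENCY)
    return {"experience": experience, "agency": agency}

if __name__ == "__main__":
    example = {c: 2 for c in EXPERIENCE} | {c: 6 for c in AGENCY}
    print(dimension_scores(example))  # low experience, high agency preference
```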
60

Multimodal Learning Companions

Yun, Hae Seon 20 December 2024 (has links)
Technologies such as sensors can help in understanding learners' progress and states (e.g., boredom, giving-up behaviour), and these detected states can be used to build a support system that acts as a companion. To this end, this dissertation investigates three research questions: 1) How can multimodal sensor data, such as physiological and embedded sensor data, be used to design learning companions that give learners an awareness of their states? 2) How can learning companions be designed for different modality interfaces, such as screen-based agents and embodied robots, to investigate various means of providing effective advice to learners? 3) How can non-technical users be supported in designing and using multimodal learning companions for their various use cases? To answer these research questions, a design-based research (DBR) methodology was used, considering both theory and practice. The derived design considerations guided the design of the learning companions as well as of the platform for designing multimodal learning companions. The findings reveal an association between changes in physiological sensor values and the arousal of emotion, which is also endorsed by prior studies. It was also found that using sensor devices such as mobile and wearable devices, together with facial expression recognition (FER), can add to the methods for detecting learners' states. Furthermore, designing a learning companion requires considering the different modalities of the involved technology in addition to the appropriate design of application scenarios. It is also necessary to integrate stakeholders (e.g., teachers) into the design process while considering the data privacy of the target users (e.g., students). The dissertation employs DBR to investigate real-life educational issues, considering both theories and practical constraints. Even though the studies conducted are limited, as they involved only small samples lacking generalizability, some authentic educational needs were derived, and corresponding solutions were devised and tested in this dissertation.
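As a hypothetical sketch of the multimodal approach described above, the code below fuses a change in a physiological signal with a facial-expression label to flag a coarse learner state; the thresholds, labels, and LearnerSample fields are assumptions, not the dissertation's pipeline.

```python
from dataclasses import dataclass

@dataclass
class LearnerSample:
    heart_rate_bpm: float
    baseline_bpm: float
    fer_label: str  # e.g. "neutral", "bored", "frustrated", "engaged"

def detect_state(sample: LearnerSample) -> str:
    """Combine arousal change and facial expression into a coarse learner state."""
    arousal_change = sample.heart_rate_bpm - sample.baseline_bpm
    if sample.fer_label in ("bored", "neutral") and arousal_change < -3:
        return "disengaged"   # low arousal and flat expression
    if sample.fer_label == "frustrated" and arousal_change > 5:
        return "struggling"   # high arousal and negative expression
    return "on_task"

if __name__ == "__main__":
    print(detect_state(LearnerSample(62.0, 68.0, "bored")))
```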
