  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Aplicação de um robô humanoide autônomo por meio de reconhecimento de imagem e voz em sessões pedagógicas interativas / Application of an autonomous humanoid robot by image and voice recognition in interactive pedagogical sessions

Daniel Carnieto Tozadore, 03 March 2016
Educational Robotics is a growing area that uses robots to give practical application to the theoretical content discussed in class. However, the robots most commonly used offer little interaction with users, which can be improved by introducing humanoid robots. This dissertation combines computer vision, social robotics, and speech recognition and synthesis to build an interactive system that leads pedagogical sessions through a humanoid robot. The system can be trained on different content, which the robot then addresses autonomously; its application targets the teaching of mathematics to children, and for a first approach the system was trained to interact with children and recognize 3D geometric figures. The proposed architecture is modular, each module being responsible for a specific function and grouping related features. In total there are 4 modules: a Central Module, a Dialog Module, a Vision Module, and a Motor Module. The chosen robot is the humanoid NAO. For the Vision Module, the LEGION network and the VOCUS2 system were compared for object detection, and SVM and MLP classifiers for image classification. Google Speech Recognition and the NAOqi API voice synthesizer handle spoken interaction. An interaction study using the Wizard-of-Oz technique was also conducted to analyze the children's behavior and adapt the methods for better application results. Tests of the complete system showed that small calibrations suffice for an interaction session with few errors. Children exposed to a higher degree of interactivity with the robot felt more engaged and comfortable, both in the experiments and when studying at home for the next sessions, than children exposed to a lower level of interactivity. Alternating challenging and encouraging robot behaviors produced better results in the interaction with the children than a constant behavior.
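The SVM-versus-MLP comparison mentioned for the Vision Module can be illustrated with a minimal sketch. The feature vectors and classes below are synthetic stand-ins, not the dissertation's actual image features or pipeline; only the idea of comparing the two classifiers on the same data comes from the abstract.

```python
# Hedged sketch: comparing an SVM and an MLP on synthetic "shape descriptor"
# vectors, standing in for features of 3D geometric figures. Data, feature
# dimensionality, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Three synthetic classes (e.g. cube, sphere, pyramid) as Gaussian clusters.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 8))
               for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(32,),
                                        max_iter=1000, random_state=0))]:
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: {acc:.2f}")
```

On real image data the two methods differ mainly in training cost and decision-boundary shape; on toy clusters like these, both score near perfectly.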
12

Den första digitala medarbetaren : En Actor-Network Theory-studie i en omsorgsförvaltning / The first digital co-worker : An Actor-Network Theory study in a public care department

Stålhand, Henrik, Davoodi, Matin January 2020
It has been shown that there is some concern about what increased robotization will mean, while the extent of robotization differs between industries. This ANT study deals with the introduction of a robot into a public care department. The organization consequently undergoes some form of change, and we set out to build an understanding of the unknown consequences that follow. The study is also grounded in HRI and anthropomorphism, which provide perspectives on different forms of agency and their interactions. The study is qualitative in nature and characterized by an abductive work process, and the ANT perspective has also had implications for its ontological and epistemological positions. The empirical material is presented as a narrative divided into three episodes, told by the robot Matilda. Matilda has proved to be a central materiality for the associations that connect the actors in the network, but her introduction has also brought network effects. We can say with certainty that the effects are dynamic and only become prominent over time. This yields a circular relationship in which ideas change and succeed one another.
13

Vilka kriterier är viktiga för användarupplevelsen vid interaktion med en språkcafé-robot? / Which criteria are important for the user experience when interacting with a language café robot?

Mekonnen, Michael, Tahir, Gara January 2019
As the number of immigrants in Sweden rises, the demand for alternative methods of language learning increases. The use of social robots for teaching a second language is a promising field. The following research question was designed to identify how social robots can be improved to better suit second-language learners: Which criteria are important for the user experience when interacting with a language café robot? The main method used to answer the question is Design Thinking, supported by semi-structured interviews. The result was 12 criteria that can be implemented in social robots in the future. The study also examined how the criteria can be implemented in robots and to what degree the robot Furhat, developed by Furhat Robotics, implements them today.
14

Differences in Children’s Experiences when Playing with a Social Robot : a Field Experiment / Skillnader i barns upplevelser när de leker med en interaktiv robot : ett fältexperiment

von Matérn, Gunnur January 2014
This study explored human-robot interaction in which children got to play with the interactive social robot Romo. The focus was on whether children experienced the interactions with the robot differently depending on two parameters, intended to measure differences in experiences, attitudes, and expectations towards the robot depending on whether the children were co-creators of the robot or merely played with it. The results indicated that the children in both activity groups had similar pleasurable experiences, apart from four additional categories detected in the co-creation group. This suggests that the children who were given the opportunity to manipulate and shape Romo's behavior had a richer user experience than the children who only played with Romo. It was also notable that none of the children who manipulated and shaped Romo's behavior experienced it as direct learning; they saw the learning process as a playful experience, and many of them said they had taught Romo to do various things. The ability to edit Romo's motions and behavior through an easy contextual-sign interface allowed the children to grasp physical and computational models through play.
15

Recognizing Engagement Behaviors in Human-Robot Interaction

Ponsler, Brett 17 January 2011
Based on analysis of human-human interactions, we have developed an initial model of engagement for human-robot interaction which includes the concept of connection events, consisting of: directed gaze, mutual facial gaze, conversational adjacency pairs, and backchannels. We implemented the model in the open source Robot Operating System and conducted a human-robot interaction experiment to evaluate it.
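The four connection-event types named in the abstract can be sketched as a minimal event model. The event taxonomy comes from the abstract; the data layout, timing rule, and threshold below are illustrative assumptions, not the thesis's actual ROS implementation.

```python
# Hedged sketch of "connection events" from an engagement model. Only the
# four event names are taken from the source; everything else is assumed.
from dataclasses import dataclass
from enum import Enum, auto

class ConnectionEvent(Enum):
    DIRECTED_GAZE = auto()       # one party looks at an object, other follows
    MUTUAL_FACIAL_GAZE = auto()  # both parties look at each other's face
    ADJACENCY_PAIR = auto()      # e.g. a question followed by an answer
    BACKCHANNEL = auto()         # a nod or "mm-hm" during the other's turn

@dataclass
class GazeSample:
    t: float              # timestamp in seconds
    human_at_robot: bool  # human's gaze is on the robot's face
    robot_at_human: bool  # robot's gaze is on the human's face

def mutual_facial_gaze_events(samples, min_duration=0.5):
    """Return (start, end) spans where both gaze at each other >= min_duration."""
    spans, start = [], None
    for s in samples:
        if s.human_at_robot and s.robot_at_human:
            if start is None:
                start = s.t
        else:
            if start is not None and s.t - start >= min_duration:
                spans.append((start, s.t))
            start = None
    return spans

# Mutual gaze begins at t = 0.3 s and is broken at t = 1.2 s.
samples = [GazeSample(0.1 * i, i >= 3, i >= 2) for i in range(12)]
samples.append(GazeSample(1.2, False, False))
print(mutual_facial_gaze_events(samples))
```

A real recognizer would fuse this with speech-segment timing to detect adjacency pairs and backchannels as well; the sketch only covers the gaze-based events.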
16

The impact of social expectation towards robots on human-robot interactions

Syrdal, Dag Sverre January 2018
This work is presented in defence of the thesis that the social expectations and perceptions humans have of robots can be measured explicitly and succinctly, and that these measures relate to how humans interact with, and evaluate, those robots. There are many ways of understanding how humans may respond to, or reason about, robots as social actors; the approach adopted in this body of work focused on interaction-specific expectations rather than expectations regarding the true nature of the robot. These expectations were investigated using a questionnaire-based tool, the University of Hertfordshire Social Roles Questionnaire, developed as part of the work presented in this thesis and tested on a sample of 400 visitors to an exhibition at the Science Gallery in Dublin. That study suggested that responses to the questionnaire loaded on two main dimensions: one related to the degree of social equality the participants expected the interactions with the robots to have, the other to the degree of control they expected to exert over the robots within the interaction. A single item, related to pet-like interactions, loaded on both and was considered a separate, third dimension. The questionnaire was deployed as part of a proxemics study, which found that the degree to which participants accepted particular proxemic behaviours correlated with their initial social expectations of the robot: participants who expected the robot to be more of a social equal preferred it to approach from the front, while participants who viewed the robot more as a tool preferred it to approach from a less obtrusive angle. The questionnaire was also deployed in two long-term studies. In the first study, which involved one interaction a week over a period of two months, participants' social expectations of the robots prior to the study impacted not only how they evaluated open-ended interactions with the robots throughout the two-month period but also how they collaborated with the robots in task-oriented interactions. In the second study, participants interacted with the robots twice a week over a period of six weeks. This study replicated the earlier finding that initial expectations impacted evaluations of interactions throughout the long-term study. In addition, it used the questionnaire to measure post-interaction perceptions of the robots in terms of social expectations. The results suggest that while initial social expectations of robots impact how participants evaluate the robots in terms of interactional outcomes, social perceptions of robots are more closely related to the social and affective experience of the interaction.
17

Requirements for effective collision detection on industrial serial manipulators

Schroeder, Kyle Anthony 16 October 2013
Human-robot interaction (HRI) is the future of robotics. It is essential in expanding markets such as surgical, medical, and therapy robots, but existing industrial systems can also benefit from safe and effective HRI. Many robots are now being fitted with joint torque sensors to enable effective human-robot collision detection; many existing and off-the-shelf industrial robotic systems, however, are not equipped with these sensors. This work presents and demonstrates a method for effective collision detection on a system with motor current feedback instead of joint torque sensors, and evaluates its effectiveness by simulating collisions with human hands and arms. Joint torques are estimated from the input motor currents, with joint friction and hysteresis losses estimated for each joint of an SIA5D 7 degree-of-freedom (DOF) manipulator. The estimated joint torques are validated against the joint torques predicted by recursive application of the Newton-Euler equations. During a pick-and-place motion, the estimation error in joint 2 is less than 10 newton-meters; acceleration increases the estimation uncertainty, yielding estimation errors of 20 newton-meters over the entire workspace. When the manipulator makes contact with the environment or a human, the same technique can be used to estimate contact torques from motor current. The current-estimated contact torque is validated against the torque calculated from a measured force, with an error in contact force of less than 10 newtons. Collision detection is demonstrated on the SIA5D using estimated joint torques, and its effectiveness is explored through simulated collisions with human hands and arms, both for a typical pick-and-place motion and for trajectories that traverse the entire workspace. The simulated forces and pressures are compared to acceptable maximums for human hands and arms. During pick-and-place motions with vertical and lateral end-effector motions at 10 mm/s and 25 mm/s, the maximum forces and pressures remained below acceptable levels. At and near singular configurations some collisions can be difficult to detect; fortunately, these configurations are generally avoided for kinematic reasons.
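The current-based approach described above can be sketched as a simple torque-residual check: estimate joint torque from motor current, subtract the torque predicted by the dynamic model, and flag a collision when the residual exceeds a threshold. The torque constants, gear ratios, and thresholds below are hypothetical, not the SIA5D's, and the friction term stands in for the thesis's more involved friction/hysteresis models.

```python
# Hedged sketch of current-based collision detection via torque residuals.
# All numeric values are illustrative assumptions for a 3-joint example.
import numpy as np

K_T = np.array([0.1, 0.12, 0.08])        # Nm/A per joint (hypothetical)
GEAR = np.array([100.0, 100.0, 80.0])    # gear ratios (hypothetical)
THRESHOLD = np.array([20.0, 20.0, 15.0]) # Nm residual limit per joint

def estimated_torque(current_amps, friction_nm):
    """Joint torque estimated from motor current, minus friction losses."""
    return K_T * GEAR * current_amps - friction_nm

def collision(current_amps, friction_nm, model_torque_nm):
    """True for each joint whose residual exceeds its threshold.

    model_torque_nm is the torque predicted by the dynamic model
    (e.g. recursive Newton-Euler) for the commanded motion.
    """
    residual = estimated_torque(current_amps, friction_nm) - model_torque_nm
    return np.abs(residual) > THRESHOLD

# Free motion: the estimate matches the model, so no collision is flagged.
currents = np.array([1.0, 1.5, 0.5])
friction = np.array([1.0, 1.0, 0.5])
model = estimated_torque(currents, friction)  # perfect model for the demo
print(collision(currents, friction, model))
```

The thesis's observation that acceleration widens the estimation error corresponds here to needing larger `THRESHOLD` values, which in turn raises the smallest detectable contact force.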
18

Socially interactive robots as mediators in human-human remote communication

Papadopoulos, Fotios January 2012
This PhD work was partially supported by the European LIREC project (Living with Robots and Interactive Companions), a collaboration of 10 EU partners that aims to develop a new generation of interactive, emotionally intelligent companions capable of establishing and maintaining long-term relationships with humans. The project takes a multi-disciplinary approach to methods that allow robotic companions to perceive, remember, and react to people, enhancing the companion's sociability in domestic environments (e.g. reminding a user, providing useful information, or carrying heavy objects). One of the project's scenarios, and the focus of this PhD thesis, concerns enhancing remote human-human communication using autonomous robots as social mediators. This scenario involves two distant users who wish to use their robot companions to enhance their communication and interaction with each other over the internet. It arose from the need for communication between people separated from relatives and friends by work commitments or other personal obligations, and even for people who live close by, communication mediated by modern technologies has become widespread. However, even with video communication, one important medium of interaction has received much less attention over the past years: touch. The purpose of this thesis was to develop autonomous robots as social mediators in a remote human-human communication scenario, allowing the users to use touch and other modalities on the robots. This thesis addressed the following research questions: Can an autonomous robot be a social mediator in human-human remote communication? How does an autonomous robotic mediator compare to a conventional computer interface in facilitating users' remote communication?
Which methodology should be used for qualitative and quantitative measurements for local user-robot and user-user social remote interactions? In order to answer these questions, three different communications platforms were developed during this research and each one addressed a number of research questions. The first platform (AIBOcom) allowed two distant users to collaborate in a virtual environment by utilising their autonomous robotic companions during their communication. Two pet-like robots, which interact individually with two remotely communicating users, allowed the users to play an interactive game cooperatively. The study tested two experimental conditions, characterised by two different modes of synchronisation between the robots that were located locally with each user. In one mode the robots incrementally affected each other’s behaviour, while in the other mode, the robots mirrored each other’s behaviour. This study aimed to identify users’ preferences for robot mediated human-human interactions in these two modes, as well as investigating users’ overall acceptance of such communication media. Findings indicated that users preferred the mirroring mode and that in this pilot study robot assisted remote communication was considered desirable and acceptable to the users. The second platform (AiBone) explored the effects of an autonomous robot on human-human remote communication and studied participants' preferences in comparison with a communication system not involving robots. We developed a platform for remote human-human communication in the context of a collaborative computer game. The exploratory study involved twenty pairs of participants who communicated using video conference software. Participants expressed more social cues and sharing of their game experiences with each other when using the robot. 
However, analysis of the participants' interactions with each other and with the robot showed that it is difficult for participants to familiarise themselves quickly with the robot, while they can perform the same task more efficiently with conventional devices. Finally, our third platform (AIBOStory) was based on remote interactive storytelling software that allowed users to create and share common stories through an integrated, autonomous robot companion acting as a social mediator between two people. The robot's behaviour was inspired by dog behaviour and used a simple computational memory model. An initial pilot study evaluated the system's use and acceptance: five pairs of participants were exposed to the system, with the robot acting as a social mediator, and the results suggested an overall positive acceptance response. The main study involved long-term interactions of 20 participants to compare their preferences between two modes: the game enhanced with an autonomous robot, and a non-robot mode. The data were analysed using quantitative and qualitative techniques to measure user preference and human-robot interaction. The statistical analysis suggests a user preference for the robot mode, and the results indicate that users made increasing use of the memory feature, an integral part of the robot's control architecture, as the sessions progressed. The results of the three main studies support our argument that domestic robots can be used as social mediators in remote human-human communication, offering an enhanced experience during interactions with both the robots and each other. Additionally, the presence of intelligent robots in the communication can increase the number of social cues exhibited between the users, and such robots are preferred over conventional interactive devices such as a computer keyboard and mouse.
19

An Evaluation of Gaze and EEG-Based Control of a Mobile Robot

Khan, Mubasher Hassan, Laique, Tayyab January 2011
Context: Patients with diseases such as locked-in syndrome or motor neurone disease are paralyzed and need special care. To reduce the cost of that care, systems need to be designed in which human involvement is minimal and affected people can perform their daily activities independently. To assess the feasibility and robustness of combinations of input modalities, navigation of a mobile robot (Spinosaurus) is controlled by a combination of eye-gaze tracking and other input modalities. Objectives: Our aim is to control the robot using EEG brain signals and eye-gaze tracking simultaneously. Different combinations of input modalities are used to control robot and turret movement, in order to find out which mapping of control technique to control command is most effective. Methods: The method includes developing the interface and control software. An experiment involving 15 participants was conducted to evaluate control of the mobile robot using a combination of eye tracker and other input modalities; subjects were required to drive the mobile robot from a starting point to a goal along a pre-defined path. At the end of the experiment, a sense-of-presence questionnaire was distributed among the participants to collect their feedback. Finally, a qualitative pilot study was performed to find out how a low-cost commercial EEG headset, the Emotiv EPOC, can be used for motion control of a mobile robot. Results: Our results showed that the mouse/keyboard combination was the most effective for controlling the robot motion and the turret-mounted camera. In the experimental evaluation, the keyboard/eye-tracker combination improved performance by 9%, and 86% of participants found the turret-mounted camera useful and a great help in robot navigation. Our qualitative pilot study of the Emotiv EPOC demonstrated different ways to train the headset for different actions. Conclusions: We conclude that different combinations of control techniques can be used to control devices such as a mobile robot or a powered wheelchair. Gaze-based control was found to be comparable with the use of a mouse and keyboard, while EEG-based control required a lot of training time and was difficult to train. Our pilot study suggested that using facial expressions was an efficient and effective way to train the Emotiv EPOC.
20

Wizard-of-Oz system för interaktion på distans med den sociala roboten Furhat / Wizard-of-Oz system for remote interaction with the social robot Furhat

Alvarsson, Albin January 2022
In recent decades the average human life expectancy has increased significantly, and there are more old people than ever before. At the same time there is a large staff shortage at nursing homes. A future study will examine the effect of introducing a socially intelligent robot called Furhat in such a home. In this work a so-called Wizard-of-Oz control system is developed that enables remote control of the normally autonomous Furhat; this control system will later be used in the future study. The Wizard-of-Oz system is developed with the aim of reaching the lowest possible response time between the control system and Furhat, to minimize the risk of unnatural conversation caused by long waits between actions: a response time at or above 500 ms can have a clearly degrading effect on a conversation. By following coding standards focused on fast code, an average response time in the range of 35-245 ms is reached, depending on the action taken.
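The response-time concern above can be illustrated with a minimal round-trip measurement sketch. A local echo server stands in for the robot endpoint, and the command string and 500 ms budget check are illustrative; this is not the thesis's actual Furhat interface.

```python
# Hedged sketch: measuring the round trip of a remote-control command against
# a latency budget, as one would when tuning a Wizard-of-Oz interface.
import socket
import threading
import time

BUDGET_S = 0.5  # 500 ms: the threshold cited as degrading a conversation

def echo_server(sock):
    """Accept one connection and echo what it receives (robot stand-in)."""
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    start = time.perf_counter()
    client.sendall(b"say:hello")   # hypothetical wizard command
    reply = client.recv(1024)      # wait for the acknowledgement
    rtt = time.perf_counter() - start

print(f"round trip: {rtt * 1000:.1f} ms, within budget: {rtt < BUDGET_S}")
```

Over loopback the round trip is far below the budget; over a real network link, the same measurement loop would reveal whether the 500 ms conversational threshold is at risk.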
