About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Application of an autonomous humanoid robot by image and voice recognition in interactive pedagogical sessions

Tozadore, Daniel Carnieto, 03 March 2016
Educational Robotics uses robots to give practical application to the theoretical content discussed in class. However, the most commonly used robots offer little interaction with users, which can be improved by introducing humanoid robots. This dissertation combines computer vision techniques, social robotics, and speech recognition and synthesis to build an interactive system that leads pedagogical sessions through a humanoid robot. The system can be trained on different content to be presented autonomously to users by the robot; its application here is as a tool to help teach mathematics to children. As a first approach, the system was trained to interact with children and recognize 3D geometric figures. The proposed scheme is module-based, with each module responsible for a specific function and containing a group of features for that purpose. There are four modules in total: Central Module, Dialog Module, Vision Module, and Motor Module. The chosen robot is the humanoid NAO. For the Vision Module, the LEGION network and the VOCUS2 system were compared for object detection, and SVM and MLP for image classification. The Google Speech Recognition engine and the NAOqi API speech synthesizer are used for spoken interaction. An interaction study was also conducted, using the Wizard-of-Oz technique, to analyze the children's behavior and adapt the methods for better results. Tests of the complete system showed that small calibrations are sufficient for an interaction session with few errors. Children who experienced a higher degree of interactivity with the robot felt more engaged and comfortable in the interactions, both in the experiments and when studying at home for the next sessions, compared to children exposed to a lower level of interactivity. Alternating challenging and encouraging robot behaviors produced better interaction results with the children than a constant behavior.
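The four-module design described in this abstract lends itself to a simple coordinator pattern. The sketch below is illustrative only: the class and method names are invented, and the real components (Google Speech Recognition, the NAOqi text-to-speech API, LEGION/VOCUS2 detection, SVM/MLP classification) are stubbed out rather than called.

```python
# Minimal sketch of the module-based architecture from the abstract.
# All names are hypothetical; external APIs are replaced by stubs.

class DialogModule:
    def listen(self):
        # Stub for a speech recognizer such as Google Speech Recognition.
        return input("child says> ")

    def say(self, text):
        # Stub for a text-to-speech call such as NAOqi's ALTextToSpeech.
        print(f"NAO says: {text}")


class VisionModule:
    def classify_shape(self):
        # Stub for detection (LEGION/VOCUS2) plus classification (SVM/MLP).
        return "cube"


class MotorModule:
    def gesture(self, name):
        # Stub for a NAO body animation.
        print(f"NAO performs gesture: {name}")


class CentralModule:
    """Coordinates one question-answer round of a pedagogical session."""

    def __init__(self):
        self.dialog = DialogModule()
        self.vision = VisionModule()
        self.motor = MotorModule()

    def run_round(self):
        shape = self.vision.classify_shape()
        self.dialog.say("What shape am I holding?")
        answer = self.dialog.listen()
        if shape in answer.lower():
            self.dialog.say("Correct, well done!")
            self.motor.gesture("celebrate")   # encouraging behavior
        else:
            self.dialog.say(f"Not quite, it is a {shape}.")
            self.motor.gesture("encourage")


if __name__ == "__main__":
    CentralModule().run_round()
```

The point of the pattern is the one the abstract makes: each module owns one responsibility, so a recognizer or synthesizer can be swapped without touching the session logic.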
2

The impact of social expectation towards robots on human-robot interactions

Syrdal, Dag Sverre, January 2018
This work is presented in defence of the thesis that it is possible to measure, explicitly and succinctly, the social expectations and perceptions that humans have of robots, and that these measures are related to how humans interact with, and evaluate, these robots. There are many ways of understanding how humans may respond to, or reason about, robots as social actors, but the approach adopted in this body of work focuses on interaction-specific expectations rather than expectations regarding the true nature of the robot. These expectations were investigated using a questionnaire-based tool, the University of Hertfordshire Social Roles Questionnaire, which was developed as part of the work presented in this thesis and tested on a sample of 400 visitors to an exhibition in the Science Gallery in Dublin. This study suggested that responses to the questionnaire loaded on two main dimensions: one related to the degree of social equality the participants expected the interactions with the robots to have, the other to the degree of control they expected to exert over the robots within the interaction. A single item, related to pet-like interactions, loaded on both and was considered a separate, third dimension. The questionnaire was deployed as part of a proxemics study, which found that the degree to which participants accepted particular proxemic behaviours was correlated with their initial social expectations of the robot. Participants who expected the robot to be more of a social equal preferred it to approach from the front, while participants who viewed the robot more as a tool preferred it to approach from a less obtrusive angle. The questionnaire was also deployed in two long-term studies. In the first, which involved one interaction a week over a period of two months, participants' social expectations of the robots prior to the study impacted not only how they evaluated open-ended interactions with the robots throughout the two-month period, but also how they collaborated with the robots in task-oriented interactions. In the second, participants interacted with the robots twice a week over a period of six weeks. This study replicated the findings of the first, in that initial expectations impacted evaluations of interactions throughout the long-term study. In addition, it used the questionnaire to measure post-interaction perceptions of the robots in terms of social expectations. The results suggest that while initial social expectations of robots impact how participants evaluate the robots in terms of interactional outcomes, social perceptions of robots are more closely related to the social and affective experience of the interaction.
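A hedged sketch of how a two-dimensional structure like the one reported here can be recovered from questionnaire data. The responses below are synthetic and the dimension labels are taken from the abstract; the actual items of the University of Hertfordshire Social Roles Questionnaire are not reproduced.

```python
# Sketch: extracting two latent dimensions from Likert-style responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 400, 10
# Synthetic 1-5 ratings; in the study these came from exhibition visitors.
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(responses)

# Items loading strongly on one factor or the other would be read as the
# "social equality" and "perceived control" dimensions described above.
for i, loadings in enumerate(fa.components_.T):
    print(f"item {i:2d}: equality={loadings[0]:+.2f}  control={loadings[1]:+.2f}")
```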
3

An Evaluation of Gaze and EEG-Based Control of a Mobile Robot

Khan, Mubasher Hassan; Laique, Tayyab, January 2011
Context: Patients with diseases such as locked-in syndrome or motor neuron disease are paralyzed and need special care. To reduce the cost of that care, systems need to be designed where human involvement is minimal and affected people can perform their daily activities independently. To assess the feasibility and robustness of combinations of input modalities, navigation of a mobile robot (Spinosaurus) is controlled by a combination of eye-gaze tracking and other input modalities. Objectives: Our aim is to control the robot using EEG brain signals and eye-gaze tracking simultaneously. Different combinations of input modalities are used to control robot and turret movement, and we then determine which combination of control technique mapped to control command is most effective. Methods: The method includes developing the interface and control software. An experiment involving 15 participants was conducted to evaluate control of the mobile robot using a combination of an eye tracker and other input modalities. Subjects were required to drive the mobile robot from a starting point to a goal along a pre-defined path. At the end of the experiment, a sense-of-presence questionnaire was distributed among the participants to collect their feedback. Finally, a qualitative pilot study was performed to find out how a low-cost commercial EEG headset, the Emotiv EPOC™, can be used for motion control of a mobile robot. Results: Our results showed that the mouse/keyboard combination was the most effective for controlling the robot's motion and the turret-mounted camera, respectively. In the experimental evaluation, the keyboard/eye-tracker combination improved performance by 9%. 86% of participants found the turret-mounted camera useful and of great assistance in robot navigation. Our qualitative pilot study of the Emotiv EPOC™ demonstrated different ways to train the headset for different actions. Conclusions: In this study, we concluded that different combinations of control techniques can be used to control devices such as a mobile robot or a powered wheelchair. Gaze-based control was found to be comparable with the use of a mouse and keyboard; EEG-based control was found to require a lot of training time and to be difficult to train. Our pilot study suggested that using facial expressions to train the Emotiv EPOC™ was an efficient and effective way to train it.
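The combinations evaluated here amount to a mapping from input events to robot commands. A minimal sketch of that idea follows, with entirely hypothetical event sources and command names; the study's actual interface software is not reproduced.

```python
# Sketch: routing events from several input modalities to robot commands.
from dataclasses import dataclass


@dataclass
class InputEvent:
    modality: str  # "gaze", "keyboard", or "eeg"
    value: str     # e.g. a gaze region, a key name, or a trained EEG action


def map_to_command(event: InputEvent) -> str | None:
    """Translate one input event into a drive or turret command."""
    if event.modality == "gaze":
        # Gaze region steers the robot, as in the keyboard/eye-tracker setup.
        return {"left": "turn_left", "right": "turn_right",
                "center": "forward"}.get(event.value)
    if event.modality == "keyboard":
        return {"a": "turret_left", "d": "turret_right"}.get(event.value)
    if event.modality == "eeg":
        # E.g. a facial-expression action trained on a consumer headset.
        return {"smile": "forward", "clench": "stop"}.get(event.value)
    return None


for e in [InputEvent("gaze", "left"), InputEvent("keyboard", "d"),
          InputEvent("eeg", "clench")]:
    print(e.modality, "->", map_to_command(e))
```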
4

The Effects of a Humanoid Robot's Non-lexical Vocalization on Emotion Recognition and Robot Perception

Liu, Xiaozhen, 30 June 2023
As robots have become more pervasive in our everyday lives, the social aspects of robots have attracted researchers' attention. Because emotions play a key role in social interactions, research has been conducted on conveying emotions via speech, whereas little research has focused on the effects of non-speech sounds on users' perception of robots. We conducted a within-subjects exploratory study with 40 young adults to investigate the effects of non-speech sounds (regular voice, characterized voice, musical sound, and no sound) and basic emotions (anger, fear, happiness, sadness, and surprise) on user perception. While listening to a fairy tale with the participant, a humanoid robot (Pepper) responded to the story with a recorded emotional sound and a gesture. Participants showed significantly higher emotion recognition accuracy for the regular voice than for the other sounds. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research. The regular voice also induced higher trust, naturalness, and preference than the other sounds. Interestingly, the musical sound mostly yielded lower perception ratings than no sound. A further exploratory study was conducted with an additional 49 young people to investigate the effect of regular non-verbal voices (female and male) and basic emotions (happiness, sadness, anger, and relief) on user perception. We also explored the impact of participants' gender on emotional and social perception of the robot Pepper. While listening to a fairy tale with the participants, the robot responded to the story with gestures and emotional voices. Participants showed significantly higher emotion recognition accuracy and social perception in the voice-plus-gesture condition than in the gesture-only condition. The confusion matrix again showed that happiness and sadness had the highest emotion recognition accuracy. Interestingly, participants felt more discomfort and anthropomorphism with male voices than with female voices. Male participants were more likely to feel uncomfortable when interacting with Pepper, whereas female participants were more likely to feel warmth. However, neither the gender of the robot's voice nor the gender of the participant affected the accuracy of emotion recognition. Results are discussed with social-robot design guidelines for emotional cues and future research directions. / Master of Science / As robots increasingly appear in people's lives as functional assistants or for entertainment, there are more and more scenarios in which people interact with robots, and more research on human-robot interaction is being proposed to help develop more natural ways of interacting. Our study focuses on the effects of emotions conveyed by a humanoid robot's non-speech sounds on people's perception of the robot and its emotions. The results of our experiments show that the accuracy of emotion recognition for regular voices is significantly higher than for music and robot-like voices, and that regular voices elicit higher trust, naturalness, and preference. The gender of the robot's voice and the gender of the participant did not affect the accuracy of emotion recognition. People no longer lean toward traditional stereotypes of robotic voices (e.g., as in old movies), and expressing emotions with music and gestures mostly produced lower perception ratings. Happiness and sadness were identified with the highest accuracy among the emotions we studied. Participants felt more discomfort and human-likeness with the male voices than with the female voices. Male participants were more likely to feel uncomfortable when interacting with the humanoid robot, while female participants were more likely to feel warmth. Our study discusses design guidelines and future research directions for emotional cues in social robots.
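The emotion-recognition accuracies and confusion matrices reported above come from tallying which emotion each participant named against the emotion the robot expressed. A sketch with made-up trial data; the study's real responses are not reproduced.

```python
# Sketch: accuracy and confusion matrix for an emotion recognition study.
from sklearn.metrics import accuracy_score, confusion_matrix

emotions = ["anger", "fear", "happiness", "sadness", "surprise"]
# Each pair is (emotion the robot expressed, emotion the participant named).
trials = [("happiness", "happiness"), ("sadness", "sadness"),
          ("anger", "fear"), ("fear", "fear"), ("surprise", "happiness"),
          ("happiness", "happiness"), ("sadness", "sadness")]

expressed = [t[0] for t in trials]
perceived = [t[1] for t in trials]

print("accuracy:", accuracy_score(expressed, perceived))
# Rows are expressed emotions, columns are perceived ones.
print(confusion_matrix(expressed, perceived, labels=emotions))
```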
5

From a Machine to a Collaborator

Bozorgmehrian, Shokoufeh, 05 January 2024
This thesis book explores the relationship between architecture and robotics, tailored to the needs of architecture students, professionals, and other creative users. The investigation encompasses three distinct robotic-arm applications for architecture students, introduces and evaluates an innovative 3D-printing application with robotic arms, and presents projects focused on the design of human-robot interaction techniques and their system development. Furthermore, the thesis showcases the development of a more intuitive human-robot interaction system and explores various methods of user interaction with robotic arms for rapid prototyping and fabrication. Each experiment describes the process, the level of interaction, and key takeaways. The narrative of the thesis unfolds as a journey through different applications of robotic fabrication, with the creative human as the focal point of these systems. The thesis underscores the significance of user-experience research and anticipates future innovations in the evolving landscape of the creative field. The discoveries made in this exploration lay a foundation for the study and design of interfaces and interaction techniques, fostering seamless collaboration between designers and robotic systems. Keywords: Robotic Fabrication, Human-Robot Interaction (HRI), Human-Computer Interaction (HCI), User Experience Research, Human-Centered Design, Architecture, Art, Creative Application / Master of Architecture
6

Robots that say 'no' : acquisition of linguistic behaviour in interaction games with humans

Förster, Frank, January 2013
Negation is a part of language that humans engage in practically from the onset of speech. Negation appears at first glance to be harder to grasp than object or action labels, yet this thesis explores how this family of 'concepts' could be acquired in a meaningful way by a humanoid robot, based solely on unconstrained dialogue with a human conversation partner. The earliest forms of negation appear to be linked to the affective or motivational state of the speaker. We therefore developed a behavioural architecture containing a motivational system. This motivational system feeds its state simultaneously to other subsystems for the purpose of symbol grounding, and also leads to the expression of the robot's motivational state via a facial display of emotions and motivationally congruent body behaviours. To achieve the grounding of negative words, we examine two mechanisms which provide an alternative to the established grounding via ostension, with or without joint attention. Two large experiments were conducted to test these mechanisms. One is so-called negative intent interpretation; the other is a combination of physical and linguistic prohibition. Both mechanisms have been described in the literature on early child language development but had never been used in human-robot interaction for the purpose of symbol grounding. As we show, both mechanisms may operate simultaneously, and we can exclude neither as a potential ontogenetic origin of negation.
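The architecture described here centres on a motivational system that broadcasts its state to other subsystems. A toy sketch of that broadcast follows, with invented names and thresholds; the thesis's actual architecture is far richer.

```python
# Sketch: a motivational state broadcast to subscribing subsystems.
class MotivationalSystem:
    def __init__(self):
        self.valence = 0.0       # negative = displeasure, positive = pleasure
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def update(self, delta):
        # Clamp valence to [-1, 1] and notify every subsystem of the change.
        self.valence = max(-1.0, min(1.0, self.valence + delta))
        for callback in self.subscribers:
            callback(self.valence)


def facial_display(valence):
    print("face:", "frown" if valence < 0 else "smile")


def grounding_subsystem(valence):
    # Strong negative affect co-occurring with an utterance like "no!" is
    # the kind of signal the thesis links to grounding negation words.
    if valence < -0.5:
        print("grounding: associate the current utterance with rejection")


motivation = MotivationalSystem()
motivation.subscribe(facial_display)
motivation.subscribe(grounding_subsystem)
motivation.update(-0.8)  # e.g. the robot has just been prohibited physically
```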
7

Exploring Human-Robot Interaction Through Explainable AI Poetry Generation

Strineholm, Philippe, January 2021
As the field of Artificial Intelligence continues to evolve into a tool of societal impact, a need arises to break out of its initial boundaries as a computer-science discipline and include different humanistic fields as well. The work presented in this thesis revolves around the role that explainable artificial intelligence plays in human-robot interaction, studied through poetry generators. To set the scope of the project, a study of poetry generators presents the steps involved in the development process and the evaluation methods. In the algorithmic development of poetry generators, a shift from traditional disciplines to transdisciplinarity is identified. In collaboration with researchers from the Research Institutes of Sweden, state-of-the-art generators are tested to showcase the power of artificially enhanced artifacts. A development plateau is discovered, and with the inclusion of Design Thinking methods, potential future human-robot interaction development is identified. A physical prototype capable of verbal interaction on top of a poetry generator is created, with the new feature of changing the corpus to any given audio input. Lastly, the strengths of transdisciplinarity are connected with the open-source community with regard to creativity and self-expression, producing an online tool that addresses future improvements and introduces non-experts to the steps required to build an intelligent robotic companion themselves, thus also encouraging public technological literacy. Explainable AI is shown to help with user involvement in the creation, alteration, and deployment of AI-enhanced applications.
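The prototype's headline feature, swapping the generator's corpus for arbitrary audio input, can be sketched as a speech-to-text step feeding a simple word-level Markov chain. Everything below is illustrative: the transcription is stubbed out, and the thesis's actual generators are more sophisticated.

```python
# Sketch: rebuilding a poetry generator's corpus from transcribed audio.
import random


def transcribe(audio_path: str) -> str:
    # Stub: a real system would call a speech-to-text service here.
    return "the rain sings to the quiet street and the street sings back"


def build_markov(text: str) -> dict[str, list[str]]:
    # Record, for each word, the words observed to follow it.
    words = text.split()
    chain: dict[str, list[str]] = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain


def generate(chain: dict[str, list[str]], length: int = 8) -> str:
    word = random.choice(list(chain))
    line = [word]
    for _ in range(length - 1):
        # Fall back to a random word when the current one has no successors.
        word = random.choice(chain.get(word) or list(chain))
        line.append(word)
    return " ".join(line)


chain = build_markov(transcribe("recording.wav"))
print(generate(chain))
```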
8

I don't know because I'm not a robot: A qualitative study exploring moral questions as a way to investigate the reasoning behind preschoolers' mental state attribution to robots

Amcoff, Oscar, January 2022
Portrayals of artificially intelligent robots are becoming increasingly prevalent in children's culture. This affects how children perceive robots, which in turn has been found to affect how schoolchildren understand subjects like technology and programming. Since teachers need to know what influences their pupils' understanding of these subjects, we need to know how children's preconceptions about robots affect the way they attribute mental states to them. We still know relatively little about how children do this, so a qualitative approach was deemed fitting. This study aimed to (1) investigate the reasoning and preconceptions underlying children's mental state attribution to robots, and (2) explore the effectiveness of moral questions as a way to do this. Sixteen children aged 5 and 6 were asked to rate the mental states of four different robots and then asked to explain their answers. Half of the children were interviewed alone and half in small groups. A thematic analysis was conducted on the qualitative data. Children's mental state attribution was found to be influenced by preconceptions about robots as a group of entities lacking mental states. Children were found to perceive two of the robots, Atlas and Nao, differently in various respects; this was argued to be because the children perceived these robots through archetypal frameworks. Moral questions proved successful as a way to spark reflective reasoning about mental state attribution in the children.
9

Lay assessment of a self-driving vehicle's driving ability: the influence of experiencing the vehicle in traffic

Åkerström, Ulrika, January 2022
Computer-controlled machines that can direct their own activities and that have a large range of motion will soon share our physical environment, which will mean a drastic change to our current human context. Past accidents between human drivers and automated vehicles can be explained by a lack of understanding of the automated vehicle's behaviour. It is therefore important to find out how people understand the abilities and limitations of automated vehicles. SAE International, a global professional association for engineers in the automotive industry, has defined a framework that describes the functionality of automated vehicles at six levels. Taking this framework as its starting point, the reported study investigated what degree of automation participants assume a self-driving bus has, based on their experience of the vehicle. As part of the study, participants rode a short distance on a self-driving bus and answered a questionnaire about how they view the bus's abilities and limitations, both before and after the ride. The results showed that half of the participants overestimated the bus's degree of automation. After riding the bus, participants adjusted their expectations of the vehicle's driving ability downwards, bringing them more in line with the bus's actual abilities and limitations. Participants also reported being more confident in their assessments after experiencing the vehicle. In summary, the results suggest that (1) people tend to overestimate the driving ability of automated vehicles, but that (2) their perception is adjusted once they encounter the automated vehicle in reality, and that (3) they then also become more confident in their assessments. This should be taken into account in the development of self-driving vehicles in order to reduce the risk of traffic accidents.
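The pre/post design reported here boils down to comparing the same participants' automation-level estimates before and after the ride. A sketch with synthetic ratings follows; the bus's true SAE level and all numbers below are assumptions made purely for illustration.

```python
# Sketch: paired comparison of pre- and post-ride SAE-level estimates.
import numpy as np
from scipy.stats import wilcoxon

assumed_actual_level = 3                         # assumption, not from the study
pre = np.array([4, 5, 4, 3, 5, 4, 4, 5, 3, 4])   # estimates before the ride
post = np.array([3, 4, 3, 3, 4, 3, 4, 4, 3, 3])  # estimates after the ride

print("share overestimating before ride:", np.mean(pre > assumed_actual_level))
print("mean shift after ride:", (post - pre).mean())

# Paired non-parametric test of whether estimates dropped after the ride.
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: stat={stat:.1f}, p={p:.3f}")
```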
