1. A study of non-linguistic utterances for social human-robot interaction

Read, Robin. January 2014.
The world of animation has painted an inspiring image of what the robots of the future could be. Taking the robots R2D2 and C3PO from the Star Wars films as representative examples, these robots are portrayed as being more than just machines; rather, they are presented as intelligent and capable social peers, exhibiting many of the traits that people have. These robots have the ability to interact with people, understand us, and even relate to us in very personal ways through a wide repertoire of social cues. As robotic technologies continue to make their way into society at large, there is a growing trend toward making social robots. The field of Human-Robot Interaction concerns itself with studying, developing and realising these socially capable machines, equipping them with a rich variety of capabilities that allow them to interact with people in natural and intuitive ways, ranging from the use of natural language, body language and facial gestures, to less conventional channels such as expression through colours and abstract sounds. This thesis studies the use of abstract, expressive sounds, like those used iconically by the robot R2D2. These are termed Non-Linguistic Utterances (NLUs), a means of communication with a rich history in film and animation. However, very little is understood about how such expressive sounds may be utilised by social robots, and how people respond to them. This work presents a series of experiments aimed at understanding how NLUs can be utilised by a social robot in order to convey affective meaning to people both young and old, and what factors affect the production and perception of NLUs. Firstly, it is shown that not all robots should use NLUs: the morphology of the robot matters. People perceive NLUs differently across different robots, and not always in a desired manner. Next, it is shown that people readily project affective meaning onto NLUs, though not in a coherent manner. Furthermore, people's affective inferences are not subtle; rather, they are drawn to well-established, basic affect prototypes. Moreover, it is shown that the valence of the situation in which an NLU is made overrides the initial valence of the NLU itself: situational context biases how people perceive utterances made by a robot, and through this, coherence between people in their affective inferences is found to increase. Finally, it is found that NLUs are best not used as a replacement for natural language (as they are by R2D2); rather, people show a preference for them being used alongside natural language, where they can play a supportive role by providing essential social cues.
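To make the notion of producing an NLU concrete, the following is a minimal sketch that synthesises a short R2D2-style beep sequence by sweeping sine tones along a rising pitch contour (a shape commonly read as positive affect). This is not the synthesis method used in the thesis; the frequencies, durations and the NumPy/SciPy implementation are assumptions for illustration only.

```python
# Hypothetical illustration of synthesising a simple non-linguistic utterance (NLU):
# a sequence of short tones whose pitch follows a rising contour. Not the thesis's method.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100  # samples per second

def tone(freq_start, freq_end, duration):
    """Generate one beep whose pitch glides linearly from freq_start to freq_end (Hz)."""
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    freq = np.linspace(freq_start, freq_end, t.size)        # linear pitch contour
    phase = 2.0 * np.pi * np.cumsum(freq) / SAMPLE_RATE     # integrate frequency -> phase
    envelope = np.hanning(t.size)                           # fade in/out to avoid clicks
    return np.sin(phase) * envelope

# A rising overall contour (assumed here to read as "positive" valence).
beeps = [tone(600, 900, 0.12), tone(900, 1400, 0.10), tone(1100, 1800, 0.15)]
silence = np.zeros(int(SAMPLE_RATE * 0.04))
utterance = np.concatenate([seg for beep in beeps for seg in (beep, silence)])

wavfile.write("nlu_rising.wav", SAMPLE_RATE, (utterance * 32767).astype(np.int16))
```

Mapping contour shape (rising, falling, flat) to intended affect is the kind of production factor the experiments described above probe.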
2. Which criteria are important for the user experience when interacting with a language café robot? / Vilka kriterier är viktiga för användarupplevelsen vid interaktion med en språkcafé-robot?

Mekonnen, Michael; Tahir, Gara. January 2019.
As the number of immigrants in Sweden rises, the demand for alternative methods of language learning increases. The use of social robots for teaching a second language is a promising field. To identify how social robots can be improved to better suit second language learners, the following research question was formulated: Which criteria are important for the user experience when interacting with a language café robot? The main method used to answer the question was Design Thinking, supported by semi-structured interviews. The result was 12 criteria that can be implemented in future social robots. The study also examined how the criteria can be implemented in robots and to what degree the robot Furhat, developed by Furhat Robotics, implements them today.
3. Social robots powered by IBM Watson as a support for children with health problems

Kabir, Isak; Kindvall, Kalle. January 2017.
Over the last few years, there has been a growing interest in social robots with human-like behavior and their application in healthcare and education. However, there are still plenty of issues that need to be resolved. One of these challenges is to enable social robots to fill their role effectively by creating engagement. This report describes a study, conducted at IBM Sweden, that aims to understand how IBM Watson can be utilized in the Pepper robot to engage and support children at the Ronald McDonald House in Uppsala, a place where children with health problems and their families can live temporarily. Furthermore, supportive behaviors are investigated, since such behaviors are suggested to be important for increasing engagement. An initial prototype that used Watson's natural language processing and Pepper was developed based on user requirements gathered through interviews, following a User Centered Design methodology. The prototype was iteratively developed, and a final evaluation examined both the perception of the robot and the engagement it created. The evaluation showed that the children wanted to interact with the robot again and that they were highly engaged. They perceived the robot as a friend, and the supportive behaviors of giving praise, responding quickly and maintaining eye contact were the most important. The main support the children wanted was help to feel less lonely, and the conclusion of this study is that this is a suitable goal for a robot system.
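As a rough illustration of the kind of dialogue turn such a prototype might perform, the sketch below wraps a placeholder NLP call with the supportive behaviors the evaluation highlighted (praise, quick responses, eye contact). The `robot` and `dialogue_service` objects are hypothetical stand-ins for Pepper's control interface and IBM Watson respectively; none of this reflects the thesis's actual code.

```python
# Hypothetical sketch of a dialogue turn with the supportive behaviors the study found
# most important (praise, quick responses, eye contact). All objects below are
# placeholders: `dialogue_service` stands in for an NLP service such as IBM Watson,
# and `robot` for the Pepper robot's control interface.
import random
import time

PRAISE = ["Well done!", "That's a great question!", "Nice one!"]

def dialogue_turn(robot, dialogue_service, child_utterance):
    """Handle one conversational turn while keeping the interaction engaging."""
    robot.look_at_user()                                   # maintain eye contact during the turn
    started = time.monotonic()

    reply = dialogue_service.get_reply(child_utterance)    # placeholder NLP call

    # If the reply took noticeably long, acknowledge the pause so the child stays engaged.
    if time.monotonic() - started > 1.5:
        robot.say("Hmm, that made me think!")

    # Occasionally give praise before the actual answer.
    if random.random() < 0.3:
        robot.say(random.choice(PRAISE))

    robot.say(reply)
```

In a real prototype the placeholder call would be replaced by the dialogue service's own client library and the robot's speech and gaze APIs.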
4. "Sorry, what was your name again?": How to Use a Social Robot to Simulate Alzheimer’s Disease and Exploring the Effects on its Interlocutors

Kanov, Maria. January 2017.
Machines are designed to be infallible, but what happens if they are suddenly struck by chronic mental decline such as dementia? In this research, a social robot has been transformed into a mild-stage Alzheimer’s patient. The ultimate goal is to use it as a training tool for caregivers and medical students, as well as to raise general awareness of the disease. In particular, the study aimed to identify how to simulate Alzheimer’s with a social robot and what the effects are on its conversation partners. Thanks to its properties, the back-projected robotic head Furhat was the ideal candidate to adopt the role of the patient character, Max. The sources of inspiration derived from interviews and observations. A Wizard of Oz setup enabled a conversation between the character and the user, who was given the task of asking about the robot’s life. To allow for between-subject comparisons, the set of 20 participants was a mixture of medical and non-medical students, as well as people who knew someone with dementia closely and those who had never met anyone with the disease. The experience was evaluated through pre- and post-interviews along with user observations. The results indicate that the patient simulation was convincing, leading the users to treat the machine as a human being and develop an emotional bond with it. They remained patient in spite of the robot’s symptoms, which affirms its potential for educational use. Ultimately, this project aims to inspire researchers to find solutions in unconventional ways.
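For readers unfamiliar with the method, a Wizard of Oz setup means a hidden operator controls the robot while participants believe it acts autonomously. The sketch below is a hypothetical operator console in that spirit; the scripted symptom lines and the `robot.say` interface are invented for illustration and are not taken from the study.

```python
# Hypothetical Wizard of Oz console: a hidden operator picks scripted responses,
# some of which mimic mild Alzheimer's symptoms (repetition, forgetting names).
# `robot` is a placeholder for a speech interface such as Furhat's; this is not
# the setup used in the study.
SCRIPT = {
    "1": "Oh, hello there. Have we met before?",
    "2": "Sorry, what was your name again?",
    "3": "I grew up by the sea. Did I already tell you that?",
    "4": "I grew up by the sea. Did I already tell you that?",  # deliberate repetition
}

def run_console(robot):
    """Let the operator drive the conversation by typing a key for each response."""
    while True:
        choice = input("Response key (1-4, q to quit): ").strip()
        if choice == "q":
            break
        line = SCRIPT.get(choice)
        if line is None:
            print("Unknown key, try again.")
            continue
        robot.say(line)   # placeholder: send the line to the robot's speech output
```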
5. Evaluation of UX in interaction with social robots: USUS Goals, a modification of the USUS framework and development of guidelines for UX evaluation of human-robot interaction / Utvärdering av UX i interaktion med sociala robotar: USUS Goals, en modifiering av USUS-ramverket och utveckling av riktlinjer för UX-utvärdering inom människa-robotinteraktion

Wallström, Josefine. January 2016.
This work was carried out within the SIDUS project AIR (Action and Intention Recognition in human interaction with autonomous systems) and focuses on interaction between humans and autonomous, social robots. Within the field of human-robot interaction (HRI), awareness is growing of how important a positive user experience (UX) of these interactions is. As interest in UX grows, so does the need to be able to work with it in a correct and appropriate way. Today there is a great need for UX methods and techniques adapted to this complex interface. The overall aim of this work is therefore to reduce this need through both a theoretical literature study and empirical work. The literature study identified only two frameworks intended for UX evaluation in HRI, one of which, the USUS framework, is judged to offer a good foundation for UX evaluation work within HRI. The focus of the empirical work has then been to improve and modify this framework by integrating UX goals into it. UX goals are singled out as a central part of all UX work and as something that can also optimise the evaluations that are carried out; therefore they should also be part of the UX work done within the HRI field. The result is presented as a new version of the USUS framework, called USUS Goals. Based on these theoretical and empirical studies, guidelines are then presented for how continued work on UX evaluation in HRI should proceed. Among other things, the final results show that the challenges of integrating UX into the HRI field are greater than first assumed: the challenge lies not only in creating usable and adapted UX methods; rather, it is a mutual responsibility of both domains to meet and address these challenges together.
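To make the idea of integrating UX goals into an evaluation framework concrete, the sketch below attaches measurable goals to evaluation factors. The factor names follow the published USUS framework (usability, social acceptance, user experience, societal impact), but the goal fields, wording and targets are invented examples, not material from the thesis.

```python
# Hypothetical sketch of how UX goals might be attached to the factors of an HRI
# evaluation framework such as USUS. The goal wording and metrics are invented.
from dataclasses import dataclass, field

@dataclass
class UXGoal:
    description: str       # what a "good" experience means for this factor
    metric: str            # how it will be measured
    target: str            # the level that counts as success

@dataclass
class EvaluationFactor:
    name: str
    goals: list = field(default_factory=list)

framework = [
    EvaluationFactor("Usability", [
        UXGoal("Users complete the task without help", "task completion rate", ">= 80 %"),
    ]),
    EvaluationFactor("User experience", [
        UXGoal("Users feel the robot listens to them", "post-session rating", ">= 4 of 5"),
    ]),
    EvaluationFactor("Social acceptance", []),
    EvaluationFactor("Societal impact", []),
]

for factor in framework:
    for goal in factor.goals:
        print(f"{factor.name}: {goal.description} ({goal.metric}, target {goal.target})")
```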
6. Multi-modal expression recognition

Chandrapati, Srivardhan. January 1900.
Master of Science, Department of Mechanical and Nuclear Engineering, Akira T. Tokuhiro

Robots will eventually become common everyday items. However, before this becomes a reality, robots need to learn to be socially interactive. Since humans communicate much more information through expression than through the actual spoken words, expression recognition is an important aspect of the development of social robots. Automatic recognition of emotional expressions has a number of potential applications beyond social robots: it can be used in systems that make sure an operator is alert at all times, or for psychoanalysis and cognitive studies. Emotional expressions are not always deliberate and can occur without the person being aware of them. Recognizing these involuntary expressions provides insight into the person's thoughts and state of mind and could serve as an indicator of hidden intent. In this research we developed an initial multi-modal emotion recognition system using cues from emotional expressions in the face and voice. This is achieved by extracting features from each of the modalities using signal processing techniques, and then classifying these features with the help of artificial neural networks. The features extracted from the face are the eyes, eyebrows, mouth and nose; this is done using image processing techniques such as the seeded region growing algorithm, particle swarm optimization, and general properties of the feature being extracted. The features of interest in speech are pitch, formant frequencies and the mel spectrum, along with statistical properties such as the mean and median and the rate of change of these properties. These features are extracted using techniques such as the Fourier transform and linear predictive coding. We have developed a toolbox that can read an audio and/or video file and perform emotion recognition on the face in the video and the speech in the audio channel. The features extracted from the face and voice are independently classified into emotions using two separate feed-forward artificial neural networks. The toolbox then presents the output of the networks from one or both modalities on a synchronized time scale. One interesting result from this research is the consistent misclassification of facial expressions between two databases, suggesting a cultural basis for this confusion. Adding the voice component has been shown to partially improve classification.
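A compressed sketch of one branch of the pipeline described above: prosodic and spectral features are extracted from a speech clip and classified with a feed-forward network. The use of librosa and scikit-learn, and the file names and labels, are assumptions for illustration; the thesis built its own toolbox, and the facial branch (seeded region growing, particle swarm optimization) is omitted here.

```python
# Hypothetical sketch of the speech modality: extract pitch and mel-spectrum statistics
# from audio clips and classify them with a feed-forward neural network.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def speech_features(path):
    """Summarise pitch and mel-spectrum statistics for one audio file."""
    y, sr = librosa.load(path, sr=None)
    f0 = librosa.yin(y, fmin=60, fmax=400)                  # frame-wise pitch estimate
    mel = librosa.feature.melspectrogram(y=y, sr=sr)        # mel spectrum per frame
    return np.concatenate([
        [np.mean(f0), np.median(f0), np.mean(np.diff(f0))], # pitch stats + rate of change
        mel.mean(axis=1), mel.std(axis=1),                  # spectral statistics
    ])

# Train a feed-forward classifier on labelled clips (file names and labels are assumed).
train_paths, train_labels = ["happy_01.wav", "sad_01.wav"], ["happy", "sad"]
X = np.vstack([speech_features(p) for p in train_paths])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, train_labels)

print(clf.predict([speech_features("new_clip.wav")]))
```

A second network trained on facial-geometry features would run in parallel, with both outputs aligned on a shared time scale, mirroring the two-modality design the abstract describes.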
7. Social Dimensions of Robotic versus Virtual Embodiment, Presence and Influence

Thellman, Sam. January 2016.
Robots and virtual agents grow rapidly in behavioural sophistication and complexity. They become better learners and teachers, cooperators and communicators, workers and companions. These artefacts, whose behaviours are not always readily understood by human intuition nor comprehensibly explained in terms of mechanism, will have to interact socially. Moving beyond artificial rational systems to artificial social systems means having to engage with fundamental questions about agenthood, sociality, intelligence, and the relationship between mind and body. It also means having to revise our theories about these things in the course of continuously assessing the social sufficiency of existing artificial social agents. This thesis presents an empirical study investigating the social influence of physical versus virtual embodiment on people's decisions in the context of a bargaining task. The results indicate that agent embodiment did not affect the social influence of the agent or the extent to which it was perceived as a social actor. However, participants' perception of the agent as a social actor did influence their decisions. This suggests that experimental results from studies comparing different robot embodiments should not be over-generalised beyond the particular task domain in which the studied interactions took place.
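As a generic illustration of the kind of between-condition comparison such a study involves, the sketch below contrasts a decision measure across physical and virtual embodiment groups and relates it to a perceived-social-actor rating. The data are made up and this is not the analysis reported in the thesis.

```python
# Hypothetical between-condition comparison: does embodiment (physical vs. virtual)
# shift a decision measure, and does perceived social agency relate to it?
# All numbers below are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
physical = rng.normal(5.0, 1.0, 20)            # decision scores, physical-robot condition
virtual = rng.normal(5.1, 1.0, 20)             # decision scores, virtual-agent condition
perceived_social = rng.normal(4.0, 1.0, 40)    # perceived-social-actor ratings, all participants

t, p = stats.ttest_ind(physical, virtual)
r, p_r = stats.pearsonr(perceived_social, np.concatenate([physical, virtual]))

print(f"embodiment effect: t = {t:.2f}, p = {p:.3f}")
print(f"perception vs. decisions: r = {r:.2f}, p = {p_r:.3f}")
```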
