A framework for understanding social responses to virtual humans suggests that human-like characteristics (e.g., facial expressions, voice, expression of emotion) act as cues that lead a person to place the agent into the category "human" and thus elicit social responses. Given this framework, this research was designed to answer two outstanding questions raised in the research community (Moon & Nass, 2000): 1) If a virtual human has more human-like characteristics, will it elicit stronger social responses from people? 2) How do the human-like characteristics interact in terms of the strength of the social responses? Two social-psychological experiments (social facilitation and politeness norm) were conducted to answer these questions. The first experiment investigated whether virtual humans can evoke a social facilitation response, and how strong that response is, when participants perform cognitive tasks (e.g., anagrams, mazes, modular arithmetic) that vary in difficulty. Participants did the tasks alone, in the company of another person, or in the company of a virtual human that varied in its features. The second experiment investigated whether people apply politeness norms to virtual humans. Participants were tutored and quizzed either by a virtual human tutor that varied in its features or by a human tutor. Participants then evaluated the tutor's performance either directly to the tutor or indirectly via a paper-and-pencil questionnaire. Results indicate that virtual humans can produce social facilitation not only with a facial appearance but also with voice recordings. In addition, performance in the presence of a voice-synced facial appearance appears to elicit stronger social facilitation (i.e., no statistical difference from performance in the human-presence condition) than performance in the presence of voice only or face only. Similar findings were observed in the politeness norm experiment.
Participants who evaluated their tutor directly rated the tutor's performance more favorably than participants who evaluated their tutor indirectly. Moreover, these evaluations of the voice-synced facial appearance did not differ statistically from evaluations of the human tutor. The results suggest that designers of virtual humans should be mindful of the social nature of virtual humans.
Identifier | oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/29601 |
Date | 07 July 2009 |
Creators | Park, Sung Jun |
Publisher | Georgia Institute of Technology |
Source Sets | Georgia Tech Electronic Thesis and Dissertation Archive |
Detected Language | English |
Type | Dissertation |