
Acoustic Mediation of Vocalized Emotion Identification: Do Decoders Identify Emotions Idiographically or Nomothetically?

Most research investigating vocal expressions of emotion has focused on one or more of three questions: whether unique acoustic profiles exist for individual encoded emotions, whether emotion expression is universal across cultures, and how accurately decoders can identify expressed emotions. This dissertation begins to answer a fourth question: whether there are unique patterns in the types of acoustic properties people focus on to identify vocalized emotions. Three hypotheses were tested: first, whether acoustic patterns are interpreted idiographically or nomothetically, as reflected in a comparison of individual versus group lens model identification ratios; second, whether there is a decoder-by-emotion interaction for accuracy scores; and third, whether such an interaction is mediated by the acoustic properties of the vocalized emotions. Results for hypothesis one indicate no difference between individual and group identification ratios, demonstrating that vocalized emotions are decoded nomothetically. Results for hypothesis two indicate no significant decoder-by-emotion interaction on accuracy scores, demonstrating that decoders who are generally good (or bad) at identifying some vocalized emotions tend to be generally good (or bad) at identifying all vocalized emotions. There are, however, significant main effects for both emotion and decoder: anger and happiness are more accurately decoded than fear and sadness. Perhaps most importantly, multivariate results for hypothesis three indicate strong and consistent differences across the four emotions in the way they are identified acoustically. Specifically, decoders identify anger primarily by focusing on spectral characteristics, fear primarily by focusing on fundamental frequency (F0), happiness primarily by focusing on rate, and sadness by focusing on both intensity and rate. These acoustic mediation differences across the emotions are also shown to be nomothetic; that is, they are surprisingly consistent across decoders.
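
The idiographic-versus-nomothetic comparison in hypothesis one can be illustrated in code. The sketch below is not the dissertation's actual analysis; the data, the cue set, and the R-squared comparison are all hypothetical, intended only to show the general shape of contrasting per-decoder (idiographic) cue-utilization models with a single pooled (nomothetic) model.

    import numpy as np

    # Hypothetical data: 20 decoders rate 40 vocal stimuli; each stimulus is
    # described by 4 acoustic cues (e.g., F0, intensity, rate, spectral balance).
    rng = np.random.default_rng(0)
    n_decoders, n_stimuli, n_cues = 20, 40, 4
    cues = rng.normal(size=(n_stimuli, n_cues))          # acoustic measurements per stimulus
    accuracy = rng.normal(size=(n_decoders, n_stimuli))  # identification scores per decoder

    def r_squared(X, y):
        """Proportion of variance in y explained by a linear model on X."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid.var() / y.var()

    # Idiographic: fit one cue-utilization model per decoder.
    individual_r2 = np.array([r_squared(cues, accuracy[d]) for d in range(n_decoders)])

    # Nomothetic: fit a single group-level model to the averaged accuracy profile.
    group_r2 = r_squared(cues, accuracy.mean(axis=0))

    print(f"mean individual R^2 = {individual_r2.mean():.3f}, group R^2 = {group_r2:.3f}")

In the dissertation's terms, individual and group indices of similar size would point toward nomothetic decoding; markedly higher individual indices would point toward idiographic decoding.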

Identifier: oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-2992
Date: 14 December 2009
Creators: Lauritzen, Michael Kenneth
Publisher: BYU ScholarsArchive
Source Sets: Brigham Young University
Detected Language: English
Type: text
Format: application/pdf
Source: Theses and Dissertations
Rights: http://lib.byu.edu/about/copyright/