
How Many Ways Can You Vocalize Emotion? Introducing an Audio Corpus of Acted Emotion

Emotion recognition from facial expressions has been thoroughly explored through decades of research, but emotion recognition from vocal expressions is not yet fully understood. This project builds on previous experimental approaches to create a large audio corpus of acted vocal emotion. With a sufficiently large sample, both in number of speakers and in recordings per speaker, new hypotheses for differentiating emotions can be explored. Recordings from 131 subjects were collected and made available in an online corpus under a Creative Commons license. Thirteen acoustic features from 120 subjects were used as dependent variables in a MANOVA model to differentiate emotions. For comparison, a simple neural network model was evaluated for its predictive power. Additional recordings intended to exhaust the possible ways of expressing each emotion are also explored. The new corpus matches some features reported in previous studies for each of the four emotions included (anger, fear, happiness, and sadness).
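The design described above (acoustic features as dependent variables, emotion category as the grouping factor) could be set up along these lines. This is a minimal sketch, not the thesis's actual analysis: the feature names (`pitch_mean`, `intensity`, `jitter`) and the synthetic data are placeholders standing in for the study's thirteen acoustic features from 120 subjects, and statsmodels' `MANOVA` is assumed as the modeling tool.

```python
# Hedged sketch of a MANOVA on acoustic features, in the spirit of the
# study's design. Feature names and data are synthetic placeholders; the
# real analysis used 13 acoustic features measured from 120 subjects.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
emotions = ["anger", "fear", "happiness", "sadness"]
n_per_emotion = 30  # 4 x 30 = 120, matching the study's sample size

rows = []
for i, emotion in enumerate(emotions):
    # Shift each group's mean so the emotions are actually separable
    feats = rng.normal(loc=i, scale=1.0, size=(n_per_emotion, 3))
    for f in feats:
        rows.append({"emotion": emotion,
                     "pitch_mean": f[0],
                     "intensity": f[1],
                     "jitter": f[2]})

df = pd.DataFrame(rows)

# Dependent variables (acoustic features) on the left of the formula,
# the emotion factor on the right
manova = MANOVA.from_formula(
    "pitch_mean + intensity + jitter ~ emotion", data=df
)
result = manova.mv_test()  # Wilks' lambda, Pillai's trace, etc.
print(result)
```

A neural network baseline, as mentioned in the abstract, would instead treat the same feature matrix as inputs and the emotion label as the prediction target; the MANOVA tests whether the feature means differ across emotions, while the classifier measures how well those features predict emotion for held-out speakers.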

Identifier: oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-9930
Date: 01 April 2021
Creators: Kowallis, Logan Ricks
Publisher: BYU ScholarsArchive
Source Sets: Brigham Young University
Detected Language: English
Type: text
Format: application/pdf
Source: Theses and Dissertations
Rights: https://lib.byu.edu/about/copyright/
