The principal aim of this doctoral research has been to investigate whether popular methods of emotion elicitation perform differently in terms of self-reported participant affect, and if so, whether any of them better mimics real-life emotional situations. A secondary goal has been to understand how continuous affect can be classified into discrete categories, whether by using clustering algorithms or by relying on human participants to create the classifications. A variety of research directions subserved these main goals: firstly, developing data-driven strategies for selecting 'appropriate' stimuli and matching them across stimulus modalities (i.e., words, sounds, images, films, and virtual environments / VEs); secondly, comparing the chosen modalities on various self-report measures (with VEs assessed both with and without a head-mounted display / HMD); thirdly, comparing how humans classify emotional information versus a clustering algorithm; and finally, comparing all five lab-based stimulus modalities to emotional data collected via an experience-sampling phone app. Findings and outputs discussed include a matched database of stimuli geared towards lab use, how the choice of stimulus modality may affect research results, the links (or discrepancies) between human and machine classification of emotional information, and the range restriction affecting lab stimuli relative to 'real-life' emotional phenomena.
Identifier | oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:756678 |
Date | January 2018 |
Creators | Constantinescu, Alexandra Caterina |
Contributors | Macpherson, Sarah ; Moore, Adam |
Publisher | University of Edinburgh |
Source Sets | Ethos UK |
Detected Language | English |
Type | Electronic Thesis or Dissertation |
Source | http://hdl.handle.net/1842/31397 |