This thesis combined behavioural and fMRI approaches to study the role of autistic traits in the perception of emotion from faces and voices, addressing research questions concerning: behavioural recognition of the full range of six basic emotions across multiple domains (face, voice, and face-voice); neural correlates during the processing of a wide range of emotional expressions from the face, the voice, and the combination of both; and the neural circuitry responding to an incongruence effect (incongruence vs. congruence). The behavioural study investigated the effects of autistic traits, as quantified by the Autism-Spectrum Quotient (AQ), on emotional processing in unimodal (faces, voices) and crossmodal (emotionally congruent face-voice expressions) presentations. In addition, by taking the degree of anxiety into account, the role of co-morbid anxiety in emotion recognition across autistic traits was also explored. Compared to an age- and gender-matched group of individuals with low levels of autistic traits (LAQ), individuals with high levels of autistic traits (HAQ) showed no general deficit in recognizing emotions presented in faces and voices, regardless of their co-morbid anxiety. However, co-morbid anxiety did moderate the relationship between autistic traits and the recognition of certain emotions (e.g., fear, surprise, and anger), and this effect tended to differ between the two groups. Specifically, with greater anxiety, individuals with HAQ showed a lower probability of correct response in recognizing fear, whereas individuals with LAQ showed a greater probability of correct response in recognizing fear expressions.
For response time, anxiety symptoms tended to be significantly associated with greater response latency in the HAQ group but shorter response latency in the LAQ group when recognizing emotional expressions, negative emotions in particular (e.g., anger, fear, and sadness); this effect of anxiety was not restricted to specific modalities. Although no general emotion recognition deficit was found in individuals with considerable autistic traits compared to those with low levels of autistic traits, this does not necessarily mean that the two groups share the same neural network when processing emotions. It was therefore useful to explore the neural correlates engaged in the processing of emotional expressions in individuals with high levels of autistic traits. The results of this investigation suggested hypoactivation of brain areas dedicated to multimodal integration, particularly for displays showing happiness and disgust. However, both the HAQ and LAQ groups showed similar patterns of brain response (mainly in temporal regions) to face-voice combinations. In response to emotional stimuli in a single modality, the HAQ group activated a number of frontal and temporal regions (e.g., STG, MFG, IFG); these differences may suggest more effortful and less automatic processing in individuals with HAQ. In everyday life, emotional information is often conveyed by both the face and the voice. Consequently, concurrently presented information from one source can alter the way information from the other source is perceived, leading to emotional incongruence when the information from the two sources conflicts. Using fMRI, the present work also examined the neural circuitry involved in responding to an incongruence effect (incongruence vs. congruence) from face-voice pairs in a group of individuals with considerable autistic traits.
In addition, differences in brain responses to emotional incongruity between explicit instructions to attend to facial expression and explicit instructions to attend to tone of voice were also explored in relation to autistic traits. No significant incongruence effect was found between groups, suggesting that individuals with a high level of autistic traits are able to recruit the same normative neural networks for processing incongruence as individuals with a low level of autistic traits, regardless of instructions. Despite the absence of between-group differences, individuals with HAQ showed negative activation in regions involved in the default-mode network. However, taking changes of instructions into account, a stronger incongruence effect was more likely to occur in the voice-attend condition for individuals with HAQ and in the face-attend condition for individuals with LAQ.
Identifier | oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:732780 |
Date | January 2018 |
Creators | Liu, Peipei |
Publisher | University of Glasgow |
Source Sets | Ethos UK |
Detected Language | English |
Type | Electronic Thesis or Dissertation |
Source | http://theses.gla.ac.uk/8708/ |