
Dynamic and adaptive processing of speech in the human auditory cortex

Communicating through speech is an important part of everyday life, and losing that ability can be devastating. Millions of patients around the globe have lost the ability to hear or speak because of deficits involving the auditory cortex. Doctors' ability to help these patients has been hindered by a limited understanding of how speech is processed in the human auditory cortex. This dissertation aims to advance our understanding of speech encoding in the human primary and secondary auditory cortices using two recording methods: electroencephalography (EEG) and electrocorticography (ECoG).
Phonemes are the smallest linguistic units that can change a word's meaning. I characterize EEG responses to continuous speech by extracting responses time-locked to individual phoneme instances (the phoneme-related potential). I show that responses to different phoneme categories are organized by phonetic features, and that each instance of a phoneme in continuous speech evokes multiple distinguishable neural responses, occurring as early as 50 ms and as late as 400 ms after phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and in the acoustic signals confirms that acoustic distinctions among phonemes appear repeatedly in the neural data. Analysis of the phonetic and speaker information in the neural activity reveals that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and they form an empirical framework for studying representational changes in learning, attention, and speech disorders.
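As an illustration of this time-locked epoching procedure, the following minimal sketch computes a phoneme-related potential from a continuous EEG recording by averaging fixed windows aligned to the onsets of one phoneme category. The sampling rate, window limits, baseline correction, and variable names are assumptions made for illustration, not the dissertation's actual analysis pipeline.

```python
import numpy as np

def phoneme_related_potential(eeg, onsets_s, fs=128, tmin=-0.1, tmax=0.4):
    """Average EEG epochs time-locked to phoneme onsets.

    eeg      : array (n_channels, n_samples), continuous recording
    onsets_s : onset times (seconds) of one phoneme category
    fs       : sampling rate in Hz (assumed value)
    tmin/tmax: epoch window relative to phoneme onset, in seconds
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for t in onsets_s:
        i = int(round(t * fs))
        if i - pre < 0 or i + post > eeg.shape[1]:
            continue  # skip onsets too close to the recording edges
        ep = eeg[:, i - pre:i + post]
        ep = ep - ep[:, :pre].mean(axis=1, keepdims=True)  # baseline-correct on the pre-onset interval
        epochs.append(ep)
    return np.mean(epochs, axis=0)  # (n_channels, n_times) phoneme-related potential
```

Repeating this for every phoneme category yields a set of response patterns whose pairwise similarities can then be compared with acoustic similarities between the categories, as described above.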
Later in this dissertation, I use ECoG recordings to explore the mechanisms that support speech communication in real-world environments, which require adaptation to changing acoustic conditions. Specifically, I examine how the human auditory cortex adapts as a new noise source appears in or disappears from the acoustic scene. To investigate these adaptation mechanisms, neural activity was measured in the auditory cortex of six human subjects as they listened to speech with abruptly changing background noises. I observe rapid and selective suppression of the acoustic features of the noise in the neural responses, and this suppression results in an enhanced representation and perception of the acoustic features of speech. The degree of adaptation to different background noises varies across neural sites and is predictable from each site's tuning properties and speech specificity. Moreover, adaptation to background noise is unaffected by the attentional focus of the listener. The convergence of these neural and perceptual effects reveals intrinsic, dynamic mechanisms that enable a listener to filter out irrelevant sound sources in a changing acoustic scene.
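One simple way to picture how such noise suppression could be quantified is sketched below: a site's response is correlated with the speech envelope and with the noise envelope in a window just after the noise appears and in a later window. The correlation-based framing, window lengths, and variable names are my assumptions for illustration and do not reproduce the analyses used in the dissertation.

```python
import numpy as np

def tracking_index(neural, speech_env, noise_env):
    """Correlation of a site's response with the speech vs. noise envelopes."""
    r_speech = np.corrcoef(neural, speech_env)[0, 1]
    r_noise = np.corrcoef(neural, noise_env)[0, 1]
    return r_speech, r_noise

def adaptation_effect(neural, speech_env, noise_env, fs, onset_s, win_s=1.0):
    """Compare noise tracking just after the noise onset vs. later on.

    A drop in noise correlation from the early to the late window, with
    speech correlation maintained, is one signature of adaptation.
    """
    w = int(win_s * fs)
    i = int(onset_s * fs)
    early = slice(i, i + w)          # first window after the noise onset
    late = slice(i + w, i + 2 * w)   # the following window
    r_sp_early, r_nz_early = tracking_index(neural[early], speech_env[early], noise_env[early])
    r_sp_late, r_nz_late = tracking_index(neural[late], speech_env[late], noise_env[late])
    return {"noise_suppression": r_nz_early - r_nz_late,
            "speech_change": r_sp_late - r_sp_early}
```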
Finally, in the last chapter, I introduce the Neural Acoustic Processing Library (NAPLib), a suite of tools for characterizing properties of the neural representation of speech, such as electrode tuning properties and responses to phonemes. The library is applicable to both invasive and non-invasive recordings, including EEG, ECoG, and magnetoencephalography (MEG).
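NAPLib's actual interface is not reproduced here; as a rough illustration of the kind of characterization it supports, the sketch below scores how selectively a single electrode responds to phoneme categories using a plain one-way ANOVA over per-instance responses. The response measure (e.g., mean post-onset high-gamma power) and variable names are assumptions, not NAPLib functions.

```python
import numpy as np
from scipy import stats

def phoneme_selectivity(responses, labels):
    """Crude phoneme selectivity for one electrode.

    responses : array (n_phoneme_instances,), e.g. mean high-gamma power
                in a window after each phoneme onset (assumed measure)
    labels    : array (n_phoneme_instances,) of phoneme category labels

    Returns the one-way ANOVA F statistic and p-value across phoneme
    categories; a large F indicates phoneme-selective tuning.
    """
    groups = [responses[labels == p] for p in np.unique(labels)]
    f, p = stats.f_oneway(*groups)
    return f, p
```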
Together, this dissertation provides new evidence for dynamic and adaptive processing of speech sounds along the auditory pathway, and it offers computational tools for studying the dynamics of speech encoding in the human brain.

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/d8-krbm-1724
Date: January 2020
Creators: Khalighinejad, Bahar
Source Sets: Columbia University
Language: English
Type: Theses
