
Spectro-Temporal and Linguistic Processing of Speech in Artificial and Biological Neural Networks

Humans possess the remarkable ability to communicate the most complex of ideas through spoken language, without requiring any external tools. This process has two sides: a speaker producing speech, and a listener comprehending it. While the two actions are intertwined in many ways, they engage different neural circuits in the brains of the speaker and the listener. Both processes are active subjects of artificial intelligence research, under the names of speech synthesis and automatic speech recognition, respectively. While the capabilities of these artificial models are approaching human levels, many questions remain about how our brains perform this task so effortlessly. The advances in these artificial models, however, give us the opportunity to study human speech recognition through a computational lens that was previously unavailable. This dissertation explores the intricate processes of speech perception and comprehension by drawing parallels between artificial and biological neural networks, using computational frameworks that model either the brain circuits involved in speech recognition or the process of speech recognition itself.

This dissertation contains two general types of analyses. The first studies neural responses recorded through invasive electrophysiology from human participants listening to speech excerpts. The second analyzes artificial neural networks trained to perform the same speech recognition task, as potential models of the brain. The first study introduces a novel framework that leverages deep neural networks (DNNs) for interpretable modeling of nonlinear sensory receptive fields, offering an enhanced understanding of auditory neural responses in humans. This approach not only predicts auditory neural responses more accurately but also deciphers distinct nonlinear encoding properties, revealing new insights into the computational principles underlying sensory processing in the auditory cortex. The second study examines the dynamics of temporal processing of speech in automatic speech recognition networks, elucidating how these systems learn to integrate information across timescales, mirroring certain aspects of biological temporal processing.
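To make the receptive-field framework concrete: the DNN models described above extend the classical linear spectro-temporal receptive field (STRF), which predicts a neuron's response as a weighted sum over recent spectrogram history. The sketch below fits that linear baseline with ridge regression on fully synthetic data; all variable names, dimensions, and the simulated response are illustrative assumptions, not the dissertation's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic auditory "spectrogram": 2000 time bins x 16 frequency bands.
n_t, n_f, n_lags = 2000, 16, 10
X = rng.standard_normal((n_t, n_f))

# Ground-truth spectro-temporal receptive field (lags x frequencies).
strf_true = rng.standard_normal((n_lags, n_f))

def lagged(X, n_lags):
    # Build a lagged design matrix: each row holds the last n_lags spectrogram frames.
    n_t, n_f = X.shape
    D = np.zeros((n_t, n_lags * n_f))
    for lag in range(n_lags):
        D[lag:, lag * n_f:(lag + 1) * n_f] = X[:n_t - lag]
    return D

D = lagged(X, n_lags)
y = D @ strf_true.ravel() + 0.5 * rng.standard_normal(n_t)  # noisy simulated neural response

# Ridge regression (closed form) recovers the STRF from stimulus and response.
lam = 1.0
w = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
strf_est = w.reshape(n_lags, n_f)

r = np.corrcoef(strf_est.ravel(), strf_true.ravel())[0, 1]
print(f"STRF recovery correlation: {r:.3f}")
```

A nonlinear DNN encoding model replaces the single linear readout `D @ w` with a learned network, which is what makes interpretability methods necessary in the first study.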

The third study presents a rigorous examination of the neural encoding of linguistic information in speech in the auditory cortex during speech comprehension. By analyzing neural responses to natural speech, we identify explicit, distributed neural encoding across multiple levels of linguistic processing, from phonetic features to semantic meaning. This multilevel linguistic analysis contributes to our understanding of the hierarchical and distributed nature of speech processing in the human brain. The final chapter of this dissertation compares linguistic encoding between an automatic speech recognition system and the human brain, elucidating their computational and representational similarities and differences. This comparison refines our understanding of how linguistic information is processed and encoded across the two systems, offering insights into both biological perception and artificial intelligence mechanisms in speech processing.
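One common way to compare linguistic representations across an ASR system and the brain is representational similarity analysis (RSA): each system's responses to the same stimuli are reduced to a dissimilarity matrix over stimulus pairs, and the two matrices are rank-correlated. The following is a minimal sketch on synthetic data; the shared latent code, dimensions, and noise levels are assumptions for illustration, not results from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

# 60 synthetic speech segments, each with a shared 8-dim latent "linguistic" code.
n_items, n_latent = 60, 8
latent = rng.standard_normal((n_items, n_latent))

# Two observed representations of the same segments: an ASR-layer embedding
# and a neural response pattern, each a different noisy projection of the latent.
asr = latent @ rng.standard_normal((n_latent, 128)) + 0.3 * rng.standard_normal((n_items, 128))
brain = latent @ rng.standard_normal((n_latent, 64)) + 0.3 * rng.standard_normal((n_items, 64))

def rdm(R):
    # Representational dissimilarity matrix: 1 - Pearson r between item patterns.
    return 1.0 - np.corrcoef(R)

def upper(M):
    # Off-diagonal upper triangle, vectorized for comparison across RDMs.
    return M[np.triu_indices_from(M, k=1)]

def spearman(a, b):
    # Rank-transform, then Pearson (no ties expected in continuous data).
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman(upper(rdm(asr)), upper(rdm(brain)))
print(f"ASR-brain representational similarity (Spearman rho): {rho:.3f}")
```

Because RSA operates on pairwise dissimilarities rather than raw activations, it sidesteps the fact that the two systems have different dimensionalities and units, which is what makes such cross-system comparisons feasible.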

Through this comprehensive examination, the dissertation advances our understanding of the computational and representational foundations of speech perception, demonstrating the potential of interdisciplinary approaches that bridge neuroscience and artificial intelligence to uncover the underlying mechanisms of speech processing in both artificial and biological systems.

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/wspq-2093
Date: January 2024
Creators: Keshishian, Menoua
Source Sets: Columbia University
Language: English
Detected Language: English
Type: Theses
