1. Free field auditory localization and perception. Butcher, Andrew. January 2011.
We have designed a system suitable for auditory electroencephalographic (EEG) experiments,
with the objective of enabling studies of auditory motion. This thesis details the
perceptual cues involved in spatial auditory experiments and compares a number of spatial panning algorithms, examining their suitability for this purpose (a brief panning sketch follows this abstract). A behavioural experiment involving perception of static auditory objects was used in an attempt to differentiate these panning algorithms, and its results informed the choice of panner for the subsequent auditory EEG experiment. That EEG experiment examined the effects of discontinuities in velocity and position on object perception. A new event-related potential (ERP) component, the lateralized object-related negativity (LORN), was identified, and we consider its significance. libnetstation, a library for connecting to the NetStation EEG system, has been developed and released as open-source software. / viii, 61 leaves : ill. ; 29 cm
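As an illustration of the kind of spatial panning algorithm compared in this thesis, here is a minimal sketch of constant-power (equal-power) stereo panning in Python. It is an assumption that this particular panning law was among those evaluated; the sketch only shows the general idea of distributing a source signal across channels with gains derived from a panning law.

```python
import numpy as np

def constant_power_pan(signal: np.ndarray, pan: float) -> np.ndarray:
    """Pan a mono signal between two loudspeakers.

    pan: -1.0 (full left) .. +1.0 (full right).
    Gains follow a sine/cosine law so the total power stays constant.
    """
    theta = (pan + 1.0) * np.pi / 4.0        # map [-1, 1] -> [0, pi/2]
    left_gain = np.cos(theta)
    right_gain = np.sin(theta)
    return np.stack([left_gain * signal, right_gain * signal], axis=-1)

# Example: a 1 kHz tone panned halfway to the right.
fs = 44100
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
stereo = constant_power_pan(tone, pan=0.5)   # shape: (fs, 2)
```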
2. Electroencephalographic measures of auditory perception in dynamic acoustic environments. McMullan, Amanda R. January 2013.
We are capable of effortlessly parsing a complex scene presented to us. In order to do
this, we must segregate objects from each other and from the background. While this
process has been extensively studied in vision science, it remains relatively less
understood in auditory science. This thesis sought to characterize the neuroelectric
correlates of auditory scene analysis using electroencephalography. Chapter 2 determined
components evoked by first-order energy boundaries and second-order pitch boundaries.
Chapter 3 determined components evoked by first-order and second-order discontinuous
motion boundaries. Both chapters focused on analysis of event-related potential (ERP) waveforms and on time-frequency analysis (see the sketch after this abstract). In addition, both chapters investigated the contralateral nature of a negative ERP component. These results extend the current
knowledge of auditory scene analysis by providing a starting point for discussing and
characterizing first-order and second-order boundaries in an auditory scene. / x, 90 leaves : col. ill. ; 29 cm
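As a companion to the ERP and time-frequency analyses described above, here is a minimal sketch of how an ERP waveform and a simple time-frequency representation can be computed from epoched EEG. The array shapes, sampling rate, and use of a short-time Fourier transform are illustrative assumptions, not the specific pipeline used in this thesis.

```python
import numpy as np
from scipy.signal import stft

# Assumed epoched data: (n_trials, n_channels, n_samples), sampled at fs Hz.
fs = 500
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 64, 2 * fs))    # placeholder data

# ERP: average across trials, per channel.
erp = epochs.mean(axis=0)                           # (n_channels, n_samples)

# Simple time-frequency analysis: STFT power of one channel, averaged over trials.
channel = 10
freqs, times, Z = stft(epochs[:, channel, :], fs=fs, nperseg=128, axis=-1)
power = (np.abs(Z) ** 2).mean(axis=0)               # (n_freqs, n_times)
```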
3. Neural mechanisms of attention and speech perception in complex, spatial acoustic environment. Patel, Prachi. January 2023.
We can hold conversations with people in environments where there are typically additional simultaneous talkers in the background, or noise such as vehicles on the street or music playing at a sidewalk café. This seemingly trivial everyday task is difficult for people with hearing deficits and is extremely hard to model in machines. This dissertation focuses on exploring the neural mechanisms by which the human brain encodes such complex acoustic environments and how cognitive processes like attention shape the processing of the attended speech. My initial experiments explore the representation of the acoustic features that help us localize single sound sources in the environment, features like the direction and spectrotemporal content of the sounds, and the interaction of these representations with each other. I play natural American English sentences coming from five azimuthal directions in space.
Using intracranial electrocorticography (ECoG) recordings from the human auditory cortex of the listener, I show that the direction of sound and its spectrotemporal content are encoded in two distinct aspects of the neural response: the direction modulates the mean of the response, while the spectrotemporal features contribute to the modulation of the neural response around its mean. Furthermore, I show that these features are orthogonal to each other and do not interact. This representation enables successful decoding of both spatial and phonetic information. These findings contribute to defining the functional organization of responses in the human auditory cortex, with implications for more accurate neurophysiological models of spatial speech processing.
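To make the decoding claim concrete, here is a minimal sketch of how spatial (direction) information might be decoded from per-trial neural responses with a linear classifier. The data shapes and the choice of scikit-learn's LogisticRegression are illustrative assumptions; the dissertation's actual decoding pipeline is not specified here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Assumed data: one feature vector per trial (e.g., mean response per electrode),
# plus the azimuthal direction label (0..4) of each trial.
rng = np.random.default_rng(1)
n_trials, n_electrodes = 200, 80
X = rng.standard_normal((n_trials, n_electrodes))   # placeholder responses
y = rng.integers(0, 5, size=n_trials)                # 5 azimuthal directions

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"direction decoding accuracy: {scores.mean():.2f} (chance = 0.20)")
```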
I take a step further to investigate the role of attention in the encoding of the direction and phonetic features of speech. I play a mixture of spatialized male and female talkers, e.g., the male talker on the listener's left and the female talker on the right (the talkers' locations switch randomly after each sentence), and ask the listener to follow a given talker, e.g., the male talker, as he switches location after each uttered sentence. While the listener performs this task, I collect intracranial EEG data from their auditory cortex. I investigate the bottom-up, stimulus-dependent and attention-independent encoding of such cocktail-party speech, and the top-down, attention-driven role in the encoding of location and speech features. I find a bottom-up, stimulus-driven contralateral preference in the encoding of the mixed speech, i.e., the left hemisphere automatically and predominantly encodes speech coming from the right, and vice versa. On top of this bottom-up representation, I find that the attended talker's direction modulates the baseline of the neural response and the attended talker's voice modulates the spectrotemporal tuning of the neural response. Moreover, the modulation by the attended talker's location is present throughout the auditory cortex, but the modulation by the attended talker's voice is present only in higher-order auditory cortical areas. My findings provide crucially needed evidence on how bottom-up and top-down signals interact in the auditory cortex in crowded and complex acoustic scenes to enable robust speech perception. Furthermore, they shed light on the hierarchical encoding of attended speech, which has implications for improving auditory attention decoding models.
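One common way to operationalize auditory attention decoding in multi-talker settings is stimulus reconstruction: a linear backward model maps neural responses to an estimate of the speech envelope, and the reconstruction is correlated with the envelopes of the attended and unattended talkers. The sketch below, using ridge regression on assumed data shapes, is a generic illustration of that idea, not the specific model used in this dissertation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Assumed data, all resampled to a common rate (e.g., 100 Hz):
#   neural: (n_samples, n_electrodes) activity
#   env_attended, env_unattended: (n_samples,) speech envelopes of the two talkers
rng = np.random.default_rng(2)
n_samples, n_electrodes = 6000, 80
neural = rng.standard_normal((n_samples, n_electrodes))
env_attended = rng.standard_normal(n_samples)
env_unattended = rng.standard_normal(n_samples)

# Backward model: reconstruct the attended envelope from neural activity.
half = n_samples // 2
model = Ridge(alpha=1.0).fit(neural[:half], env_attended[:half])
reconstruction = model.predict(neural[half:])

# Attention is decoded toward whichever talker correlates more with the reconstruction.
r_att = np.corrcoef(reconstruction, env_attended[half:])[0, 1]
r_unatt = np.corrcoef(reconstruction, env_unattended[half:])[0, 1]
print("decoded: attended talker" if r_att > r_unatt else "decoded: unattended talker")
```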
Finally, I describe a clinical case study in which we show that electrical stimulation of specific sites in the planum temporale (PT) of an epilepsy patient implanted with intracranial electrodes leads to an enhancement of speech-in-noise perception. When noisy speech is played together with such electrical stimulation, the patient perceives that the noise disappears and that the speech sounds similar to clean speech heard without any noise. We performed a series of analyses to determine the functional organization of the three main subregions of the human auditory cortex: planum temporale (PT), Heschl's gyrus (HG), and superior temporal gyrus (STG). Using Cortico-Cortical Evoked Potentials (CCEPs), we modeled the PT sites as being located between the sites in HG and STG. Furthermore, we find that the discriminability of speech from non-speech sounds in population neural responses increases from HG to PT to STG sites. These findings causally implicate the PT in background noise suppression and may point to a novel neuroprosthetic solution to assist in the challenging task of speech perception in noise.
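The cross-region discriminability comparison can be illustrated with a simple per-region classifier: population responses from each region are used to classify speech versus non-speech trials, and cross-validated accuracy serves as a discriminability index. The region electrode counts, data shapes, and classifier choice below are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials = 160
labels = rng.integers(0, 2, size=n_trials)    # 1 = speech, 0 = non-speech (placeholder)

# Assumed population responses per region: (n_trials, n_electrodes_in_region).
regions = {
    "HG": rng.standard_normal((n_trials, 20)),
    "PT": rng.standard_normal((n_trials, 15)),
    "STG": rng.standard_normal((n_trials, 40)),
}

for name, responses in regions.items():
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, responses, labels, cv=5).mean()
    print(f"{name}: speech vs. non-speech accuracy = {acc:.2f}")
```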
Together, this dissertation presents new evidence on the neural encoding of spatial speech; the interaction of stimulus-driven and attention-driven neural processes in spatial multi-talker speech perception; and the enhancement of speech-in-noise perception by electrical brain stimulation.