
Decoding the rhythms of avian auditory LFP

We undertook a detailed analysis of population spike rate and LFP power in the zebra finch auditory system. Utilizing the full range of zebra finch vocalizations and dual-hemisphere multielectrode recordings from auditory neurons, we used encoder models to show how intuitive acoustic features such as amplitude, spectral shape, and pitch drive the spike rate of individual neurons and LFP power on electrodes. Using ensemble decoding approaches, we show that these acoustic features can be successfully decoded from the population spike rate vector and from the power spectra of the multielectrode LFP with comparable performance. In addition, we found that adding pairwise spike synchrony to the spike rate decoder boosts performance above that of the population spike rate alone or of the LFP power spectra. We also found that decoder performance grows quickly with the addition of more neurons, but that there is notable redundancy in the population code. Finally, we demonstrate that LFP power on an electrode can be well predicted by population spike rate and spike synchrony. High-frequency LFP power (80-190 Hz) integrates neural activity spatially over distances of up to 250 microns, while low-frequency LFP power (0-30 Hz) can integrate neural activity originating up to 800 microns from the recording electrode.

To understand how an auditory system processes complex sounds, it is essential to understand how the temporal envelope of sounds, i.e., the time-varying amplitude, is encoded by neural activity. We studied the temporal envelopes of zebra finch vocalizations and show that they exhibit modulations in the 0-30 Hz range, similar to human speech. We then built linear filter models to predict 0-30 Hz LFP activity from the temporal envelopes of vocalizations, achieving surprisingly high performance for electrodes near thalamorecipient areas of zebra finch auditory cortex. We then show that there are two spatially distinct subnetworks that resonate in different frequency bands: one subnetwork that resonates around 19 Hz, and another that resonates at 14 Hz. These two subnetworks are present in every anatomical region. Finally, we show that predictive performance can be improved further with recurrent neural network models.
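As a rough illustration of the kind of ensemble decoding described above, the sketch below fits a cross-validated ridge regression that predicts a single acoustic feature (here, amplitude) from a population spike-rate vector. This is not the thesis's actual pipeline: the data are synthetic, and the shapes, feature choice, and regularization strength are assumptions made only for illustration. Pairwise spike-synchrony terms could be appended as extra columns of the design matrix to mirror the performance boost reported in the abstract.

```python
# Minimal decoding sketch (synthetic data, illustrative only):
# predict an acoustic feature from a trials x neurons spike-rate matrix.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_stimuli, n_neurons = 200, 50
# Fake population spike rates: one row per stimulus presentation.
spike_rates = rng.poisson(5.0, size=(n_stimuli, n_neurons)).astype(float)
# Fake acoustic feature (e.g. amplitude) with a linear dependence on the rates.
amplitude = spike_rates @ rng.normal(size=n_neurons) + rng.normal(size=n_stimuli)

# Ridge-regularized linear decoder, scored with 5-fold cross-validated R^2.
decoder = Ridge(alpha=1.0)
r2_scores = cross_val_score(decoder, spike_rates, amplitude, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2_scores.mean():.2f}")
```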

Identifier: oai:union.ndltd.org:PROQUEST/oai:pqdtoai.proquest.com:10192590
Date: 12 January 2017
Creators: Schachter, Mike J.
Publisher: University of California, Berkeley
Source Sets: ProQuest.com
Language: English
Detected Language: English
Type: thesis
