1. The perception and cortical processing of communication sounds. Walker, Kerry M. M. (January 2008)
The neural processes used to extract perceptual features of vocal calls, and subsequently to re-integrate those features to form a coherent auditory object, are poorly understood. In this thesis, extracellular recordings were carried out in order to investigate how the temporal envelope, pitch, timbre and spatial location of communication sounds are represented by neurons in two core and three belt areas of ferret (Mustela putorius furo) auditory cortex. Potential neural underpinnings of auditory perception were tested using neurometric analysis to relate the reliability of neural responses to the performance of ferret and human listeners on psychophysical tasks. I found that human listeners' discrimination of the temporal envelopes of vocalization sounds matched the best neurometrics calculated from the temporal spiking patterns of ferret cortical neurons. Neurometric scores based on the spike rates of cortical neurons accounted for ferrets' discrimination of the pitch of artificial vowels. I show that most auditory cortical neurons are modulated by a number of stimulus features, rather than being tuned to only one feature. Neurons in the core auditory cortical fields often respond uniquely to particular combinations of pitch and timbre features, while those in belt regions respond more linearly to feature combinations. Subtle differences in the sensitivity of neurons to pitch, timbre and azimuthal cues were found across cortical areas and depths. These results suggest that auditory cortical neurons provide widely distributed representations of vocalizations, and a single neuron can often use combinations of spike rate and temporal spiking responses to encode multiple sound features.
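To make the neurometric comparison concrete, here is a minimal sketch of a spike-count neurometric based on ROC analysis, which estimates the percent correct an ideal observer could achieve in a two-alternative task from a single neuron's spike counts. The spike counts and the `roc_area` helper are hypothetical illustrations, not data or analysis code from the thesis, which also used neurometrics based on temporal spike patterns.

```python
# A minimal sketch of a spike-count neurometric, assuming two sets of trial spike
# counts (one per stimulus). The ROC area approximates the percent correct an ideal
# observer would achieve in a two-alternative discrimination from this neuron alone,
# and can be compared against the listener's psychometric performance.
import numpy as np

def roc_area(counts_a, counts_b):
    """Probability that a random trial from A has a higher spike count than a
    random trial from B (ties split 50/50), i.e. 2AFC percent correct."""
    a = np.asarray(counts_a)[:, None]          # shape (nA, 1)
    b = np.asarray(counts_b)[None, :]          # shape (1, nB)
    return (a > b).mean() + 0.5 * (a == b).mean()

# Hypothetical spike counts for two artificial vowels differing in pitch.
vowel_low = np.array([12, 15, 11, 14, 13, 16, 12, 15])
vowel_high = np.array([18, 21, 17, 20, 19, 22, 18, 20])
print(f"neurometric percent correct: {roc_area(vowel_high, vowel_low):.2f}")
```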
2. Optimizing Reservoir Computing Architecture for Dynamic Spectrum Sensing Applications. Sharma, Gauri (25 April 2024)
Spectrum sensing in wireless communications is a crucial binary classification task in cognitive radios, allowing secondary users to detect available radio spectrum, especially in high Signal-to-Noise Ratio (SNR) scenarios. Liquid State Machines (LSMs), which emulate spiking neural networks like those in the human brain, prove highly effective for real-time data monitoring in such temporal tasks. The inherent advantages of LSM-based recurrent neural networks, such as low complexity, high power efficiency, and accuracy, surpass those of traditional deep learning and conventional spectrum sensing methods. The architecture of the liquid state machine processor and its training methods are crucial to the performance of an LSM accelerator. This thesis presents one such LSM-based accelerator and explores novel architectural improvements for LSM hardware. By adopting triplet-based Spike-Timing-Dependent Plasticity (STDP) and various spike encoding schemes on the spectrum dataset within the LSM, we investigate the advantages these techniques offer over traditional LSM models on the FPGA. FPGA boards, known for their power efficiency and low latency, are well suited for time-critical machine learning applications. The thesis explores these onboard learning methods, reports the results of the suggested architectural changes, explains the trade-offs involved, and examines how the improved LSM model's accuracy can benefit different classification tasks. Additionally, we outline future research directions aimed at further enhancing the accuracy of these models. / Master of Science / Machine Learning (ML) and Artificial Intelligence (AI) have significantly shaped many applications in recent years. One domain experiencing a substantial positive impact is spectrum sensing in wireless communications, particularly in cognitive radios. Given spectrum scarcity and the underutilization of the RF spectrum, accurately classifying spectrum bands as occupied or unoccupied is crucial for enabling secondary users to use available resources efficiently. Liquid State Machines (LSMs), built from spiking neural networks resembling the human brain, prove effective for real-time monitoring in this classification task. By exploiting these temporal operations, LSM accelerators and processors enable higher-performance and more accurate spectrum monitoring than conventional spectrum sensing methods.
The architecture of the liquid state machine processor, together with its training and learning methods, plays a pivotal role in the performance of an LSM accelerator. This thesis delves into architectural enhancements aimed at spectrum classification using a liquid state machine accelerator implemented on an FPGA board. FPGA boards, known for their power efficiency and low latency, are well suited for time-critical machine learning applications. The thesis explores onboard learning methods, such as employing a targeted encoder and incorporating Triplet Spike-Timing-Dependent Plasticity (Triplet STDP) in the learning reservoir; these enhancements improve the accuracy of conventional LSM models. The discussion concludes by presenting the results of the architectural implementations, highlighting trade-offs, and pointing to avenues for further enhancing the accuracy of conventional liquid state machine-based models.
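For readers unfamiliar with triplet STDP, here is a minimal discrete-time sketch of one common formulation of the rule (the Pfister-Gerstner triplet model) applied to a single synaptic weight. The time constants, amplitudes and spike times are illustrative assumptions and are not taken from the thesis's FPGA implementation or its spectrum dataset.

```python
# A minimal sketch of triplet STDP on one synapse: pre- and postsynaptic spike
# "detector" traces decay exponentially, and each spike triggers a weight update
# that depends on the other side's traces. All parameter values are illustrative.
import numpy as np

def triplet_stdp(pre_spikes, post_spikes, steps, dt=1e-3, w0=0.5):
    tau_plus, tau_x, tau_minus, tau_y = 17e-3, 101e-3, 34e-3, 125e-3  # trace decay constants
    A2p, A3p, A2m, A3m = 5e-3, 6e-3, 7e-3, 2e-4                       # assumed amplitudes
    r1 = r2 = o1 = o2 = 0.0
    w = w0
    for t in range(steps):
        pre, post = t in pre_spikes, t in post_spikes
        if pre:                                   # depression triggered by a pre spike
            w -= o1 * (A2m + A3m * r2)
        if post:                                  # potentiation triggered by a post spike
            w += r1 * (A2p + A3p * o2)
        r1 += pre; r2 += pre; o1 += post; o2 += post   # update traces after the weight change
        r1 *= np.exp(-dt / tau_plus); r2 *= np.exp(-dt / tau_x)
        o1 *= np.exp(-dt / tau_minus); o2 *= np.exp(-dt / tau_y)
        w = min(max(w, 0.0), 1.0)                 # keep the weight bounded
    return w

# Pre spikes slightly leading post spikes should potentiate the synapse.
print(triplet_stdp(pre_spikes={10, 30, 50}, post_spikes={12, 32, 52}, steps=100))
```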
3. Codage des sons dans le nerf auditif en milieu bruyant : taux de décharge versus information temporel / Sound coding in the auditory nerve: rate vs timing. Huet, Antoine (14 December 2016)
Background: While hearing problems in noisy environments are the main complaint of hearing-impaired people, few studies have focused on cochlear sound-encoding mechanisms in such environments. By combining electrophysiological and behavioural experiments in the gerbil, we studied the sound-encoding strategies used by the auditory nerve in quiet and in noisy backgrounds. Material and methods: Single-unit recordings from gerbil auditory nerve fibers were performed in response to tone bursts presented at the characteristic frequency, in quiet and in the presence of a continuous broadband background noise. Behavioural audiometric thresholds were measured under the same acoustic conditions using a method based on inhibition of the acoustic startle reflex. Results: The single-unit data show that the cochlea uses two complementary encoding strategies. For low-frequency sounds (<3.6 kHz), the phase-locked response of apical fibers ensures a reliable and robust encoding of the auditory threshold. For higher-frequency sounds (>3.6 kHz), the cochlea relies on a rate-based strategy, which requires a greater functional diversity of fibers in the basal part of the cochlea. The behavioural thresholds measured under the same noise conditions overlap perfectly with the fibers' activation thresholds, validating the single-unit results. Conclusion: This work highlights the major role of phase-locked encoding in species that vocalize below 3 kHz (as humans do), especially in noisy backgrounds, whereas encoding of higher frequencies relies on discharge rate. This result emphasizes the difficulty of extrapolating results from murine models, which communicate at high frequencies (>4 kHz), to humans, whose speech lies between 0.3 and 3 kHz.
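To make the two coding variables contrasted above concrete, the sketch below computes a fiber's mean discharge rate and its vector strength, a standard measure of phase locking to a tone. The spike train and the `rate_and_vector_strength` helper are hypothetical illustrations, not recorded data or analysis code from the thesis.

```python
# A minimal sketch of the two candidate codes: mean discharge rate versus phase
# locking (vector strength) to a tone of frequency f_tone. Spike times are hypothetical.
import numpy as np

def rate_and_vector_strength(spike_times, f_tone, duration):
    spike_times = np.asarray(spike_times)
    rate = spike_times.size / duration                    # spikes per second
    if spike_times.size == 0:
        return rate, 0.0
    phases = 2.0 * np.pi * f_tone * spike_times           # phase of each spike within the tone cycle
    vs = np.abs(np.mean(np.exp(1j * phases)))             # 1 = perfect locking, 0 = no locking
    return rate, vs

# Hypothetical spike train tightly locked to a 500 Hz tone (one spike per 2 ms cycle).
spikes = np.arange(100) * 0.002 + np.random.default_rng(1).normal(0.0, 1e-4, 100)
rate, vs = rate_and_vector_strength(spikes, f_tone=500.0, duration=0.2)
print(f"rate = {rate:.0f} spikes/s, vector strength = {vs:.2f}")
```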
4. Optogenetic feedback control of neural activity. Newman, Jonathan P. (12 January 2015)
Optogenetics is a set of technologies that enable optically triggered gain or loss of function in genetically specified populations of cells. Optogenetic methods have revolutionized experimental neuroscience by allowing precise excitation or inhibition of firing in specified neuronal populations embedded within complex, heterogeneous tissue. Although optogenetic tools have greatly improved our ability to manipulate neural activity, they do not offer control of neural firing in the face of ongoing changes in network activity, plasticity, or sensory input. In this thesis, I develop a feedback control technology that automatically adjusts optical stimulation in real time to precisely control network activity levels. I describe the hardware and software tools, modes of optogenetic stimulation, and control algorithms required to achieve robust neural control over timescales ranging from seconds to days. I then demonstrate the scientific utility of these technologies in several experimental contexts. First, I investigate the role of connectivity in shaping the network encoding process using continuously varying optical stimulation. I show that synaptic connectivity linearizes the neuronal response, verifying previous theoretical predictions. Next, I use long-term optogenetic feedback control to show that reductions in excitatory neurotransmission directly trigger homeostatic increases in synaptic strength. This result opposes a large body of literature on the subject and has significant implications for memory formation and maintenance. The technology presented in this thesis greatly enhances the precision with which optical stimulation can control neural activity, and allows causally related variables within neural circuits to be studied independently.
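To illustrate the closed-loop principle described above, here is a minimal sketch in which a proportional-integral controller adjusts optical power until a toy firing-rate model reaches a target rate. The controller gains and the `toy_network_rate` stand-in are assumptions for illustration only; the thesis develops its own hardware, stimulation modes and control algorithms.

```python
# A minimal sketch of optogenetic feedback control: each cycle, estimate the network
# firing rate, compare it to a set point, and update the light command. The plant
# model and PI gains are illustrative assumptions, not the thesis's controllers.
import random

def toy_network_rate(light_power, baseline=2.0, gain=8.0):
    """Hypothetical stand-in for the measured firing rate (Hz) under optical drive."""
    return max(0.0, baseline + gain * light_power + random.gauss(0.0, 0.3))

def run_closed_loop(target_hz=10.0, kp=0.05, ki=0.5, cycles=200, dt=0.1):
    light, integral, rate = 0.0, 0.0, 0.0
    for _ in range(cycles):
        rate = toy_network_rate(light)                  # estimate rate from recent spikes
        error = target_hz - rate                        # deviation from the set point
        integral += error * dt                          # accumulated error
        light = max(0.0, kp * error + ki * integral)    # new LED/laser power command
    return rate, light

final_rate, final_light = run_closed_loop()
print(f"final rate {final_rate:.1f} Hz at light power {final_light:.2f}")
```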
5. Décodage neuronal dans le système auditif central à l'aide d'un modèle bilinéaire généralisé et de représentations spectro-temporelles bio-inspirées / Neural decoding in the central auditory system using bio-inspired spectro-temporal representations and a generalized bilinear model. Siahpoush, Shadi (January 2015)
In this project, Bayesian neural decoding is performed on neural activity recorded from the inferior colliculus of the guinea pig following the presentation of a vocalization. In particular, we study the impact of different encoding models and of different spectro-temporal representations of the input stimulus on the accuracy of the reconstruction. First, the voltages recorded from the inferior colliculus are read and spike trains are extracted by spike sorting. Then, an encoding model is fitted to the stimulus and the associated spike trains. Finally, neural decoding is performed on the pairs of stimuli and neural activities using maximum a posteriori (MAP) estimation to reconstruct the spectro-temporal representation of the acoustic stimulus. Two encoding models, a generalized linear model (GLM) and a generalized bilinear model (GBM), are compared, each with three spectro-temporal representations of the input stimuli: a spectrogram and two bio-inspired representations, a gammatone filter bank (GFB) and a spikegram. The parameters of the GLM and GBM, namely the spectro-temporal receptive field, the post-spike filter and the input non-linearity (GBM only), are fitted using maximum likelihood (ML) optimization. The signal-to-noise ratio between the reconstructed and original representations is used to evaluate the decoding, that is, the reconstruction accuracy. We show experimentally that reconstruction accuracy is better with the spikegram representation than with the spectrogram or GFB representations, and that using a GBM instead of a GLM further increases reconstruction accuracy. In fact, our results show that spikegram reconstruction with GBM fitting yields an SNR 3.3 dB higher than the standard decoding approach of reconstructing a spectrogram with GLM fitting.
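As a minimal illustration of the encoding stage described above, the sketch below fits a Poisson GLM with an exponential non-linearity to simulated spike counts by maximum likelihood. The simulated stimulus and spike counts are assumptions, and the post-spike filter, the bilinear input term of the GBM and the MAP decoding stage used in the thesis are omitted for brevity.

```python
# A minimal sketch of a Poisson GLM encoder: spike counts y_t ~ Poisson(exp(k.x_t + b)),
# with the filter k fitted by maximum likelihood. Stimulus and counts are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, D = 2000, 20                                   # time bins, stimulus dimensions
X = rng.normal(size=(T, D))                       # hypothetical spectro-temporal features
k_true = rng.normal(scale=0.3, size=D)            # ground-truth filter for the simulation
y = rng.poisson(np.exp(X @ k_true - 1.0))         # simulated spike counts

def neg_log_likelihood(params):
    k, b = params[:D], params[D]
    log_rate = X @ k + b
    return np.sum(np.exp(log_rate) - y * log_rate)    # Poisson NLL up to a constant

fit = minimize(neg_log_likelihood, x0=np.zeros(D + 1), method="L-BFGS-B")
k_hat = fit.x[:D]
print("correlation between fitted and true filter:", np.corrcoef(k_hat, k_true)[0, 1])
```

Reconstruction quality of the decoded representation is then commonly scored with a reconstruction SNR of the form 10*log10(sum(s**2) / sum((s - s_hat)**2)) between the original representation s and its reconstruction s_hat.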