1. Software tool for modelling coding and processing of information in auditory cortex of mice. Popelová, Markéta, January 2013.
Understanding of the processing and coding of information in the auditory cortex (AC) is still insufficient. For several reasons it would be useful to have a computational model of the AC, for example to explain or clarify the process of information coding in the AC. The first goal of this thesis was to create a software tool (the SUSNOMAC simulator) aimed at modelling the AC. The second goal was to design a computational model of the AC with the following properties: the Izhikevich neuron model, long-term plasticity in the form of spike-timing-dependent plasticity (STDP), a six-layer architecture, and parameterised neuron types, neuron densities, and probabilities of synapse formation. The proposed model was tested in dozens of experiments, with different parameter sets and at different sizes (up to 100,000 neurons with nearly 21 million synapses). The experiments were analysed and their results compared with observations of the real AC. We describe and analyse several interesting observations about the activity of the modelled network and the emergence of the tonotopic organisation of the AC.
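For orientation, the single-neuron dynamics referred to above can be sketched in a few lines. This is the standard Izhikevich (2003) model with regular-spiking parameters and an arbitrary constant input current; it is not code or a parameter set from the thesis.

```python
import numpy as np

# Izhikevich model: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
# with reset v -> c and u -> u + d whenever v reaches 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical neuron
dt, T = 0.5, 1000.0                  # time step and duration in ms
I = 10.0                             # constant input current (arbitrary choice)

v, u = -65.0, b * -65.0
spike_times = []
for step in range(int(T / dt)):
    t = step * dt
    if v >= 30.0:                    # spike cutoff and reset
        spike_times.append(t)
        v, u = c, u + d
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)

print(f"{len(spike_times)} spikes in {T:.0f} ms")
```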
2. Self organisation and hierarchical concept representation in networks of spiking neurons. Rumbell, Timothy, January 2013.
The aim of this work is to introduce modular processing mechanisms for cortical functions implemented in networks of spiking neurons. Neural maps are a feature of cortical processing found to be generic throughout sensory cortical areas, and self-organisation to the fundamental properties of input spike trains has been shown to be an important property of cortical organisation. Additionally, oscillatory behaviour, temporal coding of information, and learning through spike timing dependent plasticity are all frequently observed in the cortex. The traditional self-organising map (SOM) algorithm attempts to capture the computational properties of this cortical self-organisation in a neural network. As such, a cognitive module for a spiking SOM using oscillations, phasic coding and STDP has been implemented. This model is capable of mapping to distributions of input data in a manner consistent with the traditional SOM algorithm, and of categorising generic input data sets. Higher-level cortical processing areas appear to feature a hierarchical category structure that is founded on a feature-based object representation. The spiking SOM model is therefore extended to facilitate input patterns in the form of sets of binary feature-object relations, such as those seen in the field of formal concept analysis. It is demonstrated that this extended model is capable of learning to represent the hierarchical conceptual structure of an input data set using the existing learning scheme. Furthermore, manipulations of network parameters allow the level of hierarchy used for either learning or recall to be adjusted, and the network is capable of learning comparable representations when trained with incomplete input patterns. Together these two modules provide related approaches to the generation of both topographic mapping and hierarchical representation of input spaces that can be potentially combined and used as the basis for advanced spiking neuron models of the learning of complex representations.
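For comparison, a minimal sketch of the traditional (rate-based) SOM update that the spiking model is said to reproduce. The grid size, learning rate, and neighbourhood width below are arbitrary choices, and this is not the thesis's spiking/STDP implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = 8                                   # 8 x 8 map of units
dim = 3                                    # input dimensionality
W = rng.random((grid, grid, dim))          # weight vector of every map unit
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1)

def som_step(x, W, lr=0.1, sigma=2.0):
    """One traditional SOM update: move the best-matching unit and its
    neighbours (Gaussian neighbourhood on the grid) towards the input x."""
    dist = np.linalg.norm(W - x, axis=-1)          # distance of every unit to x
    bmu = np.unravel_index(np.argmin(dist), dist.shape)
    grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma**2))       # neighbourhood function
    return W + lr * h[..., None] * (x - W)

for _ in range(1000):                              # train on random inputs
    W = som_step(rng.random(dim), W)
```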
3. Topological properties of a network of spiking neurons in face image recognition. Shin, Joo-Heon, 24 March 2010.
We introduce a novel system for the recognition of partially occluded and rotated images. The system is based on a hierarchical network of integrate-and-fire spiking neurons with random synaptic connections and a novel organization process. The network generates integrated output sequences that are used for image classification. The network performed satisfactorily given an appropriate topology, i.e. a number of neurons and synaptic connections corresponding to the size of the input images. A comparison of the Synaptic Activity Plasticity Rule (SAPR) and Spike Timing Dependent Plasticity (STDP) rules, used to update connections between the neurons, indicated that SAPR gave better results and was therefore used throughout. Test results showed that the network performed better than Support Vector Machines. We also introduced a stopping criterion based on entropy, which significantly shortened the iterative process while only slightly affecting classification performance.
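The entropy-based stopping criterion itself is specific to this work. Purely as an illustration of the general idea, a hypothetical rule that halts the iterative process once the entropy of the output distribution stabilises might look like the following; the window, tolerance, and output encoding are assumptions, not the thesis's actual rule.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (bits) of an empirical distribution of output values."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def should_stop(entropy_history, window=5, tol=1e-3):
    """Hypothetical rule: stop when the entropy has changed by less than
    `tol` over the last `window` iterations (not the thesis's exact criterion)."""
    if len(entropy_history) < window:
        return False
    recent = entropy_history[-window:]
    return max(recent) - min(recent) < tol

# Example: entropy of per-class output spike counts after each iteration
history = []
for spike_counts in ([5, 1, 1], [6, 2, 1], [7, 2, 1], [7, 2, 1], [7, 2, 1], [7, 2, 1]):
    history.append(shannon_entropy(spike_counts))
    if should_stop(history):
        print(f"stopping after iteration {len(history)}")
        break
```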
4. Computational modeling of multisensory processing using a network of spiking neurons. Lim, Hun Ki, 4 May 2011.
Multisensory processing in the brain underlies a wide variety of perceptual phenomena, but little is known about the underlying mechanisms of how multisensory neurons are generated and how the neurons integrate sensory information from environmental events. This lack of knowledge is due to the difficulty of manipulating and testing the characteristics of multisensory processing in biological experiments. By using a computational model of multisensory processing, this research seeks to provide insight into those mechanisms. From a computational perspective, modeling of brain functions involves not only the computational model itself but also the conceptual definition of the brain functions, the analysis of correspondence between the model and the brain, and the generation of new biologically plausible insights and hypotheses. In this research, multisensory processing is conceptually defined as the effect of multisensory convergence on the generation of multisensory neurons and their integrated response products, i.e., multisensory integration. Thus, the computational model is an implementation of multisensory convergence and a simulation of the neural processing acting upon that convergence. Next, the most important step in the modeling is the analysis of how well the model represents its target, i.e., the brain function; this is also related to validation of the model. One intuitive and powerful way of validating the model is to apply methods standard in neuroscience to the results obtained from the model. In addition, methods such as statistical and graph-theoretical analyses are used to confirm the similarity between the model and the brain. This research takes both approaches to provide analyses from many different perspectives. Finally, the model and its simulations provide insight into multisensory processing, generating plausible hypotheses that will need to be confirmed by real experimentation.
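One standard way such simulation results are analysed in the multisensory-integration literature is the enhancement index, which compares the combined-modality response with the strongest single-modality response. The sketch below shows this generic measure; it is not necessarily the exact analysis used in the thesis.

```python
def multisensory_enhancement(combined, unisensory):
    """Percentage enhancement of the multisensory response over the best
    unisensory response: 100 * (CM - max(SM)) / max(SM).
    Positive values indicate enhancement, negative values depression."""
    best_single = max(unisensory)
    return 100.0 * (combined - best_single) / best_single

# Example: 12 spikes/trial to the combined stimulus vs 5 and 7 to each modality alone
print(multisensory_enhancement(12.0, [5.0, 7.0]))   # ~71.4 % enhancement
```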
5. Aspects of learning within networks of spiking neurons. Carnell, Andrew Robert, January 2008.
Spiking neural networks have, in recent years, become a popular tool for investigating the properties and computational performance of large, massively connected networks of neurons. Equally interesting is the investigation of the potential computational power of individual spiking neurons. An overview is provided of current and relevant research into the Liquid State Machine, biologically inspired artificial STDP learning mechanisms, and aspects of the computational power of artificial recurrent networks of spiking neurons. First, it is shown that, using simple structures of spiking Leaky Integrate and Fire (LIF) neurons, a network n(P) can be built to perform any program P that can be performed by a general parallel programming language. Next, a form of STDP learning with normalisation is developed, referred to as STDP + N learning. The effects of applying this STDP + N learning within recurrently connected networks of neurons are then investigated. It is shown experimentally that, in very specific circumstances, Anti-Hebbian and Hebbian STDP learning may be considered approximately equivalent processes. A metric is then developed that can be used to measure the distance between any two spike trains. The metric is then used, along with STDP + N learning, in an experiment examining the capacity of a single spiking neuron that receives multiple input spike trains to simultaneously learn many temporally precise input/output spike train associations. The STDP + N learning is further modified for use in recurrent networks of spiking neurons, giving the STDP + N Type 2 learning methodology. An experiment is devised which demonstrates that the Type 2 method of applying learning to the synapses of a recurrent network (effectively a randomly shifting locality of learning) can enable the network to learn firing patterns that the typical application of learning is unable to learn. The resulting networks could, in theory, be used to create the simple structures discussed in the first chapter of original work.
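The distance metric developed in the thesis is the author's own. As an illustration of what a spike-train distance can look like, here is a sketch of the well-known van Rossum distance (exponential filtering of each train followed by an L2 difference); the time constant and discretisation are arbitrary.

```python
import numpy as np

def van_rossum_distance(train_a, train_b, tau=10.0, dt=0.1, t_max=200.0):
    """Van Rossum distance between two spike trains (spike times in ms).
    Each train is convolved with a causal exponential kernel exp(-t/tau),
    and the distance is the L2 norm of the difference of the filtered traces."""
    t = np.arange(0.0, t_max, dt)
    def filtered(train):
        trace = np.zeros_like(t)
        for s in train:
            trace += np.where(t >= s, np.exp(-(t - s) / tau), 0.0)
        return trace
    diff = filtered(train_a) - filtered(train_b)
    return np.sqrt((diff**2).sum() * dt / tau)

print(van_rossum_distance([10, 50, 120], [12, 55, 118]))
print(van_rossum_distance([10, 50, 120], [10, 50, 120]))   # identical trains -> 0
```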
6. Functional relevance of inhibitory and disinhibitory circuits in signal propagation in recurrent neuronal networks. Bihun, Marzena Maria, January 2018.
Cell assemblies are considered to be physiological as well as functional units in the brain. Repetitive and stereotypical sequential activation of many neurons has been observed, but the mechanisms underlying it are not well understood. Feedforward networks such as synfire chains, in which pools of excitatory neurons are unidirectionally connected and facilitate signal transmission in a cascade-like fashion, were proposed to model such sequential activity. When embedded in a recurrent network, these were shown to destabilise the whole network's activity, challenging the suitability of the model. Here, we investigate a feedforward chain of excitatory pools enriched by inhibitory pools that provide disynaptic feedforward inhibition. We show that when embedded in a recurrent network of spiking neurons, such an augmented chain is capable of robust signal propagation. We then investigate the influence of overlapping two chains on signal transmission as well as on the stability of the host network. While shared excitatory pools turn out to be detrimental to global stability, inhibitory overlap implicitly realises the motif of lateral inhibition, which, if moderate, maintains stability but, if substantial, silences the whole network's activity, including the signal. The addition of a disinhibitory pathway along the chain proves to rescue signal transmission by transforming a strong inhibitory wave into a disinhibitory one, which specifically guards the excitatory pools from receiving excessive inhibition and thereby allows them to remain responsive to the forthcoming activation. Disinhibitory circuits not only improve signal transmission, but can also control it via a gating mechanism. We demonstrate that by manipulating the firing threshold of the disinhibitory neurons, signal transmission can be enabled or completely blocked. This mechanism corresponds to cholinergic modulation, which has been shown to act through volume as well as phasic transmission and to target different classes of neurons to varying degrees. Furthermore, we show that modulation of the feedforward inhibition circuit can promote spontaneous replay in the absence of external inputs. This mechanism, however, also tends to cause global instabilities. Overall, these results underscore the importance of inhibitory neuron populations in controlling signal propagation in cell assemblies as well as global stability. Specific inhibitory circuits, when controlled by neuromodulatory systems, can robustly guide or block the signals and invoke replay. This adds to the evidence that the population of interneurons is diverse and is best categorised by neurons' specific circuit functions as well as their responsiveness to neuromodulators.
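A rough sketch of how such an augmented chain's connectivity can be laid out, with each excitatory pool driving both the next excitatory pool and an inhibitory pool that projects onto it (disynaptic feedforward inhibition). Pool sizes and connection probabilities are arbitrary illustrative values, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pools, exc_per_pool, inh_per_pool = 5, 100, 25   # arbitrary sizes
p_ff = 0.3                                         # feedforward connection probability

# Index neurons: excitatory pools first, then inhibitory pools.
n_exc = n_pools * exc_per_pool
n_inh = n_pools * inh_per_pool
C = np.zeros((n_exc + n_inh, n_exc + n_inh), dtype=bool)   # C[i, j]: synapse i -> j

def exc(pool):  # indices of excitatory pool `pool`
    return np.arange(pool * exc_per_pool, (pool + 1) * exc_per_pool)

def inh(pool):  # indices of inhibitory pool `pool`
    return n_exc + np.arange(pool * inh_per_pool, (pool + 1) * inh_per_pool)

for k in range(n_pools - 1):
    src, nxt_e, nxt_i = exc(k), exc(k + 1), inh(k + 1)
    # excitatory feedforward: pool k -> excitatory pool k+1
    C[np.ix_(src, nxt_e)] = rng.random((len(src), len(nxt_e))) < p_ff
    # disynaptic feedforward inhibition: pool k -> inhibitory pool k+1 -> excitatory pool k+1
    C[np.ix_(src, nxt_i)] = rng.random((len(src), len(nxt_i))) < p_ff
    C[np.ix_(nxt_i, nxt_e)] = rng.random((len(nxt_i), len(nxt_e))) < p_ff

print(f"{C.sum()} synapses in the augmented chain")
```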
7. Detection and analysis of polychronous groups emerging in spiking neural network models. Šťastný, Bořek, January 2018.
How is information represented in real neural networks? Experimental results continue to provide evidence for the presence of spiking patterns in network activity. The concept of polychronous groups attempts to explain these results by proposing that neurons group together to fire in non-synchronous but precisely time-locked chains. Several methods for detecting such groups have been proposed; however, they all employ extensive searching in the network structure, which limits their usefulness. We present a new method that detects polychronous groups directly by observing spiking dependencies in network activity. Our method is computationally more efficient, at the cost of some detection selectivity, and allows the analysis of polychronous groups emerging in noisy networks. Our results support the existence of structure-forming properties of spontaneous activity in neural networks.
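The detection method itself is the thesis's contribution. As a loose illustration of the underlying idea of reading time-locked dependencies directly from activity, here is a toy sketch that counts repeated firing lags between neuron pairs; this is a simplification, not the actual algorithm.

```python
from collections import defaultdict

def pairwise_lag_counts(spikes, max_lag=20):
    """Count, for every ordered neuron pair (i, j), how often a spike of j
    follows a spike of i at each integer lag in (0, max_lag] ms.
    `spikes` is a list of (time_ms, neuron_id) events with integer times."""
    by_neuron = defaultdict(list)
    for t, n in spikes:
        by_neuron[n].append(t)
    counts = defaultdict(lambda: defaultdict(int))
    for i, times_i in by_neuron.items():
        for j, times_j in by_neuron.items():
            if i == j:
                continue
            for ti in times_i:
                for tj in times_j:
                    lag = tj - ti
                    if 0 < lag <= max_lag:
                        counts[(i, j)][lag] += 1
    return counts

# Toy activity: neuron 2 tends to fire 3 ms after neuron 1, neuron 3 about 5 ms after neuron 2
spikes = [(10, 1), (13, 2), (18, 3), (40, 1), (43, 2), (48, 3), (70, 1), (73, 2)]
for pair, lags in pairwise_lag_counts(spikes).items():
    lag, n = max(lags.items(), key=lambda kv: kv[1])
    if n >= 2:
        print(f"neuron {pair[0]} -> neuron {pair[1]}: repeated lag {lag} ms ({n} times)")
```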
8. Frequency transfer of spiking neuron models. Gewers, Felipe Lucas, 25 February 2019.
This work concerns the frequency transfer of spiking neurons, specifically leaky integrate-and-fire neurons and Izhikevich neurons. Through analytical treatment and systematic numerical simulations, the gain function and the stationary and dynamic frequency transfer of the adopted neuron models are obtained for several values of the model parameters. Multiple fits are then made to the resulting curves, and the estimated coefficients are presented. Based on these data, several characteristics of the frequency-transfer relations are obtained, along with how their properties vary with the main parameters of the adopted neuron and synapse models. Several interesting results are presented, including evidence that the integrate-and-fire neuron's gain function can behave quite similarly to the Izhikevich neuron's gain function and stationary transfer, depending on the adopted parameters. We also obtain a division of the integrate-and-fire model's parameter plane according to the linearity of the dynamic frequency transfer. It is further verified that the thresholds of direct-current intensity and of presynaptic spike frequency of an Izhikevich neuron are determined only by the parameter b, in the usual parameter range. In addition, the distinct synapse models considered tend not to alter the shape of the stationary frequency transfer of an Izhikevich neuron.
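For reference, the stationary gain function (f-I curve) of a leaky integrate-and-fire neuron driven by a constant current has a standard closed form. The sketch below uses illustrative parameter values, not those fitted in the thesis.

```python
import numpy as np

def lif_gain(I, tau_m=20.0, R=1.0, v_th=15.0, v_reset=0.0, t_ref=2.0):
    """Stationary firing rate (in 1/ms, since times are in ms) of a leaky
    integrate-and-fire neuron with constant input current I:
        f(I) = 1 / (t_ref + tau_m * ln((R*I - v_reset) / (R*I - v_th)))
    for R*I > v_th, and 0 otherwise."""
    I = np.asarray(I, dtype=float)
    rate = np.zeros_like(I)
    supra = R * I > v_th
    rate[supra] = 1.0 / (t_ref + tau_m * np.log((R * I[supra] - v_reset) / (R * I[supra] - v_th)))
    return rate

currents = np.linspace(0.0, 60.0, 7)
print(np.round(lif_gain(currents) * 1000.0, 1))   # firing rates in Hz
```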