21 |
STDP Implementation Using CBRAM Devices in CMOS / January 2015
abstract: Alternative computing approaches based on neural systems built from nanoscale devices are of increasing interest because of the massive parallelism and scalability they offer. Neural computation systems also provide defect detection and self-healing capabilities. Traditional Von Neumann architectures, which separate the memory and computation units, inherently suffer from the Von Neumann bottleneck, whereby the processor is limited by the rate at which it can fetch instructions and data from memory. The clock-driven Von Neumann computer survived because of technology scaling. However, as transistor scaling slowly comes to an end, with channel lengths reaching a few nanometers, processor speeds are beginning to saturate. This has led to the development of multi-core systems that process data in parallel, with each core still based on the Von Neumann architecture.
The human brain has long been a mystery to scientists. For certain computations, such as pattern recognition, the brain outperforms modern-day supercomputers while occupying far less space and consuming a fraction of the power. Neuromorphic computing aims to mimic biological neural systems in silicon to exploit the massive parallelism that neural systems offer. Neuromorphic systems are event-driven rather than clock-driven. One of the issues facing neuromorphic computing has been the area occupied by these circuits. With recent developments in nanotechnology, nanoscale memristive devices have been developed and offer a promising solution: memristor-based synapses can be up to three times smaller than Complementary Metal Oxide Semiconductor (CMOS) based synapses.
In this thesis, the Programmable Metallization Cell (a memristive device) is used to demonstrate a learning algorithm known as Spike Timing Dependent Plasticity (STDP). This learning algorithm is an extension of Hebb's learning rule, in which a synapse's weight is altered by the relative timing of the spikes across it. The memristor's conductance serves as the synaptic weight, and CMOS oscillator-based circuits are used to produce spikes that modulate the memristor conductance by firing with different phase differences. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2015
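For readers unfamiliar with the rule, the sketch below shows the standard pair-based STDP weight update that such timing-modulated conductance changes approximate; the amplitudes and time constants are illustrative placeholders, not values from the thesis.

```python
import numpy as np

# Pair-based STDP: the weight change depends only on the relative timing
# dt = t_post - t_pre of a pre/post spike pair. Constants are illustrative.
A_PLUS, A_MINUS = 0.010, 0.012    # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # timing windows (ms)

def stdp_dw(dt_ms: float) -> float:
    """Weight (conductance) change for one spike pair."""
    if dt_ms > 0:  # pre fires before post: strengthen (potentiation)
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    # post fires before pre: weaken (depression)
    return -A_MINUS * np.exp(dt_ms / TAU_MINUS)

# A pre-spike arriving 5 ms before the post-spike strengthens the synapse;
# the reverse ordering weakens it.
print(stdp_dw(5.0), stdp_dw(-5.0))
```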
22 |
Synaptic plasticity and intrinsic homeostasis in an in silico neural network: global and stimulus-response properties / Susin, Eduarda Demori, January 2016
Recently it has been observed experimentally (Johnson et al., 2010) that organotypic cortical slices of rat are capable of completing spatio-temporal patterns after training. Although it is speculated that synaptic and homeostatic plasticity mechanisms underlie this phenomenon, there is still no detailed explanation of it. In order to propose a clear and consistent explanation for the mechanisms that shape the network's response to stimuli as a whole, we study this phenomenon through a network of integrate-and-fire neurons endowed with intrinsic homeostasis and spike-timing-dependent plasticity mechanisms. The constructed system was explored to determine under which conditions the network could behave like the real system, and was trained in a way similar to the experimental protocol of Johnson et al. (2010).
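As a rough illustration of the two mechanisms the abstract combines, here is a minimal leaky integrate-and-fire step with an intrinsic-homeostasis rule that nudges each neuron's threshold toward a target firing rate. All parameters and the specific threshold rule are assumptions for illustration, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 100, 1000, 1.0        # neurons, time steps, step size (ms)
tau_m, v_reset = 20.0, 0.0       # membrane time constant, reset potential
theta = np.full(N, 1.0)          # adaptive firing thresholds
v = np.zeros(N)                  # membrane potentials
rate = np.zeros(N)               # running firing-rate estimates
TARGET, ETA, TAU_R = 0.02, 1e-3, 100.0  # target rate, learning rate, averaging

for _ in range(T):
    v += dt * (-v / tau_m) + rng.normal(0.05, 0.3, N)  # leak + noisy drive
    spikes = (v >= theta).astype(float)
    v[spikes > 0] = v_reset
    rate += (dt / TAU_R) * (spikes - rate)   # low-pass filter of activity
    # Intrinsic homeostasis: raise the threshold of neurons firing above
    # target and lower it for those firing below, stabilizing activity.
    theta += ETA * (rate - TARGET)
```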
23 |
Training Spiking Neural Networks for Energy-Efficient Neuromorphic Computing / Gopalakrishnan Srinivasan, 06 December 2019
Spiking Neural Networks (SNNs), widely known as the third generation of artificial neural networks, offer a promising route toward the brain's processing capability for cognitive tasks. Taking a more biologically realistic perspective on input processing, SNNs perform neural computations using spikes in an event-driven manner. This asynchronous spike-based computing capability can be exploited to achieve improved energy efficiency in neuromorphic hardware. Furthermore, because of their spike-based processing, SNNs can be trained in an unsupervised manner using Spike Timing Dependent Plasticity (STDP). STDP-based learning rules modulate the strength of a multi-bit synapse based on the correlation between the spike times of the input and output neurons. In order to achieve plasticity with compressed synaptic memory, a stochastic binary synapse is proposed in which spike timing information is embedded in the synaptic switching probability. A bio-plausible probabilistic-STDP learning rule consistent with Hebbian learning theory is proposed to train networks of binary as well as quaternary synapses. In addition, a hybrid probabilistic-STDP learning rule incorporating Hebbian and anti-Hebbian mechanisms is proposed to enhance the learnt representations of the stochastic SNN. The efficacy of the presented learning rules is demonstrated for feed-forward fully-connected and residual convolutional SNNs on the MNIST and CIFAR-10 datasets.

STDP-based learning is limited to shallow SNNs (fewer than 5 layers), yielding lower than acceptable accuracy on complex datasets. This thesis proposes a block-wise complexity-aware training algorithm, referred to as BlocTrain, for incrementally training deep SNNs with reduced memory requirements using spike-based backpropagation through time. The deep network is divided into blocks, where each block consists of a few convolutional layers followed by an auxiliary classifier. The blocks are trained sequentially using local errors from their respective auxiliary classifiers. Furthermore, the deeper blocks are trained only on the hard classes, determined using the class-wise accuracy obtained from the classifiers of previously trained blocks. Thus, BlocTrain improves training time and computational efficiency with increasing block depth. Higher computational efficiency is also obtained during inference by exiting early for easy class instances and activating the deeper blocks only for hard class instances. The ability of BlocTrain to provide improved accuracy as well as higher training and inference efficiency compared to end-to-end approaches is demonstrated for deep SNNs (up to 11 layers) on the CIFAR-10 and CIFAR-100 datasets.

Feed-forward SNNs are typically used for static image recognition, while recurrent Liquid State Machines (LSMs) have been shown to encode time-varying speech data. Liquid-SNN, consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked reservoir of spiking neurons (the liquid), is proposed for unsupervised speech and image recognition. The strengths of the synapses interconnecting the input and the liquid are trained using STDP, which makes it possible to infer the class of a test pattern without the readout layer typical of standard LSMs. Liquid-SNN suffers from scalability challenges because accuracy can primarily be enhanced only by increasing the number of neurons. SpiLinC, composed of an ensemble of multiple liquids, each trained on a unique input segment, is proposed as a scalable model for improved accuracy. SpiLinC recognizes a test pattern by combining the spiking activity of the individual liquids, each of which identifies unique input features. As a result, SpiLinC offers accuracy comparable to Liquid-SNN with added synaptic sparsity and faster training convergence, validated on the digit subset of the TI46 speech corpus and the MNIST dataset.
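To make the compressed-synapse idea concrete, here is a hedged sketch of a probabilistic STDP update for a binary synapse: the spike-timing window sets a switching probability instead of an analog weight change. The peak probability and time constant are assumed values for illustration, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
P_MAX, TAU = 0.1, 20.0  # peak switching probability, timing window (ms)

def probabilistic_stdp(weight: int, dt_ms: float) -> int:
    """Stochastically flip a binary synapse based on spike timing.

    dt_ms = t_post - t_pre. Tighter timing gives a higher switching
    probability; the sign of dt decides the direction of the flip.
    """
    p_switch = P_MAX * np.exp(-abs(dt_ms) / TAU)
    if rng.random() < p_switch:
        return 1 if dt_ms > 0 else 0   # potentiate (ON) or depress (OFF)
    return weight                      # no switch this time

# Averaged over many pairings, the synapse's ON-probability traces out
# the familiar exponential STDP window.
print([probabilistic_stdp(0, 5.0) for _ in range(10)])
```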
24 |
Optimizing Reservoir Computing Architecture for Dynamic Spectrum Sensing Applications / Sharma, Gauri, 25 April 2024
Spectrum sensing in wireless communications serves as a crucial binary classification tool in cognitive radios, facilitating the detection of available radio spectrum for secondary users, especially in scenarios with high Signal-to-Noise Ratio (SNR). Liquid State Machines (LSMs), which emulate spiking neural networks like those in the human brain, prove highly effective for real-time data monitoring in such temporal tasks. The inherent advantages of LSM-based recurrent neural networks, such as low complexity, high power efficiency, and accuracy, surpass those of traditional deep learning and conventional spectrum sensing methods. The architecture of the liquid state machine processor and its training methods are crucial to the performance of an LSM accelerator. This thesis presents one such LSM-based accelerator and explores novel architectural improvements for LSM hardware. By adopting triplet-based Spike-Timing-Dependent Plasticity (STDP) and various spike encoding schemes on the spectrum dataset within the LSM, we investigate the advantages these proposed techniques offer over traditional LSM models on the FPGA. FPGA boards, known for their power efficiency and low latency, are well-suited for time-critical machine learning applications. The thesis explores these novel onboard learning methods, shares the results of the suggested architectural changes, explains the trade-offs involved, and examines how the improved LSM model's accuracy can benefit different classification tasks. Additionally, we outline future research directions aimed at further enhancing the accuracy of these models. / Master of Science / Machine Learning (ML) and Artificial Intelligence (AI) have significantly shaped various applications in recent years. One notable domain experiencing substantial positive impact is spectrum sensing within wireless communications, particularly in cognitive radios. In light of spectrum scarcity and the underutilization of RF spectrum, accurately classifying spectrum as occupied or unoccupied becomes crucial for enabling secondary users to efficiently utilize available resources. Liquid State Machines (LSMs), made of spiking neural networks resembling those of the human brain, prove effective for real-time data monitoring in this classification task. By exploiting these temporal operations, LSM accelerators and processors deliver higher performance and more accurate spectrum monitoring than conventional spectrum sensing methods.
The architecture of the liquid state machine processor, together with its training and learning methods, plays a pivotal role in the performance of an LSM accelerator. This thesis delves into various architectural enhancements aimed at spectrum classification using a liquid state machine accelerator implemented on an FPGA board. FPGA boards, known for their power efficiency and low latency, are well-suited for time-critical machine learning applications. The thesis explores onboard learning methods, such as employing a targeted encoder and incorporating Triplet Spike Timing-Dependent Plasticity (Triplet STDP) in the learning reservoir. These enhancements yield improvements in accuracy over conventional LSM models. The discussion concludes by presenting the results of the architectural implementations, highlighting trade-offs, and shedding light on avenues for further enhancing the accuracy of conventional liquid state machine-based models.
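For reference, the triplet STDP rule this work adopts (in the general form introduced by Pfister and Gerstner, 2006) augments the pair rule with a second, slower trace per neuron, letting potentiation depend on triplets of spikes. The sketch below is a generic event-driven form with placeholder parameters; the thesis's hardware mapping will differ.

```python
import numpy as np

# Event-driven triplet STDP (after Pfister & Gerstner, 2006). The time
# constants and amplitudes below are placeholders, not the thesis's values.
TAU_R1, TAU_R2, TAU_O1, TAU_O2 = 16.8, 101.0, 33.7, 125.0  # ms
A2P, A3P, A2M, A3M = 5e-3, 6e-3, 7e-3, 2e-4

r1 = r2 = o1 = o2 = 0.0   # fast/slow pre traces (r) and post traces (o)
w = 0.5                   # synaptic weight

def _decay(dt):
    """Exponentially decay all four traces over dt milliseconds."""
    global r1, r2, o1, o2
    r1 *= np.exp(-dt / TAU_R1); r2 *= np.exp(-dt / TAU_R2)
    o1 *= np.exp(-dt / TAU_O1); o2 *= np.exp(-dt / TAU_O2)

def on_pre_spike(dt):
    """dt: time since the previous spike event (ms)."""
    global r1, r2, w
    _decay(dt)
    w -= o1 * (A2M + A3M * r2)  # depression, gated by recent post activity
    r1 += 1.0; r2 += 1.0        # traces incremented after the weight change

def on_post_spike(dt):
    global o1, o2, w
    _decay(dt)
    w += r1 * (A2P + A3P * o2)  # potentiation, boosted by a prior post spike
    o1 += 1.0; o2 += 1.0
```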
25 |
Emergence of oriented neuromimetic circuits under the effect of pruning associated with spike-timing-dependent plasticity (STDP) / Iglesias, Javier, 22 August 2005
Massive pruning of synapses after excessive growth is a normal phase of mammalian brain maturation. Pruning begins shortly before birth and is completed before the age of sexual maturity. The triggering factors capable of inducing synaptic pruning may be linked to dynamic processes that depend on the relative timing of action potentials. Spike-timing-dependent plasticity (STDP) corresponds to a change in synaptic strength based on the order of pre- and post-synaptic discharges. The relationship between synaptic efficacy and synaptic pruning suggests that the weakest synapses could be modified and removed by means of a competitive "learning" rule. This plasticity rule could strengthen the connections among neurons that belong to a cell assembly characterized by recurrent firing patterns. Conversely, connections that are not recurrently activated could see their efficacy decrease and eventually be eliminated. The main goal of our work is to determine under which conditions such assemblies could emerge from a network of integrate-and-fire units connected at random on the surface of a two-dimensional lattice, receiving both noise and inputs organized in the temporal and spatial dimensions. The originality of our study lies in the relatively large size of the network, 10,000 units, in the duration of the simulations, 1 million time units, and in the use of an original STDP rule compatible with a hardware implementation.
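A minimal sketch of the competition-then-pruning mechanism described above, assuming a simple fixed threshold below which a synapse is permanently removed; the thesis's actual rule and parameters are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 100, 100
w = rng.uniform(0.0, 1.0, (n_pre, n_post))  # initial over-grown connectivity
alive = np.ones_like(w, dtype=bool)         # synapse-exists mask
PRUNE_BELOW = 0.05                          # illustrative pruning threshold

def apply_stdp_and_prune(dw: np.ndarray) -> None:
    """Apply an STDP-driven weight change, then prune weak synapses.

    Synapses pushed below the threshold are removed for good, mimicking
    over-growth followed by activity-dependent pruning.
    """
    global w, alive
    w = np.clip(w + dw, 0.0, 1.0)
    alive &= w >= PRUNE_BELOW   # once pruned, a synapse never returns
    w *= alive

# Example: random weight drifts gradually thin out the network.
for _ in range(100):
    apply_stdp_and_prune(rng.normal(0.0, 0.01, w.shape))
print("surviving synapses:", int(alive.sum()))
```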
26 |
How to decipher the spike code of Vision? A study of the parallel, asynchronous and sparse flow in ultra-rapid visual processing / Perrinet, Laurent, 07 February 2003
The framework of this work is the study of neuromimetic models of the parallel, asynchronous coding of visual information (as revealed by ultra-rapid processing tasks) by transforming it into a wave of elementary events of decreasing importance. We first ground the mechanisms of this code in biological processes at the scale of the neuron and the synapse. In particular, synaptic plasticity can induce the unsupervised extraction of coherent information from the stream of neural spikes. Coding by the latency of the first discharge makes it possible to define a spike code in the optic nerve using a multiscale architecture. We extended this approach with an "ecological" method that exploits the regularities of the coefficients over natural images in order to quantize them by the arrival rank of the neural spikes. This rank-order code is based on a hierarchical, feed-forward architecture that stands out not only for its simplicity but also for the richness of its mathematical results and its computational performance. Finally, we addressed the need for an efficient model of Vision by founding a theory of over-complete spike representation of the image. This formalization then leads to a sparse spike coding strategy through the definition of lateral interactions. The strategy is extended to a general model of an adaptive cortical column, allowing dictionaries of representations to emerge, and is particularly suited to the construction of a saliency map. These techniques give rise to new tools for image processing and active vision adapted to distributed computing architectures.
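As an illustration of rank-order coding (the idea that only the firing order of the first spikes carries the information), here is a small sketch; the modulation factor is an assumed value, and the thesis's actual quantization differs in detail.

```python
import numpy as np

def rank_order_code(drive: np.ndarray, mod: float = 0.9) -> np.ndarray:
    """Quantize analog drives by the rank of each unit's first spike.

    The most strongly driven unit fires first and is decoded at full
    value; each subsequent rank is attenuated by a fixed factor `mod`,
    so exact latencies are discarded and only the order matters.
    """
    order = np.argsort(-np.abs(drive))   # earliest spike = largest drive
    decoded = np.zeros_like(drive)
    for rank, idx in enumerate(order):
        decoded[idx] = np.sign(drive[idx]) * mod ** rank
    return decoded

x = np.array([0.1, -2.0, 0.7, 1.5])
print(rank_order_code(x))   # [0.729, -1.0, 0.81, 0.9]: order preserved
```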
27 |
Learning in spiking neural networks / Davies, Sergio, January 2013
Artificial neural network simulation is a research field that attracts the interest of researchers from disciplines ranging from biology to computer science. The final objectives are to understand the mechanisms underlying the human brain, to reproduce them in an artificial environment, and to learn how drugs interact with them. Multiple neural models have been proposed, each with its peculiarities, from the very complex and biologically realistic Hodgkin-Huxley neuron model to the very simple 'leaky integrate-and-fire' neuron. However, despite numerous attempts to understand the learning behaviour of synapses, few models have been proposed. Spike-Timing-Dependent Plasticity (STDP) is one of the most relevant and biologically plausible models, and some variants (such as the triplet-based STDP rule) have been proposed to accommodate all biological observations. The research presented in this thesis focuses on a novel learning rule, based on the spike-pair STDP algorithm, which takes a statistical approach with the advantage of being less computationally expensive than the standard STDP rule, and is therefore suitable for implementation on stand-alone computational units. The environment in which this research has been carried out is the SpiNNaker project, which aims to provide a massively parallel computational substrate for neural simulation. To support this research, two other topics have been addressed: the first is a way to inject spikes into the SpiNNaker system through a non-real-time channel, such as the Ethernet link, while synchronising with the timing of the SpiNNaker system; the second is a way to route spikes in the SpiNNaker system based on populations of neurons. The three topics are presented in sequence after a brief introduction to the SpiNNaker project. Future work could include structural plasticity (also known as synaptic rewiring): during the simulation of neural networks on the SpiNNaker system, axons, dendrites and synapses may be grown or pruned according to biological observations.
28 |
Homeostatic Plasticity in Input-Driven Dynamical Systems / Toutounji, Hazem, 26 February 2015
The degree to which a species can adapt to the demands of its changing environment defines how well it can exploit the resources of new ecological niches. Since the nervous system is the seat of an organism's behavior, studying adaptation starts there. The nervous system adapts through neuronal plasticity, which may be considered the brain's reaction to environmental perturbations. In a natural setting, these perturbations are always changing. As such, a full understanding of how the brain functions requires studying neuronal plasticity under temporally varying stimulation conditions, i.e., studying the role of plasticity in carrying out spatiotemporal computations. Only then can we fully exploit the potential of neural information processing to build powerful brain-inspired adaptive technologies. Here, we focus on homeostatic plasticity, in which certain properties of the neural machinery are regulated so that they remain within a functionally and metabolically desirable range. Our main goal is to illustrate how homeostatic plasticity, interacting with associative mechanisms, is functionally relevant for spatiotemporal computations. The thesis consists of three studies that share two features: (1) homeostatic and synaptic plasticity act on a dynamical system, such as a recurrent neural network; (2) the dynamical system is nonautonomous, that is, it is subject to temporally varying stimulation. In the first study, we develop a rigorous theory of spatiotemporal representations and computations and of the role of plasticity. Within this theory, we show that homeostatic plasticity increases the capacity of the network to encode spatiotemporal patterns, and that synaptic plasticity associates these patterns to network states. The second study applies the insights from the first to the single-node delay-coupled reservoir computing architecture, or DCR. The DCR's activity is sampled at several computational units, and we derive a homeostatic plasticity rule acting on these units. We show analytically that the rule balances the two processes necessary for spatiotemporal computations identified in the first study, and that as a result the computational power of the DCR significantly increases. The third study considers minimal neural control of robots. We show that recurrent neural control with homeostatic synaptic dynamics endows robots with memory, and demonstrate that this memory is necessary for generating behaviors such as obstacle avoidance in a wheel-driven robot and stable hexapod locomotion.
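As one concrete example of a homeostatic rule of this kind, the sketch below implements intrinsic plasticity in the style of Triesch (2005), adapting a sigmoid unit's gain and bias so its output distribution approaches an exponential with a target mean. This is a standard rule offered for illustration, not the specific rule derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
a, b = 1.0, 0.0        # gain and bias of a sigmoid unit
ETA, MU = 0.01, 0.2    # learning rate and target mean activity

def ip_step(x: float) -> float:
    """One intrinsic-plasticity update (after Triesch, 2005).

    Gradient descent on the KL divergence between the unit's output
    distribution and an exponential distribution with mean MU.
    """
    global a, b
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))
    db = ETA * (1.0 - (2.0 + 1.0 / MU) * y + y * y / MU)
    a += ETA / a + db * x   # gain update shares the bias gradient term
    b += db
    return y

# Drive the unit with noisy input: a and b drift until the mean output
# settles near the target MU.
ys = [ip_step(rng.normal(0.0, 1.0)) for _ in range(20000)]
print(np.mean(ys[-5000:]))  # approximately MU
```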