101

Silicon neural networks : implementation of cortical cells to improve the artificial-biological hybrid technique / Réseau de neurones in silico : contribution au développement de la technique hybride pour les réseaux corticaux

Grassia, Filippo Giovanni 07 January 2013 (has links)
This work was supported by the European FACETS-ITN project. Within the framework of this project, we contribute to the simulation of cortical cell types (employing experimental electrophysiological data of these cells as references), using a specific VLSI neural circuit to simulate, at the single-cell level, the models studied as references in the FACETS project. The real-time intrinsic properties of the neuromorphic circuits, which precisely compute neuron conductance-based models, allow a systematic and detailed exploration of the models, while the physical and analog aspect of the simulations, as opposed to software simulation, provides inputs for the development of the neural hardware at the network level. The second goal of this thesis is to contribute to the design of a mixed hardware-software platform (PAX), specifically designed to simulate spiking neural networks. The tasks performed during this thesis project included: 1) the methods used to obtain the appropriate parameter sets of the cortical neuron models that can be implemented in our analog neuromimetic chip (the parameter extraction step was validated using a bifurcation analysis showing that the simplified Hodgkin-Huxley model implemented in our silicon neuron shares the dynamics of the full HH model); 2) a fully customizable fitting method, in voltage-clamp mode, to tune our neuromimetic integrated circuits using a metaheuristic algorithm; 3) a contribution to the development of the PAX system in terms of software tools and a VHDL driver interface for neuron configuration in the platform. Finally, the thesis also addresses the issue of synaptic tuning for future SNN simulation.
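For readers unfamiliar with the conductance-based formalism this abstract refers to, the following is a minimal Euler integration of the Hodgkin-Huxley equations. All constants are standard textbook values, not the silicon neuron's parameter set, and the integration scheme is only a sketch of what the analog chip computes continuously.

```python
import numpy as np

# Minimal Euler integration of the Hodgkin-Huxley equations.
# Constants are textbook values, not the FACETS silicon neuron's.
C_m = 1.0                                  # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3          # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4        # reversal potentials (mV)

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt = 0.01                                  # time step (ms)
V, m, h, n = -65.0, 0.05, 0.6, 0.32        # resting-state initial values
spikes = 0
for step in range(int(100.0 / dt)):        # 100 ms of simulated time
    I_ext = 10.0                           # constant stimulus (uA/cm^2)
    I_ion = (g_Na * m**3 * h * (V - E_Na)  # ionic currents of the model
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V_prev = V
    V += dt * (I_ext - I_ion) / C_m
    m += dt * (a_m(V_prev) * (1 - m) - b_m(V_prev) * m)
    h += dt * (a_h(V_prev) * (1 - h) - b_h(V_prev) * h)
    n += dt * (a_n(V_prev) * (1 - n) - b_n(V_prev) * n)
    if V_prev < 0.0 <= V:                  # count upward zero crossings
        spikes += 1
print(f"spikes in 100 ms: {spikes}")
```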
102

Real-time neuromorphic systems : contribution to the integration of biologically realistic neural networks with plasticity functions / Systèmes neuromorphiques temps réel : contribution à l’intégration de réseaux de neurones biologiquement réalistes avec fonctions de plasticité

Belhadj-Mohamed, Bilel 22 July 2010 (has links)
This work was supported by the European FACETS project. Within this project, we contribute to developing mixed-signal hardware devices for real-time spiking neural network simulation. These devices may contribute to an improved understanding of learning phenomena in the neocortex. Neuron behaviours are reproduced using analog integrated circuits that implement Hodgkin-Huxley-based models. In this work, we propose a digital architecture that connects many neuron circuits together, forming a network. The inter-neuron connections are reconfigurable and can be ruled by a plasticity model. The architecture is mapped onto a commercial programmable circuit (FPGA). Several methods are developed to optimize the utilization of hardware resources and to meet real-time constraints. In particular, a token-passing communication protocol has been designed and developed to guarantee the real-time aspects of the dialogue between several FPGAs in a multi-board system, allowing the integration of a large number of neurons. The resulting system can run neural simulations in biological real time with a high degree of realism, and can therefore be used by neurobiologists and computer scientists to carry out neural experiments.
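As a rough illustration of the token-passing idea: a board may only transmit its queued spike events while it holds the token, which bounds the time between two transmission opportunities and thus keeps the multi-board dialogue real-time. The abstract does not describe the actual frame format or timing budget, so everything in this sketch (event tuples, the per-hold event cap) is an illustrative assumption.

```python
from collections import deque

# Toy token ring among FPGA boards: a board may only send spike events
# while holding the token, bounding its transmission latency.
class Board:
    def __init__(self, name):
        self.name = name
        self.tx_queue = deque()          # spike events waiting to be sent

    def on_token(self, max_events=4):
        """Send at most max_events, then release the token (bounded hold time)."""
        sent = []
        while self.tx_queue and len(sent) < max_events:
            sent.append(self.tx_queue.popleft())
        return sent

boards = [Board(f"fpga{i}") for i in range(3)]
boards[0].tx_queue.extend([("neuron", 12), ("neuron", 40)])
boards[2].tx_queue.append(("neuron", 7))

for ring_cycle in range(2):              # the token travels around the ring
    for b in boards:
        for event in b.on_token():
            print(f"{b.name} broadcasts {event}")
```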
103

Learning in spiking neural networks

Davies, Sergio January 2013 (has links)
Artificial neural network simulation is a research field which attracts the interest of researchers from various fields, from biology to computer science. The final objectives are the understanding of the mechanisms underlying the human brain, how to reproduce them in an artificial environment, and how drugs interact with them. Multiple neural models have been proposed, each with its peculiarities, from the very complex and biologically realistic Hodgkin-Huxley neuron model to the very simple 'leaky integrate-and-fire' neuron. However, despite numerous attempts to understand the learning behaviour of the synapses, few models have been proposed. Spike-Timing-Dependent Plasticity (STDP) is one of the most relevant and biologically plausible models, and some variants (such as the triplet-based STDP rule) have been proposed to accommodate all biological observations. The research presented in this thesis focuses on a novel learning rule, based on the spike-pair STDP algorithm, which provides a statistical approach with the advantage of being less computationally expensive than the standard STDP rule, and is therefore suitable for implementation on stand-alone computational units. The environment in which this research work has been carried out is the SpiNNaker project, which aims to provide a massively parallel computational substrate for neural simulation. To support such research, two other topics have been addressed: the first is a way to inject spikes into the SpiNNaker system through a non-real-time channel such as the Ethernet link, synchronising with the timing of the SpiNNaker system. The second research topic is focused on a way to route spikes in the SpiNNaker system based on populations of neurons. The three topics are presented in sequence after a brief introduction to the SpiNNaker project. Future work could include structural plasticity (also known as synaptic rewiring); here, during the simulation of neural networks on the SpiNNaker system, axons, dendrites and synapses may be grown or pruned according to biological observations.
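For reference, a minimal sketch of the standard pair-based STDP rule that this thesis builds on: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, both decaying exponentially with the timing gap. The amplitudes and time constants below are illustrative, not the thesis's values.

```python
import math

# Pair-based STDP weight update. Constants are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012     # learning amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post: causal pairing, potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post before pre: anti-causal pairing, depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)

print(stdp_dw(10.0, 15.0))   # small positive update
print(stdp_dw(15.0, 10.0))   # small negative update
```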
104

Training Methodologies for Energy-Efficient, Low Latency Spiking Neural Networks

Nitin Rathi (11849999) 17 December 2021 (has links)
Deep learning models have become the de facto solution in various fields like computer vision, natural language processing, robotics, drug discovery, and many others. The skyrocketing performance and success of multi-layer neural networks come at a significant power and energy cost. Thus, there is a need to rethink the current trajectory and explore different computing frameworks. One such option is spiking neural networks (SNNs), inspired by the spike-based processing observed in biological brains. SNNs, operating with binary signals (or spikes), can potentially be an energy-efficient alternative to the power-hungry analog neural networks (ANNs) that operate on real-valued analog signals. The binary all-or-nothing spike-based communication in SNNs implemented on event-driven hardware offers a low-power alternative to ANNs. A spike is a delta function with magnitude 1. With all its appeal for low power, training SNNs efficiently for high accuracy remains an active area of research. The existing ANN training methodologies, when applied to SNNs, result in networks that have very high latency. Supervised training of SNNs with spikes is challenging (due to discontinuous gradients) and resource-intensive (time, compute, and memory). Thus, we propose compression methods, training methodologies, and learning rules for energy-efficient, low-latency SNNs.

First, we propose compression techniques for SNNs based on the unsupervised spike-timing-dependent plasticity (STDP) model. We present a sparse SNN topology where non-critical connections are pruned to reduce the network size, and the remaining critical synapses are weight-quantized to accommodate the limited conductance levels in emerging in-memory computing hardware. Pruning is based on the power-law weight-dependent STDP model: synapses between pre- and post-neurons with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned. The process of pruning non-critical connections and quantizing the weights of critical synapses is performed at regular intervals during training.

Second, we propose a multimodal SNN that combines two modalities (image and audio). The two unimodal ensembles are connected with cross-modal connections and the entire network is trained with unsupervised learning. The network receives inputs in both modalities for the same class and predicts the class label. The excitatory connections in the unimodal ensembles and the cross-modal connections are trained with STDP. The cross-modal connections capture the correlation between neurons of different modalities. The multimodal network learns features of both modalities and improves the classification accuracy compared to the unimodal topology, even when one of the modalities is distorted by noise. The cross-modal connections are only excitatory and do not inhibit the normal activity of the unimodal ensembles.

Third, we explore supervised learning methods for SNNs. Many works have shown that an SNN for inference can be formed by copying the weights from a trained ANN and setting the firing threshold for each layer as the maximum input received in that layer. Such converted SNNs require a large number of time steps to achieve competitive accuracy, which diminishes the energy savings. The number of time steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing-dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time. The weight update is proportional to the difference in spike timing between the current time step and the most recent time step at which the neuron generated an output spike.

Fourth, we present techniques to further reduce the inference latency in SNNs. SNNs suffer from high inference latency, resulting from inefficient input encoding and sub-optimal settings of the neuron parameters (firing threshold and membrane leak). We propose DIET-SNN, a low-latency deep spiking network that is trained with gradient descent to optimize the membrane leak and the firing threshold along with the other network parameters (weights). The membrane leak and threshold for each layer of the SNN are optimized with end-to-end backpropagation to achieve competitive accuracy at reduced latency. The analog pixel values of an image are applied directly to the input layer of DIET-SNN without the need to convert them to a spike train. The first convolutional layer is trained to convert inputs into spikes, where leaky integrate-and-fire (LIF) neurons integrate the weighted inputs and generate an output spike when the membrane potential crosses the trained firing threshold. The trained membrane leak controls the flow of input information and attenuates irrelevant inputs to increase the activation sparsity in the convolutional and dense layers of the network. The reduced latency combined with high activation sparsity provides large improvements in computational efficiency.

Finally, we explore the application of SNNs in sequential learning tasks. We propose LITE-SNN, a lightweight SNN suitable for sequential learning tasks on data from dynamic vision sensors (DVS) and natural language processing (NLP). In general, sequential data is processed with complex recurrent neural networks (such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks) with explicit feedback connections and internal states to handle the long-term dependencies. In contrast, the neuron models in SNNs, integrate-and-fire (IF) or leaky integrate-and-fire (LIF), have implicit feedback in their internal state (the membrane potential) by design, which can be leveraged for sequential tasks. The membrane potential in the IF/LIF neuron integrates the incoming current and outputs an event (or spike) when the potential crosses a threshold value. Since SNNs compute with highly sparse spike-based spatio-temporal data, the energy per inference is lower than for LSTMs/GRUs. SNNs also have fewer parameters than LSTMs/GRUs, resulting in smaller models and faster inference. We observe the problem of vanishing gradients in vanilla SNNs for longer sequences and implement a convolutional SNN with attention layers to perform sequence-to-sequence learning tasks. The inherent recurrence in SNNs, in addition to the fully parallelized convolutional operations, provides an additional mechanism to model sequential dependencies and leads to better accuracy than convolutional neural networks with ReLU activations.
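A minimal sketch of the LIF dynamics that DIET-SNN trains end to end: the leak and threshold are per-layer learnable scalars, analog inputs feed the first layer directly, and firing uses a soft reset. The values, initialization, and omission of the surrogate-gradient machinery are simplifying assumptions for illustration.

```python
import numpy as np

# LIF layer step in the spirit of DIET-SNN: the membrane leak (lam) and
# firing threshold (v_th) are per-layer scalars that the real model
# learns with backpropagation; here they are fixed illustrative values.
rng = np.random.default_rng(0)

class LIFLayer:
    def __init__(self, n_in, n_out, lam=0.9, v_th=1.0):
        self.w = rng.normal(0, 0.5, (n_out, n_in))
        self.lam, self.v_th = lam, v_th       # trainable in DIET-SNN
        self.v = np.zeros(n_out)              # membrane potential state

    def step(self, x):
        self.v = self.lam * self.v + self.w @ x   # leak, then integrate
        spikes = (self.v >= self.v_th).astype(float)
        self.v -= spikes * self.v_th              # soft reset: subtract threshold
        return spikes

layer = LIFLayer(n_in=8, n_out=4)
for t in range(5):                                # unroll over time steps
    x = rng.random(8)                             # analog input, no spike coding
    print(layer.step(x))
```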
105

Vers une utilisation synaptique de composants mémoires innovants pour l’électronique neuro-inspirée / Toward using innovative memory devices as artificial synapses in neuro-inspired electronics

Vincent, Adrien F. 03 February 2017 (has links)
Artificial neural networks, which take some inspiration from the behavior of biological brains and their learning capabilities, are a promising approach to emerging "cognitive" computing uses such as image recognition or natural language interaction. However, implementing them on conventional computers is inefficient. One solution to this problem is to develop specialized hardware acceleration chips which feature: neurons, the information-processing units, which can be implemented efficiently with current electronic technologies; and synapses, the connections between the neurons, which also support the learning process by adjusting their electrical conductance ("synaptic plasticity"). Implementing artificial synapses that are densely integrable and capable of on-line learning is still a major challenge. This thesis explores the use of innovative memory nanodevices as artificial synapses: some of their rich, intrinsic plastic behaviors are analogous to the functionality we seek. First, we investigate spin-transfer torque magnetic tunnel junctions, which are currently developed in industry as a new non-volatile memory technology. We show that they can also be used as binary artificial synapses. After modeling their intrinsically stochastic behavior analytically, we describe how to harness this behavior to facilitate the in situ implementation of a probabilistic learning rule. With simulation tools developed in the laboratory, we study the impact of the programming regime on the resilience of a system to the variability of such synapses, as well as on the system's power consumption. We then investigate Ag2S electrochemical metallization cells, another type of innovative memory nanodevice fabricated and characterized by collaborators at the Université de Lille I, who had already observed several plastic behaviors in them. We discovered an additional plasticity, close to a behavior known in neuroscience. With a simple analytical model that clarifies the relationships between these plasticities, we show in simulation a proof of concept of unsupervised learning that relies on the interaction of the plastic behaviors these nanodevices feature. Finally, we consider the challenges arising from the circuits required to read and write such artificial synapses in a neuro-inspired system. The results of this Ph.D. work pave the way for the design of neuro-inspired systems that can learn by harnessing the rich plastic behaviors featured by innovative memory nanodevices.
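The stochastic binary synapse idea can be sketched as follows. The exponential switching-probability model and all constants are illustrative assumptions, not the junction model fitted in the thesis; the point is only that an unreliable device, pulsed repeatedly, implements a probabilistic learning rule for free.

```python
import math, random

# Binary MTJ-like synapse: programming pulses switch the device only
# with some probability, which itself realizes a probabilistic
# learning rule. Switching model and constants are assumed.
random.seed(1)

class StochasticBinarySynapse:
    TAU = 5.0                      # mean switching time (ns), assumed

    def __init__(self):
        self.state = 0             # 0 = high resistance, 1 = low resistance

    def pulse(self, target, duration_ns):
        """Try to program toward `target`; succeeds only probabilistically."""
        p_switch = 1.0 - math.exp(-duration_ns / self.TAU)
        if self.state != target and random.random() < p_switch:
            self.state = target

# Repeated weak pulses make switching likely in proportion to how often
# a synapse "should" be potentiated.
syn = StochasticBinarySynapse()
for _ in range(10):
    syn.pulse(target=1, duration_ns=1.0)   # each pulse: ~18% switch chance
print(syn.state)
```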
106

High Performance and Energy Efficient Deep Learning Models

Bing Han (12872594) 16 June 2022 (has links)
Spiking Neural Networks (SNNs) have recently attracted significant research interest as the third generation of artificial neural networks that can enable low-power event-driven data analytics. We propose ANN-SNN conversion using a "soft reset" spiking neuron model, referred to as the Residual Membrane Potential (RMP) spiking neuron, which retains the "residual" membrane potential above threshold at the firing instants. In addition, we propose a time-based coding scheme, named Temporal-Switch-Coding (TSC), and a corresponding TSC spiking neuron model. Each input image pixel is presented using two spikes with opposite polarity, and the timing between the two spiking instants is proportional to the pixel intensity. We demonstrate near-lossless ANN-SNN conversion using RMP neurons for VGG-16, ResNet-20, and ResNet-34 SNNs on challenging datasets including CIFAR-10, CIFAR-100, and ImageNet. With the help of TSC coding, the conversion achieves 7-14.5× lower inference latency, and 30-60× fewer addition operations and memory accesses per inference across datasets, compared to state-of-the-art (SOTA) SNN models. In the second part of the thesis, we propose a new type of recurrent neural network (RNN) architecture, named the Oscillatory Fourier Neural Network (O-FNN). We demonstrate that O-FNN is mathematically equivalent to a simplified form of the Discrete Fourier Transform applied onto periodical activation. In particular, the computationally intensive back-propagation through time in training is eliminated, leading to faster training while achieving SOTA inference accuracy in a diverse group of sequential tasks. For instance, applying the proposed model to sentiment analysis on the IMDB review dataset reaches 89.4% test accuracy within 5 epochs, accompanied by over 35x reduction in model size compared to Long Short-Term Memory (LSTM). The proposed novel RNN architecture is well poised for intelligent sequential processing in resource-constrained hardware.
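Two of the mechanisms above are easy to sketch in a few lines: the soft-reset (residual membrane potential) neuron, which subtracts the threshold instead of zeroing the membrane so that no integrated charge is discarded, and TSC encoding, where the gap between two opposite-polarity spikes carries the pixel intensity. The constants and the exact timing convention are illustrative assumptions, not the thesis's definitions.

```python
# Soft-reset ("residual membrane potential") IF neuron vs. hard reset,
# plus a toy Temporal-Switch-Coding encoder. Conventions are assumed.

def if_neuron(inputs, v_th=1.0, soft_reset=True):
    v, spikes = 0.0, []
    for x in inputs:
        v += x
        if v >= v_th:
            spikes.append(1)
            v = v - v_th if soft_reset else 0.0   # RMP keeps the residual
        else:
            spikes.append(0)
    return spikes

# The residual kept by soft reset is what makes conversion near-lossless:
print(if_neuron([1.5, 0.6], soft_reset=True))    # [1, 1]: overshoot carried over
print(if_neuron([1.5, 0.6], soft_reset=False))   # [1, 0]: overshoot thrown away

def tsc_encode(pixel, t_max=16):
    """Encode pixel in [0, 1] as two spike times; the gap between the
    positive and negative spike is proportional to intensity (assumed)."""
    gap = max(1, round(pixel * (t_max - 1)))
    return (0, gap)   # positive spike at t=0, negative spike at t=gap

print(tsc_encode(0.5))
```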
107

Compute-in-Memory Primitives for Energy-Efficient Machine Learning

Amogh Agrawal (10506350) 26 July 2021 (has links)
Machine Learning (ML) workloads, being memory- and compute-intensive, consume large amounts of power running on conventional computing systems, restricting their implementation to large-scale data centers. Thus, there is a need for building domain-specific hardware primitives for energy-efficient ML processing at the edge. One such approach is in-memory computing, which eliminates frequent and unnecessary data transfers between the memory and the compute units by computing directly where the data is stored. Most of the chip area is consumed by on-chip SRAMs, in both conventional von Neumann systems (e.g. CPUs/GPUs) and application-specific ICs (e.g. TPUs). Thus, we propose various circuit techniques to enable a range of computations, such as bitwise Boolean and arithmetic computations, binary convolution operations, non-Boolean dot-product operations, lookup-table based computations, and spiking neural network implementation, all within standard SRAM memory arrays.

First, we propose X-SRAM, where, by using skewed sense amplifiers, bitwise Boolean operations such as NAND/NOR/XOR/IMP can be enabled within 6T and 8T SRAM arrays. Moreover, exploiting the decoupled read/write ports in 8T SRAMs, we propose a read-compute-store scheme where the computed data can be written back into the array simultaneously.

Second, we propose Xcel-RAM, where we show how binary convolutions can be enabled in 10T SRAM arrays for accelerating binary neural networks. We present a charge-sharing approach for performing XNOR operations followed by a population count (popcount) using both analog and digital techniques, highlighting the accuracy-energy tradeoff.

Third, we take this concept further and propose CASH-RAM, to accelerate non-Boolean operations, such as dot-products, within standard 8T SRAM arrays by utilizing the parasitic capacitances of bitlines and sourcelines. We analyze the non-idealities that arise from analog computation and propose a self-compensation technique that reduces their effects, thereby reducing the errors.

Fourth, we propose ROM-embedded caches, RECache, using standard 8T SRAMs, useful for lookup-table (LUT) based computations. We show that just by adding an extra word-line (WL) or source-line (SL), the same bit-cell can store a ROM bit as well as the usual RAM bit, while maintaining performance and area efficiency, thereby doubling the memory density. Further, we propose SPARE, an in-memory, distributed processing architecture built on RECache, for accelerating spiking neural networks (SNNs), which often require high-order polynomials and transcendental functions for solving complex neuro-synaptic models.

Finally, we propose IMPULSE, a 10T-SRAM compute-in-memory (CIM) macro specifically designed for state-of-the-art SNN inference. The inherent dynamics of the neuron membrane potential in SNNs allow the processing of sequential learning tasks, avoiding the complexity of recurrent neural networks. The highly sparse spike-based computations on such spatio-temporal data can be leveraged for energy efficiency. However, the membrane potential incurs additional memory-access bottlenecks in current SNN hardware. IMPULSE tries to tackle these challenges. It consists of a fused weight (WMEM) and membrane potential (VMEM) memory and inherently exploits sparsity in input spikes. We propose staggered data mapping and re-configurable peripherals for handling the different bit-precision requirements of WMEM and VMEM, while supporting multiple neuron functionalities. The proposed macro was fabricated in 65nm CMOS technology. We demonstrate a sentiment classification task on the IMDB dataset of movie reviews and show that the SNN achieves competitive accuracy with only a fraction of the trainable parameters and effective operations of an LSTM network.

These circuit explorations to embed computations in standard memory structures show that on-chip SRAMs can do much more than just store data and can be re-purposed as on-demand accelerators for a variety of applications.
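In software terms, the binary-convolution primitive that Xcel-RAM computes inside the array reduces to an XNOR followed by a popcount. A minimal bit-level sketch follows; the bit-packing and scaling conventions are our assumptions for illustration.

```python
# XNOR-popcount dot product: the core operation of binary neural
# networks. Weights and activations are {-1, +1} vectors packed into
# integer bitmasks (bit = 1 encodes +1).
def xnor_popcount_dot(a_bits, w_bits, n):
    """Dot product of two n-long {-1,+1} vectors packed as ints."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)   # 1 wherever signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n                       # matches minus mismatches

# Vectors a = [+1, -1, +1, +1], w = [+1, +1, -1, +1]  ->  dot = 0
a = 0b1011
w = 0b1101
print(xnor_popcount_dot(a, w, 4))
```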
108

Training of Object Detection Spiking Neural Networks for Event-Based Vision

Johansson, Olof January 2021 (has links)
Event-based vision offers higher dynamic range, finer time resolution, and lower latency than conventional frame-based vision sensors. These attributes are useful in varying light conditions and fast motion. However, there are no neural network models and training protocols optimized for object detection with event data, and conventional artificial neural networks for frame-based data are not directly suitable for that task. Spiking neural networks are natural candidates, but further work is required to develop an efficient object detection architecture and an end-to-end training protocol. For example, object detection in varying light conditions is identified as a challenging problem for the automation of construction equipment such as earth-moving machines, where the aim is to increase the safety of operators and make repetitive processes less tedious. This work focuses on the development and evaluation of a neural network for object detection with data from an event-based sensor. Furthermore, the strengths and weaknesses of an event-based vision solution are discussed in relation to the known challenges described in previous work on the automation of earth-moving machines. A solution for object detection with event data is implemented as a modified YOLOv3 network with spiking convolutional layers, trained with a backpropagation algorithm adapted for spiking neural networks. The performance is evaluated on the N-Caltech101 dataset with classes for airplanes and motorbikes, resulting in a mAP of 95.8% for the combined network and 98.8% for the original YOLOv3 network with the same architecture. The solution is investigated as a proof of concept, and suggestions for further work are described based on a recurrent spiking neural network.
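Event cameras emit sparse (x, y, timestamp, polarity) tuples rather than frames, so before feeding a convolutional network (spiking or otherwise), events are commonly accumulated into fixed-duration tensors. The binning scheme below is a common convention, assumed here for illustration rather than taken from the thesis.

```python
import numpy as np

# Accumulate DVS events (x, y, t, polarity) into time-binned frames:
# a (T, 2, H, W) tensor with separate channels for ON/OFF polarity.
def events_to_frames(events, h, w, t_bins, t_window_us):
    frames = np.zeros((t_bins, 2, h, w), dtype=np.float32)
    for x, y, t, p in events:
        b = min(int(t / t_window_us * t_bins), t_bins - 1)   # time bin index
        frames[b, 1 if p > 0 else 0, y, x] += 1.0            # event count
    return frames

events = [(10, 5, 1_000, 1), (10, 5, 52_000, -1), (3, 7, 99_000, 1)]
frames = events_to_frames(events, h=16, w=16, t_bins=4, t_window_us=100_000)
print(frames.shape, frames.sum())   # (4, 2, 16, 16) 3.0
```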
109

Deep learning in event-based neuromorphic systems / L'apprentissage profond dans les systèmes évènementiels, bio-inspirés

Thiele, Johannes C. 22 November 2019 (has links)
Inference and training in deep neural networks require large amounts of computation, which in many cases prevents the integration of deep networks in resource-constrained environments. Event-based spiking neural networks represent an alternative to standard artificial neural networks that holds the promise of more energy-efficient processing. However, training spiking neural networks to achieve high inference performance is still challenging, in particular when learning is also required to be compatible with neuromorphic constraints. This thesis studies training algorithms and information encoding in such deep networks of spiking neurons. Starting from a biologically inspired learning rule, we analyze which properties of learning rules are necessary in deep spiking neural networks to enable embedded learning in a continuous learning scenario. We show that a time-scale-invariant learning rule based on spike-timing-dependent plasticity is able to perform hierarchical feature extraction and classification of simple objects from the MNIST and N-MNIST datasets. To overcome certain limitations of this approach, we design a novel framework for spike-based learning, SpikeGrad, which represents a fully event-based implementation of the gradient backpropagation algorithm. We show how this algorithm can be used to train a spiking network that performs inference of relations between numbers and MNIST images. Additionally, we demonstrate that the framework is able to train large-scale convolutional spiking networks to competitive recognition rates on the MNIST and CIFAR10 datasets. In addition to being an effective and precise learning mechanism, SpikeGrad allows the response of the spiking neural network to be described in terms of a standard artificial neural network, which enables a faster simulation of spiking neural network training. Our work therefore introduces several powerful training concepts for on-chip learning in neuromorphic devices, which could help to scale spiking neural networks to real-world problems.
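The abstract gives only the high-level idea of SpikeGrad (backpropagation carried out entirely with discrete events). One toy way to picture it is an error unit that accumulates incoming gradient contributions and emits signed unit "gradient events" when the accumulator crosses a threshold. This discretization scheme is our illustrative assumption, not the algorithm's actual specification.

```python
# Toy "event-based gradient" accumulator: gradients travel as signed
# unit events instead of floats. The real SpikeGrad algorithm is far
# more involved; this only illustrates the discretization idea.
class GradientEventNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.acc = 0.0                     # accumulated gradient "potential"

    def receive(self, grad_contrib):
        """Integrate a contribution; emit +/-1 events on threshold crossings."""
        self.acc += grad_contrib
        events = []
        while self.acc >= self.threshold:
            events.append(+1)
            self.acc -= self.threshold
        while self.acc <= -self.threshold:
            events.append(-1)
            self.acc += self.threshold
        return events

n = GradientEventNeuron(threshold=1.0)
print(n.receive(0.7))    # [] : below threshold, nothing emitted yet
print(n.receive(0.7))    # [1] : crossing emits one positive gradient event
print(n.receive(-2.5))   # [-1, -1] : large negative contribution
```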
110

Intelligent Sensing and Energy Efficient Neuromorphic Computing using Magneto-Resistive Devices

Chamika M Liyanagedera (11191896) 27 July 2021 (has links)
With the Moore's Law era coming to an end, much attention has been given to novel nanoelectronic devices as a key driving force behind technological innovation. Utilizing the inherent device physics of nanoelectronic components for sensory and computational tasks has proven useful in reducing the area and energy requirements of the underlying hardware fabrics. In this work we demonstrate how the intrinsic noise present in nanomagnetic devices can pave the pathway for energy-efficient neuromorphic hardware. Furthermore, we illustrate how the unique magnetic properties of such devices can be leveraged for accurate estimation of environmental magnetic fields. We focus on spintronic technologies in particular, due to their low current and energy requirements in contrast to traditional CMOS technologies.

Image segmentation is a crucial pre-processing stage used in many object identification tasks; it involves simplifying the representation of an image so it can be conveniently analyzed in the later stages of a problem. This is achieved by partitioning a complicated image into specific groups based on the color, intensity, or texture of the pixels of that image. The Locally Excitatory Globally Inhibitory Oscillator Network, or LEGION, is one such segmentation algorithm, in which synchronization and desynchronization between coupled oscillators are used to segment an image. In this work we present an energy-efficient and scalable hardware implementation of LEGION using stochastic Magnetic Tunnel Junctions that leverages the fast, parallel nature of the algorithm. We demonstrate that the proposed hardware is capable of segmenting binary and gray-scale images with multiple objects more efficiently than existing hardware implementations.

It is understood that the underlying device physics of spin devices can be used to emulate the functionality of a spiking neuron. Stochastic spiking neural networks based on nanoelectronic spin devices are a possible pathway to brain-like compact and energy-efficient cognitive intelligence. Current computational models attempt to exploit the intrinsic device stochasticity of nanoelectronic synaptic or neural components to perform learning and inference. However, there has been limited analysis of the scaling effect of stochastic spin devices and its impact on the operation of such stochastic networks at the system level. Our work explores the design space and analyzes the performance of nanomagnet-based stochastic neuromorphic computing architectures for magnets with different barrier heights. We illustrate how the underlying network architecture must be modified to account for the random telegraphic switching behavior displayed by magnets as they are scaled into the superparamagnetic regime.

Next, we investigate how the magnetic properties of spin devices can be utilized for real-world sensory applications. Magnetic Tunnel Junctions can efficiently translate variations in external magnetic fields into variations in electrical resistance. We couple this property of Magnetic Tunnel Junctions with Ampère's law to design a non-invasive sensor that measures the current flowing through a wire. We demonstrate how the undesirable effects of thermal noise and process variations can be suppressed through novel analog and digital signal conditioning techniques to obtain reliable and accurate current measurements. Our results substantiate that the proposed non-invasive current sensor surpasses other state-of-the-art technologies in terms of noise and accuracy.
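The sensing principle described above combines Ampère's law with the field-dependent resistance of an MTJ. A back-of-the-envelope sketch follows; the linear resistance-vs-field model and every constant are illustrative assumptions, not the fabricated sensor's characteristics.

```python
import math

# Non-invasive current sensing: Ampere's law gives the field at radius r
# from a wire, B = mu0 * I / (2 * pi * r); an MTJ translates that field
# into a resistance change, which we invert to estimate the current.
MU0 = 4 * math.pi * 1e-7      # vacuum permeability (T*m/A)
R0, SENS = 1000.0, 2.0e5      # zero-field resistance (ohm), ohm/tesla (assumed)
R_DIST = 0.002                # sensor distance from the wire (m)

def field_from_current(i_amps, r=R_DIST):
    return MU0 * i_amps / (2 * math.pi * r)

def mtj_resistance(b_tesla):
    return R0 + SENS * b_tesla            # assumed linear operating region

def estimate_current(r_measured):
    """Invert the two models to recover the wire current."""
    b = (r_measured - R0) / SENS
    return b * 2 * math.pi * R_DIST / MU0

true_i = 5.0                               # amps
r_meas = mtj_resistance(field_from_current(true_i))
print(round(estimate_current(r_meas), 3))  # recovers ~5.0 A
```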
