
Training Spiking Neural Networks for Energy-Efficient Neuromorphic Computing

Spiking Neural Networks (SNNs), widely known as the third generation of artificial neural networks, offer a promising path toward approaching the brain's processing capability for cognitive tasks. Taking a more biologically realistic perspective on input processing, SNNs perform neural computations using spikes in an event-driven manner. This asynchronous spike-based computing capability can be exploited to achieve improved energy efficiency in neuromorphic hardware. Furthermore, owing to their spike-based processing, SNNs can be trained in an unsupervised manner using Spike Timing Dependent Plasticity (STDP). STDP-based learning rules modulate the strength of a multi-bit synapse based on the correlation between the spike times of the input and output neurons.
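
As a point of reference, a minimal sketch of such a pair-based STDP update for a quantized multi-bit synapse is given below; the exponential timing windows, learning rates, and 4-bit weight resolution are illustrative assumptions rather than parameters taken from the thesis.

```python
import numpy as np

# Illustrative pair-based STDP for a quantized multi-bit synapse.
# The constants below (time constants, learning rates, 4-bit resolution)
# are placeholder assumptions, not values from the thesis.
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants (ms)
A_PLUS, A_MINUS = 0.05, 0.05       # potentiation / depression magnitudes
W_LEVELS = 16                      # e.g., a 4-bit synapse

def stdp_update(w, t_pre, t_post):
    """Return the updated quantized weight given pre/post spike times (ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre fires before post: potentiate (Hebbian)
        dw = A_PLUS * np.exp(-dt / TAU_PLUS)
    else:         # post fires before pre: depress
        dw = -A_MINUS * np.exp(dt / TAU_MINUS)
    w = np.clip(w + dw, 0.0, 1.0)
    # quantize back to the available multi-bit levels
    return np.round(w * (W_LEVELS - 1)) / (W_LEVELS - 1)
```
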
To achieve plasticity with compressed synaptic memory, a stochastic binary synapse is proposed in which spike timing information is embedded in the synaptic switching probability. A bio-plausible probabilistic-STDP learning rule, consistent with Hebbian learning theory, is proposed to train networks of binary as well as quaternary synapses. In addition, a hybrid probabilistic-STDP learning rule incorporating both Hebbian and anti-Hebbian mechanisms is proposed to enhance the learnt representations of the stochastic SNN. The efficacy of the presented learning rules is demonstrated for feed-forward fully-connected and residual convolutional SNNs on the MNIST and CIFAR-10 datasets.
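
To make the compressed-synapse idea concrete, the following hedged sketch shows how a binary synapse could embed the same timing window in a switching probability instead of a graded weight change, with a Hebbian switch-on branch and an anti-Hebbian switch-off branch as in the hybrid rule; all probabilities and time constants are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative probabilistic-STDP for a 1-bit (binary) synapse.
# Spike-timing information is folded into the switching probability
# rather than a graded weight change; all constants are assumptions.
P_MAX_POT, P_MAX_DEP = 0.1, 0.05   # peak switching probabilities
TAU_POT, TAU_DEP = 20.0, 20.0      # correlation windows (ms)

def binary_stdp_update(w, t_pre, t_post):
    """Stochastically flip the binary weight w (0 or 1) based on spike timing."""
    dt = t_post - t_pre
    if dt >= 0:
        # Hebbian branch: the closer the pre spike precedes the post spike,
        # the more likely the synapse switches ON.
        p_switch_on = P_MAX_POT * np.exp(-dt / TAU_POT)
        if w == 0 and rng.random() < p_switch_on:
            w = 1
    else:
        # Anti-Hebbian branch of the hybrid rule: reversed timing makes
        # the synapse likely to switch OFF.
        p_switch_off = P_MAX_DEP * np.exp(dt / TAU_DEP)
        if w == 1 and rng.random() < p_switch_off:
            w = 0
    return w
```
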
STDP-based learning is, however, limited to shallow SNNs (fewer than five layers) and yields lower than acceptable accuracy on complex datasets. This thesis therefore proposes a block-wise complexity-aware training algorithm, referred to as BlocTrain, for incrementally training deep SNNs with reduced memory requirements using spike-based backpropagation through time. The deep network is divided into blocks, where each block consists of a few convolutional layers followed by an auxiliary classifier. The blocks are trained sequentially using local errors from their respective auxiliary classifiers. Moreover, the deeper blocks are trained only on the hard classes, determined using the class-wise accuracy obtained from the classifiers of the previously trained blocks. Thus, BlocTrain improves the training time and computational efficiency with increasing block depth.
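
The sketch below outlines a BlocTrain-style training loop under stated assumptions: the callables `train_block` and `classwise_accuracy` are hypothetical stand-ins for spike-based backpropagation through time driven by each block's auxiliary classifier and for its per-class evaluation, and the 0.9 hard-class threshold is likewise an assumption.

```python
def bloctrain(blocks, aux_classifiers, train_data, num_classes,
              train_block, classwise_accuracy, hard_class_threshold=0.9):
    """Train SNN blocks sequentially with local errors, restricting deeper
    blocks to the classes that earlier classifiers still get wrong.

    train_block(prefix, block, aux, data)        -> trains one block in place
    classwise_accuracy(prefix, block, aux, data) -> {class: accuracy}
    Both callables are hypothetical stand-ins for the thesis's
    spike-based backpropagation-through-time machinery.
    """
    active_classes = set(range(num_classes))  # the first block sees every class
    trained_prefix = []                        # blocks already trained and frozen

    for block, aux in zip(blocks, aux_classifiers):
        # Deeper blocks only see samples whose class is still "hard".
        data = [(x, y) for (x, y) in train_data if y in active_classes]

        # Local learning: the error signal comes from this block's own
        # auxiliary classifier, not from a global output layer.
        train_block(trained_prefix, block, aux, data)
        trained_prefix.append(block)

        # Classes the current classifier already handles well are marked
        # "easy" and dropped from the training sets of deeper blocks.
        acc = classwise_accuracy(trained_prefix, block, aux, data)
        active_classes = {c for c in active_classes
                          if acc.get(c, 0.0) < hard_class_threshold}
        if not active_classes:
            break  # nothing left that needs more depth
    return trained_prefix
```
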
In addition, higher computational efficiency is obtained during inference by exiting early for easy class instances and activating the deeper blocks only for hard class instances. The ability of BlocTrain to provide improved accuracy as well as higher training and inference efficiency than end-to-end approaches is demonstrated for deep SNNs (up to 11 layers) on the CIFAR-10 and CIFAR-100 datasets.
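
Complementing the training loop, early-exit inference could be organized roughly as follows; `run_block`, `classify`, and the per-depth sets of easy classes are hypothetical stand-ins for the trained SNN blocks, their auxiliary classifiers, and the class assignments obtained during training.

```python
def bloctrain_inference(x, blocks, aux_classifiers, easy_classes_per_block,
                        run_block, classify):
    """Early-exit inference: stop at the first block whose auxiliary
    classifier predicts a class that this depth is trusted to handle.

    run_block(block, features) -> features  (spiking activity of one block)
    classify(aux, features)    -> int       (auxiliary classifier's label)
    easy_classes_per_block[d]  -> set of classes resolvable at depth d
    All three are hypothetical stand-ins for the trained SNN components.
    """
    features = x
    pred, depth = None, -1
    for depth, (block, aux) in enumerate(zip(blocks, aux_classifiers)):
        features = run_block(block, features)  # propagate spikes through this block
        pred = classify(aux, features)
        if pred in easy_classes_per_block[depth]:
            return pred, depth                 # easy instance: exit early
    return pred, depth                         # hard instance: full depth used
```
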
Feed-forward SNNs are typically used for static image recognition, whereas recurrent Liquid State Machines (LSMs) have been shown to encode time-varying speech data. Liquid-SNN, consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked reservoir of spiking neurons (the liquid), is proposed for unsupervised speech and image recognition. The strengths of the synapses interconnecting the input and the liquid are trained using STDP, which makes it possible to infer the class of a test pattern without the readout layer typical of standard LSMs.
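
A rough structural sketch of such a liquid is given below, with spiking dynamics abstracted into per-neuron spike counts; the connection densities, network sizes, and the class-tagging scheme used to decode without a readout layer are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Liquid-SNN structure. Connection densities, sizes, and the
# class-tagging decode below are assumptions for this sketch; the thesis
# trains the input-to-liquid synapses with STDP.
N_IN, N_LIQUID, N_CLASSES = 784, 400, 10

# Sparse plastic input-to-liquid synapses (the STDP-trained part).
input_mask = rng.random((N_LIQUID, N_IN)) < 0.1
w_in = rng.random((N_LIQUID, N_IN)) * input_mask

# Randomly interlinked fixed recurrent connections within the liquid.
rec_mask = rng.random((N_LIQUID, N_LIQUID)) < 0.05
w_rec = rng.random((N_LIQUID, N_LIQUID)) * rec_mask

# Assume each liquid neuron was tagged, after training, with the class for
# which it fired most; decoding then needs no separate readout layer.
neuron_class = rng.integers(0, N_CLASSES, size=N_LIQUID)

def classify(liquid_spike_counts):
    """Predict the class whose tagged neurons fired the most."""
    votes = np.bincount(neuron_class, weights=liquid_spike_counts,
                        minlength=N_CLASSES)
    return int(np.argmax(votes))
```
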
The Liquid-SNN, however, suffers from scalability challenges because enhancing its accuracy primarily requires increasing the number of liquid neurons. SpiLinC, composed of an ensemble of multiple liquids, each trained on a unique input segment, is proposed as a scalable model that achieves improved accuracy. SpiLinC recognizes a test pattern by combining the spiking activity of the individual liquids, each of which identifies unique input features. As a result, SpiLinC offers accuracy comparable to that of Liquid-SNN, with added synaptic sparsity and faster training convergence, as validated on the digit subset of the TI46 speech corpus and the MNIST dataset.
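
An ensemble of liquids could then be combined along the following lines; the per-liquid class tags and the simple additive vote are assumptions for this sketch, not the exact combination rule from the thesis.

```python
import numpy as np

# Illustrative SpiLinC-style ensemble decoding: several smaller liquids,
# each fed a distinct input segment, vote with their spiking activity.
# Segment sizes, liquid sizes, and the tagging scheme are assumptions.

def spilinc_classify(segment_spike_counts, segment_neuron_classes, n_classes):
    """Combine per-liquid spiking activity into a single prediction.

    segment_spike_counts   : list of arrays, spike counts of each liquid
    segment_neuron_classes : list of arrays, class tag of each liquid neuron
    """
    votes = np.zeros(n_classes)
    for counts, tags in zip(segment_spike_counts, segment_neuron_classes):
        votes += np.bincount(tags, weights=counts, minlength=n_classes)
    return int(np.argmax(votes))
```
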

DOI: 10.25394/pgs.11336840.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/11336840
Date: 06 December 2019
Creators: Gopalakrishnan Srinivasan (8088431)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/Training_Spiking_Neural_Networks_for_Energy-Efficient_Neuromorphic_Computing/11336840
