
Fantastic spiking neural networks and how to train them

Spiking neural networks are a new generation of neural networks built on neuronal models that are more biologically plausible than the commonly used perceptron. Instead of performing computations on analog values, as regular neural networks do, they rely on spatio-temporal information encoded into sequences of delta functions known as spike trains. Spiking neural networks are highly energy efficient compared to regular neural networks, which makes them attractive in certain applications. This thesis implements two approaches for training spiking neural networks. The first uses surrogate gradient descent to deal with the non-differentiability that arises when training spiking neural networks. The second is based on Bayesian probability theory and uses variational inference for parameter estimation, yielding a Bayesian spiking neural network. The two methods are tested on two datasets from the spiking neural network literature, and limited hyperparameter studies are performed. The results indicate that both training methods work on the two datasets, but that the Bayesian implementation yields a lower accuracy on test data. Moreover, the Bayesian implementation appears to be robust to the choice of prior parameter distribution. / Sekretess (confidentiality)
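The surrogate gradient idea mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the forward pass emits a spike through the non-differentiable Heaviside step, while the backward pass substitutes the derivative of a fast sigmoid. The `beta` sharpness parameter, the leaky integrate-and-fire update, and the input values are all illustrative assumptions.

```python
# Hypothetical sketch of surrogate gradient descent for a spiking neuron.
# The Heaviside step used to generate spikes has zero derivative almost
# everywhere, so backpropagation replaces it with a smooth surrogate.

def spike(v, threshold=1.0):
    """Forward pass: emit a spike (1.0) when the membrane potential
    crosses the firing threshold; non-differentiable."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: derivative of a fast sigmoid, used in place of
    the Heaviside step's derivative. beta controls its sharpness."""
    return 1.0 / (1.0 + beta * abs(v - threshold)) ** 2

# Leaky integrate-and-fire neuron over a few time steps (illustrative input)
decay, v, spikes = 0.9, 0.0, []
for inp in [0.4, 0.5, 0.6, 0.1]:
    v = decay * v + inp   # leaky integration of input current
    s = spike(v)          # non-differentiable spike generation
    v = v * (1.0 - s)     # reset membrane potential after a spike
    spikes.append(s)
# spikes == [0.0, 0.0, 1.0, 0.0]: one spike when v crosses the threshold
```

During training, the gradient of the loss with respect to `v` would be propagated through `surrogate_grad(v)` rather than through `spike`, which is what makes gradient descent applicable at all.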

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-441658
Date January 2021
Creators Weinberg, David
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
Relation UPTEC E, 1654-7616 ; 21003
