
An Exploration of Linear Classifiers for Unsupervised Spiking Neural Networks with Event-Driven Data

Object recognition in video has seen giant strides in accuracy in recent years, a testament to the computational capacity of deep convolutional neural networks. However, this computational capacity comes with high power consumption: software-based neural networks can consume up to 300,000 times more energy per synaptic event than some spiking neural network (SNN) hardware, such as IBM's TrueNorth chip. SNNs are also well-suited to exploit the precise timing of event-driven image sensors, which transmit asynchronous "events" only when the luminance of a pixel increases or decreases beyond a threshold value. Combining event-based imagers with SNNs is therefore a straightforward way to achieve low power consumption in object recognition tasks. This thesis compares linear classifiers for two low-power, hardware-friendly, unsupervised spiking neural network architectures, SSLCA and HFirst, in response to asynchronous event-based data, and explores their ability to learn and recognize patterns from two event-based image datasets, N-MNIST and CIFAR10-DVS. By performing a grid search over important SNN and classifier hyperparameters, we also explore how to improve the classification performance of these architectures. Results show that a softmax regression classifier achieves a modest accuracy gain (0.73%) over the next-best-performing linear support vector machine (SVM), and considerably outperforms a single-layer perceptron (by 5.28%) when classification performance is averaged over all datasets and SNN architectures with varied hyperparameters. Min-max normalization of the inputs to the linear classifiers improves classification accuracy, except in the case of the single-layer perceptron. We also report the highest classification accuracy to date for spiking convolutional networks on N-MNIST and CIFAR10-DVS, raising it from 97.77% to 97.82% and from 29.67% to 31.76%, respectively. These findings are relevant for any system that employs unsupervised SNNs to extract redundant features from event-driven data for recognition.
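To make the classifier comparison concrete, the following is a minimal Python sketch using scikit-learn. It is not the thesis's actual pipeline: the feature and label arrays are random placeholders standing in for spike-count feature vectors produced by an unsupervised SNN such as SSLCA or HFirst, while the min-max normalization, softmax regression, linear SVM, and single-layer perceptron mirror the methods named in the abstract.

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Placeholder stand-ins for SNN output features (e.g., spike counts per
# neuron) and class labels; real data would come from SSLCA or HFirst
# responses to N-MNIST or CIFAR10-DVS events.
X_train, y_train = rng.random((1000, 128)), rng.integers(0, 10, 1000)
X_test, y_test = rng.random((200, 128)), rng.integers(0, 10, 200)

# Min-max normalization of classifier inputs, fit on training data only.
# (The thesis found this helps accuracy except for the perceptron.)
scaler = MinMaxScaler().fit(X_train)
X_train_n, X_test_n = scaler.transform(X_train), scaler.transform(X_test)

classifiers = {
    "softmax regression": LogisticRegression(max_iter=1000),
    "linear SVM": LinearSVC(),
    "single-layer perceptron": Perceptron(),
}
for name, clf in classifiers.items():
    clf.fit(X_train_n, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test_n))
    print(f"{name}: {acc:.4f}")

In the thesis, this kind of comparison is repeated across a grid of SNN and classifier hyperparameters and the resulting accuracies are averaged over datasets and architectures.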

Identifier: oai:union.ndltd.org:pdx.edu/oai:pdxscholar.library.pdx.edu:open_access_etds-5510
Date: 12 June 2018
Creators: Chavez, Wesley
Publisher: PDXScholar
Source Sets: Portland State University
Detected Language: English
Type: text
Format: application/pdf
Source: Dissertations and Theses
