1. Adapting Neural Network Learning Algorithms for Neuromorphic Implementations. Jason M Allred (11197680), 29 July 2021.
Computing with Artificial Neural Networks (ANNs) is a branch of machine learning that has seen substantial growth over the last decade, significantly increasing the accuracy and capability of machine learning systems. ANNs are connected networks of computing elements inspired by the neuronal connectivity in the brain. Spiking Neural Networks (SNNs) are a type of ANN that operate with event-driven computation, inspired by the “spikes” or firing events of individual neurons in the brain. Neuromorphic computing (the implementation of neural networks in hardware) seeks to improve the energy efficiency of these machine learning systems either by computing directly with device physical primitives, by bypassing the software layer of logical implementations, or by operating with SNN event-driven computation. Such implementations may, however, have added restrictions, including weight-localized learning and hard-wired connections. Further obstacles, such as catastrophic forgetting, the lack of supervised error signals, and storage and energy constraints, are encountered when these systems need to perform autonomous online, real-time learning in an unknown, changing environment.

Adapting neural network learning algorithms for these constraints can help address these issues. Specifically, corrections to Spike Timing-Dependent Plasticity (STDP) can stabilize local, unsupervised learning; accounting for the statistical firing properties of spiking neurons may improve conversions from non-spiking to spiking networks; biologically-inspired dopaminergic and habituation adjustments to STDP can limit catastrophic forgetting; convolving temporally instead of spatially can provide for localized weight sharing with direct synaptic connections; and explicitly training for spiking sparsity can significantly reduce computational energy consumption.
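As a concrete reference for the plasticity rule this abstract builds on, the following is a minimal sketch of a standard pair-based STDP update; the parameter values are illustrative assumptions, not the corrections proposed in the thesis.

```python
import numpy as np

def stdp_update(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiate when the pre-synaptic spike precedes the
    post-synaptic spike (delta_t = t_post - t_pre > 0), depress otherwise.
    Parameter values are illustrative, not those used in the thesis."""
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)
```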
2. Exploring the column elimination optimization in LIF-STDP networks. Sun, Mingda, January 2022.
Spiking neural networks using Leaky-Integrate-and-Fire (LIF) neurons and Spike-Timing-Dependent Plasticity (STDP) learning are commonly used as more biologically plausible networks. Compared to DNNs and RNNs, LIF-STDP networks are models that are closer to the biological cortex. LIF-STDP neurons use spikes to communicate with each other, and they learn through the correlation between these pre- and post-synaptic spikes. Simulation of such networks usually requires high-performance supercomputers, which are almost all based on the von Neumann architecture that separates storage and computation. In von Neumann architecture solutions, memory access is the bottleneck even for highly optimized Application-Specific Integrated Circuits (ASICs). In this thesis, we propose an optimization method that can reduce the memory access cost by avoiding a dual-access pattern. In LIF-STDP networks, the weights are usually stored in the form of a two-dimensional matrix. Pre- and post-synaptic spikes trigger row and column accesses, respectively, but this dual-access pattern is very costly for DRAM. We eliminate the column access by introducing a post-synaptic buffer and an approximation function. The post-synaptic spikes are recorded in the buffer and are processed, together with the row updates, when pre-synaptic spikes occur. This column update elimination method introduces errors due to the limited buffer size. In our error analysis, the experiments show that the probability of introducing intolerable errors can be bounded to a very small value with a proper buffer size and approximation function. We also present a performance analysis of the Column Update Elimination (CUE) optimization. The error analysis of the column update elimination method is the main contribution of our work.
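To make the dual-access argument concrete, here is a hypothetical sketch of the column-elimination idea described above: post-synaptic spikes are only buffered, and all weight updates are applied row-wise when a pre-synaptic neuron fires. The buffer policy and the `stdp_approx` function are illustrative assumptions, not the thesis's actual approximation.

```python
import numpy as np

# Hypothetical sketch: post-synaptic spikes are buffered instead of triggering
# a DRAM-unfriendly column update; weights are touched only row-wise.
post_buffer = {}  # post-neuron id -> recent spike times (bounded length)

def on_post_spike(j, t, buffer_size=8):
    """Record a post-synaptic spike instead of performing a column update."""
    times = post_buffer.setdefault(j, [])
    times.append(t)
    if len(times) > buffer_size:   # the limited buffer size is the source of approximation error
        times.pop(0)

def on_pre_spike(i, t, weights, stdp_approx):
    """Single row access: update every synapse leaving pre-neuron i using the
    buffered post-synaptic spike history, so no separate column access is needed."""
    for j, post_times in post_buffer.items():
        weights[i, j] += stdp_approx(t, post_times)

# Illustrative use with a toy 4x4 weight matrix and a crude exponential depression term
# for post-before-pre spike pairs.
weights = np.zeros((4, 4))
on_post_spike(j=2, t=10.0)
on_pre_spike(i=0, t=12.0, weights=weights,
             stdp_approx=lambda t, ts: -0.01 * sum(np.exp(-(t - tp) / 20.0) for tp in ts))
```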
3. Analysing the Energy Efficiency of Training Spiking Neural Networks. Liu, Richard; Bixo, Fredrik, January 2022.
Neural networks have become increasingly adopted in society over the last few years. Because neural networks consume a lot of energy to train, reducing the energy consumption of these networks is desirable from an environmental perspective. Spiking neural networks are a type of neural network, inspired by the human brain, that is significantly more energy efficient than traditional neural networks. However, there is little research on how the hyperparameters of these networks affect the relationship between accuracy and energy. The aim of this report is therefore to analyse this relationship. To do this, we measure the energy usage of training several different spiking network models. The results of this study show that the choice of hyperparameters in a neural network does affect the efficiency of the network. While the correlation between any individual factor and energy consumption is inconclusive, this work could be used as a springboard for further research in this area.
4. A neuromorphic approach for edge use allocation. Petersson Steenari, Kim, January 2022.
This paper introduces a new way of solving an edge user allocation problem. The problem is to be solved with a network of spiking neurons. This network should quickly, and with low energy cost, solve the optimization problem of allocating users to servers while minimizing the number of servers hired, in order to reduce the related hiring cost. The demonstrated method is a simulation of an approach that could be implemented on neuromorphic hardware. It is written in Python using the Brian2 spiking neural network simulator. The core of the method involves simulating an energy function through the use of circuit motifs. The dynamics of these circuit motifs mimic a search for the lowest energy point in an energy landscape, corresponding to a valid solution for the edge user allocation problem. The paper also shows the results of testing this network within the Brian2 environment.
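For readers unfamiliar with the simulator mentioned above, here is a minimal, generic Brian2 example of a leaky integrate-and-fire population; it only illustrates the simulation API and is not the thesis's energy-function circuit motif or allocation network.

```python
from brian2 import NeuronGroup, Synapses, SpikeMonitor, run, ms

# A small LIF population with sparse excitatory recurrence, driven toward threshold
# by a constant input. Values are illustrative only.
tau = 10 * ms
eqs = 'dv/dt = (1.1 - v) / tau : 1'          # constant drive toward 1.1 so neurons fire
group = NeuronGroup(10, eqs, threshold='v > 1', reset='v = 0', method='exact')
synapses = Synapses(group, group, on_pre='v_post += 0.05')
synapses.connect(condition='i != j', p=0.2)  # sparse random recurrent connectivity
monitor = SpikeMonitor(group)

run(100 * ms)
print(f'{monitor.num_spikes} spikes recorded')
```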
5. GENERATIVE MODELS IN NATURAL LANGUAGE PROCESSING AND COMPUTER VISION. Talafha, Sameerah M, 01 August 2022.
Generative models are broadly used in many subfields of DL. DNNs have recently become a core approach to solving data-centric problems in image classification, translation, etc. The latest developments in parameterizing these models using DNNs and stochastic optimization algorithms have allowed scalable modeling of complex, high-dimensional data, including speech, text, and images. This dissertation proposal presents our state-of-the-art probabilistic bases and DL algorithms for generative models, including VAEs, GANs, and RNN-based encoder-decoders. The proposal also discusses application areas that may benefit from deep generative models in both NLP and computer vision. In NLP, we proposed an Arabic poetry generation model with extended phonetic and semantic embeddings (Phonetic CNN_subword embeddings). Extensive quantitative experiments using BLEU scores and Hamming distance show notable enhancements over strong baselines. Additionally, a comprehensive human evaluation confirms that the poems generated by our model outperform the base models in criteria including meaning, coherence, fluency, and poeticness. In computer vision, we proposed a generative video model using a hybrid VAE-GAN model. In addition, we integrate two attention mechanisms with the GAN to capture the essential regions of interest in a video, focused on enhancing the visual rendering of human motion in the generated output. We have conducted quantitative and qualitative experiments, including comparisons with other state-of-the-art models, for evaluation. Our results indicate that our model enhances performance compared with other models and performs favorably under different quantitative metrics: PSNR, SSIM, LPIPS, and FVD. Recently, mimicking biologically inspired learning in generative models based on SNNs has been shown to be effective in different applications. SNNs are the third generation of neural networks, in which neurons communicate through binary signals known as spikes, and they are more energy-efficient than DNNs. Moreover, DNN models are vulnerable to small adversarial perturbations that cause misclassification of legitimate images. This dissertation presents the proposed "VAE-Sleep" model, which combines ideas from the VAE and the sleep mechanism, leveraging the advantages of deep and spiking neural networks (DNN-SNN). On top of that, we present "Defense-VAE-Sleep", an extension of the VAE-Sleep model used to purge adversarial perturbations from contaminated images. We demonstrate the benefit of sleep in improving the generalization performance of the traditional VAE when the testing data differ in specific ways, even by a small amount, from the training data. We conduct extensive experiments, including comparisons with the state-of-the-art on different datasets.
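As background for the VAE-based models discussed above, here is a minimal PyTorch-style sketch of the reparameterization trick and the loss a plain VAE optimizes; it is a generic illustration under our own assumptions, not the dissertation's VAE-Sleep or Defense-VAE-Sleep implementation.

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), keeping gradients w.r.t. mu and logvar."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std

def vae_loss(recon_x, x, mu, logvar):
    """Negative ELBO: reconstruction error plus KL(q(z|x) = N(mu, sigma^2) || p(z) = N(0, I))."""
    recon = F.mse_loss(recon_x, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```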
6. Managing a real-time massively-parallel neural architecture. Patterson, James Cameron, January 2012.
A human brain has billions of processing elements operating simultaneously; the only practical way to model this computationally is with a massively-parallel computer. A computer on such a significant scale requires hundreds of thousands of interconnected processing elements, a complex environment which requires many levels of monitoring, management and control. Management begins from the moment power is applied and continues whilst the application software loads, executes, and the results are downloaded. This is the story of the research and development of a framework of scalable management tools that support SpiNNaker, a novel computing architecture designed to model spiking neural networks of biologically-significant sizes. This management framework provides solutions from the most fundamental set of power-on self-tests, through to complex, real-time monitoring of the health of the hardware and the software during simulation. The framework devised uses standard tools where appropriate, covering hardware up / down events and capacity information, through to bespoke software developed to provide real-time insight into neural network software operation across multiple levels of abstraction. With this layered management approach, users (or automated agents) have access to results dynamically and are able to make informed decisions on required actions in real-time.
7. Training Methodologies for Energy-Efficient, Low Latency Spiking Neural Networks. Nitin Rathi (11849999), 17 December 2021.
Deep learning models have become the de-facto solution in various fields like computer vision, natural language processing, robotics, drug discovery, and many others. The skyrocketing performance and success of multi-layer neural networks comes at a significant power and energy cost. Thus, there is a need to rethink the current trajectory and explore different computing frameworks. One such option is spiking neural networks (SNNs), which are inspired by the spike-based processing observed in biological brains. SNNs, operating with binary signals (or spikes), can potentially be an energy-efficient alternative to the power-hungry analog neural networks (ANNs) that operate on real-valued analog signals. The binary all-or-nothing spike-based communication in SNNs, implemented on event-driven hardware, offers a low-power alternative to ANNs. A spike is a Delta function with magnitude 1. With all its appeal for low power, training SNNs efficiently for high accuracy remains an active area of research. The existing ANN training methodologies, when applied to SNNs, result in networks that have very high latency. Supervised training of SNNs with spikes is challenging (due to discontinuous gradients) and resource-intensive (time, compute, and memory). Thus, we propose compression methods, training methodologies, and learning rules for SNNs.

First, we propose compression techniques for SNNs based on the unsupervised spike-timing-dependent plasticity (STDP) model. We present a sparse SNN topology where non-critical connections are pruned to reduce the network size and the remaining critical synapses are weight-quantized to accommodate the limited conductance levels in emerging in-memory computing hardware. Pruning is based on the power-law weight-dependent STDP model; synapses between pre- and post-neurons with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned. The process of pruning non-critical connections and quantizing the weights of critical synapses is performed at regular intervals during training.

Second, we propose a multimodal SNN that combines two modalities (image and audio). The two unimodal ensembles are connected with cross-modal connections and the entire network is trained with unsupervised learning. The network receives inputs in both modalities for the same class and predicts the class label. The excitatory connections in the unimodal ensembles and the cross-modal connections are trained with STDP. The cross-modal connections capture the correlation between neurons of different modalities. The multimodal network learns features of both modalities and improves the classification accuracy compared to the unimodal topology, even when one of the modalities is distorted by noise. The cross-modal connections are only excitatory and do not inhibit the normal activity of the unimodal ensembles.

Third, we explore supervised learning methods for SNNs. Many works have shown that an SNN for inference can be formed by copying the weights from a trained ANN and setting the firing threshold for each layer as the maximum input received in that layer. These converted SNNs require a large number of time steps to achieve competitive accuracy, which diminishes the energy savings. The number of time steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing-dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time. The weight update is proportional to the difference in spike timing between the current time step and the most recent time step at which the neuron generated an output spike.

Fourth, we present techniques to further reduce the inference latency in SNNs. SNNs suffer from high inference latency, resulting from inefficient input encoding and sub-optimal settings of the neuron parameters (firing threshold and membrane leak). We propose DIET-SNN, a low-latency deep spiking network that is trained with gradient descent to optimize the membrane leak and the firing threshold along with the other network parameters (weights). The membrane leak and threshold for each layer of the SNN are optimized with end-to-end backpropagation to achieve competitive accuracy at reduced latency. The analog pixel values of an image are directly applied to the input layer of DIET-SNN without the need to convert them to spike trains. The first convolutional layer is trained to convert inputs into spikes, where leaky-integrate-and-fire (LIF) neurons integrate the weighted inputs and generate an output spike when the membrane potential crosses the trained firing threshold. The trained membrane leak controls the flow of input information and attenuates irrelevant inputs to increase the activation sparsity in the convolutional and dense layers of the network. The reduced latency combined with high activation sparsity provides large improvements in computational efficiency.

Finally, we explore the application of SNNs in sequential learning tasks. We propose LITE-SNN, a lightweight SNN suitable for sequential learning tasks on data from dynamic vision sensors (DVS) and natural language processing (NLP). In general, sequential data is processed with complex recurrent neural networks (like long short-term memory (LSTM) and gated recurrent unit (GRU) networks) with explicit feedback connections and internal states to handle the long-term dependencies. In contrast, the neuron models in SNNs - integrate-and-fire (IF) or leaky-integrate-and-fire (LIF) - have implicit feedback in their internal state (the membrane potential) by design and can be leveraged for sequential tasks. The membrane potential in the IF/LIF neuron integrates the incoming current and outputs an event (or spike) when the potential crosses a threshold value. Since SNNs compute with highly sparse spike-based spatio-temporal data, the energy per inference is lower than for LSTMs/GRUs. SNNs also have fewer parameters than LSTMs/GRUs, resulting in smaller models and faster inference. We observe the problem of vanishing gradients in vanilla SNNs for longer sequences and implement a convolutional SNN with attention layers to perform sequence-to-sequence learning tasks. The inherent recurrence in SNNs, in addition to the fully parallelized convolutional operations, provides an additional mechanism to model sequential dependencies and leads to better accuracy than convolutional neural networks with ReLU activations.
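To illustrate the kind of spike-based backpropagation referenced above, here is a minimal PyTorch sketch of a spiking activation with a surrogate gradient; the fast-sigmoid surrogate and the leak factor shown are common generic choices and assumptions, not the spike-timing-based surrogate (STDB) or the DIET-SNN settings proposed in the thesis.

```python
import torch

class SpikeFunction(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_minus_threshold):
        ctx.save_for_backward(membrane_minus_threshold)
        return (membrane_minus_threshold > 0).float()   # binary spike

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Generic fast-sigmoid surrogate, peaked at the firing threshold;
        # the thesis's STDB instead defines the gradient from the neuron's spike time.
        surrogate = 1.0 / (1.0 + 10.0 * u.abs()) ** 2
        return grad_output * surrogate

# Example: one LIF-style time step with a trainable threshold (illustrative values).
spike_fn = SpikeFunction.apply
membrane = torch.zeros(4)
weights = torch.randn(4, requires_grad=True)
threshold = torch.tensor(1.0, requires_grad=True)
inputs = torch.rand(4)

membrane = 0.9 * membrane + weights * inputs   # leaky integration (leak factor 0.9 assumed)
spikes = spike_fn(membrane - threshold)        # differentiable spike generation
spikes.sum().backward()                        # gradients flow to weights and threshold
```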
8. Product Matching Using Image Similarity. Forssell, Melker; Janér, Gustav, January 2020.
PriceRunner is an online shopping comparison company. To maintain up-to-date prices, PriceRunner has to process large amounts of data every day. The processing of the data includes matching unknown products, referred to as offers, to known products. Offer data includes information about the product such as title, description, price, and often one image of the product. PriceRunner has previously implemented a text-based machine learning (ML) model, but is also looking for new approaches to complement the current product matching system. The objective of this master's thesis is to investigate the potential of using an image-based ML model for product matching. Our method uses a similarity learning approach where the network learns to recognise the similarity between images. To achieve this, a Siamese neural network was trained with the triplet loss function. The network is trained to map similar images closer together and dissimilar images further apart in a vector space. This approach is often used for face recognition, where there is a large number of classes and a limited number of images per class, and new classes are frequently added. This is also the case for the image data used in this thesis project. A general model was trained on images from the Clothing and Accessories hierarchy, one of the 16 top-level hierarchies at PriceRunner, consisting of 17 product categories. The results varied between product categories. Some categories proved to be less suitable for image-based classification while others excelled. The model handles new classes relatively well with little or no retraining. It was concluded that there is potential in using images to complement the current product matching system at PriceRunner.
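As a concrete reference for the loss described above, here is a minimal PyTorch-style sketch of the triplet loss used to train such embedding networks; the margin value and the Euclidean distance are illustrative assumptions, not the thesis's exact configuration.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor-positive pair together and push the anchor-negative pair apart,
    until the negative is at least `margin` further away than the positive."""
    d_pos = F.pairwise_distance(anchor, positive)   # Euclidean distance between embeddings
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random 64-dimensional embeddings for a batch of 8 triplets.
emb = lambda: torch.randn(8, 64)
loss = triplet_loss(emb(), emb(), emb())
```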
9. Amygdala Modeling with Context and Motivation Using Spiking Neural Networks for Robotics Applications. Zeglen, Matthew Aaron, 27 May 2022.
No description available.
10. HIGH PERFORMANCE AND ENERGY EFFICIENT DEEP LEARNING MODELS. Bing Han (12872594), 16 June 2022.
Spiking Neural Networks (SNNs) have recently attracted significant research interest as the third generation of artificial neural networks that can enable low-power event-driven data analytics. We propose ANN-SNN conversion using a “soft reset” spiking neuron model, referred to as the Residual Membrane Potential (RMP) spiking neuron, which retains the “residual” membrane potential above threshold at the firing instants. In addition, we propose a time-based coding scheme, named Temporal-Switch-Coding (TSC), and a corresponding TSC spiking neuron model. Each input image pixel is presented using two spikes with opposite polarity, and the timing between the two spiking instants is proportional to the pixel intensity. We demonstrate near loss-less ANN-SNN conversion using RMP neurons for VGG-16, ResNet-20, and ResNet-34 SNNs on challenging datasets including CIFAR-10, CIFAR-100, and ImageNet. With the help of TSC coding, we achieve 7-14.5× lower inference latency, and 30-60× fewer addition operations and memory accesses per inference across datasets, compared to the state-of-the-art (SOTA) SNN models. In the second part of the thesis, we propose a new type of recurrent neural network (RNN) architecture, named the Oscillatory Fourier Neural Network (O-FNN). We demonstrate that O-FNN is mathematically equivalent to a simplified form of the Discrete Fourier Transform applied onto periodical activation. In particular, the computationally intensive back-propagation through time in training is eliminated, leading to faster training while achieving SOTA inference accuracy in a diverse group of sequential tasks. For instance, applying the proposed model to sentiment analysis on the IMDB review dataset reaches 89.4% test accuracy within 5 epochs, accompanied by an over 35x reduction in model size compared to Long Short-Term Memory (LSTM). The proposed novel RNN architecture is well poised for intelligent sequential processing in resource-constrained hardware.
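To make the “soft reset” behaviour described above concrete, here is a minimal sketch of one integrate-and-fire time step in which the residual membrane potential above threshold is retained after a spike; the threshold value and tensor shapes are illustrative assumptions rather than the thesis's converted-network settings.

```python
import torch

def rmp_if_step(membrane, input_current, threshold=1.0):
    """One integrate-and-fire step with 'soft reset': on firing, subtract the threshold
    so the residual potential above threshold carries over to the next time step,
    instead of a hard reset to zero."""
    membrane = membrane + input_current
    spikes = (membrane >= threshold).float()
    membrane = membrane - threshold * spikes
    return spikes, membrane

# Toy usage: a neuron driven above threshold keeps its residual after spiking.
v = torch.zeros(3)
spikes, v = rmp_if_step(v, torch.tensor([0.4, 1.3, 2.1]))
print(spikes, v)   # spikes: [0., 1., 1.]; residuals: [0.4, 0.3, 1.1] (approximately)
```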