  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Training of Object Detection Spiking Neural Networks for Event-Based Vision

Johansson, Olof January 2021 (has links)
Event-based vision offers higher dynamic range, finer time resolution, and lower latency than conventional frame-based vision sensors. These attributes are useful in varying light conditions and fast motion. However, there are no neural network models and training protocols optimized for object detection with event data, and conventional artificial neural networks for frame-based data are not directly suitable for that task. Spiking neural networks are natural candidates, but further work is required to develop an efficient object detection architecture and end-to-end training protocol. For example, object detection in varying light conditions is identified as a challenging problem for the automation of construction equipment such as earth-moving machines, aiming to increase the safety of operators and make repetitive processes less tedious. This work focuses on the development and evaluation of a neural network for object detection with data from an event-based sensor. Furthermore, the strengths and weaknesses of an event-based vision solution are discussed in relation to the known challenges described in former works on automation of earth-moving machines. A solution for object detection with event data is implemented as a modified YOLOv3 network with spiking convolutional layers trained with a backpropagation algorithm adapted for spiking neural networks. The performance is evaluated on the N-Caltech101 dataset with classes for airplanes and motorbikes, resulting in a mAP of 95.8% for the combined network and 98.8% for the original YOLOv3 network with the same architecture. The solution is investigated as a proof of concept and suggestions for further work are described based on a recurrent spiking neural network.
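The spiking convolutional layers described above build on integrate-and-fire dynamics. As a rough illustration (not the thesis's actual YOLOv3-based architecture or training algorithm), a minimal leaky integrate-and-fire layer can be sketched in plain numpy:

```python
import numpy as np

def lif_forward(inputs, threshold=1.0, leak=0.9):
    """Simulate a layer of leaky integrate-and-fire (LIF) neurons.

    inputs: array of shape (timesteps, n_neurons) holding input currents.
    Returns a binary spike train of the same shape.
    """
    v = np.zeros(inputs.shape[1])        # membrane potentials
    spikes = np.zeros_like(inputs)
    for t in range(inputs.shape[0]):
        v = leak * v + inputs[t]         # leaky integration of input current
        fired = v >= threshold
        spikes[t] = fired.astype(float)
        v[fired] = 0.0                   # hard reset after a spike
    return spikes

# Constant input current 0.3: each neuron integrates up and fires periodically.
train = lif_forward(np.full((10, 1), 0.3))
```

The threshold, leak factor, and reset rule here are illustrative defaults; a trainable network would combine such dynamics with convolutional weights and a spike-compatible backpropagation rule.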
152

Deep learning in event-based neuromorphic systems / L'apprentissage profond dans les systèmes évènementiels, bio-inspirés

Thiele, Johannes C. 22 November 2019 (has links)
Inference and training in deep neural networks require large amounts of computation, which in many cases prevents the integration of deep networks in resource-constrained environments. Event-based spiking neural networks are an alternative to standard artificial neural networks that promises more energy-efficient processing. However, training spiking neural networks to achieve high inference performance is still challenging, in particular when learning must also be compatible with neuromorphic constraints. This thesis studies training algorithms and information encoding in such deep networks of spiking neurons. Starting from a biologically inspired learning rule, we analyze which properties of learning rules are necessary in deep spiking neural networks to enable embedded learning in a continuous learning scenario. We show that a time-scale-invariant learning rule based on spike-timing-dependent plasticity is able to perform hierarchical feature extraction and classification of simple objects from the MNIST and N-MNIST datasets. To overcome certain limitations of this approach, we design a novel framework for spike-based learning, SpikeGrad, which is a fully event-based implementation of the gradient backpropagation algorithm. We show how this algorithm can be used to train a spiking network that infers relations between numbers and MNIST images. Additionally, we demonstrate that the framework is able to train large-scale convolutional spiking networks to competitive recognition rates on the MNIST and CIFAR10 datasets.
In addition to being an effective and precise learning mechanism, SpikeGrad allows the response of the spiking neural network to be described in terms of a standard artificial neural network, which enables faster simulation of spiking neural network training. Our work therefore introduces several powerful training concepts for on-chip learning in neuromorphic devices that could help scale spiking neural networks to real-world problems.
153

Characterization of a Spiking Neuron Model via a Linear Approach

Jabalameli, Amirhossein 01 January 2015 (has links)
In the past decade, characterizing spiking neuron models has been extensively researched as an essential issue in computational neuroscience. In this thesis, we examine the estimation problem for two different neuron models. In Chapter 2, we propose a modified Izhikevich model with an adaptive threshold. In our two-stage estimation approach, a linear least squares method and a linear model of the threshold are derived to predict the location of neuronal spikes. However, the desired results are not obtained and the predicted model is unsuccessful in duplicating the spike locations. Chapter 3 is focused on the parameter estimation problem for a multi-timescale adaptive threshold (MAT) neuronal model. Using the dynamics of a non-resetting leaky integrator equipped with an adaptive threshold, a constrained iterative linear least squares method is implemented to fit the model to the reference data. Through manipulation of the system dynamics, the threshold voltage can be obtained as a realizable model that is linear in the unknown parameters. This linearly parametrized realizable model is then utilized inside a prediction-error-based framework to identify the threshold parameters with the purpose of predicting single-neuron precise firing times. This estimation scheme is evaluated using both synthetic data obtained from an exact model and experimental data obtained from in vitro rat somatosensory cortical neurons. Results show the ability of this approach to fit the MAT model to different types of reference data.
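The key property exploited in Chapter 3 is that, with fixed time constants, the MAT threshold is a baseline plus a sum of exponentials triggered at past spikes, so it is linear in the unknown amplitude parameters and ordinary least squares applies. A simplified numpy sketch (synthetic spike times and assumed time constants, not the thesis's actual fitting pipeline):

```python
import numpy as np

# Hypothetical spike times and a regular sampling grid (arbitrary units).
spike_times = np.array([10.0, 25.0, 42.0, 60.0])
t_grid = np.arange(0.0, 80.0, 1.0)
tau1, tau2 = 10.0, 200.0   # fixed time constants (assumed, not estimated)

def design_matrix(t_grid, spike_times):
    """Each column multiplies one unknown: [omega, a1, a2]."""
    h1 = np.zeros_like(t_grid)
    h2 = np.zeros_like(t_grid)
    for tk in spike_times:
        past = t_grid >= tk
        h1[past] += np.exp(-(t_grid[past] - tk) / tau1)
        h2[past] += np.exp(-(t_grid[past] - tk) / tau2)
    return np.column_stack([np.ones_like(t_grid), h1, h2])

X = design_matrix(t_grid, spike_times)
true_params = np.array([1.0, 0.5, 0.2])   # omega, a1, a2 (synthetic)
theta = X @ true_params                   # synthetic reference threshold
est, *_ = np.linalg.lstsq(X, theta, rcond=None)
```

On noiseless synthetic data the amplitudes are recovered exactly; the thesis adds constraints and iterates against measured reference data.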
154

Models of EEG data mining and classification in temporal lobe epilepsy: wavelet-chaos-neural network methodology and spiking neural networks

Ghosh Dastidar, Samanwoy 22 June 2007 (has links)
No description available.
155

Low-Power UAV Detection Using Spiking Neural Networks and Event Cameras

Eldeborg Lundin, Anton, Winzell, Rasmus January 2024 (has links)
The growing availability of UAVs has created a demand for drone detection systems. Several studies have used neuromorphic cameras to detect UAVs; however, a fully neuromorphic system remains to be explored. We present a fully neuromorphic system consisting of an event camera and a spiking neural network running on neuromorphic hardware. Two spiking neural network architectures have been evaluated and compared to a non-spiking artificial neural network. The spiking networks show promise and perform on par with the non-spiking network in a few scenarios. Spiking networks were deployed on the Synsense Speck, a neuromorphic system on a chip, and demonstrated increased performance compared to simulations. The deployed network is capable of detecting drones up to a distance of 20 meters with high probability while consuming less than 7.13 milliwatts. The system can operate for over a year powered by a small power bank. In contrast, the equivalent non-spiking network running on the NVIDIA Jetson would operate for a few hours. The use of neuromorphic hardware enables sustained UAV detection in remote and challenging environments previously deemed inaccessible due to power constraints.
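The battery-life claim can be sanity-checked with back-of-the-envelope arithmetic; the power-bank capacity below is an assumption for illustration (the abstract does not specify one):

```python
# Assuming a typical 20,000 mAh power bank at a 3.7 V nominal cell voltage.
capacity_wh = 20.0 * 3.7    # 20 Ah * 3.7 V = 74 Wh
power_w = 7.13e-3           # reported consumption: 7.13 mW
hours = capacity_wh / power_w
days = hours / 24           # comfortably over a year at this draw
```

This ignores power-bank self-discharge and conversion losses, which would reduce the real figure, but the order of magnitude supports the claim.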
156

ENHANCING VISUAL UNDERSTANDING AND ENERGY-EFFICIENCY IN DEEP NEURAL NETWORKS

Sayeed Shafayet Chowdhury (19469710) 23 August 2024 (has links)
Today’s deep neural networks (DNNs) have achieved tremendous performance in various domains such as computer vision, natural language processing, robotics, generative tasks etc. However, these high-performing DNNs require enormous amounts of compute, resulting in significant power consumption. Moreover, they often struggle in terms of visual understanding capabilities. To that effect, this thesis focuses on two aspects - enhancing the efficiency of neural networks and improving their visual understanding. On the efficiency front, we leverage brain-inspired Spiking Neural Networks (SNNs), which offer a promising alternative to traditional deep learning. We first perform a comparative analysis between models with and without leak, revealing that the leaky-integrate-and-fire (LIF) model provides improved robustness and better generalization compared to integrate-and-fire (IF). However, leak decreases the sparsity of computation. In the second work, by introducing a Discrete Cosine Transform-based novel spike encoding scheme (DCT-SNN), we demonstrate significant performance improvements, achieving a 2-14X reduction in latency compared to state-of-the-art SNNs. Next, a novel temporal pruning method is proposed, which dynamically reduces the number of timesteps during training, enabling SNN inference with just one timestep while maintaining high accuracy. The second focus of the thesis is on improving the visual understanding aspect of DNNs. The first work along this direction introduces a framework for visual syntactic understanding, drawing parallels between linguistic syntax and visual components of an image. By manipulating images to create syntactically incorrect examples and using a BERT-like autoencoder for reconstruction, the study significantly enhances the visual syntactic recognition capabilities of DNNs, evidenced by substantial improvements in classification accuracies on the CelebA and AFHQ datasets.
Further, the thesis tackles unsupervised procedure learning from videos, given multiple videos of the same underlying task. Employing optimal transport (OT) and introducing novel regularization strategies, we develop the ‘OPEL’ framework, which substantially outperforms existing methods (27-46% average enhancement in F1-score) on both egocentric and third-person benchmarks. Overall, the dissertation advances the field by proposing brain-inspired models and novel learning frameworks that significantly enhance the efficiency and visual understanding capabilities of deep learning systems, making them more suitable for real-world applications.
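As a loose illustration of the idea behind a frequency-domain spike encoding such as DCT-SNN (the exact scheme in the dissertation may differ), one can compute DCT coefficients of an input and let larger-magnitude coefficients spike earlier:

```python
import numpy as np

def dct2(x):
    """Orthonormal DCT-II of a 1-D signal in plain numpy."""
    N = len(x)
    n = np.arange(N)
    basis = np.cos(np.pi / N * (n[None, :] + 0.5) * n[:, None])
    coeffs = basis @ x
    coeffs[0] *= np.sqrt(1.0 / N)      # orthonormal scaling
    coeffs[1:] *= np.sqrt(2.0 / N)
    return coeffs

def latency_encode(coeffs, timesteps=8):
    """Hypothetical latency code: larger magnitude -> earlier spike time."""
    mag = np.abs(coeffs)
    t = (1.0 - mag / mag.max()) * (timesteps - 1)
    return np.round(t).astype(int)

signal = np.cos(2 * np.pi * np.arange(16) / 16)  # one full cosine cycle
times = latency_encode(dct2(signal))             # dominant coefficient fires first
```

For a single cosine cycle the energy concentrates in one DCT coefficient (k = 2, since basis k holds k half-cycles), which is assigned the earliest spike time.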
157

Equivalence of Additive and Multiplicative Coupling in Spiking Neural Networks

Börner, Georg, Schittler Neves, Fabio, Timme, Marc 08 November 2024 (has links)
Spiking neural network models characterize the emergent collective dynamics of circuits of biological neurons and help engineer neuro-inspired solutions across fields. Most dynamical-systems models of spiking neural networks exhibit one of two major types of interactions: First, the response of a neuron’s state variable to incoming pulse signals (spikes) may be additive and independent of its current state. Second, the response may depend on the neuron’s current state, multiplying a function of the state variable. Here we reveal that deterministic spiking neural network models with additive coupling are equivalent to models with multiplicative coupling for simultaneously modified intrinsic neuron time evolution. As a consequence, the same collective dynamics can be attained by state-dependent multiplicative and constant (state-independent) additive coupling. Such a mapping enables the transfer of theoretical results between spiking neural network models with different types of interaction mechanisms and at the same time extends the option space for hardware implementation or modeling. By allowing one to choose the coupling type or neuron type that is simplest to implement in a given practical situation where a specific dynamic or functionality is required, it potentially enables simpler or more effective engineering applications.
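The equivalence can be sketched for a one-dimensional neuron model (a schematic reconstruction, not the paper's exact notation). With additive coupling the state $V$ receives state-independent jumps, while with multiplicative coupling the jumps of $W$ are scaled by $g(W)$:

```latex
\dot{V} = F(V) + \sum_j \varepsilon_j\, \delta(t - t_j)
\qquad
\dot{W} = \tilde{F}(W) + g(W) \sum_j \varepsilon_j\, \delta(t - t_j)
```

Choosing an invertible change of variables $V = h(W)$ with $h'(W) = 1/g(W)$ maps the multiplicative form onto the additive one, with modified intrinsic dynamics $F(V) = \tilde{F}\!\left(h^{-1}(V)\right) h'\!\left(h^{-1}(V)\right)$: the pulse response of $V$ becomes state-independent (to leading order in the pulse strength; a careful treatment must interpret the state-dependent jump consistently). This is the "simultaneously modified intrinsic neuron time evolution" of the abstract.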
158

A High-Level Interface for Accelerating Spiking Neural Networks on the Edge with Heterogeneous Hardware : Enabling Rapid Prototyping of Training Algorithms and Topologies on Field-Programmable Gate Arrays

Eidlitz Rivera, Kaspar Oscarsson January 2024 (has links)
With the increasing use of machine learning by devices at the network's edge, a trend of moving computation from data centers to these devices is emerging. This shift imposes strict energy requirements on the algorithms used and the hardware on which they are implemented. Neuromorphic spiking neural networks (SNNs) and heterogeneous systems on a chip (SoCs) are showing great potential for energy-efficient computing on the edge. This thesis describes the development of a high-level interface for accelerating SNNs on an FPGA–CPU SoC. The system is based on an existing open-source, low-level implementation, adapting it for a research-focused Python front-end. The developed interface provides a productive environment for exploring and evaluating SNN algorithms and topologies through compatibility with industry-standard tools for numerical computing, data analysis, and visualization, while still taking full advantage of FPGA-based hardware acceleration. The system is evaluated and showcased by analyzing the training of a small network to solve the XOR problem. As the project matures, future development could enable integration with commonly used machine learning libraries, further increasing its potential.
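The XOR task used to showcase the system can itself be solved by a tiny two-layer threshold network; the hand-wired weights below are purely illustrative and unrelated to what the FPGA-trained network would learn:

```python
import numpy as np

def step(x):
    """Heaviside threshold, the simplest spiking-style nonlinearity."""
    return (np.asarray(x) > 0).astype(float)

def xor_net(x1, x2):
    # Hidden unit 1 fires if at least one input is active (OR).
    h1 = step(x1 + x2 - 0.5)
    # Hidden unit 2 fires only if both inputs are active (AND).
    h2 = step(x1 + x2 - 1.5)
    # Output fires for OR-but-not-AND, i.e. XOR.
    return step(h1 - h2 - 0.5)

outputs = [float(xor_net(a, b)) for a in (0, 1) for b in (0, 1)]
```

XOR is the classic non-linearly-separable benchmark, which is why a hidden layer (here two threshold units) is needed at all, and why it makes a compact smoke test for a training pipeline.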
159

Theoretical Studies of the Dynamics of Action Potential Initiation and its Role in Neuronal Encoding / Theoretische Studie über die Dynamik der Aktionspotentialauslösung und seine Rolle in neuronaler Kodierung

Wei, Wei 21 January 2011 (has links)
No description available.
160

ANNarchy: a code generation approach to neural simulations on parallel hardware

Vitay, Julien, Dinkelbach, Helge Ülo, Hamker, Fred Henrik 07 October 2015 (has links) (PDF)
Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which makes it easy to define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions.
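The equation-oriented style can be illustrated with a toy stand-in: a model given as a string is compiled to a function once, a crude analogue of generating simulation code from a model description (ANNarchy itself emits C++; the string convention below is invented for this sketch and is not ANNarchy's actual syntax):

```python
import numpy as np

def simulate(equation, params, v0=0.0, dt=0.1, steps=100):
    """Euler-integrate dv/dt = <equation>, given as a string.

    The string is turned into a function once, loosely mimicking a
    code-generation step from an equation-oriented model description.
    """
    rhs = eval("lambda v, p: " + equation, {"np": np})
    v, trace = v0, [v0]
    for _ in range(steps):
        v = v + dt * rhs(v, params)   # forward Euler update
        trace.append(v)
    return np.array(trace)

# Leaky membrane relaxing toward E = 1.0 with time constant tau = 10.
trace = simulate("(p['E'] - v) / p['tau']", {"E": 1.0, "tau": 10.0})
```

A real simulator would parse the equations properly, offer several integration schemes, and compile to native code for the target hardware, but the separation between model description and generated update rule is the same idea.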
