51 |
Efficient and Robust Deep Learning through Approximate Computing
Sanchari Sen (9178400) 28 July 2020 (has links)
Deep Neural Networks (DNNs) have greatly advanced the state-of-the-art in a wide range of machine learning tasks involving image, video, speech and text analytics, and are deployed in numerous widely-used products and services. Improvements in the capabilities of hardware platforms such as Graphics Processing Units (GPUs) and specialized accelerators have been instrumental in enabling these advances, as they have allowed more complex and accurate networks to be trained and deployed. However, the enormous computational and memory demands of DNNs continue to increase with growing data size and network complexity, posing a continuing challenge to computing system designers. For instance, state-of-the-art image recognition DNNs require hundreds of millions of parameters and hundreds of billions of multiply-accumulate operations, while state-of-the-art language models require hundreds of billions of parameters and several trillion operations to process a single input instance. Another major obstacle to the adoption of DNNs, despite their impressive accuracies on a range of datasets, has been their lack of robustness. Specifically, recent efforts have demonstrated that small, carefully-introduced input perturbations can force a DNN to behave in unexpected and erroneous ways, which can have severe consequences in safety-critical DNN applications like healthcare and autonomous vehicles. In this dissertation, we explore approximate computing as an avenue to improve the speed and energy efficiency of DNNs, as well as their robustness to input perturbations.
Approximate computing involves executing selected computations of an application in an approximate manner, while generating favorable trade-offs between computational efficiency and output quality. The intrinsic error resilience of machine learning applications makes them excellent candidates for approximate computing, allowing us to achieve execution time and energy reductions with minimal effect on the quality of outputs. This dissertation performs a comprehensive analysis of different approximate computing techniques for improving the execution efficiency of DNNs. Complementary to generic approximation techniques like quantization, it identifies approximation opportunities based on the specific characteristics of three popular classes of networks: Feed-forward Neural Networks (FFNNs), Recurrent Neural Networks (RNNs) and Spiking Neural Networks (SNNs), which vary considerably in their network structure and computational patterns.
First, in the context of feed-forward neural networks, we identify sparsity, or the presence of zero values in the data structures (activations, weights, gradients and errors), as a major source of redundancy and therefore an easy target for approximations. We develop lightweight micro-architectural and instruction set extensions to a general-purpose processor core that enable it to dynamically detect zero values when they are loaded and skip future instructions that are rendered redundant by them. Next, we explore LSTMs (the most widely used class of RNNs), which map sequences from an input space to an output space. We propose hardware-agnostic approximations that dynamically skip redundant symbols in the input sequence and discard redundant elements in the state vector to achieve execution time benefits. Following that, we consider SNNs, an emerging class of neural networks that represent and process information in the form of sequences of binary spikes. Observing that spike-triggered updates along synaptic connections are the dominant operation in SNNs, we propose hardware and software techniques to identify connections whose deactivation minimally impacts output quality, and to deactivate them dynamically, skipping any associated updates.
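The zero-skipping idea above translates directly into software. Below is a minimal NumPy sketch of the same principle: detect zero activations when they are loaded and skip the multiply-accumulates they render redundant. The actual contribution implements this detection in micro-architecture and ISA extensions, not in Python.

```python
import numpy as np

def sparse_matvec(weights: np.ndarray, activations: np.ndarray) -> np.ndarray:
    """Multiply-accumulate that skips work for zero activations.

    A software analogue of the hardware technique: each zero detected
    in the input vector renders a whole column of multiply-accumulates
    redundant, so that work is never issued.
    """
    out = np.zeros(weights.shape[0])
    for j, a in enumerate(activations):
        if a == 0.0:               # dynamic zero detection -> skip the column
            continue
        out += weights[:, j] * a
    return out

# ReLU activations are often >50% zeros, so roughly half the columns are skipped.
rng = np.random.default_rng(0)
acts = np.maximum(rng.standard_normal(512), 0.0)   # ReLU-like sparsity
W = rng.standard_normal((256, 512))
y = sparse_matvec(W, acts)
assert np.allclose(y, W @ acts)
```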
The dissertation also delves into the efficacy of combining multiple approximate computing techniques to improve the execution efficiency of DNNs. In particular, we focus on the combination of quantization, which reduces the precision of DNN data structures, and pruning, which introduces sparsity in them. We observe that the ability of pruning to reduce the memory demands of quantized DNNs diminishes at lower precisions, as the overhead of storing non-zero locations alongside the values starts to dominate in the various sparse encoding schemes. We analyze this overhead and the overall compression of three different sparse formats across a range of sparsity and precision values, and propose a hybrid compression scheme that identifies the optimal sparse format for a pruned low-precision DNN.
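As a rough illustration of this overhead analysis, the sketch below compares the storage cost of a dense layout against three common sparse encodings (coordinate list, bitmap, and a CSR-like format) across precisions; the exact formats and cost models analyzed in the dissertation may differ.

```python
def storage_bits(n_elements, density, value_bits, index_bits=16, n_rows=1):
    """Approximate storage cost (in bits) of common sparse encodings.

    A back-of-the-envelope sketch of the precision/overhead trade-off:
    index metadata is fixed-cost per nonzero, so it dominates as the
    value precision shrinks.
    """
    nnz = int(n_elements * density)
    dense  = n_elements * value_bits
    coo    = nnz * (value_bits + index_bits)            # value + coordinate per nonzero
    bitmap = nnz * value_bits + n_elements              # values + 1 presence bit each
    csr    = nnz * (value_bits + index_bits) + (n_rows + 1) * index_bits
    return {"dense": dense, "coo": coo, "bitmap": bitmap, "csr": csr}

# At 8-bit precision the index overhead is tolerable; at 2 bits it dominates,
# so pruning buys little unless the encoding is chosen to match the precision.
for bits in (8, 4, 2):
    print(bits, storage_bits(n_elements=1 << 20, density=0.1, value_bits=bits))
```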
Along with improved execution efficiency of DNNs, the dissertation explores an additional advantage of approximate computing in the form of improved robustness. We propose ensembles of quantized DNN models with different numerical precisions as a new approach to increase robustness against adversarial attacks. It is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full-precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. We overcome this limitation and achieve the best of both worlds, i.e., the higher unperturbed accuracies of the full-precision models combined with the higher robustness of the low-precision models, by composing them in an ensemble.
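A minimal sketch of such a mixed-precision ensemble, assuming a simple uniform symmetric quantizer and averaging of output distributions (the dissertation's actual models and combination rule may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(w, bits):
    # Uniform symmetric quantizer, a simple stand-in for the scheme
    # actually used in the dissertation.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def softmax(z):
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

W = rng.standard_normal((64, 10))               # a trained full-precision "model"
members = [W, quantize(W, 4), quantize(W, 2)]   # mixed-precision ensemble

def ensemble_predict(x):
    # Averaging output distributions lets the more robust low-precision
    # members outvote adversarially flipped predictions of the
    # full-precision member, which in turn preserves clean accuracy.
    probs = [softmax(x @ Wq) for Wq in members]
    return np.mean(probs, axis=0).argmax(-1)

x = rng.standard_normal((5, 64))
print(ensemble_predict(x))
```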
In summary, this dissertation establishes approximate computing as a promising direction to improve the performance, energy efficiency and robustness of neural networks.
|
52 |
Training Methodologies for Energy-Efficient, Low Latency Spiking Neural Networks
Nitin Rathi (11849999) 17 December 2021 (has links)
Deep learning models have become the de-facto solution in various fields like computer vision, natural language processing, robotics, drug discovery, and many others. The skyrocketing performance and success of multi-layer neural networks come at a significant power and energy cost. Thus, there is a need to rethink the current trajectory and explore different computing frameworks. One such option is spiking neural networks (SNNs), which are inspired by the spike-based processing observed in biological brains. SNNs operate with binary signals (or spikes), where a spike is a Delta function with magnitude 1, and can potentially be an energy-efficient alternative to the power-hungry analog neural networks (ANNs) that operate on real-valued analog signals. The binary all-or-nothing spike-based communication of SNNs implemented on event-driven hardware offers a low-power alternative to ANNs. With all its appeal for low power, however, training SNNs efficiently for high accuracy remains an active area of research. Existing ANN training methodologies, when applied to SNNs, result in networks with very high latency, and supervised training of SNNs with spikes is challenging (due to discontinuous gradients) and resource-intensive (in time, compute, and memory). We therefore propose compression methods, training methodologies, and learning rules for SNNs.

First, we propose compression techniques for SNNs based on the unsupervised spike-timing-dependent plasticity (STDP) model. We present a sparse SNN topology where non-critical connections are pruned to reduce the network size, and the remaining critical synapses are weight-quantized to accommodate the limited conductance levels of emerging in-memory computing hardware. Pruning is based on the power-law weight-dependent STDP model: synapses between pre- and post-neurons with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned (a toy sketch of this rule appears below). The pruning of non-critical connections and the quantization of critical synapses are performed at regular intervals during training.

Second, we propose a multimodal SNN that combines two modalities (image and audio). The two unimodal ensembles are connected by cross-modal connections, and the entire network is trained with unsupervised learning. The network receives inputs in both modalities for the same class and predicts the class label. The excitatory connections in the unimodal ensembles and the cross-modal connections are trained with STDP. The cross-modal connections capture the correlation between neurons of different modalities. The multimodal network learns features of both modalities and improves the classification accuracy compared to the unimodal topology, even when one of the modalities is distorted by noise. The cross-modal connections are purely excitatory and do not inhibit the normal activity of the unimodal ensembles.
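A toy sketch of the correlation-driven pruning-and-quantization step from the first contribution, using plain spike-count correlation in place of the power-law weight-dependent STDP model (which is omitted here for brevity):

```python
import numpy as np

def prune_and_quantize(weights, pre_spikes, post_spikes,
                       prune_frac=0.5, n_levels=16):
    """Correlation-driven pruning plus weight quantization (toy sketch).

    `pre_spikes`/`post_spikes` are binary spike rasters of shape
    (n_pre, T) and (n_post, T). Synapses whose pre/post spike trains
    correlate weakly are pruned; survivors are quantized to the few
    conductance levels an in-memory-computing device can hold.
    """
    # spike-train correlation for every synapse, shape (n_post, n_pre)
    corr = post_spikes.astype(float) @ pre_spikes.T.astype(float)
    threshold = np.quantile(corr, prune_frac)
    mask = corr >= threshold                   # keep well-correlated synapses
    levels = np.linspace(weights.min(), weights.max(), n_levels)
    quantized = levels[np.abs(weights[..., None] - levels).argmin(-1)]
    return np.where(mask, quantized, 0.0), mask

rng = np.random.default_rng(0)
W = rng.random((10, 100))
pre = (rng.random((100, 500)) < 0.05).astype(int)
post = (rng.random((10, 500)) < 0.05).astype(int)
W_sparse, kept = prune_and_quantize(W, pre, post)
print("fraction of synapses kept:", kept.mean())
```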
Third, we explore supervised learning methods for SNNs. Many works have shown that an SNN for inference can be formed by copying the weights from a trained ANN and setting the firing threshold of each layer to the maximum input received in that layer. Such converted SNNs require a large number of time steps to achieve competitive accuracy, which diminishes the energy savings. The number of time steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally-efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing-dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time: the weight update is proportional to the difference between the current time step and the most recent time step at which the neuron generated an output spike.

Fourth, we present techniques to further reduce the inference latency of SNNs, which suffer from inefficient input encoding and sub-optimal settings of the neuron parameters (firing threshold and membrane leak). We propose DIET-SNN, a low-latency deep spiking network that is trained with gradient descent to optimize the membrane leak and the firing threshold along with the other network parameters (weights). The membrane leak and threshold of each layer of the SNN are optimized with end-to-end backpropagation to achieve competitive accuracy at reduced latency. The analog pixel values of an image are applied directly to the input layer of DIET-SNN, without conversion to a spike train. The first convolutional layer is trained to convert inputs into spikes, where leaky-integrate-and-fire (LIF) neurons integrate the weighted inputs and generate an output spike when the membrane potential crosses the trained firing threshold. The trained membrane leak controls the flow of input information and attenuates irrelevant inputs, increasing the activation sparsity in the convolutional and dense layers of the network. The reduced latency combined with high activation sparsity provides large improvements in computational efficiency.

Finally, we explore the application of SNNs to sequential learning tasks. We propose LITE-SNN, a lightweight SNN suitable for sequential learning tasks on data from dynamic vision sensors (DVS) and natural language processing (NLP). Sequential data are generally processed with complex recurrent neural networks (such as long short-term memory (LSTM) and gated recurrent units (GRU)) that use explicit feedback connections and internal states to handle long-term dependencies. Neuron models in SNNs, by contrast (integrate-and-fire (IF) or leaky-integrate-and-fire (LIF)), have implicit feedback in their internal state (the membrane potential) by design, which can be leveraged for sequential tasks. The membrane potential of the IF/LIF neuron integrates the incoming current and outputs an event (or spike) when the potential crosses a threshold value. Since SNNs compute with highly sparse spike-based spatio-temporal data, the energy per inference is lower than that of LSTMs/GRUs. SNNs also have fewer parameters than LSTMs/GRUs, resulting in smaller models and faster inference. We observe the problem of vanishing gradients in vanilla SNNs for longer sequences and implement a convolutional SNN with attention layers to perform sequence-to-sequence learning tasks. The inherent recurrence of SNNs, in addition to the fully parallelized convolutional operations, provides an additional mechanism to model sequential dependencies and leads to better accuracy than convolutional neural networks with ReLU activations.
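A minimal sketch of the LIF dynamics behind DIET-SNN's direct input encoding. Here the leak and threshold are fixed constants, whereas DIET-SNN learns them by backpropagation along with the weights, and the soft reset is an assumption of this sketch:

```python
import numpy as np

def lif_forward(inputs, w, leak=0.9, threshold=1.0, T=25):
    """One LIF layer unrolled over T time steps.

    `inputs` holds analog pixel values applied at every step, as in
    DIET-SNN's direct input encoding (no spike-train conversion); the
    layer itself converts them into spikes.
    """
    v = np.zeros(w.shape[0])                    # membrane potentials
    spikes = np.zeros((T, w.shape[0]))
    for t in range(T):
        v = leak * v + w @ inputs               # leaky integration of weighted input
        fired = v >= threshold
        spikes[t] = fired
        v = np.where(fired, v - threshold, v)   # soft reset keeps the residual
    return spikes

rng = np.random.default_rng(0)
x = rng.random(784)                             # analog pixels, no rate coding
w = 0.01 * rng.standard_normal((128, 784))
s = lif_forward(x, w)
print("mean firing rate:", s.mean())            # sparsity driver for later layers
```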
|
53 |
HIGH PERFORMANCE AND ENERGY EFFICIENT DEEP LEARNING MODELS
Bing Han (12872594) 16 June 2022 (has links)
Spiking Neural Networks (SNNs) have recently attracted significant research interest as the third generation of artificial neural networks that can enable low-power event-driven data analytics. We propose ANN-SNN conversion using a "soft reset" spiking neuron model, referred to as the Residual Membrane Potential (RMP) spiking neuron, which retains the "residual" membrane potential above threshold at the firing instants. In addition, we propose a time-based coding scheme, named Temporal-Switch-Coding (TSC), and a corresponding TSC spiking neuron model. Each input image pixel is presented using two spikes with opposite polarity, and the timing between the two spiking instants is proportional to the pixel intensity. We demonstrate near-lossless ANN-SNN conversion using RMP neurons for VGG-16, ResNet-20, and ResNet-34 SNNs on challenging datasets including CIFAR-10, CIFAR-100, and ImageNet. With the help of TSC coding, these SNNs achieve 7-14.5× lower inference latency, and 30-60× fewer addition operations and memory accesses per inference across datasets, compared to state-of-the-art (SOTA) SNN models. In the second part of the thesis, we propose a new type of recurrent neural network (RNN) architecture, named the Oscillatory Fourier Neural Network (O-FNN). We demonstrate that O-FNN is mathematically equivalent to a simplified form of the Discrete Fourier Transform applied to periodic activation. In particular, the computationally intensive back-propagation through time is eliminated from training, leading to faster training while achieving SOTA inference accuracy in a diverse group of sequential tasks. For instance, applying the proposed model to sentiment analysis on the IMDB review dataset reaches 89.4% test accuracy within 5 epochs, accompanied by over 35× reduction in model size compared to Long Short-Term Memory (LSTM). The proposed novel RNN architecture is well poised for intelligent sequential processing in resource-constrained hardware.
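A sketch of one plausible reading of TSC encoding: each pixel emits a positive spike immediately and a negative spike after a delay proportional to its intensity. The exact timing convention in the thesis may differ.

```python
import numpy as np

def tsc_encode(pixels, T=32):
    """Temporal-Switch-Coding sketch.

    Each pixel is represented by two spikes of opposite polarity; the
    gap between the two spiking instants encodes the pixel intensity.
    """
    pixels = np.clip(np.asarray(pixels, float), 0.0, 1.0)
    n = pixels.size
    spikes = np.zeros((T, n))
    dt = np.round(pixels * (T - 1)).astype(int)   # spike-timing gap ~ intensity
    spikes[0, :] = 1.0                            # positive spike at t = 0
    spikes[dt, np.arange(n)] -= 1.0               # negative spike dt steps later
    return spikes

# A pixel of intensity 0.5 in a 32-step window: +1 at t=0, -1 near t=16.
print(np.nonzero(tsc_encode([0.5]).ravel()))
```

Compared with rate coding, only two spikes are ever emitted per pixel, which is consistent with the large reduction in addition operations and memory accesses reported above.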
|
54 |
Stimulus representation in anisotropically connected spiking neural networks / Representation av stimuli i anisotropiskt kopplade spikande neurala nätverk
Hiselius, Leo January 2021 (has links)
Biological neuronal networks are a key object of study in the field of computational neuroscience, and recent studies have also shown their potential applicability within artificial intelligence and robotics [1]. They come in many shapes and forms, and a well-known and widely studied example is the liquid state machine from 2004 [2]. In 2019, a novel and simple connectivity rule was presented with the introduction of the SpreizerNet [3]. The connectivity of the SpreizerNet is governed by a type of gradient noise known as Perlin noise, and as such it is anisotropic but correlated. The spiking activity produced in the SpreizerNet is possibly functionally relevant, e.g. for motor control or classification of input stimuli; in 2020, it was shown to be useful for motor control [4]. In this Master's thesis, we ask whether the spiking activity of the SpreizerNet is functionally relevant in the context of stimulus representation. We investigate how input stimuli from the MNIST handwritten digits dataset are represented in the spatio-temporal activity sequences produced by the SpreizerNet, and whether this representation is sufficient for separation. Furthermore, we consider how the parameters governing the local structure of connectivity impact representation and separation. We find that (1) the SpreizerNet separates input stimuli at the initial stage after stimulus onset, and (2) separation decreases with time as the activity evoked by dissimilar inputs becomes unified.
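A toy sketch of the anisotropic connectivity rule, with smoothed value noise standing in for Perlin noise: each neuron's outgoing connections are centred on a neighbourhood displaced along a direction read from a spatially correlated noise field. Details of the original SpreizerNet rule may differ.

```python
import numpy as np

def smooth_noise(n, scale=8, seed=0):
    """Smoothed value noise on an n x n grid, a cheap stand-in for Perlin
    noise; only the spatial correlation matters for this sketch."""
    rng = np.random.default_rng(seed)
    coarse = rng.uniform(0, 2 * np.pi, (n // scale + 2, n // scale + 2))
    y, x = np.mgrid[0:n, 0:n] / scale
    y0, x0 = y.astype(int), x.astype(int)
    fy, fx = y - y0, x - x0
    # bilinear interpolation of the coarse angle field
    return (coarse[y0, x0] * (1 - fy) * (1 - fx) + coarse[y0 + 1, x0] * fy * (1 - fx)
            + coarse[y0, x0 + 1] * (1 - fy) * fx + coarse[y0 + 1, x0 + 1] * fy * fx)

def anisotropic_targets(n=60, shift=2, n_out=20, seed=0):
    """Each neuron projects to a local neighbourhood displaced along its
    noise-given preferred direction, on a torus."""
    angle = smooth_noise(n, seed=seed)        # correlated preferred directions
    rng = np.random.default_rng(seed)
    targets = {}
    for i in range(n):
        for j in range(n):
            dy, dx = shift * np.sin(angle[i, j]), shift * np.cos(angle[i, j])
            ty = np.mod(np.round(i + dy + rng.normal(0, 2, n_out)).astype(int), n)
            tx = np.mod(np.round(j + dx + rng.normal(0, 2, n_out)).astype(int), n)
            targets[(i, j)] = list(zip(ty, tx))
    return targets

conns = anisotropic_targets()
```

Because neighbouring neurons share similar preferred directions, activity travels in coherent streams across the grid, which is what makes the resulting spatio-temporal sequences candidates for stimulus representation.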
|
55 |
Computational Principles of Neural Processing: modulating neural systems through temporally structured stimuli
Castellano, Marta 11 December 2014 (has links)
In order to understand how the neural system encodes and processes information, research has focused on the study of neural representations of simple stimuli, paying no particular attention to their temporal structure, on the assumption that a deeper understanding of how the neural system processes simplified stimuli will lead to an understanding of how the brain functions as a whole [1]. However, time is intrinsically bound to neural processing, as all sensory, motor, and cognitive processes are inherently dynamic. Despite the importance of neural and stimulus dynamics, little is known about how the neural system represents rich spatio-temporal stimuli, which ultimately link the neural system to a continuously changing environment. The purpose of this thesis is to understand whether and how temporally structured neural activity modulates the processing of information within the brain, proposing in turn that the precise interaction between the spatio-temporal structure of the stimulus and the neural system is particularly relevant, especially when considering the ongoing plasticity mechanisms that allow the neural system to learn from experience.

To answer these questions, three studies were conducted. First, we studied the impact of spiking temporal structure on a single neuron's spiking response, and explored how the functional connections to pre-synaptic neurons are modulated through adaptation. Our results suggest that, in a generic spiking neuron, the temporal structure of pre-synaptic excitatory and inhibitory activity modulates both the spiking response of that neuron and, most importantly, the speed and strength of learning. In the second study, we present a generic model of a spiking neural network that processes rich spatio-temporal stimuli, and explore whether the processing of stimuli within the network is modulated by the interaction with an external dynamical system (i.e. the extracellular medium), as well as by several plasticity mechanisms. Our results indicate that the memory capacity, which reflects a dynamic short-term memory of incoming stimuli, can be extended in the presence of plasticity and through the interaction with an external dynamical system, while maintaining the network dynamics in a regime suitable for information processing. Finally, we characterized cortical signals of human subjects (electroencephalography, EEG) associated with a visual categorization task. Among other aspects, we studied whether changes in the dynamics of the stimulus lead to changes in neural processing at the cortical level, and introduced the relevance of large-scale integration for cognitive processing. Our results suggest that the dynamic synchronization across distributed cortical areas is stimulus-specific and specifically linked to perceptual grouping.
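The memory capacity invoked in the second study is commonly measured as in the sketch below, which uses a plain echo state network in place of the spiking network: sum, over delays, the squared correlation between a delayed input and its best linear reconstruction from the network state.

```python
import numpy as np

def memory_capacity(n=100, T=2000, max_lag=40, seed=0):
    """Short-term memory capacity of a random recurrent network
    (Jaeger's measure). An echo state network stands in here for the
    spiking network whose capacity the study extends via plasticity
    and an external dynamical system."""
    rng = np.random.default_rng(seed)
    W = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)   # echo-state scaling
    w_in = rng.uniform(-0.5, 0.5, n)
    u = rng.uniform(-1, 1, T)
    x = np.zeros((T, n))
    s = np.zeros(n)
    for t in range(T):                       # drive the network with the input
        s = np.tanh(W @ s + w_in * u[t])
        x[t] = s
    mc = 0.0
    for k in range(1, max_lag + 1):
        X, y = x[k:], u[:-k]                 # reconstruct u(t-k) from state x(t)
        w = np.linalg.lstsq(X, y, rcond=None)[0]
        mc += np.corrcoef(X @ w, y)[0, 1] ** 2
    return mc

print(memory_capacity())
```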
Taken together, the results presented here suggest that the temporal structure of the stimulus modulates how the neural system encodes and processes information within single neurons, networks of neurons, and cortical areas. In particular, the results indicate that timing modulates single-neuron connectivity structures, the memory capability of networks of neurons, and the cortical representation of a visual stimulus. While the learning of invariant representations remains the best framework to account for a number of neural processes (e.g. long-term memory [2]), the reported studies provide support for the idea that, at least to some extent, the neural system functions in a non-stationary fashion, where the processing of information is modulated by the stimulus dynamics itself. Altogether, this thesis highlights the relevance of understanding adaptive processes and their interaction with the temporal structure of the stimulus, arguing that a further understanding of how the neural system processes dynamic stimuli is crucial for the further understanding of neural processing itself, and that any theory that aims to understand neural processing should consider the processing of dynamic signals.

1. Frankish, K., and Ramsey, W. The Cambridge Handbook of Cognitive Science. Cambridge University Press, 2012.
2. McGaugh, J. L. Memory: a Century of Consolidation. Science 287, 5451 (Jan. 2000), 248-251.
|
56 |
[pt] MODELOS NEURO-EVOLUCIONÁRIOS DE REDES NEURAIS SPIKING APLICADOS AO PRÉ-DIAGNÓSTICO DE ENVELHECIMENTO VOCAL / [en] NEURO-EVOLUTIONARY MODELS OF SPIKING NEURAL NETWORKS APPLIED TO PRE-DIAGNOSIS OF VOCAL AGING
MARCO AURELIO BOTELHO DA SILVA 09 October 2015 (has links)
[en] The aging of the voice, known as presbyphonia, is a natural process that can cause great change in the vocal quality of the individual. Its early identification can bring benefits, enabling treatments that may prevent its progression. This work is motivated by the identification of voices with signs of aging through spiking neural networks (SNNs). The main objective is to build two new hybrid models that use SNNs for clustering problems, where the input attributes and the parameters that configure the SNN are optimized by evolutionary algorithms. More specifically, the proposed neuro-evolutionary models are used to properly configure the SNN and to select the attributes most relevant to the formation of the clusters. The evolutionary algorithms used were the Quantum-Inspired Evolutionary Algorithm with Binary-Real representation (AEIQ-BR) and Optimization by Genetic Programming (OGP). The resulting models were named Quantum-Inspired Evolution of Spiking Neural Networks with Binary-Real representation (QbrSNN) and Spiking Neural Network Optimization by Genetic Programming (SNN-OGP). Eight benchmark datasets and two voice datasets, male and female, were used in order to characterize aging. For a functional analysis of the SNNs, the benchmark datasets were also tested with a classical clustering approach (k-means) and with a non-evolved SNN. The proposed models were compared with a classical Genetic Algorithm (GA) approach. The results showed the feasibility of using SNNs for clustering aged voices.
|
57 |
Models of EEG data mining and classification in temporal lobe epilepsy: wavelet-chaos-neural network methodology and spiking neural networks
Ghosh Dastidar, Samanwoy 22 June 2007 (has links)
No description available.
|
58 |
A High-Level Interface for Accelerating Spiking Neural Networks on the Edge with Heterogeneous Hardware : Enabling Rapid Prototyping of Training Algorithms and Topologies on Field-Programmable Gate Arrays
Eidlitz Rivera, Kaspar Oscarsson January 2024 (has links)
With the increasing use of machine learning by devices at the network's edge, a trend of moving computation from data centers to these devices is emerging. This shift imposes strict energy requirements on the algorithms used and the hardware on which they are implemented. Neuromorphic spiking neural networks (SNNs) and heterogeneous systems on a chip (SoCs) are showing great potential for energy-efficient computing on the edge. This thesis describes the development of a high-level interface for accelerating SNNs on an FPGA–CPU SoC. The system is based on an existing open-source, low-level implementation, adapting it for a research-focused Python front-end. The developed interface provides a productive environment for exploring and evaluating SNN algorithms and topologies through compatibility with industry-standard tools for numerical computing, data analysis, and visualization, while still taking full advantage of FPGA-based hardware acceleration. The system is evaluated and showcased by analyzing the training of a small network to solve the XOR problem. As the project matures, future development could enable integration with commonly used machine learning libraries, further increasing its potential.
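As a purely hypothetical illustration of what such a high-level front-end can look like, the sketch below defines a topology, streams rate-coded inputs through a NumPy stand-in for the FPGA back-end, and reads spike counts back. Every class and method name here is invented; the thesis's actual interface will differ, and training algorithms (the interface's main target) would be layered on top of run().

```python
import numpy as np

class SnnDevice:
    """Hypothetical device handle: hold a topology and weights, run T
    timesteps per query. A software simulator stands in for the FPGA."""
    def __init__(self, layer_sizes, timesteps=32, threshold=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.T, self.thr = timesteps, threshold
        self.w = [0.8 * rng.standard_normal((m, n))
                  for n, m in zip(layer_sizes, layer_sizes[1:])]

    def run(self, rates):
        """Rate-code the input, simulate IF layers, return output firing rates."""
        rng = np.random.default_rng(1)
        counts = np.zeros(self.w[-1].shape[0])
        v = [np.zeros(wi.shape[0]) for wi in self.w]
        for _ in range(self.T):
            s = (rng.random(len(rates)) < rates).astype(float)  # input spikes
            for i, wi in enumerate(self.w):
                v[i] += wi @ s                                  # integrate
                s = (v[i] >= self.thr).astype(float)            # fire
                v[i] = np.where(s > 0, 0.0, v[i])               # hard reset
            counts += s
        return counts / self.T

dev = SnnDevice([2, 4, 1])       # e.g. a 2-4-1 topology for the XOR task
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, dev.run(np.array(x, float)))   # untrained outputs, pre-learning
```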
|
59 |
Implementación en hardware de sistemas de alta fiabilidad basados en metodologías estocásticas / Hardware implementation of high-reliability systems based on stochastic methodologies
Canals Guinand, Vicente José 27 July 2012 (has links)
Today's society increasingly demands applications of high computational complexity that must be executed in an energy-efficient way, which forces the semiconductor industry to maintain the progression of CMOS technology. However, experts predict that the end of the age of CMOS progression is approaching: around 2020, CMOS technology is expected to reach the point known as the "Red Brick Wall", at which its physical, technological and economic limitations become unavoidable. Over the last decade, this has led public and private institutions to invest in the development of alternative technological solutions, as is the case of nanotechnology (nanotubes, nanowires, graphene-based technologies, etc.). In this thesis we propose an alternative approach to some of these computationally demanding problems that uses current CMOS technology but replaces the classical Von Neumann model of computation with unconventional forms of computing. This is the case of computing based on pulsed logic, and especially of the approach known as stochastic computing, which provides a significant increase in the parallelism and the reliability of digital systems. This thesis presents the development and evaluation of a set of stochastic computing blocks implemented with classical digital elements. Building on these blocks, we propose several computationally efficient methodologies that make it possible to tackle some massive computing problems much more efficiently than classical digital electronics, with the study focused in particular on problems related to the field of pattern recognition.
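A classic example of the stochastic computing principle referred to above: encoding a value as the probability of a 1 in a random bitstream lets a single AND gate act as a multiplier, and occasional bit flips only perturb the estimate slightly, which is the source of the increased reliability.

```python
import numpy as np

def to_stream(p, n_bits, rng):
    """Encode probability p as a random bitstream with P(bit = 1) = p."""
    return rng.random(n_bits) < p

rng = np.random.default_rng(0)
n = 10_000
a, b = 0.6, 0.7
sa, sb = to_stream(a, n, rng), to_stream(b, n, rng)

# One AND gate multiplies the two encoded values, since for independent
# streams P(a AND b) = a * b; longer streams trade latency for accuracy.
prod = np.mean(sa & sb)
print(f"stochastic product {prod:.3f} vs exact {a * b:.3f}")
```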
|
60 |
Silicon neural networks : implementation of cortical cells to improve the artificial-biological hybrid technique
Grassia, Filippo 07 January 2013 (has links) (PDF)
This work has been supported by the European FACETS-ITN project. Within the framework of this project, we contribute to the simulation of cortical cell types (employing experimental electrophysiological data of these cells as references), using a specific VLSI neural circuit to simulate, at the single-cell level, the models studied as references in the FACETS project. The real-time intrinsic properties of the neuromorphic circuits, which precisely compute conductance-based neuron models, allow a systematic and detailed exploration of the models, while the physical and analog aspect of the simulations, as opposed to software simulation, provides inputs for the development of neural hardware at the network level. The second goal of this thesis is to contribute to the design of a mixed hardware-software platform (PAX), specifically designed to simulate spiking neural networks. The tasks performed during this thesis project included: 1) the methods used to obtain the appropriate parameter sets of the cortical neuron models that can be implemented in our analog neuromimetic chip (the parameter extraction step was validated using a bifurcation analysis showing that the simplified HH model implemented in our silicon neuron shares the dynamics of the HH model); 2) a fully customizable fitting method, in voltage-clamp mode, to tune our neuromimetic integrated circuits using a metaheuristic algorithm; and 3) a contribution to the development of the PAX system in terms of software tools and a VHDL driver interface for neuron configuration on the platform. Finally, the thesis also addresses the issue of synaptic tuning for future SNN simulation.
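For reference, the sketch below integrates the classic Hodgkin-Huxley conductance-based model that such silicon neurons emulate, with standard textbook parameters; the simplified HH variant implemented on the chip differs in detail.

```python
import numpy as np

# Standard Hodgkin-Huxley point neuron (squid-axon parameters).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387            # mV

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T, I = 0.01, 50.0, 10.0                   # ms step, ms duration, uA/cm^2 drive
V, m, h, n = -65.0, 0.05, 0.6, 0.32           # resting state
trace = []
for _ in range(int(T / dt)):                  # forward Euler integration
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    INa = gNa * m**3 * h * (V - ENa)          # sodium current
    IK = gK * n**4 * (V - EK)                 # potassium current
    IL = gL * (V - EL)                        # leak current
    V += dt * (I - INa - IK - IL) / C
    trace.append(V)
print("spikes:", sum(1 for v0, v1 in zip(trace, trace[1:]) if v0 < 0 <= v1))
```

The four coupled differential equations here are exactly what the analog circuit evaluates continuously in real time, which is why a bifurcation analysis is a natural way to check that the silicon implementation preserves the model's dynamics.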
|