41

Position control with multiple sensors on a collaborative robot using liquid state machines

Sala, Davi Alberto January 2017
The idea of employing biologically inspired neural networks to perform computation has been widely used over the last decades. The essential fact in this paradigm is that a neuron can integrate and process information, and this information can be revealed by its spiking activity. By describing the dynamics of a single neuron using a mathematical model, a network can be implemented in which the spiking activity of every single neuron receives contributions, or information, from the spiking activity of the network in which it is embedded. A positioning controller based on spiking neural networks for sensor fusion, suitable to run on a neuromorphic computer, is presented in this work. The proposed framework uses the paradigm of reservoir computing to control the collaborative robot BAXTER. The system was designed to work in parallel with Liquid State Machines that perform trajectories in 2D closed shapes. In order to keep a felt pen touching a drawing surface, data from force and distance sensors are fed to the controller. The system was trained using data from a Proportional-Integral-Derivative (PID) controller, merging the data from both sensors. The results show that the LSM can learn the behavior of a PID controller in different situations.
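To make the training scheme concrete, the sketch below imitates it in miniature: a fixed random reservoir (an echo-state-style stand-in for the thesis's spiking LSM) is driven by force and distance signals, and only a linear readout is fit by ridge regression to the output of a PID teacher. Everything here, from the gains to the reservoir size, is an illustrative assumption rather than the thesis's actual setup.

```python
# Minimal sketch: train an LSM-style linear readout to imitate a PID
# controller from fused force/distance sensor data. All constants are
# illustrative assumptions, not the thesis's configuration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic force and distance sensor traces (the fused inputs).
T = 2000
force = np.sin(np.linspace(0, 20, T)) + 0.1 * rng.standard_normal(T)
dist = np.cos(np.linspace(0, 20, T)) + 0.1 * rng.standard_normal(T)
u = np.stack([force, dist], axis=1)                  # (T, 2) inputs

# PID teacher signal on the distance error (gains are made up).
kp, ki, kd = 2.0, 0.5, 0.1
err = -dist
integ = np.cumsum(err) * 1e-2
deriv = np.gradient(err, 1e-2)
pid_out = kp * err + ki * integ + kd * deriv         # target Z command

# Fixed random reservoir (rate-based surrogate for the spiking LSM).
N = 200
W_in = rng.uniform(-0.5, 0.5, (N, 2))
W = rng.standard_normal((N, N)) / np.sqrt(N) * 0.9   # spectral scaling
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])                 # reservoir update
    states[t] = x

# Ridge-regression readout: the only trained part of an LSM.
lam = 1e-3
w_out = np.linalg.solve(states.T @ states + lam * np.eye(N),
                        states.T @ pid_out)
print("readout MSE:", np.mean((states @ w_out - pid_out) ** 2))
```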
42

Parameter estimation of biological neuron models on a platform of SNNs (Spiking Neural Networks) implemented "in silico"

Buhry, Laure 21 September 2010
These works, conducted in a research group designing neuromimetic analog integrated circuits based on the Hodgkin-Huxley model, deal with the modeling of biological neurons, more precisely with the parameter estimation of neuron models. The first part of the manuscript bridges the gap between neuron modeling and optimization. We focus on the Hodgkin-Huxley model, for which an extraction method associated with an electrophysiological measurement technique (the voltage clamp) already existed, but whose successive approximations made the precise determination of certain parameters impossible. In the second part, we propose an alternative method for estimating the parameters of the Hodgkin-Huxley model, based on the differential evolution algorithm, which overcomes the limitations of the classical method and allows all parameters of a given ionic channel to be estimated jointly. The third chapter is divided into three sections. In the first two, we apply our new technique to the estimation of the same model's parameters from biological data, and then develop an automated protocol for tuning neuromimetic circuits, one ionic channel at a time. The third section presents a method for estimating the parameters from recordings of a neuron's membrane voltage, data that are easier to acquire than ionic currents. The fourth and final chapter opens toward the use of small networks of about a hundred electronic neurons: we carry out a software study of the influence of the cell's intrinsic properties on the global behavior of the network in the context of gamma oscillations.
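The alternative estimation method lends itself to a compact illustration. The sketch below uses SciPy's differential evolution to jointly recover the parameters of a single ionic channel from noisy voltage-clamp-style measurements; the one-gate Boltzmann channel model and all constants are simplifying assumptions, not the full Hodgkin-Huxley formalism treated in the thesis.

```python
# Hedged sketch: differential evolution jointly estimating the
# parameters of one ionic channel from voltage-clamp-style data.
import numpy as np
from scipy.optimize import differential_evolution

V = np.linspace(-80.0, 40.0, 25)        # clamp potentials (mV)
E_K = -77.0                             # reversal potential (mV)

def channel_current(params, V):
    gmax, v_half, k = params
    m_inf = 1.0 / (1.0 + np.exp((v_half - V) / k))  # Boltzmann activation
    return gmax * m_inf * (V - E_K)

true = (36.0, -53.0, 15.0)              # "unknown" channel parameters
rng = np.random.default_rng(1)
I_obs = channel_current(true, V) + rng.normal(0, 5.0, V.size)

def loss(params):
    # Sum-of-squares misfit between model and measured currents.
    return np.sum((channel_current(params, V) - I_obs) ** 2)

result = differential_evolution(
    loss,
    bounds=[(1.0, 100.0), (-80.0, 0.0), (1.0, 30.0)],
    seed=2, tol=1e-8,
)
print("estimated gmax, V_half, k:", result.x)
```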
43

Silicon neural networks: implementation of cortical cells to improve the artificial-biological hybrid technique

Grassia, Filippo Giovanni 07 January 2013
This work has been supported by the European FACETS-ITN project. Within the framework of this project, we contribute to the simulation of cortical cell types (employing experimental electrophysiological data of these cells as references), using a specific VLSI neural circuit to simulate, at the single-cell level, the models studied as references in the FACETS project. The real-time intrinsic properties of the neuromorphic circuits, which precisely compute conductance-based neuron models, allow a systematic and detailed exploration of the models, while the physical and analog aspect of the simulations, as opposed to software simulation, provides inputs for the development of the neural hardware at the network level. The second goal of this thesis is to contribute to the design of a mixed hardware-software platform (PAX), specifically designed to simulate spiking neural networks. The tasks performed during this thesis project included: 1) methods to obtain the appropriate parameter sets of the cortical neuron models that can be implemented in our analog neuromimetic chip (the parameter extraction step was validated using a bifurcation analysis showing that the simplified HH model implemented in our silicon neuron shares the dynamics of the HH model); 2) a fully customizable fitting method, in voltage-clamp mode, to tune our neuromimetic integrated circuits using a metaheuristic algorithm; 3) a contribution to the development of the PAX system in terms of software tools and a VHDL driver interface for neuron configuration in the platform. Finally, it also addresses the issue of synaptic tuning for future SNN simulation.
44

Evolution of spiking neural networks for temporal pattern recognition and animat control

Abdelmotaleb, Ahmed Mostafa Othman January 2016
I extended an artificial life platform called GReaNs (the name stands for Gene Regulatory evolving artificial Networks) to explore the evolutionary abilities of a biologically inspired Spiking Neural Network (SNN) model. The encoding of SNNs in GReaNs was inspired by the encoding of gene regulatory networks. As a proof of principle, I used GReaNs to evolve SNNs to obtain a network with an output neuron that generates a predefined spike train in response to a specific input. Temporal pattern recognition was one of the main tasks during my studies. It is widely believed that the nervous systems of biological organisms use temporal patterns of inputs to encode information, but the learning mechanism underlying temporal pattern recognition is not yet clear. I studied the ability to evolve spiking networks with different numbers of interneurons, in the absence and in the presence of noise, to recognize predefined temporal patterns of inputs. Results showed that, in the presence of noise, it was possible to evolve successful networks; however, networks with only one interneuron were not robust to noise. The foraging behaviour of many small animals depends mainly on their olfactory system. I explored whether it was possible to evolve SNNs able to control an agent to find food particles on 2-dimensional maps. Using firing-rate encoding to encode the sensory information in the olfactory input neurons, I obtained SNNs able to control an agent that could detect the position of the food particles and move toward them. Furthermore, I made unsuccessful attempts to use GReaNs to evolve an SNN able to control an agent that collects sound sources of one type out of several, where each sound type is represented as a pattern of different frequencies. In order to use the computational power of neuromorphic hardware, I integrated GReaNs with the SpiNNaker hardware system. Only the simulation part was carried out on SpiNNaker; the remaining steps of the genetic algorithm were performed in GReaNs.
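The firing-rate encoding of the olfactory input can be illustrated in a few lines: each input neuron emits a Poisson spike train whose probability of firing per time step scales with the sensed concentration. GReaNs' actual encoding is not reproduced here; the function and its constants are assumptions.

```python
# Sketch of firing-rate (Poisson) encoding of a sensed concentration.
import numpy as np

def rate_encode(concentration, n_neurons=4, t_steps=100,
                max_rate=0.2, rng=None):
    """Return a (t_steps, n_neurons) binary spike array whose firing
    probability per step grows with the concentration in [0, 1]."""
    rng = rng if rng is not None else np.random.default_rng()
    rate = max_rate * np.clip(concentration, 0.0, 1.0)
    return (rng.random((t_steps, n_neurons)) < rate).astype(np.uint8)

spikes = rate_encode(0.7, rng=np.random.default_rng(3))
print("mean firing rate:", spikes.mean())
```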
45

Learning in spiking neural networks

Davies, Sergio January 2013
Artificial neural network simulation is a research field which attracts the interest of researchers from various fields, from biology to computer science. The final objectives are the understanding of the mechanisms underlying the human brain, how to reproduce them in an artificial environment, and how drugs interact with them. Multiple neural models have been proposed, each with its peculiarities, from the very complex and biologically realistic Hodgkin-Huxley neuron model to the very simple 'leaky integrate-and-fire' neuron. However, despite numerous attempts to understand the learning behaviour of synapses, few models have been proposed. Spike-Timing-Dependent Plasticity (STDP) is one of the most relevant and biologically plausible models, and some variants (such as the triplet-based STDP rule) have been proposed to accommodate all biological observations. The research presented in this thesis focuses on a novel learning rule, based on the spike-pair STDP algorithm, which provides a statistical approach with the advantage of being less computationally expensive than the standard STDP rule, and is therefore suitable for implementation on stand-alone computational units. The environment in which this research work has been carried out is the SpiNNaker project, which aims to provide a massively parallel computational substrate for neural simulation. To support such research, two other topics have been addressed: the first is a way to inject spikes into the SpiNNaker system through a non-real-time channel, such as the Ethernet link, synchronised with the timing of the SpiNNaker system; the second is a way to route spikes in the SpiNNaker system based on populations of neurons. The three topics are presented in sequence after a brief introduction to the SpiNNaker project. Future work could include structural plasticity (also known as synaptic rewiring), whereby, during the simulation of neural networks on the SpiNNaker system, axons, dendrites and synapses may be grown or pruned according to biological observations.
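For reference, the pair-based STDP rule that the proposed learning rule builds on can be written down directly: the weight change decays exponentially with the pre/post spike-time difference, potentiating causal pairs and depressing anti-causal ones. The amplitudes and time constants below are conventional textbook values, not those used on SpiNNaker.

```python
# Sketch of the classic pair-based STDP weight update.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # learning amplitudes (assumed)
TAU_PLUS = TAU_MINUS = 20.0       # decay time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:                                 # pre before post: LTP
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)   # post before pre: LTD

for dt in (1.0, 10.0, -10.0):
    print(f"dt={dt:+.0f} ms -> dw={stdp_dw(0.0, dt):+.5f}")
```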
46

Efficient and Robust Deep Learning through Approximate Computing

Sanchari Sen 28 July 2020
Deep Neural Networks (DNNs) have greatly advanced the state-of-the-art in a wide range of machine learning tasks involving image, video, speech and text analytics, and are deployed in numerous widely-used products and services. Improvements in the capabilities of hardware platforms such as Graphics Processing Units (GPUs) and specialized accelerators have been instrumental in enabling these advances, as they have allowed more complex and accurate networks to be trained and deployed. However, the enormous computational and memory demands of DNNs continue to increase with growing data size and network complexity, posing a continuing challenge to computing system designers. For instance, state-of-the-art image recognition DNNs require hundreds of millions of parameters and hundreds of billions of multiply-accumulate operations, while state-of-the-art language models require hundreds of billions of parameters and several trillion operations to process a single input instance. Another major obstacle in the adoption of DNNs, despite their impressive accuracies on a range of datasets, has been their lack of robustness. Specifically, recent efforts have demonstrated that small, carefully-introduced input perturbations can force a DNN to behave in unexpected and erroneous ways, which can have severe consequences in safety-critical DNN applications like healthcare and autonomous vehicles. In this dissertation, we explore approximate computing as an avenue to improve the speed and energy efficiency of DNNs, as well as their robustness to input perturbations.

Approximate computing involves executing selected computations of an application in an approximate manner, while generating favorable trade-offs between computational efficiency and output quality. The intrinsic error resilience of machine learning applications makes them excellent candidates for approximate computing, allowing us to achieve execution time and energy reductions with minimal effect on the quality of outputs. This dissertation performs a comprehensive analysis of different approximate computing techniques for improving the execution efficiency of DNNs. Complementary to generic approximation techniques like quantization, it identifies approximation opportunities based on the specific characteristics of three popular classes of networks - Feed-forward Neural Networks (FFNNs), Recurrent Neural Networks (RNNs) and Spiking Neural Networks (SNNs) - which vary considerably in their network structure and computational patterns.

First, in the context of feed-forward neural networks, we identify sparsity, or the presence of zero values in the data structures (activations, weights, gradients and errors), to be a major source of redundancy and therefore an easy target for approximations. We develop lightweight micro-architectural and instruction set extensions to a general-purpose processor core that enable it to dynamically detect zero values when they are loaded and skip future instructions that are rendered redundant by them. Next, we explore LSTMs (the most widely used class of RNNs), which map sequences from an input space to an output space. We propose hardware-agnostic approximations that dynamically skip redundant symbols in the input sequence and discard redundant elements in the state vector to achieve execution time benefits. Following that, we consider SNNs, which are an emerging class of neural networks that represent and process information in the form of sequences of binary spikes. Observing that spike-triggered updates along synaptic connections are the dominant operation in SNNs, we propose hardware and software techniques to identify connections that minimally impact the output quality and deactivate them dynamically, skipping any associated updates.

The dissertation also delves into the efficacy of combining multiple approximate computing techniques to improve the execution efficiency of DNNs. In particular, we focus on the combination of quantization, which reduces the precision of DNN data structures, and pruning, which introduces sparsity in them. We observe that the ability of pruning to reduce the memory demands of quantized DNNs decreases with precision, as the overhead of storing non-zero locations alongside the values starts to dominate in different sparse encoding schemes. We analyze this overhead and the overall compression of three different sparse formats across a range of sparsity and precision values, and propose a hybrid compression scheme that identifies the optimal sparse format for a pruned low-precision DNN.

Along with improved execution efficiency of DNNs, the dissertation explores an additional advantage of approximate computing in the form of improved robustness. We propose ensembles of quantized DNN models with different numerical precisions as a new approach to increase robustness against adversarial attacks. It is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full-precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. We overcome this limitation to achieve the best of both worlds, i.e., the higher unperturbed accuracies of the full-precision models combined with the higher robustness of the low-precision models, by composing them in an ensemble.

In summary, this dissertation establishes approximate computing as a promising direction to improve the performance, energy efficiency and robustness of neural networks.
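The storage trade-off behind the hybrid compression scheme described above can be sketched with simple arithmetic: as value precision drops, the per-element index overhead of sparse formats starts to dominate, and the cheapest format flips. The formats and formulas below are generic stand-ins, not the dissertation's exact encodings.

```python
# Back-of-the-envelope storage comparison of dense, COO, and bitmap
# encodings for a pruned tensor, across precision and sparsity.
def storage_bits(n, density, value_bits):
    """Bits to store an n-element tensor with fraction `density` nonzero."""
    nnz = int(n * density)
    index_bits = (n - 1).bit_length()          # bits per coordinate
    dense = n * value_bits                     # store every element
    coo = nnz * (value_bits + index_bits)      # value + location per nonzero
    bitmap = n + nnz * value_bits              # 1 mask bit per element
    return dense, coo, bitmap

n = 1 << 20                                    # 1M-element tensor
for value_bits in (16, 8, 4, 2):
    for density in (0.1, 0.3, 0.5):
        d, c, b = storage_bits(n, density, value_bits)
        print(f"{value_bits}-bit, {density:.0%} nonzero: "
              f"dense={d/1e6:.1f} coo={c/1e6:.1f} bitmap={b/1e6:.1f} Mbit")
```

Running this shows COO beating dense storage comfortably at 16-bit values but losing that advantage at 2-bit values, which is exactly the index-overhead effect the dissertation exploits when selecting a format.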
47

Training Methodologies for Energy-Efficient, Low Latency Spiking Neural Networks

Nitin Rathi 17 December 2021
Deep learning models have become the de-facto solution in various fields like computer vision, natural language processing, robotics, drug discovery, and many others. The skyrocketing performance and success of multi-layer neural networks come at a significant power and energy cost. Thus, there is a need to rethink the current trajectory and explore different computing frameworks. One such option is spiking neural networks (SNNs), inspired by the spike-based processing observed in biological brains. SNNs operating with binary signals (or spikes) can potentially be an energy-efficient alternative to the power-hungry analog neural networks (ANNs) that operate on real-valued analog signals. A spike is a Delta function with magnitude 1. The binary all-or-nothing spike-based communication in SNNs, implemented on event-driven hardware, offers a low-power alternative to ANNs. With all its appeal for low power, training SNNs efficiently for high accuracy remains an active area of research. Existing ANN training methodologies, when applied to SNNs, result in networks with very high latency. Supervised training of SNNs with spikes is challenging (due to discontinuous gradients) and resource-intensive (time, compute, and memory). Thus, we propose compression methods, training methodologies, and learning rules for SNNs.

First, we propose compression techniques for SNNs based on the unsupervised spike-timing-dependent plasticity (STDP) model. We present a sparse SNN topology where non-critical connections are pruned to reduce the network size and the remaining critical synapses are weight-quantized to accommodate the limited conductance levels in emerging in-memory computing hardware. Pruning is based on the power-law weight-dependent STDP model: synapses between pre- and post-neurons with high spike correlation are retained, whereas synapses with low correlation or uncorrelated spiking activity are pruned. The process of pruning non-critical connections and quantizing the weights of critical synapses is performed at regular intervals during training.

Second, we propose a multimodal SNN that combines two modalities (image and audio). The two unimodal ensembles are connected with cross-modal connections, and the entire network is trained with unsupervised learning. The network receives inputs in both modalities for the same class and predicts the class label. The excitatory connections in the unimodal ensembles and the cross-modal connections are trained with STDP. The cross-modal connections capture the correlation between neurons of different modalities. The multimodal network learns features of both modalities and improves the classification accuracy compared to the unimodal topology, even when one of the modalities is distorted by noise. The cross-modal connections are only excitatory and do not inhibit the normal activity of the unimodal ensembles.

Third, we explore supervised learning methods for SNNs. Many works have shown that an SNN for inference can be formed by copying the weights from a trained ANN and setting the firing threshold for each layer as the maximum input received in that layer. Such converted SNNs require a large number of time steps to achieve competitive accuracy, which diminishes the energy savings. The number of time steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing-dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within few epochs and requires fewer time steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time. The weight update is proportional to the difference in spike timing between the current time step and the most recent time step at which the neuron generated an output spike.

Fourth, we present techniques to further reduce the inference latency in SNNs. SNNs suffer from high inference latency, resulting from inefficient input encoding and sub-optimal settings of the neuron parameters (firing threshold and membrane leak). We propose DIET-SNN, a low-latency deep spiking network that is trained with gradient descent to optimize the membrane leak and the firing threshold along with the other network parameters (weights). The membrane leak and threshold for each layer of the SNN are optimized with end-to-end backpropagation to achieve competitive accuracy at reduced latency. The analog pixel values of an image are applied directly to the input layer of DIET-SNN without conversion to a spike train. The first convolutional layer is trained to convert inputs into spikes, where leaky-integrate-and-fire (LIF) neurons integrate the weighted inputs and generate an output spike when the membrane potential crosses the trained firing threshold. The trained membrane leak controls the flow of input information and attenuates irrelevant inputs, increasing the activation sparsity in the convolutional and dense layers of the network. The reduced latency combined with high activation sparsity provides large improvements in computational efficiency.

Finally, we explore the application of SNNs to sequential learning tasks. We propose LITE-SNN, a lightweight SNN suitable for sequential learning tasks on data from dynamic vision sensors (DVS) and natural language processing (NLP). In general, sequential data is processed with complex recurrent neural networks (such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks) with explicit feedback connections and internal states to handle long-term dependencies. In contrast, the neuron models in SNNs - integrate-and-fire (IF) or leaky-integrate-and-fire (LIF) - have implicit feedback in their internal state (the membrane potential) by design and can be leveraged for sequential tasks. The membrane potential in the IF/LIF neuron integrates the incoming current and outputs an event (or spike) when the potential crosses a threshold value. Since SNNs compute with highly sparse spike-based spatio-temporal data, the energy per inference is lower than in LSTMs/GRUs. SNNs also have fewer parameters than LSTMs/GRUs, resulting in smaller models and faster inference. We observe the problem of vanishing gradients in vanilla SNNs for longer sequences and implement a convolutional SNN with attention layers to perform sequence-to-sequence learning tasks. The inherent recurrence in SNNs, in addition to the fully parallelized convolutional operations, provides an additional mechanism to model sequential dependencies and leads to better accuracy than convolutional neural networks with ReLU activations.
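A minimal numpy rendering of the LIF dynamics that DIET-SNN trains is given below: the leak and threshold appear as ordinary parameters, spikes subtract the threshold on firing (soft reset), and a triangular surrogate stands in for the spike's discontinuous derivative during backpropagation. Shapes and constants are illustrative assumptions, not the thesis's trained values.

```python
# Sketch of LIF dynamics with trainable-looking leak/threshold and a
# surrogate derivative for spike-based backpropagation.
import numpy as np

def lif_forward(inputs, leak=0.9, v_th=1.0):
    """inputs: (T, n) weighted input currents -> (T, n) binary spikes."""
    v = np.zeros(inputs.shape[1])
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v = leak * v + x                 # leaky integration
        s = (v >= v_th).astype(float)    # fire on threshold crossing
        v = v - s * v_th                 # soft reset: subtract threshold
        spikes[t] = s
    return spikes

def surrogate_grad(v, v_th=1.0, width=0.5):
    """Triangular surrogate for d(spike)/d(v) used in backprop."""
    return np.maximum(0.0, 1.0 - np.abs(v - v_th) / width) / width

rng = np.random.default_rng(4)
print("spike counts:", lif_forward(rng.uniform(0, 0.6, (10, 3))).sum(axis=0))
```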
48

HIGH PERFORMANCE AND ENERGY EFFICIENT DEEP LEARNING MODELS

Bing Han 16 June 2022
Spiking Neural Networks (SNNs) have recently attracted significant research interest as the third generation of artificial neural networks that can enable low-power event-driven data analytics. We propose ANN-SNN conversion using a "soft reset" spiking neuron model, referred to as the Residual Membrane Potential (RMP) spiking neuron, which retains the "residual" membrane potential above threshold at the firing instants. In addition, we propose a time-based coding scheme, named Temporal-Switch-Coding (TSC), and a corresponding TSC spiking neuron model. Each input image pixel is presented using two spikes with opposite polarity, and the timing between the two spiking instants is proportional to the pixel intensity. We demonstrate near loss-less ANN-SNN conversion using RMP neurons for VGG-16, ResNet-20, and ResNet-34 SNNs on challenging datasets including CIFAR-10, CIFAR-100, and ImageNet. With the help of TSC coding, this achieves 7-14.5× lower inference latency, and 30-60× fewer addition operations and memory accesses per inference across datasets compared to state-of-the-art (SOTA) SNN models. In the second part of the thesis, we propose a new type of recurrent neural network (RNN) architecture, named the Oscillatory Fourier Neural Network (O-FNN). We demonstrate that O-FNN is mathematically equivalent to a simplified form of the Discrete Fourier Transform applied onto periodical activation. In particular, the computationally intensive back-propagation through time in training is eliminated, leading to faster training while achieving SOTA inference accuracy in a diverse group of sequential tasks. For instance, applying the proposed model to sentiment analysis on the IMDB review dataset reaches 89.4% test accuracy within 5 epochs, accompanied by over 35× reduction in model size compared to Long Short-Term Memory (LSTM). The proposed novel RNN architecture is well poised for intelligent sequential processing in resource-constrained hardware.
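The TSC scheme described above admits a direct sketch: each pixel maps to one positive and one negative spike, with the gap between them proportional to intensity. The exact timing conventions are not spelled out in the abstract, so the details below are assumptions.

```python
# Sketch of Temporal-Switch-Coding: two opposite-polarity spikes per
# pixel, separated by a gap proportional to pixel intensity.
def tsc_encode(pixel, t_window=32):
    """Return (t_on, t_off) spike times for a pixel value in [0, 1]."""
    gap = int(round(pixel * (t_window - 2)))
    t_on = 0                       # positive spike at window start
    t_off = t_on + gap + 1         # negative spike `gap` steps later
    return t_on, t_off

for p in (0.0, 0.5, 1.0):
    print(f"pixel {p:.1f} -> spikes at {tsc_encode(p)}")
```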
49

Computational Principles of Neural Processing: modulating neural systems through temporally structured stimuli

Castellano, Marta 11 December 2014
In order to understand how the neural system encodes and processes information, research has focused on the study of neural representations of simple stimuli, paying no particular attention to their temporal structure, under the assumption that a deeper understanding of how the neural system processes simplified stimuli will lead to an understanding of how the brain functions as a whole [1]. However, time is intrinsically bound to neural processing, as all sensory, motor, and cognitive processes are inherently dynamic. Despite the importance of neural and stimulus dynamics, little is known about how the neural system represents rich spatio-temporal stimuli, which ultimately link the neural system to a continuously changing environment. The purpose of this thesis is to understand whether and how temporally structured neural activity modulates the processing of information within the brain, proposing in turn that the precise interaction between the spatio-temporal structure of the stimulus and the neural system is highly relevant, particularly when considering the ongoing plasticity mechanisms which allow the neural system to learn from experience. In order to answer these questions, three studies were conducted. First, we studied the impact of spiking temporal structure on a single neuron's spiking response, and explored in which way the functional connections to pre-synaptic neurons are modulated through adaptation. Our results suggest that, in a generic spiking neuron, the temporal structure of pre-synaptic excitatory and inhibitory neurons modulates both the spiking response of that same neuron and, most importantly, the speed and strength of learning. In the second study, we present a generic model of a spiking neural network that processes rich spatio-temporal stimuli, and explored whether the processing of the stimulus within the network is modulated by the interaction with an external dynamical system (i.e. the extracellular medium), as well as by several plasticity mechanisms. Our results indicate that the memory capacity, which reflects a dynamic short-term memory of incoming stimuli, can be extended in the presence of plasticity and through the interaction with an external dynamical system, while maintaining the network dynamics in a regime suitable for information processing. Finally, we characterized cortical signals of human subjects (electroencephalography, EEG) associated with a visual categorization task. Among other aspects, we studied whether changes in the dynamics of the stimulus lead to changes in neural processing at the cortical level, and introduced the relevance of large-scale integration for cognitive processing. Our results suggest that the dynamic synchronization across distributed cortical areas is stimulus-specific and specifically linked to perceptual grouping. Taken together, the results presented here suggest that the temporal structure of the stimulus modulates how the neural system encodes and processes information within single neurons, networks of neurons, and cortical areas. In particular, the results indicate that timing modulates single-neuron connectivity structures, the memory capability of networks of neurons, and the cortical representation of visual stimuli. While the learning of invariant representations remains the best framework to account for a number of neural processes (e.g. long-term memory [2]), the reported studies provide support for the idea that, at least to some extent, the neural system functions in a non-stationary fashion, where the processing of information is modulated by the stimulus dynamics itself. Altogether, this thesis highlights the relevance of understanding adaptive processes and their interaction with the temporal structure of the stimulus, arguing that a further understanding of how the neural system processes dynamic stimuli is crucial for the further understanding of neural processing itself, and that any theory that aims to understand neural processing should consider the processing of dynamic signals.

1. Frankish, K., and Ramsey, W. The Cambridge Handbook of Cognitive Science. Cambridge University Press, 2012.
2. McGaugh, J. L. Memory – a Century of Consolidation. Science 287, 5451 (2000), 248–251.
50

Neuro-evolutionary models of spiking neural networks applied to pre-diagnosis of vocal aging

Marco Aurelio Botelho da Silva 09 October 2015
The aging of the voice, known as presbyphonia, is a natural process that can cause great change in the vocal quality of the individual. Its early identification can bring benefits, allowing treatments that may prevent its advance. This work is motivated by the identification of voices with signs of aging through spiking neural networks (SNNs). The main objective is to build two new models, called hybrids, using SNNs for clustering problems, where the input attributes and the parameters that configure the SNN are optimized by evolutionary algorithms. More specifically, the proposed neuro-evolutionary models are used in order to properly configure the SNN and to select the most relevant attributes for the formation of the groups. The evolutionary algorithms used were the Quantum-Inspired Evolutionary Algorithm with Binary-Real representation (AEIQ-BR) and Optimization by Genetic Programming (OGP). The resulting models were named Quantum-Inspired Evolution of Spiking Neural Networks with Binary-Real representation (QbrSNN) and Spiking Neural Network Optimization by Genetic Programming (SNN-OGP). Eight benchmark datasets and two voice datasets, male and female, were used in order to characterize aging. For a functional analysis of the SNN, the benchmark datasets were tested with a classical clustering approach (k-means) and with an SNN without evolution. The proposed models were compared with a classical Genetic Algorithm (GA) approach. The results showed the feasibility of using SNNs for clustering aged voices.
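The hybrid loop can be caricatured in a few dozen lines: individuals encode a binary attribute mask, fitness is clustering quality on the selected attributes, and a plain genetic algorithm stands in for AEIQ-BR/OGP (with k-means standing in for the SNN) purely for illustration.

```python
# Schematic sketch of evolutionary attribute selection for clustering.
import numpy as np

rng = np.random.default_rng(5)
# Toy data: two clusters in 6-D; attributes 0-2 informative, 3-5 noise.
signal = np.vstack([rng.normal(m, 0.5, (50, 3)) for m in (0.0, 3.0)])
X = np.hstack([signal, rng.normal(0.0, 0.5, (100, 3))])

def kmeans_inertia(Xs, k=2, iters=20):
    c = Xs[rng.choice(len(Xs), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((Xs[:, None] - c) ** 2).sum(-1), axis=1)
        c = np.array([Xs[lab == j].mean(0) if np.any(lab == j) else c[j]
                      for j in range(k)])
    return ((Xs - c[lab]) ** 2).sum()

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    # Per-attribute inertia so masks of different sizes are comparable.
    return -kmeans_inertia(X[:, mask.astype(bool)]) / mask.sum()

pop = rng.integers(0, 2, (20, X.shape[1]))           # random attribute masks
for gen in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = parents[rng.integers(0, 10, 10)].copy()
    children ^= (rng.random(children.shape) < 0.1)   # bit-flip mutation
    pop = np.vstack([parents, children])
print("best attribute mask:", pop[np.argmax([fitness(p) for p in pop])])
```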
