41

Méthode de calcul et implémentation d’un processeur neuromorphique appliqué à des capteurs évènementiels / Computational method and neuromorphic processor design applied to event-based sensors

Mesquida, Thomas 20 December 2018 (has links)
Studying how our nervous system and sensory mechanisms work has led to the creation of event-driven sensors. These sensors follow the same principles as our eyes or ears, for example. This Ph.D. focuses on bio-inspired, low-power methods for processing the data produced by this new kind of sensor. Unlike conventional sensors, our retina and cochlea react only to the perceived activity in the sensory environment. The artificial "retina" and "cochlea" implementations we call dynamic sensors provide streams of events comparable to neural spikes. The quantity of data transmitted is closely linked to the presented activity, which also decreases the redundancy in the output data. Moreover, since they are not forced to follow a fixed sampling rate, the events they generate provide higher timing resolution. This bio-inspired way of extracting information from the environment has led to algorithms for tracking moving entities at the visual level, for recognising or localising a speaker at the auditory level, and to implementations of neuromorphic computing environments. The work we present builds on these ideas to create new processing solutions. More precisely, the applications and hardware developed rely on the temporal coding of information in the event streams supplied by the sensor.
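Since the processing hinges on temporal coding of events, a minimal sketch may help make the idea concrete. The event structure and the first-spike latency code below are illustrative assumptions, not the thesis's actual format or method:

```python
# Illustrative sketch only: a minimal model of an event-based sensor stream.
# Events carry (x, y, t, polarity); information is encoded in spike timing
# rather than in sampled frames.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    x: int        # pixel column
    y: int        # pixel row
    t: float      # timestamp in microseconds
    polarity: int # +1 brightness increase, -1 decrease

def first_spike_latencies(events: List[Event], t0: float, shape=(4, 4)):
    """Temporal code: per pixel, keep only the latency of its first event
    after t0. Earlier spikes signal stronger/faster stimuli."""
    latencies = [[None] * shape[1] for _ in range(shape[0])]
    for ev in sorted(events, key=lambda e: e.t):
        if ev.t >= t0 and latencies[ev.y][ev.x] is None:
            latencies[ev.y][ev.x] = ev.t - t0
    return latencies

# Example: two pixels fire; only their first events are kept.
stream = [Event(0, 0, 10.0, 1), Event(1, 2, 25.0, -1), Event(0, 0, 40.0, 1)]
print(first_spike_latencies(stream, t0=0.0))
```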
42

Contributions to statistical analysis methods for neural spiking activity

Tao, Long 27 November 2018 (has links)
With the technical advances in neuroscience experiments in the past few decades, we have seen a massive expansion in our ability to record neural activity. These advances enable neuroscientists to analyze more complex neural coding and communication properties, and at the same time, raise new challenges for analyzing neural spiking data, which keeps growing in scale, dimension, and complexity. This thesis proposes several new statistical methods that advance statistical analysis approaches for neural spiking data, including sequential Monte Carlo (SMC) methods for efficient estimation of neural dynamics from membrane potential threshold crossings, state-space models using multimodal observation processes, and goodness-of-fit analysis methods for neural marked point process models. In a first project, we derive a set of iterative formulas that enable us to simulate trajectories from stochastic, dynamic neural spiking models that are consistent with a set of spike time observations. We develop an SMC method to simultaneously estimate the parameters of the model and the unobserved dynamic variables from spike train data. We investigate the performance of this approach on a leaky integrate-and-fire model. In another project, we define a semi-latent state-space model to estimate information related to the phenomenon of hippocampal replay. Replay is a recently discovered phenomenon where patterns of hippocampal spiking activity that typically occur during exploration of an environment are reactivated when an animal is at rest. This reactivation is accompanied by high-frequency oscillations in hippocampal local field potentials. However, methods to define replay mathematically remain undeveloped. In this project, we construct a novel state-space model that enables us to identify whether replay is occurring, and if so to estimate the movement trajectories consistent with the observed neural activity, and to categorize the content of each event. The state-space model integrates information from the spiking activity of the hippocampal population, the rhythms in the local field potential, and the rat's movement behavior. Finally, we develop a new, general time-rescaling theorem for marked point processes, and use this to develop a general goodness-of-fit framework for neural population spiking models. We investigate this approach through simulation and a real data application.
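The leaky integrate-and-fire model mentioned in the first project can be sketched in a few lines. The parameter values below are generic textbook choices, not those estimated in the thesis:

```python
# A minimal leaky integrate-and-fire (LIF) simulation, the model class the
# first project above is tested on. Parameters are illustrative.
import numpy as np

def simulate_lif(i_ext, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, r_m=1e7):
    """Euler integration of tau*dV/dt = -(V - v_rest) + R*I(t);
    a spike is recorded whenever V crosses threshold, then V is reset."""
    v = v_rest
    spike_times = []
    for k, i_t in enumerate(i_ext):
        v += dt / tau * (-(v - v_rest) + r_m * i_t)
        if v >= v_thresh:
            spike_times.append(k * dt)
            v = v_reset
    return spike_times

rng = np.random.default_rng(0)
current = 2.2e-9 + 0.5e-9 * rng.standard_normal(10_000)  # noisy drive (A)
print(simulate_lif(current)[:5])  # first few threshold-crossing times (s)
```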
43

Action learning experiments using spiking neural networks and humanoid robots

de Azambuja, Ricardo January 2018 (has links)
The way our brain works is still an open question, but one thing seems clear: biological neural systems are computationally powerful, robust and noisy. Natural nervous systems are able to control limbs in different scenarios with high precision. As neural networks in living beings communicate through spikes, modern neuromorphic systems try to mimic them by using spike-based neuron models. This thesis focuses on the advancement of neurorobotics, or brain-inspired robotic arm controllers based on artificial neural network architectures. The architecture chosen to implement those controllers was the spiking-neuron version of the Reservoir Computing framework, called the Liquid State Machine. The main goal is to explore the possibility of using brain-inspired neural networks to control a robot by demonstration. Moreover, it aims to achieve systems that are robust to environmental noise and internal structure destruction, presenting graceful degradation. For validation, a series of action-learning experiments is presented in which simulated robotic arms are controlled. The investigation starts with a 2-degrees-of-freedom arm and moves to the research version of Baxter, the collaborative humanoid robot from Rethink Robotics Inc. A proof-of-concept experiment is also carried out using the real Baxter robot. The results show that Liquid State Machines, when endowed with an extra external feedback loop, can also be employed to control humanoid robotic arms more complex than a simple planar 2-degrees-of-freedom one. Additionally, the new parallel architecture presented here withstood noise and internal destruction better than a simple use of multiple columns, while also presenting graceful degradation behaviour.
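For readers unfamiliar with the Liquid State Machine idea, here is a minimal sketch of the underlying Reservoir Computing scheme: a fixed random recurrent network plus a trained linear readout. For brevity it uses a rate-based (echo-state) reservoir rather than the spiking neurons and external feedback loop of the thesis:

```python
# Sketch of the Reservoir Computing idea behind a Liquid State Machine:
# a fixed random recurrent "liquid" projects the input into a rich state,
# and only a linear readout is trained.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in, n_steps = 200, 1, 500

w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
w_res = rng.standard_normal((n_res, n_res))
w_res *= 0.9 / np.abs(np.linalg.eigvals(w_res)).max()  # spectral radius < 1

u = np.sin(np.linspace(0, 8 * np.pi, n_steps))[:, None]  # input signal
target = np.roll(u[:, 0], -10)                           # predict 10 steps ahead

x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(w_in @ u[t] + w_res @ x)   # liquid dynamics
    states[t] = x

# Ridge-regression readout (the only trained part).
ridge = 1e-6 * np.eye(n_res)
w_out = np.linalg.solve(states.T @ states + ridge, states.T @ target)
print("readout MSE:", np.mean((states @ w_out - target) ** 2))
```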
44

Aspects of learning within networks of spiking neurons

Carnell, Andrew Robert January 2008 (has links)
Spiking neural networks have, in recent years, become a popular tool for investigating the properties and computational performance of large massively connected networks of neurons. Equally interesting is the investigation of the potential computational power of individual spiking neurons. An overview is provided of current and relevant research into the Liquid State Machine, biologically inspired artificial STDP learning mechanisms and the investigation of aspects of the computational power of artificial, recurrent networks of spiking neurons. First, it is shown that, using simple structures of spiking Leaky Integrate and Fire (LIF) neurons, a network n(P) can be built to perform any program P that can be performed by a general parallel programming language. Next, a form of STDP learning with normalisation is developed, referred to as STDP + N learning. The effects of applying this STDP + N learning within recurrently connected networks of neurons are then investigated. It is shown experimentally that, in very specific circumstances, Anti-Hebbian and Hebbian STDP learning may be considered to be approximately equivalent processes. A metric is then developed that can be used to measure the distance between any two spike trains. The metric is then used, along with the STDP + N learning, in an experiment to examine the capacity of a single spiking neuron that receives multiple input spike trains, to simultaneously learn many temporally precise Input/Output spike train associations. The STDP + N learning is further modified for use in recurrent networks of spiking neurons, to give the STDP + N Type 2 learning methodology. An experiment is devised which demonstrates that the Type 2 method of applying learning to the synapses of a recurrent network — effectively a randomly shifting locality of learning — can enable the network to learn firing patterns that the typical application of learning is unable to learn. The resulting networks could, in theory, be used to create the simple structures discussed in the first chapter of original work.
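A hedged sketch of what a pair-based STDP rule with weight normalisation might look like follows; the exact update and normalisation scheme of the thesis's STDP + N rule may differ:

```python
# Illustrative pair-based STDP with post-hoc normalisation, in the spirit
# of the "STDP + N" rule described above (constants are assumptions).
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.02):
    """dt = t_post - t_pre. Pre-before-post potentiates; the reverse
    order depresses, both decaying exponentially with |dt|."""
    return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

def apply_stdp_n(weights, pre_times, post_times):
    """Apply STDP over all spike pairs, then rescale so the summed weight
    onto the neuron is conserved (the normalisation step)."""
    total = weights.sum()
    for i, pres in enumerate(pre_times):          # one pre-synapse per row
        for t_pre in pres:
            for t_post in post_times:
                weights[i] += stdp_dw(t_post - t_pre)
    weights *= total / weights.sum()              # normalise back
    return weights

w = np.array([0.5, 0.5, 0.5])
pre = [[0.010], [0.030], [0.045]]   # pre-synaptic spike times (s)
post = [0.032]                      # post-synaptic spike time (s)
print(apply_stdp_n(w, pre, post))   # sum still 1.5; timing decides winners
```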
45

DHyANA : neuromorphic architecture for liquid computing / DHyANA : uma arquitetura digital neuromórfica hierárquica para máquinas de estado líquido

Holanda, Priscila Cavalcante January 2016 (has links)
Neural networks have been a subject of research for at least sixty years. From their effectiveness in processing information to their amazing ability to tolerate faults, there are countless processing mechanisms in the brain that fascinate us. It therefore comes as no surprise that, as enabling technologies have become available, scientists and engineers have stepped up their efforts to understand, simulate and mimic parts of it. In an approach similar to that of the Human Genome Project, the quest for innovative technologies within the field has given birth to billion-dollar projects and global efforts, what some call a global blossoming of neuroscience research. Advances in hardware have made the simulation of millions or even billions of neurons possible. However, existing approaches cannot yet provide the dense interconnect required by such massive numbers of neurons and synapses. In this regard, this work proposes DHyANA (Digital HierArchical Neuromorphic Architecture), a new hardware architecture for spiking neural networks that uses hierarchical network-on-chip communication. The architecture is optimized for Liquid State Machine (LSM) implementations. DHyANA was exhaustively tested on simulation platforms, as well as implemented on an Altera Stratix IV FPGA. Furthermore, a logic synthesis analysis in 65-nm CMOS technology was performed in order to evaluate and better compare the resulting system with similar designs, achieving an area of 0.23 mm² and a power dissipation of 147 mW for a 256-neuron implementation.
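The hierarchical network-on-chip principle can be illustrated with a toy two-level router. The packet format and addressing below are invented for illustration and are not DHyANA's actual design:

```python
# Hedged sketch of hierarchical network-on-chip routing: spike packets are
# routed first between clusters, then locally within one. All names and
# fields here are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class SpikePacket:
    src_cluster: int
    src_neuron: int   # index local to the source cluster
    dst_cluster: int
    dst_neuron: int

def route(packet: SpikePacket, my_cluster: int) -> str:
    """Two-level decision: hop up to the global interconnect unless the
    packet is already in its destination cluster, then deliver locally."""
    if packet.dst_cluster != my_cluster:
        return f"forward to global router -> cluster {packet.dst_cluster}"
    return f"deliver to local neuron {packet.dst_neuron}"

p = SpikePacket(src_cluster=0, src_neuron=17, dst_cluster=3, dst_neuron=5)
print(route(p, my_cluster=0))  # forwarded up the hierarchy
print(route(p, my_cluster=3))  # delivered within the destination cluster
```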
46

Structural, functional and dynamical properties of a lognormal network of bursting neurons / Propriedades estruturais, funcionais e dinâmicas de uma rede lognormal de neurônios bursters

Carvalho, Milena Menezes 27 March 2017 (has links)
In hippocampal CA1 and CA3 regions, various properties of neuronal activity follow skewed, lognormal-like distributions, including average firing rates, rate and magnitude of spike bursts, magnitude of population synchrony, and correlations between pre- and postsynaptic spikes. In recent studies, the lognormal features of hippocampal activities were well replicated by a multi-timescale adaptive threshold (MAT) neuron network with lognormally distributed excitatory-to-excitatory synaptic weights, though it remains unknown whether and how other neuronal and network properties can be replicated in this model. Here we implement two additional studies of the same network: first, we further analyze its burstiness properties by identifying and clustering neurons with exceptionally bursty features, once again demonstrating the importance of the lognormal synaptic weight distribution. Second, we characterize dynamical patterns of activity termed neuronal avalanches in in vivo CA3 recordings of behaving rats and in the model network, revealing the similarities and differences between experimental and model avalanche size distributions across the sleep-wake cycle. These results compare the MAT neuron network with hippocampal recordings from a different angle than previous work, providing more insight into the mechanisms behind activity in hippocampal subregions.
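Two of the ingredients above, lognormally distributed weights and avalanche-size measurement, are easy to sketch. The parameters and binning below are illustrative, not those of the study:

```python
# Illustrative sketch (not the study's code): draw lognormal excitatory
# weights and measure neuronal avalanche sizes from a binned spike raster.
# An avalanche is a run of consecutive non-empty time bins; its size is
# the number of spikes in the run.
import numpy as np

rng = np.random.default_rng(2)

# Lognormal excitatory-to-excitatory weights: most weak, a few very strong.
weights = rng.lognormal(mean=-0.7, sigma=1.0, size=10_000)
print("median %.2f vs mean %.2f" % (np.median(weights), weights.mean()))

def avalanche_sizes(spike_counts_per_bin):
    sizes, current = [], 0
    for c in spike_counts_per_bin:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

raster = rng.poisson(0.8, size=1_000)   # toy population spike counts
print(avalanche_sizes(raster)[:10])
```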
47

Energy Efficient Hardware Design of Neural Networks

January 2018 (has links)
Hardware implementation of deep neural networks is gaining significant importance nowadays. Deep neural networks are mathematical models that use learning algorithms inspired by the brain. Numerous deep learning algorithms such as multi-layer perceptrons (MLP) have demonstrated human-level recognition accuracy in image and speech classification tasks. These networks are built from multiple layers of processing elements called neurons, with many connections between them called synapses. They therefore involve operations with a high degree of parallelism, making them computationally and memory intensive. Constrained by computing resources and memory, most applications require a neural network that uses less energy. Energy-efficient implementation of these computationally intense algorithms on neuromorphic hardware demands many architectural optimizations. One such optimization is reducing network size through compression, and several studies have investigated compression by introducing element-wise or row-/column-/block-wise sparsity via pruning and regularization. Additionally, numerous recent works have concentrated on reducing the precision of activations and weights, some going down to a single bit. However, combining various sparsity structures with binarized or very-low-precision (2-3 bit) neural networks has not been comprehensively explored. Output activations in these deep neural network algorithms are typically non-binary, making it difficult to exploit sparsity. On the other hand, biologically realistic models like spiking neural networks (SNN) closely mimic the operations in biological nervous systems and open new avenues for brain-like cognitive computing. These networks deal with binary spikes, and they can exploit input-dependent sparsity or redundancy to dynamically scale the amount of computation, in turn leading to energy-efficient hardware implementation. This work discusses a configurable spiking neuromorphic architecture that supports multiple hidden layers by exploiting hardware reuse. It also presents design techniques for minimum-area/-energy DNN hardware with minimal degradation in accuracy. Area, performance and energy results of the DNN and SNN hardware are reported for the MNIST dataset. The neuromorphic hardware designed for the SNN algorithm in 28 nm CMOS demonstrates high classification accuracy (>98% on MNIST) at low energy (51.4-773 nJ per classification). The optimized DNN hardware designed in 40 nm CMOS, combining 8X structured compression with 3-bit weight precision, achieves 98.4% accuracy at 33 nJ per classification. / Dissertation/Thesis / Masters Thesis Electrical Engineering 2018
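The very-low-precision weights discussed above can be illustrated with a simple uniform symmetric quantizer; the thesis's actual quantization scheme may differ:

```python
# A hedged sketch of low-precision weight quantization: uniform symmetric
# quantization of trained weights to 3 bits. Shows the principle only.
import numpy as np

def quantize_symmetric(w, n_bits=3):
    """Map float weights onto 2^(n_bits-1) - 1 uniform levels each side
    of zero (3 bits -> integer codes -3..+3)."""
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / q_max
    q = np.clip(np.round(w / scale), -q_max, q_max)
    return q * scale, q.astype(int)

rng = np.random.default_rng(3)
w = rng.standard_normal(8) * 0.2
w_q, codes = quantize_symmetric(w, n_bits=3)
print(np.round(w, 3), codes, np.round(w_q, 3), sep="\n")
```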
49

Evolving connectionist systems for adaptive decision support with application in ecological data modelling

Soltic, Snjezana January 2009 (has links)
Ecological modelling problems have characteristics both featured in other modelling fields and specific ones; hence, methods developed and tested in other research areas may not be suitable for modelling ecological problems or may perform poorly when used on ecological data. This thesis identifies issues associated with the techniques typically used for solving ecological problems and develops new generic methods for decision support, especially suitable for ecological data modelling, which are characterised by: (1) adaptive learning, (2) knowledge discovery and (3) accurate prediction. These new methods have been successfully applied to challenging real-world ecological problems. Despite the fact that the number of possible applications of computational intelligence methods in ecology is vast, this thesis primarily concentrates on two problems: (1) species establishment prediction and (2) environmental monitoring. Our review of recent papers suggests that multi-layer perceptron networks trained using the backpropagation algorithm are the most widely used of all artificial neural networks for forecasting pest insect invasions. While multi-layer perceptron networks are appropriate for modelling complex nonlinear relationships, they have rather limited exploratory capabilities and are difficult to adapt to dynamically changing data. In this thesis an approach that addresses these limitations is proposed. We found that environmental monitoring applications could benefit from having an intelligent taste recognition system, possibly embedded in an autonomous robot. Hence, this thesis reviews the current knowledge on taste recognition and proposes a biologically inspired artificial model of taste recognition based on biologically plausible spiking neurons. The model is dynamic and is capable of learning new tastants as they become available. Furthermore, the model builds a knowledge base that can be extracted during or after the learning process in the form of IF-THEN fuzzy rules. It also comprises a layer that simulates the influence of taste receptor cells on the activity of their adjacent cells. These features increase the biological relevance of the model compared to other current taste recognition models. The proposed model was implemented in software on a single personal computer and in hardware on an Altera FPGA chip. Both implementations were applied to two real-world taste datasets. In addition, for the first time the applicability of transductive reasoning for forecasting the establishment potential of pest insects in new locations was investigated. For this purpose four types of predictive models, built using inductive and transductive reasoning, were used for predicting the distributions of three pest insects. The models were evaluated in terms of their predictive accuracy and their ability to discover patterns in the modelling data. The results obtained indicate that evolving connectionist systems can be successfully used for building predictive distribution models and environmental monitoring systems. The features available in the proposed dynamic systems, such as on-line learning and knowledge discovery, are needed to improve our knowledge of species distributions. This work laid the foundation for a number of interesting future projects in the field of ecological modelling, robotics, pervasive computing and pattern recognition that can be undertaken separately or in sequence.
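The "evolving" principle behind evolving connectionist systems, growing new nodes online as unfamiliar data arrives, can be sketched as follows; the distance measure and threshold are assumptions, not the thesis's algorithm:

```python
# Illustrative sketch of online structure growth in an evolving
# connectionist system: adapt the nearest prototype node if a sample is
# close enough, otherwise grow a new node. Parameters are assumptions.
import numpy as np

class EvolvingPrototypes:
    def __init__(self, radius=1.0):
        self.radius = radius
        self.centers, self.counts = [], []

    def learn_one(self, x):
        x = np.asarray(x, dtype=float)
        if self.centers:
            d = [np.linalg.norm(x - c) for c in self.centers]
            i = int(np.argmin(d))
            if d[i] <= self.radius:               # adapt the nearest node
                self.counts[i] += 1
                self.centers[i] += (x - self.centers[i]) / self.counts[i]
                return i
        self.centers.append(x.copy())             # or grow a new node
        self.counts.append(1)
        return len(self.centers) - 1

model = EvolvingPrototypes(radius=0.8)
for sample in [[0.1, 0.2], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]]:
    model.learn_one(sample)
print(len(model.centers), "nodes after four samples")  # expect 2
```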
50

Artificial Grammar Recognition Using Spiking Neural Networks

Cavaco, Philip January 2009 (has links)
This thesis explores the feasibility of Artificial Grammar (AG) recognition using spiking neural networks. A biologically inspired minicolumn model is designed as the base computational unit. Two network topologies are defined with different design philosophies. Both networks consist of minicolumn models, referred to as nodes, connected with excitatory and inhibitory connections. The first network contains nodes for every bigram and trigram producible by the grammar's finite state machine (FSM). The second network has only the nodes required to identify unique internal states of the FSM. The networks produce predictable activity for the tested input strings. Future work to improve the performance of the networks is discussed. The modeling framework developed can be used in neurophysiological research to implement network layouts and compare simulated performance characteristics to actual subject performance.
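The first network type, with a node per producible bigram, can be made concrete with a toy finite state machine. The grammar below is an assumed Reber-style example, not the one used in the thesis:

```python
# Illustrative sketch: enumerate the bigrams an artificial-grammar FSM can
# produce, i.e. the symbol pairs the first network type would dedicate a
# minicolumn node to. The grammar itself is an assumption.

# Toy Reber-style FSM: state -> list of (symbol, next_state).
fsm = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 2)],
    2: [("V", 3), ("T", 2)],
    3: [],  # accepting state
}

def reachable_bigrams(fsm):
    bigrams = set()
    for state, arcs in fsm.items():
        for sym1, nxt in arcs:            # first symbol of the pair
            for sym2, _ in fsm[nxt]:      # any symbol producible next
                bigrams.add(sym1 + sym2)
    return sorted(bigrams)

print(reachable_bigrams(fsm))  # each pair would get its own node
```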
