61 |
Evolving connectionist systems for adaptive decision support with application in ecological data modelling. Soltic, Snjezana, January 2009 (has links)
Ecological modelling problems share some characteristics with other modelling fields but also have specific ones; methods developed and tested in other research areas may therefore be unsuitable for ecological problems or may perform poorly on ecological data. This thesis identifies issues associated with the techniques typically used for solving ecological problems and develops new generic methods for decision support, especially suitable for ecological data modelling, characterised by: (1) adaptive learning, (2) knowledge discovery and (3) accurate prediction. These new methods have been successfully applied to challenging real-world ecological problems. Although the number of possible applications of computational intelligence methods in ecology is vast, this thesis concentrates on two problems: (1) species establishment prediction and (2) environmental monitoring. Our review of recent papers suggests that multi-layer perceptron networks trained with the backpropagation algorithm are the most widely used artificial neural networks for forecasting pest insect invasions. While multi-layer perceptron networks are appropriate for modelling complex nonlinear relationships, they have rather limited exploratory capabilities and are difficult to adapt to dynamically changing data. This thesis proposes an approach that addresses these limitations. We found that environmental monitoring applications could benefit from an intelligent taste recognition system, possibly embedded in an autonomous robot. Hence, this thesis reviews the current knowledge on taste recognition and proposes a biologically inspired model of taste recognition built from biologically plausible spiking neurons. The model is dynamic and capable of learning new tastants as they become available.
Furthermore, the model builds a knowledge base that can be extracted during or after the learning process in the form of IF-THEN fuzzy rules. It also comprises a layer that simulates the influence of taste receptor cells on the activity of their adjacent cells. These features increase the biological relevance of the model compared to other current taste recognition models. The proposed model was implemented in software on a single personal computer and in hardware on an Altera FPGA chip, and both implementations were applied to two real-world taste datasets. In addition, the applicability of transductive reasoning to forecasting the establishment potential of pest insects in new locations was investigated for the first time. For this purpose, four types of predictive models, built using inductive and transductive reasoning, were used to predict the distributions of three pest insects. The models were evaluated in terms of their predictive accuracy and their ability to discover patterns in the modelling data. The results indicate that evolving connectionist systems can be successfully used for building predictive distribution models and environmental monitoring systems. The features available in the proposed dynamic systems, such as on-line learning and knowledge discovery, are needed to improve our knowledge of species distributions. This work lays the foundation for a number of future projects in ecological modelling, robotics, pervasive computing and pattern recognition that can be undertaken separately or in sequence.
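The inductive-versus-transductive contrast evaluated above can be illustrated with a minimal sketch: instead of one global (inductive) model, a transductive predictor fits a local model around each new input. The distance-weighted vote, the parameter `k` and the toy species-presence data below are illustrative assumptions, not the thesis's actual evolving connectionist models.

```python
import numpy as np

def transductive_predict(X_train, y_train, x_new, k=5):
    """Predict a label for x_new from a model built only on its k nearest
    training samples (transductive reasoning, sketched here as a
    distance-weighted local vote)."""
    d = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(d)[:k]              # the k nearest neighbours
    weights = 1.0 / (d[idx] + 1e-9)      # closer samples count more
    return float(np.average(y_train[idx], weights=weights) > 0.5)

# Toy data: two clusters of environmental feature vectors.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                     rng.normal(1.0, 0.1, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)  # 0 = absent, 1 = established
pred = transductive_predict(X_train, y_train, np.array([1.0, 1.0]))
```

The local model is rebuilt per query point, which is what distinguishes the transductive setting from a single globally trained predictor.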
|
62 |
Heterogeneous probabilistic models for optimisation and modelling of evolving spiking neural networks. Schliebs, Stefan, January 2010 (has links)
This thesis proposes a novel feature selection and classification method employing evolving spiking neural networks (eSNN) and evolutionary algorithms (EA). The method is named the Quantum-inspired Spiking Neural Network (QiSNN) framework. QiSNN represents an integrated wrapper approach: an evolutionary process evolves appropriate feature subsets for a given classification task and simultaneously optimises the neural and learning-related parameters of the network. Unlike other methods, the connection weights of this network are determined by a fast one-pass learning algorithm, which dramatically reduces the training time. At its core, QiSNN employs the Thorpe neural model, which allows the efficient simulation of even large networks. In QiSNN, the presence or absence of features is represented by a string of concatenated bits, while the parameters of the neural network are continuous. For the exploration of these two entirely different search spaces, a novel Estimation of Distribution Algorithm (EDA) is developed. The method maintains a population of probabilistic models specialised for the optimisation of binary, continuous or heterogeneous search spaces while utilising a small and intuitive set of parameters. The EDA extends the Quantum-inspired Evolutionary Algorithm (QEA) proposed by Han and Kim (2002) and is named the Heterogeneous Hierarchical Model EDA (hHM-EDA). The algorithm is compared to numerous contemporary optimisation methods and studied in terms of convergence speed, solution quality and robustness in noisy search spaces. The thesis investigates the functioning and characteristics of QiSNN using both synthetic feature selection benchmarks and a real-world case study on ecological modelling. By evolving suitable feature subsets, QiSNN significantly enhances the classification accuracy of eSNN.
Compared to numerous other feature selection techniques, such as the wrapper-based Multilayer Perceptron (MLP) and the Naive Bayesian Classifier (NBC), QiSNN demonstrates competitive classification and feature selection performance at comparatively low computational cost.
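The "fast one-pass learning" of the Thorpe model can be sketched as rank-order weight setting: each input's weight depends only on its firing rank under rank-order encoding, so a single presentation of a sample suffices. The modulation factor and the input encoding below are illustrative assumptions, not the thesis's exact parameters.

```python
import numpy as np

def rank_order_weights(sample, mod=0.9):
    """One-pass weight setting in the spirit of the Thorpe model.

    Under rank-order encoding, larger input values fire earlier;
    earlier-firing inputs receive exponentially larger weights
    (mod ** rank). `mod` is a hypothetical modulation factor.
    """
    order = np.argsort(-sample)               # rank 0 = earliest spike
    weights = np.empty_like(sample, dtype=float)
    weights[order] = mod ** np.arange(len(sample))
    return weights

w = rank_order_weights(np.array([0.2, 0.9, 0.5]))
# The strongest input (0.9) fires first and gets the largest weight,
# w ≈ [0.81, 1.0, 0.9].
```

Because the weights are a deterministic function of the spike order, no iterative training loop is needed, which is what makes the scheme one-pass.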
|
64 |
Margin learning in spiking neural networks. Brune, Rafael, 15 December 2017 (has links)
No description available.
|
65 |
Functional relevance of inhibitory and disinhibitory circuits in signal propagation in recurrent neuronal networks. Bihun, Marzena Maria, January 2018 (has links)
Cell assemblies are considered physiological as well as functional units in the brain. Repetitive, stereotypical sequential activation of many neurons has been observed, but the mechanisms underlying it are not well understood. Feedforward networks such as synfire chains, in which pools of excitatory neurons are unidirectionally connected and transmit signals in a cascade-like fashion, have been proposed to model such sequential activity. When embedded in a recurrent network, however, these were shown to destabilise the whole network's activity, challenging the suitability of the model. Here, we investigate a feedforward chain of excitatory pools enriched with inhibitory pools that provide disynaptic feedforward inhibition. We show that, when embedded in a recurrent network of spiking neurons, such an augmented chain is capable of robust signal propagation. We then investigate how overlapping two chains affects signal transmission as well as the stability of the host network. While shared excitatory pools turn out to be detrimental to global stability, inhibitory overlap implicitly realises the motif of lateral inhibition, which, if moderate, maintains stability but, if substantial, silences the whole network's activity, including the signal. Adding a disinhibitory pathway along the chain rescues signal transmission by transforming a strong inhibitory wave into a disinhibitory one, which specifically guards the excitatory pools from receiving excessive inhibition and thereby allows them to remain responsive to the forthcoming activation. Disinhibitory circuits not only improve signal transmission but can also control it via a gating mechanism. We demonstrate that, by manipulating the firing threshold of the disinhibitory neurons, signal transmission can be enabled or completely blocked.
This mechanism corresponds to cholinergic modulation, which has been shown to act through volume as well as phasic transmission and to target classes of neurons differentially. Furthermore, we show that modulation of the feedforward inhibition circuit can promote spontaneous replay in the absence of external inputs. This mechanism, however, tends to also cause global instabilities. Overall, these results underscore the importance of inhibitory neuron populations in controlling signal propagation in cell assemblies as well as global stability. Specific inhibitory circuits, when controlled by neuromodulatory systems, can robustly guide or block signals and evoke replay. This adds to evidence that the population of interneurons is diverse and is best categorised by neurons' specific circuit functions as well as their responsiveness to neuromodulators.
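The augmented chain can be caricatured in a few lines of mean-field code: each excitatory pool drives the next one both directly and through a feedforward inhibitory pool, so the next pool effectively receives the difference. All pool sizes, weights, the threshold and the sigmoidal pool response below are illustrative assumptions, not the thesis's spiking network.

```python
import numpy as np

def propagate(n_pools=5, pool_size=100, w_exc=0.12, w_inh=0.08, steps=6):
    """Mean-field caricature of a synfire-like chain with disynaptic
    feedforward inhibition: pool i receives excitation minus inhibition
    from pool i-1, and its activity is the fraction of neurons crossing
    a threshold (modelled as a sigmoid around drive = 2)."""
    activity = np.zeros(n_pools)
    activity[0] = 1.0                     # stimulate the first pool
    trace = [activity.copy()]
    for _ in range(steps):
        new = np.zeros_like(activity)
        for i in range(1, n_pools):
            drive = (w_exc - w_inh) * pool_size * activity[i - 1]
            new[i] = 1.0 / (1.0 + np.exp(-(drive - 2.0)))
        activity = new
        trace.append(activity.copy())
    return np.array(trace)

trace = propagate()
# A pulse of high activity travels pool by pool, while pools away from
# the signal stay near a low spontaneous baseline.
```

Raising `w_inh` toward `w_exc` in this sketch plays the role of the strong inhibitory wave described above: the net drive shrinks and the pulse dies out instead of propagating.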
|
66 |
DHyANA: neuromorphic architecture for liquid computing / DHyANA: uma arquitetura digital neuromórfica hierárquica para máquinas de estado líquido. Holanda, Priscila Cavalcante, January 2016 (has links)
Neural networks have been a subject of research for at least sixty years. From their effectiveness in processing information to their remarkable ability to tolerate faults, countless processing mechanisms in the brain fascinate us. It is therefore no surprise that, as enabling technologies have become available, scientists and engineers have increased their efforts to understand, simulate and mimic parts of it. In an approach similar to that of the Human Genome Project, the quest for innovative technologies within the field has given birth to billion-dollar projects and global efforts, what some call a global blossoming of neuroscience research. Advances in hardware have made the simulation of millions or even billions of neurons possible. However, existing approaches cannot yet provide the connection density required by this massive number of neurons and synapses. In this regard, this work proposes DHyANA (Digital HierArchical Neuromorphic Architecture), a new hardware architecture for spiking neural networks that uses hierarchical network-on-chip communication. The architecture is optimised for Liquid State Machine (LSM) implementations. DHyANA was exhaustively tested on simulation platforms and implemented on an Altera Stratix IV FPGA. Furthermore, logic synthesis in 65 nm CMOS technology was performed in order to evaluate the resulting system and compare it with similar designs, achieving an area of 0.23 mm² and a power dissipation of 147 mW for a 256-neuron implementation.
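The LSM setting that DHyANA targets can be sketched at a high level: a fixed random recurrent "liquid" maps input streams to internal states, and only a linear readout is trained. The rate-based tanh reservoir, the sizes and the ridge-regression readout below are simplifying assumptions standing in for the spiking liquid, not the hardware design itself.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                         # reservoir size (assumption)
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # fixed recurrent weights
W_in = rng.normal(0.0, 1.0, N)                 # fixed input weights

def liquid_state(u):
    """Run the reservoir over the input sequence u; return the final state."""
    x = np.zeros(N)
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)        # rate abstraction of spiking
    return x

# Train only the linear readout (ridge regression) to separate two
# kinds of input stream (mean 0.0 vs mean 1.0).
X = np.stack([liquid_state(rng.normal(m, 0.1, 20))
              for m in (0.0, 1.0) for _ in range(30)])
y = np.array([0] * 30 + [1] * 30)
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ y)
acc = np.mean((X @ w_out > 0.5) == y)          # training accuracy
```

The liquid itself is never trained, which is why an LSM maps well onto fixed neuromorphic hardware: only the small readout layer needs plasticity.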
|
68 |
Reliable Arithmetic Circuit Design Inspired by SNP Systems. January 2013 (has links)
Developing new, non-traditional device models is gaining popularity as silicon-based electrical devices approach their scaling limits. Membrane systems, also called P systems, are a new class of biological computation model inspired by the way cells process chemical signals. Spiking Neural P systems (SNP systems), a particular kind of membrane system, are inspired by the way neurons in the brain interact using electrical spikes. Compared to traditional Boolean logic, SNP systems not only perform similar functions but also offer a more promising route to reliable computation. Two basic neuron types, Low Pass (LP) neurons and High Pass (HP) neurons, are introduced; together they can build an arbitrary SNP neuron. Since SNP systems have been proved Turing complete, it follows that these two basic neuron types are as well. They are further used as elements to construct general-purpose arithmetic circuits such as adders, subtractors and comparators. In this thesis, erroneous neuron behaviours are also discussed. Transmission error (spike loss) is proved to be equivalent to threshold error, which makes the discussion of threshold errors more general. To improve reliability, a new structure called a motif is proposed. Compared to Triple Modular Redundancy, the motif design proves efficient and effective in both single-neuron and arithmetic-circuit analyses. DRAM-based CMOS circuits are used to implement the two basic neuron types, and their functionality is verified with SPICE simulations. The motif-improved adder and comparator are much more reliable than conventional Boolean logic designs, with lower leakage and smaller silicon area. This leads to the conclusion that SNP systems could provide a more promising solution for reliable computation than conventional Boolean logic.
/ Dissertation/Thesis / M.S. Electrical Engineering 2013
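One plausible reading of the two primitives (an assumption about their semantics, not taken from the thesis) is that an HP neuron spikes when its input spike count reaches its threshold and an LP neuron spikes when the count stays at or below it. Under that reading, a 1-bit half adder falls out directly:

```python
def hp(threshold, *spikes):
    """High Pass neuron (assumed semantics): fires iff spike count >= threshold."""
    return int(sum(spikes) >= threshold)

def lp(threshold, *spikes):
    """Low Pass neuron (assumed semantics): fires iff spike count <= threshold."""
    return int(sum(spikes) <= threshold)

def half_adder(a, b):
    """1-bit half adder from spike-count neurons.

    carry: HP(2) fires only when both inputs spike (AND);
    sum:   spike count of exactly one, i.e. HP(1) gated by LP(1) (XOR).
    """
    return hp(1, a, b) & lp(1, a, b), hp(2, a, b)

table = [half_adder(a, b) for a in (0, 1) for b in (0, 1)]
# table == [(0, 0), (1, 0), (1, 0), (0, 1)]  -> (sum, carry) truth table
```

The comparator and subtractor described above would compose the same two primitives in larger circuits; this sketch only shows the smallest useful building block.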
|
69 |
Artificial Grammar Recognition Using Spiking Neural Networks. Cavaco, Philip, January 2009 (has links)
This thesis explores the feasibility of Artificial Grammar (AG) recognition using spiking neural networks. A biologically inspired minicolumn model is designed as the base computational unit. Two network topologies with different design philosophies are defined. Both networks consist of minicolumn models, referred to as nodes, connected with excitatory and inhibitory connections. The first network contains nodes for every bigram and trigram producible by the grammar's finite state machine (FSM). The second network has only the nodes required to identify unique internal states of the FSM. The networks produce predictable activity for the tested input strings. Future work to improve the performance of the networks is discussed. The modelling framework developed can be used in neurophysiological research to implement network layouts and compare simulated performance characteristics with actual subject performance.
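The idea behind the first topology can be sketched symbolically: enumerate the bigrams the FSM can produce, create one node per bigram, and judge a string grammatical only if every bigram it contains activates an existing node. The tiny FSM below is a made-up stand-in for the thesis's grammar, and a chunk-based check like this only approximates full FSM recognition (it can over-accept strings whose bigrams are all legal individually).

```python
# Hypothetical 3-state FSM: state -> {symbol: next_state}.
FSM = {0: {'A': 1, 'B': 2}, 1: {'C': 2}, 2: {'A': 0}}

def legal_bigrams(fsm):
    """All two-symbol sequences the FSM can emit along some path."""
    bigrams = set()
    for state, arcs in fsm.items():
        for sym, nxt in arcs.items():
            for sym2 in fsm.get(nxt, {}):
                bigrams.add(sym + sym2)
    return bigrams

NODES = legal_bigrams(FSM)      # one network node per legal bigram

def grammatical(string):
    """True iff every bigram in the string has a node to respond to it."""
    return all(string[i:i + 2] in NODES for i in range(len(string) - 1))
```

Adding trigram nodes, as the first topology does, tightens this approximation; the second topology instead tracks the FSM's internal states directly.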
|
70 |
Learning transformation-invariant visual representations in spiking neural networks. Evans, Benjamin D., January 2012 (has links)
This thesis aims to understand the learning mechanisms which underpin the process of visual object recognition in the primate ventral visual system. The computational crux of this problem lies in the ability to retain specificity to recognize particular objects or faces while exhibiting generality across natural variations and distortions in the view (DiCarlo et al., 2012). In particular, the work presented is focussed on gaining insight into the processes through which transformation-invariant visual representations may develop in the primate ventral visual system. The primary motivation for this work is the belief that some of the fundamental mechanisms employed in the primate visual system may only be captured through modelling the individual action potentials of neurons, and that existing rate-coded models therefore constitute an inadequate level of description for fully understanding the learning processes of visual object recognition. To this end, spiking neural network models are formulated and applied to the problem of learning transformation-invariant visual representations, using a spike-time dependent learning rule to adjust the synaptic efficacies between the neurons. The ways in which the existing rate-coded CT (Stringer et al., 2006) and Trace (Földiák, 1991) learning mechanisms may operate in a simple spiking neural network model are explored, and these findings are then applied to a more accurate model using realistic 3-D stimuli. Three mechanisms are then examined through which a spiking neural network may solve the problem of learning separate transformation-invariant representations in scenes composed of multiple stimuli, by temporally segmenting competing input representations. The spike-time dependent plasticity in the feed-forward connections is then shown to be able to exploit these input-layer dynamics to form individual stimulus representations in the output layer.
Finally, the work is evaluated and future directions of investigation are proposed.
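The Trace mechanism explored above can be written compactly in its classic rate-coded form (Földiák, 1991): the postsynaptic term is an exponential average over recent activity, so temporally adjacent transforms of the same object strengthen weights onto the same output neurons. The learning rates below are illustrative values, not the thesis's parameters.

```python
import numpy as np

def trace_update(w, x_pre, y_post, y_trace, eta=0.8, alpha=0.1):
    """One step of the rate-coded trace rule:

        trace(t) = (1 - eta) * y(t) + eta * trace(t-1)
        dw       = alpha * trace(t) * x_pre
    """
    y_trace = (1 - eta) * y_post + eta * y_trace
    w = w + alpha * np.outer(y_trace, x_pre)
    return w, y_trace

w, y_trace = np.zeros((2, 3)), np.zeros(2)
w, y_trace = trace_update(w, np.ones(3), np.array([1.0, 0.0]), y_trace)
# Only the neuron with a non-zero trace (index 0) strengthens its weights.
```

Because the trace decays over a few time steps, successive views presented close together in time are bound to the same output neuron, which is the invariance-learning intuition the spiking models in the thesis build on.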
|