131. Single unit and correlated neural activity observed in the cat motor cortex during a reaching movement. Putrino, David, January 2009.
[Truncated abstract] The goal of this research was to investigate some of the ways that neurons located in the primary motor cortex (MI) code for skilled movement. The task-related and temporally correlated spike activity that occurred during the performance of a goal-directed reaching and retrieval task involving multiple motion elements and limbs was evaluated in cats. The contributions made by different neuronal subtypes located in MI (which were identified based upon extracellular spiking features) to the coding of movement were also investigated. Spike activity was simultaneously recorded from microelectrodes that were chronically implanted into the motor cortex of both cerebral hemispheres. Task-related neurons modulated their activity during the reaching and retrieval movements of one forelimb, or the postural reactions of the contralateral forelimb and ipsilateral hindlimb. Spike durations and baseline firing rates of neurons were used to distinguish between putative excitatory (Regular Spiking; RS) and inhibitory (Fast Spiking; FS) neurons in the cortex. Frame-by-frame video analysis of the task was used to subdivide each task trial into stages (e.g. premovement, reach, withdraw and feed) and relate modulations in neural activity to the individual task stages. Task-related neurons were classified as either narrowly tuned or broadly tuned, depending on whether their activity modulated during a single task stage or during more than one stage, respectively. Recordings were made from 163 task-related neurons, and temporal correlations in the spike activity of simultaneously recorded neurons were identified using shuffle-corrected cross-correlograms on 662 different neuronal pairs... The results of this research suggest that temporally correlated activity may reflect the activation of intracortical and callosal connections between a variety of efferent zones involved in task performance, playing a role in the coordination of muscles and limbs during motor tasks. The differences in the patterns of task-related activity, and in the incidence of significant neuronal interactions, that were observed between the RS and FS neuronal populations imply that they make different contributions to the coding of movement in MI.
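The shuffle-corrected cross-correlogram analysis applied to the 662 neuronal pairs can be illustrated with a minimal sketch; the Python below is a generic reconstruction of the technique (the bin width, lag window and trial-shift predictor are assumptions, not details taken from the thesis).

```python
import numpy as np

def cross_correlogram(ref_trials, tgt_trials, max_lag=0.05, bin_width=0.001):
    """Histogram of spike-time differences (target minus reference), pooled over trials."""
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for ref, tgt in zip(ref_trials, tgt_trials):
        for t in ref:
            counts += np.histogram(tgt - t, bins=edges)[0]
    return counts, edges

def shuffle_corrected_ccg(ref_trials, tgt_trials, **kwargs):
    """Raw cross-correlogram minus a shift predictor built from trial-shuffled pairings;
    peaks that survive the subtraction indicate correlation beyond shared task modulation."""
    raw, edges = cross_correlogram(ref_trials, tgt_trials, **kwargs)
    shifted = tgt_trials[1:] + tgt_trials[:1]   # pair each reference trial with the next target trial
    predictor, _ = cross_correlogram(ref_trials, shifted, **kwargs)
    return raw - predictor, edges

# Fabricated spike times (seconds) for one neuronal pair over three trials
ref = [np.array([0.10, 0.42, 0.80]), np.array([0.15, 0.55]), np.array([0.05, 0.33, 0.90])]
tgt = [np.array([0.11, 0.44, 0.81]), np.array([0.17, 0.56]), np.array([0.06, 0.35, 0.91])]
corrected, edges = shuffle_corrected_ccg(ref, tgt)
```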
132. Controle de posição com múltiplos sensores em um robô colaborativo utilizando liquid state machines [Position control with multiple sensors on a collaborative robot using liquid state machines]. Sala, Davi Alberto, January 2017.
The idea of employing biologically inspired neural networks to perform computation has been widely explored over the last decades. The essential fact in this paradigm is that a neuron can integrate and process information, and that this information can be revealed by its spiking activity. By describing the dynamics of a single neuron with a mathematical model, a network can be implemented from a set of such neurons, in which the spiking activity of every neuron carries contributions, or information, from the spiking activity of the network it is embedded in. This work presents a Z-axis position controller using sensor fusion, based on spiking recurrent neural networks and suitable to run on a neuromorphic computer. The proposed framework uses the reservoir computing paradigm, in the form of a Liquid State Machine (LSM), to control the collaborative robot BAXTER. The controller was designed to work in parallel with LSMs that execute trajectories along closed two-dimensional shapes; in order to keep a felt pen in contact with the drawing surface, data from force and distance sensors are fed to the controller. The system was trained on data from a Proportional Integral Derivative (PID) controller, fusing the data from both sensors. The results show that the LSM was able to learn the behaviour of the PID controller in different situations.
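The control scheme described, a liquid state machine whose readout is trained on logged PID commands and fused force/distance sensor data, can be sketched roughly as follows; a rate-based echo-state reservoir stands in for the spiking liquid, and all sizes, signals and the ridge-regression readout are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir ("liquid") approximated here by a leaky rate network; sizes are arbitrary choices.
N, n_in = 200, 2                      # 2 inputs: force sensor, distance sensor
W_in = rng.uniform(-0.5, 0.5, (N, n_in))
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u, leak=0.3):
    """u: (T, 2) sensor time series -> (T, N) reservoir states."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    return states

def train_readout(states, pid_commands, ridge=1e-4):
    """Least-squares readout mapping reservoir states to the recorded PID Z-axis commands."""
    X = np.hstack([states, np.ones((len(states), 1))])     # bias term
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ pid_commands)

# Fabricated training data standing in for logged sensor readings and PID commands
T = 1000
sensors = np.column_stack([np.sin(np.linspace(0, 20, T)), np.cos(np.linspace(0, 20, T))])
pid_out = 0.5 * sensors[:, 0] - 0.2 * sensors[:, 1]
w_out = train_readout(run_reservoir(sensors), pid_out)
```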
133. Building and operating large-scale SpiNNaker machines. Heathcote, Jonathan David, January 2016.
SpiNNaker is an unconventional supercomputer architecture designed to simulate up to one billion biologically realistic neurons in real time. To achieve this goal, SpiNNaker employs a novel network architecture which poses a number of practical problems in scaling up from desktop prototypes to machine-room-filling installations. SpiNNaker's hexagonal torus network topology has received mostly theoretical treatment in the literature. This thesis tackles some of the challenges encountered when building 'real-world' systems. Firstly, a scheme is devised for physically laying out hexagonal torus topologies in machine rooms which avoids long cables; this is demonstrated on a half-million-core SpiNNaker prototype. Secondly, to improve the performance of existing routing algorithms, a more efficient process is proposed for finding (logically) short paths through hexagonal torus topologies. This is complemented by a formula which provides routing algorithms with greater flexibility when finding paths, potentially resulting in a more balanced network utilisation. The scale of SpiNNaker's network and the models intended for it also present their own challenges. Placement and routing algorithms are developed which assign processes to nodes and generate paths through SpiNNaker's network. These algorithms minimise congestion and tolerate network faults. The proposed placement algorithm is inspired by techniques used in chip design and is shown to enable larger applications to run on SpiNNaker than the previous state of the art. Likewise, the routing algorithm developed is able to tolerate network faults, inevitably present in large-scale systems, with little performance overhead.
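The "(logically) short paths" mentioned above can be illustrated with the standard hexagonal-coordinate trick of writing a 2D offset on three axes and cancelling the redundant component; the sketch below shows one common formulation of hexagonal mesh and torus distances and is offered as an assumed illustration of the idea, not the thesis's exact formula.

```python
def to_hex_route(dx, dy):
    """Express a (dx, dy) offset as a minimal (x, y, z) hop vector on a hexagonal mesh.

    Under the convention that one hop along the third (diagonal) axis cancels one step
    along x and one along y, any offset can be written as (dx - m, dy - m, -m) for any m;
    choosing m as the median of {dx, dy, 0} minimises the hop count |x| + |y| + |z|.
    """
    m = sorted((dx, dy, 0))[1]
    return (dx - m, dy - m, -m)

def torus_distance(src, dst, width, height):
    """Minimal hop count between two nodes of a width x height hexagonal torus.

    Tries each combination of wrapping (or not wrapping) around the two torus
    dimensions and keeps the cheapest resulting mesh route.
    """
    (sx, sy), (tx, ty) = src, dst
    best = None
    for dx in ((tx - sx) % width, (tx - sx) % width - width):
        for dy in ((ty - sy) % height, (ty - sy) % height - height):
            hops = sum(abs(c) for c in to_hex_route(dx, dy))
            best = hops if best is None else min(best, hops)
    return best

# e.g. distance across a 12 x 12 hexagonal torus
print(torus_distance((0, 0), (10, 1), 12, 12))
```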
134. Monte Carlo Optimization of Neuromorphic Cricket Auditory Feature Detection Circuits in the Dynap-SE Processor. Nilsson, Mattias, January 2018.
Neuromorphic information processing systems mimic the dynamics of neurons and synapses, and the architecture of biological nervous systems. By using a combination of sub-threshold analog circuits and fast programmable digital circuits, spiking neural networks with co-localized memory and computation can be implemented, enabling more energy-efficient information processing than conventional von Neumann digital computers. When configuring such a spiking neural network, the variability caused by device mismatch of the analog electronic circuits must be managed and exploited. While pre-trained spiking neural networks have been approximated in neuromorphic processors in previous work, configuration methods and tools need to be developed that make efficient, systematic use of the large number of inhomogeneous analog neuron and synapse circuits. The aim of the work presented here is to investigate such automatic configuration methods, focusing in particular on Monte Carlo methods, and to develop software for training and configuration of the Dynap-SE neuromorphic processor, which is based on the Dynamic Neuromorphic Asynchronous Processor (DYNAP) architecture. A Monte Carlo optimization method enabling configuration of spiking neural networks on the Dynap-SE is developed and tested with the Metropolis-Hastings algorithm in the low-temperature limit. The method is based on a hardware-in-the-loop setup where a PC performs online optimization of a Dynap-SE, and the resulting system is tested by reproducing properties of small neural networks in the auditory system of field crickets. It is shown that the system successfully configures two different auditory neural networks, consisting of three and four neurons, respectively. However, appropriate bias parameter values defining the dynamic properties of the analog neuron and synapse circuits must be manually defined prior to optimization, which is time-consuming and should be included in the optimization protocol in future work.
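In the low-temperature limit, the Metropolis-Hastings acceptance rule reduces to accepting only proposals that do not increase the cost, which makes the hardware-in-the-loop optimization easy to sketch; the cost function below is a placeholder for a measurement taken from the Dynap-SE, and all names and step sizes are assumptions.

```python
import numpy as np

def optimize(evaluate_on_hardware, init_params, n_iter=500, step=0.05, seed=1):
    """Metropolis-Hastings search in the low-temperature (greedy) limit.

    `evaluate_on_hardware(params)` would configure the neuromorphic chip, run the
    spiking network, and return a scalar cost; here any callable will do.
    """
    rng = np.random.default_rng(seed)
    params = np.asarray(init_params, dtype=float)
    cost = evaluate_on_hardware(params)
    for _ in range(n_iter):
        proposal = params + rng.normal(0.0, step, size=params.shape)
        new_cost = evaluate_on_hardware(proposal)
        if new_cost <= cost:              # T -> 0: accept only non-worsening moves
            params, cost = proposal, new_cost
    return params, cost

# Placeholder cost standing in for a spike-response mismatch measured on the chip
target = np.array([0.3, -0.7, 0.1])
best, best_cost = optimize(lambda p: float(np.sum((p - target) ** 2)), np.zeros(3))
```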
136. Estimation de paramètres de modèles de neurones biologiques sur une plate-forme de SNN (Spiking Neural Network) implantés "in silico" [Parameter estimation for biological neuron models on a spiking neural network (SNN) platform implemented in silico]. Buhry, Laure, 21 September 2010.
This thesis work, carried out in a research group designing neuromimetic analog circuits based on the Hodgkin-Huxley model, concerns the modelling of biological neurons and, more specifically, the estimation of the parameters of neuron models. The first part of the manuscript bridges the gap between neuron modelling and optimization, with emphasis on the Hodgkin-Huxley model, for which a parameter-extraction method associated with an electrophysiological measurement technique (the voltage clamp) already existed, but whose successive approximations made the precise determination of some parameters impossible. The second part proposes an alternative method for estimating the Hodgkin-Huxley model parameters, based on the differential evolution algorithm, which overcomes the limitations of the classical method and allows all parameters of a given ionic channel to be estimated jointly. The third chapter is divided into three sections: in the first two, the new technique is applied to parameter estimation from biological data and to the development of an automated protocol for tuning neuromimetic circuits, ion channel by ion channel; the third presents a method for estimating parameters from recordings of the neuron's membrane voltage, data which are easier to acquire than ionic currents. The fourth and final chapter opens toward the use of small networks of around one hundred electronic neurons: a software study of the influence of intrinsic cellular properties on the global behaviour of the network in the context of gamma oscillations.
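The differential evolution algorithm at the heart of the proposed joint estimation follows a mutate/crossover/select loop; the sketch below uses standard DE/rand/1/bin settings with a placeholder cost standing in for the voltage-clamp fitting error, and the parameter names and bounds are purely illustrative.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=30, F=0.7, CR=0.9, n_gen=200, seed=2):
    """DE/rand/1/bin: for each target vector, build a mutant from three random population
    members, binomially cross it with the target, and keep whichever has the lower cost."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    costs = np.array([cost(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True            # ensure at least one gene from the mutant
            trial = np.where(mask, mutant, pop[i])
            if (tc := cost(trial)) < costs[i]:
                pop[i], costs[i] = trial, tc
    return pop[np.argmin(costs)], costs.min()

# Placeholder cost standing in for the error between recorded and simulated voltage-clamp
# currents of one ionic channel; the "true" parameters are assumed for illustration only.
true_params = np.array([120.0, 50.0, -30.0, 8.0])     # e.g. g_max, E_rev, V_half, slope
cost = lambda p: float(np.sum((p - true_params) ** 2))
best, err = differential_evolution(cost, [(0, 200), (0, 100), (-80, 0), (1, 20)])
```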
137. Silicon neural networks: implementation of cortical cells to improve the artificial-biological hybrid technique. Grassia, Filippo Giovanni, 7 January 2013.
This work was supported by the European FACETS-ITN project. Within the framework of this project, we contribute to the simulation of cortical cell types (employing experimental electrophysiological data of these cells as references), using a specific VLSI neural circuit to simulate, at the single-cell level, the models studied as references in the FACETS project. The real-time intrinsic properties of the neuromorphic circuits, which precisely compute conductance-based neuron models, allow a systematic and detailed exploration of the different cell types, while the physical and analog nature of the simulations, as opposed to software simulation, provides input for the development of a real-time hardware simulator at the network level. The second goal of this thesis is therefore to contribute to the design of a mixed hardware-software platform (PAX), specifically designed to simulate spiking neural networks. The tasks performed during this thesis project included: 1) the methods used to obtain appropriate parameter sets of the cortical neuron models that can be implemented in our analog neuromimetic chip (the parameter-extraction step was validated by a bifurcation analysis showing that the simplified HH model implemented in our silicon neuron shares the dynamics of the HH model); 2) a fully customizable fitting method, in voltage-clamp mode, to tune our neuromimetic integrated circuits using a metaheuristic algorithm; 3) a contribution to the development of the PAX system in terms of software tools and a VHDL driver interface for neuron configuration on the platform. Finally, the thesis also addresses the issue of synaptic tuning for future SNN simulation.
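A voltage-clamp fitting cost of the general kind used for such tuning (a sum of squared differences between recorded and simulated ionic currents across clamp steps) might look like the following; the simplified single-gate channel model and every parameter name here are assumptions made for illustration, not the circuit's actual equations.

```python
import numpy as np

def channel_current(V, t, g_max, E_rev, tau_m, m_inf, p=3):
    """Current of a simplified HH-style activation gate under a voltage step:
    the gate variable relaxes from 0 toward m_inf with time constant tau_m."""
    m = m_inf * (1.0 - np.exp(-t / tau_m))
    return g_max * m**p * (V - E_rev)

def voltage_clamp_cost(params, recordings):
    """Sum of squared differences between recorded and simulated clamp currents.

    `recordings` maps each clamp voltage to (t, I_measured, m_inf) arrays; m_inf is
    supplied per voltage for brevity, whereas a full fit would parameterise it too.
    """
    g_max, E_rev, tau_m = params
    err = 0.0
    for V, (t, I_meas, m_inf) in recordings.items():
        err += np.sum((channel_current(V, t, g_max, E_rev, tau_m, m_inf) - I_meas) ** 2)
    return err

# Fabricated example: one clamp step at -20 mV with a synthetic "measured" current
t = np.linspace(0, 0.02, 200)
I_meas = channel_current(-20.0, t, g_max=120.0, E_rev=50.0, tau_m=0.002, m_inf=0.8)
cost = voltage_clamp_cost((100.0, 45.0, 0.003), {-20.0: (t, I_meas, 0.8)})
```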
138. Evolution of spiking neural networks for temporal pattern recognition and animat control. Abdelmotaleb, Ahmed Mostafa Othman, January 2016.
I extended an artificial life platform called GReaNs (the name stands for Gene Regulatory evolving artificial Networks) to explore the evolutionary abilities of a biologically inspired Spiking Neural Network (SNN) model. The encoding of SNNs in GReaNs was inspired by the encoding of gene regulatory networks. As a proof of principle, I used GReaNs to evolve SNNs to obtain a network with an output neuron that generates a predefined spike train in response to a specific input. Temporal pattern recognition was one of the main tasks during my studies: it is widely believed that the nervous systems of biological organisms use temporal patterns of inputs to encode information, yet the learning technique underlying temporal pattern recognition is still unclear. I studied the ability to evolve spiking networks with different numbers of interneurons, in the absence and presence of noise, to recognize predefined temporal patterns of inputs. The results showed that, in the presence of noise, it was possible to evolve successful networks; however, networks with only one interneuron were not robust to noise. The foraging behaviour of many small animals depends mainly on their olfactory system. I explored whether it was possible to evolve SNNs able to control an agent to find food particles on 2-dimensional maps. Using firing rate encoding for the sensory information in the olfactory input neurons, I obtained SNNs able to control an agent that could detect the position of the food particles and move toward them. Furthermore, I made unsuccessful attempts to use GReaNs to evolve an SNN able to control an agent that collects sound sources of one type out of several, where each sound type is represented as a pattern of different frequencies. In order to use the computational power of neuromorphic hardware, I integrated GReaNs with the SpiNNaker hardware system; only the simulation part was carried out on SpiNNaker, while the remaining steps of the genetic algorithm were performed in GReaNs.
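The firing-rate encoding of the olfactory input (the closer the agent is to a food particle, the higher the input neuron's spike rate) can be sketched as a Poisson spike generator; the exponential distance-to-intensity mapping and all constants below are illustrative assumptions, not the platform's actual encoding.

```python
import numpy as np

def scent_intensity(agent_pos, food_pos, decay=2.0):
    """Scent falls off with distance from the food particle (assumed exponential decay)."""
    return np.exp(-decay * np.linalg.norm(np.asarray(agent_pos) - np.asarray(food_pos)))

def rate_encode(intensity, max_rate=100.0, duration=0.5, dt=0.001, rng=None):
    """Poisson spike train whose rate is proportional to the sensed intensity."""
    rng = rng or np.random.default_rng()
    n_steps = int(duration / dt)
    rate = max_rate * intensity
    return np.nonzero(rng.random(n_steps) < rate * dt)[0] * dt   # spike times in seconds

spikes = rate_encode(scent_intensity((0.0, 0.0), (1.0, 0.5)))
```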
139. Reinforcement learning with time perception. Liu, Chong, January 2012.
Classical value-estimation reinforcement learning algorithms do not perform very well in dynamic environments. Animal reinforcement learning, on the other hand, is quite flexible: animals can adapt to dynamic environments very quickly and deal with noisy inputs very effectively. One feature that may contribute to animals' good performance in dynamic environments is that they learn and perceive the time to reward. In this research, we attempt to learn and perceive the time to reward and explore situations where the learned time information can be used to improve the performance of the learning agent in dynamic environments. The type of dynamic environment we are interested in is the switching environment, which stays the same for a long time, changes abruptly, and then holds for a long time before the next change. The type of dynamics we mainly focus on is the time to reward, though we also extend the ideas to learning and perceiving other criteria of optimality, e.g. the discounted return, so that they still work even when the amount of reward may also change. Specifically, both the mean and variance of the time to reward are learned and then used to detect changes in the environment and to decide whether the agent should give up a suboptimal action. When a change in the environment is detected, the learning agent responds specifically to the change in order to recover from it quickly. When the current action is found to be still worse than the optimal one, the agent gives up its current exploration of that action and remakes its decision, avoiding longer-than-necessary exploration. The results of our experiments on two real-world problems show that these mechanisms effectively speed up learning, reduce the time taken to recover from environmental changes, and improve the performance of the agent after learning converges in most of the test cases, compared with classical value-estimation reinforcement learning algorithms. In addition, we have successfully used spiking neurons to implement various phenomena of classical conditioning, the simplest form of animal reinforcement learning in dynamic environments, and have pointed out a possible implementation of instrumental conditioning and general reinforcement learning using similar models.
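One way to realise the described use of the learned time-to-reward statistics is to keep an incremental estimate of their mean and variance per action and flag a change (or give up the action) when an observed delay falls far outside the learned range; the sketch below is an assumed illustration of that mechanism, not the thesis's algorithm.

```python
import math

class TimeToRewardEstimator:
    """Exponentially weighted estimates of the mean and variance of the delay to reward."""
    def __init__(self, alpha=0.1):
        self.alpha, self.mean, self.var = alpha, None, 0.0

    def update(self, delay):
        if self.mean is None:
            self.mean = delay
            return
        err = delay - self.mean
        self.mean += self.alpha * err
        self.var = (1 - self.alpha) * (self.var + self.alpha * err * err)

    def is_surprising(self, delay, k=3.0):
        """True if the delay lies more than k standard deviations from the learned mean,
        taken as evidence that the environment has changed or the action is suboptimal."""
        if self.mean is None:
            return False
        return abs(delay - self.mean) > k * math.sqrt(self.var + 1e-9)

est = TimeToRewardEstimator()
for d in (10.1, 9.8, 10.3, 10.0):
    est.update(d)
print(est.is_surprising(25.0))   # a much longer delay triggers a change/give-up signal
```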
140. Managing a real-time massively-parallel neural architecture. Patterson, James Cameron, January 2012.
A human brain has billions of processing elements operating simultaneously; the only practical way to model this computationally is with a massively-parallel computer. A computer on such a significant scale requires hundreds of thousands of interconnected processing elements, a complex environment which requires many levels of monitoring, management and control. Management begins from the moment power is applied and continues whilst the application software loads, executes, and the results are downloaded. This is the story of the research and development of a framework of scalable management tools that support SpiNNaker, a novel computing architecture designed to model spiking neural networks of biologically significant sizes. This management framework provides solutions from the most fundamental set of power-on self-tests, through to complex, real-time monitoring of the health of the hardware and the software during simulation. The framework devised uses standard tools where appropriate, covering hardware up/down events and capacity information, through to bespoke software developed to provide real-time insight into neural network software operation across multiple levels of abstraction. With this layered management approach, users (or automated agents) have access to results dynamically and are able to make informed decisions on required actions in real time.