About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

A general hippocampal computational model combining episodic and spatial memory in a spiking model

Aguiar, Paulo de Castro. January 2006.
The hippocampus, in humans and rats, plays crucial roles in spatial tasks and in nonspatial tasks involving episodic-type memory. This thesis presents a novel computational model of the hippocampus (CA1, CA3 and dentate gyrus) which creates a framework in which spatial memory and episodic memory are explained together. The general model follows the approach in which the memory function of the rodent hippocampus is seen as a “memory space” rather than as a “spatial memory”. Its innovations centre on the fact that it respects detailed constraints of hippocampal architecture and uses spiking networks to represent all hippocampal subfields. The model does not require stable attractor states to produce a robust memory system capable of pattern separation and pattern completion.

In this theory, information is represented and processed in the form of activity patterns: instead of assuming firing-rate coding, the model assumes that information is coded in the activation of specific constellations of neurons. This coding mechanism, combined with the use of spiking neurons, raises many questions about how information is transferred, processed and stored in the different hippocampal subfields. The thesis explores which mechanisms are available in the hippocampus to achieve such control, and produces a detailed, biologically realistic model capable of explaining how several computational components work together to produce the emergent functional properties of the hippocampus. Precise explanations are given for why mossy fibres are important for storage but not recall, what functional role the mossy cells (excitatory interneurons) play in the dentate gyrus, why firing fields can be asymmetric with the firing peak closer to the end of the field, and which features are used to produce “place fields”, among other questions.

An important property of this model is that the memory system provided by CA3 is a palimpsest memory: after saturation, the number of patterns that can be recalled is independent of the number of patterns stored in the recurrent network. In parallel with the development of the computational model, a simulation environment was created, tailored to the needs and assumptions of the hippocampal model; it represents an important component of this thesis.
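The pattern separation and completion functions mentioned in this abstract can be illustrated with a toy autoassociator. The sketch below is not taken from the thesis: it replaces the spiking CA3 network with a binary recurrent network using a clipped Hebbian rule, and the network size, pattern sparsity and firing threshold `theta` are illustrative assumptions. It shows only how a degraded “constellation” of active neurons can be completed through recurrent connections.

```python
# A minimal sketch (not the thesis model): pattern completion in a
# CA3-like binary autoassociative network with a clipped Hebbian rule.
# All sizes and the threshold theta are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_patterns, n_active = 200, 10, 20

# Sparse binary "constellations" of active neurons, one per memory.
patterns = np.zeros((n_patterns, n_neurons))
for p in patterns:
    p[rng.choice(n_neurons, size=n_active, replace=False)] = 1.0

# Clipped Hebbian storage on the recurrent weights, no self-connections.
W = np.clip(patterns.T @ patterns, 0.0, 1.0)
np.fill_diagonal(W, 0.0)

# Degrade a stored pattern: silence half of its active neurons.
cue = patterns[0].copy()
cue[rng.choice(np.flatnonzero(cue), size=n_active // 2, replace=False)] = 0.0

# One recurrent step: neurons driven strongly enough by the cue fire.
theta = 5.0  # illustrative firing threshold
recalled = (W @ cue >= theta).astype(float)

overlap = recalled @ patterns[0] / n_active
print(f"overlap with the stored pattern after completion: {overlap:.2f}")
```

With these settings the crosstalk from the other stored patterns stays well below `theta`, so the half-cue recovers the full constellation; raising the number of stored patterns eventually breaks this, which is where the palimpsest property discussed in the abstract becomes relevant.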
2

The role of the balance between excitation and inhibition in learning in spiking networks (original title: Le rôle de la balance entre excitation et inhibition dans l'apprentissage dans les réseaux de neurones à spikes)

Bourdoukan, Ralph. 10 October 2016.
When performing a task, neural circuits must represent and manipulate continuous stimuli using discrete action potentials. It is commonly assumed that neurons represent continuous quantities with their firing rates, independently of one another. Such independent coding is inefficient, however, because it requires a very large number of action potentials to achieve a given level of accuracy. This thesis shows that neurons in a spiking recurrent network can learn, using a local plasticity rule, to coordinate their action potentials so as to represent information with high accuracy while firing minimally. The learning rule, which acts on the recurrent connections, produces this efficient code by imposing a precise balance between excitation and inhibition at the level of each neuron. This balance is a frequently observed phenomenon in the brain and is central to the theory. Two further biologically plausible learning rules are derived, which respectively allow the network to adapt to the statistics of its inputs and to perform complex, dynamic transformations on them. Finally, in these networks the stochasticity of spike timing is a signature not of noise but of precision and efficiency: the apparent randomness of spike times results from the degeneracy of the representation. This constitutes a new and radically different interpretation of the irregularity found in spike trains.
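The coding scheme behind this balance can be sketched in a few lines. The toy model below is written from the published predictive spike-coding framework this thesis builds on, not from the thesis itself: the learned recurrent connectivity is replaced by an explicit greedy spiking rule, the signal is one-dimensional, and all parameter values are illustrative assumptions. Each neuron's membrane potential is the projection of the readout error onto its decoding weight, and a spike is fired only when it reduces that error, so the network tracks the signal with few, coordinated spikes.

```python
# A toy spike-coding network (illustrative, not the thesis code): spikes
# greedily correct the readout error, so firing is sparse yet precise.
# Decoding weights gamma, thresholds gamma**2 / 2 and the greedy rule
# follow the predictive spike-coding framework; values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, T, dt, lam = 20, 2000, 1e-3, 10.0

gamma = 0.1 * rng.standard_normal(N)   # decoding weight of each neuron
thresh = gamma**2 / 2.0                # spike only if it reduces the error

t = np.arange(T) * dt
x = np.sin(2.0 * np.pi * 2.0 * t)      # 1-D signal to represent
x_hat = np.zeros(T)                    # decoded estimate
n_spikes = 0

for k in range(1, T):
    x_hat[k] = x_hat[k - 1] * (1.0 - lam * dt)   # leaky readout decay
    v = gamma * (x[k] - x_hat[k])                # membrane = projected error
    i = int(np.argmax(v - thresh))               # most "urgent" neuron
    if v[i] > thresh[i]:                         # firing shrinks the error
        x_hat[k] += gamma[i]                     # spike updates the readout
        n_spikes += 1

rmse = np.sqrt(np.mean((x - x_hat) ** 2))
print(f"RMS tracking error: {rmse:.3f} with {n_spikes} spikes")
```

The condition `v[i] > thresh[i]` is exactly the condition under which adding `gamma[i]` to the readout reduces the squared error, which is why spike times here look irregular yet encode the signal precisely, as the abstract argues.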
3

ANNarchy: a code generation approach to neural simulations on parallel hardware

Vitay, Julien; Dinkelbach, Helge Ülo; Hamker, Fred Henrik. 07 October 2015.
Many modern neural simulators focus on simulating networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is difficult or impossible to implement with most of these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows rate-coded networks, spiking networks, and combinations of both to be defined and simulated easily. The Python interface is designed to be close to the PyNN interface, while neuron and synapse models can be specified using an equation-oriented mathematical description similar to that of the Brian neural simulator. This information is used to generate C++ code that performs the simulation efficiently on the chosen parallel hardware (multi-core system or graphics processing unit). Several numerical methods are available for transforming the ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator with existing solutions.
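As an illustration of the equation-oriented interface the abstract describes, the sketch below is written in the style of ANNarchy's documented Python API: a leaky integrate-and-fire population with sparse recurrent excitation. It is an assumption-laden example rather than code from the paper, and the exact class and connector names should be checked against the ANNarchy version in use.

```python
# Illustrative only: a small spiking network in the equation-oriented
# style the abstract describes. Names (Neuron, Population, Projection,
# connect_fixed_probability, compile, simulate) follow ANNarchy's
# documented API but should be verified against the installed version.
from ANNarchy import Neuron, Population, Projection, compile, simulate

# A leaky integrate-and-fire neuron defined by its equations,
# much as one would write it for the Brian simulator.
LIF = Neuron(
    parameters="""
        tau = 10.0
        v_thresh = 1.0
    """,
    equations="""
        tau * dv/dt = -v + g_exc
    """,
    spike="v >= v_thresh",
    reset="v = 0.0",
)

pop = Population(geometry=1000, neuron=LIF)

# Sparse recurrent excitation onto the same population.
proj = Projection(pre=pop, post=pop, target="exc")
proj.connect_fixed_probability(weights=0.05, probability=0.1)

compile()        # generate and build the C++ simulation code
simulate(100.0)  # run for 100 ms on the compiled backend
```

The `compile()` step is where the code generation described in the abstract happens: the equation strings are turned into C++ for the chosen backend before `simulate()` runs.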