  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Hebbian mechanisms and temporal contiguity for unsupervised task-set learning / Mécanismes Hebbiens et contiguïté temporelle pour l'apprentissage de task-set non-supervisé

Bouchacourt, Flora 07 November 2016 (has links)
Depending on environmental demands, humans performing a given task are able to exploit multiple concurrent strategies, for which the mental representations are called task-sets. We examine a candidate model for a specific human experiment in which several stimulus-response mappings, or task-sets, need to be learned and monitored. The model is composed of two interacting networks of mixed-selectivity neural populations. The decision network learns stimulus-response associations one by one, but cannot hold more than one task-set at a time. Its activity drives synaptic plasticity in a second network that learns event statistics on a longer timescale. When patterns in the stimulus-response associations are detected, an inference bias to the decision network guides subsequent behavior. We show that a simple unsupervised Hebbian mechanism in the second network is sufficient to learn an implementation of task-sets. Their retrieval in the decision network improves performance. The model predicts abrupt changes in behavior depending on the precise statistics of previous responses, corresponding to positive (task-set retrieval) or negative effects on performance. The predictions are borne out by the data and make it possible to identify the subjects who learned the task structure. The inference signal correlates with BOLD activity in the fronto-parietal network. Within this network, the dorsomedial and dorsolateral prefrontal nodes are preferentially recruited when task-sets are recurrent: activity in these regions may bias decision circuits when a task-set is retrieved. These results show that Hebbian mechanisms and temporal contiguity may parsimoniously explain the learning of rule-guided behavior.
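As an illustration of how temporal contiguity alone can pick up recurring event structure, here is a minimal unsupervised Hebbian sketch: connections between temporally contiguous events are potentiated while all associations slowly decay, so recurring stimulus-response patterns accumulate in the matrix. The function name, parameter values, and integer event encoding are illustrative assumptions, not the thesis's actual network model.

```python
import numpy as np

def hebbian_task_set_learning(events, n_states, lr=0.1, decay=0.02):
    """Sketch: strengthen associations between temporally contiguous
    events; entries of J come to estimate recurring event transitions."""
    J = np.zeros((n_states, n_states))
    prev = None
    for e in events:
        if prev is not None:
            J[prev, e] += lr * (1.0 - J[prev, e])  # potentiate the contiguous pair
        J *= (1.0 - decay)                         # slow forgetting of unused links
        prev = e
    return J
```

On a stream that alternates between two events, the two transition entries grow toward a fixed point set by the learning-rate/decay balance, while never-seen transitions stay at zero.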
12

Adaptation through a Stochastic Evolutionary Neuron Migration Process

Haverinen, J. (Janne) 23 March 2004 (has links)
Artificial Life is an interdisciplinary scientific and engineering enterprise investigating the fundamental properties of living systems through the simulation and synthesis of life-like processes in artificial media. One avenue of investigation is autonomous robots and agents. Mimicking the growth and adaptation of a biological neural circuit in an artificial medium is a challenging task owing to our limited knowledge of the complex processes taking place in living organisms. By combining several developmental mechanisms, including chemical, mechanical, genetic, and electrical ones, researchers have succeeded in developing networks with interesting topology, morphology, and function within Artificial Computational Chemistry. However, most of these approaches still fail to create neural circuits able to solve real problems in perception and robot control. In this thesis a phenomenological developmental model called a Stochastic Evolutionary Neuron Migration Process (SENMP) is proposed. It employs a spatial encoding scheme with lateral interaction of neurons for artificial neural networks that represent candidate solutions within a neural network ensemble; as the neurons of the ensemble migrate under selective pressure, they form problem-specific spatial patterns with the desired dynamics. The approach is applied to gain new insights into development, adaptation, and plasticity in neural networks and to evolve purposeful behaviors for mobile robots. In addition, the approach is used to study the relationship between spatial patterns composed of interacting entities and their dynamics. The feasibility and advantages of the approach are demonstrated by evolving neural controllers that solve a non-Markovian double pole balancing problem and controllers that exhibit navigation behavior for simulated and real mobile robots in complex environments.
Preliminary results regarding the behavior of the adapting neural network ensemble are also shown, in particular a phenomenon exhibiting Hebbian-like dynamics. This thesis is a step toward a long-range goal: creating an intelligent robot that is capable of learning complex skills and adapting rapidly to environmental changes.
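The SENMP itself is not specified in this abstract, but the idea of neuron coordinates migrating under selective pressure can be sketched as a simple (1+1)-style stochastic search: perturb the spatial pattern and keep the move only if fitness improves. The fitness function here is a stand-in for task performance, and every name and constant is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(positions, target):
    # Hypothetical objective: how closely the spatial pattern of neurons
    # matches a target configuration (a stand-in for task performance).
    return -np.sum((positions - target) ** 2)

def migrate(positions, target, steps=200, sigma=0.1):
    """Sketch of stochastic evolutionary neuron migration: perturb the
    2-D neuron coordinates and keep the move only if it survives selection."""
    best = fitness(positions, target)
    for _ in range(steps):
        candidate = positions + rng.normal(0.0, sigma, positions.shape)
        f = fitness(candidate, target)
        if f > best:                      # selective pressure: keep improvements
            positions, best = candidate, f
    return positions, best
```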
13

Improving Liquid State Machines Through Iterative Refinement of the Reservoir

Norton, R David 18 March 2008 (has links) (PDF)
Liquid State Machines (LSMs) exploit the power of recurrent spiking neural networks (SNNs) without training the SNN. Instead, a reservoir, or liquid, is created randomly and acts as a filter for a readout function. We develop three methods for iteratively refining a randomly generated liquid into a more effective one. First, we apply Hebbian learning to LSMs by building the liquid with spike-timing-dependent plasticity (STDP) synapses. Second, we create an eligibility-based reinforcement learning algorithm for synaptic development. Third, we combine principles of Hebbian learning and reinforcement learning in a new algorithm called separation-driven synaptic modification (SDSM). These three methods are compared across four artificial pattern recognition problems, generating only fifty liquids for each problem. Each of these algorithms improves LSMs overall, with SDSM demonstrating the greatest improvement. SDSM is also shown to generalize well and outperforms traditional LSMs when presented with speech data obtained from the TIMIT dataset.
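Pair-based STDP, as used here to build the liquid, modifies a synapse according to the time difference between pre- and postsynaptic spikes: pre-before-post potentiates, post-before-pre depresses, with exponentially decaying magnitude. A minimal sketch with illustrative parameter values (not the thesis's exact constants):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # LTP branch
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # LTD branch
    return 0.0
```

Spike pairs far apart in time produce negligible change, so only temporally correlated activity reshapes the liquid.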
14

Functions of the cerebral cortex and cholinergic systems in synaptic plasticity induced by sensory preconditioning

Maalouf, Marwan 04 1900 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal. / This thesis provides evidence to support the hypothesis that synaptic plasticity in the primary somatosensory cortex is a cellular correlate of associative learning, that the process depends upon acetylcholine, and that only certain cortical neurons display this plasticity. In a first series of experiments, single-unit recordings were carried out in the barrel cortex of awake, adult rats subjected to whisker pairing, an associative learning paradigm in which deflections of the recorded neuron's principal vibrissa were repeatedly paired with those of a non-adjacent one. On average, this form of sensory preconditioning increased the responses of a recorded unit to the stimulation of the non-adjacent vibrissa. In contrast, following explicitly unpaired control experiments, neuronal responsiveness decreased. The effect of pairing was further enhanced by local, microiontophoretic delivery of NMDA and the nitric oxide synthase inhibitor L-NAME, and reduced by the competitive NMDA receptor antagonist AP5. These results, and the fact that the influence of the pharmacological agents on neuronal excitability was either transient (limited to the delivery period) or simply absent, indicate that the somatosensory cortex is one site where plasticity emerges following whisker pairing. In subsequent experiments, using a similar conditioning paradigm that relied on evoked-potential rather than single-unit recordings, increases in the responses of cortical neurons to the non-adjacent whisker were blocked by atropine sulfate, an antagonist of muscarinic cholinoreceptors. Administration of normal saline, or of atropine methyl nitrate, a muscarinic antagonist that does not cross the blood-brain barrier, instead of atropine sulfate did not affect plasticity.
Analysis of the behavioral state of the animal showed that the changes observed in the evoked potential could not be attributed to fluctuations in that state. By combining the results described in this thesis with data found in the related literature, the author hypothesizes that whisker pairing induces an acetylcholine-dependent form of plasticity within the somatosensory cortex through Hebbian mechanisms.
15

Storing information through complex dynamics in recurrent neural networks

Molter, Colin C 20 May 2005 (has links)
The neural network computer simulations presented here are based on a set of assumptions that have been expressed over the last twenty years in information processing, neurophysiology, and the cognitive sciences. First, neural networks and their dynamical behavior in terms of attractors are the natural way the brain encodes information: any item to be stored should be coded in one of the brain's dynamical attractors and retrieved by stimulating the network so as to trap its dynamics in the desired item's basin of attraction. The second view shared by neural network researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The last assumption is the presence of chaos and the benefit gained from it. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network spontaneously and randomly wanders around these unstable regimes, rapidly proposing alternative responses to external stimuli and easily switching from one of these potential attractors to another in response to an incoming stimulus. In this thesis, it is shown experimentally that the more information is stored in robust cyclic attractors, the more chaos appears as a background regime, erratically itinerating among brief appearances of these attractors. Chaos appears to be not the cause but the consequence of learning; it is, however, a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, an unsupervised Hebbian learning algorithm is introduced. By leaving unprescribed which attractor is associated with which input data, promising results have been obtained in terms of storage capacity.
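The storage of cyclic attractors with a Hebbian rule can be sketched with the classic asymmetric outer-product construction, in which each pattern is wired to recall its successor, and the last wraps back to the first. This is a generic textbook sketch, not the thesis's specific chaotic-network algorithm.

```python
import numpy as np

def store_cycle(patterns):
    """Hebbian storage of a cyclic sequence of +/-1 patterns: each state
    is wired to recall its successor (asymmetric outer-product rule)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for mu in range(len(patterns)):
        nxt = patterns[(mu + 1) % len(patterns)]   # wrap around: cyclic attractor
        W += np.outer(nxt, patterns[mu]) / n
    return W

def step(W, x):
    # One synchronous update of the recurrent network.
    return np.sign(W @ x)
```

With mutually orthogonal patterns the crosstalk terms vanish exactly and iterating `step` walks the network around the stored cycle.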
16

Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits

Tully, Philip January 2017 (has links)
Cortical and subcortical microcircuits are continuously modified throughout life. Despite ongoing changes these networks stubbornly maintain their functions, which persist although destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them towards runaway excitation or quiescence. What dynamical phenomena act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory operations to occur despite such massive and constant neural reorganization? Progress towards answering many of these questions can be pursued through large-scale neuronal simulations.

In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents. The model is motivated by molecular cascades whose functional outcomes are mapped onto biological mechanisms such as Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability. Temporally interacting memory traces enable spike-timing dependence, a stable learning regime that remains competitive, postsynaptic activity regulation, spike-based reinforcement learning, and intrinsic graded persistent firing levels.

The thesis seeks to demonstrate how multiple interacting plasticity mechanisms can coordinate reinforcement, auto- and hetero-associative learning within large-scale, spiking, plastic neuronal networks. Spiking neural networks can represent information in the form of probability distributions, and a biophysical realization of Bayesian computation can help reconcile disparate experimental observations.
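The probabilistic reading behind BCPNN can be sketched in its simplest, non-spiking batch form: a weight is the log odds that two units co-activate versus activating independently, and the bias is the log prior of a unit's activation. This is a simplified sketch of that reading, not the thesis's incremental spike-based rule; the epsilon regularizer and the activity-matrix format are illustrative assumptions.

```python
import numpy as np

def bcpnn_weights(activations, eps=1e-6):
    """Batch sketch of BCPNN-style weights from a (T, N) binary activity
    matrix: w_ij = log(P_ij / (P_i * P_j)), bias_j = log(P_j)."""
    p_i = activations.mean(axis=0) + eps                       # unit marginals
    p_ij = (activations.T @ activations) / len(activations) + eps  # pairwise co-activation
    w = np.log(p_ij / np.outer(p_i, p_i))                      # association weights
    bias = np.log(p_i)                                         # intrinsic excitability term
    return w, bias
```

Units that fire together more often than chance get positive weights; units that never co-activate get strongly negative ones, which is what lets the rule double as both Hebbian and anti-Hebbian.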
17

Mathematical Description of Differential Hebbian Plasticity and its Relation to Reinforcement Learning / Mathematische Beschreibung Hebb'scher Plastizität und deren Beziehung zu Bestärkendem Lernen

Kolodziejski, Christoph Markus 13 February 2009 (has links)
No description available.
18

Redes neurais não-supervisionadas para processamento de sequências temporais / Unsupervised neural networks for temporal sequence processing

Barreto, Guilherme de Alencar 31 August 1998 (has links)
In many application domains, the variable time is an essential dimension. This is the case in robotics, where robot trajectories can be interpreted as temporal sequences in which the order of occurrence of each component needs to be considered. In this dissertation, an unsupervised neural network model is developed for learning and reproducing trajectories of a PUMA 560 robot. These trajectories can have states in common, making the reproduction process susceptible to ambiguities. The proposed model consists of a competitive network with two groups of synaptic connections: interlayer and intralayer ones. The interlayer weights connect units in the input layer with neurons in the output layer and encode the spatial information contained in the current input stimulus. The intralayer weights connect the neurons of the output layer to each other and are divided into two groups: self-connections and lateral connections. The function of these links is to encode the temporal order of the trajectory states, establishing associations among consecutive states through a Hebbian rule. Three additional mechanisms are proposed to make trajectory learning and reproduction more reliable: context units, exclusion of neurons, and redundancy in the representation of the states. The network outputs the current state and the next state of the trajectory. Simulations illustrate the ability of the network to learn and reproduce multiple trajectories accurately and without ambiguities. In addition, the model is able to reproduce trajectories even when neuron failures occur and generalizes well in the presence of noise in the input stimulus.
19

Biologicky motivovaná autoasociativní neuronová síť s dynamickými synapsemi. / Activity and Memory in Biologically Motivated Neural Network.

Štroffek, Július January 2018 (has links)
This work presents a biologically motivated neural network model that works as an auto-associative memory. The architecture of the model is similar to that of the Hopfield network, which may resemble parts of the hippocampal area CA3 (Cornu Ammonis). The patterns learned and retrieved are not static but periodically repeating sequences of sparse synchronous activities. Patterns were stored in the network using a modified Hebb rule adjusted to store cyclic sequences. The capacity of the model is analyzed together with numerical simulations. The model is further extended with short-term potentiation (STP), which forms an essential part of the successful pattern recall process. The memory capacity of the extended version of the model is greatly increased. A joint version of the model combining both approaches is discussed: it may retrieve a pattern in a short time interval without STP (fast patterns) or over a longer period utilizing STP (slow patterns). We know from everyday life that some patterns can be recalled promptly while others take much longer to surface. Keywords: auto-associative neural network, Hebbian learning, neural coding, memory, pattern recognition, short-term potentiation
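The short-term potentiation that supports recall can be sketched with a generic facilitation variable that jumps at each presynaptic spike and relaxes back to baseline between spikes; the update rule and constants below are a standard textbook form in the spirit of Tsodyks-Markram facilitation, assumed for illustration rather than taken from the thesis.

```python
import math

def stp_facilitation(spike_times, u0=0.2, tau_f=200.0):
    """Sketch of short-term facilitation: each presynaptic spike
    transiently boosts synaptic efficacy u, which decays back toward
    the baseline u0 with time constant tau_f (ms) between spikes."""
    u, t_last, trace = u0, None, []
    for t in spike_times:
        if t_last is not None:
            u = u0 + (u - u0) * math.exp(-(t - t_last) / tau_f)  # relax toward baseline
        u = u + u0 * (1.0 - u)   # spike-triggered boost, bounded above by 1
        trace.append(u)
        t_last = t
    return trace
```

A rapid spike train accumulates facilitation, favoring the "slow pattern" recall pathway, while widely spaced spikes leave the synapse near baseline.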
