1

Negative feedback as an organising principle for artificial neural networks

Fyfe, Colin January 1995
We investigate the properties of an unsupervised neural network which uses simple Hebbian learning and negative feedback of activation in order to self-organise. The negative feedback circumvents the well-known difficulty of positive feedback in Hebbian learning systems which causes the networks' weights to increase without bound. We show, both analytically and experimentally, that not only do the weights of networks with this architecture converge, they do so to values which give the networks important information processing properties: linear versions of the model are shown to perform a Principal Component Analysis of the input data while a non-linear version is shown to be capable of Exploratory Projection Pursuit. While there is no claim that the networks described herein represent the complexity found in biological networks, we believe that the networks investigated are not incompatible with known neurobiology. However, the main thrust of the thesis is a mathematical analysis of the emergent properties of the network; such analysis is backed by empirical evidence at all times.
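The abstract's central claim — that negative feedback of activation bounds Hebbian weight growth and yields Principal Component Analysis — can be illustrated with a minimal single-neuron sketch (not from the thesis; the data, learning rate, and dimensions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D inputs with most variance along the first axis.
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])

w = rng.normal(size=2) * 0.1   # feedforward weights of one output neuron
eta = 0.001                    # learning rate

for x in X:
    y = w @ x          # feedforward activation
    e = x - y * w      # negative feedback: subtract the neuron's "explanation"
    w += eta * y * e   # plain Hebbian update on the residual

# The feedback term bounds the weights (their norm settles near 1) and
# aligns them with the first principal component of the data.
print(round(np.linalg.norm(w), 2), round(abs(w[0]), 2))
```

Without the feedback term `e`, the update `eta * y * x` would grow `w` without bound — exactly the instability in Hebbian learning that the abstract describes.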
2

Distributed Hebbian inference of environment structure in self-organized sensor networks

Shah, Payal D. 03 July 2007
No description available.
3

Cell assemblies for query expansion

Volpe, Isabel Cristina January 2011
One of the main tasks in Information Retrieval is to match a user query to the documents that are relevant to it. This matching is challenging because, in many cases, the keywords the user chooses differ from the words the authors of the relevant documents have used. Throughout the years, many approaches have been proposed to deal with this problem. One of the most popular consists in expanding the query with related terms, with the goal of retrieving more relevant documents. In this work, we propose a new method in which a Cell Assembly model is applied for query expansion. Cell Assemblies are reverberating circuits of neurons whose activity can persist long after the initial stimulus has ceased. They learn through Hebbian learning rules and have been used to simulate the formation and usage of human concepts. We adapted the Cell Assembly model to learn relationships between the terms in a document collection. These relationships are then used to augment the original queries. Our experiments use standard Information Retrieval test collections and show that some queries significantly improved their results with the proposed technique.
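The Cell Assembly model itself involves simulated spiking neurons, but the end-to-end idea — learn Hebbian associations between terms that are active together, then use those associations to expand queries — can be sketched with a plain co-occurrence stand-in (hypothetical toy corpus; this simplification is ours, not the dissertation's model):

```python
from collections import defaultdict

# Hypothetical toy document collection.
docs = [
    "neural network learning",
    "neural network weights",
    "query expansion retrieval",
    "query terms retrieval",
]

# Hebbian-style association: strengthen the link between every ordered
# pair of terms that "fire together" (co-occur) in the same document.
assoc = defaultdict(float)
for doc in docs:
    terms = doc.split()
    for a in terms:
        for b in terms:
            if a != b:
                assoc[(a, b)] += 1.0

def expand(query, k=2):
    """Append the k terms most strongly associated with the query terms."""
    scores = defaultdict(float)
    for q in query:
        for (a, b), w in assoc.items():
            if a == q and b not in query:
                scores[b] += w
    extra = sorted(scores, key=scores.get, reverse=True)[:k]
    return list(query) + extra

print(expand(["query"]))  # ['query', 'retrieval', 'expansion']
```

Here "retrieval" is added first because it co-occurs with "query" in two documents, "expansion" in only one — the Hebbian intuition that stronger co-activation yields stronger association.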
6

Understanding language and attention : brain-based model and neurophysiological experiments

Garagnani, Max January 2009
This work concerns the investigation of the neuronal mechanisms at the basis of language acquisition and processing, and the complex interactions of language and attention processes in the human brain. In particular, this research was motivated by two sets of existing neurophysiological data which cannot be reconciled on the basis of current psycholinguistic accounts: on the one hand, the N400, a robust index of lexico-semantic processing which emerges at around 400ms after stimulus onset in attention demanding tasks and is larger for senseless materials (meaningless pseudowords) than for matched meaningful stimuli (words); on the other, the more recent results on the Mismatch Negativity (MMN, latency 100-250ms), an early automatic brain response elicited under distraction which is larger to words than to pseudowords. We asked what the mechanisms underlying these differential neurophysiological responses may be, and whether attention and language processes could interact so as to produce the observed brain responses, having opposite magnitude and different latencies. We also asked questions about the functional nature and anatomical characteristics of the cortical representation of linguistic elements. These questions were addressed by combining neurocomputational techniques and neuroimaging (magneto-encephalography, MEG) experimental methods. Firstly, a neurobiologically realistic neural-network model composed of neuron-like elements (graded response units) was implemented, which closely replicates the neuroanatomical and connectivity features of the main areas of the left perisylvian cortex involved in spoken language processing (i.e., the areas controlling speech output – left inferior-prefrontal cortex, including Broca’s area – and the main sensory input – auditory – areas, located in the left superior-temporal lobe, including Wernicke’s area). 
Secondly, the model was used to simulate early word acquisition processes by means of a Hebbian correlation learning rule (which reflects known synaptic plasticity mechanisms of the neocortex). The network was “taught” to associate pairs of auditory and articulatory activation patterns, simulating activity due to perception and production of the same speech sound: as a result, neuronal word representations distributed over the different cortical areas of the model emerged. Thirdly, the network was stimulated, in its “auditory cortex”, with either one of the words it had learned, or new, unfamiliar pseudoword patterns, while the availability of attentional resources was modulated by changing the level of non-specific, global cortical inhibition. In this way, the model was able to replicate both the MMN and N400 brain responses by means of a single set of neuroscientifically grounded principles, providing the first mechanistic account, at the cortical-circuit level, for these data. Finally, in order to verify the neurophysiological validity of the model, its crucial predictions were tested in a novel MEG experiment investigating how attention processes modulate event-related brain responses to speech stimuli. Neurophysiological responses to the same words and pseudowords were recorded while the same subjects were asked to attend to the spoken input or ignore it. The experimental results confirmed the model’s predictions; in particular, profound variability of magnetic brain responses to pseudowords but relative stability of activation to words as a function of attention emerged. While the results of the simulations demonstrated that distributed cortical representations for words can spontaneously emerge in the cortex as a result of neuroanatomical structure and synaptic plasticity, the experimental results confirm the validity of the model and provide evidence in support of the existence of such memory circuits in the brain. 
This work is a first step towards a mechanistic account of cognition in which the basic atoms of cognitive processing (e.g., words, objects, faces) are represented in the brain as discrete and distributed action-perception networks that behave as closed, independent systems.
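The word-acquisition step above — Hebbian correlation learning binding an auditory activation pattern to an articulatory one — reduces, in its simplest linear form, to hetero-associative outer-product learning. The sketch below (our illustration with deterministic toy patterns, not the graded-response network of the thesis) shows how stimulating the "auditory" side reactivates the paired "articulatory" pattern:

```python
import numpy as np

# Three toy "words": each activates a distinct block of 4 units in an
# auditory area and in an articulatory area (shape: 3 words x 12 units).
aud = np.kron(np.eye(3), np.ones(4))
art = np.kron(np.eye(3)[::-1], np.ones(4))

# Hebbian correlation learning: connections grow between units that are
# co-active when a word is perceived and produced at the same time.
W = np.zeros((12, 12))
for a, m in zip(aud, art):
    W += np.outer(m, a)

# Stimulating the "auditory cortex" with a learned word reactivates the
# articulatory pattern it was paired with during learning.
out = W @ aud[1]
best = int(np.argmax([out @ m for m in art]))
print(best)  # 1: word 1's articulatory partner wins
```

The resulting distributed auditory-articulatory circuit is a (drastically simplified) analogue of the word representations that emerge across the model's perisylvian areas.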
7

Improving Liquid State Machines Through Iterative Refinement of the Reservoir

Norton, R David 18 March 2008
Liquid State Machines (LSMs) exploit the power of recurrent spiking neural networks (SNNs) without training the SNN. Instead, a reservoir, or liquid, is randomly created which acts as a filter for a readout function. We develop three methods for iteratively refining a randomly generated liquid to create a more effective one. First, we apply Hebbian learning to LSMs by building the liquid with spike-timing-dependent plasticity (STDP) synapses. Second, we create an eligibility-based reinforcement learning algorithm for synaptic development. Third, we apply principles of Hebbian learning and reinforcement learning to create a new algorithm called separation-driven synaptic modification (SDSM). These three methods are compared across four artificial pattern-recognition problems, generating only fifty liquids for each problem. Each of these algorithms shows overall improvements to LSMs, with SDSM demonstrating the greatest improvement. SDSM is also shown to generalize well and outperforms traditional LSMs when presented with speech data obtained from the TIMIT dataset.
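Building the liquid with STDP synapses relies on the standard pair-based plasticity window: a synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise. A minimal sketch (parameter values are typical illustrative choices, not the thesis's settings):

```python
import math

def stdp(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP window. delta_t = t_post - t_pre in ms:
    pre-before-post (delta_t >= 0) potentiates, the reverse depresses."""
    if delta_t >= 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

# Net change for one synapse, summed over all spike pairings.
pre_spikes = [10.0, 50.0]   # ms
post_spikes = [12.0, 52.0]  # post consistently fires just after pre
dw = sum(stdp(tp - tq) for tq in pre_spikes for tp in post_spikes)
print(dw > 0)  # True: causal pre-then-post timing strengthens the synapse
```

Applied throughout the reservoir, this rule lets the liquid's synapses adapt to the input statistics even though the readout remains the only component trained on the task.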
8

Storing information through complex dynamics in recurrent neural networks

Molter, Colin C 20 May 2005
The neural-network computer simulations presented here are based on a set of assumptions that have been expressed over the last twenty years in the fields of information processing, neurophysiology and cognitive science. First, neural networks and their dynamical behavior in terms of attractors are the natural way adopted by the brain to encode information. Any item to be stored in the neural net should be coded in one of the dynamical attractors of the brain and retrieved by stimulating the net so as to trap its dynamics in the desired item's basin of attraction. The second view shared by neural-net researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The last assumption is the presence of chaos and the benefit gained from it. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network spontaneously wanders around these unstable regimes, thus rapidly proposing alternative responses to external stimuli and easily switching from one of these potential attractors to another in response to any incoming stimulus. In this thesis, it is shown experimentally that the more information is stored in robust cyclic attractors, the more chaos appears as a background regime, erratically itinerating among brief appearances of these attractors. Chaos appears to be not the cause but the consequence of the learning; however, it is a helpful consequence that widens the net's encoding capacity. To learn the information to be stored, an unsupervised Hebbian learning algorithm is introduced. By leaving unprescribed the semantics of the attractors to be associated with the input data, promising results have been obtained in terms of storage capacity.
9

Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits

Tully, Philip January 2017
Cortical and subcortical microcircuits are continuously modified throughout life. Despite ongoing changes, these networks stubbornly maintain their functions, which persist although destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them towards runaway excitation or quiescence. What dynamical phenomena act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory operations to occur despite such massive and constant neural reorganization? Progress towards answering many of these questions can be pursued through large-scale neuronal simulations. In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents. The model is motivated by molecular cascades whose functional outcomes are mapped onto biological mechanisms such as Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability. Temporally interacting memory traces enable spike-timing dependence, a stable learning regime that remains competitive, postsynaptic activity regulation, spike-based reinforcement learning and intrinsic graded persistent firing levels. The thesis seeks to demonstrate how multiple interacting plasticity mechanisms can coordinate reinforcement, auto- and hetero-associative learning within large-scale, spiking, plastic neuronal networks. Spiking neural networks can represent information in the form of probability distributions, and a biophysical realization of Bayesian computation can help reconcile disparate experimental observations.
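In its non-spiking form, the BCPNN weight between two units is the log ratio of their observed co-activation probability to the product of their marginals, with an intrinsic bias set by the postsynaptic unit's own activation probability. A sketch of that batch estimate (synthetic activity; the spike-based rule in the thesis replaces these counts with exponentially decaying traces):

```python
import numpy as np

# Synthetic binary activity of a pre- and postsynaptic unit over trials:
# the post unit is much more likely to fire when the pre unit fires.
rng = np.random.default_rng(1)
pre = rng.random(10000) < 0.3
post = np.where(pre, rng.random(10000) < 0.8, rng.random(10000) < 0.2)

eps = 1e-6  # regularizer: keeps the logs finite for never-active units
p_i = pre.mean() + eps
p_j = post.mean() + eps
p_ij = (pre & post).mean() + eps

w_ij = np.log(p_ij / (p_i * p_j))  # weight: log odds vs. independence
beta_j = np.log(p_j)               # intrinsic bias of the post unit

print(w_ij > 0)  # True: the units are positively correlated
```

A positive weight marks units that co-activate more often than chance, a negative weight units that are anti-correlated — which is how the rule ties synaptic strength to statistical inference.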
10

Unsupervised neural networks for temporal sequence processing

Barreto, Guilherme de Alencar 31 August 1998
In many application domains, the variable time is an essential dimension. This is the case in Robotics, where robot trajectories can be interpreted as temporal sequences in which the order of occurrence of each component needs to be considered. In this dissertation, an unsupervised neural network model is developed for learning and reproducing trajectories of a PUMA 560 robot. These trajectories can have states in common, making the reproduction process susceptible to ambiguities. The proposed model consists of a competitive network with two groups of synaptic connections: interlayer and intralayer ones. The interlayer weights connect units in the input layer with neurons in the output layer, and they encode the spatial information contained in the current input stimulus. The intralayer weights connect the neurons of the output layer to each other and are divided into two groups: self-connections and lateral connections. The function of these links is to encode the temporal order of the trajectory states, establishing associations among consecutive states through a Hebbian rule. Three additional mechanisms are proposed in order to make trajectory learning and reproduction more reliable: context units, exclusion of neurons, and redundancy in the representation of the states. The model outputs the current state and the next state of the trajectory. Simulations with the proposed model illustrate its ability to learn and reproduce multiple trajectories accurately and without ambiguities. In addition, the model is able to reproduce trajectories even when neuron failures occur, and it generalizes well in the presence of noise in the input stimulus.
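The role of the intralayer weights — an asymmetric Hebbian rule linking each trajectory state to its successor — can be sketched for a single unambiguous trajectory (our simplification; the dissertation's model adds context units, neuron exclusion, and redundant state coding precisely to handle states shared between trajectories):

```python
import numpy as np

# A toy trajectory visiting 4 distinct states, one neuron per state.
seq = [0, 1, 2, 3]
n = 4

# Asymmetric Hebbian learning on the lateral (intralayer) weights:
# strengthen the connection from each state to its successor, so the
# weight matrix encodes the temporal order of the trajectory.
W = np.zeros((n, n))
for prev, cur in zip(seq, seq[1:]):
    W[cur, prev] += 1.0

def reproduce(start, steps):
    """Replay a trajectory by repeatedly following the strongest successor."""
    state, out = start, [start]
    for _ in range(steps):
        state = int(np.argmax(W[:, state]))
        out.append(state)
    return out

print(reproduce(0, 3))  # [0, 1, 2, 3]
```

Because the learned links are directed (state to successor, not the reverse), replay recovers the original temporal order rather than merely the set of visited states.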
