21

An Enhanced Learning for Restricted Hopfield Networks

Halabian, Faezeh 10 June 2021 (has links)
This research investigates a training method for the Restricted Hopfield Network (RHN), a subcategory of Hopfield networks. Hopfield networks are recurrent neural networks proposed in 1982 by John Hopfield. They are useful for applications such as pattern restoration, pattern completion/generalization, and pattern association. In this study, we propose an enhanced training method for the RHN which not only improves the convergence of the training sub-routine but is also shown to enhance the learning capability of the network. In particular, after describing the architecture and components of the model, we propose a modified variant of SPSA which, in conjunction with back-propagation through time, results in a training algorithm with enhanced convergence for the RHN. The trained network is also shown to achieve better memory recall in the presence of noisy/distorted input. We perform several experiments, using various datasets, to verify the convergence of the training sub-routine, evaluate the impact of different parameters of the model, and compare the performance of the trained RHN in recreating distorted input patterns with that of a conventional RBM, a Hopfield network, and other training methods.
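The abstract combines a modified SPSA variant with back-propagation through time. As background, here is a minimal sketch of the standard SPSA gradient step that such a variant would presumably build on; the loss function, step sizes, and parameter names are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def spsa_step(loss, theta, a=0.01, c=0.01, rng=np.random.default_rng(0)):
    """One step of standard SPSA: estimate the gradient of `loss` at `theta`
    from two perturbed evaluations, then take a small descent step.
    (Illustrative sketch only; the thesis uses a modified SPSA variant
    combined with back-propagation through time.)"""
    # Rademacher perturbation: each component is +1 or -1
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    # Two-sided loss evaluations
    loss_plus = loss(theta + c * delta)
    loss_minus = loss(theta - c * delta)
    # Simultaneous-perturbation gradient estimate
    g_hat = (loss_plus - loss_minus) / (2.0 * c * delta)
    return theta - a * g_hat

# Toy usage: minimise a quadratic as a stand-in for a network training loss
loss = lambda w: np.sum((w - 3.0) ** 2)
w = np.zeros(4)
for _ in range(200):
    w = spsa_step(loss, w)
print(w)  # should approach [3, 3, 3, 3]
```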
22

Implementation of Associative Memory with Online Learning into a Spiking Neural Network on Neuromorphic Hardware

Hampo, Michael J. January 2020 (has links)
No description available.
23

Neural Correlates of Verbal Associative Memory and Mnemonic Strategy Use Following Childhood Traumatic Brain Injury

Kramer, Megan Elizabeth 04 December 2009 (has links)
No description available.
24

Low-Power High-Performance Ternary Content Addressable Memory Circuits

Mohan, Nitin January 2006 (has links)
Ternary content addressable memories (TCAMs) are hardware-based parallel lookup tables with bit-level masking capability. They are attractive for applications such as packet forwarding and classification in network routers. Despite the attractive features of TCAMs, high power consumption is one of the most critical challenges faced by TCAM designers. This work proposes circuit techniques for reducing TCAM power consumption. The main contribution of this work is divided into two parts: (i) reduction in match line (ML) sensing energy, and (ii) static-power reduction techniques. The ML sensing energy is reduced by employing (i) positive-feedback ML sense amplifiers (MLSAs), (ii) low-capacitance comparison logic, and (iii) low-power ML-segmentation techniques. The positive-feedback MLSAs include both resistive and active feedback to reduce the ML sensing energy. A body-bias technique can further improve the feedback action at the expense of additional area and ML capacitance. The measurement results of the active-feedback MLSA show 50-56% reduction in ML sensing energy. The measurement results of the proposed low-capacitance comparison logic show 25% and 42% reductions in ML sensing energy and time, respectively, which can be further improved by careful layout. The low-power ML-segmentation techniques include dual ML TCAM and charge-shared ML. Simulation results of the dual ML TCAM, which connects two sides of the comparison logic to two ML segments for sequential sensing, show 43% power savings for a small (4%) trade-off in search speed. The charge-shared ML scheme achieves power savings by partially recycling the charge stored in the first ML segment. Chip measurement results show that the charge-shared ML scheme yields 11% and 9% reductions in ML sensing time and energy, respectively, which can be improved to 19-25% by using a digitally controlled charge-sharing time window and a slightly modified MLSA. The static-power reduction is achieved by a dual-VDD technique and low-leakage TCAM cells. The dual-VDD technique trades off the excess noise margin of the MLSA for smaller cell leakage by applying a smaller VDD to TCAM cells and a larger VDD to the peripheral circuits. The low-leakage TCAM cells trade off the speed of READ and WRITE operations for smaller cell area and leakage. Finally, the design and testing of a complete TCAM chip are presented and compared with other published designs.
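The techniques above are circuit-level; as a purely software analogy of what a TCAM does functionally, the sketch below models a ternary lookup with don't-care bits. The entry format, the priority rule (first match wins), and the function name are illustrative assumptions, not part of the thesis.

```python
def tcam_lookup(entries, key):
    """Software analogy of a TCAM search: each stored entry is a string over
    {'0', '1', 'x'}, where 'x' is a don't-care bit. Hardware compares the key
    against all entries in parallel; here we scan in order and return the
    index of the first (highest-priority) matching entry, or None."""
    for index, pattern in enumerate(entries):
        if all(p in ('x', k) for p, k in zip(pattern, key)):
            return index
    return None

# Longest-prefix-style routing table: more specific entries listed first
table = ["1011x", "10xxx", "xxxxx"]
print(tcam_lookup(table, "10110"))  # -> 0 (matches the most specific entry)
print(tcam_lookup(table, "10001"))  # -> 1
print(tcam_lookup(table, "01000"))  # -> 2 (default entry)
```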
26

Storing sequences in binary neural networks with high efficiency

JIANG, Xiaoran 08 January 2014 (has links) (PDF)
Sequential structure imposed by the forward linear progression of time is omnipresent in all cognitive behaviors. This thesis proposes a novel model to store sequences of any length, scalar or vectorial, in binary neural networks. In particular, the model that we introduce resolves some well-known problems in sequential learning, such as error intolerance, catastrophic forgetting, and interference when storing complex sequences. The total amount of sequential information that the network is able to store grows quadratically with the number of nodes, and the efficiency - the ratio between the capacity and the total amount of information consumed by the storage device - can reach around 30%. This work can be considered an extension of the non-oriented clique-based neural networks previously proposed and studied within our team. Those networks, composed of binary neurons and binary connections, use graph redundancy and sparsity to achieve a quadratic learning diversity. To obtain the ability to store sequences, connections are given an orientation to form a tournament-based neural network. This is more natural biologically speaking, since communication between neurons is unidirectional, from axons to synapses. Any component of the network, a cluster or a node, can be revisited several times within a sequence or by multiple sequences. This allows the network to store sequences of any length, independent of the number of clusters and limited only by the total available resources of the network. Moreover, in order to allow error correction and provide robustness to the network, both spatial assembly redundancy and sequential redundancy, with or without anticipation, may be combined to offer a large amount of redundancy in the activation of a node. Subsequently, a double-layered structure is introduced for accurate retrieval: the lower layer, a tournament-based hetero-associative network, stores oriented sequential associations between patterns, while an upper auto-associative layer of mirror nodes is superposed to emphasize the co-occurrence of elements belonging to the same pattern, in the form of a clique. This model is then extended to a hierarchical structure, which helps resolve the interference issue when storing complex sequences. This thesis also contributes new decoding rules for sparse messages, proposed and assessed in order to fully benefit from the theoretical quadratic law of learning diversity. Besides performance, biological plausibility is a constant concern throughout this work.
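As an illustration of the core mechanism described above (oriented binary connections over clusters of nodes storing a chain of associations), here is a minimal sketch of storing and replaying one sequence. The cluster and node counts, the flat weight matrix, and the greedy retrieval rule are simplifying assumptions; the sketch omits the redundancy, anticipation, and double-layer mechanisms of the actual model.

```python
import numpy as np

CLUSTERS, FANALS = 8, 16  # toy network: 8 clusters, 16 nodes per cluster

def node(cluster, fanal):
    """Flat index of a (cluster, fanal) pair."""
    return cluster * FANALS + fanal

# Binary oriented connections: W[i, j] == 1 means "i points to j"
W = np.zeros((CLUSTERS * FANALS, CLUSTERS * FANALS), dtype=np.uint8)

def store_sequence(seq):
    """Store a sequence of (cluster, fanal) symbols as oriented edges from
    each element to its successor, forming a chain in the oriented graph."""
    for (c1, f1), (c2, f2) in zip(seq, seq[1:]):
        W[node(c1, f1), node(c2, f2)] = 1

def replay(start, length):
    """Greedy retrieval: from the current node, follow an outgoing edge.
    (The real model scores candidates cluster by cluster and uses
    redundancy/anticipation for error correction; this is the bare idea.)"""
    current, out = node(*start), [start]
    for _ in range(length - 1):
        successors = np.flatnonzero(W[current])
        if successors.size == 0:
            break
        current = int(successors[0])
        out.append(divmod(current, FANALS))
    return out

sequence = [(0, 3), (2, 7), (5, 1), (2, 9), (7, 0)]  # cluster 2 is revisited
store_sequence(sequence)
print(replay((0, 3), len(sequence)))  # -> the stored sequence
```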
27

Behavioural Studies and Computational Models Exploring Visual Properties that Lead to the First Floral Contact by Bumblebees

Orbán, Levente L. 16 April 2014 (has links)
This dissertation explored how bumblebees' visual system helps them discover their first flower. Previous studies found that bees have unlearned preferences for parts of a flower, such as its colour and shape. The first study pitted two variables against each other: pattern type (sunburst versus bull's eye) and pattern location (shapes appearing peripherally or centrally). We observed free-flying bees in a flight cage using Radio-Frequency Identification (RFID) tracking. The results show two distinct behavioural preferences. Pattern type predicts landing: bees prefer radial over concentric patterns, regardless of whether the radial pattern is on the perimeter or near the centre of the flower. Pattern location predicts exploration: bees were more likely to explore the inside of artificial flowers if the shapes were displayed near the centre of the flower, regardless of whether the pattern was radial or concentric. In the second component, we implemented a mathematical model aimed at explaining how bees come to prefer radial patterns, leafy backgrounds, and symmetry. The model was based on unsupervised neural networks used to describe cognitive mechanisms, and its results were consistent with those of multiple behavioural experiments. The model suggests that bees choose computationally "cheaper" stimuli, those that contain less information. The third study tested the computational-load hypothesis generated by the artificial neural networks, examining the visual properties of symmetry and spatial frequency. Studying free-flying bees in a flight cage using motion-sensitive video recordings, we found that bees preferred 4-axis symmetrical patterns in both low- and high-frequency displays.
28

Words and Rules in L2 Processing: An Analysis of the Dual-Mechanism Model

Bilal, Kirkici 01 March 2005 (has links) (PDF)
The nature of the mental representation and processing of morphologically complex words has constituted one of the major points of controversy in psycholinguistic research over the past two decades. The Dual-Mechanism Model defends the necessity of two separate mechanisms for linguistic processing, an associative memory and a rule-system, which account for the processing of irregular and regular word forms, respectively. The purpose of the present study was to analyse the validity of the claims of the Dual-Mechanism Model for second language (L2) processing in order to contribute to the accumulating but so far equivocal knowledge concerning L2 processing. A second purpose of the study was to find out whether L2 proficiency could be identified as a determining factor in the processing of L2 morphology. Two experiments (a lexical decision task on the English past tense and an elicited production task on English lexical compounds) were run with 22 low-proficiency and 24 high-proficiency first language (L1) Turkish users of L2 English and with 6 L1 speakers of English. The results showed that the regular-irregular dissociation predicted by the Dual-Mechanism Model was clearly evident in the production of English lexical compounds for all three subject groups. A comparatively weaker dissociation, coupled with intricate response patterns, was found in the processing of the English past tense, possibly because of a number of confounding factors that were not sufficiently controlled. In addition, direct comparisons of the L2 groups displayed a remarkable effect of L2 proficiency on L2 morphological processing.
29

On Teaching Quality Improvement of a Mathematical Topic Using Artificial Neural Networks Modeling (With a Case Study)

Mustafa, Hassan M., Al-Hamadi, Ayoub 07 May 2012 (has links) (PDF)
This paper is inspired by Artificial Neural Network (ANN) simulation recently applied to evaluate the phonics methodology for teaching children how to read. It presents a novel approach for teaching a mathematical topic using a computer-aided learning (CAL) package applied in an educational setting (a children's classroom). Interesting practical results were obtained after field application of the suggested CAL package with and without an associated teacher's voice. The presented study highly recommends applying a novel teaching approach based on behaviorism and individual learning styles in order to improve the quality of children's mathematical learning performance.
30

Stability of basal activity, retrieval and formation of memories in networks of spiking neurons

Agnes, Everton João January 2014 (has links)
The brain, through complex electrical activity, is able to process different types of information, which are encoded, stored and retrieved. The processing is based on the activity of neurons that communicate primarily by discrete events in time: the action potentials. These action potentials can be observed via experimental techniques; for example, it is possible to measure the spike times of hundreds of neurons in living mice. However, the strength of the connections among these neurons is not fully accessible, which, among other factors, precludes a more complete understanding of the neural network. Thus, computational neuroscience has an important role in understanding the processes involved in the brain, at various levels of detail. Within the field of computational neuroscience, this work presents a study on the acquisition and retrieval of memories given by spatial patterns, where space is defined by the neurons of the simulated network. First we use Hebb's rule to build networks of spiking neurons with static connections chosen from these spatial patterns. If memories are stored in the connections between neurons, then synaptic weights should be plastic so that learning is possible. Synaptic plasticity rules that allow memory formation (Hebbian) usually introduce instabilities in the neurons' activity. We therefore developed homeostatic plasticity rules that stabilize baseline activity regimes in networks of spiking neurons. The thesis ends with analytical and numerical studies of plasticity rules that allow unsupervised learning by increasing the activity of specific neurons. We show that, with a plasticity rule based on experimental evidence, retrieval of learned patterns is possible with either supervised or spontaneous recall.
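To illustrate the combination described above, a Hebbian term that forms memories plus a homeostatic term that keeps activity bounded, here is a minimal rate-based sketch. The update rule, the constants, and the rate-based (non-spiking) simplification are assumptions for illustration and do not reproduce the thesis's actual spiking-network rules.

```python
import numpy as np

rng = np.random.default_rng(1)
N, TARGET_RATE = 100, 0.1          # neurons and desired mean activity
ETA_HEBB, ETA_HOME = 0.01, 0.001   # learning rates

W = np.zeros((N, N))               # recurrent weights, W[i, j]: from j onto i

def step(W, x):
    """One learning step on activity pattern x (values in [0, 1])."""
    # Hebbian term: strengthen connections between co-active neurons
    dW_hebb = ETA_HEBB * np.outer(x, x)
    # Homeostatic term: scale each neuron's incoming weights according to how
    # far its activity is from the target rate, counteracting the runaway
    # growth a pure Hebbian rule would produce
    rate_error = x - TARGET_RATE
    dW_home = -ETA_HOME * rate_error[:, None] * W
    W = W + dW_hebb + dW_home
    np.fill_diagonal(W, 0.0)       # no self-connections
    return W

# Repeatedly present one sparse pattern; with the homeostatic term the
# weights saturate at a finite value instead of growing without bound
pattern = (rng.random(N) < TARGET_RATE).astype(float)
for _ in range(1000):
    W = step(W, pattern)
print(W.max())
```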
