11

Memory-based learning structure : learning covergence [sic], network structures and training techniques /

Cheng, Yi-Hsun Ethan, January 2000 (has links)
Thesis (Ph. D.)--University of Missouri-Columbia, 2000. / Typescript. Vita. Includes bibliographical references (leaves 104-106). Also available on the Internet.
13

Novel low power CAM architecture /

Ng, Ka Fai. January 2008 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2008. / Typescript. Includes bibliographical references (leaves 73-75).
14

A garbage-collecting associative memory for database systems

Ross, Richard Alan January 1982 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1982. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Bibliography: leaves 64-66. / by Richard Alan Ross. / M.S.
15

Evolving Nano-scale Associative Memories with Memristors

Sinha, Arpita 01 January 2011 (has links)
Associative Memories (AMs) are essential building blocks for brain-like intelligent computing with applications in artificial vision, speech recognition, artificial intelligence, and robotics. Computations for such applications typically rely on spatial and temporal associations in the input patterns and need to be robust against noise and incomplete patterns. The conventional method for implementing AMs is through Artificial Neural Networks (ANNs). Improving the density of ANNs based on conventional circuit elements poses a challenge as devices reach their physical scalability limits. Furthermore, stored information in AMs is vulnerable to destructive input signals. Novel nano-scale components, such as memristors, represent one solution to the density problem. Memristors are non-linear time-dependent circuit elements with an inherently small form factor. However, novel neuromorphic circuits typically use memristors to replace synapses in conventional ANN circuits. This sub-optimal use is primarily because there is no established design methodology to exploit the memristor's non-linear properties in a more encompassing way. The objective of this thesis is to explore denser and more robust AM designs using memristor networks. We hypothesize that such network AMs will be more area-efficient than the traditional ANN designs if we can use the memristor's non-linear property for spatial and time-dependent temporal association. We have built a comprehensive simulation framework that employs Genetic Programming (GP) to evolve AM circuits with memristors. The framework is based on the ParadisEO metaheuristics API and uses ngspice for the circuit evaluation. Our results show that we can evolve efficient memristor-based networks that have the potential to replace conventional ANNs used for AMs. We obtained AMs that a) can learn spatial and temporal correlation in the input patterns; b) optimize the trade-off between the size and the accuracy of the circuits; and c) are robust against destructive noise in the inputs. This robustness was achieved at the expense of additional components in the network. We have shown that automated circuit discovery is a promising tool for memristor-based circuits. Future work will focus on evolving circuits that can be used as a building block for more complicated intelligent computing architectures.
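As background to the memristor-based approach, here is a minimal sketch of the HP linear-drift memristor model, illustrating the non-linear, history-dependent resistance that such evolved AM circuits exploit. This is not code from the thesis (which described circuits at the SPICE level via ngspice and evolved them with ParadisEO); it is a Python approximation, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative (assumed) device parameters for the HP linear-drift memristor model.
R_ON, R_OFF = 100.0, 16e3      # low / high resistance states (ohms)
D, MU_V = 10e-9, 1e-14         # film thickness (m) and dopant mobility (m^2 / (V*s))

def simulate_memristor(v, dt, x0=0.1):
    """Integrate the state variable x = w/D under a voltage waveform v(t),
    returning the current; the memristance depends on the charge history."""
    x = x0
    currents = []
    for vk in v:
        m = R_ON * x + R_OFF * (1.0 - x)   # instantaneous memristance M(x)
        i = vk / m
        x += MU_V * R_ON / D**2 * i * dt   # linear dopant drift: dx/dt proportional to i(t)
        x = min(max(x, 0.0), 1.0)          # hard window keeps x in [0, 1]
        currents.append(i)
    return np.array(currents)

# A slow sinusoidal drive traces the pinched current-voltage hysteresis loop
# that lets the device act as an analogue, non-volatile memory element.
t = np.arange(0.0, 2.0, 1e-4)
v = np.sin(2 * np.pi * 1.0 * t)
i = simulate_memristor(v, dt=1e-4)
```

In the thesis itself the surrounding circuit topology is not fixed by hand as here, but evolved by GP and evaluated in ngspice.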
16

Improving associative memory in a network of spiking neurons

Hunter, Russell I. January 2011 (has links)
In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network because of the neurophysiological characteristics found there, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, which were first developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified two-state model neurons. These memories can be recalled when a cue (noisy or partial) is presented to the network. The type of information they can store is quite limited, owing to the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells. This model is based on some of the many details we now know about the neuronal circuitry of the CA3 region. We implemented the model in software using Neuron and Matlab and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type is the pyramidal cell of the cortex, which is the main information processor in these networks. Pyramidal cells are surrounded by diverse populations of interneurons, which are proportionally far fewer in number and which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity, and of incorporating spatially dependent connectivity, on the network during recall of previously stored information. In particular, we implement a spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve recall quality. The networks tested contain either 100 or 1000 pyramidal cells, with 10% connectivity, a partial cue, and global pseudo-inhibition. We investigate three methods. Firstly, applying localised disynaptic inhibition, which proportionalises the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition that helps synchronise network activity.
Secondly, adding a persistent sodium channel to the cell body, which non-linearises the activation threshold: beyond a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold. Finally, incorporating spatial characteristics of the dendritic tree, which gives a greater probability that a modified synapse exists after 10% random connectivity has been applied throughout the network. We apply these spatial characteristics by scaling the conductance weights of excitatory synapses, simulating the loss of potential at synapses in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations for a global inhibitory circuit. The networks are configured with: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for recall quality and their effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that the roles of inhibition and cellular dynamics are pivotal in learning and memory.
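For contrast with the biologically detailed CA3 model, the two-state associative network described above can be sketched in a few lines. The following is an assumed, textbook-style Willshaw-type formulation in Python, not code from the thesis (which used Neuron and Matlab); it shows clipped Hebbian storage and one-step recall from a partial cue, with network size, sparsity, and pattern count chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
N, ACTIVE, N_PATTERNS = 1000, 50, 20   # units, active units per pattern, stored patterns

def make_pattern():
    p = np.zeros(N, dtype=np.uint8)
    p[rng.choice(N, ACTIVE, replace=False)] = 1
    return p

patterns = [make_pattern() for _ in range(N_PATTERNS)]

# Storage: a binary synapse is switched on whenever its pre- and post-synaptic
# units are coactive in a stored pattern (clipped Hebbian learning).
W = np.zeros((N, N), dtype=np.uint8)
for p in patterns:
    W |= np.outer(p, p)
np.fill_diagonal(W, 0)

def recall(cue, n_active=ACTIVE):
    """One-step recall: compute each unit's dendritic sum from the cue and keep
    the n_active units with the highest input (a winners-take-all threshold)."""
    dendritic_sum = W.astype(int) @ cue.astype(int)
    out = np.zeros(N, dtype=np.uint8)
    out[np.argsort(dendritic_sum)[-n_active:]] = 1
    return out

# Partial cue: present only half of the active units of a stored pattern.
target = patterns[0]
cue = target.copy()
cue[np.flatnonzero(target)[ACTIVE // 2:]] = 0
recalled = recall(cue)
overlap = (recalled & target).sum() / ACTIVE
print(f"recall overlap with stored pattern: {overlap:.2f}")
```

Roughly speaking, the recall-improvement methods investigated in the thesis aim to reproduce this kind of thresholding step reliably with realistic spiking cells and inhibitory circuitry rather than with an explicit winners-take-all rule.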
