1

Distributed associative memory

Sterne, Philip Jonathan January 2011
This dissertation modifies error-correcting codes and Bloom filters to create high-capacity associative memories. These associative memories use principled statistical inference and are distributed: no single component contains enough information to complete the task by itself, yet the components can collectively solve the task by passing information to each other. They are also robust to hardware failure, as their distributed nature ensures there is no single point of failure. The dissertation starts by simplifying a Bloom filter so that it tolerates hardware failure (albeit with reduced performance). An efficient associative memory is created by performing inference over the set of items stored in the Bloom filter. This architecture suggests a modification which forgets old patterns stored in the associative memory (known as a palimpsest memory). It is shown that overwriting old patterns in an independent manner reduces performance, but remains comparable to the well-known Hopfield network. The lost performance can be regained by using integer storage, which allows superposition of the pattern representations, or by ensuring bits are not overwritten independently, using concepts from error-correcting codes. The final task performs recall in continuous time using components which are more similar to neurons than those used in the rest of the dissertation. The resulting memory has the exciting ability to recall many patterns simultaneously. Statistical inference ensures gradual degradation of performance as an associative memory is overloaded. Since many definitions of associative memory capacity rely on the existence of catastrophic failure, a new definition of capacity is provided. In spite of some biologically unrealistic attributes, this work is relevant to the understanding of the brain, as it provides high-performance solutions to the associative memory task, a task known to be relevant to the brain.
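The Bloom filter this dissertation takes as its starting point can be sketched in a few lines. The following is a generic illustration (the hash scheme and parameters are assumptions, not Sterne's construction), showing how membership information is spread across many bits so that no single bit is a point of failure:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: each item is encoded across k bit positions,
    so no single bit carries the whole answer -- a distributed representation."""

    def __init__(self, n_bits=1024, n_hashes=4):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = [0] * n_bits

    def _positions(self, item):
        # Derive k pseudo-independent bit positions from hashes of the item.
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # All k bits set -> "probably stored"; any bit clear -> definitely not.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("pattern-17")
print("pattern-17" in bf, "pattern-99" in bf)  # True False (with high probability)
```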
2

Logic-per-track associative memory

Tang, Geok-Seng January 1976
An associative, or content-addressable, memory, one in which data may be retrieved by its value rather than by real address, has always been an attractive idea. Although such a memory has not yet proven practical for files of respectable size, much interesting work has been done on the subject, for example by Minsky (1972), Slotnick (1970) and Parker (1970, 1971). This thesis is concerned with the device proposed by Slotnick and Parker, called the 'Logic Per Track Device'. After briefly reviewing the design and capabilities of their device, the thesis proceeds to propose some modifications to the design which not only lead to greatly enhanced performance, but also establish its practical application for files of respectable size. In the device of Slotnick and Parker, there is a fairly sophisticated logic chip attached directly to each non-movable read-write head. This allows all logic heads to search simultaneously for information matching a given key, so that any desired record can be located within one revolution. However, reading and writing will require a second revolution, because part of the record will have passed the head before the match is recognized. Moreover, if more than one record matches the search key, extra bookkeeping will be needed since matching records on different tracks may partially overlap. These problems were ignored in the retrieval system developed by Parker (1970, 1971). The following four additional features of the device are proposed: 1. Two logic heads on each track have been introduced. The leading head will continue to have the primary responsibility for simultaneous searching; the second head, trailing a fixed distance behind, will do the actual reading and writing of records. 2. A delay register, whose length equals the distance between the logic heads on the same track, has been added to the read-write head. The function of the delay bit is to tell the read-write head partner where to start reading (or writing) a record whenever a match is recognized, so that retrieving (or writing) a single record can always be performed in the same revolution. 3. Another major design change gives the new device the ability to keep track of all records which may be retrieved within a single revolution by parallel search. To this end, the monitor, which synchronizes the activities of all logic head couples, is provided with a record counter, and a mark entity is prefixed to every record on the disk itself. 4. A file identification mechanism has been established for the associative memory; its functions are (a) to manage file names, and (b) to manipulate data on the storage device. The next step is to explore the use of such a modified device for file-oriented problems. 'Hierarchical search' for records possessing a specified combination of keys can be performed directly on the key part of records, without the intermediate step of transmitting records into the main computer memory. In an application requiring chain processing, the chain pointer can be a key of the record, because each record in the associative memory is accessed by content rather than by real address. The chain key can be generated from the key of the record it points to by a simple and reversible procedure (a hypothetical sketch is given after this record).
Such a chain technique has a number of advantages: (a) any chain is in fact a two-way chain, (b) each record in the chain can be retrieved by following the chain key, or directly by the key of the record if it is known, and (c) the tangle of actual physical addresses in chain processing can be avoided. The storage organization for more complex data structures, such as tree structures, presents another unique feature of the modified memory. In a tree structure, indexes to the subordinate records may be kept with each parent record, or each subordinate record may store an index to its parent record. Both data structures take the same amount of storage space. Comparison of its performance to the conventional counterpart shows that significant improvements in access times can be achieved. / Science, Faculty of / Computer Science, Department of / Graduate
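The abstract leaves the "simple and reversible procedure" for generating chain keys unspecified; one minimal hypothetical choice is an involution such as XOR with a fixed mask, which automatically makes every chain traversable in both directions:

```python
CHAIN_MASK = 0b1010_1010_1010_1010  # illustrative constant, not from the thesis

def chain_key(record_key: int) -> int:
    """Hypothetical reversible mapping between a record key and its chain key.
    Because XOR with a constant is an involution, chain_key(chain_key(k)) == k,
    so a chain built this way can be followed in either direction."""
    return record_key ^ CHAIN_MASK

k = 0x3F2A
assert chain_key(chain_key(k)) == k  # the two-way chain property
```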
3

The stability and attractivity of neural associative memories.

January 1996
Han-bing Ji. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (p. 160-163). / Microfiche. Ann Arbor, Mich.: UMI, 1998. 2 microfiches ; 11 x 15 cm.
4

Associative neural networks: properties, learning, and applications.

January 1994
by Chi-sing Leung. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (leaves 236-244). / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background of Associative Neural Networks --- p.1 / Chapter 1.2 --- A Distributed Encoding Model: Bidirectional Associative Memory --- p.3 / Chapter 1.3 --- A Direct Encoding Model: Kohonen Map --- p.6 / Chapter 1.4 --- Scope and Organization --- p.9 / Chapter 1.5 --- Summary of Publications --- p.13 / Chapter I --- Bidirectional Associative Memory: Statistical Properties and Learning --- p.17 / Chapter 2 --- Introduction to Bidirectional Associative Memory --- p.18 / Chapter 2.1 --- Bidirectional Associative Memory and its Encoding Method --- p.18 / Chapter 2.2 --- Recall Process of BAM --- p.20 / Chapter 2.3 --- Stability of BAM --- p.22 / Chapter 2.4 --- Memory Capacity of BAM --- p.24 / Chapter 2.5 --- Error Correction Capability of BAM --- p.28 / Chapter 2.6 --- Chapter Summary --- p.29 / Chapter 3 --- Memory Capacity and Statistical Dynamics of First Order BAM --- p.31 / Chapter 3.1 --- Introduction --- p.31 / Chapter 3.2 --- Existence of Energy Barrier --- p.34 / Chapter 3.3 --- Memory Capacity from Energy Barrier --- p.44 / Chapter 3.4 --- Confidence Dynamics --- p.49 / Chapter 3.5 --- Numerical Results from the Dynamics --- p.63 / Chapter 3.6 --- Chapter Summary --- p.68 / Chapter 4 --- Stability and Statistical Dynamics of Second Order BAM --- p.70 / Chapter 4.1 --- Introduction --- p.70 / Chapter 4.2 --- Second Order BAM and its Stability --- p.71 / Chapter 4.3 --- Confidence Dynamics of Second Order BAM --- p.75 / Chapter 4.4 --- Numerical Results --- p.82 / Chapter 4.5 --- Extension to Higher Order BAM --- p.90 / Chapter 4.6 --- Verification of the Conditions of Newman's Lemma --- p.94 / Chapter 4.7 --- Chapter Summary --- p.95 / Chapter 5 --- Enhancement of BAM --- p.97 / Chapter 5.1 --- Background --- p.97 / Chapter 5.2 --- Review on Modifications of BAM --- p.101 / Chapter 5.2.1 --- Change of the encoding method --- p.101 / Chapter 5.2.2 --- Change of the topology --- p.105 / Chapter 5.3 --- Householder Encoding Algorithm --- p.107 / Chapter 5.3.1 --- Construction from Householder Transforms --- p.107 / Chapter 5.3.2 --- Construction from iterative method --- p.109 / Chapter 5.3.3 --- Remarks on HCA --- p.111 / Chapter 5.4 --- Enhanced Householder Encoding Algorithm --- p.112 / Chapter 5.4.1 --- Construction of EHCA --- p.112 / Chapter 5.4.2 --- Remarks on EHCA --- p.114 / Chapter 5.5 --- Bidirectional Learning --- p.115 / Chapter 5.5.1 --- Construction of BL --- p.115 / Chapter 5.5.2 --- The Convergence of BL and the Memory Capacity of BL --- p.116 / Chapter 5.5.3 --- Remarks on BL --- p.120 / Chapter 5.6 --- Adaptive Ho-Kashyap Bidirectional Learning --- p.121 / Chapter 5.6.1 --- Construction of AHKBL --- p.121 / Chapter 5.6.2 --- Convergent Conditions for AHKBL --- p.124 / Chapter 5.6.3 --- Remarks on AHKBL --- p.125 / Chapter 5.7 --- Computer Simulations --- p.126 / Chapter 5.7.1 --- Memory Capacity --- p.126 / Chapter 5.7.2 --- Error Correction Capability --- p.130 / Chapter 5.7.3 --- Learning Speed --- p.157 / Chapter 5.8 --- Chapter Summary --- p.158 / Chapter 6 --- BAM under Forgetting Learning --- p.160 / Chapter 6.1 --- Introduction --- p.160 / Chapter 6.2 --- Properties of Forgetting Learning --- p.162 / Chapter 6.3 --- Computer Simulations --- p.168 / Chapter 6.4 --- Chapter Summary --- p.168 / Chapter II --- Kohonen Map: Applications in Data Compression and Communications --- p.170 / Chapter 7 --- Introduction to Vector Quantization and Kohonen Map --- p.171 / Chapter 7.1 --- Background on Vector Quantization --- p.171 / Chapter 7.2 --- Introduction to LBG Algorithm --- p.173 / Chapter 7.3 --- Introduction to Kohonen Map --- p.174 / Chapter 7.4 --- Chapter Summary --- p.179 / Chapter 8 --- Applications of Kohonen Map in Data Compression and Communications --- p.181 / Chapter 8.1 --- Use Kohonen Map to design Trellis Coded Vector Quantizer --- p.182 / Chapter 8.1.1 --- Trellis Coded Vector Quantizer --- p.182 / Chapter 8.1.2 --- Trellis Coded Kohonen Map --- p.188 / Chapter 8.1.3 --- Computer Simulations --- p.191 / Chapter 8.2 --- Kohonen Map: Combined Vector Quantization and Modulation --- p.195 / Chapter 8.2.1 --- Impulsive Noise in the received data --- p.195 / Chapter 8.2.2 --- Combined Kohonen Map and Modulation --- p.198 / Chapter 8.2.3 --- Computer Simulations --- p.200 / Chapter 8.3 --- Error Control Scheme for the Transmission of Vector Quantized Data --- p.213 / Chapter 8.3.1 --- Motivation and Background --- p.214 / Chapter 8.3.2 --- Trellis Coded Modulation --- p.216 / Chapter 8.3.3 --- Combined Vector Quantization, Error Control, and Modulation --- p.220 / Chapter 8.3.4 --- Computer Simulations --- p.223 / Chapter 8.4 --- Chapter Summary --- p.226 / Chapter 9 --- Conclusion --- p.232 / Bibliography --- p.236
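As context for the BAM chapters listed above, a minimal sketch of first-order bidirectional associative memory encoding and recall with the standard outer-product (correlation) rule; the pattern pairs and dimensions below are invented for the example, not taken from the thesis:

```python
import numpy as np

def bam_encode(X, Y):
    """Correlation encoding: W = sum_k y_k x_k^T over bipolar (+/-1) pattern pairs."""
    return sum(np.outer(y, x) for x, y in zip(X, Y))

def _threshold(u, prev):
    # Standard BAM convention: a unit keeps its previous state on zero net input.
    return np.where(u > 0, 1, np.where(u < 0, -1, prev))

def bam_recall(W, x, steps=10):
    """Bidirectional recall: bounce between the two layers until a stable pair."""
    y = np.ones(W.shape[0], dtype=int)
    for _ in range(steps):
        y = _threshold(W @ x, y)
        x_new = _threshold(W.T @ y, x)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y

X = [np.array([1, -1, 1, -1, 1, -1]), np.array([1, 1, 1, -1, -1, -1])]
Y = [np.array([1, -1, 1, -1]), np.array([-1, 1, -1, 1])]
W = bam_encode(X, Y)

noisy = X[0].copy()
noisy[0] = -noisy[0]                  # flip one input bit
_, y = bam_recall(W, noisy)
print(np.array_equal(y, Y[0]))        # True: the associated pattern is recovered
```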
5

Neural network with multiple-valued activation function. / CUHK electronic theses & dissertations collection

January 1996
by Chen, Zhong-Yu. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1996. / Includes bibliographical references (p. 146-[154]). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
6

Associative processing implemented with content-addressable memories

Kida, Luis Sergio 01 January 1991
The associative processing model provides an alternative solution to the von Neumann bottleneck. The memory of an associative computer takes some of the responsibility for processing; only intermediate results are exchanged between memory and processor, which greatly reduces the amount of communication between them. Content-addressable memories (CAMs) are one implementation of memory for this computational model. Associative computers implemented with CAMs have reported performance improvements of three orders of magnitude, equivalent to the performance of the same application running on a conventional computer with clock frequencies of the order of GHz. Among the benefits of content-addressable memories to the computer system are: 1) it is simpler to parallelize algorithms and implement concurrency; 2) the synchronization cost for parallel processing is lower, which enables the use of small-grain parallelism; 3) they can improve performance in non-numeric applications that are known to perform poorly on conventional computers; 4) they provide a trade-off between integration density and clock frequency, not available in RAM, to achieve the same performance; 5) they match well to current and future technologies due to this trade-off between integration and clock frequency; and 6) they attack the von Neumann bottleneck by reducing the requirements on the communication bandwidth between processor and memory. In this thesis, the role of CAMs in associative processing is analyzed, reaching the conclusion that to implement these characteristics the CAM must be able to filter the data transferred to the processor, provide explicit support for parallelism and data structures, support non-numeric applications, and execute logical operations. The characteristics and architecture of a content-addressable memory integrated circuit are presented, along with an application with an estimated performance improvement of over three orders of magnitude.
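A software analogue of the exact-match CAM operation this computational model relies on (illustrative only; the thesis describes a hardware integrated circuit). Every stored word is compared against a possibly masked key "in parallel", and only the match results, not the raw data, reach the processor:

```python
def cam_search(memory, key, mask=0xFFFF):
    """Exact-match CAM lookup over 16-bit words: return the indices of all words
    whose masked bits equal the masked key. In hardware every word is compared
    in the same cycle; the mask lets the CAM filter on a subfield of the word."""
    return [i for i, word in enumerate(memory) if (word ^ key) & mask == 0]

memory = [0xBEEF, 0x1234, 0xBE00, 0x5678]
print(cam_search(memory, 0xBEEF))               # [0]    exact match
print(cam_search(memory, 0xBE00, mask=0xFF00))  # [0, 2] match on the high byte only
```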
7

Smart Memory: An Inexact Content-Addressable Memory

Lee, Jack 12 February 1993
The function of a Content-Addressable Memory (CAM) is to efficiently search the information stored in the memory by using hardware rather than software, with a corresponding improvement in searching speed. This hardware allows a parallel search, matching the data stored in memory against a search key, rather than sequentially searching address by address as is done in a Random Access Memory. Although existing CAMs are more efficient at finding relevant information than RAM, there are additional improvements that can be made to further improve their efficiency. For example, previous CAMs use a word-parallel searching scheme that can only identify exact matches; to find the best (closest) match, they had to fall back on bit-serial approaches. Although still more efficient than RAM searching, these CAMs were limited by the word size (bit width) of the memory. Responding to this inefficiency, the CAM described in this thesis improves best-fit searching by using analog design in combination with digital design. The design uses a mismatch line to collect the result of the comparison of each bit of a word, which is then decoded by a simple flash A/D. This means that after a single operation the best fit, plus all words with zero to three bits of mismatch, is determined. This word/bit-parallel searching makes this CAM more efficient than existing CAMs. The best-fit function of this CAM is well suited to database retrieval, communications, and error-correction circuitry. By using the high-speed searching and the inexact-match feature, this CAM also provides efficient sorting and set operations; the accumulated searching time is shortened compared to a regular CAM or RAM. The inexact CAM in this thesis is designed using a mixed analog/digital design in a 2 μm CMOS technology.
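A minimal software model of the word/bit-parallel best-fit search described above (assumed behaviour of the match logic, not the thesis circuit): each word accumulates a mismatch count against the key, and every word within three mismatched bits is reported, best fit first:

```python
def inexact_cam_search(memory, key, max_mismatch=3):
    """Return (index, mismatch_count) for every stored word within max_mismatch
    bits of the key -- the role played by the mismatch line plus flash A/D in
    the thesis design -- sorted so the best (closest) fit comes first."""
    hits = []
    for i, word in enumerate(memory):
        mismatches = bin(word ^ key).count("1")  # Hamming distance to the key
        if mismatches <= max_mismatch:
            hits.append((i, mismatches))
    return sorted(hits, key=lambda h: h[1])

memory = [0b10110100, 0b10110101, 0b01001011, 0b10100100]
print(inexact_cam_search(memory, 0b10110100))
# [(0, 0), (1, 1), (3, 1)]: exact best fit first, then one-bit near matches
```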
8

Attention modulated associative computing and content-associative search in image archive

Khan, Muhammad Javed Iqbal January 1995
Thesis (Ph. D.)--University of Hawaii at Manoa, 1995. / Includes bibliographical references (leaves 220-227). / Microfiche. / xiii, 227 leaves, bound ill. (some col.) 29 cm
9

Low-power dynamic CMOS circuits in high-performance memory arrays /

Patwary, Md. Ataur R. 2009
Thesis (Ph. D.)--Oregon State University, 2009. / Printout. Includes bibliographical references (leaves 77-79). Also available on the World Wide Web.
10

The Design of a Simple, Spiking Sparse Coding Algorithm for Memristive Hardware

Woods, Walt 11 March 2016
Calculating a sparse code for signals with high dimensionality, such as high-resolution images, takes substantial time to compute on a traditional computer architecture. Memristors present the opportunity to combine storage and computing elements into a single, compact device, drastically reducing the area required to perform these calculations. This work focused on the analysis of two existing sparse coding architectures, one of which utilizes memristors, as well as the design of a new, third architecture that employs a memristive crossbar. These architectures implement either a non-spiking or spiking variety of sparse coding based on the Locally Competitive Algorithm (LCA) introduced by Rozell et al. in 2008. Each architecture receives an arbitrary number of input lines and drives an arbitrary number of output lines. Training of the dictionary used for the sparse code was implemented through external control signals that approximate Oja's rule. The resulting designs were capable of representing input in real time: no resets would be required between frames of a video, for instance, though some settle time would be needed. The spiking architecture proposed is novel, emphasizing simplicity to achieve lower power than existing designs. The architectures presented were tested for their ability to encode and reconstruct 8 x 8 patches of natural images. The proposed network reconstructed patches with a normalized root-mean-square error of 0.13, while a more complicated CMOS-only approach yielded 0.095, and a non-spiking approach yielded 0.074. Allowing several outputs to compete for representation of the input was shown to improve reconstruction quality and preserve more subtle components in the final encoding; the proposed algorithm lacks this feature. Steps to address this were proposed for future work: scaling input spikes according to the current expected residual, without adding much complexity. The architectures were also tested with the MNIST digit database, passing a sparse code onto a basic classifier. The proposed architecture scored 81% on this test, a CMOS-only spiking variant scored 76%, and the non-spiking algorithm scored 85%. Power calculations were made for each design and compared against other publications. The overall findings showed great promise for spiking memristor-based ASICs, which consumed only 28% of the power used by non-spiking architectures and 6.6% as much power as a CMOS-only spiking architecture on this task. The spike-based nature of the novel design was also captured in several intuitive parameters that can be adjusted to prefer either performance or power efficiency. The design and analysis of architectures for sparse coding should greatly reduce the amount of future work needed to implement an end-to-end classification pipeline for images or other signal data. When lower power is a primary concern, the proposed architecture should be considered, as it surpassed other published algorithms. These pipelines could be used to provide low-power visual assistance, highlighting objects within high-definition video frames in real time. The technology could also be used to help self-driving cars identify hazards more quickly and efficiently.
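A minimal numpy sketch of the non-spiking Locally Competitive Algorithm (Rozell et al., 2008) that these architectures implement in hardware; the dictionary, threshold, and step size below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def lca(x, Phi, lam=0.1, tau=10.0, dt=1.0, steps=200):
    """Locally Competitive Algorithm, non-spiking form. Outputs compete via
    lateral inhibition Phi^T Phi - I; the soft threshold lam enforces sparsity."""
    b = Phi.T @ x                            # driving input to each output node
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    u = np.zeros(Phi.shape[1])               # internal (membrane-like) state
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft-threshold code
        u += (dt / tau) * (b - u - G @ a)    # leaky integration with competition
    return a

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))         # dictionary for flattened 8x8 patches
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm dictionary atoms
x = rng.standard_normal(64)                  # stand-in for an 8x8 image patch
a = lca(x, Phi)
print(np.count_nonzero(a), "active of", a.size)            # the code is sparse
err = np.linalg.norm(x - Phi @ a) / np.linalg.norm(x)
print(f"normalized reconstruction error: {err:.3f}")
```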
