1

A CURRENT-BASED WINNER-TAKE-ALL (WTA) CIRCUIT FOR ANALOG NEURAL NETWORK ARCHITECTURE

Rijal, Omkar, 01 December 2022
The Winner-Take-All (WTA) operation is an essential neural network primitive for locating the most active neuron, and it is used widely across application areas. A Winner-Take-All circuit selects the maximum of its inputs while inhibiting all other nodes. Analog implementations can be considerably more efficient than digital ones, with a significantly smaller design footprint and shorter processing time. This research presents a current-based Winner-Take-All circuit for analog neural networks. It uses a compare-and-pass (CAP) mechanism in which each pair of inputs is compared and the winner is passed to the next level. The inputs are compared by a sense amplifier that generates high and low voltage signals at its output node; these signals drive logic gates that select the winner and pass it to the next level. Each winner also follows a sequence of digital bits to be selected. SPICE simulation results are presented. For a memristive deep neural network model on the MNIST, Fashion-MNIST, and CIFAR-10 datasets, the circuit identifies the winner class accurately, with average differences between the input current and the selected winner output current of 0.00795 µA, 0.01076 µA, and 0.02364 µA, respectively. Experimental results with transient noise analysis are also presented.
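
The following is a minimal software sketch of the compare-and-pass idea described in this abstract, not the SPICE circuit itself: the function name, the tournament structure, and the example currents are illustrative assumptions, with the pairwise comparison standing in for the sense-amplifier decision at each level.

```python
# Algorithmic analogue of a pairwise compare-and-pass (CAP) winner-take-all.
# The real design compares currents with sense amplifiers; here a simple
# tournament over (index, current) pairs plays the same role.

def winner_take_all(currents):
    """Return (index, value) of the largest input via pairwise comparison."""
    level = list(enumerate(currents))              # (index, current) pairs
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            nxt.append(a if a[1] >= b[1] else b)   # "sense-amplifier" decision
        if len(level) % 2 == 1:                    # odd element passes through
            nxt.append(level[-1])
        level = nxt                                # winners move to next level
    return level[0]

# Example: neuron 2 carries the largest current and is selected as the winner.
idx, value = winner_take_all([0.12, 0.31, 0.87, 0.45])
print(idx, value)   # -> 2 0.87
```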
2

Robust Networks: Neural Networks Robust to Quantization Noise and Analog Computation Noise Based on Natural Gradient

January 2019
abstract: Deep neural networks (DNNs) have had tremendous success in a variety of statistical learning applications due to their vast expressive power. Most applications run DNNs in the cloud on parallelized architectures, but there is a need for efficient DNN inference at the edge on low-precision hardware and analog accelerators. To make trained models more robust in this setting, quantization and analog compute noise are modeled as weight-space perturbations to DNNs, and an information-theoretic regularization scheme penalizes the KL-divergence between the perturbed and unperturbed models. This regularizer has similarities to both natural gradient descent and knowledge distillation, but has the advantage of explicitly promoting the network toward a broader minimum that is robust to weight-space perturbations. In addition to the proposed regularization, the KL-divergence is also minimized directly using knowledge distillation. Initial validation on Fashion-MNIST and CIFAR-10 shows that the information-theoretic regularizer and knowledge distillation outperform existing quantization schemes based on the straight-through estimator or L2-constrained quantization. / Dissertation/Thesis / Masters Thesis Computer Engineering 2019
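
Below is a hedged PyTorch sketch of the kind of regularized objective the abstract describes: a task loss plus a KL-divergence penalty between the clean model and a copy whose weights are perturbed to emulate quantization or analog-compute noise. The function name, Gaussian noise model, and weighting factor are illustrative assumptions, not the thesis's exact formulation.

```python
import copy
import torch
import torch.nn.functional as F

def perturbed_kl_loss(model, x, y, noise_std=0.01, beta=1.0):
    """Cross-entropy plus KL(clean || weight-perturbed) as a robustness penalty."""
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Copy the model and add Gaussian noise to its weights to emulate
    # quantization / analog-compute noise as a weight-space perturbation.
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(noise_std * torch.randn_like(p))

    # KL-divergence between the clean and perturbed predictive distributions
    # (F.kl_div expects log-probabilities as input and probabilities as target).
    kl = F.kl_div(
        F.log_softmax(noisy(x), dim=-1),
        F.softmax(logits, dim=-1),
        reduction="batchmean",
    )
    return ce + beta * kl
```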
