1.

TOWARDS EFFICIENT AND ROBUST DEEP LEARNING: HANDLING DATA NON-IDEALITY AND LEVERAGING IN-MEMORY COMPUTING

Sangamesh D Kodge (19958580) 05 November 2024 (has links)
Deep learning has achieved remarkable success across various domains, largely relying on assumptions of ideal data conditions (such as balanced distributions, accurate labeling, and sufficient computational resources) that rarely hold in real-world applications. This thesis addresses the significant challenges posed by data non-idealities, including privacy concerns, label noise, non-IID (Independent and Identically Distributed) data, and adversarial threats, which can compromise model performance and security. Additionally, we explore the computational limitations inherent in traditional architectures by introducing in-memory computing techniques to mitigate the memory bottleneck in deep neural network implementations.

We propose five novel contributions to tackle these challenges and enhance the efficiency and robustness of deep learning models. First, we introduce a gradient-free machine unlearning algorithm to ensure data privacy by effectively forgetting specific classes without retraining. Second, we propose a corrective machine unlearning technique, SAP, that improves robustness against label noise using Scaled Activation Projections. Third, we present the Neighborhood Gradient Mean (NGM) method, a decentralized learning approach that optimizes performance on non-IID data with minimal computational overhead. Fourth, we develop TREND, an ensemble design strategy that leverages transferability metrics to enhance adversarial robustness. Finally, we explore an in-memory computing solution, IMAC, that enables energy-efficient and low-latency multiplication and accumulation operations directly within 6T SRAM arrays.

These contributions collectively advance the state of the art in handling data non-idealities and computational efficiency in deep learning, providing robust, scalable, and privacy-preserving solutions suitable for real-world deployment across diverse environments.
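The projection-based unlearning idea mentioned above can be pictured with a small sketch. The NumPy snippet below is a hypothetical illustration of removing a forgotten class's dominant activation directions from a linear layer's weights; the layer shapes, the rank `k`, and the use of an SVD are assumptions made for illustration only, not the thesis's actual gradient-free unlearning or SAP algorithm.

```python
import numpy as np

# Hypothetical sketch: suppress a forgotten class by projecting a linear
# layer's weights onto the orthogonal complement of that class's
# activation subspace. Shapes and rank are illustrative assumptions.

rng = np.random.default_rng(0)
d_in, d_out, n_forget = 64, 10, 200

W = rng.normal(size=(d_out, d_in))            # trained layer weights
A_forget = rng.normal(size=(n_forget, d_in))  # activations of the class to forget

# Top-k principal directions of the forgotten class's activations.
k = 5
_, _, Vt = np.linalg.svd(A_forget, full_matrices=False)
U_f = Vt[:k].T                                # (d_in, k) basis of the forget subspace

# Remove the weight components lying in that subspace, so the layer
# no longer responds strongly to the forgotten class's features.
P = np.eye(d_in) - U_f @ U_f.T
W_unlearned = W @ P
```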
2.

Adoption of 2T2C ferroelectric memory cells for logic operation

Ravsher, Taras, Mulaosmanovic, Halid, Breyer, Evelyn T., Havel, Viktor, Mikolajick, Thomas, Slesazeck, Stefan 17 December 2021 (has links)
A 2T2C ferroelectric memory cell is proposed, consisting of a select transistor, a read transistor, and two ferroelectric capacitors that can be operated either in FeRAM mode or in memristive ferroelectric tunnel junction mode. The two memory devices can be programmed individually. By performing a combined readout operation, the two stored bits of the memory cell can be combined to perform in-memory logic operations. Moreover, additional input logic signals, applied as external readout voltage pulses, can be used to perform logic operations together with the stored logic states of the ferroelectric capacitors. Electrical characterization results of the logic-in-memory (LiM) functionality are presented.
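As a rough behavioral illustration of the combined-readout idea (not a circuit model of the 2T2C cell), the sketch below sums the read signal contributed by two stored bits and compares it against a sense threshold, which yields OR or AND depending on where the threshold sits. The unit charge contributions and threshold values are assumptions for illustration, not measured device parameters.

```python
# Behavioral sketch: combined readout of two stored bits as in-memory logic.
# Each stored '1' is assumed to add one unit of read signal; the sense
# threshold selects which Boolean function is realized.

def combined_readout(bit_a: int, bit_b: int, threshold: float) -> int:
    """Sum the read signal from both stored bits and threshold it."""
    signal = bit_a + bit_b
    return int(signal >= threshold)

# Threshold between 1 and 2 units -> OR of the stored bits;
# threshold at 2 units -> AND of the stored bits.
for a in (0, 1):
    for b in (0, 1):
        print(a, b,
              "OR:", combined_readout(a, b, 1.5),
              "AND:", combined_readout(a, b, 2.0))
```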
