TOWARDS EFFICIENT AND ROBUST DEEP LEARNING: HANDLING DATA NON-IDEALITY AND LEVERAGING IN-MEMORY COMPUTING

Sangamesh D Kodge (19958580), 05 November 2024
Deep learning has achieved remarkable success across various domains, largely relying on assumptions of ideal data conditions (such as balanced distributions, accurate labeling, and sufficient computational resources) that rarely hold in real-world applications. This thesis addresses the significant challenges posed by data non-idealities, including privacy concerns, label noise, non-IID (not Independent and Identically Distributed) data, and adversarial threats, which can compromise model performance and security. Additionally, we explore the computational limitations inherent in traditional architectures by introducing in-memory computing techniques to mitigate the memory bottleneck in deep neural network implementations.

We propose five novel contributions to tackle these challenges and enhance the efficiency and robustness of deep learning models. First, we introduce a gradient-free machine unlearning algorithm to ensure data privacy by effectively forgetting specific classes without retraining. Second, we propose a corrective machine unlearning technique, SAP, that improves robustness against label noise using Scaled Activation Projections. Third, we present the Neighborhood Gradient Mean (NGM) method, a decentralized learning approach that optimizes performance on non-IID data with minimal computational overhead. Fourth, we develop TREND, an ensemble design strategy that leverages transferability metrics to enhance adversarial robustness. Finally, we explore an in-memory computing solution, IMAC, that enables energy-efficient and low-latency multiplication and accumulation operations directly within 6T SRAM arrays.

These contributions collectively advance the state of the art in handling data non-idealities and computational efficiency in deep learning, providing robust, scalable, and privacy-preserving solutions suitable for real-world deployment across diverse environments.
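Of the five contributions, the Neighborhood Gradient Mean (NGM) method is the one whose core update is suggested by its name: each node in a decentralized topology mixes its local gradient with those of its graph neighbors before taking a step. The abstract gives no algorithmic detail, so the Python sketch below is purely illustrative and should not be read as the dissertation's actual algorithm; the least-squares objective, the uniform neighbor averaging, and all names (local_gradient, ngm_style_step, ring) are assumptions introduced here for illustration.

```python
import numpy as np

def local_gradient(w, shard):
    # Placeholder objective: least-squares loss on this node's local shard.
    # (Assumption for illustration; the thesis does not specify the model.)
    X, y = shard
    return X.T @ (X @ w - y) / len(y)

def ngm_style_step(weights, shards, neighbors, lr=0.05):
    # Each node first computes a gradient on its own (possibly non-IID) data...
    grads = [local_gradient(w, s) for w, s in zip(weights, shards)]
    updated = []
    for i, w in enumerate(weights):
        group = [i] + list(neighbors[i])
        # ...then averages it with its neighbors' gradients before a local
        # SGD update, which damps the drift caused by skewed local shards.
        mixed = np.mean([grads[j] for j in group], axis=0)
        updated.append(w - lr * mixed)
    return updated

# Usage: three nodes on a ring, each holding a differently shifted shard
# so the data is deliberately non-IID across nodes.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 3.0])
shards = []
for k in range(3):
    X = rng.normal(loc=k, size=(64, 4))          # per-node input shift
    y = X @ true_w + 0.1 * rng.normal(size=64)
    shards.append((X, y))
weights = [np.zeros(4) for _ in range(3)]
ring = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(200):
    weights = ngm_style_step(weights, shards, ring)
```

Averaging gradients rather than model weights keeps each node's parameters local while still sharing learning signal, which is one plausible reading of how such a method keeps communication and computational overhead low.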
