1

Achieving More with Less: Learning Generalizable Neural Networks With Less Labeled Data and Computational Overheads

Bu, Jie, 15 March 2023
Recent advancements in deep learning have demonstrated its incredible ability to learn generalizable patterns and relationships automatically from data in a number of mainstream applications. However, the generalization power of deep learning methods largely comes at the cost of working with very large datasets and using highly compute-intensive models. Many applications cannot afford the costs needed to ensure the generalizability of deep learning models. For instance, obtaining labeled data can be costly in scientific applications, and using large models may not be feasible in resource-constrained environments involving portable devices. This dissertation aims to improve efficiency in machine learning by exploring different ways to learn generalizable neural networks that require less labeled data and fewer computational resources. We demonstrate that using physics supervision in scientific problems can reduce the need for labeled data, thereby improving data efficiency without compromising model generalizability. Additionally, we investigate the potential of transfer learning powered by transformers in scientific applications as a promising direction for further improving data efficiency. On the computational efficiency side, we present two efforts to increase the parameter efficiency of neural networks through novel architectures and structured network pruning.

Doctor of Philosophy

Deep learning is a powerful technique that can help us solve complex problems, but it often requires a lot of data and resources. This research aims to make deep learning more efficient so that it can be applied in more situations. We propose ways to make deep learning models require less data and less computing power. For example, we leverage physics rules as additional information when training neural networks, so that they can learn from less labeled data, and we use a technique called transfer learning to draw on knowledge from data that comes from other distributions. Transfer learning may allow us to further reduce the need for labeled data in scientific applications. We also look at ways to make deep learning models use fewer computational resources, by effectively reducing their size through novel architectures or by pruning out redundant structures.
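As a rough illustration of the physics-supervision idea described in this abstract (not code from the dissertation itself), the sketch below trains a small network on only a handful of labeled points while an assumed physics constraint, here the harmonic oscillator ODE u'' + u = 0 chosen purely for illustration, supervises many unlabeled collocation points. The network size, equation, and training settings are all illustrative assumptions.

```python
# Minimal sketch of physics-supervised training: few labeled points plus a
# physics residual enforced on unlabeled collocation points.
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

# Very few labeled samples (u(t) = cos(t) is the assumed ground truth).
t_labeled = torch.tensor([[0.0], [1.0], [2.0]])
u_labeled = torch.cos(t_labeled)

# Many unlabeled collocation points where only the physics is enforced.
t_colloc = torch.linspace(0.0, 2 * torch.pi, 128).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    # Supervised (data) loss on the labeled points.
    loss_data = torch.mean((net(t_labeled) - u_labeled) ** 2)
    # Physics loss: residual of u'' + u = 0 at the collocation points,
    # computed with automatic differentiation.
    u = net(t_colloc)
    du = torch.autograd.grad(u.sum(), t_colloc, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), t_colloc, create_graph=True)[0]
    loss_phys = torch.mean((d2u + u) ** 2)
    loss = loss_data + loss_phys
    loss.backward()
    opt.step()
```

In this setup the physics residual acts as free supervision on the unlabeled points, which is what allows the data term to get by with so few labels.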
2

Evaluation of Pruning Algorithms for Activity Recognition on Embedded Machine Learning

Namazi, Amirhossein, January 2023
With the advancement of neural networks and deep learning, the complexity and size of models have grown rapidly. At the same time, advances in the Internet of Things (IoT) and sensor technology have opened the door to many embedded machine learning applications and projects. In many of these applications, the hardware is constrained in terms of computational and memory resources. The ever-increasing popularity of these applications requires shrinking and compressing neural networks to satisfy these constraints. The frameworks and algorithms governing the compression of a neural network are commonly referred to as pruning algorithms. In this project, several pruning frameworks are applied to different neural network architectures to better understand their effect on model performance as well as model size. Through experimental evaluations and analysis, this thesis provides insights into the benefits and trade-offs of pruning algorithms in terms of size and performance, shedding light on their practicality and suitability for embedded machine learning. The findings contribute to the development of more efficient and optimized neural networks for resource-constrained hardware in real-world IoT applications such as wearable technology.
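As a hedged illustration of the kind of structured pruning such frameworks perform (this is not the thesis's evaluation code), the sketch below uses PyTorch's built-in pruning utilities to zero out half of the output channels of each convolutional layer by L2 norm and then reports the resulting weight sparsity. The model, pruning amount, and assumed input size are illustrative only.

```python
# Minimal sketch of structured channel pruning with torch.nn.utils.prune.
import torch
import torch.nn.utils.prune as prune

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(32 * 8 * 8, 10),  # assumes 8x8 inputs, for illustration
)

# Structured pruning: zero out 50% of output channels (dim=0) by L2 norm.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.ln_structured(module, name="weight", amount=0.5, n=2, dim=0)
        prune.remove(module, "weight")  # make the pruning permanent

# Report how many weights were zeroed out in each convolutional layer.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        sparsity = float((module.weight == 0).sum()) / module.weight.nelement()
        print(f"{name}: {sparsity:.0%} of weights pruned")
```

Whole-channel (structured) pruning of this sort is generally easier to turn into real memory and latency savings on embedded hardware than unstructured, element-wise pruning, since the zeroed channels can be physically removed from the layer.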
