11

EfficientNext: EfficientNet for Embedded Systems

Deokar, Abhishek
Indiana University-Purdue University Indianapolis (IUPUI) / Convolutional Neural Networks have come a long way since AlexNet. Each year the limits of the state of the art are pushed to new levels. EfficientNet raised the performance bar, and EfficientNetV2 raised it further. Even so, architectures for mobile applications can still benefit from improved accuracy and a reduced model footprint. The classic Inverted Residual block has been the foundation upon which most mobile networks seek to improve, and the EfficientNet architecture is built from the same block. In this thesis we experiment with Harmonious Bottlenecks in place of the Inverted Residuals and observe a reduction in the number of parameters and an improvement in accuracy. The designed network is then deployed on the NXP i.MX 8M Mini board for image classification. / 2023-10-11
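For context, the Inverted Residual (MBConv) block this abstract refers to expands the channels with a 1x1 convolution, applies a depthwise convolution, and projects back down, adding a skip connection when input and output shapes match. Below is a minimal PyTorch sketch of that baseline block; the channel sizes are illustrative, and the thesis's Harmonious Bottleneck replacement is not reproduced here.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Minimal MBConv-style block: 1x1 expand -> depthwise conv -> 1x1 project."""
    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        hidden = in_ch * expand_ratio
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 pointwise expansion
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution (one filter per channel)
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection back down
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out

x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32, 32)(x).shape)  # torch.Size([1, 32, 56, 56])
```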
12

Implementace neuronové sítě bez operace násobení / Neural Network Implementation without Multiplication

Slouka, Lukáš, January 2018
The subject of this thesis is neural network acceleration with the goal of reducing the number of floating-point multiplications. The theoretical part surveys current trends and methods in the field of neural network acceleration, with a focus on binarization techniques, which allow multiplications to be replaced with logical operations. This theoretical base is put into practice in two ways: first, a GPU implementation of the crucial binary operators in the TensorFlow framework, together with a performance benchmark; second, an application of these operators in a simple image classifier. The results are encouraging: the implemented operators achieve a 2.5x speed-up over the highly optimized cuBLAS operators. The last chapter compares the accuracies achieved by binarized models and their full-precision counterparts across various architectures.
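To illustrate the core idea the abstract describes: when weights and activations are binarized to {-1, +1} and packed as bits, a dot product reduces to an XNOR followed by a popcount, eliminating multiplications entirely. A minimal sketch of the arithmetic in plain Python/NumPy (not the thesis's TensorFlow GPU implementation):

```python
import numpy as np

def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors of length n packed as bit masks.

    A bit of 1 encodes +1, a bit of 0 encodes -1. Matching bits contribute
    +1 and mismatching bits -1, so the dot product is
    2 * popcount(XNOR(a, b)) - n.
    """
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 where bits match
    return 2 * bin(xnor).count("1") - n

def pack(v):
    """Encode a {-1,+1} vector as an integer bit mask (+1 -> 1, -1 -> 0)."""
    return int("".join("1" if x == 1 else "0" for x in v), 2)

# Check against the ordinary multiply-and-add dot product.
rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=16)
b = rng.choice([-1, 1], size=16)
print(binary_dot(pack(a), pack(b), 16))  # same value as...
print(int(a @ b))                        # ...the floating-point dot product
```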
13

Transfer learning between domains: Evaluating the usefulness of transfer learning between object classification and audio classification

Frenger, Tobias; Häggmark, Johan, January 2020
Convolutional neural networks have been successfully applied to both object classification and audio classification. The aim of this thesis is to evaluate how well transfer learning of convolutional neural networks trained in the object classification domain on large datasets (such as CIFAR-10 and ImageNet) carries over to the audio classification domain when only a small dataset is available. Four different convolutional neural networks are tested with three transfer-learning configurations against a configuration without transfer learning, which allows testing how transfer learning and the architectural complexity of the networks affect performance. Two models developed by Google (Inception-V3 and Inception-ResNet-V2) are used; they are implemented through the Keras API and pre-trained on the ImageNet dataset. The thesis also introduces two new architectures developed by its authors, Mini-Inception and Mini-Inception-ResNet, which are inspired by Inception-V3 and Inception-ResNet-V2 but have significantly lower complexity. The audio classification dataset consists of audio from RC boats transformed into mel-spectrogram images. To make transfer learning possible, Mini-Inception and Mini-Inception-ResNet are pre-trained on the CIFAR-10 dataset. The results show that transfer learning does not increase final performance, although in some cases it enables models to reach higher performance in the earlier stages of training.
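As a concrete illustration of the transfer-learning configuration described above, the sketch below loads Inception-V3 with ImageNet weights through the Keras API, freezes the convolutional base, and trains a fresh classifier head on spectrogram images. The input shape, head size, and class count are placeholders, not values from the thesis.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 4  # placeholder; the thesis's RC-boat class count is not given

# Convolutional base pre-trained on ImageNet, without the top classifier.
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False  # freeze the transferred features

# New classifier head trained on the (small) spectrogram dataset.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(spectrogram_train_ds, validation_data=spectrogram_val_ds, ...)
```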
14

Comparative Study of Classification Methods for the Mitigation of Class Imbalance Issues in Medical Imaging Applications

Kueterman, Nathan, 22 June 2020
No description available.
15

EfficientNext: EfficientNet for Embedded Systems

Abhishek Rajendra Deokar (12456477), 12 July 2022
Convolutional Neural Networks have come a long way since AlexNet. Each year the limits of the state of the art are pushed to new levels. EfficientNet raised the performance bar, and EfficientNetV2 raised it further. Even so, architectures for mobile applications can still benefit from improved accuracy and a reduced model footprint. The classic Inverted Residual block has been the foundation upon which most mobile networks seek to improve, and the EfficientNet architecture is built from the same block. In this paper we experiment with Harmonious Bottlenecks in place of the Inverted Residuals and observe a reduction in the number of parameters and an improvement in accuracy. The designed network is then deployed on the NXP i.MX 8M Mini board for image classification.
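The abstract does not state how the network is deployed to the board; a common route for embedded targets such as the i.MX 8M Mini is conversion to TensorFlow Lite with post-training quantization, sketched below under that assumption (the model path is hypothetical).

```python
import tensorflow as tf

# Assume `model` is the trained Keras classifier to be deployed.
model = tf.keras.models.load_model("efficientnext.h5")  # hypothetical path

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Post-training quantization shrinks the model for embedded targets.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("efficientnext.tflite", "wb") as f:
    f.write(tflite_model)
# The .tflite file can then be run on-device with the TFLite interpreter.
```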
