1 |
Real-time face recognition using one-shot learning: A deep learning and machine learning project. Darborg, Alex. January 2020.
Face recognition is often described as the process of identifying and verifying people in a photograph by their face. Researchers have recently given this field increased attention, continuously improving the underlying models. The objective of this study is to implement a real-time face recognition system using one-shot learning, where "one shot" means learning from one or a few training samples. This paper evaluates different methods of solving this problem. Convolutional neural networks are known to require large datasets to reach acceptable accuracy; this project proposes a method that reduces the number of training instances per identity to one while still achieving accuracy close to 100%, by utilizing transfer learning.
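The standard one-shot recipe this abstract alludes to is to enroll a single reference image per identity as an embedding from a pretrained network and classify probes by nearest neighbor in embedding space. A minimal sketch, assuming a hypothetical embed() function backed by a pretrained face-embedding CNN (the embedding model, distance metric, and threshold below are illustrative assumptions, not the thesis's exact method):

```python
import numpy as np

# Hypothetical embedding function standing in for a pretrained face-embedding
# CNN (e.g., a FaceNet-style network): it maps an aligned face crop to a
# fixed-length vector. The model itself is assumed, not defined here.
def embed(face_image: np.ndarray) -> np.ndarray:
    raise NotImplementedError("plug in a pretrained face-embedding model")

def enroll(gallery: dict) -> dict:
    """One-shot enrollment: a single reference image per identity."""
    return {name: embed(image) for name, image in gallery.items()}

def identify(probe_image, embeddings: dict, threshold: float = 0.8):
    """Return the nearest enrolled identity, or None if nothing is close enough."""
    probe = embed(probe_image)
    best_name, best_dist = None, float("inf")
    for name, reference in embeddings.items():
        dist = float(np.linalg.norm(probe - reference))  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```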
|
2 |
Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment. Gaikwad, Akash. 17 January 2019.
In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment is complicated by limited resources such as memory, computational power, and energy. Recent research in deep learning therefore focuses on reducing the model size of the Convolution Neural Network (CNN) through compression techniques such as architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the most promising techniques for addressing these problems.
This thesis proposes methods to prune the convolution neural network (SqueezeNet) without introducing network sparsity in the pruned model.
Specifically, three pruning methods are proposed, each decreasing the model size of the CNN without a significant drop in accuracy (a sketch of the ranking criteria appears after this list):
1: Pruning based on the first-order Taylor expansion of the change in the cost function, ΔC.
2: Pruning based on the L2 normalization of activation maps.
3: Pruning based on a combination of methods 1 and 2.
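A minimal sketch of how these per-filter ranks might be computed in PyTorch, assuming activations and their gradients have been captured with forward and backward hooks during a backward pass; the exact formulation in the thesis, and in particular the combination rule in combined_rank, may differ:

```python
import torch

def taylor_rank(activation: torch.Tensor, gradient: torch.Tensor) -> torch.Tensor:
    # Method 1: first-order Taylor estimate of |delta C| if a filter's output
    # were removed, |mean(a * dC/da)|, averaged over spatial positions and batch.
    # activation and gradient both have shape (batch, channels, H, W).
    return (activation * gradient).mean(dim=(2, 3)).abs().mean(dim=0)

def l2_rank(activation: torch.Tensor) -> torch.Tensor:
    # Method 2: L2 norm of each channel's activation map, averaged over the batch.
    return activation.pow(2).sum(dim=(2, 3)).sqrt().mean(dim=0)

def combined_rank(activation: torch.Tensor, gradient: torch.Tensor) -> torch.Tensor:
    # Method 3: normalize each criterion per layer before summing, so neither
    # scale dominates (this combination rule is an assumption, not the thesis's).
    t = taylor_rank(activation, gradient)
    l = l2_rank(activation)
    return t / (t.norm() + 1e-8) + l / (l.norm() + 1e-8)
```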
The proposed methods use these ranking criteria to rank the convolution kernels and prune the lower-ranked filters; afterwards, the SqueezeNet model is fine-tuned by backpropagation. Transfer learning is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model size by 72% without a significant drop in accuracy (the optimal pruning-efficiency result). Results also show that pruning based on the combination of the Taylor expansion of the cost function and the L2 normalization of activation maps achieves better pruning efficiency than either criterion alone, and that most of the pruned kernels come from mid- and high-level layers. The pruned model was deployed on BlueBox 2.0 using RTMaps software and its performance was evaluated.
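As a concrete illustration of the transfer-learning step, a minimal PyTorch sketch of adapting an ImageNet-pretrained SqueezeNet to CIFAR-10; this reflects the standard torchvision recipe, not necessarily the thesis's exact training configuration, and the hyperparameters are illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained SqueezeNet. In torchvision, SqueezeNet's
# classifier is a 1x1 convolution, so adapting it to CIFAR-10's 10 classes
# only requires replacing that one layer.
model = models.squeezenet1_0(weights=models.SqueezeNet1_0_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Conv2d(512, 10, kernel_size=1)
model.num_classes = 10

# Illustrative hyperparameters; CIFAR-10 images (32x32) must be upsampled
# to roughly 224x224 to match the pretrained feature extractor.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# ... standard training loop over the CIFAR-10 loader goes here ...
```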
|