About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment

Akash Gaikwad, 17 January 2019
In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment is complicated by limited resources such as memory, computational power, and energy. Recent research in deep learning focuses on reducing the model size of the Convolution Neural Network (CNN) through compression techniques such as architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the most promising techniques for solving these problems. This thesis proposes three methods to prune a convolution neural network (SqueezeNet) and decrease its model size without introducing network sparsity in the pruned model and without a significant drop in accuracy:

1. Pruning based on the Taylor expansion of the change in cost function, ΔC.
2. Pruning based on the L2 normalization of activation maps.
3. Pruning based on a combination of methods 1 and 2.

The proposed methods use these ranking criteria to rank the convolution kernels and prune the lower-ranked filters; afterwards, the SqueezeNet model is fine-tuned by backpropagation. Transfer learning is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model by 72% without a significant drop in accuracy (the optimal pruning-efficiency result). Results also show that pruning based on a combination of the Taylor expansion of the cost function and the L2 normalization of activation maps achieves better pruning efficiency than either individual criterion, and that most of the pruned kernels come from mid- and high-level layers. The pruned model was deployed on BlueBox 2.0 using RTMaps software and its performance was evaluated.
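As an editorial illustration of the ranking criteria described in this abstract, here is a minimal PyTorch sketch, not the thesis implementation: it scores the filters of a single toy convolution layer by a first-order Taylor estimate of the change in cost and by the L2 norm of their activation maps, then combines the two normalized scores. The layer sizes, surrogate cost, and pruning ratio are all assumptions made for the example.

```python
# Minimal sketch (not the thesis code): rank the filters of one convolution layer
# by (1) a first-order Taylor estimate of the change in cost |a * dC/da| and
# (2) the L2 norm of the activation maps, then combine the normalized scores.
import torch
import torch.nn as nn

torch.manual_seed(0)

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # toy layer with 16 filters
x = torch.randn(8, 3, 32, 32)                        # a small batch of inputs

activations = conv(x)
activations.retain_grad()                            # keep the gradient of the feature maps

# Surrogate cost; in the thesis this would be the classification loss.
cost = activations.pow(2).mean()
cost.backward()

# Criterion 1: Taylor expansion of the change in cost, averaged per filter.
taylor = (activations.detach() * activations.grad).abs().mean(dim=(0, 2, 3))

# Criterion 2: L2 norm of each filter's activation maps.
l2 = activations.detach().pow(2).sum(dim=(0, 2, 3)).sqrt()

# Criterion 3: combine the two after rescaling each score to [0, 1].
norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)
combined = norm(taylor) + norm(l2)

# The lowest-ranked filters are candidates for pruning before fine-tuning.
prune_ratio = 0.25
k = int(prune_ratio * combined.numel())
prune_idx = torch.argsort(combined)[:k]
print("filters to prune:", prune_idx.tolist())
```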
2

Identification of thermal building properties using gray box and deep learning methods

Baasch, Gaby, 25 January 2021
Enterprising technologies and policies that focus on energy reduction in buildings are paramount to achieving global carbon-emissions targets. Energy retrofits, building stock modelling, heating, ventilation, and air conditioning (HVAC) upgrades, and demand-side management all present high-leverage opportunities in this regard. Advances in computing, data science, and machine learning can be leveraged to enhance these methods and thus to expedite energy reduction in buildings, but challenges such as lack of data, limited model generalizability and reliability, and unreproducible studies have restricted industry adoption. In this thesis, rigorous and reproducible studies are designed to evaluate the benefits and limitations of state-of-the-art machine learning and statistical techniques for high-impact applications, with an emphasis on addressing the challenges listed above. The scope of this work includes calibration of physics-based building models and supervised deep learning, both of which are used to estimate building properties from real and synthetic data.

• Original grey-box methods are developed to characterize physical thermal properties (RC and RK) from real-world measurement data.
• The novel application of supervised deep learning to thermal property estimation and HVAC system identification is shown to achieve state-of-the-art performance (root mean squared error of 0.089 and 87% validation accuracy, respectively).
• A rigorous empirical review is conducted to assess which types of grey- and black-box models are most suitable for practical application. The scope of the review is wider than that of previous studies, and the conclusions suggest a re-framing of research priorities for future work.
• Modern interpretability techniques are used to provide unique insight into the learning behaviour of the black-box methods.

Overall, this body of work provides a critical appraisal of new and existing data-driven approaches for thermal property estimation in buildings. It provides valuable and novel insight into barriers to widespread adoption of these techniques and suggests pathways forward. Performance benchmarks, open-source model code, and a parametrically generated synthetic dataset are provided to support further research and to encourage industry adoption of the approaches. This lays the necessary groundwork for the accelerated adoption of data-driven models for thermal property identification in buildings.
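To make the grey-box idea concrete, here is a minimal, hypothetical sketch (not the models developed in the thesis): it calibrates the R and C parameters of a first-order 1R1C thermal network against synthetic indoor-temperature measurements using SciPy's least_squares. The time step, heat-gain profile, noise level, and parameter bounds are all assumed for illustration.

```python
# Minimal grey-box sketch (illustrative, not the thesis models): calibrate the
# R and C parameters of a 1R1C building model against synthetic measurements.
#   C * dT_in/dt = (T_out - T_in) / R + Q
import numpy as np
from scipy.optimize import least_squares

dt = 3600.0                                           # 1 h time step [s]
hours = np.arange(24 * 7)                             # one synthetic week
t_out = 5.0 + 5.0 * np.sin(2 * np.pi * hours / 24)    # outdoor temperature [degC]
q_gain = np.where(hours % 24 < 8, 2000.0, 500.0)      # heating/internal gains [W]

def simulate(params, t0=20.0):
    """Forward-Euler simulation of indoor temperature for given (R, C)."""
    R, C = params
    t_in = np.empty_like(t_out)
    t_in[0] = t0
    for k in range(1, len(t_out)):
        dT = ((t_out[k - 1] - t_in[k - 1]) / R + q_gain[k - 1]) * dt / C
        t_in[k] = t_in[k - 1] + dT
    return t_in

# Synthetic "measurements" generated with known parameters plus sensor noise.
true_R, true_C = 0.005, 2.0e7                         # [K/W], [J/K]
rng = np.random.default_rng(0)
t_meas = simulate((true_R, true_C)) + rng.normal(0, 0.1, hours.size)

# Grey-box calibration: least squares on the simulation residuals.
fit = least_squares(lambda p: simulate(p) - t_meas,
                    x0=[0.01, 1.5e7], bounds=([1e-3, 1e7], [1.0, 1e9]))
print("estimated R, C:", fit.x)
```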
3

Towards gradient faithfulness and beyond

Buono, Vincenzo; Åkesson, Isak, January 2023
The riveting interplay of industrialization, informalization, and exponential technological growth in recent years has shifted attention from classical machine learning techniques to more sophisticated deep learning approaches; yet their intrinsic black-box nature has impeded widespread adoption in transparency-critical operations. In this rapidly evolving landscape, where the symbiotic relationship between research and practical applications has never been more interwoven, the contribution of this paper is twofold: advancing the gradient faithfulness of CAM methods and exploring new frontiers beyond it. In the first part, we theorize three novel gradient-based CAM formulations, aimed at replacing and superseding traditional Grad-CAM-based methods by addressing the intricate and persistent vanishing- and saturating-gradient problems. Our work thus introduces enhancements to Grad-CAM that reshape the conventional gradient computation by incorporating a customized and adapted technique inspired by the well-established Expected Gradients difference-from-reference approach. Because our proposed techniques (Expected Grad-CAM, Expected Grad-CAM++, and Guided Expected Grad-CAM) operate directly on the gradient computation, rather than on the recombination of the weighting factors, they are designed as a direct and seamless replacement for Grad-CAM and any later work built upon it. In the second part, we build on our prior proposition and devise a novel CAM method that produces both high-resolution and class-discriminative explanations without fusing other methods, while addressing the issues of both gradient and CAM methods altogether. Our last and most advanced proposition, Hyper Expected Grad-CAM, challenges the current state and formulation of visual explanation and faithfulness and produces a new type of hybrid saliency that satisfies the notions of natural encoding and perceived resolution. By rethinking faithfulness and resolution, it is possible to generate saliencies that are more detailed, better localized, and less noisy, and, most importantly, composed only of concepts encoded by the model's layer-wise understanding. Both contributions have been quantitatively and qualitatively compared and assessed in a 5-to-10-times-larger evaluation study on the ILSVRC2012 dataset against nine of the most recent and best-performing CAM techniques across six metrics. Expected Grad-CAM outperformed not only the original formulation but also more advanced methods, resulting in the second-best explainer with an Ins-Del score of 0.56. Hyper Expected Grad-CAM provided remarkable results across every quantitative metric, yielding a 0.15 increase in insertion compared to the highest-scoring explainer, PolyCAM, for a total Ins-Del score of 0.72.
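For readers unfamiliar with the CAM family, the sketch below shows a plain Grad-CAM computation in PyTorch, with the channel-weighting gradients averaged over interpolations toward a reference input to hint at the Expected-Gradients idea; it is a simplified, assumed illustration and not the Expected Grad-CAM or Hyper Expected Grad-CAM formulations proposed in the thesis. The model, target layer, class index, and baseline are placeholders.

```python
# Simplified Grad-CAM sketch (PyTorch). The baseline-interpolation loop only
# hints at an Expected-Gradients-style gradient average; it is NOT the
# Expected Grad-CAM method proposed in the thesis.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()          # untrained weights, just for shape checks
target_layer = model.layer4                    # last convolutional block

feats = {}
def hook(_, __, output):                       # capture the layer's activation maps
    feats["a"] = output
target_layer.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)                # stand-in for a preprocessed image
baseline = torch.zeros_like(x)                 # reference input (e.g. a black image)
class_idx, n_steps = 243, 8

grads = []
for alpha in torch.linspace(0.1, 1.0, n_steps):
    xi = baseline + alpha * (x - baseline)     # interpolate toward the input
    score = model(xi)[0, class_idx]
    grads.append(torch.autograd.grad(score, feats["a"])[0])
grad = torch.stack(grads).mean(0)              # averaged gradients w.r.t. feature maps

weights = grad.mean(dim=(2, 3), keepdim=True)  # GAP of gradients -> channel weights
with torch.no_grad():
    model(x)                                   # activations of the actual input
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
print(cam.shape)                               # torch.Size([1, 1, 224, 224])
```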
4

NaV1.5 Modulation: From Ionic Channels to Cardiac Conduction and Substrate Heterogeneity

Raad, Nour, 16 January 2014
No description available.
