
Fast Computation of Wide Neural Networks

Recent advances in artificial neural networks have demonstrated that deep neural networks can achieve performance comparable to humans on tasks such as image classification, natural language processing, and time series classification. These large-scale networks pose an enormous computational challenge, especially on resource-constrained devices. This work proposes a targeted-rank-based framework for accelerated computation of wide neural networks and investigates the problem of rank selection for tensor ring nets to achieve optimal network compression. When applied to a state-of-the-art wide residual network, namely WideResnet, the framework yielded a significant reduction in computation time: the optimally compressed non-parallel WideResnet is almost 2x faster to compute on a CPU, with only a 5% degradation in accuracy, compared to a non-parallel implementation of the uncompressed WideResnet.
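The abstract describes the approach only at a high level, so the following is a minimal sketch of the tensor ring (TR) representation that such a compression framework builds on. All function names, shapes, and ranks here are illustrative assumptions, not code from the thesis; the sketch only shows how TR cores encode a tensor and how the ranks r_k set the parameter count that rank selection trades off against accuracy.

```python
import numpy as np

def tr_reconstruct(cores):
    """Reconstruct a full tensor from its tensor-ring (TR) cores.

    Core k has shape (r_k, n_k, r_{k+1}), with the ring closing so that
    r_{d+1} == r_1. Entry (i1, ..., id) of the tensor is the trace of the
    matrix product G_1[:, i1, :] @ G_2[:, i2, :] @ ... @ G_d[:, id, :].
    """
    shape = tuple(c.shape[1] for c in cores)
    full = np.zeros(shape)
    for idx in np.ndindex(*shape):
        mat = cores[0][:, idx[0], :]
        for k in range(1, len(cores)):
            mat = mat @ cores[k][:, idx[k], :]
        full[idx] = np.trace(mat)
    return full

def tr_param_count(shape, ranks):
    """Number of parameters in a TR representation with per-mode ranks."""
    d = len(shape)
    return sum(ranks[k] * shape[k] * ranks[(k + 1) % d] for k in range(d))

# Hypothetical example: a 16x16x16x16 weight tensor (65,536 dense entries)
# stored with uniform TR-rank 4 needs 4 cores of 4*16*4 = 256 parameters
# each, i.e. 1,024 parameters in total.
shape, ranks = (16, 16, 16, 16), (4, 4, 4, 4)
cores = [np.random.randn(ranks[k], shape[k], ranks[(k + 1) % len(shape)])
         for k in range(len(shape))]
print(tr_param_count(shape, ranks), "TR parameters vs", np.prod(shape), "dense")
```

Lower ranks shrink the cores, and with them the storage and multiply-accumulate cost of the layer, at the price of approximation error; this is precisely the trade-off that the rank-selection problem in the thesis optimizes.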

DOI 10.25394/pgs.7539974.v1
Identifier oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/7539974
Date 02 January 2019
Creators Vineeth Chigarangappa Rangadhamap (5930585)
Source Sets Purdue University
Detected Language English
Type Text, Thesis
Rights CC BY 4.0
Relation https://figshare.com/articles/Fast_Computation_of_Wide_Neural_Networks/7539974
