
ACCELERATING SPARSE MACHINE LEARNING INFERENCE

Convolutional neural networks (CNNs) have become important workloads due to their impressive accuracy in tasks such as image classification and recognition. Convolution operations are compute intensive, and this cost increases substantially with newer and better CNN models. However, convolutions exhibit characteristics, such as sparsity, that can be exploited. In this dissertation, we propose three works that capture sparsity for faster performance and reduced energy.
The first work is an accelerator design called SparTen for convolutions with fine-grained, two-sided sparsity (i.e., sparsity in both filters and feature maps). SparTen identifies the efficient inner join as the key primitive for hardware acceleration of sparse convolution. In addition, SparTen proposes load-balancing schemes for higher compute-unit utilization. SparTen performs 4.7x, 1.8x, and 3x better than a dense architecture, a one-sided architecture, and SCNN, the previous state-of-the-art accelerator, respectively.
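To make the inner-join primitive concrete, the following is a minimal software sketch, not SparTen's hardware datapath, of intersecting the nonzero coordinates of a compressed filter vector and a compressed feature-map vector so that only matching nonzero pairs are multiplied and accumulated; the list-of-indices-plus-values representation and the function name are illustrative assumptions.

```python
def sparse_inner_join(filter_idx, filter_val, fmap_idx, fmap_val):
    """Multiply-accumulate only where both operands are nonzero.

    Inputs are compressed sparse vectors: sorted nonzero coordinates plus
    the corresponding values. This layout is an illustrative assumption,
    not SparTen's on-chip format.
    """
    acc = 0.0
    i = j = 0
    # Two-pointer intersection of the sorted coordinate lists:
    # advance whichever pointer lags; multiply only on a coordinate match.
    while i < len(filter_idx) and j < len(fmap_idx):
        if filter_idx[i] == fmap_idx[j]:
            acc += filter_val[i] * fmap_val[j]
            i += 1
            j += 1
        elif filter_idx[i] < fmap_idx[j]:
            i += 1
        else:
            j += 1
    return acc


# Example: only coordinates 2 and 7 are nonzero in both vectors.
print(sparse_inner_join([0, 2, 7], [1.0, 2.0, 3.0],
                        [2, 5, 7], [4.0, 5.0, 6.0]))  # 2*4 + 3*6 = 26.0
```

Because only matching coordinates trigger a multiply, the work performed tracks the overlap of the two nonzero sets rather than the full dense dot-product length, which is why an efficient inner join is the natural primitive for two-sided sparse convolution.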
The second work, BARISTA, scales up SparTen (and SparTen-like proposals) to large-scale implementations with as many compute units as recent dense accelerators (e.g., Google's Tensor Processing Unit) to achieve the full speedups afforded by sparsity. However, at such large scales, buffering, on-chip bandwidth, and compute utilization are highly intertwined: optimizing for one factor strains another and may invalidate optimizations proposed in small-scale implementations. BARISTA proposes novel techniques to balance the three factors in large-scale accelerators. BARISTA performs 5.4x, 2.2x, 1.7x, and 2.5x better than dense, one-sided, naively scaled two-sided, and iso-area two-sided architectures, respectively.
The last work, EUREKA, builds an efficient tensor core that executes dense, structured-sparse, and unstructured-sparse computation without losing efficiency. EUREKA achieves this by proposing novel techniques that improve compute utilization by slightly tweaking operand stationarity. EUREKA achieves speedups of 5x and 2.5x, along with energy reductions of 3.2x and 1.7x, over dense and structured-sparse execution, respectively, while incurring area and power overheads of only 6% and 11.5%, respectively, over Ampere.
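For context on the structured-sparse baseline (Ampere-style 2:4 sparsity, where each group of four weights keeps at most two nonzeros), here is a minimal illustrative sketch, not EUREKA's mechanism, of a structured-sparse dot product that fetches only the activations matching the surviving weights; the packed layout, group size, and names are assumptions.

```python
def structured_2to4_dot(weight_vals, weight_cols, activations, group=4, keep=2):
    """Dot product under 2:4 structured sparsity.

    For every group of `group` weight positions, at most `keep` are nonzero;
    `weight_vals` / `weight_cols` store only those survivors and their column
    index within the group. The packed layout is an illustrative assumption.
    """
    acc = 0.0
    for g in range(len(weight_vals) // keep):   # one group of columns at a time
        base = g * group                        # starting column of this group
        for k in range(keep):
            v = weight_vals[g * keep + k]
            c = weight_cols[g * keep + k]
            acc += v * activations[base + c]    # fetch only the matching activation
    return acc


# Two groups of four columns; each group keeps two nonzero weights.
acts = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(structured_2to4_dot([10.0, 20.0, 30.0, 40.0],  # surviving weight values
                          [1, 3, 0, 2],               # their in-group columns
                          acts))                      # 10*2 + 20*4 + 30*5 + 40*7 = 530.0
```

The fixed per-group nonzero budget keeps the hardware's operand fetch pattern regular; unstructured sparsity removes that constraint, which is what makes sustaining compute utilization harder and motivates EUREKA's adjustments to operand stationarity.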

  1. 10.25394/pgs.21673115.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/21673115
Date: 17 May 2024
Creators: Ashish Gondimalla (14214179)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY-NC-SA 4.0
Relation: https://figshare.com/articles/thesis/ACCELERATING_SPARSE_MACHINE_LEARNING_INFERENCE/21673115
