abstract: Graphs are one of the key data structures for many real-world computing applications such as machine learning, social networks, and genomics. The main challenges of graph processing include the difficulty of parallelizing the workload, which leads to workload imbalance, poor memory locality, and a very large number of memory accesses. Together, these factors make large-scale graph processing very expensive.
This thesis presents the implementation of a select set of graph kernels on a multi-core architecture, Transmuter. The kernels are Breadth-First Search (BFS), PageRank (PR), and Single-Source Shortest Path (SSSP). Transmuter is a multi-tiled architecture with 4 tiles and 16 general processing elements (GPEs) per tile, supporting a two-level cache hierarchy. All graph processing kernels were implemented on Transmuter using the gem5 architectural simulator.
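For orientation, the simulated configuration can be summarized in a few constants. This is a minimal C sketch based only on the figures quoted above; the identifier names are hypothetical, not Transmuter's actual naming.

    /* Minimal sketch of the simulated Transmuter configuration
     * (figures from this abstract; identifiers are illustrative only). */
    #define NUM_TILES       4    /* tiles in the Transmuter fabric        */
    #define GPES_PER_TILE  16    /* general processing elements per tile  */
    #define NUM_WORKERS    (NUM_TILES * GPES_PER_TILE)   /* 64 GPEs total */
    /* Two-level (L1/L2) cache hierarchy, as described above. */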
The key pre-processing steps for improving performance are static partitioning by destination and balancing the workload among the processing cores. Partitioned graphs show an almost 3x performance improvement over unpartitioned graphs. The choice of data structure also plays an important role in the amount of storage space consumed and the amount of synchronization required in a parallel implementation; here, the compressed sparse column (CSC) format was used, as sketched below. BFS and SSSP are frontier-based algorithms, where a frontier is the subset of vertices that are active during the current iteration; they were implemented using a Boolean frontier array. PR, in contrast, is an iterative algorithm in which all vertices are active at all times.
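To make these choices concrete, the following C sketch shows the CSC layout and one pull-style BFS step over a destination-partitioned vertex range, using a Boolean frontier array. It is an illustrative reconstruction under stated assumptions, not the thesis's actual Transmuter kernel, and all identifiers are hypothetical.

    #include <stdbool.h>

    /* Compressed sparse column (CSC): for destination vertex v, entries
     * col_ptr[v] .. col_ptr[v+1]-1 of row_idx list the sources of v's
     * incoming edges. Static partitioning by destination gives each core
     * a contiguous column range, so each vertex is written by one core. */
    typedef struct {
        int  num_vertices;
        int *col_ptr;   /* length num_vertices + 1 */
        int *row_idx;   /* source endpoint of each incoming edge */
    } CscGraph;

    /* One pull-style BFS iteration over this core's destination range
     * [v_begin, v_end). level[v] < 0 marks an unvisited vertex. Returns
     * true if any vertex joined the next frontier. (Sketch only.) */
    bool bfs_step(const CscGraph *g, const bool *frontier,
                  bool *next_frontier, int *level, int depth,
                  int v_begin, int v_end)
    {
        bool active = false;
        for (int v = v_begin; v < v_end; v++) {
            if (level[v] >= 0)
                continue;                        /* already visited */
            for (int e = g->col_ptr[v]; e < g->col_ptr[v + 1]; e++) {
                if (frontier[g->row_idx[e]]) {   /* an in-neighbor is active */
                    level[v] = depth;
                    next_frontier[v] = true;
                    active = true;
                    break;
                }
            }
        }
        return active;
    }

Because every vertex falls in exactly one core's destination range, the writes to level and next_frontier never conflict across cores, which is one way the CSC-plus-partitioning combination reduces synchronization. SSSP follows the same frontier pattern, with distance relaxations in place of level assignments.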
The performance of the different Transmuter implementations at the 14nm node was evaluated using metrics such as power consumption (Watts), Giga Operations Per Second (GOPS), GOPS/Watt, and L1/L2 cache misses. For graphs with 10k nodes and 10k edges, the GOPS/Watt figures are 33 for BFS, 477 for PR, and 10 for SSSP.
Frontier-based algorithms have much lower GOPS/Watt than iterative algorithms such as PR, because all nodes in PageRank are active at all points in time, as the sketch below illustrates. For all three kernel implementations, the L1 cache miss rates are quite low, while the L2 cache hit rates are high.
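By contrast with the frontier-based kernels, a minimal pull-style PageRank step over the same CSC layout (reusing the hypothetical CscGraph type from the BFS sketch above) touches every vertex on every pass. The damping factor of 0.85 is a conventional assumption, not a value reported by the thesis.

    /* One PageRank iteration over CSC (sketch). There is no frontier:
     * every vertex pulls from all of its in-neighbors on every pass.
     * Assumes out_degree[u] > 0 for all u (no dangling vertices). */
    void pagerank_step(const CscGraph *g, const int *out_degree,
                       const double *rank, double *new_rank)
    {
        const double d = 0.85;                   /* assumed damping factor */
        int n = g->num_vertices;
        for (int v = 0; v < n; v++) {
            double sum = 0.0;
            for (int e = g->col_ptr[v]; e < g->col_ptr[v + 1]; e++) {
                int u = g->row_idx[e];
                sum += rank[u] / out_degree[u];  /* share of u's rank */
            }
            new_rank[v] = (1.0 - d) / n + d * sum;
        }
    }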
Identifier | oai:union.ndltd.org:asu.edu/item:54935 |
Date | January 2019 |
Contributors | Renganathan, Srinidhi (Author), Chakrabarti, Chaitali (Advisor), Shrivastava, Aviral (Committee member), Mudge, Trevor (Committee member), Arizona State University (Publisher) |
Source Sets | Arizona State University |
Language | English |
Detected Language | English |
Type | Masters Thesis |
Format | 51 pages |
Rights | http://rightsstatements.org/vocab/InC/1.0/ |