In this thesis, we study the notion of graph summarization,
which is the fundamental task of finding a compact representation of the original graph, called the summary.
Graph summarization can be used to reduce the memory footprint of the input graph, improve visualization, anonymize user identities, and answer queries.
We consider two frameworks of graph summarization in this thesis: the utility-based framework and the correction set-based framework.
In the utility-based framework, the input graph is summarized as much as possible while a user-specified utility threshold is not violated.
In the correction set-based framework, a set of correction edges is produced along with the summary graph.
In this thesis we propose two algorithms for the utility-based framework and one for the correction set-based framework; all three are for static graphs (i.e., graphs that do not change over time).
Then, we propose two more utility-based algorithms for fully dynamic graphs (i.e., graphs with edge insertions and deletions).
Algorithms for graph summarization can be lossless (summarizing the input graph without losing any information) or lossy (sacrificing some information about the input graph in order to compress it further).
Some of our algorithms are lossless and some are lossy, but with controlled utility loss.
Our first utility-driven graph summarization algorithm, G-SCIS, is based on a clique and independent set decomposition and produces optimal compression with zero loss of utility. The compression it provides is significantly better than the state of the art in lossless graph summarization, while its runtime is two orders of magnitude lower.
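To make the clique/independent-set encoding concrete, here is a minimal Python sketch of how such a lossless summary can be expanded back to the original edge set; the `Supernode` class and `expand` function are illustrative assumptions of this sketch, not the actual G-SCIS implementation.

```python
# Illustrative sketch (not the thesis's G-SCIS code): a lossless summary that
# encodes a clique as a supernode flagged with a self-loop and an independent
# set as a supernode without one. All names here are hypothetical.

from itertools import combinations

class Supernode:
    def __init__(self, members, is_clique):
        self.members = list(members)   # original nodes collapsed into this supernode
        self.is_clique = is_clique     # True: members form a clique; False: an independent set

def expand(supernodes, superedges):
    """Recover the original edge set from the summary, with no information loss."""
    edges = set()
    for sn in supernodes:
        if sn.is_clique:               # a clique supernode contributes all internal pairs
            edges.update(combinations(sorted(sn.members), 2))
    for a, b in superedges:            # a superedge (A, B) means every member of A
        for u in a.members:            # is adjacent to every member of B
            for v in b.members:
                edges.add(tuple(sorted((u, v))))
    return edges

# Example: clique {1, 2, 3} and independent set {4, 5}, fully connected to each other.
C = Supernode({1, 2, 3}, is_clique=True)
I = Supernode({4, 5}, is_clique=False)
print(sorted(expand([C, I], [(C, I)])))
```

Because the decomposition records the node memberships exactly, expansion reproduces every original edge, which is what makes this kind of summary usable as-is for downstream queries.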
Our second algorithm is T-BUDS, a highly scalable, utility-driven algorithm for fully controlled lossy summarization.
It achieves high scalability by combining a Maximum Spanning Tree-based memory reduction with a novel binary search procedure. T-BUDS drastically outperforms the state of the art in terms of quality of summarization and is about two orders of magnitude faster. In contrast to competing methods, we are able to handle web-scale graphs on a single machine without performance degradation as the utility threshold (and hence the size of the summary) decreases. We also show that our graph summaries can be used as-is to answer several important classes of queries, such as triangle enumeration, PageRank, and shortest paths.
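The binary search idea can be sketched as follows: assuming a hypothetical non-decreasing array of cumulative utility losses over a sorted sequence of candidate merges, one can binary search for the largest prefix of merges that respects the utility budget. This is only a sketch of the general principle, not the actual T-BUDS procedure.

```python
# Sketch of binary searching a monotone cumulative-loss sequence for the
# largest number of merges that keeps utility loss within budget.
# `cumulative_loss` is a hypothetical precomputed, non-decreasing array.

import bisect

def max_merges_within(cumulative_loss, utility_budget):
    """Largest number of merges whose accumulated utility loss fits the budget.

    cumulative_loss[i] is the total loss after applying the first i + 1 merges.
    """
    # bisect_right returns the first index whose cumulative loss exceeds the
    # budget; every merge before that index is safe to apply.
    return bisect.bisect_right(cumulative_loss, utility_budget)

# Example: successive merges accumulate loss 0.01, 0.03, 0.07, 0.15, 0.40.
cum = [0.01, 0.03, 0.07, 0.15, 0.40]
print(max_merges_within(cum, utility_budget=0.10))  # -> 3 merges fit
```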
We then propose LDME, a correction set-based graph summarization algorithm that produces compact output representations in a fast and scalable manner. To achieve this, we introduce (1) weighted locality sensitive hashing to drastically reduce the number of comparisons required to find good node merges, (2) an efficient way to compute the best quality merges, producing more compact outputs, and (3) a new sort-based encoding algorithm that is faster and more robust. Moreover, our algorithm provides performance tuning settings that allow trading compression for running time. On high compression settings, LDME achieves compression equal to or better than the state of the art with up to 53x speedup in running time. On high speed settings, LDME achieves up to two orders of magnitude speedup with only slightly lower compression.
We also present two lossless summarization algorithms, Optimal and Scalable, for summarizing fully dynamic graphs.
More concretely, we follow the framework of G-SCIS, which produces summaries that can be used as-is in several graph analytics tasks. Different from G-SCIS, which is a batch algorithm, Optimal and Scalable are fully dynamic and can respond rapidly to each change in the graph.
Not only are Optimal and Scalable able to outperform G-SCIS and other batch algorithms by several orders of magnitude, but they also significantly outperform MoSSo, the state-of-the-art in lossless dynamic graph summarization.
While Optimal always produces an optimal summary, Scalable is able to trade the amount of node reduction for extra scalability.
For reasonable values of the parameter $K$, Scalable is able to outperform Optimal by an order of magnitude in speed, while keeping the rate of node reduction close to that of Optimal.
An interesting fact that we observed experimentally is that even if we were to run a batch algorithm, such as G-SCIS, once for every big batch of changes, it would still be much slower than Scalable. For instance, if 1 million changes occur in a graph, Scalable is two orders of magnitude faster than running G-SCIS just once at the end of the 1 million-edge sequence.
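To illustrate why per-change updates can beat periodic batch recomputation, consider the following toy Python sketch: it maintains groups of nodes with identical neighborhoods (a simplified stand-in for supernodes), and an edge change touches only its two endpoints rather than the whole graph. This is a hypothetical illustration, not the Optimal or Scalable algorithm.

```python
# Toy dynamic summary (hypothetical): group nodes by identical neighborhoods.
# Each edge insertion or deletion regroups only its two endpoints, so the
# work per update is local, unlike a batch algorithm that regroups every node.

from collections import defaultdict

class ToyDynamicSummary:
    def __init__(self):
        self.adj = defaultdict(set)     # node -> its current neighbor set
        self.key = {}                   # node -> key of the group it belongs to
        self.groups = defaultdict(set)  # frozenset(neighborhood) -> member nodes

    def _regroup(self, u):
        # Move u from its old group to the one matching its new neighborhood.
        old = self.key.get(u)
        if old is not None:
            self.groups[old].discard(u)
            if not self.groups[old]:
                del self.groups[old]
        new = frozenset(self.adj[u])
        self.groups[new].add(u)
        self.key[u] = new

    def on_insert(self, u, v):
        self.adj[u].add(v); self.adj[v].add(u)
        self._regroup(u); self._regroup(v)

    def on_delete(self, u, v):
        self.adj[u].discard(v); self.adj[v].discard(u)
        self._regroup(u); self._regroup(v)

s = ToyDynamicSummary()
for u, v in [(1, 3), (2, 3), (1, 4), (2, 4)]:
    s.on_insert(u, v)
print(sorted(sorted(g) for g in s.groups.values()))  # [[1, 2], [3, 4]]
```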