We relate properties of attributed random graph models to the performance of graph neural network (GNN) architectures, identifying regimes where GNNs outperform both feedforward neural networks and non-attributed graph clustering methods. We compare GNN performance on our synthetic benchmark to performance on popular real-world datasets. We then analyze the theoretical foundations of weak recovery for popular one- and two-layer GNN architectures, obtaining an explicit formula for the performance of a one-layer GNN along with insights that suggest how to approach the two-layer case. Finally, we tighten the bound in a notable result on the GNN size generalization problem by 1.
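The abstract does not specify the attributed random graph model, so as a minimal sketch, assume a two-community contextual stochastic block model: community labels drive both edge probabilities and Gaussian node features. The snippet below (all parameter names such as p_in, p_out, and mu are hypothetical) generates such a graph and applies a single graph-convolution step of the form D^{-1/2} (A + I) D^{-1/2} X W, the kind of one-layer GNN whose weak-recovery performance the thesis analyzes; the weights here are untrained and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical two-community contextual SBM parameters ---
n, d = 200, 8             # nodes, feature dimension
p_in, p_out = 0.10, 0.02  # within- / between-community edge probabilities
mu = 1.0                  # feature signal strength

# Community labels: +1 / -1, split evenly
y = np.repeat([1, -1], n // 2)

# Adjacency: edge probability depends on whether labels agree
probs = np.where(np.equal.outer(y, y), p_in, p_out)
A = (rng.random((n, n)) < probs).astype(float)
A = np.triu(A, 1)
A = A + A.T               # symmetric, no self-loops

# Node features: Gaussian noise with a community-dependent mean shift
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
X = (mu / np.sqrt(n)) * np.outer(y, u) + rng.standard_normal((n, d))

# One-layer GNN: add self-loops, symmetric normalization, linear map, sign readout
A_hat = A + np.eye(n)
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
W = rng.standard_normal((d, 1))   # untrained weights, illustration only
H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
pred = np.sign(H).ravel()

# Weak recovery means accuracy strictly above 1/2 (up to a global label flip)
acc = max(np.mean(pred == y), np.mean(pred == -y))
print(f"overlap with planted labels: {acc:.3f}")
```

With trained (or analytically chosen) weights W in place of the random ones above, the overlap of this one-layer architecture with the planted labels is the quantity for which the thesis reports an explicit formula.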
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-10938
Date | 18 April 2023
Creators | Carson, Brigham Stone
Publisher | BYU ScholarsArchive
Source Sets | Brigham Young University
Detected Language | English
Type | text
Format | application/pdf
Source | Theses and Dissertations
Rights | https://lib.byu.edu/about/copyright/