

Poster

Enhancing Spectral GNNs: From Topology and Perturbation Perspectives

Taoyang Qin · Ke-Jia CHEN · Zheng Liu

West Exhibition Hall B2-B3 #W-1108
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Spectral Graph Neural Networks process graph signals using the spectral properties of the normalized graph Laplacian matrix. However, the frequent occurrence of repeated eigenvalues limits the expressiveness of spectral GNNs. To address this, we propose a higher-dimensional sheaf Laplacian matrix, which not only encodes the graph's topological information but also increases the upper bound on the number of distinct eigenvalues. The sheaf Laplacian matrix is derived from carefully designed perturbations of the block form of the normalized graph Laplacian, yielding a perturbed sheaf Laplacian (PSL) matrix with more distinct eigenvalues. We provide a theoretical analysis of the expressiveness of spectral GNNs equipped with the PSL and establish perturbation bounds for the eigenvalues. Extensive experiments on benchmark datasets for node classification demonstrate that incorporating the perturbed sheaf Laplacian enhances the performance of spectral GNNs.
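The core idea, that lifting the normalized Laplacian to a higher-dimensional block form and perturbing it splits repeated eigenvalues, can be illustrated numerically. The sketch below is not the paper's exact PSL construction; it uses a hypothetical Kronecker lift with stalk dimension `d = 2` and a small random symmetric perturbation, purely to show the spectrum becoming richer:

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian I - D^{-1/2} A D^{-1/2}."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt

# Complete graph K4: its normalized Laplacian has only two distinct
# eigenvalues, 0 and 4/3 (the latter with multiplicity 3).
A = np.ones((4, 4)) - np.eye(4)
L = normalized_laplacian(A)

# Hypothetical block lift: replace each scalar entry with a d x d block
# (here simply L ⊗ I_d, which preserves the two distinct eigenvalues).
d = 2
L_block = np.kron(L, np.eye(d))

# Small symmetric perturbation of the block matrix. By Weyl's inequality
# each eigenvalue moves by at most the spectral norm of the perturbation,
# so the spectrum stays close to the original while degeneracies split.
rng = np.random.default_rng(0)
E = rng.normal(scale=1e-3, size=L_block.shape)
L_pert = L_block + (E + E.T) / 2

n_before = len(np.unique(np.round(np.linalg.eigvalsh(L_block), 6)))
n_after = len(np.unique(np.round(np.linalg.eigvalsh(L_pert), 6)))
print(n_before, n_after)
```

Generically, the random symmetric perturbation breaks the eigenvalue multiplicities, so `n_after` exceeds `n_before`, which is the effect the PSL is designed to achieve in a principled, topology-aware way.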

Lay Summary:

Many graph neural networks (GNNs) process graph-structured data (or signals) by leveraging the spectral properties of a special matrix or operator known as the graph Laplacian. When the normalized graph Laplacian has repeated eigenvalues, a GNN's ability to process data in the spectral domain is weakened, limiting the model's expressiveness. Drawing on cellular sheaf theory, we use the sheaf Laplacian, which associates a small vector space with each edge, and apply carefully designed perturbations to its block structure to construct a higher-dimensional perturbed sheaf Laplacian (PSL) with a richer spectrum of eigenvalues. Our theoretical analysis and empirical experiments both demonstrate the effectiveness of PSL-based spectral GNNs.
