

Poster

ML$^2$-GCL: Manifold Learning Inspired Lightweight Graph Contrastive Learning

Jianqing Liang · Zhiqiang Li · Xinkai Wei · Yuan Liu · Zhiqiang Wang

East Exhibition Hall A-B #E-2104
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract: Graph contrastive learning has attracted great interest in recent years as a dominant and promising self-supervised representation learning approach. While existing works follow the basic principle of pulling positive pairs closer and pushing negative pairs apart, they still suffer from several critical problems: the semantic disturbance introduced by augmentation strategies, the failure of GCNs to capture long-range dependence, and the rigidity and inefficiency of node sampling techniques. To address these issues, we propose Manifold Learning Inspired Lightweight Graph Contrastive Learning (ML$^2$-GCL), which inherits the merits of both manifold learning and GCNs. ML$^2$-GCL avoids the potential risk of semantic disturbance by using only a single view. It recovers global nonlinear structure from locally linear fits, which compensates for the defects of GCNs. Its most notable advantage is its lightweight design, owing to a closed-form solution for positive-pair weights and the removal of pairwise distance calculations. Theoretical analysis proves the existence of the optimal closed-form solution. Extensive empirical results on various benchmarks and evaluation protocols demonstrate the effectiveness and lightweight nature of ML$^2$-GCL. We release the code at https://github.com/a-hou/ML2-GCL.
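The "locally linear fits" and "closed-form solution" the abstract refers to echo classical manifold learning. As an illustrative sketch only (not the paper's exact formulation; see the linked repository for that), the closed-form reconstruction weights of locally linear embedding show how such weights can be obtained without iterative optimization:

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Closed-form locally linear reconstruction weights (Roweis & Saul, 2000).

    Solves  min ||x - sum_j w_j n_j||^2  subject to  sum_j w_j = 1,
    whose solution is w ∝ C^{-1} 1 with local Gram matrix C.
    """
    diffs = x - neighbors                             # (k, d) neighbor offsets
    C = diffs @ diffs.T                               # (k, k) local Gram matrix
    C += reg * np.trace(C) * np.eye(len(neighbors))   # regularize: C may be singular
    w = np.linalg.solve(C, np.ones(len(neighbors)))   # solve C w = 1
    return w / w.sum()                                # enforce sum-to-one constraint

# Toy example: a 2-D point reconstructed from three neighbors.
x = np.array([0.5, 0.5])
nbrs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w = lle_weights(x, nbrs)
```

The weights are found by a single linear solve per node, which is the kind of closed-form step that makes such methods lightweight compared with sampling- or distance-based contrastive objectives.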

Lay Summary:

This paper proposes Manifold Learning Inspired Lightweight Graph Contrastive Learning, a method that is both effective and lightweight.
