

Spotlight Poster

Beyond Self-Repellent Kernels: History-Driven Target Towards Efficient Nonlinear MCMC on General Graphs

Jie Hu · Yi-Ting Ma · Do-Young Eun

East Exhibition Hall A-B #E-1304
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT
 
Oral presentation: Oral 5C Probabilistic Models
Thu 17 Jul 10 a.m. PDT — 11 a.m. PDT

Abstract: We propose a *history-driven target (HDT)* framework in Markov Chain Monte Carlo (MCMC) to improve any random walk algorithm on discrete state spaces, such as general undirected graphs, for efficient sampling from a target distribution $\boldsymbol{\mu}$. With broad applications in network science and distributed optimization, recent innovations like the self-repellent random walk (SRRW) achieve near-zero variance by prioritizing under-sampled states through transition-kernel modifications based on past visit frequencies. However, SRRW's reliance on explicitly computing transition probabilities for all neighbors at each step introduces substantial computational overhead, while its strict dependence on time-reversible Markov chains excludes advanced non-reversible MCMC methods. To overcome these limitations, instead of directly modifying the transition kernel, HDT introduces a history-dependent target distribution $\boldsymbol{\pi}[\mathbf{x}]$ to replace the original target $\boldsymbol{\mu}$ in any graph sampler, where $\mathbf{x}$ is the empirical measure of past visits. This design remains lightweight to implement, requiring only local information about the current and proposed states, and is compatible with both reversible and non-reversible MCMC samplers, while still yielding unbiased samples from the target distribution $\boldsymbol{\mu}$ with near-zero-variance performance. Extensive experiments in graph sampling demonstrate consistent performance gains, and a memory-efficient Least Recently Used (LRU) cache ensures scalability to large general graphs.
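The idea of replacing the target $\boldsymbol{\mu}$ with a history-dependent target $\boldsymbol{\pi}[\mathbf{x}]$ can be sketched as follows. This is a minimal illustration, not the paper's implementation: the specific polynomial form $\pi[\mathbf{x}]_i \propto \mu_i (x_i/\mu_i)^{-\alpha}$ is borrowed from the related SRRW literature as an assumption, and all function names are hypothetical. Note that each step uses only the visit counts of the current and proposed states, which is the locality property the abstract highlights.

```python
import random
from collections import defaultdict

def hdt_metropolis_step(graph, mu, visits, total, current, alpha=2.0):
    """One Metropolis-Hastings step whose target is reweighted by visit history.

    Hypothetical target form: pi[x]_i ~ mu_i * (x_i / mu_i)^(-alpha), where
    x_i is the empirical visit frequency of state i, so under-sampled states
    receive a boosted weight. Only the current and proposed states are read.
    """
    proposal = random.choice(graph[current])        # uniform neighbor proposal

    def weight(i):
        x_i = max(visits[i], 1) / total             # empirical visit frequency
        return mu[i] * (x_i / mu[i]) ** (-alpha)

    # Metropolis acceptance with a degree correction for the uniform proposal
    accept = min(1.0, (weight(proposal) * len(graph[current]))
                      / (weight(current) * len(graph[proposal])))
    if random.random() < accept:
        current = proposal
    visits[current] += 1                            # record the new sample
    return current

def run_hdt_sampler(graph, mu, start, n_steps, alpha=2.0, seed=0):
    """Run the sampler and return empirical visit frequencies per state."""
    random.seed(seed)
    visits = defaultdict(int)
    visits[start] = 1
    total, current = 1, start
    for _ in range(n_steps):
        current = hdt_metropolis_step(graph, mu, visits, total, current, alpha)
        total += 1
    return {i: visits[i] / total for i in graph}
```

On a small cycle graph with a uniform target, the history-driven repulsion drives the empirical visit frequencies close to $\boldsymbol{\mu}$ much faster than an unmodified walk would.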

Lay Summary:

Efficiently exploring and sampling data from complex networks using random walks presents ongoing challenges. Methods like the Self-Repellent Random Walk (SRRW) aim to prevent over-exploration of already-visited areas but often come with significant computational costs. This research introduces the History-Driven Target (HDT) framework, a novel approach that enhances sampling efficiency while reducing computational demands. Instead of modifying the walker's movement rules, HDT dynamically adjusts the target distribution of any compatible MCMC sampler based on the sample history, effectively guiding exploration towards under-sampled regions. This computationally lightweight design uses only local information, maintaining the same per-step cost as the base sampler. HDT is broadly compatible with advanced algorithms, including both reversible and non-reversible MCMC methods, and incorporates a practical "Least Recently Used" (LRU) caching strategy for memory-constrained settings with very large graphs.
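The LRU caching strategy can be sketched as a bounded-memory visit counter: counts are kept only for the most recently touched states, and the least recently used entry is evicted when the budget is exceeded. This is an illustrative sketch, not the paper's exact policy; the class name and the choice to treat evicted states as having count zero are assumptions.

```python
from collections import OrderedDict

class LRUVisitCounter:
    """Bounded-memory visit counter for very large graphs.

    Keeps counts only for the `capacity` most recently touched states,
    evicting the least recently used one when full. Evicted states fall
    back to a count of 0 (an illustrative choice; the paper's exact
    eviction and fallback policy may differ).
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = OrderedDict()   # ordered oldest -> most recent

    def increment(self, state):
        self.counts[state] = self.counts.get(state, 0) + 1
        self.counts.move_to_end(state)            # mark as most recent
        if len(self.counts) > self.capacity:
            self.counts.popitem(last=False)       # drop least recently used

    def get(self, state):
        if state in self.counts:
            self.counts.move_to_end(state)        # reading refreshes recency
        return self.counts.get(state, 0)
```

A sampler would call `increment` on each visited state and `get` when evaluating the history-dependent target, so memory stays bounded by `capacity` regardless of graph size.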
