Poster
Regularized Langevin Dynamics for Combinatorial Optimization
Shengyu Feng · Yiming Yang
West Exhibition Hall B2-B3 #W-514
This work proposes a simple yet effective sampling framework for combinatorial optimization (CO). Our method builds on discrete Langevin dynamics (LD), an efficient gradient-guided generative paradigm. However, we observe that directly applying LD often leads to limited exploration. To overcome this limitation, we propose Regularized Langevin Dynamics (RLD), which enforces an expected distance between the sampled and current solutions, effectively helping the sampler escape local minima. We develop two CO solvers on top of RLD, one based on simulated annealing (SA) and the other on a neural network (NN). Empirical results on three classic CO problems demonstrate that both of our methods match or outperform the previous state-of-the-art (SOTA) SA- and NN-based solvers. In particular, our SA algorithm reduces the runtime of the previous SOTA SA method by up to 80%, while achieving equal or superior performance. In summary, RLD offers a promising framework for enhancing both traditional heuristics and NN models for solving CO problems. Our code is available at https://github.com/Shengyu-Feng/RLD4CO.
Existing sampling-based methods for combinatorial optimization typically suffer from getting trapped in local optima. We propose to regularize the expected update magnitude, i.e., the distance between the sampled and current solutions, forcing the sampler to escape local minima. This simple technique significantly boosts the performance of both simulated annealing-based and neural network-based solvers, achieving state-of-the-art results on the maximum independent set, maximum clique, and maximum cut problems.
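To make the regularization idea concrete, below is a minimal, illustrative sketch of one RLD-style step on a binary solution vector. It is not the authors' released implementation (see the repository above for that): the helper `rld_step`, the sigmoid parameterization of the flip probabilities, the bisection bounds, and the toy MaxCut energy in the usage example are all assumptions for illustration. The sketch assumes per-bit flip probabilities of the usual discrete-Langevin form, driven by a first-order estimate of the energy change, and implements the regularizer as a bisection on an offset `b` that pins the expected Hamming distance to the current solution at a target value.

```python
import numpy as np

def rld_step(x, grad, target_dist, rng, iters=30):
    """One illustrative RLD-style step on x in {0,1}^n (hypothetical helper).

    Flipping bit i changes x_i by (1 - 2*x_i), so a first-order estimate of
    the resulting energy change is delta_i = grad_i * (1 - 2*x_i). Each bit
    flips independently with probability sigmoid(b - delta_i / 2); the offset
    b is tuned by bisection so that the expected Hamming distance, sum_i p_i,
    equals target_dist. Without this constraint, the flip probabilities can
    collapse to ~0 at a local minimum and the sampler stalls.
    """
    delta = grad * (1.0 - 2.0 * x)              # est. energy change per flip
    lo, hi = -50.0, 50.0                        # bisection bounds (assumed)
    for _ in range(iters):
        b = 0.5 * (lo + hi)
        p = 1.0 / (1.0 + np.exp(np.clip(delta / 2.0 - b, -50.0, 50.0)))
        if p.sum() > target_dist:
            hi = b                              # too many expected flips
        else:
            lo = b
    flips = rng.random(x.shape) < p             # independent Bernoulli flips
    return np.where(flips, 1.0 - x, x)

# Usage on a toy MaxCut instance; energy = negative cut weight (assumed form).
rng = np.random.default_rng(0)
W = rng.random((20, 20))
W = np.triu(W, 1)
W = W + W.T                                     # random symmetric weights
x = rng.integers(0, 2, 20).astype(float)        # initial partition
grad = -W @ (1.0 - 2.0 * x)                     # gradient of E(x) = -x^T W (1 - x)
x = rld_step(x, grad, target_dist=3.0, rng=rng)
```

The sketch shows only a single step; in an SA-style solver one would presumably anneal the target distance or a temperature across iterations, and this is just one plausible instantiation of the expected-distance regularizer.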