

Poster

COExpander: Adaptive Solution Expansion for Combinatorial Optimization

Jiale Ma · Wenzheng Pan · Yang Li · Junchi Yan

East Exhibition Hall A-B #E-1909
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Despite rapid progress in neural combinatorial optimization (NCO) for solving CO problems (COPs), several bottlenecks persist as the problem scale grows: 1) solvers in the Global Prediction (GP) paradigm struggle with long-range decisions, where overly smooth intermediate heatmaps impede effective decoding, and 2) solvers in the Local Construction (LC) paradigm are time-consuming and incapable of tackling large instances due to the onerous auto-regressive process. Observing these challenges, we propose a new paradigm named Adaptive Expansion (AE) with its instantiation COExpander, designed to combine the advantages of GP and LC. COExpander utilizes informative heatmaps generated by a global predictor, which is learned under the guidance of locally determined partial solutions, to in turn direct the expansion of determined decision variables with adaptive step sizes. To ensure transparent evaluation, we further canonicalize 29 benchmarks spanning 6 popular COPs (MIS, MCl, MVC, MCut, TSP, ATSP) and various scales (50-10K nodes), upon which experiments demonstrate concrete SOTA performance of COExpander over these tasks. Source code and our standardized datasets will be made public.

Lay Summary:

We define the process of using machine learning to solve combinatorial optimization problems (COPs) as progressively determining decision variables. Previous approaches tend toward two extremes: they either attempt to predict all decision variables in one shot (global prediction, GP) or determine them one by one in a sequential manner (local construction, LC). Through experiments, we find that GP solvers often produce conflicts between predicted variables, while LC solvers are too inefficient. Therefore, we propose a new paradigm, namely adaptive expansion (AE), along with a supervised learning-based instantiation called COExpander. Compared to LC solvers, COExpander requires fewer iterations. In each iteration, it adaptively determines a subset of variables using a global predictor, while the remaining variables are deferred to future iterations. We evaluate COExpander on six classic COPs across 29 benchmarks and achieve SOTA results.
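The iterative loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the predictor here is a hypothetical stand-in for COExpander's learned global predictor, the confidence threshold is an assumed selection rule, and the toy problem is invented for demonstration. In each round, every undetermined variable is scored; only confidently scored variables are fixed, and a fallback fixes the single most polarized variable if none qualify, so the loop always makes progress.

```python
import numpy as np

def adaptive_expansion(num_vars, predictor, confidence=0.9):
    """Sketch of an adaptive-expansion loop: a global predictor scores all
    undetermined binary decision variables, and only those scored with high
    confidence are fixed this round; the rest are deferred."""
    solution = np.full(num_vars, -1, dtype=int)  # -1 marks "undetermined"
    iterations = 0
    while (solution == -1).any():
        iterations += 1
        heatmap = predictor(solution)            # per-variable probabilities
        undecided = solution == -1
        fix_one = undecided & (heatmap >= confidence)
        fix_zero = undecided & (heatmap <= 1 - confidence)
        if not (fix_one.any() or fix_zero.any()):
            # fallback: fix the most polarized undecided variable
            idx = int(np.argmax(np.abs(heatmap - 0.5) * undecided))
            solution[idx] = int(heatmap[idx] >= 0.5)
            continue
        solution[fix_one] = 1                    # adaptive step: fix a whole
        solution[fix_zero] = 0                   # batch of confident variables
    return solution, iterations

def toy_predictor(partial):
    """Hypothetical predictor: fixed base scores that sharpen as more
    variables get decided, mimicking conditioning on the partial solution."""
    base = np.array([0.95, 0.05, 0.70, 0.30, 0.92, 0.08])
    boost = 0.3 * (partial != -1).mean()
    return np.clip(np.where(base >= 0.5, base + boost, base - boost), 0.0, 1.0)

solution, iterations = adaptive_expansion(6, toy_predictor, confidence=0.85)
print(solution.tolist(), iterations)  # finishes in 2 rounds, not 6
```

In this toy run the four confidently scored variables are fixed in the first round, which sharpens the predictions enough to fix the remaining two in the second round: the adaptive step size is what lets the loop finish in far fewer iterations than one-variable-per-step local construction.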
