

Spotlight Poster

Large Language Model-driven Large Neighborhood Search for Large-Scale MILP Problems

Huigen Ye · Hua Xu · An Yan · Yaoyang Cheng

West Exhibition Hall B2-B3 #W-615
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Large Neighborhood Search (LNS) is a widely used method for solving large-scale Mixed Integer Linear Programming (MILP) problems. The effectiveness of LNS crucially depends on the choice of the search neighborhood. However, existing strategies either rely on expert knowledge or on computationally expensive Machine Learning (ML) approaches, both of which struggle to scale effectively to large problems. To address this, we propose LLM-LNS, a novel Large Language Model (LLM)-driven LNS framework for large-scale MILP problems. Our approach introduces a dual-layer self-evolutionary LLM agent to automate neighborhood selection, discovering from only scant small-scale training data effective strategies that generalize well to large-scale MILPs. The inner layer evolves heuristic strategies to ensure convergence, while the outer layer evolves evolutionary prompt strategies to maintain diversity. Experimental results demonstrate that the proposed dual-layer agent outperforms state-of-the-art agents such as FunSearch and EOH. Furthermore, the full LLM-LNS framework surpasses manually designed LNS algorithms like ACP, ML-based LNS methods like CL-LNS, and large-scale solvers such as Gurobi and SCIP. It also achieves superior performance compared to advanced ML-based MILP optimization frameworks like GNN&GBDT and Light-MILPopt, further validating the effectiveness of our approach.

Lay Summary:

Solving real-world planning tasks—like delivery routing or factory scheduling—often involves tackling Mixed Integer Linear Programs (MILPs), which become extremely hard as they grow. Large Neighborhood Search (LNS) is a common technique that improves solutions by repeatedly focusing on parts of the problem. But deciding which part to focus on is difficult and usually requires domain expertise or costly AI methods that don’t scale well. We introduce LLM-LNS, a new system that uses Large Language Models (LLMs)—the same kind of AI behind ChatGPT—to guide this process automatically. Our approach features a dual-layer self-evolving LLM agent: one layer explores diverse strategies, while the other refines them to boost performance. Remarkably, it learns from small problems and generalizes to much larger ones. LLM-LNS consistently outperforms existing methods, including expert-designed strategies, other AI systems, and industry-standard solvers like Gurobi. It delivers faster and better solutions, offering major efficiency gains for industries that rely on solving large-scale optimization problems, such as logistics and manufacturing.
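The destroy/repair loop that LNS builds on can be sketched in a few lines. The toy below is a minimal, self-contained illustration (not the paper's LLM-LNS system): a 0/1 knapsack stands in for a MILP, the "destroy" step unfixes a random subset of variables (the neighborhood choice that LLM-LNS learns to make), and a greedy heuristic stands in for the subproblem solver. All instance data and function names here are illustrative assumptions.

```python
import random

# Toy 0/1 knapsack instance standing in for a MILP (illustrative data).
values = [10, 7, 4, 9, 3, 6, 8, 2]
weights = [5, 4, 2, 6, 1, 3, 5, 2]
capacity = 14

def objective(x):
    return sum(v for v, xi in zip(values, x) if xi)

def feasible(x):
    return sum(w for w, xi in zip(weights, x) if xi) <= capacity

def repair(x, free_idx):
    # "Solve" the subproblem: re-optimize only the freed variables,
    # here with a greedy value-density heuristic instead of a MILP solver.
    x = list(x)
    for i in free_idx:
        x[i] = 0
    for i in sorted(free_idx, key=lambda i: values[i] / weights[i], reverse=True):
        x[i] = 1
        if not feasible(x):
            x[i] = 0
    return x

def lns(iters=200, k=3, seed=0):
    rng = random.Random(seed)
    best = [0] * len(values)  # trivially feasible start
    for _ in range(iters):
        # Destroy: pick a neighborhood by unfixing k variables.
        free_idx = rng.sample(range(len(values)), k)
        # Repair: re-optimize within that neighborhood.
        cand = repair(best, free_idx)
        if feasible(cand) and objective(cand) > objective(best):
            best = cand  # accept only improving solutions
    return best

best = lns()
```

In a real LNS for MILP, `repair` would call a solver on the subproblem with the non-freed variables fixed, and the neighborhood choice in the destroy step is exactly where expert rules, learned models, or (in this paper) an LLM-evolved heuristic plug in.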
