Poster
Near-Optimal Consistency-Robustness Trade-Offs for Learning-Augmented Online Knapsack Problems
Mohammadreza Daneshvaramoli · Helia Karisani · Adam Lechowicz · Bo Sun · Cameron Musco · Mohammad Hajiesmaili
West Exhibition Hall B2-B3 #W-916
This paper introduces a family of learning-augmented algorithms for online knapsack problems that achieve near Pareto-optimal consistency-robustness trade-offs through a simple combination of trusted learning-augmented and worst-case algorithms. Our approach relies on succinct, practical predictions—single values or intervals estimating the minimum value of any item in an offline solution. Additionally, we propose a novel fractional-to-integral conversion procedure, offering new insights for online algorithm design.
This work tackles a classic challenge: how to make good decisions when items arrive one at a time and we must act immediately. We study this in the context of the online knapsack problem, where the goal is to select high-value items without exceeding a fixed budget. Our key idea is to use simple machine learning predictions—just a single value or a range—to help guide decisions. We design new algorithms that are robust when predictions are wrong and high-performing when predictions are accurate. These algorithms come close to the best possible trade-off between the two. We also show how to adapt our methods for realistic cases and demonstrate their effectiveness on both synthetic and real-world data.
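To make the "combine a prediction-trusting rule with a worst-case rule" idea concrete, below is a minimal illustrative sketch, not the paper's actual algorithm. It splits the knapsack budget between a rule that admits items whose value density meets a predicted minimum and a classical exponential-threshold rule for online knapsack with densities in a known range [L, U]. The threshold function, the split parameter lam, and all function and variable names are assumptions made purely for illustration.

```python
# Illustrative sketch only, not the algorithm from the paper: it combines a
# prediction-trusting rule with a classical worst-case threshold rule for
# online knapsack by splitting the budget via a trust parameter `lam`.
import math


def make_threshold(L, U):
    """Worst-case threshold for online knapsack with value densities in [L, U]:
    admit an item only if its density clears psi(z), where z is the fraction of
    the (sub-)budget already used."""
    def psi(z):
        return (U * math.e / L) ** z * (L / math.e)
    return psi


def run_combined(items, capacity, L, U, pred_min_density, lam=0.5):
    """items: iterable of (value, weight) pairs arriving online.
    pred_min_density: predicted minimum value density of any item in OPT
        (a stand-in for the paper's succinct prediction).
    lam: fraction of the budget reserved for the prediction-trusting rule."""
    psi = make_threshold(L, U)
    trust_cap = lam * capacity           # budget for the trusted rule
    robust_cap = (1.0 - lam) * capacity  # budget for the worst-case rule
    trust_used = robust_used = 0.0
    total_value = 0.0

    for value, weight in items:
        density = value / weight
        # Trusted rule: accept anything at least as dense as the prediction.
        if density >= pred_min_density and trust_used + weight <= trust_cap:
            trust_used += weight
            total_value += value
            continue
        # Worst-case rule: accept if the density clears the adaptive threshold.
        z = robust_used / robust_cap if robust_cap > 0 else 1.0
        if density >= psi(z) and robust_used + weight <= robust_cap:
            robust_used += weight
            total_value += value

    return total_value


if __name__ == "__main__":
    # Toy instance: (value, weight) pairs with value densities in [1, 10].
    items = [(4, 1), (2, 1), (9, 1), (1, 1), (7, 1)]
    print(run_combined(items, capacity=3, L=1, U=10, pred_min_density=6, lam=0.5))
```

In this toy setup, lam acts as a trust knob: a larger lam leans on the prediction (better consistency when it is accurate), while a smaller lam reserves more budget for the worst-case rule (better robustness when it is not). The paper's algorithms achieve near Pareto-optimal versions of this trade-off.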