

Oral

A Generalization Result for Convergence in Learning-to-Optimize

Michael Sucker · Peter Ochs

West Ballroom D
Oral 5E Learning Theory
Thu 17 Jul 10:30 a.m. — 10:45 a.m. PDT

Abstract:

Learning-to-optimize leverages machine learning to accelerate optimization algorithms. While empirical results show tremendous improvements over classical optimization algorithms, theoretical guarantees are mostly lacking, so the outcome cannot be reliably assured. In particular, convergence is rarely studied in learning-to-optimize, because conventional convergence guarantees in optimization rest on geometric arguments that cannot be applied easily to learned algorithms. Thus, we develop a probabilistic framework that resembles classical optimization and allows for transferring geometric arguments into learning-to-optimize. Based on this new proof strategy, our main theorem is a generalization result for parametric classes of potentially non-smooth, non-convex loss functions and establishes the convergence of learned optimization algorithms to critical points with high probability. This effectively generalizes the results of a worst-case analysis into a probabilistic framework and frees the design of the learned algorithm from using safeguards.
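To make the setting concrete, here is a minimal sketch (not the authors' framework) of a learned update and the classical safeguard the abstract refers to, on a toy non-convex problem: the "learned" update is a hand-set diagonal preconditioner standing in for a trained model, the safeguard falls back to a gradient step whenever the learned step fails to decrease the loss, and the gradient norm is used as the measure of nearness to a critical point. The loss, preconditioner values, and step sizes are illustrative assumptions only.

import numpy as np

# Toy non-convex loss: f(x) = 0.5 * x^T A x + sum(cos(x_i)), with gradient A x - sin(x).
A = np.diag([2.0, 10.0])

def loss(x):
    return 0.5 * x @ A @ x + np.sum(np.cos(x))

def grad(x):
    return A @ x - np.sin(x)

# Stand-in for a learned update: a fixed diagonal preconditioner P. In actual
# learning-to-optimize, P (or the whole update rule) would be the output of a trained model.
P = np.diag([0.45, 0.09])

def learned_step(x):
    return x - P @ grad(x)

# Classical safeguard: accept the learned step only if it decreases the loss,
# otherwise fall back to a plain gradient step with a small step size.
def safeguarded_step(x, lr=0.05):
    x_new = learned_step(x)
    return x_new if loss(x_new) < loss(x) else x - lr * grad(x)

x = np.array([3.0, -2.0])
for _ in range(200):
    x = learned_step(x)  # swap in safeguarded_step(x) to compare
print("final gradient norm:", np.linalg.norm(grad(x)))  # small norm => near a critical point

The safeguarded variant enforces descent at every iterate, which is the kind of design constraint the paper's high-probability convergence guarantee aims to make unnecessary.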
