

Poster

Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent

Ya-Chi Chu · Wenzhi Gao · Yinyu Ye · Madeleine Udell

West Exhibition Hall B2-B3 #W-507
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract: This paper investigates the convergence properties of the hypergradient descent method ($\texttt{HDM}$), a 25-year-old heuristic originally proposed for adaptive stepsize selection in stochastic first-order methods. We provide the first rigorous convergence analysis of $\texttt{HDM}$ using the online learning framework and apply this analysis to develop new state-of-the-art adaptive gradient methods with empirical and theoretical support. Notably, $\texttt{HDM}$ automatically identifies the optimal stepsize for the local optimization landscape and achieves local superlinear convergence. Our analysis explains the instability of $\texttt{HDM}$ reported in the literature and proposes efficient strategies to address it. We also develop two $\texttt{HDM}$ variants with heavy-ball and Nesterov momentum. Experiments on deterministic convex problems show $\texttt{HDM}$ with heavy-ball momentum ($\texttt{HDM-HB}$) exhibits robust performance and significantly outperforms other adaptive first-order methods. Moreover, $\texttt{HDM-HB}$ often matches the performance of $\texttt{L-BFGS}$, an efficient and practical quasi-Newton method, using less memory and cheaper iterations.
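
For readers unfamiliar with the heuristic the abstract refers to, the sketch below shows the classical hypergradient descent stepsize update (the 25-year-old rule the paper analyzes), not the paper's $\texttt{HDM-HB}$ method itself: the stepsize is adapted by the inner product of consecutive gradients. The function name, the hyperstepsize beta, and the toy quadratic are illustrative assumptions.

    # Minimal sketch of the classical hypergradient stepsize update on a smooth problem;
    # the paper's HDM/HDM-HB adds an online-learning analysis and momentum on top.
    import numpy as np

    def hypergradient_descent(grad, x0, alpha0=1e-3, beta=1e-4, iters=200):
        """Gradient descent whose stepsize alpha is itself adapted by descending
        the hypergradient, which for this update equals -<g_t, g_{t-1}>."""
        x, alpha = x0.copy(), alpha0
        g_prev = np.zeros_like(x0)
        for _ in range(iters):
            g = grad(x)
            # Increase alpha when consecutive gradients align, decrease when they oppose.
            alpha += beta * np.dot(g, g_prev)
            x -= alpha * g
            g_prev = g
        return x, alpha

    # Toy usage on a strongly convex quadratic f(x) = 0.5 * x^T A x
    A = np.diag([1.0, 10.0])
    x_final, alpha_final = hypergradient_descent(lambda x: A @ x, np.array([5.0, 5.0]))

The adaptation rule follows because the derivative of $f(x_t - \alpha g_{t-1})$ with respect to $\alpha$ is $-\langle g_t, g_{t-1}\rangle$, so moving $\alpha$ against this hypergradient is itself a gradient step in the stepsize.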

Lay Summary:

The learning rate is a critical hyperparameter that strongly influences the convergence speed of first-order optimization algorithms, yet choosing it adaptively is challenging. We developed an efficient learning rate update strategy based on online learning, proved convergence guarantees for it, and investigated its convergence behavior. Our analysis puts a 25-year-old heuristic for adaptive stepsize selection on rigorous footing. The resulting algorithm significantly outperforms the state of the art.
