

Poster

The Surprising Agreement Between Convex Optimization Theory and Learning-Rate Scheduling for Large Model Training

Fabian Schaipp · Alexander Hägele · Adrien Taylor · Umut Simsekli · Francis Bach

West Exhibition Hall B2-B3 #W-510
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

We show that learning-rate schedules for large model training behave surprisingly similarly to a performance bound from non-smooth convex optimization theory. We provide a bound for the constant schedule with linear cooldown; in particular, the practical benefit of cooldown is reflected in the bound due to the absence of logarithmic terms. Further, we show that this surprisingly close match between optimization theory and practice can be exploited for learning-rate tuning: we achieve noticeable improvements for training 124M and 210M Llama-type models by (i) extending the schedule for continued training with the optimal learning rate, and (ii) transferring the optimal learning rate across schedules.
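The schedule the abstract refers to keeps the learning rate constant and then decays it linearly to zero over a final cooldown phase. Below is a minimal sketch of such a schedule; the function name `constant_with_linear_cooldown` and the default cooldown fraction are illustrative assumptions, not taken from the paper:

```python
def constant_with_linear_cooldown(step, total_steps, base_lr, cooldown_frac=0.2):
    """Constant learning rate, followed by a linear cooldown to zero.

    `cooldown_frac` is the fraction of training spent in the cooldown
    phase (an illustrative default, not a value from the paper).
    """
    cooldown_start = int((1.0 - cooldown_frac) * total_steps)
    if step < cooldown_start:
        # Constant phase: keep the base learning rate.
        return base_lr
    # Cooldown phase: decay linearly from base_lr to zero at total_steps.
    cooldown_len = total_steps - cooldown_start
    return base_lr * (total_steps - step) / cooldown_len


# Example: inspect the schedule at a few points of a 1000-step run.
for step in (0, 700, 800, 900, 1000):
    print(step, constant_with_linear_cooldown(step, 1000, 3e-4))
```

Under this reading, application (i) plausibly amounts to resuming from a checkpoint taken before the cooldown and running a fresh linear cooldown toward the new, longer horizon, though the paper itself is the authority on the exact procedure.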

Lay Summary:

The problem of training machine learning models is often formulated as a complicated optimization problem, which is generally handled via iterative optimization algorithms. A particularly crucial stage in this procedure is the choice of the size of the steps taken by the optimization algorithm (this is called a "learning-rate schedule"). We show that many empirical effects of these schedules can be explained by a theoretical model from convex optimization. This is surprising, as it is known that the practical training problems are not convex; however, the theory appears to still match the observed behaviors. It is also surprising because optimization theory often fails to make accurate predictions about the real-world behavior of optimization algorithms in machine learning. As an application, we can use our theoretical model to design better schedules for practical training scenarios. This is more efficient than a trial-and-error approach and helps to reduce the computational burden of the training procedure.
