

Oral

Learning Dynamics in Continual Pre-Training for Large Language Models

Xingjin Wang · Howe Tissue · Lu Wang · Linjing Li · Daniel Zeng

West Ballroom C
Oral 1D: Learning Dynamics 1
Tue 15 Jul 10:15 a.m. — 10:30 a.m. PDT

Abstract:

Continual Pre-Training (CPT) has become a popular and effective method for adapting strong foundation models to specific downstream tasks. In this work, we explore the learning dynamics throughout the CPT process for large language models (LLMs). We specifically focus on how general and downstream domain performance evolves at each training step, with domain performance measured via validation losses. We observe that the CPT loss curve fundamentally characterizes the transition from one curve to another hidden curve, and can be described by decoupling the effects of distribution shift and learning rate (LR) annealing. We derive a CPT scaling law that combines the two factors, enabling the prediction of loss at any (continual) training step and across learning rate schedules (LRS) in CPT. Our formulation provides a comprehensive understanding of several critical factors in CPT, including the learning rate, the training steps, and the distribution distance between the PT and CPT datasets. Moreover, our approach can be adapted to customize training hyper-parameters for different CPT goals, such as balancing general and domain-specific performance. Extensive experiments demonstrate that our scaling law holds across various CPT datasets and training hyper-parameters.
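As a rough illustration of how a loss-prediction law of this kind could be fit and then reused to forecast loss under a different learning-rate schedule, the sketch below fits a hypothetical form L(s) = L0 + A·S1(s)^(-α) − C·S2(s), where S1 is a cumulative "forward area" of the LR schedule and S2 is an "annealing area". The functional form, the area definitions, and all parameter names and numbers are assumptions for this sketch; they are not the paper's exact CPT scaling law, which additionally models the distribution shift between the PT and CPT data.

```python
# Hypothetical sketch: fit a loss-vs-LR-schedule form on one run, then
# predict the loss curve under a different schedule. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

def lr_schedule(steps, peak_lr=3e-4, warmup=100, total=2000, final_ratio=0.1):
    """Warmup-then-cosine LR schedule (a common choice, assumed here)."""
    s = np.arange(1, steps + 1)
    warm = peak_lr * np.minimum(s, warmup) / warmup
    frac = np.clip((s - warmup) / max(total - warmup, 1), 0.0, 1.0)
    decay = final_ratio * peak_lr + 0.5 * (1 - final_ratio) * peak_lr * (1 + np.cos(np.pi * frac))
    return np.where(s <= warmup, warm, decay)

def areas(lrs):
    """S1: cumulative LR sum ('forward area'); S2: cumulative drop below the running peak ('annealing area')."""
    s1 = np.cumsum(lrs)
    s2 = np.cumsum(np.maximum.accumulate(lrs) - lrs)
    return s1, s2

def loss_form(X, L0, A, alpha, C):
    # Assumed form: irreducible loss + power law in S1 - annealing bonus in S2.
    s1, s2 = X
    return L0 + A * np.power(s1, -alpha) - C * s2

# "Observed" (step, validation loss) pairs from one run, generated synthetically here.
steps = 2000
lrs = lr_schedule(steps)
s1, s2 = areas(lrs)
rng = np.random.default_rng(0)
observed = loss_form((s1, s2), L0=1.8, A=0.6, alpha=0.25, C=3.0) + rng.normal(0, 5e-3, steps)

params, _ = curve_fit(loss_form, (s1, s2), observed, p0=[2.0, 1.0, 0.3, 1.0], maxfev=20000)
print("fitted (L0, A, alpha, C):", np.round(params, 3))

# Predict the loss curve under a new schedule with a lower peak LR.
lrs_new = lr_schedule(steps, peak_lr=1.5e-4)
s1n, s2n = areas(lrs_new)
predicted = loss_form((s1n, s2n), *params)
print("predicted final loss under new schedule:", round(float(predicted[-1]), 4))
```

The point of the sketch is the workflow the abstract describes: fit a small number of coefficients on one (continual) training run, then evaluate the closed form on any other LR schedule to predict its loss curve without retraining.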
