Poster
A Bayesian Model Selection Criterion for Selecting Pretraining Checkpoints
Michael Munn · Susan Wei
East Exhibition Hall A-B #E-1406
Recent advances in artificial intelligence have been fueled by the development of foundation models such as BERT, GPT, T5, and Vision Transformers. These models are first pretrained on vast and diverse datasets and then adapted to specific downstream tasks, often with significantly less data. However, the mechanisms behind the success of this ubiquitous pretrain-then-adapt paradigm remain underexplored, in particular the characteristics of pretraining checkpoints that enhance downstream adaptation. We introduce a Bayesian model selection criterion, called the downstream free energy, which quantifies a checkpoint's adaptability by measuring the concentration of favorable parameters for a downstream task near that checkpoint. We demonstrate that this criterion can be effectively implemented without access to the downstream data or prior knowledge of the downstream task. Furthermore, we provide empirical evidence that the criterion reliably correlates with improved fine-tuning performance, offering a principled approach to predicting model adaptability.
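The abstract does not reproduce the criterion itself. As a hedged sketch in the spirit of Bayesian and singular learning theory, a "free energy localized at a checkpoint" w* could take the form below; the notation (downstream sample size m, empirical downstream loss L_m, neighborhood B_gamma(w*), and prior phi) is illustrative and not taken verbatim from the paper.

```latex
% Illustrative local free energy around a checkpoint w^* (notation assumed):
\[
  F_\gamma(w^*) \;=\; -\log \int_{B_\gamma(w^*)} \exp\!\bigl(-m\, L_m(w)\bigr)\,\varphi(w)\, dw
\]
```

Read this way, a lower value means a larger mass of low-loss parameters lies near the checkpoint, which is the sense in which the criterion measures the concentration of nearby favorable parameters.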
Foundation models, the AI systems behind tools like ChatGPT and image generators, are typically trained on massive datasets and then fine-tuned for specific applications. However, selecting the best version (checkpoint) of the initial model is a significant challenge, especially without knowledge of, or access to, the future task's data. This research introduces the "downstream free energy," a Bayesian statistical measure that predicts a checkpoint's adaptability, and proposes the "pretraining free energy," computable from the initial training data alone, as a practical stand-in when the downstream task is unknown. This approach offers a principled way to choose a checkpoint, leading to AI models that adapt more effectively to diverse applications even when details of the future task are limited.
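The summary does not spell out how the pretraining free energy would be estimated in practice. The following is a minimal sketch, assuming a WBIC-style SGLD estimate of a free energy localized at the checkpoint and computed on pretraining data only; the function name, the Gaussian localization term, and all hyperparameters are hypothetical illustrations, not the paper's method.

```python
import copy
import math
import torch


def estimate_local_free_energy(model, loss_fn, data_loader, n_total,
                               num_steps=500, step_size=1e-6, gamma=100.0,
                               device="cpu"):
    """Hypothetical WBIC-style estimate of a free energy localized at the
    current checkpoint, using SGLD with a Gaussian term pulling samples back
    toward the checkpoint. Hyperparameters are illustrative only."""
    beta = 1.0 / math.log(n_total)  # WBIC inverse temperature
    center = [p.detach().clone().to(device) for p in model.parameters()]
    sampler = copy.deepcopy(model).to(device)
    params = list(sampler.parameters())

    running, count = 0.0, 0
    data_iter = iter(data_loader)
    for step in range(num_steps):
        try:
            xb, yb = next(data_iter)
        except StopIteration:
            data_iter = iter(data_loader)
            xb, yb = next(data_iter)
        xb, yb = xb.to(device), yb.to(device)

        sampler.zero_grad()
        batch_loss = loss_fn(sampler(xb), yb)  # average loss on the minibatch
        batch_loss.backward()

        with torch.no_grad():
            for p, c in zip(params, center):
                grad = p.grad if p.grad is not None else torch.zeros_like(p)
                # drift of the tempered, localized posterior:
                # n*beta*grad(L_n) + gamma*(w - w*)
                drift = n_total * beta * grad + gamma * (p - c)
                noise = torch.randn_like(p) * math.sqrt(step_size)
                p.add_(-0.5 * step_size * drift + noise)

        # after burn-in, accumulate n * L_n(w) evaluated on the minibatch
        if step >= num_steps // 2:
            running += n_total * batch_loss.item()
            count += 1

    return running / max(count, 1)
```

In this sketch, each candidate checkpoint would be scored on the pretraining data and the one with the lowest estimate selected for fine-tuning; in practice the step size, localization strength, and number of SGLD steps would all require tuning.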