

Spotlight in Workshop: 2nd Generative AI for Biology Workshop

Calibrating Generative Models

Henry Smith · Brian Trippe

Keywords: [ diffusion ] [ fine-tuning ] [ stochastic optimal control ] [ reward fine-tuning ] [ protein structure generation ] [ generative model ]


Abstract:

Generative models frequently suffer from miscalibration, wherein properties such as class probabilities and other statistics of the sampling distribution deviate from target values. We frame calibration as a constrained optimization problem and seek the minimally perturbed model (in Kullback-Leibler divergence) satisfying calibration constraints. To address the intractability of the hard constraint, we introduce two surrogate objectives: (1) the relaxed loss, which replaces the constraint with a miscalibration penalty, and (2) the reward loss, which is a divergence to a variational characterization of the exact target that coincides with reward fine-tuning. These losses yield our CGM-relax and CGM-reward algorithms. We show how to apply these methods to neural-SDE models and find they solve low-dimensional synthetic calibration problems to high precision. Finally, we demonstrate the practicality of the approach by calibrating a 15M-parameter protein structure diffusion model to match statistics of native proteins.
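Since the page carries only the abstract, the following is a minimal sketch of the relaxed-loss idea described above: the hard constraint E_q[f(X)] = c is replaced by a quadratic miscalibration penalty, giving the objective KL(q || p) + lam * (E_q[f(X)] - c)^2. The toy Gaussian sampler, the statistic f, the target value target_c, and the penalty weight lam are all illustrative assumptions, not the authors' implementation or their neural-SDE setup.

import torch

# Base model p = N(0, 1); variational model q = N(mu, sigma^2),
# fine-tuned to calibrate a statistic while staying close to p in KL.
mu = torch.zeros(1, requires_grad=True)         # variational mean
log_sigma = torch.zeros(1, requires_grad=True)  # variational log-std
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

target_c = 0.25  # desired value of the calibrated statistic (assumed)
lam = 10.0       # miscalibration penalty weight (assumed)

def f(x):
    # Calibration statistic: probability of the event {x > 0},
    # smoothed with a sigmoid so the Monte Carlo estimate is
    # differentiable through the reparameterized samples.
    return torch.sigmoid(x / 0.1)

for step in range(2000):
    sigma = log_sigma.exp()
    # Reparameterized samples from q (gradients flow through mu, sigma)
    x = mu + sigma * torch.randn(1024)
    # KL(q || p) between two Gaussians with p = N(0, 1), in closed form:
    # 0.5 * (mu^2 + sigma^2 - 1) - log(sigma)
    kl = 0.5 * (mu**2 + sigma**2 - 1.0) - log_sigma
    # Relaxed loss: KL divergence plus squared miscalibration penalty
    miscal = (f(x).mean() - target_c) ** 2
    loss = kl + lam * miscal
    opt.zero_grad()
    loss.backward()
    opt.step()

As lam grows, the minimizer of this relaxed objective approaches the KL projection of p onto the calibration constraint set; in the paper's setting the same penalty idea is applied to the drift of a neural-SDE sampler rather than to Gaussian parameters.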
