Poster in Workshop: 2nd Generative AI for Biology Workshop

Importance-Weighted Training of Diffusion Samplers

Sanghyeok Choi · Sarthak Mittal · Víctor Elvira · Jinkyoo Park · Nikolay Malkin

Keywords: [ Amortized inference ] [ Monte Carlo methods ] [ Diffusion models ] [ GFlowNets ]


Abstract:

We propose an importance-weighted training framework for diffusion samplers — diffusion models trained to sample from a Boltzmann distribution — that leverages Monte Carlo methods with off-policy training to improve training efficiency and mode coverage. Building upon past attempts to use experience replay to guide the training of denoising models as policies, we derive a way to combine historical samples with adaptive importance weights so as to make the training samples better approximate the desired distribution even when the sampler is far from converged. On synthetic multi-modal targets and the Boltzmann distribution of alanine dipeptide conformations, we demonstrate improvements in distribution approximation and training stability compared to existing baselines. Our results are a step towards combining the strengths of amortized (RL- and control-based) approaches to training diffusion samplers with those of Monte Carlo methods.
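The abstract describes reweighting stored (replay-buffer) samples with adaptive importance weights so that training batches better approximate the target distribution even before the sampler has converged. The sketch below is a minimal, self-contained illustration of that core idea via self-normalized importance sampling, not the paper's actual training objective: it assumes a toy one-dimensional bimodal Boltzmann target p(x) and a hypothetical Gaussian "stale sampler" q(x) as the proposal, with weights w_i proportional to p(x_i) / q(x_i).

import numpy as np

def self_normalized_weights(log_target, log_proposal):
    """Self-normalized importance weights: w_i proportional to exp(log p(x_i) - log q(x_i))."""
    log_w = log_target - log_proposal
    log_w -= log_w.max()          # subtract max for numerical stability
    w = np.exp(log_w)
    return w / w.sum()            # normalization cancels any unknown constant in p

# Toy unnormalized bimodal target: p(x) proportional to exp(-0.5 (x-3)^2) + exp(-0.5 (x+3)^2)
def log_target(x):
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

rng = np.random.default_rng(0)

# Hypothetical replay buffer filled by a not-yet-converged sampler q = N(0, 2^2)
buffer_x = rng.normal(0.0, 2.0, size=10_000)
log_q = -0.5 * (buffer_x / 2.0) ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi))

w = self_normalized_weights(log_target(buffer_x), log_q)

# Importance-weighted estimate of E_p[x^2] versus the naive buffer average
print("naive buffer mean of x^2:     ", np.mean(buffer_x ** 2))
print("importance-weighted estimate: ", np.sum(w * buffer_x ** 2))

# Resample the buffer in proportion to w to form a training minibatch that
# better approximates p, even though the sampler that filled the buffer is stale
minibatch = rng.choice(buffer_x, size=256, p=w)

Resampling the buffer in proportion to these weights yields minibatches distributed approximately according to the target, which is the effect the abstract attributes to its importance-weighted training: historical samples are corrected rather than discarded. How the weights are adapted and combined with the diffusion sampler's off-policy objective is specific to the paper and not reproduced here.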
