Spotlight Poster
Score-of-Mixture Training: One-Step Generative Model Training Made Simple via Score Estimation of Mixture Distributions
Tejas Jayashankar · Jongha (Jon) Ryu · Gregory Wornell
East Exhibition Hall A-B #E-3209
Generative modeling enables the exploration of the statistical structure inherent in data by learning to produce rich, diverse, and realistic samples. In this paper, we develop a method for efficient one-step generative modeling, where high-quality samples are produced in a single model execution.

Diffusion models have recently become popular for generation, but they require many iterative steps to transform noise into structure. Efforts to enable one-step generation typically rely on distilling such pre-trained diffusion models, an approach that is computationally expensive. Alternatives that train one-step models from scratch often suffer from instability or require expensive simulation.

We show that one-step generative models can be trained from scratch without costly pre-training or distillation. Our method centers on learning a model that estimates the score (the gradient of the log-density) of the mixture distribution of real and generated data. Inspired by advances in diffusion modeling, we introduce a novel, stable, and efficient training scheme for one-step generation that encourages distributional overlap between real and generated samples using distribution-matching principles from information theory.
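To make the core idea concrete, below is a minimal sketch of how the score of the real/fake mixture could be estimated with standard denoising score matching: clean samples are drawn from the real data with probability alpha and from the generator otherwise, so the score network sees exactly the alpha-mixture. The interfaces (score_net, generator, the alpha-conditioning argument) are hypothetical placeholders, not the paper's implementation.

```python
import torch

def score_of_mixture_loss(score_net, generator, x_real, alpha, sigma):
    """Denoising score-matching loss for the mixture
    p_alpha = alpha * p_real + (1 - alpha) * p_fake.

    Hypothetical interfaces (for illustration only):
      score_net(x_t, sigma, alpha) -> predicted score of the noised mixture
      generator(z)                 -> one-step samples from latent noise z
    """
    batch = x_real.shape[0]
    # Assumes the latent has the same shape as the data, purely for brevity.
    z = torch.randn_like(x_real)
    x_fake = generator(z).detach()  # generator is frozen during the score update

    # Draw each clean sample from the real batch w.p. alpha, else from the
    # fake batch, so the marginal over x0 is exactly the alpha-mixture.
    mask_shape = (batch,) + (1,) * (x_real.dim() - 1)
    use_real = torch.rand(mask_shape, device=x_real.device) < alpha
    x0 = torch.where(use_real, x_real, x_fake)

    # Standard denoising score matching at noise level sigma: the target is
    # the score of the Gaussian kernel, -(x_t - x0) / sigma^2 = -eps / sigma.
    eps = torch.randn_like(x0)
    x_t = x0 + sigma * eps
    target = -eps / sigma
    pred = score_net(x_t, sigma, alpha)
    return ((pred - target) ** 2).mean()
```

The sketch covers only the score-estimation half; in the full method the generator is updated in alternation, using the learned mixture score to drive the generated distribution toward the real one, a step omitted here.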