Poster
Generative Data Mining with Longtail-Guided Diffusion
David Hayden · Mao Ye · Timur Garipov · Gregory Meyer · Carl Vondrick · Zhao Chen · Yuning Chai · Eric M. Wolff · Siddhartha Srinivasa
East Exhibition Hall A-B #E-3205
It is difficult to anticipate the myriad challenges that a predictive model will encounter once deployed. Common practice entails a reactive, cyclical approach: model deployment, data mining, and retraining. We instead develop a proactive longtail discovery process by imagining additional data during training. In particular, we develop general model-based longtail signals, including a differentiable, single forward pass formulation of epistemic uncertainty that does not impact model parameters or predictive performance but can flag rare or hard inputs. We leverage these signals as guidance to generate additional training data from a latent diffusion model in a process we call Longtail Guidance (LTG). Crucially, we can perform LTG without retraining the diffusion model or the predictive model, and we do not need to expose the predictive model to intermediate diffusion states. Data generated by LTG exhibit semantically meaningful variation, yield significant generalization improvements on numerous image classification benchmarks, and can be analyzed by a VLM to proactively discover, textually explain, and address conceptual gaps in a deployed predictive model.
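The core mechanism — using a differentiable, single-forward-pass longtail signal as guidance — can be illustrated with a toy sketch. This is not the paper's implementation: predictive entropy of a small fixed linear classifier stands in for the epistemic uncertainty signal, and plain gradient ascent on the input stands in for the guidance term added to a latent diffusion sampling step. All names and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # hypothetical classifier weights (3 classes, 4-dim input)
b = rng.normal(size=3)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy_and_grad(x):
    """Predictive entropy H(p) of the toy classifier and its gradient w.r.t. x.

    The classifier's parameters (W, b) are never updated: the longtail
    signal is read out in a single forward pass, and only its gradient
    with respect to the *input* is used for guidance.
    """
    p = softmax(W @ x + b)
    H = -np.sum(p * np.log(p))
    # dH/dz = -p * (log p + H), then chain through z = W x + b.
    grad = W.T @ (-p * (np.log(p) + H))
    return H, grad

x = rng.normal(size=4)        # the "sample" being generated
h0, _ = entropy_and_grad(x)
for _ in range(50):           # guidance loop: small ascent steps on the longtail signal
    _, g = entropy_and_grad(x)
    x = x + 0.05 * g
h1, _ = entropy_and_grad(x)
# Entropy rises: the sample drifts toward inputs the model finds hard or rare,
# without modifying the model's parameters or predictive behavior.
```

In the actual method this gradient would be combined with a latent diffusion model's denoising update rather than applied to raw inputs, and the paper's epistemic uncertainty formulation replaces the entropy stand-in used here.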
We imbue predictive AI models with the ability to continuously dream up hard or rare data that can serve as additional training data, improving their own capabilities in uncommon real-world scenarios. We further provide techniques to distill those hard or rare data into textual descriptions, so that humans can better anticipate what an AI model might struggle with before release.