

Poster in Workshop: 2nd Workshop on Test-Time Adaptation: Putting Updates to the Test (PUT)

Diffusion Tree Sampling: Scalable inference‑time alignment of diffusion models

Vineet Jain · Kusha Sareen · Mohammad Pedramfar · Siamak Ravanbakhsh

[ Project Page ]
Fri 18 Jul 2:30 p.m. PDT — 3:15 p.m. PDT

Abstract: Adapting a pretrained diffusion model to new objectives at inference time remains an open problem in generative modeling. Existing steering methods suffer from inaccurate value estimation, especially at high noise levels, which biases guidance. Moreover, information from past runs is not reused to improve sample quality, leading to inefficient use of compute. Inspired by the success of Monte Carlo Tree Search, we address these limitations by casting inference-time alignment as a search problem that reuses past computations. We introduce a tree-based approach that _samples_ from the reward-aligned target density by propagating terminal rewards back through the diffusion chain and iteratively refining value estimates with each additional generation. Our proposed method, Diffusion Tree Sampling (DTS), produces asymptotically exact samples from the target distribution in the limit of infinite rollouts, and its greedy variant Diffusion Tree Search (DTS*) performs a robust search for high reward samples. On MNIST and CIFAR-10 class-conditional generation, DTS matches the FID of the best-performing baseline with up to $5\times$ less compute. In text-to-image generation and language completion tasks, DTS* effectively searches for high reward samples that match best-of-N with $2\times$ less compute. By reusing information from previous generations, we get an _anytime algorithm_ that turns additional compute budget into steadily better samples, providing a scalable approach for inference-time alignment of diffusion models.
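The abstract describes a Monte Carlo Tree Search-style procedure over the denoising chain: terminal rewards are propagated back up the tree, and value estimates are refined with every additional rollout. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: `denoise_step`, `reward`, the branching factor, the soft-value selection rule, and the running-average backup are all placeholders standing in for details the abstract does not specify.

```python
# Minimal sketch of Diffusion Tree Sampling (DTS), reconstructed from the
# abstract alone. Nodes cache value estimates over the denoising chain;
# terminal rewards are backed up along the path, and children are chosen
# either softly (DTS-style sampling) or greedily (the DTS* search variant).
# All function names and hyperparameters here are illustrative assumptions.

import math
import random

class Node:
    def __init__(self, x, t, parent=None):
        self.x = x            # partially denoised sample at noise level t
        self.t = t            # remaining denoising steps (0 = terminal)
        self.parent = parent
        self.children = []
        self.value = 0.0      # running value estimate, refined each rollout
        self.visits = 0

def dts_rollout(root, denoise_step, reward, branching=4,
                temperature=1.0, greedy=False):
    """One rollout: descend the tree, expand a leaf, back up the reward.

    `denoise_step(x, t)` is assumed to sample x_{t-1} from the pretrained
    diffusion model, and `reward(x0)` scores a terminal sample. With
    greedy=True this behaves like the DTS* search variant."""
    node = root
    # Selection: walk down while the current node is fully expanded.
    while node.t > 0 and len(node.children) >= branching:
        if greedy:
            node = max(node.children, key=lambda c: c.value)
        else:
            # Sample a child proportional to exp(value / temperature);
            # the exact selection rule is an assumption of this sketch.
            weights = [math.exp(c.value / temperature) for c in node.children]
            node = random.choices(node.children, weights=weights)[0]
    # Expansion: add one new child by taking a single denoising step.
    if node.t > 0:
        child = Node(denoise_step(node.x, node.t), node.t - 1, parent=node)
        node.children.append(child)
        node = child
    # Rollout: denoise the rest of the way to a terminal sample x0.
    x, t = node.x, node.t
    while t > 0:
        x = denoise_step(x, t)
        t -= 1
    r = reward(x)
    # Backup: propagate the terminal reward to every ancestor. A running
    # average is used here as a placeholder; the paper's backup rule for
    # sampling from the reward-aligned density may differ.
    while node is not None:
        node.visits += 1
        node.value += (r - node.value) / node.visits
        node = node.parent
    return x, r
```

Because the tree and its value estimates persist across calls, repeated rollouts give the anytime behavior the abstract describes: each additional unit of compute refines the same tree instead of restarting from scratch, which is also why past generations improve the value estimates used to guide later ones.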
