

Poster in Workshop: Methods and Opportunities at Small Scale (MOSS)

Performance Plateaus in Inference-Time Scaling for Text-to-Image Diffusion Without External Models

Changhyun Choi · Sungha Kim · H. Jin Kim

Keywords: [ Text-to-Image Diffusion Models ] [ VRAM-Limited GPUs ] [ Inference-Time Scaling ] [ Initial Noise Optimization ]


Abstract:

Recently, it has been shown that investing compute in searching for a good initial noise for a text-to-image diffusion model improves performance. However, previous studies required external models to evaluate the resulting images, which is infeasible on GPUs with limited VRAM. For this reason, we apply Best-of-N inference-time scaling to algorithms that optimize the initial noise of a diffusion model without external models, across multiple datasets and backbones. We demonstrate that inference-time scaling for text-to-image diffusion models in this setting quickly reaches a performance plateau, and that a relatively small number of optimization steps suffices to reach the maximum performance achievable with each algorithm.
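The Best-of-N strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate_image` stands in for a full diffusion sampling run, and `internal_score` stands in for a self-contained quality criterion that requires no external evaluator model; both are hypothetical placeholders.

```python
import numpy as np

def generate_image(noise: np.ndarray) -> np.ndarray:
    """Placeholder for a full diffusion sampling run from an initial noise
    (hypothetical; a real pipeline would run the denoising loop here)."""
    return np.tanh(noise)

def internal_score(image: np.ndarray) -> float:
    """Placeholder for a score computed without any external model
    (hypothetical criterion; higher is better)."""
    return float(-np.abs(image.mean()))

def best_of_n(n_candidates: int, shape=(4, 8, 8), seed: int = 0):
    """Best-of-N over initial noises: sample N candidate noises, generate
    an image from each, and keep the highest-scoring result."""
    rng = np.random.default_rng(seed)
    best_score, best_image = -np.inf, None
    for _ in range(n_candidates):
        noise = rng.standard_normal(shape)
        image = generate_image(noise)
        score = internal_score(image)
        if score > best_score:
            best_score, best_image = score, image
    return best_score, best_image

score, image = best_of_n(8)
```

With a fixed seed, the best score is non-decreasing in N, which is what makes plateau behavior observable: once extra candidates stop improving the score, additional inference-time compute is wasted.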
