

Poster

Posterior Inference with Diffusion Models for High-dimensional Black-box Optimization

Taeyoung Yun · Kiyoung Om · Jaewoo Lee · Sujin Yun · Jinkyoo Park

East Exhibition Hall A-B #E-1109
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Optimizing high-dimensional and complex black-box functions is crucial in numerous scientific applications. While Bayesian optimization (BO) is a powerful method for sample-efficient optimization, it struggles with the curse of dimensionality and scaling to thousands of evaluations. Recently, leveraging generative models to solve black-box optimization problems has emerged as a promising framework. However, those methods often underperform compared to BO methods due to limited expressivity and the difficulty of uncertainty estimation in high-dimensional spaces. To overcome these issues, we introduce DiBO, a novel framework for solving high-dimensional black-box optimization problems. Our method iterates between two stages. First, we train a diffusion model to capture the data distribution and deep ensembles to predict function values with uncertainty quantification. Second, we cast candidate selection as a posterior inference problem to balance exploration and exploitation in high-dimensional spaces. Concretely, we fine-tune diffusion models to amortize posterior inference. Extensive experiments demonstrate that our method outperforms state-of-the-art baselines across synthetic and real-world tasks. Our code is publicly available at https://github.com/umkiyoung/DiBO.
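To make the two-stage loop concrete, here is a minimal NumPy sketch of the candidate-selection idea: sample candidates from a learned "prior" and reweight them by an uncertainty-aware score, so selection trades off exploration and exploitation. This is not the paper's implementation; as stand-in assumptions, the diffusion-model prior is replaced by a Gaussian proposal around the incumbent, the deep ensemble by bootstrapped random-feature regressors, and `f`, `fit_member`, and all hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy black-box objective (stand-in for the expensive function)
    return -np.sum(x**2, axis=-1)

# Previously evaluated points
X = rng.uniform(-2.0, 2.0, size=(64, 8))
y = f(X)

def fit_member(Xb, yb, rng):
    # One "ensemble member": a random-feature ridge-free regressor,
    # standing in for a neural network trained on a bootstrap sample
    W = rng.normal(size=(Xb.shape[1], 128))
    b = rng.uniform(0.0, 2.0 * np.pi, 128)
    Phi = np.cos(Xb @ W + b)
    w, *_ = np.linalg.lstsq(Phi, yb, rcond=None)
    return W, b, w

members = []
for _ in range(5):
    idx = rng.integers(0, len(X), len(X))  # bootstrap resample
    members.append(fit_member(X[idx], y[idx], rng))

def ensemble_predict(Xq):
    # Ensemble mean as the value estimate, std as epistemic uncertainty
    preds = np.stack([np.cos(Xq @ W + b) @ w for W, b, w in members])
    return preds.mean(axis=0), preds.std(axis=0)

# Candidate selection as posterior reweighting: draw from a proposal
# "prior" (a Gaussian around the incumbent here, standing in for the
# diffusion model) and weight by exp(score / temperature)
incumbent = X[np.argmax(y)]
cands = incumbent + 0.5 * rng.normal(size=(256, 8))
mu, sigma = ensemble_predict(cands)
score = mu + 1.0 * sigma               # optimism bonus for exploration
w = np.exp((score - score.max()) / 0.1)
probs = w / w.sum()
chosen = cands[rng.choice(len(cands), size=8, p=probs)]
```

The softmax weighting implements the standard posterior shape p(x | high value) ∝ p(x) exp(score(x)/λ); in the actual method this reweighting is amortized by fine-tuning the diffusion model rather than done by importance sampling.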

Lay Summary:

Many real-world scientific problems, like designing new materials or tuning AI systems, require finding the best solution from a vast space of possibilities. But evaluating each possibility is expensive, and traditional methods like Bayesian optimization struggle when the number of variables gets too high. To tackle this, researchers have recently turned to generative models, which can learn patterns from past attempts and propose smarter guesses. However, these models often fail in complex scenarios due to poor uncertainty estimates and limited flexibility. Our research introduces DiBO, a new method that combines the strengths of generative models and uncertainty-aware prediction to solve these hard problems. DiBO trains a diffusion model to understand the search space, and a deep ensemble to estimate how good each solution might be. We then refine the generative model to focus on the most promising areas, effectively balancing between exploring new ideas and improving known ones. Through extensive experiments, we show that DiBO consistently finds better solutions than existing methods, even in very high-dimensional settings. This makes it a powerful tool for scientific discovery, engineering, and beyond.
