

Poster

IMPACT: Iterative Mask-based Parallel Decoding for Text-to-Audio Generation with Diffusion Modeling

Kuan Po Huang · Shu-wen Yang · Huy Phan · Bo-Ru Lu · Byeonggeun Kim · Sashank Macha · Qingming Tang · Shalini Ghosh · Hung-yi Lee · Chieh-Chi Kao · Chao Wang

East Exhibition Hall A-B #E-3208
[ Project Page ]
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Text-to-audio generation synthesizes realistic sounds or music given a natural language prompt. Diffusion-based frameworks, including the Tango and AudioLDM series, represent the state of the art in text-to-audio generation. Despite achieving high audio fidelity, they incur significant inference latency due to the slow diffusion sampling process. MAGNET, a mask-based model operating on discrete tokens, addresses slow inference through iterative mask-based parallel decoding. However, its audio quality still lags behind that of diffusion-based models. In this work, we introduce IMPACT, a text-to-audio generation framework that achieves high audio quality and fidelity while ensuring fast inference. IMPACT utilizes iterative mask-based parallel decoding in a continuous latent space powered by diffusion modeling. This approach eliminates the fidelity constraints of discrete tokens while maintaining competitive inference speed. Results on AudioCaps demonstrate that IMPACT achieves state-of-the-art performance on key metrics including Fréchet Distance (FD) and Fréchet Audio Distance (FAD) while significantly reducing latency compared to prior models. The project website is available at https://audio-impact.github.io/.
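The core idea described in the abstract, iterative mask-based parallel decoding over a continuous latent space with a diffusion head, can be sketched in a few lines. The sketch below is a hypothetical illustration under assumed names and shapes, not the authors' released implementation: the ToyBackbone, TinyDiffusionHead, the toy denoising update, the cosine masking schedule, and the random commit rule are all stand-ins for the paper's actual architecture and schedule.

```python
# Minimal sketch of iterative mask-based parallel decoding over continuous
# latents with a small diffusion head. All module names, shapes, the toy
# denoising rule, and the cosine schedule are illustrative assumptions.
import math
import torch
import torch.nn as nn

class ToyBackbone(nn.Module):
    """Stand-in for a bidirectional transformer that conditions on text
    and on the partially unmasked latent sequence (hypothetical)."""
    def __init__(self, latent_dim, dim):
        super().__init__()
        self.proj = nn.Linear(latent_dim + 1, dim)  # +1 for a mask flag
        self.mix = nn.Linear(dim, dim)

    def forward(self, latents, masked, text_emb):
        x = torch.cat([latents, masked.float().unsqueeze(-1)], dim=-1)
        h = torch.relu(self.proj(x)) + text_emb  # broadcast text conditioning
        return self.mix(h)                       # (seq_len, dim)

class TinyDiffusionHead(nn.Module):
    """Stand-in for a per-position diffusion head that samples a continuous
    latent conditioned on the backbone output (toy update, not real DDPM)."""
    def __init__(self, dim, latent_dim, steps=4):
        super().__init__()
        self.steps = steps
        self.net = nn.Sequential(nn.Linear(dim + latent_dim, dim), nn.GELU(),
                                 nn.Linear(dim, latent_dim))

    def sample(self, cond):
        z = torch.randn(cond.shape[0], self.net[-1].out_features)  # from noise
        for _ in range(self.steps):                 # a few denoising steps
            z = z - self.net(torch.cat([cond, z], dim=-1))
        return z

@torch.no_grad()
def parallel_decode(backbone, head, text_emb, seq_len, latent_dim, iters=8):
    """Start fully masked; each iteration commits a batch of positions in
    parallel, so the sequence fills in `iters` passes, not `seq_len`."""
    latents = torch.zeros(seq_len, latent_dim)
    masked = torch.ones(seq_len, dtype=torch.bool)
    for t in range(iters):
        cond = backbone(latents, masked, text_emb)
        # Cosine schedule: fraction of positions left masked after step t.
        keep = math.floor(seq_len * math.cos(math.pi / 2 * (t + 1) / iters))
        idx = masked.nonzero(as_tuple=True)[0]
        n_commit = max(len(idx) - keep, 1)
        # Random choice stands in for a confidence-based selection rule.
        chosen = idx[torch.randperm(len(idx))[:n_commit]]
        latents[chosen] = head.sample(cond[chosen])  # sample continuous latents
        masked[chosen] = False
        if not masked.any():
            break
    return latents  # a latent-to-audio decoder (not shown) renders the waveform

backbone = ToyBackbone(latent_dim=8, dim=32)
head = TinyDiffusionHead(dim=32, latent_dim=8)
z = parallel_decode(backbone, head, torch.randn(32), seq_len=16, latent_dim=8)
```

The point of the sketch is the decoding pattern: several latent positions are sampled in parallel per iteration, so only a handful of backbone passes are needed, rather than running a full diffusion sampling chain over the whole sequence as in Tango- or AudioLDM-style models. In a real system the committed latents would live in a trained audio autoencoder space.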

Lay Summary:

Imagine typing a sentence like “a dog barking in the park” and having a computer generate a realistic audio clip to match. This is the goal of text-to-audio generation, but current methods often take a long time to produce high-quality sounds. Some fast models generate sound quickly but sacrifice realism; others sound great but are painfully slow.

Our research introduces IMPACT, a new method that combines the best of both worlds. It generates audio using a technique that masks and fills in missing parts step by step, guided by a simplified version of a powerful method called diffusion modeling. Unlike earlier systems that use inefficient components or only work with rough sound units, IMPACT works in a smooth, continuous space, enabling both realism and speed.

Why does this matter? IMPACT achieves state-of-the-art audio quality on standard benchmarks while being much faster than previous high-quality models. This opens the door for real-time applications like sound design, immersive gaming, and creative tools where both fidelity and responsiveness are crucial.
