

Oral in Workshop: AI Heard That! ICML 2025 Workshop on Machine Learning for Audio

MMMG: A Comprehensive and Reliable Evaluation Suite for Multitask Multimodal Generation

Jihan Yao · Yushi Hu · Yujie Yi · Bin Han · Shangbin Feng · Guang Yang · Bingbing Wen · Ranjay Krishna · Lucy Lu Wang · Yulia Tsvetkov · Noah Smith · Banghua Zhu

Sat 19 Jul 4 p.m. PDT — 4:20 p.m. PDT
 
presentation: AI Heard That! ICML 2025 Workshop on Machine Learning for Audio
Sat 19 Jul 9 a.m. PDT — 5 p.m. PDT

Abstract:

Automatically evaluating multimodal generation presents a significant challenge, as automated metrics often fail to align reliably with human evaluation, especially for complex tasks involving multiple modalities. To address this, we present MMMG, a comprehensive and human-aligned benchmark for multimodal generation across four modality combinations (image, audio, interleaved text and image, interleaved text and audio), focusing on tasks that remain challenging for generation models while still permitting reliable automatic evaluation through a combination of models and programs. MMMG encompasses 49 tasks (29 newly developed), each with a carefully designed evaluation pipeline, and 937 instructions that systematically assess reasoning, controllability, and other key capabilities of multimodal generation models. Extensive validation demonstrates that MMMG is highly aligned with human evaluation, achieving an average agreement of 94.3%. Benchmarking 24 multimodal generation models reveals that even GPT Image, the state-of-the-art model, which achieves 78.3% accuracy on image generation, falls short on multimodal reasoning and interleaved generation. The results also indicate considerable headroom for improvement in audio generation, highlighting an important direction for future research.
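The abstract reports an average human-benchmark agreement of 94.3% but does not spell out how agreement is computed. Below is a minimal Python sketch of one plausible computation, assuming binary pass/fail judgments per instruction from both the automatic pipeline and human raters, macro-averaged across tasks; all function names and data here are hypothetical, not taken from the paper.

    # Hypothetical sketch: assumes each instruction yields a binary pass/fail
    # judgment from the automatic pipeline and from a human rater.
    from collections import defaultdict

    def agreement_rate(records):
        """records: iterable of (task_id, auto_pass, human_pass) triples,
        one per instruction. Returns (macro-average, per-task scores)."""
        per_task = defaultdict(list)
        for task_id, auto_pass, human_pass in records:
            per_task[task_id].append(auto_pass == human_pass)
        # Agreement within each task, then a macro-average across tasks.
        task_scores = {t: sum(m) / len(m) for t, m in per_task.items()}
        return sum(task_scores.values()) / len(task_scores), task_scores

    # Toy usage with made-up judgments:
    records = [
        ("image_counting", True, True),
        ("image_counting", False, True),
        ("audio_pitch", True, True),
        ("audio_pitch", False, False),
    ]
    avg, per_task = agreement_rate(records)
    print(f"macro-average agreement: {avg:.1%}")  # 75.0% on this toy data

Macro-averaging weights each task equally, which matches the abstract's framing of per-task evaluation pipelines; a micro-average over all 937 instructions would be an equally plausible reading.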
