

Poster in Workshop: AI Heard That! ICML 2025 Workshop on Machine Learning for Audio

Large-Scale Training Data Attribution for Music Generative Models via Unlearning

Woosung Choi · Junghyun (Tony) Koo · Kin Wai Cheuk · Joan Serrà · Marco Martínez-Ramírez · Yukara Ikemiya · Naoki Murata · Yuhta Takida · WeiHsiang Liao · Yuki Mitsufuji

Presentation: AI Heard That! ICML 2025 Workshop on Machine Learning for Audio
Sat 19 Jul, 9 a.m. – 5 p.m. PDT

Abstract:

This paper explores the use of unlearning methods for training data attribution (TDA) in music generative models trained on large-scale datasets. TDA aims to identify which specific training data points contributed to the generation of a particular output from a specific model. This is crucial in the context of AI-generated music, where proper recognition and credit for original artists are generally overlooked. By enabling white-box attribution, our work supports a fairer system for acknowledging artistic contributions and addresses pressing concerns related to AI ethics and copyright. We apply unlearning-based attribution to a text-to-music diffusion model trained on a large-scale dataset and investigate its feasibility and behavior in this setting. To validate the method, we perform a grid search over different hyperparameter configurations and quantitatively evaluate the consistency of the unlearning approach. We then compare attribution patterns from unlearning with those from a similarity-based approach. Our findings suggest that unlearning-based approaches can be effectively adapted to music generative models, introducing large-scale TDA to this domain and paving the way for more ethical and accountable AI systems for music creation.
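To make the core idea concrete, below is a minimal sketch of unlearning-based attribution scoring. This is an illustration of the general technique, not the paper's actual procedure: the function unlearning_attribution, the gradient-ascent unlearning step, the SGD optimizer, the hyperparameters, and the toy linear model standing in for a text-to-music diffusion model are all assumptions made here for illustration.

```python
import copy
import torch
import torch.nn.functional as F

def unlearning_attribution(model, train_example, generated_output, loss_fn,
                           lr=1e-4, steps=10):
    """Score one training example by unlearning it and measuring how much
    the model's fit to a given generated output degrades.

    A larger score suggests the example contributed more to that output.
    (Illustrative sketch; the paper's exact unlearning recipe may differ.)
    """
    # Loss on the generated output before unlearning.
    with torch.no_grad():
        base_loss = loss_fn(model, generated_output).item()

    # "Unlearn" the candidate example: gradient ascent on its training loss,
    # applied to a copy so the original model is left untouched.
    unlearned = copy.deepcopy(model)
    opt = torch.optim.SGD(unlearned.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -loss_fn(unlearned, train_example)  # negated -> ascent
        loss.backward()
        opt.step()

    # Attribution score = increase in the generated output's loss.
    with torch.no_grad():
        new_loss = loss_fn(unlearned, generated_output).item()
    return new_loss - base_loss

# Toy usage: a linear model and MSE reconstruction loss stand in for the
# diffusion model and its denoising objective (hypothetical stand-ins).
model = torch.nn.Linear(8, 8)
loss_fn = lambda m, x: F.mse_loss(m(x), x)
x_train = torch.randn(8)   # one candidate training example
x_gen = torch.randn(8)     # one generated output to attribute
print(f"attribution score: {unlearning_attribution(model, x_train, x_gen, loss_fn):.4f}")
```

In practice, running such a loop per training example is expensive at scale, which is presumably why the abstract emphasizes a grid search over hyperparameter configurations and a consistency evaluation of the unlearning approach.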
