

Poster in Workshop: Scaling Up Intervention Models

Towards Causal Representation Learning with Observable Sources as Auxiliaries

Kwonho Kim · Heejeong Nam · Inwoo Hwang · Sanghack Lee


Abstract:

Causal representation learning seeks to uncover latent variables that generate observed data, especially within nonlinear ICA frameworks. A central challenge is identifiability, as infinitely many spurious solutions can exist. Prior work often assumes conditional independence of latents given auxiliary variables not involved in the mixing function—a condition rarely met in real-world scenarios. To address this issue, we study a more realistic setting where observed sources serve as auxiliary variables. We introduce a novel framework that systematically selects suitable auxiliaries to improve latent recoverability while satisfying identifiability conditions. To our knowledge, this is the first approach to establish identifiability in such a setting. By leveraging the graphical structure of latent variables, our method enhances both identifiability and recoverability, pushing the boundaries of existing techniques in causal representation learning.
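To make the setup concrete, here is a minimal toy sketch (not the paper's method) of the nonlinear ICA data-generating process the abstract describes: latent variables that are conditionally independent given an auxiliary variable, pushed through a nonlinear mixing function. All dimensions, distributions, and the mixing map below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 2  # illustrative sample size and latent dimension

# Observed auxiliary variable u (e.g., an observable source taking 3 values).
u = rng.integers(0, 3, size=n)

# Conditional independence given the auxiliary: p(z | u) = prod_i p(z_i | u).
# Here u only shifts the mean of each latent coordinate independently.
means = np.array([[0.0, 0.0], [2.0, -1.0], [-2.0, 1.0]])
z = means[u] + rng.normal(size=(n, d))

# A fixed injective nonlinear mixing f maps latents to observations x = f(z).
A = np.array([[1.0, 0.5], [-0.3, 1.2]])  # arbitrary invertible linear part
def f(z):
    return np.tanh(z @ A.T) + 0.1 * (z @ A.T)

x = f(z)
print(x.shape)  # (1000, 2)
```

The identifiability question is whether an encoder trained on (x, u) alone can recover z up to benign ambiguities; the abstract's point is that when u is itself an observed source rather than an external label, extra care in choosing auxiliaries is needed.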
