Spotlight Poster
Exogenous Isomorphism for Counterfactual Identifiability
Yikang Chen · Dehui du
East Exhibition Hall A-B #E-1906
Causal models can answer hypothetical “what-if” questions, but different models may yield different answers—a phenomenon known as the counterfactual identification problem. This inconsistency makes it difficult for researchers and decision-makers to know which predictions to trust.

To address this challenge, we introduce the concept of exogenous isomorphism, which aligns the latent components of different models so that they produce consistent answers to every “what-if” query. We then identify sufficient assumptions that guarantee this alignment across two well-studied model families. Finally, we demonstrate the practical feasibility of our approach by implementing it with neural networks and validating its performance on simulated datasets.

Guaranteeing that all models constructed under the same assumptions produce identical answers enhances the reliability of counterfactual reasoning. This consistency is crucial for domains such as healthcare, economics, and policymaking, where trustworthy “what-if” analyses underpin sound decisions.
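The counterfactual identification problem can be made concrete with a classic toy example (not from this paper): two structural causal models over binary variables that agree on every interventional distribution yet disagree on a counterfactual query. The models, variable names, and noise distribution below are all illustrative assumptions.

```python
# Two toy SCMs over binary X, Y with exogenous noise U ~ Bernoulli(0.5).
# Model A sets Y := U (ignoring X); Model B sets Y := U if X=1 else 1-U.
# These are hypothetical models chosen to exhibit the identification problem.

def f_A(x, u):
    return u

def f_B(x, u):
    return u if x == 1 else 1 - u

def interventional(f, x):
    # P(Y=1 | do(X=x)), averaging over U ~ Bernoulli(0.5)
    return sum(f(x, u) for u in (0, 1)) / 2

# Both models agree on every interventional query ...
assert interventional(f_A, 0) == interventional(f_B, 0) == 0.5
assert interventional(f_A, 1) == interventional(f_B, 1) == 0.5

def counterfactual(f, x_obs, y_obs, x_cf):
    # Abduction: keep the U values consistent with the observed evidence;
    # action + prediction: re-evaluate Y under do(X=x_cf).
    us = [u for u in (0, 1) if f(x_obs, u) == y_obs]
    return [f(x_cf, u) for u in us]

# ... yet disagree on "what would Y have been under do(X=0),
# given that we observed X=1 and Y=1?"
print(counterfactual(f_A, 1, 1, 0))  # Model A answers [1]
print(counterfactual(f_B, 1, 1, 0))  # Model B answers [0]
```

An alignment of the models' latent (exogenous) components, such as the exogenous isomorphism proposed here, is exactly what rules out this kind of disagreement.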