

Spotlight Poster

One-Step Generalization Ratio Guided Optimization for Domain Generalization

Sumin Cho · Dongwon Kim · Kwangsu Kim

East Exhibition Hall A-B #E-1911
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT
 
Oral presentation: Oral 3E Causality and Domain Generalization
Wed 16 Jul 10 a.m. PDT — 11 a.m. PDT

Abstract:

Domain Generalization (DG) aims to train models that generalize to unseen target domains, but such models often overfit to domain-specific features, known as undesired correlations. Gradient-based DG methods typically guide gradients in a dominant direction but often inadvertently reinforce spurious correlations. Recent work has employed dropout to regularize overconfident parameters, but has not explicitly adjusted gradient alignment or ensured balanced parameter updates. We propose GENIE (Generalization-ENhancing Iterative Equalizer), a novel optimizer that leverages the One-Step Generalization Ratio (OSGR) to quantify each parameter's contribution to loss reduction and assess gradient alignment. By dynamically equalizing OSGR via a preconditioning factor, GENIE prevents a small subset of parameters from dominating optimization, thereby promoting domain-invariant feature learning. Theoretically, GENIE balances convergence contribution and gradient alignment among parameters, achieving higher OSGR while retaining SGD's convergence rate. Empirically, it outperforms existing optimizers and enhances performance when integrated with various DG and single-DG methods.
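To make the preconditioning idea concrete, the following is a minimal, hypothetical sketch of a GENIE-style update step. The abstract does not specify the update rule, so everything here is illustrative: the function name `genie_like_step`, the use of an exponential moving average of gradients as a proxy for each parameter's contribution, and the mean-over-contribution preconditioning factor are all assumptions standing in for the paper's actual OSGR-based equalization.

```python
import numpy as np

def genie_like_step(w, grad, ema_grad, lr=0.01, beta=0.9, eps=1e-8, cap=10.0):
    """One preconditioned step in the spirit of GENIE (illustrative sketch only).

    A per-parameter "contribution" is estimated from an exponential moving
    average of the gradient; the preconditioning factor then rescales updates
    so that no small subset of parameters dominates the step -- a crude
    stand-in for equalizing the paper's One-Step Generalization Ratio.
    """
    ema_grad = beta * ema_grad + (1 - beta) * grad
    contribution = np.abs(ema_grad)                     # proxy for per-parameter influence
    precond = contribution.mean() / (contribution + eps)
    precond = np.clip(precond, 0.0, cap)                # keep tiny-gradient parameters stable
    w = w - lr * precond * grad
    return w, ema_grad

# Toy usage: minimize 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([5.0, -3.0])
ema = np.zeros_like(w)
for _ in range(500):
    w, ema = genie_like_step(w, w.copy(), ema)
```

Note the design intent: parameters with large, persistent gradients receive a factor below one while weaker parameters are boosted, so absolute update magnitudes are pulled toward a common scale instead of letting a few dominant parameters drive optimization.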

Lay Summary:

Domain Generalization (DG) aims to train models that perform well on unseen domains. However, existing methods often overfit to domain-specific patterns, failing to generalize robustly. Our work introduces GENIE, a novel optimizer that balances the contribution of each model parameter to generalization. GENIE identifies which parameters help or hurt generalization using a theoretical metric called the One-Step Generalization Ratio (OSGR), and dynamically adjusts their updates during training. This prevents over-reliance on a small set of spurious features and promotes learning of domain-invariant representations. As a result, GENIE improves generalization across diverse domains and can be seamlessly integrated with existing domain generalization methods.
