Poster in Affinity Workshop: New In ML
AIB: Adaptive Information Bottleneck via Bayesian Optimization for Robust Training with Noisy Labels
Kunyu Zhang
Abstract:
The problem of learning with noisy labels is aggravated when the noise rate drifts during training, yet most Information Bottleneck (IB) methods still rely on a \emph{fixed} compression–prediction trade-off controlled by a constant $\beta$. A natural question therefore arises: can a single, static $\beta$ truly accommodate the changing reliability of data? In this work, we show that the answer is no. Specifically, we propose an \emph{adaptive Information Bottleneck} framework in which a Bayesian-optimisation controller continuously calibrates $\beta$ \emph{on the fly}. The system first estimates the current label-noise level and then, via a Gaussian-process surrogate, predicts how different $\beta$ settings will affect validation accuracy. By maximising an acquisition function that balances exploration and exploitation, the controller automatically tightens the bottleneck under severe noise and relaxes it as data quality improves, thereby mitigating the detrimental effect of mislabeled samples without manual tuning. We instantiate the encoder with a Transformer backbone and pair it with a residual, self-attentive decoder to ensure both expressive feature extraction and faithful reconstruction through the stochastic bottleneck. Extensive experiments on synthetically corrupted and naturally noisy benchmarks confirm that the proposed closed-loop scheme consistently surpasses static IB baselines, delivering superior generalisation while requiring \emph{zero} human intervention.
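The closed-loop calibration the abstract describes can be illustrated with a minimal sketch: a Gaussian-process surrogate over observed $(\beta, \text{validation accuracy})$ pairs, and an upper-confidence-bound acquisition that trades off exploration (posterior uncertainty) against exploitation (posterior mean). Everything here is illustrative, not the authors' implementation: the kernel length scale, the UCB acquisition, and the `validation_accuracy` stand-in (a quadratic with a hypothetical optimum at $\beta^\star = 0.3$, in place of a real train/validate round) are all assumptions.

```python
import numpy as np

def rbf_kernel(a, b, length=0.25, var=1.0):
    # Squared-exponential kernel over scalar beta values.
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-4):
    # GP posterior mean/std at candidate betas Xs given observations (X, y).
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y - y.mean()))
    mu = y.mean() + Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf_kernel(Xs, Xs).diagonal() - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def propose_beta(history, candidates, kappa=2.0):
    # UCB acquisition: pick the beta maximising mean + kappa * std,
    # balancing exploitation (mean) against exploration (std).
    X = np.array([b for b, _ in history])
    y = np.array([a for _, a in history])
    mu, sigma = gp_posterior(X, y, candidates)
    return float(candidates[np.argmax(mu + kappa * sigma)])

def validation_accuracy(beta):
    # Hypothetical stand-in for one train/validate round; under moderate
    # label noise we assume a single interior optimum at beta* = 0.3.
    return 0.9 - 2.0 * (beta - 0.3) ** 2

candidates = np.linspace(0.01, 1.0, 100)
history = [(b, validation_accuracy(b)) for b in (0.01, 1.0)]  # warm start
for _ in range(8):  # closed-loop calibration rounds
    b = propose_beta(history, candidates)
    history.append((b, validation_accuracy(b)))

best_beta, best_acc = max(history, key=lambda t: t[1])
print(f"selected beta = {best_beta:.2f}, val acc = {best_acc:.3f}")
```

In the full system the surrogate would be refit as the estimated noise level drifts, so the proposed $\beta$ tightens or relaxes the bottleneck over the course of training rather than being fixed once.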