

Poster in Affinity Workshop: New In ML

CAMERO: An Uncertainty-Aware Multi-Resolution Pre-training Framework for Self-Supervised Time-Series Modeling

Junyao Wang · Halima Bouzidi · Mohammad Al Faruque


Abstract:

Deep learning-based time-series modeling often encounters uncertain or incomplete observations in real-world applications, arising from data inconsistencies or task-specific constraints. Transformers are increasingly adopted for time-series data due to their ability to capture long-range temporal dependencies. However, existing approaches often assign equal weight to all input tokens regardless of their reliability, so imputed or corrupted tokens may disproportionately influence the attention mechanism and degrade learning outcomes. We introduce CAMERO, a novel self-supervised Transformer-based pre-training framework that explicitly models token-level uncertainty to improve robustness. Our approach comprises two key components: (1) a lightweight variational module that estimates uncertainty for each token embedding, and (2) an uncertainty-aware attention mechanism that dynamically adjusts attention scores, enabling the model to focus on reliable information while down-weighting noisy inputs. To further capture temporal patterns across multiple timescales, we adopt a multi-resolution Transformer encoder and train the model with a combination of masked reconstruction and contrastive learning. CAMERO can be used both as an end-to-end model and as a pre-training backbone for downstream tasks. Extensive experiments across multiple benchmarks demonstrate that CAMERO consistently outperforms state-of-the-art self-supervised models.
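
The abstract does not include code, but the two key components can be sketched concretely. Below is a minimal, hypothetical PyTorch sketch (all class and variable names are our own, not taken from the paper): a lightweight variational head predicts a per-token mean and log-variance, and the resulting per-token variance is subtracted from the pre-softmax attention logits so that unreliable keys receive less attention mass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TokenUncertainty(nn.Module):
    """Hypothetical lightweight variational head: predicts a mean and
    log-variance for each token embedding; higher predicted variance is
    treated as higher uncertainty (a less reliable token)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.mu = nn.Linear(d_model, d_model)
        self.log_var = nn.Linear(d_model, d_model)

    def forward(self, x):
        mu, log_var = self.mu(x), self.log_var(x)
        if self.training:
            # Reparameterization trick: sample around the predicted mean.
            x = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
        else:
            x = mu
        # Scalar uncertainty per token: mean predicted variance.
        uncertainty = torch.exp(log_var).mean(dim=-1)  # (batch, seq_len)
        return x, uncertainty


class UncertaintyAwareAttention(nn.Module):
    """Single-head self-attention whose logits are penalized by the
    uncertainty of each *key* token, so imputed or corrupted positions
    receive less attention mass after the softmax."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x, uncertainty):
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) * self.scale  # (B, T, T)
        # Down-weight noisy keys by subtracting their uncertainty.
        scores = scores - uncertainty.unsqueeze(1)     # broadcast over queries
        return F.softmax(scores, dim=-1) @ v


# Usage with assumed shapes: batch of 8, 96 time steps, 64-dim embeddings.
x = torch.randn(8, 96, 64)
embed, u = TokenUncertainty(64)(x)
out = UncertaintyAwareAttention(64)(embed, u)  # (8, 96, 64)
```

Subtracting the uncertainty from the pre-softmax logits is one simple way to realize "dynamically adjusting attention scores"; the paper's actual parameterization, multi-resolution encoder, and combined masked-reconstruction/contrastive objective may differ.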
