

Poster

Time Series Representations with Hard-Coded Invariances

Thibaut Germain · Chrysoula Kosma · Laurent Oudre

East Exhibition Hall A-B #E-1704
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Automatically extracting robust representations from large and complex time series data is becoming imperative for several real-world applications. Unfortunately, the potential of common neural network architectures to capture invariant properties of time series remains relatively underexplored. For instance, convolutional layers often fail to capture underlying patterns in time series inputs that encompass strong deformations, such as trends. Indeed, invariance to some deformations may be critical for solving complex time series tasks, such as classification, while guaranteeing good generalization performance. To address these challenges, we mathematically formulate and technically design efficient, hard-coded invariant convolutions for specific group actions applicable to time series. We construct these convolutions by considering sets of deformations commonly observed in time series, including scaling, offset shift, and trend. We further combine the proposed invariant convolutions with standard convolutions in single embedding layers, and we showcase the layer's capacity to capture complex invariant time series properties in several scenarios.

Lay Summary:

Time series data, such as physiological signals, often contain distortions, such as baseline wander, that can mislead the training of neural networks. For example, a long-term trend can mask periodic signal patterns, preventing convolutional networks from modeling the data correctly. To address this, we propose a mathematical framework that uses group actions to model how certain deformations affect time series data. Building on it, we propose deformation-free representations of time series. These representations allow neural networks to learn features that are inherently robust to common time series distortions, such as offset shift and linear trend. Combined with standard convolutional layers, these deformation-invariant representations can improve the network's robustness across tasks like classification, anomaly detection, and transfer learning.
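To make the idea concrete, here is a minimal sketch of one way a convolution can be made invariant to offset shift and linear trend: project the kernel onto the orthogonal complement of the span of the constant and linear functions on the window, so that any affine component added to the input contributes nothing to the output. This is an illustrative construction, not the paper's actual implementation; the function name `invariant_conv1d` is hypothetical, and scaling invariance would additionally require a normalization step not shown here.

```python
import numpy as np

def invariant_conv1d(x, kernel):
    """Correlate x with a kernel projected orthogonally to the constant
    and linear components on its window, making the output invariant to
    any offset shift or linear trend added to x.
    (Illustrative sketch; not the paper's exact construction.)"""
    k = len(kernel)
    t = np.arange(k, dtype=float)
    # Orthogonal basis of the "deformation" subspace span{1, t} on the window
    B = np.stack([np.ones(k), t - t.mean()], axis=1)
    B /= np.linalg.norm(B, axis=0)
    # Remove the kernel's projection onto span{1, t}
    k_inv = kernel - B @ (B.T @ kernel)
    # np.convolve flips the kernel, so flip back to compute a correlation
    return np.convolve(x, k_inv[::-1], mode="valid")

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
w = rng.standard_normal(9)
x_deformed = x + 3.0 + 0.05 * np.arange(200)  # add offset and linear trend
y_clean = invariant_conv1d(x, w)
y_deformed = invariant_conv1d(x_deformed, w)
print(np.allclose(y_clean, y_deformed))  # the two outputs coincide
```

Because the projected kernel is orthogonal to every affine function on its window, the added trend and offset cancel exactly in each inner product, up to floating-point error.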
