

Poster in Workshop: 1st Workshop on Foundation Models for Structured Data (FMSD)

Towards Interpretable Time Series Foundation Models

Matthieu Boileau · Philippe Helluy · Jérémy Pawlus · Svitlana Vyetrenko


Abstract:

In this paper, we investigate the distillation of time series reasoning capabilities into small, instruction-tuned language models as a step toward building interpretable time series foundation models. Leveraging a synthetic dataset of mean-reverting time series with systematically varied trends and noise levels, we generate natural language annotations using a large multimodal model and use these to supervise the fine-tuning of compact Qwen models. We introduce evaluation metrics that assess the quality of the distilled reasoning, focusing on trend direction, noise intensity, and extremum localization, and show that the post-trained models acquire meaningful interpretive capabilities. Our results highlight the feasibility of compressing time series understanding into lightweight, language-capable models suitable for on-device or privacy-sensitive deployment. This work contributes a concrete foundation toward developing small, interpretable models that explain temporal patterns in natural language.
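The abstract does not specify the exact generator behind the synthetic dataset, but a mean-reverting series with a controllable trend and noise level is commonly produced from an Ornstein-Uhlenbeck process plus a linear drift. The sketch below is an illustrative assumption, not the authors' implementation; the parameter names (`theta`, `sigma`, `trend`) are hypothetical.

```python
import numpy as np

def simulate_series(n_steps=256, theta=0.1, mu=0.0, sigma=0.3,
                    trend=0.01, dt=1.0, seed=0):
    """Hypothetical generator: Ornstein-Uhlenbeck (mean-reverting)
    dynamics with a superimposed linear trend.

    Euler-Maruyama update:
        x[t] = x[t-1] + theta * (mu - x[t-1]) * dt
                      + sigma * sqrt(dt) * eps,   eps ~ N(0, 1)
    A deterministic component trend * t is then added on top.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        x[t] = (x[t - 1]
                + theta * (mu - x[t - 1]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x + trend * np.arange(n_steps)

# Sweeping `trend` and `sigma` over grids would yield series whose
# trend direction and noise intensity are known by construction,
# which is what annotation-quality metrics can be scored against.
series = simulate_series(trend=0.05, sigma=0.1)
```

Because the ground-truth trend and noise level are set by the generator, model-produced natural language descriptions can be checked against them directly.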
