

Poster in Workshop: TerraBytes: Towards global datasets and models for Earth Observation

Shaping Fine-Tuning of Geospatial Foundation Models: Effects of Label Availability and Temporal Resolution

Giovanni Castiglioni · Nicolás Fernández · Cristian Calderon · Javiera Castillo Navarro · Sébastien Lefèvre · Valentin Barriere

Sat 19 Jul 2 p.m. PDT — 3 p.m. PDT
 
Presentation: TerraBytes: Towards global datasets and models for Earth Observation
Sat 19 Jul 9 a.m. PDT — 5:30 p.m. PDT

Abstract:

Fine-tuning foundation models is a key step in adapting them to a particular task. In the case of Geospatial Foundation Models (GFMs), fine-tuning can be particularly challenging given data scarcity, both in the amount of labeled data and, for Satellite Image Time Series (SITS), in the available temporal context. Under these circumstances, the optimal GFM fine-tuning strategy across different labeled-data regimes remains poorly understood. In this paper, we thoroughly assess the performance of two different GFMs under several combinations of two data-scarcity factors: the number of labeled samples and the sequence length. Specifically, we analyze performance on a crop classification task, framed as semantic segmentation, using the Sentinel-2 images of the PASTIS-HD dataset. We compare the GFMs to U-TAE, a fully supervised baseline, across varying amounts of labeled data (1%, 10%, 50%, 100%) and temporal input lengths (1, 6, 15, 25, and 35 time steps) under different training configurations. Among these explorations, we find that using a smaller learning rate for the pre-trained encoders improves performance in moderate and high data regimes (50%-100%). In contrast, full fine-tuning outperforms partial fine-tuning in very low-label settings (1%-10%). This behavior suggests a nuanced trade-off between feature reuse and adaptation that defies the intuition of standard transfer learning.
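To make the contrast between the two fine-tuning strategies concrete, below is a minimal PyTorch sketch using optimizer parameter groups. The encoder/decoder modules, learning-rate values, and class count are illustrative assumptions for this sketch; they are not the authors' code and do not reflect the actual GFM or U-TAE architectures.

```python
import torch
from torch import nn

# Illustrative stand-ins for a pre-trained GFM encoder and a randomly
# initialized segmentation head; the real architectures are far larger.
num_classes = 20  # placeholder class count, not taken from the paper
encoder = nn.Conv2d(10, 64, kernel_size=3, padding=1)  # "pre-trained" backbone
decoder = nn.Conv2d(64, num_classes, kernel_size=1)    # task-specific head
model = nn.Sequential(encoder, decoder)

# Strategy 1: a smaller learning rate for the pre-trained encoder than for
# the new head, which preserves pre-trained features while the head adapts
# (the configuration found to help in the 50%-100% label regimes).
optimizer_partial = torch.optim.AdamW(
    [
        {"params": encoder.parameters(), "lr": 1e-5},  # assumed LR values,
        {"params": decoder.parameters(), "lr": 1e-4},  # for illustration only
    ]
)

# Strategy 2: full fine-tuning with a single learning rate for all
# parameters, letting the encoder adapt freely to the downstream task
# (the configuration found to win in the 1%-10% low-label regimes).
optimizer_full = torch.optim.AdamW(model.parameters(), lr=1e-4)
```

The design difference is only in how parameters are grouped for the optimizer: both strategies update every weight, but the first damps updates to the pre-trained encoder, trading adaptation for feature reuse.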
