Poster in Workshop: 2nd Workshop on Test-Time Adaptation: Putting Updates to the Test (PUT)
Test-time Offline Reinforcement Learning on Goal-related Experience
Marco Bagatella · Mert Albaba · Jonas Hübotter · Georg Martius · Andreas Krause
Foundation models compress a large amount of information into a single, large neural network, which can then be queried for individual tasks. There are strong parallels between this widespread framework and offline goal-conditioned reinforcement learning algorithms: a universal value function is trained on a large number of goals, and the policy is evaluated on a single goal in each test episode. Extensive research on foundation models has shown that performance can be substantially improved through test-time training, which specializes the model to the current task. Similarly, we find that test-time offline reinforcement learning on experience related to the test goal can lead to substantially better policies at minimal compute cost. We propose a novel self-supervised data selection criterion, which selects transitions from an offline dataset according to their relevance to the current state and their quality with respect to the evaluation goal. We demonstrate across a wide range of high-dimensional loco-navigation and manipulation tasks that fine-tuning a policy on the selected data for a few gradient steps leads to significant performance gains over standard offline pre-training.
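The abstract only outlines the procedure, so the following is a minimal sketch of what such a test-time step could look like, not the authors' implementation: it assumes a pre-trained goal-conditioned policy, a learned value function value_fn(s, a, g) standing in for the universal value function, and a simple additive score combining relevance (distance of dataset states to the current state) with quality (estimated value of the stored action for the test goal); the function names, the behavior-cloning update, and hyperparameters such as top_k are all illustrative assumptions.

# Hypothetical sketch: select goal-related transitions and fine-tune at test time.
# score_transitions, value_fn, and the BC update below are assumptions,
# not the paper's actual selection criterion or objective.
import torch
import torch.nn.functional as F

def score_transitions(states, actions, current_state, goal, value_fn, temperature=1.0):
    # Relevance: negative distance between dataset states and the current state.
    relevance = -torch.linalg.norm(states - current_state, dim=-1)
    # Quality: estimated value of the stored action for reaching the test goal.
    quality = value_fn(states, actions, goal.expand(len(states), -1))
    return relevance / temperature + quality

def test_time_finetune(policy, value_fn, dataset, current_state, goal,
                       top_k=256, steps=10, lr=1e-4):
    states, actions = dataset["states"], dataset["actions"]
    with torch.no_grad():
        scores = score_transitions(states, actions, current_state, goal, value_fn)
        idx = scores.topk(top_k).indices
    sel_states, sel_actions = states[idx], actions[idx]

    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    goal_batch = goal.expand(top_k, -1)
    for _ in range(steps):
        # A few gradient steps of goal-conditioned behavior cloning
        # on the selected transitions (one plausible fine-tuning objective).
        pred_actions = policy(torch.cat([sel_states, goal_batch], dim=-1))
        loss = F.mse_loss(pred_actions, sel_actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy

In this sketch the specialized policy would be re-derived cheaply for each test goal (or even each episode) from the same offline dataset, which is what keeps the added compute small relative to pre-training.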