Poster
Vintix: Action Model via In-Context Reinforcement Learning
Andrei Polubarov · Nikita Lyubaykin · Alexander Derevyagin · Ilya Zisman · Denis Tarasov · Alexander Nikulin · Vladislav Kurenkov
West Exhibition Hall B2-B3 #W-603
In-Context Reinforcement Learning (ICRL) represents a promising paradigm for developing generalist agents that learn at inference time through trial-and-error interactions, analogous to how large language models adapt contextually, but with a focus on reward maximization. However, the scalability of ICRL beyond toy tasks and single-domain settings remains an open challenge. In this work, we present the first steps toward scaling ICRL by introducing a fixed, cross-domain model capable of learning behaviors through in-context reinforcement learning. Our results demonstrate that Algorithm Distillation, a framework designed to facilitate ICRL, offers a compelling and competitive alternative to expert distillation for constructing versatile action models. These findings highlight the potential of ICRL as a scalable approach for generalist decision-making systems.
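To make the Algorithm Distillation idea referenced in the abstract concrete, the sketch below shows the general recipe as it is usually described: a causal sequence model is trained to predict actions from multi-episode learning histories of a source RL algorithm, so that at deployment the frozen model improves purely through its growing context. This is a minimal illustrative sketch, not the paper's architecture or training setup; the environment stub, dimensions, and the `fake_learning_history` helper are hypothetical placeholders.

```python
# Minimal sketch of the Algorithm Distillation recipe assumed to underlie ICRL:
# train a causal sequence model on cross-episode learning histories, then rely on
# the context alone for improvement at inference. All sizes and the data stub
# below are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

CONTEXT_LEN, STATE_DIM, NUM_ACTIONS, D_MODEL = 256, 4, 3, 64

class HistoryPolicy(nn.Module):
    """Causal transformer over (state, previous action, previous reward) tokens."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(STATE_DIM + NUM_ACTIONS + 1, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, NUM_ACTIONS)

    def forward(self, tokens):  # tokens: (batch, time, features)
        T = tokens.shape[1]
        mask = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.backbone(self.embed(tokens), mask=mask)
        return self.head(h)  # per-timestep action logits

def fake_learning_history(batch=8):
    """Hypothetical stand-in for logged histories of a source RL algorithm."""
    states = torch.randn(batch, CONTEXT_LEN, STATE_DIM)
    prev_actions = nn.functional.one_hot(
        torch.randint(NUM_ACTIONS, (batch, CONTEXT_LEN)), NUM_ACTIONS).float()
    prev_rewards = torch.randn(batch, CONTEXT_LEN, 1)
    target_actions = torch.randint(NUM_ACTIONS, (batch, CONTEXT_LEN))
    tokens = torch.cat([states, prev_actions, prev_rewards], dim=-1)
    return tokens, target_actions

model = HistoryPolicy()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
for step in range(3):  # distillation loop, truncated for illustration
    tokens, targets = fake_learning_history()
    logits = model(tokens)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, NUM_ACTIONS), targets.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: loss {loss.item():.3f}")
# At deployment the weights stay frozen; only the interaction history in the
# context window changes, which is what "learning in context" refers to here.
```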
What if AI agents could learn new tasks on the fly—just by interacting with their environment—without ever being retrained? In-Context Reinforcement Learning (ICRL) promises exactly that, but until now it hasn't scaled beyond toy examples or single-domain settings. Our work takes a step forward: we introduce a single model that learns through inference-time trial and error across a wide variety of tasks and environments. No task-specific tuning. Just adaptation in real time. This is a glimpse into the future of generalist AI—agents that learn the way we do: by doing.