

Poster

AdaWorld: Learning Adaptable World Models with Latent Actions

Shenyuan Gao · Siyuan Zhou · Yilun Du · Jun Zhang · Chuang Gan

West Exhibition Hall B2-B3 #W-101
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

World models aim to learn action-controlled future prediction and have proven essential for the development of intelligent agents. However, most existing world models rely heavily on substantial action-labeled data and costly training, making it challenging to adapt to novel environments with heterogeneous actions through limited interactions. This can hinder their applicability across broader domains. To overcome this limitation, we propose AdaWorld, a world model learning approach that enables efficient adaptation. The key idea is to incorporate action information during the pretraining of world models. We achieve this by extracting latent actions from videos in a self-supervised manner, capturing the most critical transitions between frames. We then develop an autoregressive world model that conditions on these latent actions. This learning paradigm yields highly adaptable world models, facilitating efficient transfer and learning of new actions even with limited interactions and finetuning. Our comprehensive experiments across multiple environments demonstrate that AdaWorld achieves superior performance in both simulation quality and visual planning.
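
The abstract describes a two-stage recipe: extract latent actions from unlabeled video with a self-supervised objective, then train an autoregressive world model conditioned on those latents. Below is a minimal PyTorch sketch of that idea, not the authors' code: the network sizes, the 64x64 frame resolution, the 8-dimensional continuous latent, and the plain reconstruction loss are all illustrative assumptions; AdaWorld's actual architecture and training objective are specified in the paper.

import torch
import torch.nn as nn

class LatentActionEncoder(nn.Module):
    """Infers a compact latent action from a pair of consecutive frames."""
    def __init__(self, latent_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),  # low-dim bottleneck forces "action-like" info
        )

    def forward(self, frame_t, frame_tp1):
        # Stack the two frames along the channel axis and compress the change.
        return self.net(torch.cat([frame_t, frame_tp1], dim=1))

class LatentActionWorldModel(nn.Module):
    """Predicts the next frame from the current frame and a latent action."""
    def __init__(self, latent_dim: int = 8):
        super().__init__()
        self.frame_enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.action_proj = nn.Linear(latent_dim, 64)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, frame_t, latent_action):
        h = self.frame_enc(frame_t)
        # Condition the spatial features on the latent action (broadcast add).
        h = h + self.action_proj(latent_action)[:, :, None, None]
        return self.decoder(h)

# One self-supervised pretraining step on unlabeled video: the world model
# only sees frame_t plus the latent, so the encoder is pressured to compress
# "what changed" between the frames into the latent action.
encoder, world_model = LatentActionEncoder(), LatentActionWorldModel()
params = list(encoder.parameters()) + list(world_model.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

frame_t = torch.rand(16, 3, 64, 64)    # stand-ins for real video frames
frame_tp1 = torch.rand(16, 3, 64, 64)

z = encoder(frame_t, frame_tp1)
pred = world_model(frame_t, z)
loss = nn.functional.mse_loss(pred, frame_tp1)
loss.backward()
opt.step()

The bottleneck is the crucial design choice in this sketch: because the latent is far too small to carry the whole next frame, minimizing reconstruction error pushes it to encode only the transition, which is what makes it usable as an action.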

Lay Summary:

How can we achieve human-like adaptability in unseen environments with new action controls? In this paper, we answer this question by pretraining AdaWorld on continuous latent actions extracted from thousands of environments. This enables zero-shot action transfer, fast adaptation, and effective planning with minimal finetuning.
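
As a hedged illustration of the "minimal finetuning" claim, the sketch below continues from the pretraining sketch above (reusing its world_model): it fits only a small head that maps a new environment's raw actions into the pretrained latent action space, using a handful of labeled interactions. The head architecture, the action dimensionality, and the frozen-model setup are assumptions for illustration, not the paper's exact adaptation recipe.

import torch
import torch.nn as nn

# Hypothetical control space of the new environment (e.g., 4 continuous axes).
raw_action_dim, latent_dim = 4, 8

# Small trainable head: raw environment actions -> pretrained latent actions.
action_head = nn.Sequential(
    nn.Linear(raw_action_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim)
)
opt = torch.optim.Adam(action_head.parameters(), lr=1e-3)

# world_model is the (pretrained) LatentActionWorldModel from the sketch
# above; it stays frozen, so only the tiny head is learned.
for p in world_model.parameters():
    p.requires_grad_(False)

# A small batch of labeled interactions stands in for the limited
# finetuning data mentioned in the summary.
frames = torch.rand(8, 3, 64, 64)
raw_actions = torch.rand(8, raw_action_dim)
next_frames = torch.rand(8, 3, 64, 64)

pred = world_model(frames, action_head(raw_actions))
loss = nn.functional.mse_loss(pred, next_frames)
loss.backward()
opt.step()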
