

Poster

VIP: Vision Instructed Pre-training for Robotic Manipulation

Zhuoling Li · LiangLiang Ren · Jinrong Yang · Yong Zhao · Xiaoyang Wu · Zhenhua Xu · Xiang Bai · Hengshuang Zhao

West Exhibition Hall B2-B3 #W-405
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

The effectiveness of scaling up training data in robotic manipulation is still limited. A primary challenge is that manipulation tasks are diverse, and the trained policy becomes confused if the task target is not specified clearly. Existing works primarily rely on text instructions to describe targets. However, we reveal that current robotic data cannot train policies to understand text instructions effectively, and vision is much more comprehensible. Therefore, we introduce vision instruction to specify targets. A straightforward implementation is training a policy to predict the intermediate actions linking the current observation and a future image. Nevertheless, a single future image does not describe the task target in sufficient detail. To handle this problem, we propose using sparse point flows to provide more detailed information. Extensive tasks are designed in real and simulated environments to evaluate the effectiveness of our vision instructed pre-training (VIP) method. The results indicate that VIP significantly improves performance on diverse tasks, and the derived policy can complete demanding tasks such as ``opening the lid of a tightly sealed bottle''.
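The abstract describes conditioning a policy on a vision instruction built from the current observation, a future image, and sparse point flows. The toy sketch below illustrates that data flow only; it is not the authors' implementation, and all names (`sample_sparse_point_flow`, `build_vision_instruction`, `LinearPolicy`), dimensions, and the linear policy itself are hypothetical stand-ins for the paper's learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sparse_point_flow(num_points=8, horizon=4):
    # Hypothetical sparse point flow: (x, y) trajectories of a few
    # keypoints tracked over the horizon; shape (horizon, num_points, 2).
    start = rng.uniform(0.0, 1.0, size=(num_points, 2))
    drift = rng.normal(0.0, 0.01, size=(horizon, num_points, 2)).cumsum(axis=0)
    return start[None] + drift

def build_vision_instruction(current_obs, future_obs, point_flow):
    # Vision instruction = current-observation features + future-image
    # features + flattened point flow, concatenated into one vector.
    return np.concatenate(
        [current_obs.ravel(), future_obs.ravel(), point_flow.ravel()]
    )

class LinearPolicy:
    # Toy stand-in for the manipulation policy: a single linear map
    # from the vision instruction to an action vector.
    def __init__(self, in_dim, action_dim=7):
        self.W = rng.normal(0.0, 0.01, size=(action_dim, in_dim))

    def __call__(self, instruction):
        return self.W @ instruction

obs_dim = 16  # stand-in for an image embedding size
current_obs = rng.normal(size=obs_dim)
future_obs = rng.normal(size=obs_dim)
flow = sample_sparse_point_flow()
instruction = build_vision_instruction(current_obs, future_obs, flow)
policy = LinearPolicy(in_dim=instruction.size)
action = policy(instruction)
print(action.shape)  # (7,)
```

In the paper's framing, the future image alone under-specifies the target; concatenating the point-flow trajectories is what adds the finer-grained motion detail, which is why the instruction vector above includes all three parts.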

Lay Summary:

This work points out that vision instruction is more comprehensible than text instruction for current embodied policies and develops a novel manipulation pre-training paradigm based on sparse point flows.
