Poster
MENTOR: Mixture-of-Experts Network with Task-Oriented Perturbation for Visual Reinforcement Learning
Suning Huang · Zheyu Zhang · Tianhai Liang · Yihan Xu · Zhehao Kou · Chenhao Lu · Guowei Xu · Zhengrong Xue · Huazhe Xu
West Exhibition Hall B2-B3 #W-607
Visual deep reinforcement learning (RL) enables robots to acquire skills from visual input for unstructured tasks. However, current algorithms suffer from low sample efficiency, limiting their practical applicability. In this work, we present MENTOR, a method that improves both the architecture and optimization of RL agents. Specifically, MENTOR replaces the standard multi-layer perceptron (MLP) with a mixture-of-experts (MoE) backbone and introduces a task-oriented perturbation mechanism. MENTOR outperforms state-of-the-art methods across three simulation benchmarks and achieves an average of 83% success rate on three challenging real-world robotic manipulation tasks, significantly surpassing the 32% success rate of the strongest existing model-free visual RL algorithm. These results underscore the importance of sample efficiency in advancing visual RL for real-world robotics. Experimental videos are available at https://suninghuang19.github.io/mentor_page/.
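To make the architectural change concrete: a soft mixture-of-experts layer routes each input through a learned gate that weights the outputs of several small expert networks. The sketch below is a minimal NumPy illustration of this general idea, not the paper's actual implementation; the class name, dimensions, and use of single linear layers as experts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax for the gating distribution.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    """Illustrative soft mixture-of-experts layer: a gate produces a
    distribution over experts, and the layer output is the gate-weighted
    sum of expert outputs. Experts are single linear maps for brevity."""

    def __init__(self, in_dim, out_dim, n_experts):
        self.gate_w = rng.normal(0, 0.1, (in_dim, n_experts))
        self.expert_w = rng.normal(0, 0.1, (n_experts, in_dim, out_dim))

    def __call__(self, x):
        gates = softmax(x @ self.gate_w)                          # (batch, n_experts)
        expert_out = np.einsum('bi,eio->beo', x, self.expert_w)   # (batch, n_experts, out_dim)
        return np.einsum('be,beo->bo', gates, expert_out)         # (batch, out_dim)

layer = MoELayer(in_dim=8, out_dim=4, n_experts=3)
features = rng.normal(size=(2, 8))   # e.g. encoded visual features
out = layer(features)
print(out.shape)  # (2, 4)
```

In an RL agent, such a layer would replace a dense MLP block, letting different experts specialize in different regions of the task's state space while the gate learns the routing.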
(1) Teaching robots skills from raw camera images is painfully slow and data-hungry. (2) We introduce MENTOR, an AI mentor that assigns specialist mini-brains to each situation and nudges them using lessons from past successes. (3) With MENTOR, robots can learn complex manipulation in the real world, taking a step toward adaptable helpers in workplaces and everyday homes.