Poster
One-Step Diffusion Policy: Fast Visuomotor Policies via Diffusion Distillation
Zhendong Wang · Max Li · Ajay Mandlekar · Zhenjia Xu · Jiaojiao Fan · Yashraj Narang · Jim Fan · Yuke Zhu · Yogesh Balaji · Mingyuan Zhou · Ming-Yu Liu · Yu Zeng
West Exhibition Hall B2-B3 #W-410
Robots are increasingly using advanced AI models to learn how to perform tasks by watching demonstrations, similar to how humans learn by imitation. One promising type of model, called a diffusion model, has shown great results in teaching robots complex behaviors. However, these models are typically slow, making them hard to use in real-time scenarios, such as responding quickly in dynamic environments or running on less powerful hardware.

To solve this, we developed the One-Step Diffusion Policy, a new method that speeds up these slow models without sacrificing their performance. Our approach trains a lightweight version of the original model that can make decisions in a single step rather than many. We do this by carefully guiding the simpler model to mimic the original's behavior, adding only a small extra training cost.

We tested our method in both simulated environments and real-world robot tasks, where it matched or exceeded previous performance while making decisions over 40 times faster. This brings us closer to fast, capable robots that can operate reliably in the real world.
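To make the distillation idea above concrete, here is a minimal, illustrative sketch: a slow multi-step diffusion policy (the teacher) generates actions, and a lightweight one-step student is trained to reproduce those actions in a single forward pass. All names (TeacherDiffusionPolicy, OneStepStudent, the simplified denoising update, dimensions, and loss) are hypothetical placeholders for illustration and are not the paper's actual architecture or distillation objective.

```python
# Illustrative sketch only: distill a multi-step diffusion policy into a one-step student.
# Names, dimensions, and the training loss are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

ACT_DIM, OBS_DIM, STEPS = 7, 32, 50  # assumed action/observation sizes and diffusion steps

class TeacherDiffusionPolicy(nn.Module):
    """Frozen pre-trained diffusion policy: predicts noise given (noisy action, obs, t)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(ACT_DIM + OBS_DIM + 1, 256),
                                 nn.ReLU(), nn.Linear(256, ACT_DIM))

    def forward(self, noisy_action, obs, t):
        t_feat = t.float().unsqueeze(-1) / STEPS
        return self.net(torch.cat([noisy_action, obs, t_feat], dim=-1))

    @torch.no_grad()
    def sample(self, obs):
        """Multi-step denoising loop: this iterative process is what makes inference slow."""
        a = torch.randn(obs.shape[0], ACT_DIM)
        for t in reversed(range(STEPS)):
            tt = torch.full((obs.shape[0],), t)
            eps = self(a, obs, tt)
            a = a - eps / STEPS  # simplified update rule for illustration only
        return a

class OneStepStudent(nn.Module):
    """Lightweight student: maps (noise, observation) to an action in one forward pass."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(ACT_DIM + OBS_DIM, 256),
                                 nn.ReLU(), nn.Linear(256, ACT_DIM))

    def forward(self, noise, obs):
        return self.net(torch.cat([noise, obs], dim=-1))

teacher, student = TeacherDiffusionPolicy().eval(), OneStepStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(1000):                            # distillation loop
    obs = torch.randn(64, OBS_DIM)                  # stand-in for demonstration observations
    target = teacher.sample(obs)                    # slow multi-step teacher action
    pred = student(torch.randn(64, ACT_DIM), obs)   # fast one-step student action
    loss = ((pred - target) ** 2).mean()            # guide the student to mimic the teacher
    opt.zero_grad(); loss.backward(); opt.step()
```

At deployment, only the student is used, so each decision requires one network call instead of the teacher's many denoising steps, which is where the large inference speedup comes from.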