Spotlight Poster
Learning the RoPEs: Better 2D and 3D Position Encodings with STRING
Connor Schenck · Isaac Reid · Mithun Jacob · Alex Bewley · Joshua Ainslie · David Rendleman · Deepali Jain · Mohit Sharma · Kumar Avinava Dubey · Ayzaan Wahid · Sumeet Singh · René Wagner · Tianli Ding · Chuyuan Fu · Arunkumar Byravan · Jacob J Varley · Alexey Gritsenko · Matthias Minderer · Dmitry Kalashnikov · Jonathan Tompson · Vikas Sindhwani · Krzysztof Choromanski
East Exhibition Hall A-B #E-3500
This paper introduces STRING, a new and more general method for encoding the positions of items for AI models, especially in 2D images and 3D scenes. Transformers, today's dominant AI architecture, process content with no built-in sense of order or location, so positional information must be supplied explicitly. STRING builds on the popular RoPE method but generalizes it to better handle multi-dimensional data. It retains RoPE's two key properties: each item's position is encoded independently of the others ("separability"), and attention depends only on relative offsets between items ("translational invariance"). The paper proves that, under these conditions, STRING is theoretically the most general approach of its kind. Crucially, it delivers significant performance gains in practical applications such as object detection and robotics control, where efficiently representing 2D/3D spatial information is vital. In short, STRING helps AI "see" and understand spatial arrangements more effectively.
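To make those two properties concrete, here is a minimal NumPy sketch in the spirit of the construction summarized above, not the paper's actual code: each spatial axis gets a skew-symmetric generator, the generators commute, and a position is encoded by applying the orthogonal matrix exp(sum_i pos_i * L_i) to queries and keys. All names (`string_encode`, `block_generator`) and the random initialization are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 8  # head dimension; even, so it splits into 2x2 rotation blocks

def block_generator(freqs):
    """Block-diagonal skew-symmetric generator: one 2x2 rotation block per frequency."""
    L = np.zeros((2 * len(freqs), 2 * len(freqs)))
    for i, f in enumerate(freqs):
        L[2 * i, 2 * i + 1] = -f
        L[2 * i + 1, 2 * i] = f
    return L

# A shared random orthogonal change of basis Q (stand-in for a learned one).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

# One generator per spatial axis (2 axes for a 2D image). Conjugating
# commuting block-diagonal generators by the same Q keeps them commuting.
L_axes = [Q @ block_generator(rng.uniform(0.1, 2.0, d // 2)) @ Q.T
          for _ in range(2)]

def string_encode(vec, pos):
    """Separability: encode a query/key vector using only its own position,
    via the orthogonal map exp(sum_i pos_i * L_i)."""
    G = sum(p * L for p, L in zip(pos, L_axes))
    return expm(G) @ vec

q = rng.standard_normal(d)
k = rng.standard_normal(d)

# Translational invariance: because the generators are skew-symmetric and
# commute, the attention logit depends only on the offset y - x, so shifting
# both 2D positions by the same amount leaves it unchanged.
x, y = np.array([1.0, 2.0]), np.array([4.0, -3.0])
shift = np.array([10.0, -7.0])
logit = string_encode(q, x) @ string_encode(k, y)
shifted = string_encode(q, x + shift) @ string_encode(k, y + shift)
assert np.allclose(logit, shifted)
```

The final assertion checks translational invariance numerically, while separability holds by construction: `string_encode` never looks at any other item's position. Fixing the generators to the block-diagonal form without the learned basis change recovers RoPE as a special case.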