Poster
Safe-EF: Error Feedback for Non-smooth Constrained Optimization
Rustem Islamov · Yarden As · Ilyas Fatkhullin
West Exhibition Hall B2-B3 #W-513
We consider a problem in which devices such as phones, sensors, or robots collaborate to train a shared AI model without transmitting all their local data to a central server, due to resource constraints or privacy concerns. A key challenge is that each device must upload large model updates at every iteration, which can quickly saturate the communication network. Compressing the updates is one way to mitigate this bottleneck, but naive compression often disrupts the learning process. A technique known as error feedback can compensate for the error introduced by compression; to date, however, it has only proven effective for simpler tasks without constraints. Yet constraints are critical in practice for enforcing properties such as safety and fairness in the learned model.

We introduce a novel distributed learning algorithm, Safe-EF, which incorporates error feedback in a manner that ensures constraint satisfaction while still optimizing effectively. We also analyze the algorithm's performance in settings where clients use only a small subset of their local data to compute updates, for example, a finite number of trajectory samples in humanoid robot training.

In simulated humanoid robot training experiments, Safe-EF not only reduces communication costs by orders of magnitude but also preserves the safety and reliability of the robot's behavior. This work advances the development of scalable, communication-efficient, and safe distributed AI systems.
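To make the error-feedback idea concrete, here is a minimal sketch of one client step: the client compresses its update (here with a generic top-k compressor, a common choice), transmits only the compressed message, and carries the leftover compression error forward into the next round. This is an illustrative sketch of plain error feedback, not the Safe-EF algorithm itself; the function names and the top-k choice are assumptions for illustration.

```python
import numpy as np

def topk_compress(v, k):
    # Keep only the k largest-magnitude entries of v
    # (a common biased compressor; illustrative choice).
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_step(grad, error, k):
    # One error-feedback step: compress the gradient plus the
    # accumulated residual, send the compressed message, and
    # carry the new residual forward to the next round.
    corrected = grad + error
    message = topk_compress(corrected, k)
    new_error = corrected - message
    return message, new_error
```

Each client starts with `error = 0`; the server aggregates the transmitted `message` vectors. Because `message + new_error` always equals the corrected update, no information is permanently lost, which is what lets error feedback recover from aggressive compression.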