

Poster

On the Convergence of Continuous Single-timescale Actor-critic

Xuyang Chen · Lin Zhao

West Exhibition Hall B2-B3 #W-912
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Actor-critic algorithms have been instrumental in boosting the performance of numerous challenging applications involving continuous control, such as highly robust and agile robot motion control. However, their theoretical understanding remains largely underdeveloped. Existing analyses mostly focus on finite state-action spaces and on simplified variants of actor-critic, such as double-loop updates with i.i.d. sampling, which are often impractical for real-world applications. We consider the canonical and widely adopted single-timescale updates with Markovian sampling in continuous state-action spaces. Specifically, we establish finite-time convergence by introducing a novel Lyapunov analysis framework, which provides a unified convergence characterization of both the actor and the critic. Our approach is less conservative than previous methods and offers new insights into the coupled dynamics of actor-critic updates.
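To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of what "single-timescale updates with Markovian sampling" means: at every step, one transition from the same ongoing trajectory updates both the actor and the critic, with step sizes of the same order. The 1-D linear-Gaussian dynamics, feature map, and step sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(s):
    # Hypothetical 2-D feature map for a linear critic V_w(s) = w^T phi(s).
    return np.array([s, 1.0])

def step_env(s, a):
    # Toy continuous-state dynamics with Gaussian noise and a
    # quadratic cost; purely illustrative.
    s_next = 0.9 * s + 0.1 * a + 0.1 * rng.standard_normal()
    r = -(s ** 2) - 0.01 * (a ** 2)
    return s_next, r

gamma = 0.95
w = np.zeros(2)            # critic weights
theta = 0.0                # actor parameter (mean of Gaussian policy)
alpha, beta = 0.02, 0.02   # same-order step sizes: "single-timescale"
sigma = 0.3                # fixed exploration noise

s = 0.5
for t in range(5000):
    a = theta * s + sigma * rng.standard_normal()   # Gaussian policy
    s_next, r = step_env(s, a)
    # TD error from the current critic estimate.
    delta = r + gamma * w @ features(s_next) - w @ features(s)
    # Critic: one TD(0) step; actor: one policy-gradient step using the
    # TD error as advantage estimate. Both use the same transition.
    w += beta * delta * features(s)
    grad_logpi = (a - theta * s) / sigma ** 2 * s   # d/dtheta log pi(a|s)
    theta += alpha * delta * grad_logpi
    s = s_next   # Markovian sampling: the trajectory is never reset
```

The coupling the paper analyzes is visible here: the actor's update quality depends on the current (inexact) critic, while the critic tracks a value function that shifts as the actor moves, all at comparable step sizes.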

Lay Summary:

Actor-critic algorithms have played a pivotal role in advancing performance across a range of challenging continuous control tasks, including robust and agile robotic motion. In this work, we establish the finite-time convergence of the widely used single-timescale actor-critic algorithm with Markovian sampling under continuous state-action spaces. This result bridges the gap between practical implementations and theoretical guarantees, and offers a promising method for analyzing other single-timescale reinforcement learning algorithms.
