Poster
A Theoretical Justification for Asymmetric Actor-Critic Algorithms
Gaspard Lambrechts · Damien Ernst · Aditya Mahajan
West Exhibition Hall B2-B3 #W-1008
In reinforcement learning for partially observable environments, many successful algorithms have been developed within the asymmetric learning paradigm. This paradigm leverages additional state information available at training time for faster learning. Although the proposed learning objectives are usually theoretically sound, these methods still lack a precise theoretical justification for their potential benefits. We propose such a justification for asymmetric actor-critic algorithms with linear function approximators by adapting a finite-time convergence analysis to this setting. The resulting finite-time bound reveals that the asymmetric critic eliminates error terms arising from aliasing in the agent state.
Some intelligent agents learn faster by using extra information during training — like full knowledge of the environment’s state — even if that information is not available once the agent is deployed. This is called asymmetric learning, and it works well in practice. But why does it work so well? In this paper, we offer a theoretical answer for a learning algorithm called the asymmetric actor-critic algorithm. We show that giving this extra information to part of the learning algorithm — the critic — removes specific errors caused by the agent’s limited observations. This makes learning more efficient, and our analysis explains when and why this advantage appears.
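As a rough illustration of the setting described above, the following is a minimal Python sketch of a one-step asymmetric actor-critic update with linear function approximators: the critic is a linear value function of the true state, while the actor conditions only on the agent state. All names, dimensions, features, and step sizes here are illustrative assumptions, not the paper's notation or algorithm.

```python
# Minimal sketch (not the paper's exact algorithm): an asymmetric
# actor-critic TD(0) step where the critic sees the true state s,
# while the actor sees only the agent state z (e.g., an observation
# or a summary of the history). Names and dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, AGENT_DIM, NUM_ACTIONS = 4, 2, 3
w = np.zeros(STATE_DIM)                      # critic weights: V_w(s) = w . phi_state(s)
theta = np.zeros((NUM_ACTIONS, AGENT_DIM))   # actor weights: softmax(theta @ phi_agent(z))

def phi_state(s):   # state features for the asymmetric critic
    return s

def phi_agent(z):   # agent-state features for the actor
    return z

def policy(z):
    logits = theta @ phi_agent(z)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def asymmetric_update(s, z, a, r, s_next, z_next, done,
                      gamma=0.99, alpha_w=0.1, alpha_theta=0.01):
    """One actor-critic step with a state-conditioned (asymmetric) critic.

    z_next is kept only to mirror the symmetric interface; the
    asymmetric critic never needs it, which is precisely the point.
    """
    v = w @ phi_state(s)
    v_next = 0.0 if done else w @ phi_state(s_next)
    delta = r + gamma * v_next - v          # TD error from the asymmetric critic

    # Critic update uses the true state, so distinct states that look
    # alike to the agent are not aliased in the value estimate.
    w[:] = w + alpha_w * delta * phi_state(s)

    # Actor update uses only the agent state, i.e., what is available
    # at execution time.
    p = policy(z)
    grad_log = -np.outer(p, phi_agent(z))
    grad_log[a] += phi_agent(z)
    theta[:] = theta + alpha_theta * delta * grad_log

# Example call with a random transition (purely illustrative):
s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
z, z_next = s[:AGENT_DIM], s_next[:AGENT_DIM]   # the agent observes only part of the state
asymmetric_update(s, z, a=0, r=1.0, s_next=s_next, z_next=z_next, done=False)
```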