

Poster

An Analysis of Quantile Temporal-Difference Learning

Mark Rowland · Remi Munos · Mohammad Gheshlaghi Azar · Yunhao Tang · Georg Ostrovski · Anna Harutyunyan · Karl Tuyls · Marc G. Bellemare · Will Dabney

West Exhibition Hall B2-B3 #W-821
JMLR
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

We analyse quantile temporal-difference learning (QTD), a distributional reinforcement learning algorithm that has proven to be a key component in several successful large-scale applications of reinforcement learning. Despite these empirical successes, a theoretical understanding of QTD has proven elusive until now. Unlike classical TD learning, which can be analysed with standard stochastic approximation tools, QTD updates do not approximate contraction mappings, are highly non-linear, and may have multiple fixed points. The core result of this paper is a proof that QTD converges with probability 1 to the fixed points of a related family of dynamic programming procedures, putting QTD on firm theoretical footing. The proof establishes connections between QTD and non-linear differential inclusions through stochastic approximation theory and non-smooth analysis.
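For context, below is a minimal tabular sketch of a QTD-style update of the kind the abstract refers to. It is an illustrative reconstruction under assumed notation (a state-indexed array `theta` holding m quantile estimates per state, quantile midpoints tau_i = (2i-1)/(2m)), not the authors' reference implementation.

```python
import numpy as np

def qtd_update(theta, x, r, gamma, x_next, alpha):
    """One QTD-style step on a transition (x, r, x_next).

    theta: array of shape (num_states, m) holding m quantile estimates per state.
    Returns the updated theta (modified in place).
    """
    m = theta.shape[1]
    tau = (2 * np.arange(m) + 1) / (2 * m)   # quantile midpoints tau_i
    targets = r + gamma * theta[x_next]      # bootstrapped return samples
    for i in range(m):
        # Quantile-regression-style increment: tau_i minus the fraction of
        # targets falling below the current estimate. The indicator makes the
        # update non-linear in theta, which is why, unlike classical TD, it
        # cannot be analysed as a noisy contraction-mapping iteration.
        below = (targets < theta[x, i]).mean()
        theta[x, i] += alpha * (tau[i] - below)
    return theta
```

Averaging the indicator over the m bootstrapped targets gives the expected-update form; sampling a single target index instead gives a stochastic variant. Either way, the update is driven by sign-like (indicator) terms rather than by a smooth error, which is what connects its asymptotic behaviour to differential inclusions rather than to ordinary differential equations.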
