Poster in Workshop: Time Series Workshop
Morning Poster Session: Revisiting Dynamic Regret of Strongly Adaptive Methods
Dheeraj Baby
Abstract:
We consider the framework of non-stationary Online Convex Optimization, where a learner seeks to control its \emph{dynamic regret} against an \emph{arbitrary} sequence of comparators. When the loss functions are strongly convex or exp-concave, we demonstrate that Strongly Adaptive (SA) algorithms can be viewed as a principled way of controlling dynamic regret in terms of the \emph{path variation} $V_T$ \emph{of the comparator sequence}. Specifically, we show that SA algorithms enjoy $\tilde O(\sqrt{TV_T} \vee \log T)$ and $\tilde O(\sqrt{dTV_T} \vee d\log T)$ dynamic regret for strongly convex and exp-concave losses respectively, \emph{without} a priori knowledge of $V_T$, thus answering an open question in \cite{zhang2018dynamic}. The versatility of this principled approach is further demonstrated by novel results in the settings of learning against bounded linear predictors and online regression with Gaussian kernels.
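For readers unfamiliar with the notation, a sketch of the quantities involved, using the standard definitions from the dynamic-regret literature (the exact norms and constants in the paper may differ):

```latex
% Dynamic regret against an arbitrary comparator sequence u_1, ..., u_T:
%   R_T(u_1, ..., u_T) = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t),
% where x_t is the learner's play and f_t the (strongly convex or
% exp-concave) loss revealed at round t.
%
% Path variation of the comparator sequence (a common definition):
%   V_T = \sum_{t=2}^{T} \| u_t - u_{t-1} \|.
%
% The stated guarantee for strongly convex losses is then
%   R_T(u_1, ..., u_T) = \tilde{O}\big( \sqrt{T V_T} \vee \log T \big),
% achieved by Strongly Adaptive algorithms without prior knowledge of V_T.
```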