Spotlight Poster

An Online Adaptive Sampling Algorithm for Stochastic Difference-of-convex Optimization with Time-varying Distributions

Yuhan Ye · Ying Cui · Jingyi Wang

West Exhibition Hall B2-B3 #W-312
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT
 
Oral presentation: Oral 3D Optimization
Wed 16 Jul 10 a.m. PDT — 11 a.m. PDT

Abstract: We propose an online adaptive sampling algorithm for solving stochastic nonsmooth difference-of-convex (DC) problems under time-varying distributions. At each iteration, the algorithm relies solely on data generated from the current distribution and employs distinct adaptive sampling rates for the convex and concave components of the DC function, a novel design guided by our theoretical analysis. We show that, under proper conditions on the convergence of distributions, the algorithm converges subsequentially to DC critical points almost surely. Furthermore, the sample size requirement of our proposed algorithm matches the results achieved in the smooth case or when a measurable subgradient selector is available, both under static distributions. A key element of this analysis is the derivation of a novel $O(\sqrt{p/n})$ pointwise convergence rate (modulo logarithmic factors) for the sample average approximation of subdifferential mappings, where $p$ is the dimension of the variable and $n$ is the sample size -- a result of independent interest. Numerical experiments confirm that the proposed algorithm is both efficient and effective for addressing stochastic nonsmooth problems.
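To make the abstract's core idea concrete, below is a minimal, hypothetical sketch (not the authors' algorithm) of adaptive sampling for a toy stochastic DC problem f(x) = E[g(x, ξ)] − E[h(x, η)]. The specific toy objective, the per-iteration sample-growth rules `n_g = 10k` and `n_h = 5k`, and the step size are all invented for illustration; the paper derives its distinct sampling rates for the convex and concave components from theoretical analysis, and handles time-varying distributions, which this sketch does not.

```python
import numpy as np

# Hypothetical 1-D toy DC problem:
#   f(x) = E[(x - xi)^2] - E[|x - eta|],  xi ~ N(1, 0.5), eta ~ N(0, 0.5)
# Both E[(x - xi)^2] and E[|x - eta|] are convex, so f is a DC function.
rng = np.random.default_rng(0)

def grad_g_saa(x, n):
    # Sample-average gradient of the convex component from n fresh draws.
    xi = rng.normal(1.0, 0.5, size=n)
    return np.mean(2.0 * (x - xi))

def subgrad_h_saa(x, n):
    # Sample-average subgradient of the (convex) function being subtracted.
    eta = rng.normal(0.0, 0.5, size=n)
    return np.mean(np.sign(x - eta))

def dc_adaptive_sampling(x0, iters=200, step=0.05):
    x = x0
    for k in range(1, iters + 1):
        n_g = 10 * k  # made-up growing sample size for the convex part
        n_h = 5 * k   # a distinct (slower) made-up rate for the concave part
        # Linearize the subtracted part and take a step along the DC
        # "gradient difference" direction.
        d = grad_g_saa(x, n_g) - subgrad_h_saa(x, n_h)
        x -= step * d
    return x

x_star = dc_adaptive_sampling(0.0)
```

The point of the sketch is the structural one emphasized in the abstract: the two components of the DC objective are sampled at different, iteration-dependent rates rather than sharing a single sample budget.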

Lay Summary:

Many real-world problems—from resource allocation to machine learning—can be described using a flexible mathematical structure called difference-of-convex (DC) functions, which represent complex goals as the difference between two simpler parts. While powerful, these problems become especially difficult when data arrives continuously and is uncertain, as in many online systems.

In this work, we introduce a new algorithm that adaptively tackles such problems in real time. The algorithm adjusts how much data to use for each part of the problem at every step, guided by our theoretical analysis. We also develop new mathematical tools to deal with the unique challenges caused by randomness and non-smoothness in these settings.

Our algorithm is not only reliable under changing data distributions but also matches the best performance known in easier cases. Experiments show it is both robust and efficient for solving demanding online optimization tasks.
