Spotlight Poster
Stochastic Smoothed Primal-Dual Algorithms for Nonconvex Optimization with Linear Inequality Constraints
Ruichuan Huang · Jiawei Zhang · Ahmet Alacaoglu
West Exhibition Hall B2-B3 #W-610
Modern machine learning systems often train neural networks subject to constraints, which may encode, for example, safety limits or a desired functionality of the network. Such problems are modeled as "constrained optimization problems". In modern machine learning, because of the structure and large size of neural networks and the vast amounts of data involved, these problems lack the property called "convexity", and the algorithms used in practice are "stochastic", that is, they use only a fraction of the available data at every iteration.

Our paper focuses on a special case of the above problem, one where the constraints are given by linear functions. We propose and theoretically analyze algorithms similar to those used in practice, and we provide guarantees on the amount of computational resources they need to produce an "approximately good" point for the problem. These guarantees are of the same order as the best guarantees achievable by algorithms of the type we analyze for this problem.

Theoretical guarantees assure practitioners that the method they use to solve a problem will behave correctly in practice. They also guide the design of faster algorithms that require fewer computational resources to output a point that is as good as the one produced by more computationally expensive algorithms.
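To make the problem class concrete, the following is a minimal sketch of a generic stochastic primal-dual (gradient descent-ascent) loop for minimizing a nonconvex objective f(x) subject to linear inequality constraints A x <= b, using only noisy gradients of f. It is an illustration of the setting described above, not the smoothed algorithm proposed in the paper; the toy objective, data, and step sizes are arbitrary choices made for the example.

import numpy as np

# Illustrative sketch only: a generic stochastic primal-dual iteration for
#     minimize f(x)  subject to  A x <= b,
# where f is nonconvex and only stochastic gradients of f are available.
# This is NOT the paper's smoothed primal-dual algorithm.
rng = np.random.default_rng(0)
d, m = 10, 3                      # dimension, number of linear constraints
A = rng.standard_normal((m, d))   # hypothetical constraint data
b = rng.standard_normal(m)

def stoch_grad_f(x):
    # Stochastic gradient of a toy nonconvex objective f(x) = sum(x_i^2 - cos(x_i)),
    # perturbed with noise to mimic using only a fraction of the data.
    return 2 * x + np.sin(x) + 0.1 * rng.standard_normal(d)

x = np.zeros(d)                   # primal iterate
lam = np.zeros(m)                 # dual iterate (multipliers for A x <= b)
eta_x, eta_lam = 1e-2, 1e-2       # primal and dual step sizes

for _ in range(5000):
    # Primal step: stochastic gradient descent on the Lagrangian
    #   L(x, lam) = f(x) + lam^T (A x - b).
    x -= eta_x * (stoch_grad_f(x) + A.T @ lam)
    # Dual step: projected gradient ascent, keeping lam >= 0.
    lam = np.maximum(0.0, lam + eta_lam * (A @ x - b))

print("max constraint violation:", np.maximum(0.0, A @ x - b).max())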