Invited Talk in Workshop: Subset Selection in Machine Learning: From Theory to Applications
Theory of feature selection
Rajiv Khanna
Abstract:
In this talk, I will present novel theoretical results that bridge the gap between theory and practice for interpretable dimensionality reduction, also known as feature selection. Specifically, I will show that feature selection satisfies a weaker form of submodularity. Because of this connection, one can provide constant-factor approximation guarantees for any function, dependent solely on the function's condition number. Moreover, I will discuss how the "cost of interpretability" incurred by selecting features rather than principal components is not as high as previously thought.
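To make the weak-submodularity connection concrete, below is a minimal illustrative sketch (not the talk's algorithm or results) of greedy forward feature selection for least-squares regression. In the weak-submodularity literature, the greedy solution attains at least a (1 - e^{-gamma}) fraction of the best k-subset objective, where the submodularity ratio gamma can be lower-bounded through a condition-number-type quantity of the design matrix. All function names and the toy data here are hypothetical and for illustration only.

```python
# Illustrative sketch: greedy forward feature selection maximizing the
# explained-variance (R^2) objective, which is weakly submodular. The
# approximation guarantee depends on the submodularity ratio, which is
# controlled by the conditioning of X^T X (assumption for illustration).
import numpy as np

def explained_variance(X, y, support):
    """Fraction of y's energy explained by least squares on the chosen columns."""
    if not support:
        return 0.0
    Xs = X[:, sorted(support)]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    return 1.0 - (resid @ resid) / (y @ y)

def greedy_feature_selection(X, y, k):
    """Greedily pick k columns of X, each time adding the feature with the largest gain."""
    selected = set()
    best_val = 0.0
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            val = explained_variance(X, y, selected | {j})
            if val > best_val:
                best_j, best_val = j, val
        selected.add(best_j)
    return sorted(selected), best_val

if __name__ == "__main__":
    # Toy data: y depends on 3 of 20 features, plus noise.
    rng = np.random.default_rng(0)
    n, d, k = 200, 20, 3
    X = rng.standard_normal((n, d))
    y = X[:, [2, 7, 11]] @ np.array([1.5, -2.0, 0.8]) + 0.1 * rng.standard_normal(n)
    feats, r2 = greedy_feature_selection(X, y, k)
    print("selected features:", feats, "explained variance:", round(r2, 3))
```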