

Poster in Workshop: Actionable Interpretability

Learning-Augmented Robust Algorithmic Recourse

Kshitij Kayastha · Shahin Jabbari · Vasilis Gkatzelis


Abstract:

Algorithmic recourse provides individuals who receive undesirable outcomes from machine learning systems with suggestions for minimum-cost improvements that would achieve a desirable outcome. However, machine learning models are often updated, so the suggested recourse may no longer lead to the desired outcome. The robust recourse framework chooses recourses that are less sensitive to adversarial model changes, but at a higher cost. To address this, we initiate the study of learning-augmented algorithmic recourse and evaluate the extent to which a designer equipped with a prediction of the future model can reduce the cost of recourse when the prediction is accurate (consistency) while also limiting the cost even when the prediction is inaccurate (robustness). We propose a novel algorithm, study the robustness-consistency trade-off, and analyze how prediction accuracy affects performance.
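To make the cost trade-off concrete, below is a minimal, hypothetical sketch (not the paper's algorithm): a linear classifier whose acceptance threshold is later tightened. The model `w`, thresholds `b`, `b_new`, `b_hat`, the point `x`, and the helper `min_cost_recourse` are all illustrative assumptions. The cheapest recourse targets the current boundary, a worst-case robust recourse pads the target by a fixed margin, and a prediction-based recourse targets the predicted future threshold.

```python
import numpy as np

def min_cost_recourse(x, w, threshold):
    """Minimum L2-cost change so that the linear model accepts: w @ x' >= threshold.
    Closed form: project x onto the decision hyperplane along its normal."""
    slack = threshold - w @ x
    if slack <= 0:
        return x.copy()                 # already receives the desirable outcome
    return x + slack * w / (w @ w)

# Hypothetical setup: a linear classifier that later tightens its threshold.
w, b  = np.array([1.0, 2.0]), 1.0       # deployed model: accept iff w @ x >= b
b_new = 1.8                             # threshold after the (unknown) update
b_hat = 1.8                             # designer's prediction of the new threshold
x     = np.array([-1.0, 0.0])           # individual currently rejected

r_cheap  = min_cost_recourse(x, w, b)           # cheapest, ignores future updates
r_robust = min_cost_recourse(x, w, b + 1.0)     # guards against any increase up to 1.0
r_pred   = min_cost_recourse(x, w, b_hat)       # trusts the prediction of the update

for name, r in [("cheapest", r_cheap), ("robust", r_robust), ("prediction-based", r_pred)]:
    cost  = np.linalg.norm(r - x)
    valid = w @ r >= b_new - 1e-9       # does the suggestion survive the real update?
    print(f"{name:16s} cost={cost:.3f}  valid after update={valid}")
```

In this toy instance the cheapest recourse is invalidated by the update, the worst-case robust recourse survives at the highest cost, and the prediction-based recourse survives at a lower cost because the prediction happens to be accurate; the consistency-robustness question is how to keep that advantage while bounding the cost when the prediction is wrong.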
