

Poster

Position: Rethinking Explainable Machine Learning as Applied Statistics

Sebastian Bordt · Eric Raidl · Ulrike Luxburg

East Exhibition Hall A-B #E-501
Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

In the rapidly growing literature on explanation algorithms, it often remains unclear what precisely these algorithms are for and how they should be used. In this position paper, we argue for a novel and pragmatic perspective: explainable machine learning needs to recognize its parallels with applied statistics. Concretely, explanations are statistics of high-dimensional functions, and we should think about them analogously to traditional statistical quantities. Among other things, this implies that we must think carefully about the matter of interpretation, that is, how the explanations relate to intuitive questions that humans have about the world. That this is scarcely discussed in research papers is one of the main shortcomings of the current literature. Moving forward, the analogy between explainable machine learning and applied statistics offers a fruitful way to improve research practices.
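To illustrate the framing that "explanations are statistics of high-dimensional functions", here is a minimal sketch (not code from the paper): a partial dependence value is simply the average of the model function with one feature clamped to a fixed value, that is, an expectation of f estimated over data. The model f and the data below are hypothetical stand-ins; in practice, f would be a trained black-box model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "high-dimensional function": any black-box model f: R^d -> R.
# This toy function stands in for a trained model.
def f(X):
    return X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * X[:, 2:].sum(axis=1)

X = rng.normal(size=(1000, 10))  # background data

# Partial dependence of one feature at value v: the average of f with
# that feature clamped to v -- an expectation of f, i.e., a statistic.
def partial_dependence(f, X, feature, v):
    X_mod = X.copy()
    X_mod[:, feature] = v
    return f(X_mod).mean()

grid = np.linspace(-2, 2, 5)
pd_curve = [partial_dependence(f, X, feature=0, v=v) for v in grid]
print(dict(zip(grid.round(1), np.round(pd_curve, 3))))
```

Viewed this way, the explanation inherits the usual statistical questions: what population quantity does it estimate, how much does the estimate vary with the data, and how should a user interpret the number it produces.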

Lay Summary:

Machine learning models increasingly influence important decisions, from loan approvals to medical diagnoses. This has led to growing interest in explainable machine learning: methods that aim to make model behavior transparent. However, after years of research, it still remains unclear what exactly it means to "explain" a model.

We argue that the solution is surprisingly simple: explainable machine learning is statistics by another name. Just as statistics provides tools to analyze large datasets, explainable machine learning provides tools to analyze large models. While this sounds simple in hindsight, it's actually a fundamental shift in perspective.

By treating model explanations as statistics, we can apply basic lessons from statistical practice: always specify what your tool measures, acknowledge its limitations, and ensure users understand how to interpret it. It is important to get this right because explainable machine learning is not only used in research but also in applications and policy contexts.
