Poster
What makes an Ensemble (Un) Interpretable?
Shahaf Bassan · Guy Amir · Meirav Zehavi · Guy Katz
East Exhibition Hall A-B #E-1206
Ensemble models - where multiple smaller base models are combined to make predictions - are commonly used in ML and are known for being hard to interpret. For example, a single decision tree is fairly easy to understand, but when many trees are combined (as in boosted trees), the model is usually treated as a "black box." While this idea is widely accepted, we still lack a clear mathematical understanding of why these models are difficult to interpret.

In this work, we take a step toward answering that question. Using tools from computational complexity theory, we study how hard it is to explain the predictions of different types of ensemble models. We examine how factors such as the number, size, and type of the base models within the ensemble affect the difficulty of generating explanations.

Our findings reveal a varied landscape of results. For instance, even if each base model in the ensemble is very small, explaining the whole ensemble can still be computationally hard. Interestingly, the type of base model matters a great deal - for example, ensembles of just a few decision trees can be interpreted efficiently, whereas an ensemble containing even a very small number of linear models is hard to explain.

By analyzing these challenges through the lens of computational complexity, our work helps lay a more solid foundation for understanding when and why ensemble models are (un) interpretable.
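To give a concrete feel for the kind of explanation query studied here (this is an illustrative sketch, not the paper's actual algorithms or results), the snippet below brute-forces a minimal "sufficient reason" for a toy majority-vote ensemble over binary features: a smallest subset of input features whose values alone force the ensemble's prediction. The toy base models (`stumps`), the `ensemble` vote, and the helper names are all hypothetical; the point is only that the naive check ranges over exponentially many completions of the unfixed features, which hints at why such explanation queries can become hard for ensembles.

```python
# Minimal sketch, assuming binary features and a 3-stump majority-vote ensemble.
# Names (stumps, ensemble, is_sufficient, ...) are illustrative, not from the paper.
from itertools import combinations, product

# Three toy "decision stumps" over 4 binary features.
stumps = [
    lambda x: x[0],        # stump 1 looks at feature 0
    lambda x: x[1],        # stump 2 looks at feature 1
    lambda x: 1 - x[2],    # stump 3 looks at (negated) feature 2
]

def ensemble(x):
    """Majority vote over the base models."""
    return int(sum(f(x) for f in stumps) >= 2)

def is_sufficient(x, subset):
    """Does fixing the features in `subset` to their values in `x` force the
    prediction for every assignment to the remaining (free) features?"""
    free = [i for i in range(len(x)) if i not in subset]
    target = ensemble(x)
    for values in product([0, 1], repeat=len(free)):  # exponential in |free|
        y = list(x)
        for i, v in zip(free, values):
            y[i] = v
        if ensemble(y) != target:
            return False
    return True

def minimal_sufficient_reason(x):
    """Brute-force the smallest sufficient feature subset (exponential search)."""
    n = len(x)
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if is_sufficient(x, set(subset)):
                return set(subset)

if __name__ == "__main__":
    x = [1, 1, 0, 0]
    print("prediction:", ensemble(x))                       # 1
    print("a minimal sufficient reason:", minimal_sufficient_reason(x))  # {0, 1}
```

Here fixing features 0 and 1 already determines the majority vote, so {0, 1} is a sufficient reason; the interesting question, which the paper studies formally, is how the cost of finding such explanations scales with the number, size, and type of the base models.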