Poster in Workshop: Actionable Interpretability
The Blessing of Reasoning: LLM-Based Contrastive Explanations in Black-Box Recommender Systems
Yuyan Wang · Pan Li · Minmin Chen
Modern recommender systems use machine learning (ML) models to predict user preferences based on consumption history. Although these "black-box" models achieve impressive predictive performance, they often lack transparency and explainability. While explainable AI research suggests a tradeoff between predictive performance and explainability, we demonstrate that combining large language models (LLMs) with deep neural networks (DNNs) can improve both. We propose LR-Recsys, which augments state-of-the-art DNN-based recommender systems with the reasoning capabilities of LLMs. LR-Recsys introduces a contrastive-explanation generator that leverages LLMs to produce human-readable positive explanations (why a user might like a product) and negative explanations (why they might not). These explanations are embedded via a fine-tuned AutoEncoder and combined with user and product features as inputs to the DNN to produce the final predictions. In addition to offering explainability, LR-Recsys improves learning efficiency and predictive accuracy. To understand why, we provide insights using high-dimensional multi-environment learning theory: statistically, LLMs are equipped with better knowledge of the important variables driving user decision-making, and incorporating such knowledge improves the learning efficiency of ML models. Extensive experiments on three real-world recommendation datasets demonstrate that the proposed LR-Recsys framework consistently outperforms state-of-the-art black-box and explainable recommender systems, achieving a 3–14% improvement in predictive performance.
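To make the described pipeline concrete, the sketch below wires contrastive-explanation embeddings into a DNN predictor. It is a minimal illustration under our own assumptions: all module names, dimensions, and interfaces (`ExplanationAutoEncoder`, `LRRecsys`, `text_dim=768`, the latent size, the MLP shape) are hypothetical, since the abstract does not specify the implementation.

```python
# Hypothetical sketch of the LR-Recsys pipeline described in the abstract.
# Module names, dimensions, and interfaces are illustrative assumptions,
# not the authors' actual implementation.
import torch
import torch.nn as nn


class ExplanationAutoEncoder(nn.Module):
    """Compresses a text embedding of an LLM-generated explanation into a
    low-dimensional latent vector (the fine-tuned AutoEncoder in the paper)."""

    def __init__(self, text_dim: int = 768, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(text_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, text_dim)
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)          # latent explanation embedding
        return self.decoder(z), z    # reconstruction used for fine-tuning


class LRRecsys(nn.Module):
    """DNN recommender whose input concatenates user/product features with
    the latent embeddings of one positive and one negative explanation."""

    def __init__(self, user_dim: int, item_dim: int,
                 latent_dim: int = 64, hidden: int = 128):
        super().__init__()
        in_dim = user_dim + item_dim + 2 * latent_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, user_feat, item_feat, pos_expl_z, neg_expl_z):
        x = torch.cat([user_feat, item_feat, pos_expl_z, neg_expl_z], dim=-1)
        return torch.sigmoid(self.mlp(x))  # predicted preference probability
```

In this reading, `pos_expl_z` and `neg_expl_z` would be obtained by encoding the LLM's "why the user might like it" and "why they might not" texts with the AutoEncoder's encoder, and the predictor would be trained with a standard binary cross-entropy loss on observed feedback; the explanations double as human-readable rationales for each prediction.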