

Poster
in
Workshop: 1st Workshop on Foundation Models for Structured Data (FMSD)

CLEAR: Contextual Logic-based Explanations for Anomaly Reasoning

Vikash Sharma · Vipul Joshi · Anurag Tripathi · Mayank Jauhari · Amir Raza


Abstract:

Erroneous or fraudulent invoices pose significant risks to financial operations in online marketplaces, and anomaly detection offers an effective way to mitigate those risks. Despite advances in machine learning-based anomaly detection, the black-box nature of these models limits their adoption in finance, where manual review is required. Human investigators often struggle to review the large number of flagged invoices because clear, contextual explanations are absent, so only 40% of true defects are detected by investigators. We propose CLEAR, a multi-stage, model-agnostic framework that combines contrastive learning and large language models (LLMs) to generate context-rich, human-readable explanations. CLEAR projects anomalous examples into a latent space to find semantically similar, non-anomalous counterparts and identifies key distinguishing features using localized interpretable models. These features are passed to a context-aware LLM fine-tuned on historical investigator feedback to generate concise summaries, improving investigation efficiency from 40% to 50% and enabling substantial estimated annual savings, while providing interpretability through real-case comparisons and contextual semantics.
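The pipeline in the abstract (embed a flagged invoice, retrieve its nearest non-anomalous counterpart in latent space, then surface the features that most distinguish the pair) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the embeddings, feature names, and the simple absolute-difference ranking (standing in for the paper's localized interpretable models) are all hypothetical.

```python
import numpy as np

def nearest_normal(anomaly_vec, normal_vecs):
    """Index of the closest non-anomalous embedding by cosine similarity."""
    a = anomaly_vec / np.linalg.norm(anomaly_vec)
    n = normal_vecs / np.linalg.norm(normal_vecs, axis=1, keepdims=True)
    return int(np.argmax(n @ a))

def top_divergent_features(anomaly_feats, counterpart_feats, names, k=3):
    """Rank raw features by absolute difference from the counterpart --
    a crude stand-in for a localized interpretable model."""
    diffs = np.abs(anomaly_feats - counterpart_feats)
    order = np.argsort(diffs)[::-1][:k]
    return [(names[i], float(diffs[i])) for i in order]

# Toy data: three historical normal invoices and one flagged invoice,
# each represented by (illustrative) feature vectors.
normals = np.array([[0.9, 0.1, 0.2],
                    [0.8, 0.2, 0.1],
                    [0.1, 0.9, 0.8]])
flagged = np.array([0.85, 0.15, 0.9])
feature_names = ["unit_price", "quantity", "tax_rate"]

idx = nearest_normal(flagged, normals)
evidence = top_divergent_features(flagged, normals[idx], feature_names)
# `evidence` (e.g. a large tax_rate gap vs. the counterpart) is what a
# context-aware LLM would then turn into a human-readable summary.
print(evidence)
```

In the full framework these divergent features, together with the retrieved counterpart, would form the prompt context for the fine-tuned LLM rather than being printed directly.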
