Poster in Workshop: The 2nd Workshop on Reliable and Responsible Foundation Models
What Do Geometric Hallucination Detection Metrics Actually Measure?
Eric Yeats · John Buckheit · Sarah Scullen · Brendan Kennedy · Loc Truong · Davis Brown · William Kay · Cliff Joslyn · Tegan Emerson · Michael Henry · John Emanuello · Henry Kvinge
Keywords: [ large language models ] [ geometry of hidden activations ] [ hallucination detection ]
Hallucination remains a barrier to deploying generative models in high-consequence applications, especially when external ground truth is not readily available to validate model outputs. This has motivated the study of geometric signals in the internal state of an LLM that are predictive of hallucination and require limited external knowledge. Given that a range of factors can lead model output to be called a hallucination (e.g., irrelevance vs. incoherence), in this paper we ask which specific properties of a hallucination these geometric statistics actually capture. To assess this, we generate a synthetic dataset that varies distinct output properties associated with hallucination: correctness, confidence, relevance, coherence, and completeness. We find that different geometric statistics capture different types of hallucination. Along the way, we show that many existing geometric detection methods are substantially sensitive to shifts in task domain (e.g., math questions vs. history questions). Motivated by this, we introduce a simple normalization method that mitigates the effect of domain shift on geometric statistics, leading to AUROC gains of +34 points in multi-domain settings.
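The abstract does not specify the form of the normalization, so the sketch below is only one plausible instantiation: standardizing each geometric statistic within its task domain using a small held-out reference set before thresholding or computing AUROC. The function name, reference-set interface, and z-score form are illustrative assumptions, not the authors' method.

```python
import numpy as np

def normalize_scores_per_domain(scores, domains, ref_scores, ref_domains):
    """Z-score geometric hallucination statistics within each task domain.

    scores, domains      : statistics and domain labels for the examples to score.
    ref_scores, ref_domains : a held-out reference set used to estimate per-domain
                              location and scale (an assumed design choice; the
                              paper does not detail its exact normalization).
    """
    scores = np.asarray(scores, dtype=float)
    domains = np.asarray(domains)
    ref_scores = np.asarray(ref_scores, dtype=float)
    ref_domains = np.asarray(ref_domains)

    normalized = np.empty_like(scores)
    for d in np.unique(domains):
        ref = ref_scores[ref_domains == d]
        mu, sigma = ref.mean(), ref.std() + 1e-8  # guard against zero variance
        mask = domains == d
        normalized[mask] = (scores[mask] - mu) / sigma
    return normalized

# Example: raw statistics from two domains sit on very different scales,
# which would confound a single detection threshold; per-domain z-scoring
# puts them on a common scale.
raw = [0.2, 0.35, 5.1, 5.6]
doms = ["math", "math", "history", "history"]
print(normalize_scores_per_domain(raw, doms, raw, doms))
```

The key design point this sketch illustrates is that the detector's decision rule is applied to domain-relative scores rather than raw geometric statistics, so a shift in task domain moves the reference distribution along with the scores.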