

Poster

TruthFlow: Truthful LLM Generation via Representation Flow Correction

Hanyu Wang · Bochuan Cao · Yuanpu Cao · Jinghui Chen

East Exhibition Hall A-B #E-2306
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Large language models (LLMs) are known to struggle with consistently generating truthful responses. While various representation intervention techniques have been proposed, these methods typically apply a universal representation correction vector to all input queries, limiting their effectiveness on the diverse queries encountered in practice. In this study, we introduce TruthFlow, a novel method that leverages the Flow Matching technique for query-specific truthful representation correction. Specifically, TruthFlow first uses a flow model to learn query-specific correction vectors that transition representations from hallucinated to truthful states. Then, during inference, the trained flow model generates these correction vectors to enhance the truthfulness of LLM outputs. Experimental results demonstrate that TruthFlow significantly improves performance on open-ended generation tasks across various advanced LLMs evaluated on TruthfulQA. Moreover, the trained TruthFlow model exhibits strong transferability, performing effectively on other unseen hallucination benchmarks.
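To make the mechanism concrete, here is a minimal PyTorch sketch of the general idea described above: a conditional flow-matching loss trained on paired (hallucinated, truthful) hidden states, and Euler integration of the learned velocity field to produce a query-specific correction vector at inference. This is not the authors' implementation; all names (VelocityNet, cfm_loss, correction_vector) and design details (MLP velocity network, linear interpolation path, number of Euler steps) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Illustrative velocity field v(x_t, t) over LLM hidden states."""
    def __init__(self, dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # t has shape (batch, 1) with values in [0, 1]; it is
        # concatenated onto the state as a simple time conditioning.
        return self.net(torch.cat([x, t], dim=-1))

def cfm_loss(model: VelocityNet,
             h_hallu: torch.Tensor,
             h_truth: torch.Tensor) -> torch.Tensor:
    """Conditional flow-matching loss between paired hidden states."""
    t = torch.rand(h_hallu.size(0), 1, device=h_hallu.device)
    x_t = (1 - t) * h_hallu + t * h_truth  # linear interpolation path
    target = h_truth - h_hallu             # constant target velocity
    return ((model(x_t, t) - target) ** 2).mean()

@torch.no_grad()
def correction_vector(model: VelocityNet,
                      h: torch.Tensor,
                      steps: int = 10) -> torch.Tensor:
    """Integrate the learned flow from a query's hidden state h;
    the total displacement is the query-specific correction vector."""
    x = h.clone()
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.size(0), 1), i * dt, device=x.device)
        x = x + dt * model(x, t)  # Euler step along the learned flow
    return x - h                  # added to h during decoding
```

Under these assumptions, the correction vector is recomputed per query from its own hidden state rather than reused globally, which is what distinguishes this setup from a universal steering vector.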

Lay Summary:

Hallucination, the generation of seemingly plausible but factually inaccurate content, is a challenging problem for LLMs. We developed a mitigation method that better accommodates the diversity of input queries. It helps correct potential hallucination-induced mistakes across different user inputs, making LLMs more trustworthy.
