Poster
GIVE: Structured Reasoning of Large Language Models with Knowledge Graph Inspired Veracity Extrapolation
Jiashu HE · Mingyu Ma · Jinxuan Fan · Dan Roth · Wei Wang · Alejandro Ribeiro
East Exhibition Hall A-B #E-2308
Large language models are powerful text generators but often stumble on complex questions due to a lack of domain-specific knowledge. Existing fixes either lean entirely on the model's internal "memory" or demand huge external databases, both of which can be impractical for specialized scientific topics. We introduce Graph Inspired Veracity Extrapolation (GIVE), a three-step method that lets the model tap into the right expert facts, think through them step by step, and then craft a clear answer. GIVE first observes by selecting pertinent data, then reflects through query-specific associative thinking, and finally speaks by synthesizing a coherent response. In experiments, GIVE boosts reasoning accuracy across model sizes, enabling smaller models to outperform much larger ones on scientific tasks (for example, GPT-3.5 + GIVE beats GPT-4). It works without any additional training, handles knowledge graphs from a few dozen to hundreds of thousands of nodes, and shines in both open-domain and niche scientific benchmarks. Because its simple observe-reflect-speak process is fully interpretable, GIVE offers a transparent, training-free way to give LLMs real-world reasoning power.
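The observe-reflect-speak pipeline can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: the toy knowledge graph, the function names, and the rule-based "reflect" step (which stands in for the LLM's veracity extrapolation) are all assumptions made for clarity.

```python
from itertools import combinations

# Toy knowledge graph: entity -> list of (relation, entity) facts.
# In GIVE this would be a real biomedical or scientific KG.
KG = {
    "aspirin": [("inhibits", "COX-1"), ("treats", "inflammation")],
    "ibuprofen": [("inhibits", "COX-2"), ("treats", "pain")],
    "COX-1": [("produces", "prostaglandins")],
}

def observe(query_entities, kg):
    """Observe: select facts pertinent to the entities in the query."""
    return [(h, r, t) for h in query_entities for (r, t) in kg.get(h, [])]

def reflect(facts):
    """Reflect: associative step hypothesizing links between entities that
    share a relation -- a rule-based stand-in for the LLM's extrapolation."""
    hypotheses = []
    for (h1, r1, t1), (h2, r2, t2) in combinations(facts, 2):
        if r1 == r2 and t1 != t2:
            hypotheses.append((t1, "related_to", t2))
    return hypotheses

def speak(query, facts, hypotheses):
    """Speak: synthesize observed facts and reflections into a response.
    A real system would pass these to the LLM as reasoning context."""
    lines = [f"{h} {r} {t}" for (h, r, t) in facts + hypotheses]
    return f"Q: {query}\n" + "\n".join(lines)

facts = observe(["aspirin", "ibuprofen"], KG)
answer = speak("How do aspirin and ibuprofen compare?", facts, reflect(facts))
print(answer)
```

Here the reflect step hypothesizes that COX-1 and COX-2 are related because both are inhibited by a query entity; in the actual method the LLM judges the veracity of such extrapolated links rather than asserting them by rule.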