

Poster in Workshop: Exploration in AI Today (EXAIT)

Gathering Context that Supports Decisions via Entropy Search with Language Models

Sang Truong · Sicong Huang · Pranava Singhal · Tai Dang · Yukang Wen · Duc Nguyen · Violet Xiang · Sanmi Koyejo · Nick Haber

Keywords: [ context gathering ] [ large language model agent ] [ reasoning under uncertainty ]


Abstract:

Real-world decision-making systems require background information about the environment to take effective actions. However, this information is frequently incomplete or costly to acquire. Rather than presuming complete context, an effective decision maker must actively gather relevant information through a sequence of targeted follow-up questions before acting. This paper presents a framework for adaptive information gathering using large language models (LLMs) as interactive decision-making agents. Guided by an information-theoretic objective, the LLM selects questions that minimize the entropy of the predicted optimal-action distribution, effectively prioritizing information that reduces uncertainty. Our method enables instance-specific reasoning under uncertainty and improves decision quality through principled context acquisition. We evaluate our approach on modified versions of three standard benchmarks—1D-ARC, GSM8K, and Fermi—adapted to study partially observable contexts where relevant information must be actively gathered. We assess performance using state-of-the-art LLMs. Empirically, we find that our proposed Entropy Search strategy consistently outperforms strong baselines, demonstrating the effectiveness of uncertainty-guided information gathering for LLM-based decision support. Our implementation is available at https://anonymous.4open.science/r/info-gathering-047B/
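To make the entropy-guided question selection described in the abstract concrete, the sketch below shows one plausible way such a loop could be structured. It is a minimal illustration, not the authors' implementation: the callables `predict_action_dist` (an LLM-backed estimate of the distribution over optimal actions given a context) and `simulate_answers` (samples of plausible answers to a candidate question) are hypothetical stand-ins, and expected entropy is approximated by a simple average over sampled answers.

```python
import math
from typing import Callable, Dict, List, Optional


def entropy(dist: Dict[str, float]) -> float:
    """Shannon entropy (in nats) of a discrete distribution over actions."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)


def select_question(
    context: str,
    candidate_questions: List[str],
    predict_action_dist: Callable[[str], Dict[str, float]],
    simulate_answers: Callable[[str, str], List[str]],
) -> Optional[str]:
    """Pick the follow-up question whose simulated answers yield the lowest
    expected entropy of the predicted optimal-action distribution.

    Both callables are assumed to wrap LLM calls; they are placeholders here.
    """
    best_question: Optional[str] = None
    best_expected_entropy = float("inf")
    for question in candidate_questions:
        # Sample hypothetical answers to this question given the current context.
        answers = simulate_answers(context, question)
        if not answers:
            continue
        # Average the post-answer entropy across sampled answers as a Monte
        # Carlo estimate of the expected remaining uncertainty.
        expected_h = sum(
            entropy(predict_action_dist(f"{context}\nQ: {question}\nA: {a}"))
            for a in answers
        ) / len(answers)
        if expected_h < best_expected_entropy:
            best_question, best_expected_entropy = question, expected_h
    return best_question
```

In use, the agent would repeatedly call `select_question`, pose the chosen question, append the real answer to the context, and stop once the predicted action distribution's entropy falls below a threshold or the question budget is exhausted.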
