Poster in Affinity Workshop: 4th MusIML workshop at ICML’25
Graph-Guided Prompting for Zero-Shot Multi-Hop Question Generation: Gains without Fine-Tuning, Limits without Adaptation
Samin Jamshidi · Morteza Mahdiani
We propose a zero-shot framework for multi-hop question generation that couples a lightweight Graph Attention Network (GAT) with pretrained large language models. The GAT is trained to identify the entities most indicative of the reasoning chain within a passage–answer pair and to propagate relational information across the resulting entity graph. These predicted entities are then woven back into the passage, forming an entity-enriched prompt that is fed directly to existing language models, specifically Llama-2-7B and DeepSeek-Coder-6.7B. This decoupled design lets a single reasoning module enhance diverse language models at negligible computational cost; because the framework is modular, it also extends to larger and newer LLMs with longer context windows without architectural changes. Preliminary results on HotpotQA show that GAT-augmented prompts yield consistent improvements in answer containment, syntactic diversity, and automatic metrics such as BLEU and ROUGE-L over both plain zero-shot prompting and joint-training baselines. At the same time, performance still trails that of fully fine-tuned task-specific systems, suggesting that structured entity reasoning is complementary to, rather than a substitute for, end-to-end adaptation.
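To make the two-stage pipeline concrete, the sketch below is a minimal illustration (not the authors' released code) of what the abstract describes: a small GAT, built from PyTorch Geometric's GATConv, scores entity nodes for reasoning-chain relevance, and a templating step weaves the top-scoring entities back into the passage to form the prompt for a frozen LLM. The module name, layer sizes, prompt wording, and toy entity graph are all illustrative assumptions; in the paper the GAT is trained to predict reasoning-chain entities, whereas here the weights are random.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv


class EntityScorerGAT(nn.Module):
    """Two-layer GAT that scores each entity node for reasoning-chain relevance."""

    def __init__(self, in_dim: int, hidden_dim: int = 64, heads: int = 4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden_dim, heads=heads)        # concat -> hidden_dim * heads
        self.gat2 = GATConv(hidden_dim * heads, hidden_dim, heads=1)
        self.score = nn.Linear(hidden_dim, 1)                       # per-node relevance logit

    def forward(self, x, edge_index):
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        return self.score(h).squeeze(-1)                            # shape: [num_nodes]


def build_enriched_prompt(passage, answer, entities, scores, top_k=3):
    """Weave the top-k scored entities into a question-generation prompt."""
    idx = torch.topk(scores, min(top_k, len(entities))).indices.tolist()
    key = ", ".join(entities[i] for i in idx)
    return (
        f"Passage: {passage}\n"
        f"Key entities: {key}\n"
        f"Answer: {answer}\n"
        "Generate a multi-hop question whose answer is given above, "
        "reasoning through the key entities."
    )


# Toy example: four entities with placeholder embeddings and a small
# bidirectional co-occurrence graph (rows are source / target node indices).
entities = ["Marie Curie", "Sorbonne", "Pierre Curie", "radium"]
x = torch.randn(4, 128)
edge_index = torch.tensor([[0, 1, 0, 2, 2, 3],
                           [1, 0, 2, 0, 3, 2]])

model = EntityScorerGAT(in_dim=128)  # in practice, trained with HotpotQA supervision
scores = model(x, edge_index)
prompt = build_enriched_prompt(
    "Marie Curie studied at the Sorbonne and, with Pierre Curie, discovered radium.",
    "radium", entities, scores,
)
print(prompt)  # the enriched prompt is then passed unchanged to the frozen LLM
```

Because the scorer never touches the LLM's weights, the same trained module can serve any downstream generator, which is the decoupling the abstract emphasizes.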