

Poster in Affinity Workshop: New In ML

Predict and Explain: A Unified Approach to Citation Impact Forecasting


Abstract:

Accurate prediction of a paper’s future citation impact is crucial for scholarly evaluation and technology forecasting. Existing methods often overlook the heterogeneous and evolving nature of academic ecosystems. In this work, we propose a unified framework that combines dynamic heterogeneous graph modeling, self-supervised pretraining, large language model (LLM)-based explanation generation, and reinforcement learning (RL)-based optimization for interpretable citation prediction. Specifically, we construct a temporal heterogeneous graph, encoding papers, authors, topics, and venues with timestamped multi-relational edges. We pretrain a DyGFormer-based encoder using four tailored objectives that capture structural, temporal, and predictive patterns. These embeddings are used for both citation count regression and explanation generation via prompt-adapted LLMs, enhanced by RL-based fine-tuning for output quality. Our framework bridges graph representation learning with language reasoning, providing interpretable and accurate forecasting of scientific influence.
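The temporal heterogeneous graph described above can be sketched minimally in plain Python. The node types (papers, authors, topics, venues) and the timestamped multi-relational edges come from the abstract; the class, relation names, and `snapshot` helper are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TemporalHeteroGraph:
    """Sketch of a temporal heterogeneous graph (assumed structure).

    Node types follow the abstract: paper, author, topic, venue.
    Relation names (e.g. "writes", "cites") are hypothetical.
    """
    # node_id -> node_type
    nodes: Dict[str, str] = field(default_factory=dict)
    # (src, relation, dst, timestamp): multi-relational, timestamped edges
    edges: List[Tuple[str, str, str, int]] = field(default_factory=list)

    def add_node(self, node_id: str, node_type: str) -> None:
        self.nodes[node_id] = node_type

    def add_edge(self, src: str, rel: str, dst: str, t: int) -> None:
        assert src in self.nodes and dst in self.nodes
        self.edges.append((src, rel, dst, t))

    def snapshot(self, t: int) -> List[Tuple[str, str, str, int]]:
        """Edges visible up to time t, i.e. the view a temporal
        encoder such as DyGFormer would consume at that step."""
        return [e for e in self.edges if e[3] <= t]

# Example: one paper, its author and venue, and a citation arriving later.
g = TemporalHeteroGraph()
g.add_node("p1", "paper"); g.add_node("p2", "paper")
g.add_node("a1", "author"); g.add_node("v1", "venue")
g.add_edge("a1", "writes", "p1", t=2020)
g.add_edge("p1", "published_in", "v1", t=2020)
g.add_edge("p2", "cites", "p1", t=2022)

print(len(g.snapshot(2021)))  # only the two 2020 edges are visible
```

In the framework outlined above, snapshots like these would feed the pretrained encoder, whose embeddings then drive both the citation count regressor and the prompt-adapted LLM explainer.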
