Poster
MindLLM: A Subject-Agnostic and Versatile Model for fMRI-to-text Decoding
Weikang Qiu · Zheng Huang · Haoyu Hu · Aosong Feng · Yujun Yan · Zhitao Ying
West Exhibition Hall B2-B3 #W-100
Abstract:
Decoding functional magnetic resonance imaging (fMRI) signals into text has been a key challenge in the neuroscience community, with the potential to advance brain-computer interfaces and uncover deeper insights into brain mechanisms. However, existing approaches often struggle with suboptimal predictive performance, limited task variety, and poor generalization across subjects. To address these limitations, we propose MindLLM, a model designed for subject-agnostic and versatile fMRI-to-text decoding. MindLLM consists of an fMRI encoder and an off-the-shelf LLM. The fMRI encoder employs a neuroscience-informed attention mechanism that accommodates subjects with varying input shapes and thus enables high-performance subject-agnostic decoding. Moreover, we introduce Brain Instruction Tuning (BIT), a novel approach that enhances the model's ability to capture diverse semantic representations from fMRI signals, facilitating more versatile decoding. We evaluate MindLLM on comprehensive fMRI-to-text benchmarks. Results demonstrate that our model outperforms the baselines, improving downstream tasks by $12.0\%$, unseen subject generalization by $24.5\%$, and novel task adaptation by $25.0\%$. Furthermore, the attention patterns in MindLLM provide interpretable insights into its decision-making process.
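To make the subject-agnostic idea concrete, below is a minimal sketch (not the authors' released code) of one way such an encoder could be structured: a fixed set of learned queries cross-attends over a variable-length set of voxel tokens, with each voxel's key derived from an anatomical (atlas-region) embedding rather than a subject-specific index. The class name, dimensions, and region-embedding scheme are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class SubjectAgnosticFMRIEncoder(nn.Module):
    """Sketch: fixed learned queries cross-attend over a variable set of voxels."""

    def __init__(self, d_model=256, n_queries=32, n_heads=8,
                 n_regions=1000, llm_dim=4096):
        super().__init__()
        # Learned queries shared across subjects; the output length is fixed
        # regardless of how many voxels a given subject has.
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        # Project each voxel's scalar fMRI value into the model dimension (values).
        self.value_proj = nn.Linear(1, d_model)
        # Assumed anatomical prior: keys come from an atlas-region embedding,
        # not from a subject-specific voxel index.
        self.region_embed = nn.Embedding(n_regions, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Map the resulting tokens into the LLM's embedding space.
        self.to_llm = nn.Linear(d_model, llm_dim)

    def forward(self, voxel_signals, voxel_regions):
        # voxel_signals: (batch, n_voxels) fMRI activations; n_voxels varies by subject.
        # voxel_regions: (batch, n_voxels) integer atlas-region id of each voxel.
        values = self.value_proj(voxel_signals.unsqueeze(-1))            # (B, V, d)
        keys = self.region_embed(voxel_regions)                          # (B, V, d)
        queries = self.queries.unsqueeze(0).expand(voxel_signals.size(0), -1, -1)
        tokens, _ = self.attn(queries, keys, values)                     # (B, n_queries, d)
        return self.to_llm(tokens)                                       # soft prompt for the LLM
```

Because the queries and region embeddings are shared across subjects, the same weights can process recordings with different voxel counts and orderings; only the prefix tokens handed to the off-the-shelf LLM have a fixed shape, which is the property the abstract emphasizes.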
Lay Summary:
The paper proposes MindLLM, a subject-agnostic fMRI-to-text decoding model that can perform a variety of tasks (i.e., versatile decoding). To enable subject-agnostic decoding, we design a novel fMRI encoder inspired by neuroscientific insights. To enable versatile decoding, we introduce Brain Instruction Tuning and the corresponding datasets. Our model achieves state-of-the-art performance on a range of benchmarks, including downstream tasks, unseen subject generalization, and novel task adaptation. Additional analyses and visualizations validate the design of each component of our method.
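For readers unfamiliar with instruction tuning, a Brain Instruction Tuning sample would plausibly pair an fMRI recording with a natural-language instruction and a target response. The field names and file path below are purely illustrative and are not the paper's actual dataset schema.

```python
# Purely illustrative BIT-style sample; field names and the file path are
# hypothetical, not taken from the paper's dataset.
bit_sample = {
    "fmri": "sub01/trial_0042.npy",                       # assumed voxel-activation file
    "instruction": "Describe what the participant was viewing.",
    "response": "A surfer riding a large wave near a rocky shore.",
}
```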