Spotlight Poster
Neural Encoding and Decoding at Scale
Yizi Zhang · Yanchen Wang · Mehdi Azabou · Alexandre Andre · Zixuan Wang · Hanrui Lyu · International Brain Laboratory · Eva Dyer · Liam Paninski · Cole Hurwitz
West Exhibition Hall B2-B3 #W-414
Recent work has demonstrated that large-scale, multi-animal models are powerful tools for characterizing the relationship between neural activity and behavior. Current large-scale approaches, however, focus exclusively on either predicting neural activity from behavior (encoding) or predicting behavior from neural activity (decoding), limiting their ability to capture the bidirectional relationship between the two. To bridge this gap, we introduce a multimodal, multi-task model that enables simultaneous Neural Encoding and Decoding at Scale (NEDS). Central to our approach is a novel multi-task masking strategy, which alternates between neural, behavioral, within-modality, and cross-modality masking. We pretrain our method on the International Brain Laboratory (IBL) repeated site dataset, which includes recordings from 83 animals performing a visual decision-making task. We demonstrate that, compared to other large-scale modeling approaches, NEDS achieves state-of-the-art performance for both encoding and decoding when pretrained on multi-animal data and then fine-tuned on new animals. Surprisingly, NEDS's learned embeddings exhibit emergent properties: even without explicit training, they are highly predictive of the brain regions in each recording. Altogether, our approach is a step towards a foundation model of the brain that enables seamless translation between neural activity and behavior.
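To make the alternating objective concrete, below is a minimal sketch of how a multi-task masking schedule could be drawn once per training step. This is not the authors' implementation: the function name, token counts, uniform sampling over schemes, 30% mask ratio, and the exact per-scheme masking rules are all illustrative assumptions based only on the abstract's description of neural, behavioral, within-modality, and cross-modality masking.

```python
# Illustrative sketch of an alternating multi-task masking schedule.
# All names, shapes, and per-scheme rules are assumptions, not NEDS code.
import numpy as np

rng = np.random.default_rng(seed=0)

MASKING_SCHEMES = ("neural", "behavioral", "within-modality", "cross-modality")

def sample_training_masks(n_neural, n_behavior, mask_ratio=0.3):
    """Sample boolean masks (True = token is hidden and must be predicted)
    for one randomly chosen masking scheme per training step.

    Assumed per-scheme rules:
      neural          -- hide all neural tokens (behavior -> neural, i.e. encoding)
      behavioral      -- hide all behavior tokens (neural -> behavior, i.e. decoding)
      within-modality -- hide a random fraction of tokens inside each modality
      cross-modality  -- hide a random fraction of one modality, to be
                         predicted from the other modality's context
    """
    scheme = MASKING_SCHEMES[rng.integers(len(MASKING_SCHEMES))]
    neural_mask = np.zeros(n_neural, dtype=bool)
    behavior_mask = np.zeros(n_behavior, dtype=bool)

    if scheme == "neural":
        neural_mask[:] = True
    elif scheme == "behavioral":
        behavior_mask[:] = True
    elif scheme == "within-modality":
        neural_mask = rng.random(n_neural) < mask_ratio
        behavior_mask = rng.random(n_behavior) < mask_ratio
    else:  # cross-modality: mask one modality, keep the other fully visible
        if rng.random() < 0.5:
            neural_mask = rng.random(n_neural) < mask_ratio
        else:
            behavior_mask = rng.random(n_behavior) < mask_ratio
    return scheme, neural_mask, behavior_mask

# Example: one mask draw per pretraining step.
scheme, nm, bm = sample_training_masks(n_neural=128, n_behavior=16)
print(scheme, nm.sum(), "neural and", bm.sum(), "behavior tokens masked")
```

Under a schedule like this, a single model is exposed to encoding, decoding, and masked-reconstruction objectives within one training loop, which is what lets it translate between neural activity and behavior in both directions at inference time.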
Understanding how neural activity and behavior influence each other is a central goal in neuroscience. Most current approaches address only one direction: predicting neural activity from behavior, or predicting behavior from neural activity. We present a new model that performs both tasks simultaneously, using data from many animals engaged in a visual decision-making task. Our model not only outperforms existing methods but also uncovers meaningful anatomical patterns in the brain without explicit human supervision. This work marks a major step toward a unified model that links neural activity and behavior.