

Poster
in
Affinity Workshop: New In ML

NeuGPT: Unified multi-modal Neural GPT

Yiqian Yang · Yiqun Duan · Hyejeong Jo · Qiang Zhang · Renjing Xu · ʻŌiwi Parker Jones · Xuming Hu · Chin-teng Lin · Hui Xiong


Abstract:

This paper introduces NeuGPT, a multi-modal language generation model designed to harmonize the fragmented landscape of neural recording research. Traditionally, studies in the field have been compartmentalized by signal type, with EEG, MEG, ECoG, SEEG, fMRI, and fNIRS data being analyzed in isolation. Recognizing the untapped potential for cross-pollination and the adaptability of neural signals across varying experimental conditions, we set out to develop a unified model capable of interfacing with multiple modalities. Drawing inspiration from the success of pre-trained large models in NLP, computer vision, and speech processing, NeuGPT is architected to process a diverse array of neural recordings and to interact with speech and text data. Our model focuses mainly on brain-to-text decoding, improving the state of the art from 6.94 to 12.92 on BLEU-1 and from 6.93 to 13.06 on ROUGE-1 F. It can also simulate brain signals, thereby serving as a novel neural interface.
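The abstract reports gains on BLEU-1 and ROUGE-1 F, the standard text-overlap metrics for brain-to-text decoding. As a minimal sketch (not the authors' evaluation code), the two metrics can be computed from unigram counts as follows; function names and the whitespace tokenization are assumptions for illustration:

```python
from collections import Counter
import math

def bleu1(reference: str, hypothesis: str) -> float:
    """BLEU-1: unigram precision scaled by the brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    if not hyp:
        return 0.0
    # Clipped unigram overlap between hypothesis and reference.
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    precision = overlap / len(hyp)
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * precision

def rouge1_f(reference: str, hypothesis: str) -> float:
    """ROUGE-1 F: harmonic mean of unigram precision and recall."""
    ref, hyp = reference.split(), hypothesis.split()
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / len(hyp), overlap / len(ref)
    return 2 * p * r / (p + r)
```

For example, a decoded hypothesis "the cat sat" against the reference "the cat sat on the mat" scores perfect unigram precision but is penalized on BLEU-1 for brevity and on ROUGE-1 F for low recall.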
