

Poster

Pre-Training Graph Contrastive Masked Autoencoders are Strong Distillers for EEG

Xinxu Wei · Kanhao Zhao · Yong Jiao · Hua Xie · Lifang He · Yu Zhang

West Exhibition Hall B2-B3 #W-414
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Effectively utilizing extensive unlabeled high-density EEG data to improve performance in scenarios with limited labeled low-density EEG data presents a significant challenge. In this paper, we address this challenge by formulating it as a graph transfer learning and knowledge distillation problem. We propose a Unified Pre-trained Graph Contrastive Masked Autoencoder Distiller, named EEG-DisGCMAE, to bridge the gaps between unlabeled and labeled data as well as between high- and low-density EEG data. Our approach introduces a novel unified graph self-supervised pre-training paradigm that seamlessly integrates graph contrastive pre-training with graph masked autoencoder pre-training. Furthermore, we propose a graph topology distillation loss that allows a lightweight student model trained on low-density data to learn from a teacher model trained on high-density data during both pre-training and fine-tuning. This method effectively handles missing electrodes through contrastive distillation. We validate the effectiveness of EEG-DisGCMAE on four classification tasks using two large clinical EEG datasets.
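The abstract combines three loss components: a graph contrastive objective, a masked-autoencoder reconstruction objective, and a teacher-to-student distillation objective. A minimal NumPy sketch of how such a combined objective could be assembled is shown below; the array shapes, the InfoNCE-style contrastive term, and the KL-based distillation term are illustrative assumptions, not the paper's actual formulation (which operates on graph neural network embeddings and uses a topology-aware distillation loss).

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z1, z2, tau=0.5):
    # Contrastive term between two augmented graph views: matching node
    # embeddings are positives, all other pairs are negatives (InfoNCE-style).
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                              # (N, N) similarities
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(sim)))

def masked_recon(x, x_hat, mask):
    # Masked-autoencoder term: reconstruct features of masked nodes only.
    return float(np.mean((x[mask] - x_hat[mask]) ** 2))

def distill_kl(t_logits, s_logits, tau=2.0):
    # KL divergence between softened teacher and student predictions,
    # a generic stand-in for the paper's graph topology distillation loss.
    def softmax(a):
        e = np.exp(a - a.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    p, q = softmax(t_logits / tau), softmax(s_logits / tau)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))

# Hypothetical sizes: N electrodes (graph nodes), D features, C classes.
N, D, C = 8, 16, 4
z1, z2 = rng.normal(size=(N, D)), rng.normal(size=(N, D))     # two graph views
x, x_hat = rng.normal(size=(N, D)), rng.normal(size=(N, D))   # input / reconstruction
mask = np.array([True, False, True, False, True, False, True, False])
t_logits, s_logits = rng.normal(size=(N, C)), rng.normal(size=(N, C))

# Unified objective: contrastive + masked reconstruction + distillation.
loss = info_nce(z1, z2) + masked_recon(x, x_hat, mask) + distill_kl(t_logits, s_logits)
```

In this sketch the high-density teacher and low-density student would each produce logits; the distillation term is what lets the student with fewer electrodes mimic the teacher, while the two self-supervised terms are shared during pre-training.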

Lay Summary:

Electroencephalography (EEG) is a common way to measure brain activity using sensors placed on the head. While expensive EEG devices with many sensors provide detailed and accurate readings, they are difficult to use outside of clinical settings. Cheaper EEG devices with fewer sensors are more practical for everyday use, but they often miss key brain signals. Our work helps bridge this gap. We created a learning system that first studies large amounts of high-quality EEG data to understand general patterns of brain activity. This process, called pre-training, allows the system to gain useful knowledge even before tackling specific tasks. Then, it teaches smaller models working with low-cost EEG data to perform just as well as those using expensive equipment. This approach makes EEG analysis more accurate, affordable, and ready for use in real-world health and research applications.
