

Poster

Efficient Logit-based Knowledge Distillation of Deep Spiking Neural Networks for Full-Range Timestep Deployment

Chengting Yu · Xiaochen Zhao · Lei Liu · Shu Yang · Gaoang Wang · Erping Li · Aili Wang

East Exhibition Hall A-B #E-1912
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Spiking Neural Networks (SNNs) are emerging as a brain-inspired alternative to traditional Artificial Neural Networks (ANNs), prized for their potential energy efficiency on neuromorphic hardware. Despite this, SNNs often suffer from accuracy degradation compared to ANNs and face deployment challenges due to fixed inference timesteps, which require retraining for adjustment and thus limit operational flexibility. To address these issues, our work exploits the spatio-temporal properties inherent in SNNs and proposes a novel distillation framework for deep SNNs that optimizes performance across the full range of timesteps without timestep-specific retraining, enhancing both efficacy and deployment adaptability. We provide theoretical analysis and empirical validation showing that the training guarantees convergence of all implicit models across the full range of timesteps. Experimental results on CIFAR-10, CIFAR-100, CIFAR10-DVS, and ImageNet demonstrate state-of-the-art performance among distillation-based SNN training methods. Our code is available at https://github.com/Intelli-Chip-Lab/snntemporaldecoupling_distillation.
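To make the idea of logit-based distillation across all timesteps concrete, here is a minimal PyTorch-style sketch, not the authors' implementation (see the linked GitHub repository for the actual code). It assumes the SNN student exposes a readout of logits after every simulation timestep and supervises each of them against a pretrained ANN teacher, so that the network can later be deployed at any timestep without retraining. The function name full_range_kd_loss, its arguments, and the hyperparameters tau and alpha are illustrative assumptions based only on the abstract.

```python
import torch
import torch.nn.functional as F

def full_range_kd_loss(student_logits_per_t, teacher_logits, labels, tau=4.0, alpha=0.5):
    """Hypothetical sketch: supervise the SNN's logits at every timestep.

    student_logits_per_t: list of [B, C] tensors, one per simulation timestep
                          (e.g. the accumulated readout after t steps).
    teacher_logits:       [B, C] logits from a pretrained ANN teacher.
    labels:               [B] ground-truth class indices.
    """
    soft_targets = F.softmax(teacher_logits / tau, dim=1)
    loss = 0.0
    for logits_t in student_logits_per_t:
        # Cross-entropy with the hard labels keeps each timestep's prediction usable on its own.
        ce = F.cross_entropy(logits_t, labels)
        # KL divergence to the teacher's softened logits transfers ANN knowledge at this timestep.
        kd = F.kl_div(F.log_softmax(logits_t / tau, dim=1), soft_targets,
                      reduction="batchmean") * tau * tau
        loss = loss + alpha * ce + (1.0 - alpha) * kd
    return loss / len(student_logits_per_t)
```

Because every timestep's readout receives its own distillation signal in this sketch, the trained network can be truncated to fewer timesteps at inference time, which mirrors the full-range deployment goal described in the abstract.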

Lay Summary:

Taking the spatiotemporal characteristics of Spiking Neural Networks into account, we optimize logit-based knowledge distillation for training deep SNNs. The method is simple and effective, and we support it with both theoretical and empirical analysis.
