Poster
Efficient ANN-SNN Conversion with Error Compensation Learning
Chang Liu · Jiangrong Shen · Xuming Ran · Mingkun Xu · Qi Xu · Yi Xu · Gang Pan
West Exhibition Hall B2-B3 #W-412
Artificial neural networks (ANNs) have demonstrated outstanding performance on numerous tasks, but deploying them in resource-constrained environments remains challenging due to their high computational and memory requirements. Spiking neural networks (SNNs), which operate through discrete spike events, offer superior energy efficiency and provide a bio-inspired alternative. However, current ANN-to-SNN conversion often incurs significant accuracy loss and increased inference time due to conversion errors such as clipping, quantization, and uneven activation. This paper proposes a novel ANN-to-SNN conversion framework based on error compensation learning. We introduce a learnable threshold clipping function, dual-threshold neurons, and an optimized membrane potential initialization strategy to mitigate these conversion errors. Together, these techniques address the clipping error through adaptive thresholds, dynamically reduce the quantization error through dual-threshold neurons, and minimize the non-uniformity error by effectively managing the membrane potential. Experimental results on the CIFAR-10, CIFAR-100, and ImageNet datasets show that our method achieves high accuracy and ultra-low latency compared with existing conversion methods. Using only two time steps, our method significantly reduces inference time while maintaining a competitive accuracy of 94.75% on CIFAR-10 with a ResNet-18 architecture. This research promotes the practical application of SNNs on low-power hardware, making efficient real-time processing possible.
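The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of what a learnable threshold clipping activation in this spirit could look like, following the common quantization-clip pattern from the ANN-SNN conversion literature. All names (`LearnableClip`, `lam`, `steps`) and the straight-through rounding are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class LearnableClip(nn.Module):
    """Hypothetical ReLU replacement with a trainable clipping threshold.

    After conversion, the learned upper bound `lam` would play the role of
    the spiking threshold, and `steps` the number of inference time steps
    (i.e. the quantization resolution of the spike count).
    """
    def __init__(self, init_threshold: float = 8.0, steps: int = 2):
        super().__init__()
        self.lam = nn.Parameter(torch.tensor(init_threshold))
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clip activations to [0, lam] with a learnable bound (clipping error),
        # then quantize them to `steps` discrete levels (quantization error).
        x = torch.clamp(x / self.lam, 0.0, 1.0)
        x_q = torch.floor(x * self.steps + 0.5) / self.steps
        # Straight-through estimator: forward uses the quantized value,
        # backward passes gradients to both x and lam as if unquantized.
        x = x + (x_q - x).detach()
        return x * self.lam
```

Training the source ANN with such an activation makes the clipping bound adaptive per layer, so the converted SNN's thresholds match the activation statistics instead of being set by a fixed percentile.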
Artificial neural networks (ANNs) are powerful tools for tasks like image recognition, but they consume a lot of energy, making them less ideal for use on small devices. Spiking neural networks (SNNs), inspired by how the brain works, use brief electrical pulses called "spikes" to transmit information and can operate much more efficiently. A common way to build an SNN is to convert an already-trained ANN into an SNN. However, this conversion often causes accuracy loss and slow processing.

Our work introduces a new method to convert ANNs into SNNs more accurately and efficiently. We solve three major problems in the conversion process (errors from rounding, clipping, and timing) by using three techniques: a trainable threshold function, a special dual-threshold neuron model, and a smart way to initialize the network's state. These changes allow us to build SNNs that are fast and accurate, even when running in just two processing steps.

Experiments on standard image datasets like CIFAR-10, CIFAR-100, and ImageNet show that our method outperforms previous techniques, achieving high accuracy with much less delay and significantly lower energy use. This work helps bring low-power, brain-like computing closer to real-world applications.
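To make the dual-threshold neuron and state-initialization ideas concrete, here is a toy inference-time simulation, again a sketch under stated assumptions rather than the authors' method: the negative threshold at `-theta` that emits corrective -1 spikes, and the `theta / 2` initial membrane potential (a common choice in prior conversion work for minimizing expected quantization error), are both illustrative.

```python
import torch

def dual_threshold_if(inputs: torch.Tensor, theta: float = 1.0) -> torch.Tensor:
    """Toy dual-threshold integrate-and-fire simulation.

    `inputs` has shape (T, ...), one pre-activation per time step.
    A positive threshold `theta` emits +1 spikes; an assumed negative
    threshold `-theta` emits -1 spikes so the neuron can cancel earlier
    over-firing caused by unevenly timed inputs.
    """
    v = torch.full_like(inputs[0], theta / 2)  # membrane potential init
    spikes = []
    for x_t in inputs:
        v = v + x_t                      # integrate input current
        pos = (v >= theta).float()       # fire +1 above the upper threshold
        neg = (v <= -theta).float()      # fire -1 below the lower threshold
        s = pos - neg
        v = v - s * theta                # soft reset by subtraction
        spikes.append(s)
    return torch.stack(spikes)

# Example: two time steps, matching the paper's ultra-low-latency setting.
out = dual_threshold_if(torch.randn(2, 4), theta=1.0)
```

The signed spikes let the running spike count converge toward the ANN activation within very few steps, which is why corrective mechanisms of this kind are associated with low-latency conversion.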