Poster
Efficient Parallel Training Methods for Spiking Neural Networks with Constant Time Complexity
Wanjin Feng · Xingyu Gao · Wenqian Du · Hailong Shi · Peilin Zhao · Pengcheng Wu · Chunyan Miao
East Exhibition Hall A-B #E-3400
Training brain-inspired neural networks, known as Spiking Neural Networks (SNNs), has long been slow and computationally expensive. These models process information sequentially, one timestep at a time, like flipping through every frame of a long video, which makes training time-consuming.

Our research presents a faster alternative. We developed a method called Fixed-Point Parallel Training (FPT) that replaces this frame-by-frame processing with a few carefully coordinated parallel passes. This significantly speeds up training without altering the model's structure or requiring additional assumptions.

FPT matches the accuracy of conventional training methods while greatly reducing computation time. In our experiments, it proved especially effective for large-scale, time-intensive tasks, demonstrating its potential for real-world deployment.

In short, FPT helps brain-like AI systems learn more quickly and efficiently, paving the way for practical, energy-saving applications of neural computing.
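To make the idea concrete, here is a minimal, self-contained PyTorch sketch of the general fixed-point approach, not the authors' actual FPT implementation. It assumes a simple leaky integrate-and-fire (LIF) neuron with a subtractive reset; the function names and the parameters lam (membrane decay), theta (firing threshold), and K (number of parallel passes) are all illustrative.

    import torch

    def lif_sequential(x, lam=0.9, theta=1.0):
        # Baseline: simulate a leaky integrate-and-fire (LIF) layer step by
        # step over T timesteps. x: (T, N) input currents -> spikes (T, N).
        T, N = x.shape
        u = torch.zeros(N)
        spikes = []
        for t in range(T):
            u = lam * u + x[t]            # leaky integration
            s = (u >= theta).float()      # spike when the threshold is crossed
            u = u - theta * s             # subtractive reset
            spikes.append(s)
        return torch.stack(spikes)

    def lif_fixed_point(x, lam=0.9, theta=1.0, K=3):
        # Fixed-point view: treat the whole spike train s as the unknown and
        # refine an all-zeros guess with K passes. Given a spike estimate,
        # every membrane potential u_t is a weighted sum of past inputs and
        # resets, so each pass updates all T timesteps at once (one matmul)
        # instead of looping over them.
        T, N = x.shape
        t_idx = torch.arange(T, dtype=x.dtype)
        # Causal decay kernel: D[t, k] = lam^(t-k) for k <= t, else 0.
        D = torch.tril(lam ** (t_idx[:, None] - t_idx[None, :]).clamp(min=0))
        s = torch.zeros(T, N)
        for _ in range(K):
            s_prev = torch.cat([torch.zeros(1, N), s[:-1]])  # s shifted one step
            u = D @ (x - lam * theta * s_prev)  # all u_t computed in parallel
            s = (u >= theta).float()            # refreshed spike estimate
        return s

    # Each pass settles one more timestep at the front of the sequence, so
    # K = T passes reproduce the sequential result exactly (up to
    # floating-point ties at the threshold).
    torch.manual_seed(0)
    x = torch.rand(16, 4)  # T=16 timesteps, N=4 neurons
    print(torch.equal(lif_sequential(x), lif_fixed_point(x, K=16)))  # True

In this toy formulation enough passes provably recover the sequential result; the interesting regime, and what the abstract describes as "a few carefully coordinated parallel passes", is keeping K small and constant so the cost no longer grows with the number of timesteps.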