Poster
Enforcing Idempotency in Neural Networks
Nikolaj Jensen · Jamie Vicary
West Exhibition Hall B2-B3 #W-504
Applying an idempotent operation multiple times has the same effect as applying it once. Idempotency is a feature of many data transformation tasks we commonly tackle with machine learning, and it has recently been shown to also promote generative behaviour in neural networks. Gradient descent-based approaches to optimising for idempotency, however, are in many cases inefficient. We propose an alternative way to optimise for idempotency, using ideas from perturbation theory to derive a training scheme that is significantly more effective and incurs no computational overhead. Our work suggests that alternatives to gradient-based optimisation of neural networks are practically viable, opening the door to new approaches in neural network training more generally.
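For concreteness, a brief sketch of the property being enforced (the notation below is illustrative and not taken from the poster itself): a map $f$ is idempotent when composing it with itself changes nothing,

\[
f(f(x)) = f(x) \quad \text{for all inputs } x .
\]

One common gradient-based way to encourage this, assumed here purely as an illustration of the kind of approach the abstract describes as inefficient, is to penalise a term such as $\lVert f_\theta(f_\theta(x)) - f_\theta(x) \rVert^2$, which requires backpropagating through two nested applications of the network $f_\theta$.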