Poster
Compelling ReLU Networks to Exhibit Exponentially Many Linear Regions at Initialization and During Training
Max Milkert · David Hyde · Forrest Laine
East Exhibition Hall A-B #E-2000
An artificial neural network is often compared to a brain: where a brain has neurons and synapses, an artificial neural network has parameters, numbers that govern its behavior. It is common practice to set these parameters randomly and then update them to maximize the network's performance on a task, as if the network were learning. Setting the parameters completely at random causes those located deeper in the network to be used inefficiently, and this is hard to correct through learning alone. The method we develop in this paper constrains parameter values both at initialization and throughout the training process, guiding the network to a solution that uses its deep parameters effectively. Extending these ideas will hopefully enable dramatic reductions in the size, energy, and computational cost of neural networks.
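To illustrate what "exponentially many linear regions" in the title means, here is a minimal NumPy sketch of the standard sawtooth construction (an illustration of the phenomenon, not necessarily the method of this paper): each layer computes an absolute-value "fold" using two ReLU units, and composing k such layers yields a piecewise-linear function with 2^k pieces.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fold(x):
    # One "fold": |1 - 2x|, written with two ReLU units via |z| = relu(z) + relu(-z).
    z = 1.0 - 2.0 * x
    return relu(z) + relu(-z)

def deep_sawtooth(x, depth):
    # Each composed fold maps [0, 1] onto itself and doubles the number
    # of linear pieces, so `depth` layers yield 2**depth pieces.
    for _ in range(depth):
        x = fold(x)
    return x

def count_linear_regions(f, grid):
    # Count linear pieces by detecting jumps in the finite-difference slope.
    ys = f(grid)
    slopes = np.diff(ys) / np.diff(grid)
    kinks = np.sum(np.abs(np.diff(slopes)) > 1e-6)
    return int(kinks) + 1

for depth in range(1, 8):
    # Use a grid containing every breakpoint j / 2**depth so each linear
    # piece is sampled exactly and no kink is double counted.
    grid = np.linspace(0.0, 1.0, 2**depth * 8 + 1)
    n = count_linear_regions(lambda x: deep_sawtooth(x, depth), grid)
    print(f"depth {depth}: {n} linear regions (expected {2**depth})")
```

By contrast, a ReLU network with randomly initialized parameters typically realizes far fewer linear regions than this depth-wise doubling allows, which is the kind of inefficiency in deep parameters that the summary above describes.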