Poster
Peri-LN: Revisiting Normalization Layer in the Transformer Architecture
Jeonghoon Kim · Byeongchan Lee · Cheonbok Park · Yeontaek Oh · Beomjun Kim · Taehwan Yoo · Seongjin Shin · Dongyoon Han · Jinwoo Shin · Kang Min Yoo
East Exhibition Hall A-B #E-3500
Training today’s large language models is a bit like building a very tall tower of blocks: unless each layer is carefully aligned, the whole structure can wobble or even collapse. One of the “alignment tools” engineers use is layer normalization, which keeps the numbers inside the model from drifting too high or too low. Most builders put this tool either before or after each layer, but both choices have hidden drawbacks: one can weaken the learning signal, while the other can let problematically large numbers sneak through.

Our study shines a spotlight on a quieter third option, in which we wrap each layer with normalization both before and after, an arrangement we call Peri-LN (“peri” meaning “around”). By rigorously comparing all three setups across models with up to 3 billion parameters, we show that Peri-LN keeps calculations balanced and prevents training crashes. This simple change could make future language models more reliable, cheaper to train, and accessible to more research groups, helping the field progress without wasting massive computing resources.
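For readers who prefer code, the three placements differ only in where normalization sits around each sub-module: Pre-LN computes x + Module(LN(x)), Post-LN computes LN(x + Module(x)), and Peri-LN computes x + LN(Module(LN(x))). The snippet below is a minimal PyTorch sketch of a Peri-LN residual block under simplified assumptions; the class name, the toy MLP sub-module, and the choice of nn.LayerNorm are illustrative and not taken from the paper's actual implementation.

```python
import torch
import torch.nn as nn


class PeriLNBlock(nn.Module):
    """Illustrative Peri-LN residual block: the sub-module (e.g. attention
    or MLP) is wrapped by normalization on both its input and its output,
    and the normalized output is added back to the residual stream."""

    def __init__(self, dim: int, module: nn.Module):
        super().__init__()
        self.input_norm = nn.LayerNorm(dim)   # normalization before the module
        self.module = module                  # attention or MLP sub-module
        self.output_norm = nn.LayerNorm(dim)  # normalization after the module

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Peri-LN update: x + LN(Module(LN(x)))
        return x + self.output_norm(self.module(self.input_norm(x)))


if __name__ == "__main__":
    dim = 64
    mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
    block = PeriLNBlock(dim, mlp)
    h = torch.randn(2, 16, dim)   # (batch, sequence length, hidden size)
    print(block(h).shape)         # torch.Size([2, 16, 64])
```

Compared with Pre-LN (x + Module(LN(x))) or Post-LN (LN(x + Module(x))), this arrangement normalizes what enters the module and what it contributes back to the residual stream, which is the placement the study refers to as Peri-LN.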