

Poster

Peri-LN: Revisiting Normalization Layer in the Transformer Architecture

Jeonghoon Kim · Byeongchan Lee · Cheonbok Park · Yeontaek Oh · Beomjun Kim · Taehwan Yoo · Seongjin Shin · Dongyoon Han · Jinwoo Shin · Kang Min Yoo

East Exhibition Hall A-B #E-3500
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract: Selecting a layer normalization (LN) strategy that stabilizes training and speeds convergence in Transformers remains difficult, even for today’s large language models (LLMs). We present a comprehensive analytical foundation for understanding how different LN strategies influence training dynamics in large-scale Transformers. Pre-LN and Post-LN have long dominated practice despite their limitations in large-scale training. Recently, however, several open-source models have begun quietly adopting a third strategy without much explanation. This strategy places normalization layers **peripherally** around sublayers, a design we term **Peri-LN**. While Peri-LN has demonstrated promising performance, its precise mechanisms and benefits remain almost unexplored. Our in-depth analysis delineates the distinct behaviors of LN strategies, showing how each placement shapes activation variance and gradient propagation. To validate our theoretical insights, we conduct extensive experiments on Transformers with up to $3.2$B parameters, showing that Peri-LN consistently achieves more balanced variance growth, steadier gradient flow, and greater convergence stability. Our results suggest that Peri-LN warrants broader consideration for large-scale Transformer architectures, providing renewed insights into the optimal placement of LN.
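
For a concrete picture of the three placements described above, here is a minimal PyTorch sketch (not the authors’ released code) of how Post-LN, Pre-LN, and Peri-LN wrap a single sublayer; the module names, the use of `nn.LayerNorm` rather than RMSNorm, and the dimensions are illustrative assumptions.

```python
# Minimal sketch contrasting three LN placements around one Transformer
# sublayer f (e.g., attention or MLP). Illustrative only; module names,
# LayerNorm vs. RMSNorm, and dimensions are assumptions, not the paper's code.
import torch
import torch.nn as nn


class SublayerBlock(nn.Module):
    """Wraps a sublayer with post-, pre-, or peri-placed normalization."""

    def __init__(self, d_model: int, sublayer: nn.Module, placement: str = "peri"):
        super().__init__()
        self.sublayer = sublayer
        self.placement = placement
        self.norm_in = nn.LayerNorm(d_model)   # used by pre- and peri-LN
        self.norm_out = nn.LayerNorm(d_model)  # used by post- and peri-LN

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.placement == "post":
            # Post-LN: normalize after the residual addition.
            return self.norm_out(x + self.sublayer(x))
        if self.placement == "pre":
            # Pre-LN: normalize only the sublayer input; the residual
            # stream itself is never normalized.
            return x + self.sublayer(self.norm_in(x))
        # Peri-LN: normalize both the input and the output of the sublayer,
        # so the residual stream receives a normalized update.
        return x + self.norm_out(self.sublayer(self.norm_in(x)))


if __name__ == "__main__":
    d_model = 64
    mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                        nn.Linear(4 * d_model, d_model))
    block = SublayerBlock(d_model, mlp, placement="peri")
    x = torch.randn(2, 10, d_model)  # (batch, sequence, hidden)
    print(block(x).shape)            # torch.Size([2, 10, 64])
```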

Lay Summary:

Training today’s large language models is a bit like building a very tall tower of blocks: unless each layer is carefully aligned, the whole structure can wobble or even collapse. One of the “alignment tools” engineers use is layer normalization, which keeps the numbers inside the model from drifting too high or too low. Most builders put this tool either before or after each layer, but both choices have hidden drawbacks—one can weaken the learning signal, while the other can let problematically large numbers sneak through. Our study shines a spotlight on a quieter third option, where we wrap each layer both before and after with normalization—an arrangement we call Peri-LN (“peri” meaning “around”). By rigorously comparing all three setups across models with up to 3 billion parameters, we show that Peri-LN keeps calculations balanced and prevents training crashes. This simple change could make future language models more reliable, cheaper to train, and accessible to more research groups—helping the field progress without wasting massive computing resources.
