

Poster
in
Workshop: ES-FoMo III: 3rd Workshop on Efficient Systems for Foundation Models

Mu-Parametrization for Mixture of Experts

Jan Małaśnicki · Kamil Ciebiera · Mateusz Boruń · Maciej Pióro · Jan Ludziejewski · Maciej Stefaniak · Michał Krutul · Sebastian Jaszczur · Marek Cygan · Kamil Adamczewski · Jakub Krajewski


Abstract: Recent years have seen growing interest in and adoption of LLMs, with $\mu$Transfer becoming a key technique for tuning hyperparameters in large-scale training. Meanwhile, Mixture-of-Experts (MoE) has emerged as a leading architecture in extremely large models. However, the intersection of these two advancements has remained unexplored. In this work, we derive a $\mu$-Parameterization ($\mu$P) for MoE, providing theoretical guarantees for feature learning across model widths in both the router and experts. We empirically validate our parameterization and further investigate how scaling the number of experts and granularity affects the optimal learning rate.
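To make the setting concrete, below is a minimal sketch (not the authors' code) of how standard $\mu$P-style scaling rules for Adam can be applied to a toy MoE feed-forward layer: hidden-style weights are initialized with variance proportional to $1/\text{fan\_in}$, output projections start small, and matrix-like learning rates shrink as $1/\text{width multiplier}$. The class name `MuPMoELayer`, the `base_d_model` reference width, and the treatment of the router with the same rule as the expert weights are illustrative assumptions; the paper's specific parameterization of the router is not given in this abstract.

```python
# Minimal sketch (assumptions noted above), not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MuPMoELayer(nn.Module):
    def __init__(self, d_model: int, d_expert: int, n_experts: int, base_d_model: int = 256):
        super().__init__()
        self.n_experts = n_experts
        # Width multiplier relative to a small "base" model, as in muTransfer.
        self.width_mult = d_model / base_d_model
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.w_in = nn.ModuleList([nn.Linear(d_model, d_expert, bias=False) for _ in range(n_experts)])
        self.w_out = nn.ModuleList([nn.Linear(d_expert, d_model, bias=False) for _ in range(n_experts)])
        self._init_weights(d_model)

    def _init_weights(self, d_model):
        # Hidden-style weights: variance ~ 1 / fan_in (router treated the same here, an assumption).
        for lin in list(self.w_in) + [self.router]:
            nn.init.normal_(lin.weight, std=d_model ** -0.5)
        # Output-style weights: start at zero, a common muP-compatible choice.
        for lin in self.w_out:
            nn.init.zeros_(lin.weight)

    def forward(self, x):
        # Top-1 routing for simplicity.
        gate = F.softmax(self.router(x), dim=-1)   # (batch, n_experts)
        top_w, top_idx = gate.max(dim=-1)          # (batch,)
        out = torch.zeros_like(x)
        for e in range(self.n_experts):
            mask = top_idx == e
            if mask.any():
                h = F.gelu(self.w_in[e](x[mask]))
                out[mask] = top_w[mask].unsqueeze(-1) * self.w_out[e](h)
        return out

    def mup_param_groups(self, base_lr: float):
        # muP rule for Adam: learning rate of matrix-like weights shrinks as 1 / width_mult.
        scaled = [p for m in list(self.w_in) + list(self.w_out) + [self.router] for p in m.parameters()]
        return [{"params": scaled, "lr": base_lr / self.width_mult}]


# Usage: under muP, a base_lr tuned at base_d_model is intended to transfer as d_model grows.
layer = MuPMoELayer(d_model=1024, d_expert=4096, n_experts=8)
optimizer = torch.optim.Adam(layer.mup_param_groups(base_lr=1e-3))
y = layer(torch.randn(4, 1024))
```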
