

Poster in Workshop: 2nd AI for Math Workshop @ ICML 2025

Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks

Taishi Nakamura · Satoki Ishikawa · Masaki Kawamura · Takumi Okamoto · Daisuke Nohara · Jun Suzuki · Rio Yokota


Abstract: Empirical scaling laws have driven the evolution of large language models (LLMs), yet their coefficients shift whenever the model architecture or data pipeline changes. Mixture‑of‑Experts (MoE) models, now standard in state‑of‑the‑art systems, introduce a new sparsity dimension that current dense‑model frontiers overlook. We investigate how MoE sparsity influences two distinct capability regimes: memorization and reasoning. We train families of MoE Transformers that systematically vary total parameters, active parameters, and top‑$k$ routing while holding the compute budget fixed. For every model we record pre‑training loss, downstream task loss, and task accuracy, allowing us to separate the train‑test generalization gap from the loss‑accuracy gap. Memorization benchmarks improve monotonically with total parameters, mirroring training loss. By contrast, reasoning performance saturates and can even regress despite continued gains in both total parameters and training loss. Altering top‑$k$ alone has little effect when active parameters are constant, and classic hyperparameters such as learning rate and initialization modulate the generalization gap in the same direction as sparsity. Neither post‑training reinforcement learning (GRPO) nor extra test‑time compute rescues the reasoning deficit of overly sparse models.
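The abstract's sweep hinges on the distinction between total parameters (all experts) and active parameters per token (only the top‑$k$ routed experts). The sketch below is a minimal top‑$k$ MoE feed‑forward block in PyTorch meant only to illustrate that distinction; the class name, router choice, and shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy top-k MoE feed-forward block: each token is routed to its top-k
    experts out of num_experts, so total parameters grow with num_experts
    while per-token (active) compute depends only on top_k."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        logits = self.router(x)                             # (num_tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)  # pick k experts per token
        weights = F.softmax(weights, dim=-1)                # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Sparsity sweep at fixed active size: more experts raise total parameters,
# while top_k (and hence active parameters per token) stays the same.
tokens = torch.randn(8, 64)
for num_experts in (4, 8, 16):
    layer = TopKMoE(d_model=64, d_ff=256, num_experts=num_experts, top_k=2)
    total_params = sum(p.numel() for p in layer.parameters())
    print(num_experts, total_params, layer(tokens).shape)
```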
