Poster in Workshop: ES-FoMo III: 3rd Workshop on Efficient Systems for Foundation Models
$\mu$-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts
Toshiaki Koike-Akino · Jing Liu · Ye Wang
Abstract:
To tackle the huge computational demands of large foundation models, activation-aware compression techniques that require no retraining have been introduced. However, because these techniques rely on calibration data, domain shift may arise on unseen downstream tasks. With efficient calibration, activation-aware pruning can instead be executed adaptively for every prompt, while still reducing complexity at inference. We formulate this as a mixture of micro-experts, called $\mu$-MoE. Several experiments demonstrate that $\mu$-MoE can dynamically adapt to prompt-dependent structured sparsity.
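As a rough illustration of per-prompt, activation-aware structured pruning of the kind the abstract describes, here is a minimal PyTorch sketch. It assumes a Wanda-style channel score (|weight| weighted by activation norms); the function names (`channel_scores`, `prune_for_prompt`) and the keep-ratio parameter are illustrative assumptions, not the paper's actual method or API.

```python
# Hypothetical sketch: prune a linear layer's input channels per prompt,
# treating each retained channel as an active "micro-expert".
import torch
import torch.nn as nn


def channel_scores(weight: torch.Tensor, activations: torch.Tensor) -> torch.Tensor:
    """Score input channels by |W| weighted with the prompt's activation norm
    (a Wanda-style proxy; the paper's exact criterion may differ)."""
    # weight: (out_features, in_features); activations: (tokens, in_features)
    act_norm = activations.norm(dim=0)            # per-channel activation magnitude
    return (weight.abs() * act_norm).sum(dim=0)   # aggregate over output rows


def prune_for_prompt(layer: nn.Linear, prompt_acts: torch.Tensor, keep_ratio: float = 0.5):
    """Return a pruned copy of `layer` keeping only the top-scoring input
    channels for this prompt, plus the indices of the retained channels."""
    scores = channel_scores(layer.weight.data, prompt_acts)
    k = max(1, int(keep_ratio * scores.numel()))
    keep = scores.topk(k).indices.sort().values   # indices of retained channels

    pruned = nn.Linear(k, layer.out_features, bias=layer.bias is not None)
    pruned.weight.data = layer.weight.data[:, keep].clone()
    if layer.bias is not None:
        pruned.bias.data = layer.bias.data.clone()
    return pruned, keep


if __name__ == "__main__":
    torch.manual_seed(0)
    layer = nn.Linear(64, 32)
    prompt_acts = torch.randn(10, 64)             # activations from a cheap calibration pass on the prompt
    pruned, keep = prune_for_prompt(layer, prompt_acts, keep_ratio=0.25)
    x = torch.randn(4, 64)
    y = pruned(x[:, keep])                        # inference touches only the retained channels
    print(pruned.weight.shape, y.shape)           # torch.Size([32, 16]) torch.Size([4, 32])
```

In this toy setting the pruning decision is recomputed for each prompt, so the active set of channels (micro-experts) varies with the input, which is the prompt-dependent structured sparsity the abstract refers to.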