

Poster
in
Workshop: Tiny Titans: The next wave of On-Device Learning for Foundation Models (TTODLer-FM)

FRUGAL: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training

Filipp Zmushko · Aleksandr Beznosikov · Martin Takac · Samuel Horváth

[ Project Page ]
Fri 18 Jul 1 p.m. PDT — 1:45 p.m. PDT

Abstract:

As the number of parameters in large language models grows, training demands ever larger volumes of GPU memory, a significant portion of which is typically consumed by the optimizer state. To overcome this challenge, recent approaches such as low-rank adaptation (LoRA), low-rank gradient projection (GaLore), and blockwise optimization (BAdam) have been proposed. However, in all of these algorithms, the effective rank of the weight updates remains low, which can lead to a substantial loss of information from the gradient. This loss of information can be especially critical during the pre-training stage. In this paper, we introduce FRUGAL (Full-Rank Updates with GrAdient spLitting), a new memory-efficient optimization framework. FRUGAL leverages gradient splitting to perform low-dimensional updates using advanced algorithms (such as Adam), while updates along the remaining directions are executed via state-free methods like SGD or signSGD. Our framework can be integrated with various low-rank update selection techniques, including GaLore and BAdam. We provide theoretical convergence guarantees for our framework when using SGDM for the low-dimensional updates and SGD for the state-free updates. Additionally, our method consistently outperforms concurrent approaches, achieving state-of-the-art results in pre-training and fine-tuning tasks while balancing memory efficiency and performance.
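
For intuition, below is a minimal PyTorch sketch of the gradient-splitting step described in the abstract: the gradient of a weight matrix is projected onto a low-dimensional subspace and updated with Adam, while the full-rank residual is updated with state-free signSGD. This is an illustration, not the authors' implementation; the function name `frugal_step`, the rank-8 subspace, the learning rates, and the convention of refreshing the projection basis `P` only periodically are assumptions for the example.

```python
import torch

def frugal_step(weight, grad, P, adam_state, lr=1e-3, lr_free=1e-4,
                betas=(0.9, 0.999), eps=1e-8):
    """One illustrative FRUGAL-style update for a single weight matrix.

    `P` is an (m, r) orthonormal basis for the low-dimensional subspace
    (e.g., top-r left singular vectors of a recent gradient, refreshed only
    periodically so the Adam state stays aligned with the subspace).
    """
    # Split the gradient: low-dimensional part plus full-rank residual.
    g_low = P.T @ grad            # (r, n) projected gradient
    g_res = grad - P @ g_low      # residual, orthogonal to span(P)

    # Stateful update (Adam) in the low-dimensional subspace only.
    m, v, t = adam_state["m"], adam_state["v"], adam_state["t"] + 1
    m = betas[0] * m + (1 - betas[0]) * g_low
    v = betas[1] * v + (1 - betas[1]) * g_low.pow(2)
    adam_state.update(m=m, v=v, t=t)
    m_hat = m / (1 - betas[0] ** t)
    v_hat = v / (1 - betas[1] ** t)

    # State-free update (signSGD) along the remaining directions.
    weight -= lr * (P @ (m_hat / (v_hat.sqrt() + eps)))
    weight -= lr_free * g_res.sign()


# Illustrative usage for one 256x128 parameter matrix with a rank-8 subspace.
m_dim, n_dim, rank = 256, 128, 8
W = torch.randn(m_dim, n_dim)
G = torch.randn(m_dim, n_dim)                       # stand-in for a gradient
P, _, _ = torch.linalg.svd(G, full_matrices=False)  # basis from the gradient
P = P[:, :rank]
state = {"m": torch.zeros(rank, n_dim), "v": torch.zeros(rank, n_dim), "t": 0}
frugal_step(W, G, P, state)
```

In this sketch the Adam moments are stored only for the (rank x n) projected gradient rather than the full (m x n) matrix, while the residual directions still receive a full-rank, state-free update, which is the memory/performance trade-off the framework targets.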
