

Poster

OmniBal: Towards Fast Instruction-Tuning for Vision-Language Models via Omniverse Computation Balance

Yongqiang Yao · Jingru Tan · Feizhao Zhang · Jiahao Hu · Yazhe Niu · JinXin · Bo Li · Pengfei Liu · Ruihao Gong · Dahua Lin · Ningyi Xu

East Exhibition Hall A-B #E-2906
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract: Vision-language instruction-tuning models have recently achieved significant performance improvements. In this work, we discover that large-scale 3D parallel training of these models leads to an imbalanced computation load across devices. The vision and language parts are inherently heterogeneous: their data distributions and model architectures differ significantly, which hurts distributed training efficiency. To address this issue, we rebalance the computational load from the data, model, and memory perspectives, achieving more balanced computation across devices. Specifically, for the data, instances are grouped into new balanced mini-batches within and across devices. For the model, a search-based method is employed to achieve a more balanced partitioning. For memory, we adaptively adjust the re-computation strategy of each partition to fully utilize the available memory. These three perspectives are not independent but closely connected, forming an omniverse balanced training framework. Extensive experiments validate the effectiveness of our method. Compared with the open-source training code of InternVL-Chat, training time is greatly reduced, achieving about a 1.8$\times$ speed-up. Our method's efficacy and generalizability are further validated across various models and datasets. Code will be released at https://github.com/ModelTC/OmniBal.
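To make the data-side balancing idea concrete, here is a minimal sketch of grouping samples into mini-batches with roughly equal compute cost, using a longest-processing-time greedy heuristic. The cost proxy (a weighted count of image patches and text tokens) and all names are hypothetical illustrations, not the paper's actual grouping algorithm.

```python
from typing import List, Sequence

def balanced_minibatches(costs: Sequence[float], num_groups: int) -> List[List[int]]:
    """Greedily pack sample indices into num_groups mini-batches so that
    each group's total estimated compute cost is roughly equal."""
    # Longest-processing-time heuristic: assign the most expensive
    # samples first, always to the currently lightest group.
    order = sorted(range(len(costs)), key=lambda i: costs[i], reverse=True)
    groups: List[List[int]] = [[] for _ in range(num_groups)]
    loads = [0.0] * num_groups
    for i in order:
        g = min(range(num_groups), key=loads.__getitem__)  # lightest group
        groups[g].append(i)
        loads[g] += costs[i]
    return groups

# Toy example: per-sample costs, e.g. image_patches + 0.5 * text_tokens
# (a hypothetical proxy), packed into 4 groups, one per device.
costs = [9.0, 7.5, 6.0, 5.5, 3.0, 2.5, 2.0, 1.5]
print(balanced_minibatches(costs, num_groups=4))
```

The model-side step can likewise be viewed as a search for a pipeline partition whose heaviest stage is as light as possible, since the slowest stage bottlenecks the pipeline. The brute-force search below is an assumed simplification for small layer counts; the paper's search method may differ.

```python
from itertools import combinations

def best_pipeline_split(layer_costs, num_stages):
    """Exhaustively search contiguous splits of layer_costs into num_stages
    pipeline stages, minimizing the cost of the heaviest stage."""
    n = len(layer_costs)
    best_cost, best_bounds = float("inf"), None
    for cuts in combinations(range(1, n), num_stages - 1):
        bounds = (0, *cuts, n)
        stage_costs = [sum(layer_costs[a:b]) for a, b in zip(bounds, bounds[1:])]
        if max(stage_costs) < best_cost:
            best_cost, best_bounds = max(stage_costs), bounds
    return best_bounds, best_cost

# Example: a heavy vision encoder (first layers) followed by lighter LLM blocks.
layer_costs = [4.0, 4.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(best_pipeline_split(layer_costs, num_stages=4))
```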

Lay Summary:

Vision-language models, which understand both images and text, are becoming more powerful, but training them is slow and inefficient on large computer clusters. We found that this happens because the image and text parts of the model are very different, leading to an uneven workload across devices.

To fix this, we created OmniBal, a new training method that balances the work more fairly. It does this in three ways: by grouping training data more evenly, splitting the model into better-balanced parts, and managing memory more efficiently during training.

These improvements work together to make training faster and more stable. In our tests, OmniBal sped up training by about 1.8× compared to current methods. It also works well on different models and datasets.

This research matters because it helps developers train large, multi-modal models more efficiently, saving time, energy, and computing resources.
