Poster in Workshop: ES-FoMo III: 3rd Workshop on Efficient Systems for Foundation Models
ConMeZO: Adaptive Directional Sampling for Gradient-Free Finetuning of Language Models
Lejs Behric · Liang Zhang · Bingcong Li · Kiran Thekumparampil
Zeroth-order optimization methods such as MeZO are an attractive strategy for finetuning large language models (LLMs) because they eliminate the memory overhead of storing the intermediate activations required by backpropagation. However, they converge slowly due to the inherent curse of dimensionality when searching for descent directions in the high-dimensional parameter space of billion-scale LLMs. We propose ConMeZO, a novel zeroth-order optimizer that accelerates convergence through adaptive directional sampling. Instead of drawing the perturbation direction uniformly at random, ConMeZO restricts the sampling to a cone centered around a momentum estimate. This concentrates the search on directions where the true gradient is more likely to lie and thus mitigates the effect of high dimensionality. We analytically prove that ConMeZO achieves the same worst-case convergence rate as MeZO. Empirically, when finetuning LLMs on natural language benchmarks, ConMeZO is up to 2x faster than MeZO while retaining the low-memory footprint of zeroth-order methods.
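Below is a minimal NumPy sketch of the idea described in the abstract, not the authors' implementation: a perturbation direction is sampled inside a cone around a running momentum vector, and a two-point finite-difference estimate along that direction drives the update. The specific parameter names (`cos_angle`, `beta`), the momentum update rule, and the exact cone parameterization are assumptions made for illustration, since the abstract does not specify them.

```python
import numpy as np

def conmezo_step(params, loss_fn, momentum, lr=1e-3, eps=1e-3,
                 beta=0.9, cos_angle=0.5):
    """One ConMeZO-style step (illustrative sketch only).

    Samples a unit direction inside a cone around the momentum estimate,
    then forms a two-point zeroth-order gradient estimate along it.
    `cos_angle` (cosine of the assumed cone half-angle) and the momentum
    update rule are hypothetical choices, not taken from the paper.
    """
    z = np.random.randn(params.size)            # isotropic Gaussian direction

    m_norm = np.linalg.norm(momentum)
    if m_norm > 0:
        m_hat = momentum / m_norm
        # Component of z orthogonal to the momentum direction.
        z_perp = z - np.dot(z, m_hat) * m_hat
        z_perp /= (np.linalg.norm(z_perp) + 1e-12)
        # Unit direction inside a cone of half-angle arccos(cos_angle) around m_hat.
        u = cos_angle * m_hat + np.sqrt(1.0 - cos_angle ** 2) * z_perp
    else:
        u = z / np.linalg.norm(z)                # no momentum yet: uniform direction

    # Two-point finite-difference estimate of the directional derivative,
    # requiring only two forward passes and no stored activations.
    g_scalar = (loss_fn(params + eps * u) - loss_fn(params - eps * u)) / (2 * eps)
    grad_est = g_scalar * u

    momentum = beta * momentum + (1 - beta) * grad_est
    params = params - lr * grad_est
    return params, momentum
```

With `cos_angle = 0` this reduces to sampling orthogonal to the momentum, and with a momentum of zero it falls back to the uniform random directions used by standard zeroth-order methods; the cone bias is what concentrates the search toward likely descent directions.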