

Poster in Workshop: 2nd Workshop on Test-Time Adaptation: Putting Updates to the Test (PUT)

Context Tuning for In-Context Optimization

Jack Lu · Ryan Teehan · Zhenbang Yang · Mengye Ren

[ Project Page ]
Fri 18 Jul 2:30 p.m. PDT — 3:15 p.m. PDT

Abstract:

We introduce Context Tuning (CT), a simple and effective method that significantly enhances few-shot adaptation of large language models (LLMs) without fine-tuning model parameters. While prompt-based adaptation techniques have demonstrated that lightweight adaptation is effective for LLMs, they typically initialize the trainable prompt or prefix from tokens irrelevant to the task at hand. In contrast, CT initializes the trainable prompt or prefix with task-specific demonstration examples, leveraging the model's inherent in-context learning (ICL) ability to extract relevant task information. This initialization provides a task-specific starting point for optimization, which improves few-shot performance. We evaluate our method on a broad suite of ICL benchmarks, including CrossFit, UnifiedQA, MMLU, BIG-Bench Hard (BBH), and the Abstraction and Reasoning Corpus (ARC). Empirical results show that CT significantly outperforms both ICL and traditional prompt-based adaptation methods, and achieves performance competitive with Test-Time Training while being significantly more training-efficient.
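The core idea admits a short sketch. Below is a minimal, hypothetical illustration of CT-style prompt tuning with PyTorch and Hugging Face Transformers: the soft prompt is initialized from the embeddings of task demonstrations (rather than from random or task-irrelevant tokens) and is then the only trainable component, with the model frozen. The model name, toy demonstrations, loss setup, and hyperparameters are all illustrative assumptions, not the authors' exact method or configuration.

```python
# Minimal sketch of Context Tuning (CT): initialize a soft prompt from
# task demonstrations, then optimize only that prompt. Illustrative only;
# model id, demos, and hyperparameters are assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical stand-in for the LLMs used in the paper
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.requires_grad_(False)  # freeze all model parameters

# Key difference from vanilla prompt tuning: the trainable prompt starts
# from the embeddings of task-specific demonstration examples.
demos = "Q: 2+2? A: 4\nQ: 3+5? A: 8\n"  # toy few-shot demonstrations
demo_ids = tok(demos, return_tensors="pt").input_ids
with torch.no_grad():
    init = model.get_input_embeddings()(demo_ids)  # (1, P, d)
soft_prompt = torch.nn.Parameter(init.clone())

opt = torch.optim.Adam([soft_prompt], lr=1e-3)

def loss_on(query, answer):
    """Standard LM loss on [soft prompt; query + answer]."""
    ids = tok(query + answer, return_tensors="pt").input_ids
    embeds = model.get_input_embeddings()(ids)
    inputs = torch.cat([soft_prompt, embeds], dim=1)
    # Ignore the soft-prompt positions (-100), predict the remaining tokens.
    labels = torch.cat(
        [torch.full(soft_prompt.shape[:2], -100, dtype=torch.long), ids], dim=1
    )
    return model(inputs_embeds=inputs, labels=labels).loss

# Tune only the prompt on the few available shots; the model stays fixed.
for _ in range(20):
    opt.zero_grad()
    loss = loss_on("Q: 4+4? A: ", "8")
    loss.backward()
    opt.step()
```

In this sketch only `soft_prompt` receives gradient updates, which mirrors the abstract's claim that CT adapts the model without fine-tuning its parameters; the demonstration-based initialization is what gives optimization its task-specific starting point.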
