

Poster

Hyperband-based Bayesian Optimization for Black-box Prompt Selection

Lennart Schneider · Martin Wistuba · Aaron Klein · Jacek Golebiowski · Giovanni Zappella · Felice Antonio Merra

East Exhibition Hall A-B #E-2507
Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Optimal prompt selection is crucial for maximizing large language model (LLM) performance on downstream tasks, especially in black-box settings where models are only accessible via APIs. Black-box prompt selection is challenging due to potentially large, combinatorial search spaces, absence of gradient information, and high evaluation cost of prompts on a validation set. We propose HbBoPs, a novel method that combines a structural-aware deep kernel Gaussian Process with Hyperband as a multi-fidelity scheduler to efficiently select prompts. HbBoPs uses embeddings of instructions and few-shot exemplars, treating them as modular components within prompts. This enhances the surrogate model's ability to predict which prompt to evaluate next in a sample-efficient manner. Hyperband improves query-efficiency by adaptively allocating resources across different fidelity levels, reducing the number of validation instances required for evaluating prompts. Extensive experiments across ten diverse benchmarks and three LLMs demonstrate that HbBoPs outperforms state-of-the-art methods in both performance and efficiency.
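To make the abstract's two ingredients concrete, here is a minimal illustrative sketch, not the authors' implementation: a plain scikit-learn Gaussian Process over concatenated instruction/exemplar embeddings stands in for the structural-aware deep kernel GP, and a single successive-halving bracket over validation instances stands in for the full Hyperband schedule. The functions `embed` and `evaluate_prompt` are hypothetical placeholders for an embedding model and a black-box LLM validation call.

```python
# Illustrative sketch only. A plain GP over concatenated component embeddings
# replaces the paper's structural-aware deep kernel GP, and one
# successive-halving bracket replaces the full Hyperband schedule.
import numpy as np
from itertools import product
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def embed(text: str, dim: int = 8) -> np.ndarray:
    # Placeholder: deterministic pseudo-embedding; a real system would call
    # a sentence-embedding model here.
    return np.random.default_rng(abs(hash(text)) % 2**32).normal(size=dim)

def evaluate_prompt(instruction: str, exemplars: str, val_subset: np.ndarray) -> float:
    # Placeholder: validation score of the prompt on a subset of instances.
    # A real system would query the black-box LLM on each instance.
    base = float(embed(instruction) @ embed(exemplars))
    return base + float(rng.normal(scale=1.0 / np.sqrt(len(val_subset))))

# Candidate prompts are (instruction, exemplar set) combinations.
instructions = [f"instruction_{i}" for i in range(6)]
exemplar_sets = [f"exemplars_{j}" for j in range(4)]
candidates = list(product(instructions, exemplar_sets))

# Prompts are treated as modular: embed instruction and exemplars separately,
# then concatenate to form the surrogate's input representation.
X = np.array([np.concatenate([embed(ins), embed(exs)]) for ins, exs in candidates])

# One successive-halving bracket: evaluate all candidates on few validation
# instances, keep the top half, and double the fidelity each round.
n_val = 256
fidelity, pool, scores = 8, list(range(len(candidates))), {}
while len(pool) > 1 and fidelity <= n_val:
    subset = rng.choice(n_val, size=fidelity, replace=False)
    for i in pool:
        scores[i] = evaluate_prompt(*candidates[i], subset)
    pool = sorted(pool, key=lambda i: scores[i], reverse=True)[: max(1, len(pool) // 2)]
    fidelity *= 2

# Fit the GP surrogate on everything observed so far; its mean/uncertainty can
# rank candidates for the next bracket (here via a simple upper confidence bound).
observed = sorted(scores)
gp = GaussianProcessRegressor().fit(X[observed], [scores[i] for i in observed])
mean, std = gp.predict(X, return_std=True)
print("surrogate's top pick for the next bracket:", candidates[int(np.argmax(mean + std))])
print("incumbent after successive halving:", candidates[pool[0]])
```

In the actual method, the surrogate's kernel is learned jointly with the embedding transformation (the deep kernel), and Hyperband runs several such brackets with different starting fidelities; the sketch above only shows how modular embeddings, a GP surrogate, and fidelity-aware evaluation fit together.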

Lay Summary:

Large language models, like ChatGPT, can answer questions, solve problems, or write text. But how well they do often depends on how we ask them. Finding the best way to ask (called a "prompt") can be tricky, especially when using commercial models where we do not have access to their inner workings. Trying out lots of different prompts can be time-consuming and expensive. We created a new method, called HbBoPs, to help find better prompts more efficiently. It breaks each prompt into two parts, the instructions and the examples, and learns which combinations are most likely to work well. It also uses a clever way of testing prompts quickly and cheaply before spending more time and resources on the most promising ones. We tested HbBoPs across a wide range of tasks and language models. Compared to existing methods, it generally found better prompts while using fewer model calls. This means it can help people get more out of powerful language tools while saving time and cost, making these tools easier to use in everyday applications.
