Teach Better or Show Smarter? On Instructions and Exemplars in Automatic Prompt Optimization

Abstract

Large language models have demonstrated remarkable capabilities, but their performance is heavily reliant on effective prompt engineering. Automatic prompt optimization (APO) methods are designed to automate this and can be broadly categorized into those targeting instructions (instruction optimization, IO) vs. those targeting exemplars (exemplar optimization, EO). Despite their shared objective, these two lines of work have evolved rather independently, with IO receiving more research attention recently. This paper seeks to bridge this gap by comprehensively comparing the performance of representative IO and EO techniques, both in isolation and in combination, on a diverse set of challenging tasks. Our findings reveal that intelligently reusing model-generated input-output pairs, obtained from evaluating prompts on the validation set, as exemplars consistently improves performance on top of IO methods but is currently under-investigated. We also find that, despite the recent focus on IO, how we select exemplars can outweigh how we optimize instructions: EO strategies as simple as random search, applied to seed instructions without any instruction optimization, can outperform state-of-the-art IO methods. Moreover, we observe a synergy between EO and IO, with optimal combinations surpassing their individual contributions. We conclude that studying exemplar optimization, both as a standalone method and in its optimal combination with instruction optimization, remains a crucial aspect of APO and deserves greater consideration in future research, even in the era of highly capable instruction-following models.
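
To make the random-search EO baseline mentioned above concrete, here is a minimal sketch of selecting exemplars by random search over a candidate pool, scored on a validation set. This is not the paper's implementation; all names and defaults here (`candidate_pool`, `score_fn`, `num_exemplars`, `num_trials`) are hypothetical placeholders.

```python
import random

def random_search_exemplars(candidate_pool, score_fn, num_exemplars=4,
                            num_trials=20, seed=0):
    """Return the highest-scoring exemplar subset found by random search.

    candidate_pool: list of (input, output) pairs, e.g. model-generated
        pairs collected while evaluating prompts on the validation set.
    score_fn: callable mapping a list of exemplars to a validation score
        (e.g. accuracy of the prompt built from a fixed instruction plus
        these exemplars).
    """
    rng = random.Random(seed)
    best_score, best_exemplars = float("-inf"), None
    for _ in range(num_trials):
        # Sample a candidate exemplar set uniformly at random.
        exemplars = rng.sample(candidate_pool, num_exemplars)
        score = score_fn(exemplars)
        if score > best_score:
            best_score, best_exemplars = score, exemplars
    return best_exemplars, best_score
```

The point of the sketch is that even this optimizer-free search over which exemplars to show, with the instruction held fixed at its seed value, is a strong baseline in the paper's comparisons.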

Publication
Advances in Neural Information Processing Systems 37
The findings of this work also power the recently launched Google Cloud Vertex AI Prompt Optimizer. Read more: [Google Cloud Blog] [Google for Developers Blog] [Vertex AI Documentation]

Vertex AI Prompt Optimizer

Xingchen Wan
Research Scientist

My research interests include large language models, Bayesian optimization, AutoML, and machine learning on graphs.