Active Prompting

Active prompting is an adaptive approach to few-shot example selection that borrows from active learning. Rather than picking demonstrations at random or by surface similarity, the method runs the model on a pool of unlabeled examples, measures uncertainty — often as the variance of answers across temperature-sampled runs — and selects the most uncertain examples for human annotation. Those annotated examples then become the few-shot demonstrations used at inference. It targets the model's specific gaps instead of padding the prompt with easy cases the model already handles, and typically beats random or similarity-based selection on reasoning benchmarks when the annotation budget is small.
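The selection step can be sketched in a few lines. This is a minimal illustration, not the paper's reference implementation: `sample_fn` is a hypothetical callable that runs the model once on an example with temperature sampling, and uncertainty is scored as the fraction of runs that disagree with the majority answer.

```python
from collections import Counter

def disagreement(answers):
    """Uncertainty score: fraction of sampled answers that differ
    from the most common answer. 0.0 means all runs agree."""
    majority_count = Counter(answers).most_common(1)[0][1]
    return 1 - majority_count / len(answers)

def select_for_annotation(pool, sample_fn, k=5, budget=32):
    """Score each unlabeled example by disagreement across k sampled
    runs, then return the `budget` most uncertain examples for
    human annotation. `sample_fn` is an assumed model-call wrapper."""
    scored = [
        (disagreement([sample_fn(x) for _ in range(k)]), x)
        for x in pool
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [x for _, x in scored[:budget]]
```

With unanimous runs, `disagreement` returns 0.0 and the example is skipped; a 3-vs-2 split over five runs scores 0.4, pushing that example toward the annotation budget.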

Example

A legal classification team has 10,000 unlabeled clauses and budget to label 32. They run the base model five times per clause with temperature 0.7 and compute the fraction of runs that disagree on the predicted label. The 32 clauses with highest disagreement are sent to a reviewer for labeling, then used as few-shot examples. On a held-out test set, this beats 32 randomly labeled clauses by several points of F1.
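Disagreement is not the only workable uncertainty signal: when the label set is small, the Shannon entropy of the empirical answer distribution over the sampled runs gives a finer-grained ranking. A minimal sketch, assuming answers are already collected per clause:

```python
import math
from collections import Counter

def entropy_uncertainty(answers):
    """Shannon entropy of the empirical distribution of sampled
    answers: 0 when all runs agree, maximal when answers are
    spread evenly across labels."""
    n = len(answers)
    probs = [count / n for count in Counter(answers).values()]
    return -sum(p * math.log(p) for p in probs)
```

Unlike the disagreement fraction, entropy distinguishes a 3-1-1 split from a 3-2 split over five runs, which matters when many clauses tie on the coarser score.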
