Mixture of Prompts

Mixture of prompts is an ensembling pattern where the same input is run through several different prompts and the resulting outputs are combined — by majority vote, averaging, or a meta-model that reads all of them. The mixture can vary prompt style (direct answer, chain-of-thought, self-ask), prompt persona (different role assignments or expertise framings), or even target model (the same question asked of several models). Combining reduces the variance introduced by any single prompt's blind spots, and often lifts accuracy on hard tasks where no one prompt dominates. The downside is that inference cost scales linearly with the number of prompts, so the pattern is typically reserved for settings where quality matters more than cost: high-stakes decisions, evals, and ensembled judges.
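A minimal sketch of the pattern, with majority vote as the combiner. The `call_model` function here is a hypothetical stand-in for a real LLM API client, and the canned answers are illustrative only:

```python
from collections import Counter

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real inference call; returns canned
    # answers keyed by prompt style so the sketch is runnable offline.
    canned = {
        "direct": "B",
        "chain-of-thought": "B",
        "self-ask": "A",
    }
    return canned[prompt.split(":")[0]]

def mixture_of_prompts(question: str, styles: list[str]) -> str:
    # Run the same question through several prompt styles...
    answers = [call_model(f"{style}: {question}") for style in styles]
    # ...and combine the outputs by majority vote.
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

print(mixture_of_prompts(
    "Which option is correct?",
    ["direct", "chain-of-thought", "self-ask"],
))  # majority of the three stubbed answers
```

Swapping `Counter`-based voting for a meta-model that reads all the answers, or varying the model instead of the prompt style, follows the same shape.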

Example

A medical-QA evaluation pipeline runs each question through four prompts: a direct-answer prompt, a step-by-step reasoning prompt, a differential-diagnosis-framed prompt, and a guideline-citation prompt. The four answers are combined by a majority-vote step for the final label. On a 1,200-question eval set, any single prompt scores between 0.74 and 0.79; the ensemble scores 0.86 — a gain worth the 4x inference cost for this offline evaluation use case.
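The majority-vote step from a pipeline like this can be sketched as follows. Note that with an even number of prompts a 2-2 tie is possible; the tie-break policy below (defer to the direct-answer prompt) is an assumption for illustration, not something the source specifies:

```python
from collections import Counter

def majority_vote(answers: dict[str, str]) -> str:
    # answers maps prompt variant -> predicted label, e.g. the four
    # prompt variants described above.
    counts = Counter(answers.values())
    top_label, top_count = counts.most_common(1)[0]
    # Assumed tie-break policy: with no strict majority, defer to the
    # direct-answer prompt's label.
    if list(counts.values()).count(top_count) > 1:
        return answers["direct"]
    return top_label

print(majority_vote({
    "direct": "pneumonia",
    "step-by-step": "pneumonia",
    "differential": "bronchitis",
    "guideline": "pneumonia",
}))  # 3-of-4 agreement wins
```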
