Self-Refine Prompting

Self-refine prompting is an iterative pattern in which the model generates an output, critiques its own output against specified criteria, then produces a revised version. Typical implementations run 2–3 rounds, with diminishing returns beyond that. The pattern works best when the critique criteria are explicit and specific — vague self-critique tends to rubber-stamp the original answer. Self-refine is cheap because it requires no external verifier, but it also risks the model agreeing with its own mistakes; for production use it is usually combined with an external eval step or a separate judge model.
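The generate–critique–revise loop can be sketched as follows. This is a minimal illustration, not a specific library's API: `call_model` stands in for whatever LLM call you use, and the prompt templates and the `NO ISSUES` sentinel are assumptions chosen for the sketch.

```python
# Self-refine loop sketch. `call_model` is a stand-in for any LLM API
# call (an assumption, not a real library); the prompts and the
# "NO ISSUES" stop sentinel are illustrative choices.

CRITIQUE_PROMPT = (
    "Critique the draft against these criteria: {criteria}\n"
    "If the draft fully satisfies them, reply exactly: NO ISSUES.\n"
    "Otherwise list the violations.\n\nDraft:\n{draft}"
)

REVISE_PROMPT = (
    "Rewrite the draft to fix these issues:\n{critique}\n\nDraft:\n{draft}"
)

def self_refine(call_model, task_prompt, criteria, max_rounds=3):
    """Generate a draft, then critique and revise it until the critique
    passes or max_rounds is reached (2-3 rounds is typical)."""
    draft = call_model(task_prompt)
    for _ in range(max_rounds):
        critique = call_model(
            CRITIQUE_PROMPT.format(criteria=criteria, draft=draft)
        )
        if "NO ISSUES" in critique:
            break  # the self-critique found nothing to fix; stop early
        draft = call_model(
            REVISE_PROMPT.format(critique=critique, draft=draft)
        )
    return draft
```

The stop condition matters: without an explicit "nothing to fix" signal, the loop always burns its full round budget even when round one was already acceptable.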

Example

A product description prompt runs self-refine with the critique: "Check the draft for: specific numeric claims without sources, banned words (revolutionary, seamless, cutting-edge), passive voice, and any sentence over 20 words. List violations, then rewrite." Round 1 produces a generic draft; round 2 flags three banned words and two long sentences and rewrites them; round 3 finds one remaining long sentence and stops. A separate judge model then scores the final draft before it goes live.
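Two of the criteria above (banned words, sentence length) are mechanical enough to check in code rather than by model self-critique, which makes them a natural piece of the external eval step. A hypothetical checker, with names and thresholds chosen for illustration:

```python
import re

# Illustrative checker for the mechanical critique criteria from the
# example. Passive voice and unsourced numeric claims are left to the
# model, since they require judgment.
BANNED = {"revolutionary", "seamless", "cutting-edge"}
MAX_WORDS = 20

def find_violations(text):
    """Return a list of rule violations: banned marketing words and
    sentences longer than MAX_WORDS words."""
    violations = []
    lowered = text.lower()
    for word in sorted(BANNED):
        if word in lowered:
            violations.append(f"banned word: {word}")
    # Naive sentence split on terminal punctuation; good enough here.
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        if len(sentence.split()) > MAX_WORDS:
            violations.append(
                f"sentence over {MAX_WORDS} words: {sentence[:40]}..."
            )
    return violations
```

Running deterministic checks like this outside the model is one way to keep the loop from rubber-stamping its own output: the model can only claim "no issues" on criteria the checker cannot verify.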
