Self-Critique Prompting

Self-critique prompting is a pattern where the model is asked to evaluate its own output against specific criteria, surface weaknesses, and suggest improvements — but deliver the critique as an output, not a rewrite. That distinction is what separates it from Self-Refine, which also revises the original answer. Self-critique is useful as a standalone quality-check step in pipelines where something or someone else — a separate agent, a validator, a human reviewer — will decide what to do with the feedback. It is often paired with the Constitutional AI approach, where the critique criteria are drawn from an explicit set of principles rather than left implicit, giving the critique a stable rubric instead of the model's in-context taste.
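The pattern above can be sketched in a few lines. This is a minimal illustration, not a fixed API: `call_model` is a stand-in for whatever LLM client the pipeline uses, and the prompt wording and criteria are placeholders you would tune for your own task.

```python
# Minimal sketch of a self-critique step. The model is asked to flag
# weaknesses and suggest improvements, but explicitly NOT to rewrite
# the answer -- the critique is returned as a separate artifact.

CRITIQUE_PROMPT = """You are reviewing the answer below. Do NOT rewrite it.
For each criterion, say whether the answer meets it, quote the weakest
passage, and end with a short list of suggested improvements.

Criteria:
{criteria}

Answer to review:
{answer}
"""

def build_critique_prompt(answer: str, criteria: list[str]) -> str:
    """Assemble the critique-only prompt from an answer and a rubric."""
    bullet_list = "\n".join(f"- {c}" for c in criteria)
    return CRITIQUE_PROMPT.format(criteria=bullet_list, answer=answer)

def self_critique(answer, criteria, call_model):
    # `call_model` is a hypothetical callable (prompt -> completion).
    # The original answer is passed through untouched; only the
    # critique text is produced here.
    return call_model(build_critique_prompt(answer, criteria))
```

Because the function returns the critique rather than a revision, a downstream consumer (validator, second agent, human) decides what to do with it, which is exactly the separation from Self-Refine described above.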

Example

A legal-drafting workflow generates a contract redline and then sends it through a self-critique prompt scored against a five-item checklist — ambiguous terms, missing counter-party obligations, undefined defined-terms, conflicting clauses, and jurisdiction gaps. The critique output is attached to the redline as reviewer notes. A human lawyer now opens a redline whose weakest points are already flagged, and spends review time on judgment calls rather than on finding the issues.
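The workflow above might be wired together as follows. The five checklist items come from the text; everything else (`ReviewedRedline`, `call_model`, the prompt phrasing) is an illustrative assumption, not a real library or a definitive implementation.

```python
# Hypothetical wiring of the legal-redline example: critique the draft
# against a fixed checklist and attach the result as reviewer notes,
# leaving the redline itself unchanged.

from dataclasses import dataclass

CHECKLIST = [
    "ambiguous terms",
    "missing counter-party obligations",
    "undefined defined-terms",
    "conflicting clauses",
    "jurisdiction gaps",
]

@dataclass
class ReviewedRedline:
    redline: str         # the original draft, untouched
    reviewer_notes: str  # the self-critique, for the human lawyer

def attach_critique(redline: str, call_model) -> ReviewedRedline:
    # `call_model` is a stand-in callable (prompt -> completion).
    prompt = (
        "Critique the contract redline below against this checklist. "
        "Flag issues only; do not rewrite the redline.\n\n"
        + "\n".join(f"{i}. {item}" for i, item in enumerate(CHECKLIST, 1))
        + "\n\nRedline:\n" + redline
    )
    return ReviewedRedline(redline=redline, reviewer_notes=call_model(prompt))
```

The human lawyer receives both fields together: the unmodified redline plus notes that already point at its weakest spots.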

Put this into practice

Build polished, copy-ready prompts in under 60 seconds with SurePrompts.

Try SurePrompts