Step-Back Prompting
Step-back prompting is a technique in which the model first generates a higher-level abstraction, principle, or generalization — a "step back" from the specific question — before answering. Introduced by Zheng et al. in "Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models" (2024), it improves accuracy on knowledge-intensive reasoning where the answer depends on applying a general rule or concept. Asking the model to surface the rule explicitly before applying it reduces shortcut errors and retrieval-style hallucinations. It is complementary to chain-of-thought: step-back focuses on surfacing the right concept, while chain-of-thought focuses on sequencing the derivation once the concept is known.
Example
For a physics question like "What happens to the pressure of an ideal gas if its volume is halved at constant temperature?", the model first steps back to state "The ideal gas law: PV = nRT. At constant temperature and moles, P and V are inversely proportional." It then applies this principle to conclude the pressure doubles. Without the step-back, models sometimes invent a proportional relationship or skip the law entirely.
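The two-stage flow above can be sketched as a simple prompt pipeline: one call elicits the governing principle, a second call answers the original question conditioned on it. This is a minimal illustration under assumptions, not the authors' implementation; `ask_model` is a hypothetical stand-in for any real chat-completion call, returning canned text here so the sketch runs offline.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns canned text
    # so this example runs without a model or API key.
    if "Using this principle" in prompt:
        return ("Since P and V are inversely proportional at constant T, "
                "halving the volume doubles the pressure.")
    return ("The ideal gas law: PV = nRT. At constant temperature and moles, "
            "P and V are inversely proportional.")

def step_back_answer(question: str) -> str:
    # Stage 1 (step back): ask for the general principle, not the answer.
    principle = ask_model(
        "What general principle or law is needed to answer this question?\n"
        f"{question}"
    )
    # Stage 2: answer the original question, grounded in that principle.
    return ask_model(
        f"Principle: {principle}\n"
        f"Using this principle, answer: {question}"
    )

print(step_back_answer(
    "What happens to the pressure of an ideal gas if its volume is halved "
    "at constant temperature?"
))
```

In practice the step-back prompt can also be populated with few-shot pairs of (question, principle), as in the original paper; the key design choice is that the first call never sees a request for the final answer, only for the abstraction.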