Least-to-Most Prompting
Least-to-most prompting is a reasoning pattern in which the model first decomposes a complex problem into an ordered sequence of easier sub-problems, then solves each sub-problem in turn, feeding earlier answers into later ones. Introduced by Zhou et al. in "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models" (2022), it extends chain-of-thought by making the decomposition explicit rather than leaving it implicit in a single reasoning trace. It is particularly effective on tasks with clear prerequisite structure — compositional generalization benchmarks, multi-step math word problems, and symbolic reasoning — where solving a hard instance requires first solving simpler versions. The trade-off is increased token usage and a dependency on the quality of the decomposition step; a bad decomposition cascades into bad answers.
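The two-stage structure can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `ask` is a hypothetical callable wrapping a language-model call, and the prompt wording is invented for the sketch.

```python
def decompose(problem, ask):
    # Stage 1: ask the model to break the problem into ordered sub-problems,
    # easiest first. Returns one sub-problem per line.
    prompt = (
        "Break the following problem into a numbered list of simpler "
        f"sub-problems, easiest first:\n{problem}"
    )
    return [line for line in ask(prompt).splitlines() if line.strip()]

def solve_least_to_most(problem, ask):
    # Stage 2: solve each sub-problem in order, appending every earlier
    # question/answer pair to the context so later prompts can build on
    # earlier answers.
    context = f"Problem: {problem}\n"
    answer = ""
    for sub in decompose(problem, ask):
        prompt = context + f"Q: {sub}\nA:"
        answer = ask(prompt)
        context += f"Q: {sub}\nA: {answer}\n"
    return answer  # the answer to the final sub-problem is the final answer
```

The key design point is that each solving prompt carries the full chain of prior sub-answers, which is what lets the model compose them; a bad Stage-1 decomposition propagates through every Stage-2 call.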
Example
Given a problem like "If a train leaves station A at 9 AM going 60 mph, and a second train leaves station B, 340 miles away, at 10 AM going 80 mph toward the first train, when do they meet?", the model first decomposes it into (1) the head-start distance train A covers before B departs, (2) the combined closing speed, (3) the distance remaining between the trains when B departs, and (4) the time needed to close that distance. It answers each sub-problem in order and uses the chain of intermediate answers to produce the final meeting time, rather than attempting the full calculation in one pass.
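With the stations taken to be 340 miles apart (a number chosen for illustration), the four sub-problems reduce to plain arithmetic, computed in the same order:

```python
speed_a, speed_b = 60, 80      # mph
distance = 340                 # miles between stations (assumed for illustration)
head_start_hours = 1           # train A departs 9 AM, train B at 10 AM

# (1) Head-start distance train A covers before B departs.
head_start = speed_a * head_start_hours        # 60 miles
# (2) Combined closing speed once both trains are moving.
closing_speed = speed_a + speed_b              # 140 mph
# (3) Distance still separating the trains at 10 AM.
remaining = distance - head_start              # 280 miles
# (4) Time to close that gap, measured from 10 AM.
hours_to_meet = remaining / closing_speed      # 2.0 hours, so they meet at noon
```

Each later quantity consumes an earlier one (`remaining` uses `head_start`, `hours_to_meet` uses `remaining` and `closing_speed`), which mirrors how least-to-most feeds earlier sub-answers into later sub-problems.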