Self-Debug Prompting
Self-debug prompting is a pattern in which the model generates code, an interpreter executes it, and the model receives the execution result — error messages, failed test output, or unexpected values — as additional context for a revised attempt. The loop continues until the code runs correctly or a retry budget is exhausted. It mirrors a human developer's edit-run-read-error-revise cycle and is particularly effective for code-generation tasks where first-draft code often has small bugs (off-by-one errors, wrong imports, type mismatches) that are obvious once the interpreter surfaces them. Quality depends on the interpreter's error messages being informative; opaque runtimes reduce the benefit.
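The loop described above can be sketched in a few lines. This is a minimal illustration, not a production harness: `generate` is a hypothetical stand-in for a model call, and candidate code is executed with `exec` purely for demonstration (a real system would sandbox execution).

```python
import traceback

def run_candidate(code, test):
    """Execute candidate code plus a test snippet; return (ok, error_message)."""
    namespace = {}
    try:
        exec(code, namespace)   # define the candidate function
        exec(test, namespace)   # run assertions against it
        return True, ""
    except Exception:
        # Feed back only the final line of the traceback, e.g. "AssertionError"
        return False, traceback.format_exc().splitlines()[-1]

def self_debug(generate, test, max_retries=3):
    """Self-debug loop: generate -> execute -> feed error back -> retry.

    `generate(feedback)` stands in for an LLM call; `feedback` is None on
    the first attempt, otherwise the interpreter's error message.
    """
    feedback = None
    for attempt in range(1, max_retries + 1):
        code = generate(feedback)
        ok, feedback = run_candidate(code, test)
        if ok:
            return code, attempt
    return None, max_retries

# Hypothetical model stub: the first draft has an off-by-one bug; seeing
# any error feedback triggers the corrected revision.
drafts = [
    "def total(xs):\n    return sum(xs[:-1])",  # buggy: drops last element
    "def total(xs):\n    return sum(xs)",       # corrected
]
def fake_model(feedback):
    return drafts[0] if feedback is None else drafts[1]

code, attempts = self_debug(fake_model, "assert total([1, 2, 3]) == 6")
# The stub fails once (AssertionError), revises, and passes on attempt 2.
```

The key design choice is what to feed back: a trimmed error message (as here) keeps the retry prompt short, while a full traceback plus the failing test gives the model more to work with at the cost of context.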
Example
A coding agent is asked to write a function that parses ISO 8601 durations. Its first attempt raises a ValueError on the test case "PT1H30M". The interpreter's error — "could not convert string to float: '1H30'" — is fed back to the model, which realizes it needs to split on letter boundaries, not just digits, and produces a corrected version that passes on the second try.
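A corrected second attempt along the lines the example describes, splitting the time portion on letter boundaries rather than converting the whole string, might look like this (a sketch covering only simple `PT…H…M…S` durations, with a hypothetical function name):

```python
import re

def parse_duration(s):
    """Parse a simple ISO 8601 time duration like 'PT1H30M' into seconds."""
    # Match each component at its letter boundary instead of converting
    # the whole remainder (the source of the original ValueError).
    match = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", s)
    if match is None:
        raise ValueError(f"unsupported duration: {s}")
    h, m, sec = (int(g) if g else 0 for g in match.groups())
    return h * 3600 + m * 60 + sec

parse_duration("PT1H30M")  # 1 hour 30 minutes -> 5400 seconds
```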