
40 Copy-Paste GPT-5 Coding Prompts: Frontend, Backend, Refactor, Debug, Migrate

40 copy-paste GPT-5 prompts for software engineers — full-stack feature work, refactoring, code review, debugging, test generation, migrations, infra, and architecture decisions.

SurePrompts Team
May 6, 2026
35 min read

TL;DR

Forty GPT-5 prompts engineers actually use in their day job — frontend feature scaffolds, backend API design, refactor proposals, code review, test generation, migrations, debugging, and architecture decisions. Each prompt produces output you can paste into a PR with light editing.

Generic "write me a function" prompts get you code you have to rewrite from scratch. Structured engineering prompts — with file context, explicit constraints, and a specified output format — get you copy-paste diffs you can paste into a PR, review in five minutes, and ship. These 40 copy-paste templates are the latter.

Why GPT-5 Is Different for Code

Long context changes everything. GPT-5's million-token context window means you can paste an entire file, a full module, or a schema migration history and ask questions across all of it. You no longer have to summarize your codebase for the model — just give it the real thing. This matters most for refactoring and architecture work where the answer depends on what's already there.

Tool calling makes it an active participant. When wired to a code execution environment, GPT-5 can run your test suite, invoke a type checker, or hit a local endpoint to verify its own output. The prompts in this guide are written for the chat interface, but if you're using GPT-5 via the API with tool calling, add a step that tells it to validate by running the relevant command before returning results.
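
For example, with the OpenAI Node SDK a tool-calling setup might look like the sketch below — a minimal example, not a recipe; the model name and the run_tests tool are assumptions to adapt to your own environment:

code
import OpenAI from "openai";

const client = new OpenAI();

// Expose a hypothetical "run_tests" tool so the model can request a
// verification run before returning its final answer.
const response = await client.chat.completions.create({
  model: "gpt-5", // assumed model identifier — use whatever your account exposes
  messages: [
    { role: "user", content: "Refactor src/utils.ts, then verify by running the tests." },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "run_tests",
        description: "Run the project test suite and return the output",
        parameters: { type: "object", properties: {}, required: [] },
      },
    },
  ],
});
// Your code then executes any tool calls in response.choices[0].message.tool_calls
// and feeds the results back to the model in a follow-up message.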

Structured output mode gives you reliable diffs. Earlier models would sometimes return a refactored function buried in prose. GPT-5 in structured output mode returns clean JSON or a unified diff when you ask for one — which is exactly what you want when editing existing code. Several prompts below explicitly request diff format.

It actually writes the tests when you ask. GPT-4-class models had a pattern of saying "and you'd want to add tests for X, Y, Z" without writing them. GPT-5 follows through when tests are a named deliverable in the prompt. Ask for them explicitly and they show up in the same response, not as a suggestion.

Ask for diffs, not full files. When editing existing code, prompt for a unified diff rather than the whole file rewritten. This makes it easier to review what changed, apply the patch selectively, and avoid GPT-5 silently modifying lines you didn't ask it to touch. For the pattern of working with AI on an existing codebase, see our guides on prompting AI coding agents and AI prompts for coding.
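
If you haven't read one in a while, a unified diff looks like this — a hypothetical one-line fix in a TypeScript file, with context lines so the patch applies cleanly:

code
--- a/src/checkout.ts
+++ b/src/checkout.ts
@@ -42,3 +42,3 @@
 export function applyCredit(total: number, credit: number): number {
-  return total - credit;
+  return Math.max(0, total - credit);
 }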

40 copy-paste GPT-5 prompts across 8 engineering categories


Frontend Feature Prompts (1–5)

1. React Component With a State Machine

code
You are a senior React engineer. Implement a [COMPONENT NAME] component in
React [VERSION] with TypeScript.

The component manages these states: [LIST STATES — e.g., idle, loading,
success, error, empty].

Transitions:
- [STATE A] → [STATE B] when [EVENT]
- [STATE B] → [STATE C] when [EVENT]
[ADD MORE AS NEEDED]

Props interface:
[DESCRIBE PROPS WITH TYPES]

Requirements:
- Use XState or a hand-rolled useReducer — your call, but justify the choice
  in a comment at the top of the file
- No useState for the machine itself; all state transitions go through dispatch
- Render a different UI for each state (no boolean soup)
- File size limit: 200 lines. Extract sub-components if needed.
- Export types alongside the component

Output: full TypeScript file including the types. List any assumptions before
writing the implementation.
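
For reference, the hand-rolled useReducer variant of this pattern typically looks like the sketch below — a minimal example assuming a hypothetical fetch-style component with idle/loading/success/error states:

code
import { useReducer } from "react";

type State =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: string[] }
  | { status: "error"; message: string };

type Event =
  | { type: "FETCH" }
  | { type: "RESOLVE"; data: string[] }
  | { type: "REJECT"; message: string };

function reducer(state: State, event: Event): State {
  switch (state.status) {
    case "idle":
      return event.type === "FETCH" ? { status: "loading" } : state;
    case "loading":
      if (event.type === "RESOLVE") return { status: "success", data: event.data };
      if (event.type === "REJECT") return { status: "error", message: event.message };
      return state;
    default:
      // success and error only leave via a new FETCH
      return event.type === "FETCH" ? { status: "loading" } : state;
  }
}

// In the component: const [state, dispatch] = useReducer(reducer, { status: "idle" });
// Render on state.status — one branch per state, no boolean soup.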

2. Accessible Form With Validation

code
Build a [FORM NAME] form component in React [VERSION] + TypeScript.

Fields:
[LIST FIELDS WITH TYPE, LABEL, AND VALIDATION RULES — e.g.:]
- email: string, required, valid email format
- password: string, required, min 8 chars, at least one uppercase + one digit
- confirmPassword: string, must match password

Form library: [React Hook Form / Formik / none — hand-roll with useReducer]
Validation: Zod schema (generate the schema alongside the component)

Accessibility requirements:
- aria-describedby wires each field to its error message
- aria-invalid on fields with errors
- Focus moves to first error on submit failure
- Screen reader announces submission result

Output:
1. Zod schema
2. Component file
3. Unit tests using @testing-library/react covering: happy path,
   individual field errors, cross-field validation (confirmPassword),
   and submission handler called only on valid data

List assumptions before writing.
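
The Zod schema deliverable for the example fields might look like this — a minimal sketch; the cross-field check uses .refine with a path so the error lands on confirmPassword:

code
import { z } from "zod";

export const signupSchema = z
  .object({
    email: z.string().email(),
    password: z
      .string()
      .min(8)
      .regex(/[A-Z]/, "Must contain an uppercase letter")
      .regex(/\d/, "Must contain a digit"),
    confirmPassword: z.string(),
  })
  .refine((data) => data.password === data.confirmPassword, {
    message: "Passwords must match",
    path: ["confirmPassword"], // attach the error to the confirmPassword field
  });

export type SignupValues = z.infer<typeof signupSchema>;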

3. Responsive Layout From a Figma Description

code
Implement this layout in React + Tailwind CSS. I'll describe it as a Figma
spec — translate it faithfully.

Layout description:
[PASTE FIGMA SPEC OR WRITTEN DESCRIPTION OF LAYOUT]

Breakpoints to support: mobile (< 640px), tablet (640–1024px), desktop (> 1024px)

Layout behavior at each breakpoint:
- Mobile: [DESCRIBE — e.g., single column, nav collapses to hamburger]
- Tablet: [DESCRIBE]
- Desktop: [DESCRIBE]

Constraints:
- No CSS-in-JS, no inline style props — Tailwind classes only
- Container max-width: [VALUE]
- Components must be presentational (no data fetching, no side effects)
- Pass all data as props with explicit TypeScript types

Output: component file + types. Call out any Figma ambiguities you assumed
away before writing the code.

4. Interactive Data Visualization

code
Build a [CHART TYPE — bar, line, scatter, treemap] visualization component
for this data shape:

[PASTE TYPESCRIPT TYPE OR SAMPLE JSON]

Tech: React [VERSION] + [recharts / visx / d3-direct — your recommendation,
justify it]
TypeScript strict mode: on

Interactive features:
- Hover tooltip showing [FIELDS TO DISPLAY]
- [CLICK / BRUSH / ZOOM] interaction: [DESCRIBE BEHAVIOR]
- Animated on first render (300ms ease-in)

Accessibility:
- SVG title and description for screen readers
- Keyboard-navigable data points (if bar/scatter)
- Color palette must pass WCAG AA contrast on white background

Performance constraint: renders ≤ 10,000 data points without jank on a
mid-range laptop. If d3 is needed for this, say so and show the approach.

Output:
1. Component with full TypeScript types
2. Storybook story (or plain usage example) with sample data
3. Note the failure modes you considered (e.g., empty data, single data
   point, extreme outliers)

5. Animation and Transition System

code
Implement enter/exit animations for [COMPONENT NAME] in React + TypeScript.

Current component (paste it):
[PASTE EXISTING COMPONENT CODE]

Desired animation behavior:
- Enter: [DESCRIBE — e.g., fade in + slide up 8px over 200ms]
- Exit: [DESCRIBE — e.g., fade out over 150ms]
- Trigger: [what causes the enter/exit — prop change, route change, user action]

Library preference: [Framer Motion / React Spring / CSS transitions — or ask
GPT-5 to recommend and justify]

Constraints:
- prefers-reduced-motion must disable or simplify all animations
- No layout thrash: animations must not trigger reflow on the main thread
- Existing component behavior must be preserved exactly

Output: unified diff against the pasted component. Do not rewrite lines
that don't need to change.


Backend & API Prompts (6–10)

6. REST Endpoint With Validation

code
Implement a [HTTP METHOD] /[PATH] endpoint in [Express / Fastify / Hono /
Go net/http / Python FastAPI — specify].

Language: [LANGUAGE + VERSION]
Database: [Postgres / MySQL / SQLite] via [Prisma / GORM / SQLAlchemy /
raw driver — specify]

What it does:
[DESCRIBE BUSINESS LOGIC IN PLAIN ENGLISH]

Request body schema (generate Zod / Pydantic / Go struct alongside the handler):
[DESCRIBE FIELDS WITH TYPES AND CONSTRAINTS]

Success response: [SHAPE]
Error responses: list the status codes and when each fires

Requirements:
- Validate input before touching the database
- Use parameterized queries if writing raw SQL — no string interpolation
- Return RFC 7807 Problem Details for errors
- Log request ID, user ID (if auth'd), and elapsed ms at the end of each request
- Write one integration test that hits the handler with a real (test) database
  connection and covers: valid request, missing required field, duplicate key

List assumptions before writing.
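
For the error-shape requirement, an RFC 7807 Problem Details body is just JSON with a handful of standard fields. A minimal Express-flavored sketch — the helper and the error URIs are illustrative:

code
import type { Response } from "express";

interface ProblemDetails {
  type: string;      // URI identifying the error class
  title: string;     // short, human-readable summary
  status: number;    // HTTP status, duplicated in the body
  detail?: string;   // instance-specific explanation
  instance?: string; // URI for this specific occurrence
}

function sendProblem(res: Response, problem: ProblemDetails): void {
  res
    .status(problem.status)
    .type("application/problem+json") // media type defined by the RFC
    .json(problem);
}

// sendProblem(res, {
//   type: "https://example.com/errors/duplicate-key",
//   title: "Duplicate key",
//   status: 409,
//   detail: "An account with this email already exists",
// });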

7. GraphQL Resolver

code
Write a GraphQL resolver for the [QUERY / MUTATION / SUBSCRIPTION] field
[FIELD NAME] in [Apollo Server / graphql-yoga / Strawberry / gqlgen — specify].

Language: [LANGUAGE + VERSION]

Schema fragment (paste relevant SDL or Go types):
[PASTE SCHEMA]

What the resolver does:
[DESCRIBE]

Data sources:
- [DATA SOURCE 1 — e.g., Postgres table users via DataLoader]
- [DATA SOURCE 2 — e.g., Redis cache for rate-limit state]

Requirements:
- Use DataLoader to batch and cache per-request if resolving entity lists
- Validate arguments before querying; throw UserInputError for bad input
- Authorize: [describe rule — e.g., user can only fetch their own records unless
  role === 'admin']
- N+1 queries must not appear in the generated code — explain how you avoided them

Output:
1. Resolver implementation
2. DataLoader setup (if applicable)
3. Unit tests mocking the data sources, covering auth failure + happy path
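
The DataLoader requirement is the heart of the N+1 fix: batch every key requested during one tick into a single query. A minimal sketch, where db.usersByIds is a hypothetical helper that fetches all IDs at once:

code
import DataLoader from "dataloader";

interface User {
  id: string;
  name: string;
}

type Db = { usersByIds(ids: readonly string[]): Promise<User[]> };

export function createUserLoader(db: Db) {
  return new DataLoader<string, User | undefined>(async (ids) => {
    const rows = await db.usersByIds(ids); // one query for the whole batch
    const byId = new Map(rows.map((u) => [u.id, u]));
    return ids.map((id) => byId.get(id)); // results must match the order of ids
  });
}

// Construct one loader per request (e.g., in the GraphQL context factory)
// so the per-request cache never leaks data between users.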

8. Background Job

code
Implement a background job that [DESCRIBE TASK] in [Node.js + BullMQ /
Python + Celery / Go goroutine + channel — specify].

Language: [LANGUAGE + VERSION]
Queue/broker: [Redis / RabbitMQ / SQS — specify]

Job payload type:
[DESCRIBE FIELDS]

Job behavior:
1. [STEP 1]
2. [STEP 2]
3. [STEP N]

Failure handling:
- Retry policy: [e.g., 3 retries with exponential backoff, max 30 min]
- Dead-letter behavior: [e.g., write to dead_jobs table with full payload + error]
- Idempotency: [e.g., job must be safe to run twice with the same payload]

Observability:
- Emit a structured log line at job start, job end, and each retry
- Record job duration as a metric (name: [METRIC NAME])

Output:
1. Job definition and worker
2. Producer helper used to enqueue the job
3. Unit tests covering: success, transient failure + retry, exhausted retries
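
If you fill in Node.js + BullMQ, the retry policy from the template maps directly onto job options. A minimal sketch — queue name, payload shape, and Redis connection details are assumptions:

code
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

const emailQueue = new Queue("send-email", { connection });

// Producer helper: three attempts with exponential backoff starting at 1s.
export async function enqueueEmail(payload: { to: string; template: string }) {
  await emailQueue.add("send-email", payload, {
    attempts: 3,
    backoff: { type: "exponential", delay: 1_000 },
  });
}

const worker = new Worker(
  "send-email",
  async (job) => {
    // ...do the work; throwing here triggers the retry policy above
  },
  { connection }
);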

9. Webhook Handler

code
Build a webhook receiver endpoint for [PROVIDER — e.g., Stripe, GitHub,
SendGrid] in [LANGUAGE + FRAMEWORK].

Webhook docs reference (paste the relevant payload spec):
[PASTE OR DESCRIBE PAYLOAD]

Events to handle:
- [EVENT TYPE 1]: [what to do]
- [EVENT TYPE 2]: [what to do]
- [EVENT TYPE N]: [what to do]

Security requirements:
- Verify the [HMAC-SHA256 / JWT / custom] signature before processing
- Return 200 immediately if the event type is not in the handled list (do not 404)
- If the signature check fails, log a warning and still return 200 — do not
  leak to the caller that validation failed

Idempotency:
- Store processed event IDs in [TABLE / CACHE]; skip and return 200 if already seen
- ID TTL: [e.g., 7 days]

Output:
1. Route handler
2. Signature verification utility (unit-tested in isolation)
3. Integration test with a real sample payload from the provider's docs
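
The signature-verification utility usually reduces to a few lines of Node crypto. A minimal HMAC-SHA256 sketch — header name and signature encoding vary by provider, so treat the hex assumption as a placeholder:

code
import { createHmac, timingSafeEqual } from "node:crypto";

export function verifySignature(
  rawBody: Buffer,      // must be the raw request bytes, not re-serialized JSON
  signatureHex: string, // value from the provider's signature header
  secret: string
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so guard first
  return received.length === expected.length && timingSafeEqual(received, expected);
}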

10. Token Bucket Rate Limiter

code
Implement a token bucket rate limiter as [Express / Fastify / Go middleware].

Language: [LANGUAGE + VERSION]
State backend: Redis [VERSION] using [ioredis / go-redis / redis-py]

Rate limit rules:
- Default: [N] requests per [WINDOW] per [IP / user ID / API key]
- Elevated tier: [N] requests per [WINDOW] for [CONDITION]

Response when limited:
- HTTP 429
- Retry-After header (seconds until next token)
- X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset headers

Atomic Redis operation: use a Lua script so the read-modify-write is atomic.
Paste the Lua script as a code comment above the call.

Failure mode: if Redis is unreachable, [fail open / fail closed — choose and
justify in a comment].

Output:
1. Middleware implementation with the Lua script
2. Unit tests mocking Redis: under limit, at limit, exceeded, Redis down
3. Benchmark showing ops/sec at [TARGET RPS]
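
For orientation, the algorithm itself is small. An in-memory TypeScript sketch of a token bucket — the production version moves exactly this read-modify-write into a Redis Lua script so it stays atomic across instances:

code
interface Bucket {
  tokens: number;
  lastRefill: number; // ms epoch of the last refill calculation
}

const buckets = new Map<string, Bucket>();

export function allow(key: string, capacity: number, refillPerSec: number): boolean {
  const now = Date.now();
  const bucket = buckets.get(key) ?? { tokens: capacity, lastRefill: now };

  // Refill continuously based on elapsed time, capped at capacity.
  const elapsedSec = (now - bucket.lastRefill) / 1000;
  bucket.tokens = Math.min(capacity, bucket.tokens + elapsedSec * refillPerSec);
  bucket.lastRefill = now;

  const allowed = bucket.tokens >= 1;
  if (allowed) bucket.tokens -= 1; // caller responds 429 + Retry-After otherwise
  buckets.set(key, bucket);
  return allowed;
}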


Refactoring Prompts (11–15)

11. Extract Function

code
Refactor the function below by extracting cohesive sub-operations into
named helper functions.

Language: [LANGUAGE]

Current function:
[PASTE FUNCTION]

What it does (your understanding):
[DESCRIBE IN PLAIN ENGLISH]

Rules:
- Each extracted function does exactly one thing
- Extracted functions are pure where possible (no side effects)
- Names must read as a verb phrase describing what the function does
- The original function's public signature must not change
- Behavior must be identical — include a note on any edge cases you are
  relying on in the extraction

Output: unified diff. Do not change anything outside the scope of this
extraction. List assumptions before writing.

12. Dependency-Injection Refactor

code
Refactor this code to use explicit dependency injection instead of
hardcoded dependencies.

Language: [LANGUAGE]

Current code:
[PASTE FILE OR RELEVANT SECTION]

Hardcoded dependencies I can see: [LIST — e.g., imports a specific DB client,
calls a specific HTTP client constructor directly]

Target: the dependencies are passed as constructor arguments / function
parameters so the code can be tested without hitting real infrastructure.

Requirements:
- Define interfaces/protocols for each injected dependency
- Do not use a DI framework — manual injection only
- Update or write unit tests that inject fakes/stubs for each dependency
- The calling code (main / bootstrap / wherever the real deps are wired)
  should be shown separately from the refactored module

Output:
1. Refactored module (unified diff format)
2. Interface/protocol definitions
3. Updated unit tests using injected fakes
4. Updated wiring code

13. Callback-to-Async/Await Migration

code
Migrate this callback-based [Node.js / Python / Go] code to use async/await.

Language: [LANGUAGE + VERSION]

Current code:
[PASTE CODE]

Known constraints:
- [e.g., The outer function is called from a sync context in main.js — note
  how that call site needs to change]
- [e.g., Error handling currently uses a custom err object with a .code field
  — preserve that shape]

Requirements:
- All Promises must be explicitly awaited — no floating Promises
- Error handling must cover the same cases as the original callback error paths
- Unhandled rejection at the process level: ensure there is a handler or explain
  why one is not needed
- Preserve all existing behavior including timeout and cancellation if present

Output: unified diff. Call out any places where the semantics differ subtly
between the callback and async/await versions.
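
For Node-style callbacks, util.promisify does most of the mechanical work. A minimal sketch using fs.readFile — the comment on merged error paths is exactly the kind of subtle semantic shift the prompt asks GPT-5 to call out:

code
import { readFile } from "node:fs";
import { promisify } from "node:util";

const readFileAsync = promisify(readFile);

// Before: readFile("config.json", "utf8", (err, data) => { ... })
export async function loadConfig(): Promise<unknown> {
  const data = await readFileAsync("config.json", "utf8");
  // I/O errors and JSON.parse errors now surface through the same rejection
  // path — a subtle difference from the split callback error handling.
  return JSON.parse(data);
}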

14. Dead-Code Finder

code
Analyze this [LANGUAGE] codebase section for dead code and unused exports.

Files to analyze (paste each with a filename header):
[PASTE FILES]

Entry points (nothing is dead if it's reachable from here):
[LIST — e.g., src/index.ts, src/api/routes.ts]

Find:
1. Exported functions/classes not imported anywhere in the pasted files
2. Internal functions defined but never called
3. Variables assigned but never read
4. Conditional branches that can never be reached (based on visible types)
5. Commented-out code blocks longer than [N lines / "any size"]

For each finding:
- File name + line number
- Category (unused export / dead internal / unreachable branch / zombie comment)
- Confidence (high / medium — flag anything where you're inferring from partial
  context)
- Recommended action (delete / keep as test double / move to tests)

Do not suggest removing anything you are not confident is unreachable given the
pasted entry points. Be conservative.

15. Type-Safety Upgrade

code
Upgrade this [JavaScript / Python untyped / Go interface{}-heavy] code to
full strict type safety.

Language target: [TypeScript strict / Python + mypy strict / Go generics]
File to upgrade:
[PASTE FILE]

Rules:
- No `any`, `object`, `interface{}`, or cast-to-unknown unless genuinely
  unavoidable — flag each one with a TODO comment explaining why
- Define explicit types for every function signature (params + return)
- Replace runtime type checks with TypeScript discriminated unions /
  Python TypeGuard / Go type assertions only where necessary
- Do not change runtime behavior — this is a types-only migration

Output:
1. Fully typed file
2. List of every `any` / escape hatch used with rationale
3. Note any places where the original code had a latent type bug that strict
   typing exposed — these are worth reviewing before merging


Code Review Prompts (16–20)

16. Security Review

code
You are performing a security review for a production PR. Review the code
below for security issues only — do not comment on style or performance.

Language: [LANGUAGE]
Context: [what this code does, who calls it, what data it touches]

Code:
[PASTE DIFF OR FILE]

Check for:
- Injection vulnerabilities (SQL, command, template, LDAP)
- Authentication and authorization bypasses
- Insecure deserialization
- Secrets or credentials in source
- Path traversal
- SSRF opportunities
- Overly broad CORS or CSP
- Logging of sensitive data (PII, tokens, passwords)
- Unsafe use of eval, exec, pickle, or equivalent in the language

For each finding:
- Severity: CRITICAL / HIGH / MEDIUM / LOW
- Exact file + line
- Attack scenario (one sentence: how would an attacker exploit this)
- Remediation (show the fixed code, not just describe it)

If you find nothing, say so and name the checks you ran.

17. Performance Review

code
Review this code for performance issues in a production context.

Language: [LANGUAGE]
Runtime context: [e.g., HTTP request handler, batch job, CLI, background worker]
Expected load: [e.g., 500 rps, 10M rows/day, interactive latency < 100ms]

Code:
[PASTE CODE]

Look for:
- Synchronous blocking calls in an async context
- N+1 query patterns
- Missing indexes implied by the query patterns
- Unnecessary serialization / deserialization
- Repeated computation that could be memoized
- Allocation-heavy hot paths (GC pressure)
- Missing connection pool reuse

For each issue:
- Estimated impact: HIGH / MEDIUM / LOW (relative to the stated load)
- Exact location
- Explanation of why it matters at the given scale
- Suggested fix with code

Do not flag style issues. Stick to measurable performance impact.

18. Readability Review

code
Review this code for readability and long-term maintainability. Assume the
next developer to read this has no context beyond the code itself.

Language: [LANGUAGE]
Codebase conventions: [describe style guide, naming conventions, patterns
used elsewhere — or paste a representative file as reference]

Code to review:
[PASTE CODE]

Evaluate:
- Names (variables, functions, classes): do they communicate intent?
- Function length and single-responsibility adherence
- Nesting depth: anything deeper than 3 levels is a flag
- Magic numbers or literals that should be named constants
- Error handling completeness (are all failure paths handled and surfaced?)
- Comments: missing where "why" is non-obvious, or redundant where the code
  speaks for itself

Output format:
- Inline comments in the style: `// REVIEW [SEVERITY]: <issue> → <suggestion>`
- Summary table: issue | location | severity | effort to fix
- One overall sentence verdict: "merge as-is / merge with minor fixes / needs
  rework before merge"

19. API Contract Review

code
Review this API definition for contract quality and forward compatibility.

Format: [OpenAPI 3.1 / GraphQL SDL / Protobuf / TypeScript interface — paste it]

[PASTE API DEFINITION]

Consumers: [who calls this API — internal services, mobile clients,
third-party integrators]

Evaluate:
- Breaking-change risk: which fields or types, if changed, would break existing
  consumers without a version bump?
- Ambiguities: fields or behaviors not clearly specified (nullability, defaults,
  ordering guarantees)
- Missing error contract: are all error shapes defined?
- Versioning strategy: is there one? Should there be?
- Pagination: if list endpoints exist, is pagination specified?
- Authentication: is the auth requirement documented at the spec level?

Output:
- Issues table: issue | field/endpoint | severity | recommended fix
- Proposed changes to the definition (show diffs)
- Fields that are safe to add later without a version bump

20. Accessibility Review

code
Review this frontend code for accessibility issues against WCAG 2.2 AA.

Component/page:
[PASTE JSX / HTML / TEMPLATE CODE]

Context: [describe the UI — what does this render, who are the users]

Check for:
- Missing or incorrect ARIA roles, labels, descriptions
- Keyboard navigation: can all interactive elements be reached and activated
  without a mouse?
- Focus management: does focus move predictably after state changes (modals,
  toasts, route changes)?
- Color contrast: flag any hardcoded colors that may fail AA (I'll verify
  exact values separately)
- Form inputs: every input must have an associated label (not placeholder-only)
- Image alternatives: are all meaningful images described? Are decorative
  images hidden from AT?
- Motion: is there a prefers-reduced-motion path?

For each issue:
- WCAG criterion violated
- Component location (line or JSX element)
- Severity: blocker / major / minor
- Code fix (show the corrected JSX or HTML)

Do not comment on visual design or style.


Test Generation Prompts (21–25)

21. Unit Tests With Edge Cases

code
Write unit tests for the function below. Do not hold back on edge cases.

Language: [LANGUAGE]
Test framework: [Jest / Vitest / pytest / Go testing / RSpec — specify]

Function to test:
[PASTE FUNCTION]

Cover:
1. Happy path (typical valid input)
2. Boundary values (min, max, empty, single element)
3. Type/shape edge cases: null, undefined, empty string, zero,
   negative numbers, strings-that-look-like-numbers, Unicode
4. Error conditions: what should throw/return an error, and does it?
5. Any invariant I should assert about the return value regardless of input

Test naming convention: `[function name] [scenario description in plain English]`
Follow Arrange-Act-Assert structure.
Target: 100% branch coverage of the pasted function.

Before writing tests, list the edge cases you plan to cover so I can add
any you missed.

22. Integration Test for an API Endpoint

code
Write an integration test for this API endpoint that runs against a real
(test) database, not mocks.

Language: [LANGUAGE]
Framework: [FRAMEWORK]
Test runner: [JEST + SUPERTEST / pytest + httpx / Go testing + net/http/httptest]
Database: [Postgres / SQLite in-memory for tests]

Endpoint handler (paste it):
[PASTE HANDLER CODE]

Route: [METHOD] /[PATH]

Test scenarios:
1. Valid request → expected response shape and status
2. Missing required field → 400 with RFC 7807 error body
3. [DOMAIN-SPECIFIC FAILURE — e.g., duplicate email] → [EXPECTED STATUS + BODY]
4. Unauthorized request (no token) → 401
5. Successful request mutates the database correctly — assert the DB state
   after the call, not just the response

Setup/teardown: transaction rollback after each test so tests are order-independent.

Output:
1. Test file
2. Any fixtures or seed helpers needed
3. Note which tests require network/disk I/O so they can be tagged and skipped
   in unit-only CI runs

23. E2E Flow Test

code
Write an end-to-end test for the following user flow using [Playwright /
Cypress / Selenium — specify].

Language: [TypeScript / Python / Java]

Flow to test: [DESCRIBE THE COMPLETE USER JOURNEY — e.g.:]
1. User lands on /signup
2. Fills in email and password
3. Submits form
4. Receives confirmation email (mock the email provider)
5. Clicks confirmation link
6. Lands on /dashboard with "Welcome" message visible

Base URL: [TEST ENVIRONMENT URL — e.g., http://localhost:3000]
Auth state: [how to set up a logged-out state before the test]

Selectors: prefer data-testid attributes over CSS class selectors.
If the app does not have data-testid attributes, note where they need to
be added and use accessible role selectors (getByRole) as fallback.

Failure modes to handle:
- Network request timeout (set explicit wait timeouts, not arbitrary sleeps)
- Flaky animation (wait for element stability, not fixed delays)

Output:
1. Test file
2. List of data-testid attributes that need to be added to the app
3. CI configuration snippet showing how to run this test in a headless browser

24. Snapshot Test

code
Write snapshot tests for this React component.

Component:
[PASTE COMPONENT]

Test framework: [Jest + @testing-library/react / Vitest]
Renderer: @testing-library/react (not react-test-renderer — use
prettyDOM or toMatchInlineSnapshot)

Scenarios to snapshot:
1. Default props
2. [VARIANT 1 — e.g., loading state]
3. [VARIANT 2 — e.g., error state]
4. [VARIANT 3 — e.g., empty data]

Important:
- Use inline snapshots (toMatchInlineSnapshot) so snapshot diffs appear in
  the test file, not a separate file
- Mock any date/time values so snapshots are deterministic
- Mock any random IDs or UUIDs

After writing the tests, note which props or children I should add data-testid
to so the snapshots are stable across CSS refactors (i.e., they capture
structure, not class names).

25. Property-Based Test

code
Write property-based tests for the function below.

Language: [TypeScript / Python / Haskell / Rust — specify]
Library: [fast-check / Hypothesis / QuickCheck / proptest — or recommend one]

Function to test:
[PASTE FUNCTION]

Properties to verify (suggest more if you see them):
- [PROPERTY 1 — e.g., round-trip: encode(decode(x)) === x]
- [PROPERTY 2 — e.g., idempotency: f(f(x)) === f(x)]
- [PROPERTY 3 — e.g., commutativity: combine(a, b) === combine(b, a)]

Arbitrary generators needed:
- [TYPE 1 — e.g., valid email strings]
- [TYPE 2 — e.g., non-empty arrays of positive integers]

Output:
1. Arbitrary/generator definitions
2. Property test cases
3. A failing example for each property if you can construct one by inspection
   (helps verify the tests actually catch bugs)

Before writing, list the properties you plan to test and flag any that
require a reference implementation to verify against.
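
In TypeScript with fast-check, a round-trip property looks like the sketch below. The base64 encode/decode pair is a stand-in for whatever functions you're actually testing:

code
import fc from "fast-check";
import { test } from "vitest";

// Stand-in implementations — replace with the real functions under test.
const encode = (s: string) => Buffer.from(s, "utf8").toString("base64");
const decode = (s: string) => Buffer.from(s, "base64").toString("utf8");

test("decode(encode(x)) round-trips for any string", () => {
  fc.assert(
    fc.property(fc.string(), (x) => {
      // On failure, fast-check shrinks to a minimal counterexample.
      return decode(encode(x)) === x;
    })
  );
});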


Debug & Troubleshoot Prompts (26–30)

26. Stack Trace Analysis

code
Analyze this stack trace and identify the root cause. Do not just describe
what failed — tell me why it failed and what I need to change.

Language: [LANGUAGE + VERSION]
Runtime: [Node.js / CPython / JVM / Go runtime — version]
Framework: [FRAMEWORK + VERSION]

Stack trace:
[PASTE FULL STACK TRACE]

Relevant source files (paste each with filename):
[PASTE FILES]

Reproduction steps:
[WHAT YOU DID TO TRIGGER THIS]

What I've already tried:
[LIST ATTEMPTS SO FAR]

Deliver:
1. Root cause in one sentence
2. Explanation of the call chain that led to the error
3. The fix — show the exact code change as a diff
4. How to verify the fix works
5. Any related failure modes this fix might uncover

27. Race Condition Hunt

code
I have a suspected race condition in this concurrent code. Help me find it.

Language: [LANGUAGE]
Concurrency model: [Go goroutines / Node.js event loop + async / Python
asyncio / Java threads — specify]

Code:
[PASTE THE RELEVANT CODE]

Observed symptom: [DESCRIBE — e.g., "intermittently returns stale data",
"occasionally panics with nil pointer", "deadlocks under load"]

Reproduction: [e.g., "happens roughly 1 in 50 times under concurrent load,
never in unit tests"]

Analyze:
1. Identify every shared mutable state access point in the pasted code
2. For each: is it protected? How?
3. Describe the specific interleaving that causes the symptom
4. Propose the fix: mutex, channel, atomic, lock-free structure, or
   redesign — justify the choice
5. Show the fixed code as a diff
6. Describe a test that can reliably reproduce the race (e.g., using
   go test -race or asyncio gather with tight timing)

28. Memory Leak Investigation

code
Help me identify a memory leak in this [LANGUAGE] service.

Service description: [WHAT IT DOES, HOW LONG IT RUNS]

Observed behavior: [e.g., "RSS grows ~50MB/hour, never drops, OOMs after 12h"]

Profiler output (paste or describe):
[PASTE HEAP SNAPSHOT DIFF / PPROF OUTPUT / TRACEMALLOC OUTPUT / etc.]

Relevant code sections (paste files you suspect):
[PASTE CODE]

Analyze:
1. Likely leak source based on the profiler data
2. Walk through the code and identify where references are held longer
   than necessary
3. Common patterns in [LANGUAGE] that cause leaks — does this code
   exhibit them? (e.g., event listener accumulation, closure captures,
   goroutine leaks, unclosed resources)
4. The fix as a diff
5. How to add a memory growth assertion to the CI test suite so this
   class of leak is caught before production

If the profiler output is ambiguous, tell me exactly what additional data
to collect.

29. Performance Regression Triage

code
A performance regression was introduced between these two commits.
Help me find it.

Language: [LANGUAGE]
Service: [WHAT IT DOES]

Before: [COMMIT SHA or description] — p99 latency [X]ms
After:  [COMMIT SHA or description] — p99 latency [Y]ms

Diff between the commits (paste or describe):
[PASTE GIT DIFF OR FILE CHANGES]

Load profile: [e.g., "200 rps, 95% GET /api/items, 5% POST /api/items"]
Profiler output from the slow version (if available):
[PASTE FLAMEGRAPH TEXT / TOP HOTSPOTS]

Analyze:
1. Changes in the diff most likely to cause latency increase
2. For each candidate: explain the mechanism (e.g., added a synchronous
   DB call in a hot path, changed O(n) to O(n²), removed connection pool reuse)
3. Ranked list of suspects by likelihood
4. Suggested fix for the top suspect as a diff
5. Benchmark or load test command to confirm the regression is fixed

30. Flaky Test Triage

code
This test fails intermittently. Help me make it deterministic.

Language: [LANGUAGE]
Test framework: [FRAMEWORK]

Test code:
[PASTE TEST]

Source code under test:
[PASTE RELEVANT SOURCE]

Failure pattern: [e.g., "fails roughly 1 in 10 runs in CI, always passes locally",
"fails only when run in parallel with other tests", "fails on slow machines"]

CI environment: [OS, CPU count, any relevant env vars]

Analyze:
1. Identify all non-deterministic elements in the test (timing assumptions,
   random values, global state, file system, network, process environment)
2. For each: is it in the test or the source? How should it be controlled?
3. If it's a timing issue: replace sleeps/arbitrary timeouts with explicit
   polling or event-driven waits — show the code
4. If it's shared state: show how to isolate each test run
5. Output the fixed test as a diff

Do not suggest increasing timeouts as a fix. Diagnose the root cause.
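
The usual fix for timing flakes is a small poll-until helper: wait on the condition itself with a hard deadline instead of guessing a sleep duration. A minimal sketch (names are illustrative):

code
async function waitFor(
  condition: () => boolean | Promise<boolean>,
  { timeoutMs = 5_000, intervalMs = 25 } = {}
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`condition not met within ${timeoutMs}ms`);
}

// Instead of: await sleep(500); expect(queue.size).toBe(0);
// Write:      await waitFor(() => queue.size === 0); expect(queue.size).toBe(0);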


Migration & Upgrade Prompts (31–35)

31. Framework Version Migration

code
Migrate this codebase from [FRAMEWORK] [OLD VERSION] to [NEW VERSION].

Language: [LANGUAGE]

Files to migrate (paste each):
[PASTE FILES]

Known breaking changes between the versions:
[PASTE RELEVANT CHANGELOG ENTRIES OR MIGRATION GUIDE SECTION]

Migration requirements:
- Preserve all existing behavior
- Do not introduce new dependencies without flagging them
- Flag any patterns that are deprecated in the new version even if they
  still work (so we can address them separately)
- Update imports, API calls, config format as needed

Output:
1. Unified diff for each file
2. Summary table: change | file | reason | risk
3. Any manual steps that cannot be automated (e.g., config file format change)
4. Test command to verify the migration worked

32. Language Port

code
Port this [SOURCE LANGUAGE] code to [TARGET LANGUAGE].

Source code:
[PASTE SOURCE]

Target environment:
- Language version: [VERSION]
- Key libraries available: [LIST — or ask GPT-5 to recommend idiomatic alternatives]
- Performance requirements: [e.g., must match source within 10%]

Rules:
- Use idiomatic [TARGET LANGUAGE] patterns, not a line-by-line translation
- Preserve the external interface (function signatures, return shapes)
  unless a more idiomatic equivalent exists — flag any interface changes
- Error handling must be as comprehensive as the source
- Write unit tests for the ported code covering the same cases as the
  source test suite (paste source tests if available)

Output:
1. Ported code
2. Notes on idiom differences (where you deviated from literal translation
   and why)
3. Unit tests
4. Any functionality the target language cannot express cleanly — and
   the closest equivalent

33. ORM Swap

code
Migrate the data access layer in this codebase from [OLD ORM/QUERY BUILDER]
to [NEW ORM/QUERY BUILDER].

Language: [LANGUAGE + VERSION]

Files to migrate:
[PASTE REPOSITORY / DATA ACCESS LAYER FILES]

Database: [Postgres / MySQL / SQLite]
Schema (paste migration files or CREATE TABLE statements):
[PASTE SCHEMA]

Requirements:
- All queries must produce identical SQL (verify by checking query logs
  or using explain)
- Transaction semantics must be preserved
- Connection pool configuration must be migrated (show the new equivalent)
- Soft-delete behavior: [describe if present]

Output:
1. Migrated files as unified diffs
2. Any queries where the new ORM produces different SQL — flag these
   explicitly as needing manual verification
3. Updated test file showing how to mock the new ORM in unit tests

34. Monolith-to-Services Slice

code
Extract the [FEATURE/DOMAIN] capability from this monolith into a
standalone service.

Language: [LANGUAGE]
Monolith framework: [FRAMEWORK]
Target service type: [REST API / gRPC / event-driven worker]

Relevant monolith code (paste the files that own this domain):
[PASTE FILES]

Current coupling points:
[LIST — e.g., "shares Postgres connection", "calls UserService methods directly",
"reads from shared Redis cache"]

Slice requirements:
1. The service owns its own database (define the schema it needs)
2. Communication with the monolith: [synchronous HTTP / async events — specify]
3. Define the API contract between monolith and service
4. The monolith must continue to work during a phased rollout
   (strangler fig pattern — show how to wrap the existing code)

Output:
1. New service scaffold with the extracted logic
2. API contract definition (OpenAPI stub or Protobuf)
3. Adapter in the monolith that calls the new service (with feature flag)
4. Data migration plan: how to move the relevant rows to the new service's DB
5. Rollback plan if the new service has issues

35. CI/CD Pipeline Migration

code
Migrate our CI/CD pipeline from [SOURCE PLATFORM — e.g., CircleCI, Jenkins,
Travis CI] to [TARGET PLATFORM — e.g., GitHub Actions, GitLab CI].

Current pipeline config (paste it):
[PASTE EXISTING PIPELINE FILE]

What the pipeline does:
[DESCRIBE STAGES — e.g., lint, test, build Docker image, push to ECR, deploy
to ECS staging on merge to main, manual approval gate for production]

Target platform constraints:
- Secrets are stored in [GitHub Actions secrets / environment variables / Vault]
- Docker registry: [ECR / GCR / Docker Hub]
- Deployment target: [ECS / GKE / Fly.io / bare metal — specify]

Requirements:
- Preserve all existing behavior including parallelism and caching
- Cache dependencies between runs ([node_modules / .venv / Go module cache])
- Pipeline must fail fast: type check and lint before running tests
- Deployment only on [branch / tag / manual trigger — specify]

Output:
1. New pipeline config file(s)
2. List of secrets that need to be configured in the new platform
3. Any behaviors the target platform cannot replicate — and workarounds
4. Instructions for dry-running the pipeline without triggering a deployment


Architecture & Decision Prompts (36–40)

36. Architecture Decision Record (ADR) Draft

code
Draft an Architecture Decision Record for the following decision.

Decision: [WHAT WE ARE DECIDING — e.g., "Choose a message broker for
async job processing"]

Context:
- System: [DESCRIBE THE SYSTEM]
- Scale: [current + projected load]
- Team: [size, expertise level]
- Constraints: [budget, existing infrastructure, ops burden]

Options considered:
1. [OPTION A — e.g., Redis + BullMQ]
2. [OPTION B — e.g., RabbitMQ]
3. [OPTION C — e.g., AWS SQS + Lambda]

Evaluation criteria (ranked by importance):
1. [CRITERION 1 — e.g., operational simplicity]
2. [CRITERION 2 — e.g., at-least-once delivery guarantee]
3. [CRITERION 3 — e.g., dead-letter handling]
4. [CRITERION N]

Format the ADR using the Michael Nygard template:
- Title
- Status: Proposed
- Context
- Decision
- Consequences (positive + negative)
- Alternatives considered

Be direct about the tradeoffs. Do not write marketing copy for the chosen
option. Name the downsides.

37. System Design Memo

code
Write a system design memo for [FEATURE OR SYSTEM].

Audience: [e.g., engineering team + EM, no external stakeholders]
Format: internal technical memo (not a marketing doc)

Scope: [what this design covers and explicitly does NOT cover]

Requirements:
Functional:
- [REQUIREMENT 1]
- [REQUIREMENT 2]

Non-functional:
- Availability: [SLA]
- Latency: [p99 target]
- Throughput: [rps or events/sec]
- Data retention: [duration]

Include:
1. High-level architecture diagram (ASCII or described in text)
2. Component breakdown: what each part does and why it exists
3. Data model: key entities and relationships
4. API surface: key endpoints or event schemas
5. Failure modes: what happens when each component fails
6. Open questions: things not yet decided with a named owner

Do not pad the memo. If a section doesn't apply, say so in one line.

38. Tradeoff Analysis

code
Analyze the tradeoffs between these two technical approaches for [PROBLEM].

Context:
[DESCRIBE THE PROBLEM AND CURRENT SYSTEM STATE]

Approach A: [NAME]
Description: [HOW IT WORKS]
Relevant code or design (paste if exists): [CODE OR SKIP]

Approach B: [NAME]
Description: [HOW IT WORKS]
Relevant code or design: [CODE OR SKIP]

Evaluate each approach on:
- Implementation complexity (time to build, lines of code, new dependencies)
- Operational complexity (monitoring, debugging, on-call burden)
- Performance at [STATED SCALE]
- Testability
- Reversibility (how hard is it to undo this choice in 18 months?)
- Cost ([compute / licensing / engineering time])

Output:
1. Comparison table
2. Recommendation with a single clear sentence of justification
3. The conditions under which you'd reverse the recommendation
   (what would have to be true for the other option to win?)

Do not hedge everything. Pick a side.

39. Capacity Estimation

code
Estimate the infrastructure capacity required for [SYSTEM OR FEATURE].

Assumptions I'm providing (challenge any that seem wrong):
- Daily active users: [N]
- Requests per user per day: [N]
- Peak traffic multiplier: [e.g., 3x average]
- Average request payload: [KB]
- Average response payload: [KB]
- Read/write ratio: [e.g., 95/5]
- Data retention: [DURATION]
- Data growth rate: [GB/month]

For each layer, estimate and show your work:
1. Web/API tier: rps, CPU cores needed, instance count
2. Database: queries/sec, IOPS required, storage at 1 year / 3 years
3. Cache: hit rate assumption, memory required
4. Storage: total bytes, growth curve
5. Network: egress GB/month

Output:
- Estimation table with numbers and formulas
- Bottom-line infrastructure cost estimate ([AWS / GCP / Azure — specify]
  on-demand pricing for reference only, will vary)
- Top 3 assumptions that, if wrong, would change the estimate by >2x

Show arithmetic, not just conclusions. I need to be able to audit the numbers.
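
As a sanity check on what "show your work" means: with hypothetical inputs of 1M daily active users at 20 requests each, that's 20,000,000 requests/day ÷ 86,400 seconds ≈ 231 rps average, and at a 3x peak multiplier ≈ 694 rps at peak. Every number in the estimate should be auditable at that level.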

40. Post-Mortem Template

code
Write a post-mortem document for the following incident.

Incident summary:
- Service affected: [SERVICE]
- Impact: [DESCRIBE — e.g., "100% of users could not log in for 47 minutes"]
- Start time: [DATETIME UTC]
- End time: [DATETIME UTC]
- Severity: [SEV1 / SEV2 / SEV3]

Timeline (paste your incident log or describe from memory):
[PASTE OR DESCRIBE EVENTS IN CHRONOLOGICAL ORDER]

Root cause (your current understanding):
[DESCRIBE]

What we know worked:
[e.g., "alerting fired within 2 minutes", "rollback procedure executed cleanly"]

What we know failed:
[e.g., "no circuit breaker on the downstream call", "runbook was out of date"]

Format the post-mortem using this structure:
1. Executive summary (3 sentences — impact, cause, resolution)
2. Timeline (chronological, UTC timestamps)
3. Root cause analysis (5 Whys or fault tree — use whichever fits)
4. Impact analysis (users affected, revenue/SLA implications if known)
5. What went well
6. Action items table: item | owner | due date | priority
7. Lessons for the wider team (one paragraph)

Tone: blameless. Name systems and processes, not individuals.
The goal is learning, not accountability theater.


GPT-5 Coding Power Tips

1. Paste the full file, not a summary. GPT-5's million-token context can hold your entire module. Summaries lose the exact signatures, imports, and edge-case handling that the model needs to give you accurate output. If a file is relevant, paste the whole thing.

2. Ask for diffs, not full files. When editing existing code, use: "Output a unified diff, not the full file rewritten." This makes the change reviewable, prevents silent modifications to lines you didn't ask about, and keeps your git history clean.

3. Require a list of assumptions before the first line of code. Add "List your assumptions before writing any implementation" to every prompt. This surfaces misunderstandings in five seconds instead of after you've reviewed 200 lines of wrong code.

4. Require tests in the same response. Don't ask for tests as a follow-up. Put "Include unit tests covering happy path, edge cases, and failure modes in the same response." Models that generate code and tests together produce more internally consistent output than those that generate tests cold against finished code.

5. Use structured output for refactor proposals. For multi-file refactors, ask for the proposal as a JSON array of { file, change, reason } objects before generating any code. Review the plan first. Once approved, ask GPT-5 to execute each entry. This catches bad ideas before they become diffs.
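
The proposal shape is deliberately boring — something like this, where the file names and changes are illustrative:

code
[
  {
    "file": "src/api/users.ts",
    "change": "Extract request validation into validateUser()",
    "reason": "Handler currently mixes validation and persistence"
  },
  {
    "file": "src/api/orders.ts",
    "change": "Replace inline SQL with the orders repository",
    "reason": "Query logic is duplicated in three handlers"
  }
]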

6. Ask what failure modes it considered. After generating an implementation, ask: "What failure modes did you consider when writing this, and which ones did you not account for?" GPT-5 will surface the gaps it knowingly left — those gaps are your review checklist.

Before

Fix this bug in my checkout function.

After

Here is the full checkout.ts file. The bug: when a coupon code is applied and the user has store credit, the total goes negative. Reproduction: apply coupon "SAVE50" to a $30 order with $40 store credit. Expected: total floors at $0.00, store credit is partially consumed. Actual: total is -$10.00 and the order is placed. Output a unified diff. List your assumptions before writing the fix.

Build Better Engineering Prompts

Every prompt above is a template — adapt the [VARIABLES] to your stack, paste your real code, and you'll get output that's reviewable in minutes rather than hours.

The pattern that makes them work: context first, output format specified, constraints explicit, and assumptions surfaced before the first line of code. That's not GPT-5 magic. That's just good communication with a very capable model.

Use the AI prompt generator to build custom engineering prompts for your specific stack and task, or explore the full GPT-5 prompt guide for prompts across every category beyond code. For integrating AI into your daily engineering workflow, see AI prompts for coding and the complete guide to prompting AI coding agents.
