
50 Best GPT-5 Prompts in 2026: Copy-Paste Templates That Actually Work

50 copy-paste GPT-5 prompts for writing, coding, agents, research, productivity, and creative work. Optimized for GPT-5's 1M context, native tool use, and improved reasoning.

SurePrompts Team
May 6, 2026
40 min read

TL;DR

50 GPT-5-tested copy-paste prompts across writing, coding, agents, research, productivity, business, and creative work. Built around GPT-5's 1M-token context, native tool calling, planner-executor patterns, and the model's tighter format adherence, so each template produces usable output on the first pass.

Most GPT-5 prompts being shared right now are GPT-4o prompts with "GPT-5" pasted over them. That's a waste. GPT-5 has a 1M-token context window, native tool calling, structured output support, and meaningfully better planning, all of which change what a good prompt looks like. These 50 copy-paste prompts are built for GPT-5 specifically, not retrofitted from last year's advice.

Why Most GPT-5 Prompts Underperform

GPT-5 follows format instructions more precisely than its predecessors. If you specify JSON output, you get JSON. If you say "no preamble," you get none. But that precision cuts both ways: vague prompts still produce vague output, and GPT-5 tends to fill ambiguity with verbose explanation rather than with reasonable assumptions.

The most important shift is how you use context. A 1M-token context window means you can paste an entire codebase, a year of meeting notes, or a full product spec and ask GPT-5 to reason across all of it in a single turn. Prompts that worked by keeping things short now leave capability on the table. Front-load the document, tell GPT-5 exactly what to find or do with it, and the model will handle the rest.
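Before front-loading a huge document, it's worth a rough size check. A minimal sketch in Python: the 4-characters-per-token ratio is a common rule of thumb for English text (not an exact tokenizer count), and the reserved-output figure is an assumed value, not a GPT-5 specification.

```python
# Rough check that a front-loaded document fits a 1M-token window.
# chars/4 is a rule-of-thumb estimate for English text, not a real
# tokenizer count -- treat the result as approximate.

CONTEXT_WINDOW = 1_000_000
RESERVED_FOR_OUTPUT = 16_000  # headroom for the model's reply (assumed value)

def estimate_tokens(text: str) -> int:
    """Approximate token count: ~4 characters per token for English."""
    return len(text) // 4

def fits_in_context(document: str, instructions: str) -> bool:
    budget = CONTEXT_WINDOW - RESERVED_FOR_OUTPUT
    return estimate_tokens(document) + estimate_tokens(instructions) <= budget

# A 2M-character document is roughly 500k tokens -- it fits.
doc = "x" * 2_000_000
print(fits_in_context(doc, "Summarize the key decisions."))  # True
```

If the estimate is anywhere near the limit, run the real tokenizer before pasting rather than trusting the heuristic.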

Agent and tool-use patterns are different with GPT-5 too. Instead of asking GPT-5 to browse, then analyze, then summarize as three separate prompts, you can specify the full tool chain in one instruction — search for X, use Code Interpreter to analyze the result, return a formatted report. The model is designed to plan and execute multi-step tool sequences without you babysitting each step.
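If you're calling the API rather than using ChatGPT, specifying the full chain in one request looks roughly like this. This is a sketch only: the tool names (`web_search`, `run_python`) are illustrative placeholders, and you should check your provider's documentation for the actual model identifier and built-in tool types.

```python
# Sketch: one request that declares the whole tool chain up front,
# letting the model plan search -> analyze -> report on its own.
# Tool names and the model string are illustrative, not official.

def build_request(query: str) -> dict:
    return {
        "model": "gpt-5",
        "messages": [{
            "role": "user",
            "content": (
                f"Search for {query}, analyze the results with the Python tool, "
                "and return a formatted report."
            ),
        }],
        "tools": [
            {"type": "function", "function": {
                "name": "web_search",
                "description": "Search the web and return top results",
                "parameters": {"type": "object",
                               "properties": {"query": {"type": "string"}},
                               "required": ["query"]},
            }},
            {"type": "function", "function": {
                "name": "run_python",
                "description": "Execute Python code and return stdout",
                "parameters": {"type": "object",
                               "properties": {"code": {"type": "string"}},
                               "required": ["code"]},
            }},
        ],
    }

request = build_request("Q3 churn benchmarks")
print([t["function"]["name"] for t in request["tools"]])
```

The point is structural: both tools are declared in a single payload, so the model can sequence them without a round trip back to you between steps.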

The templates below are organized by category and built around these specific capabilities. For one-click structured prompt generation, use the AI prompt generator or the ChatGPT prompt generator to get GPT-5-tuned output formats automatically.


Writing Prompts (1–8)

1. Long-Form Article With Full Context

code
You are a senior editor and subject-matter expert. Write a 2,000-word article on [TOPIC].

Context I'm providing (read the full document before writing):
[PASTE REFERENCE MATERIAL — research notes, transcripts, data, source docs]

Audience: [TARGET READER — e.g., senior product managers at B2B SaaS companies]
Tone: direct, specific, no throat-clearing
Publication: [WHERE THIS WILL RUN]

Structure:
- Opening: a specific, counterintuitive claim — no "In today's landscape" opener
- 4–5 H2 sections, each advancing a distinct argument
- Paragraphs: 3–4 sentences max
- Closing: one concrete, actionable takeaway the reader can act on today

Use only the source material I've provided. Do not introduce claims from outside it.
Flag anything where the source material is thin or contradictory.
No preamble — start with the article headline.

2. Cold Email That Actually Gets Replies

code
Write a cold outreach email to [RECIPIENT ROLE] at [COMPANY TYPE].

Goal: [WHAT YOU WANT — 15-minute call, intro, feedback, referral]
My context: [WHO YOU ARE AND WHAT YOU DO]
Their most likely frustration right now: [SPECIFIC PAIN POINT]
One thing I know about their company: [RECENT NEWS, PRODUCT, HIRE, OR INITIATIVE]

Rules:
- Under 110 words in the body
- No "I hope this finds you well" or "I wanted to reach out"
- First sentence references something specific about them — not a generic opener
- Exactly one ask, stated plainly
- Subject line: 5 words or fewer, lowercase, no punctuation

Write the subject line, then the email body. Nothing else.

3. Ghostwritten Thought Leadership Piece

code
Ghost-write a thought leadership article for [PERSON'S ROLE — e.g., "a Chief Revenue Officer"].

Their POV: [THEIR OPINION OR THESIS — the more specific, the better]
Their voice: [DESCRIBE — e.g., "blunt, data-driven, skeptical of conventional wisdom, rarely uses
adjectives, prefers short sentences"]
Audience: [WHERE THIS WILL PUBLISH AND WHO READS IT]
Length: [WORD COUNT]

The piece should:
- Open with a take that will make some readers disagree
- Support the thesis with specific examples, not general statements
- Avoid phrases like "the future of," "game-changing," or "transformative"
- Sound like a real person, not a content brief brought to life
- End with a clear implication for what the reader should do or think differently

Write it ready to publish. No notes about what you did.

4. Sales Page Copy With Objection Handling

code
Write sales page copy for [PRODUCT/SERVICE].

Product: [DESCRIPTION]
Price: [PRICE POINT AND BILLING STRUCTURE]
Target buyer: [WHO BUYS THIS AND WHY THEY'D JUSTIFY THE SPEND]
Primary competitor: [WHAT THEY'RE CURRENTLY USING INSTEAD]
Key differentiator: [THE ONE THING THAT MAKES THIS GENUINELY DIFFERENT]
Top 3 objections buyers have: [LIST THEM]

Structure:
1. Headline: benefit-driven, under 10 words, no exclamation mark
2. Subheadline: addresses the primary objection directly
3. Problem section: 3 specific pain points, each in 2 sentences
4. Solution: how the product eliminates each pain point
5. Proof section: placeholder format for [TESTIMONIAL] and [METRIC]
6. Objection FAQ: 5 Q&As — write the question the way a skeptical buyer would ask it
7. CTA: one action, specific, no "Get started today" filler

Tone: confident, not hype. Write like you're making a case to a smart, skeptical buyer.

5. Technical Documentation From Rough Notes

code
Convert these rough notes into polished technical documentation.

Audience: [WHO WILL READ THIS — e.g., "developers integrating our API for the first time"]
Rough notes:
[PASTE EVERYTHING YOU KNOW — disorganized, incomplete, as-is]

Output format:
## Overview (one paragraph, what this does and why it matters)
## Prerequisites (bulleted list — software, permissions, dependencies)
## Step-by-Step Guide (numbered, one action per step, include code blocks)
## Configuration Options (table: option | type | default | description)
## Common Errors (table: error | cause | fix)
## FAQ (3–5 questions a new user would actually ask)

Rules:
- No passive voice
- Every code example must be copy-paste ready
- Flag with [NEEDS CONFIRMATION] anywhere my notes were unclear or incomplete

6. Repurpose a Long Document Into Five Formats

code
I have this long-form document. Repurpose it into five distinct formats optimized for different
contexts. Read the full document before producing any output.

Document:
[PASTE FULL ARTICLE, REPORT, TRANSCRIPT, OR DECK]

Create:
1. Executive summary (150 words — what happened, what it means, what to do)
2. Twitter/X thread (10 tweets, each self-contained, no tweet is just a teaser for the next)
3. Email newsletter section (200 words — insider-voice, links to the full piece)
4. Slide deck outline (8 slides — title + 3 bullet points each, no full sentences)
5. Internal Slack announcement (3 sentences — what it is, why it matters, where to read more)

Each version should emphasize the angle most relevant to that format's audience.
Don't just shorten the original — adapt the message.

7. Rewrite for Three Audiences, With Reasoning

code
Rewrite this text for three different audiences. Keep the underlying information identical.
Change vocabulary, complexity, emphasis, and tone to match each audience.

Original text:
[PASTE TEXT]

Audience 1: [e.g., "Non-technical executives — care about outcomes, not process, 
have 90 seconds for this"]
Audience 2: [e.g., "Engineers implementing this — need precision, want to know edge cases"]
Audience 3: [e.g., "End users — just need to know what changed and what to do about it"]

After each rewrite, give a 2-sentence explanation of the biggest changes you made and why.

8. Newsletter Intro With a Specific Take

code
Write the opening section (150–200 words) for an issue of [NEWSLETTER NAME], 
a newsletter about [TOPIC].

This week's hook: [THE THING THAT HAPPENED, THE ANGLE YOU WANT TO TAKE]
Voice: [DESCRIBE — e.g., "contrarian, precise, hates vague optimism, writes like 
an analyst who's also funny"]
Running themes readers expect: [2–3 recurring elements your readers know]

The opening should:
- Start mid-thought — no greetings, no "This week in..."
- Establish a clear point of view on the hook topic
- Create tension or curiosity that makes readers want to keep reading
- Transition naturally into "here's what we're covering this issue"

End with a two-sentence bridge to the first section. No sign-off.

Coding Prompts (9–15)

9. Build a Full-Stack Feature End-to-End

code
Build [FEATURE] for my [APPLICATION TYPE] using [TECH STACK].

Tech stack: [FRAMEWORK, LANGUAGE, DATABASE, AUTH METHOD]
Existing architecture context:
[PASTE RELEVANT FILES OR DESCRIBE KEY PATTERNS IN YOUR CODEBASE]

Feature requirements:
- [REQUIREMENT 1]
- [REQUIREMENT 2]
- [REQUIREMENT 3]

Deliver:
1. Exact file structure — what to create, what to modify
2. Complete, working code for each file (no ellipses, no "add your logic here")
3. Database migration or schema changes with rollback
4. Input validation and error handling at every boundary
5. Manual testing steps (step-by-step, reproducible)
6. 5 unit test cases covering happy path and the most likely failure modes

Production-ready means: error handling, loading states, input sanitization.
Do not skip any of these. Flag anything that depends on environment variables I need to set.

10. Structured Code Review

code
Review this code as a senior engineer preparing it for a production release.

Language/framework: [LANGUAGE AND FRAMEWORK]
What this code does: [BRIEF DESCRIPTION]
Context in the larger system: [WHERE THIS FITS]

Code:
[PASTE CODE]

Evaluate in this order:
1. Correctness bugs (logic errors, off-by-ones, incorrect assumptions)
2. Security issues (injection, auth gaps, exposed data, unsafe deserialization)
3. Performance (N+1 queries, unnecessary allocations, blocking calls)
4. Error handling (unhandled exceptions, silent failures, bad error messages)
5. Readability (naming, structure, unnecessary complexity)

For each issue found:
- Quote the specific line(s)
- Classify severity: BLOCKING / SHOULD FIX / SUGGESTION
- Show the corrected version

End with: "The one change with the highest impact on production reliability is..." 
and explain why.

11. Refactor With Preserved Behavior

code
Refactor this code for clarity, maintainability, and reduced complexity.
Do not change any observable behavior or edge case handling.

Code:
[PASTE CODE]

Constraints:
- Preserve all function signatures that are called from outside this file
- Do not change how errors are surfaced to callers
- Maintain existing test compatibility

Improve:
- Variable and function names (self-documenting, no abbreviations)
- Function length (break anything over 40 lines with a single responsibility)
- Nesting depth (max 3 levels — extract or invert conditions)
- Dead code removal
- Comments (remove obvious ones, add "why" comments where logic is non-obvious)

After the refactored code, provide a changelog: one line per change, explaining 
what changed and why. Flag any changes that carry even small behavioral risk.

12. Comprehensive Test Suite Generation

code
Write a complete test suite for this code.

Code:
[PASTE CODE]

Test framework: [jest / pytest / go test / vitest / etc.]
What this code is responsible for: [DESCRIPTION]

Coverage requirements:
- Every public function and exported method
- Happy path with realistic inputs
- Edge cases: empty/null/undefined inputs, zero values, maximum values
- Error cases: what should throw, what should return error objects
- At least two integration-style tests that exercise multiple functions together

Test naming format: describe("functionName", () => { it("returns X when Y") })
— each test name should read as a specification, not a description.

After the tests, list any scenarios you considered but couldn't test without 
additional mocks or fixtures, and explain what those fixtures would need to contain.

13. Database Migration With Safety Plan

code
Write a database migration for this schema change.

Database: [PostgreSQL / MySQL / SQLite / etc.]
Current schema (relevant tables):
[PASTE CURRENT SCHEMA OR DDL]

Change needed: [DESCRIBE THE CHANGE — add column, rename table, split table, etc.]
Data in this table: [APPROXIMATE ROW COUNT AND WHETHER ANY DATA NEEDS TRANSFORMATION]
Zero-downtime requirement: [YES / NO — if yes, note deployment constraints]

Provide:
1. Forward migration SQL (with comments explaining each step)
2. Rollback migration SQL
3. Data migration script if any rows need to be updated
4. Index updates (new indexes needed, old ones to drop)
5. Application code changes required before, during, and after running this migration
6. Risk assessment: what could go wrong, how to verify success, recovery steps

Flag any step that requires a table lock and its expected duration at your estimated scale.

14. Architecture Decision Record

code
Write an Architecture Decision Record (ADR) for this technical decision.

Decision to document: [WHAT YOU DECIDED]
Context: [WHAT PROBLEM LED TO THIS DECISION — constraints, requirements, existing system]
Options considered:
1. [OPTION A — what it is, pros, cons]
2. [OPTION B — what it is, pros, cons]
3. [OPTION C — if applicable]

ADR format:
## Status: [Accepted / Proposed / Deprecated]
## Context
## Decision
## Consequences (positive, negative, neutral)
## Alternatives Considered
## Trade-offs

Write it so a new engineer joining the team 18 months from now understands why this 
decision was made, what was ruled out and why, and what would need to change to 
revisit it. Be specific — no generic "scales better" statements without qualification.

15. Debugging Root-Cause Analysis

code
Debug this issue. I need root cause, not just a fix.

Language: [LANGUAGE]
Expected behavior: [WHAT SHOULD HAPPEN]
Actual behavior: [WHAT IS HAPPENING, INCLUDING ERROR MESSAGE IF ANY]
Frequency: [ALWAYS / INTERMITTENT — if intermittent, under what conditions]
Recent changes: [WHAT CHANGED BEFORE THIS STARTED]

Code:
[PASTE THE RELEVANT CODE — file(s) involved, not just the error line]

Steps:
1. Identify the exact root cause (not just the symptom)
2. Explain the sequence of events that produces this bug
3. Show the fix
4. Explain why your fix works and doesn't introduce new issues
5. Add a regression test that would catch this bug if it ever reappears
6. Suggest one structural change that would make this class of bug impossible

Agent & Tool-Use Prompts (16–22)

16. Research Agent With Web Search and Synthesis

code
You are a research agent. Use web search to answer this research question, 
then synthesize the results into a structured report.

Research question: [YOUR QUESTION]
Scope: [NARROW OR BROAD — e.g., "focus on peer-reviewed sources" or "include industry reports"]
Recency: results from [TIME RANGE — e.g., "the last 12 months only"]

Search strategy:
1. Run at least 4 distinct search queries covering different facets of the question
2. For each source, note: URL, publication date, key claim
3. Cross-reference claims across sources — flag contradictions
4. Discard any source you cannot verify as credible

Final report structure:
## Summary (3–5 sentences — the answer to the research question)
## Key Findings (bulleted, each with source attribution)
## Conflicting Evidence (where sources disagree and why)
## Confidence Assessment (high / medium / low for the main conclusion, with reasoning)
## Recommended Next Steps (what additional research would increase confidence)

17. Planner-Executor Agent Loop

code
You are a planner-executor agent. Break this goal into a concrete task list, 
execute each task using available tools, and report results step by step.

Goal: [WHAT YOU WANT ACCOMPLISHED END-TO-END]

Available tools: [LIST WHAT GPT-5 HAS ACCESS TO — web search, code interpreter, 
file access, etc.]

Execution protocol:
1. PLAN: Output a numbered task list before doing anything. Each task = one tool call 
   or one reasoning step.
2. EXECUTE: Work through each task in order. After each task, output:
   - Task number and name
   - What you did
   - Result or output
   - Whether to continue or if you hit a blocker
3. SUMMARIZE: After all tasks, provide a final summary of what was accomplished, 
   what was skipped, and any follow-up needed.

Do not skip to the summary before completing all executable tasks. 
If a task fails, document the failure and continue with the next task.

18. Browse-and-Summarize With Structured Output

code
Use web browsing to research [TOPIC OR COMPANY OR URL].

Task: [WHAT YOU WANT — "summarize the main arguments", "extract all pricing tiers",
"find recent news about", "compare X and Y across these sources"]

Sources to check: [LIST URLS OR "find the top 5 relevant sources via search"]

Return results as a structured JSON object with this schema:
{
  "topic": "[TOPIC]",
  "sources": [
    {
      "url": "string",
      "title": "string",
      "date": "string or null",
      "key_points": ["string"],
      "relevance": "high | medium | low"
    }
  ],
  "synthesis": "string (200 words max)",
  "confidence": "high | medium | low",
  "gaps": ["string — what you couldn't find"]
}

Use structured outputs mode. Return only the JSON object, no surrounding text.
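When the JSON feeds a downstream script, add a sanity check on your side before trusting it. A minimal sketch, with field names matching the schema in the template above (adjust it if you change the schema in the prompt):

```python
import json

# Quick shape check for the research-report JSON requested above.
# Field names mirror the schema in the prompt; this is a sketch,
# not a full JSON Schema validator.

REQUIRED_TOP = {"topic", "sources", "synthesis", "confidence", "gaps"}
REQUIRED_SOURCE = {"url", "title", "date", "key_points", "relevance"}

def validate_report(raw: str) -> list:
    """Return a list of problems; an empty list means the payload passes."""
    problems = []
    data = json.loads(raw)
    problems += [f"missing field: {k}" for k in REQUIRED_TOP - data.keys()]
    for i, src in enumerate(data.get("sources", [])):
        problems += [f"source {i} missing: {k}" for k in REQUIRED_SOURCE - src.keys()]
    if data.get("confidence") not in {"high", "medium", "low"}:
        problems.append("confidence must be high | medium | low")
    return problems

sample = json.dumps({
    "topic": "t", "sources": [], "synthesis": "s",
    "confidence": "high", "gaps": [],
})
print(validate_report(sample))  # []
```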

19. Code Interpreter Data Pipeline

code
Use Code Interpreter to process this dataset and answer my questions.

[UPLOAD FILE OR PASTE DATA BELOW]

Questions:
1. [SPECIFIC QUESTION — e.g., "What's the distribution of [COLUMN]?"]
2. [SPECIFIC QUESTION — e.g., "Is there a correlation between [A] and [B]?"]
3. [SPECIFIC QUESTION — e.g., "Which rows match [CONDITION]?"]
4. [SPECIFIC QUESTION — e.g., "What's the trend over [TIME COLUMN]?"]

For each question:
- Write and run Python code to answer it
- Show the code used
- Output the result (number, table, or chart)
- Write a plain-English interpretation a non-technical reader would understand

At the end, generate a downloadable CSV of any transformed or filtered dataset 
you created during analysis.

20. Multi-Step File Handler

code
You are a file processing agent. I'm going to provide you with [FILE TYPE OR DOCUMENT].
Complete the following tasks in sequence, using each output as input for the next step.

File/Document:
[PASTE CONTENT OR DESCRIBE THE UPLOADED FILE]

Task chain:
Step 1: [EXTRACT OR PARSE — e.g., "Extract all action items from this meeting transcript"]
Step 2: [TRANSFORM — e.g., "Categorize each action item by team: Engineering, Design, Product"]
Step 3: [ENRICH — e.g., "For each item, suggest a realistic deadline based on scope"]
Step 4: [FORMAT — e.g., "Format the final list as a Markdown table with columns: 
         Item | Owner Team | Suggested Deadline | Priority"]

After step 4, output the final table and nothing else.
Flag any items from Step 1 that were ambiguous and explain how you handled them.

21. Parallel Tool-Call Research Sprint

code
I need you to run multiple research threads in parallel and combine the results.

Overall question: [THE HIGH-LEVEL QUESTION YOU'RE TRYING TO ANSWER]

Run these searches simultaneously:
Thread A: [SEARCH QUERY OR ANGLE 1]
Thread B: [SEARCH QUERY OR ANGLE 2]
Thread C: [SEARCH QUERY OR ANGLE 3]

After gathering results from all three threads:
1. Identify where all three threads agree — that's your high-confidence finding
2. Identify where threads conflict — flag these explicitly
3. Identify what Thread A found that B and C missed (and vice versa)
4. Synthesize into a unified answer that credits which thread provided each key fact

Output format: synthesis paragraph first, then a "Sources by Thread" appendix.
Total synthesis length: under 400 words.

22. Computer-Use Task Specification

code
You are assisting with a computer-use task. I'll describe the goal; 
you plan and execute the steps.

Goal: [WHAT NEEDS TO BE DONE ON THE COMPUTER — e.g., "fill out this web form", 
"extract data from this page", "navigate to X and download Y"]

Starting state: [WHAT SCREEN OR APP IS CURRENTLY OPEN]
Expected end state: [WHAT SHOULD BE TRUE WHEN DONE]
Constraints: [ANYTHING TO AVOID — e.g., "don't click on ads", "don't submit anything 
without showing me first"]

Before taking any action:
1. Describe what you see on the screen
2. State your plan (numbered steps)
3. Ask for confirmation before any action that cannot be undone

After each action, describe what changed on screen and whether it matched expectations.
If anything unexpected happens, stop and describe the situation before proceeding.

Research & Analysis Prompts (23–29)

23. Literature Review Across a Large Document Set

code
Conduct a structured literature review on [TOPIC] using the documents I'm providing.
Read all documents completely before writing any output.

Documents:
[PASTE OR UPLOAD MULTIPLE PAPERS, REPORTS, OR ARTICLES]

Review structure:
1. Scope: what questions does this body of literature address?
2. Consensus findings: what do most sources agree on? (cite which ones)
3. Active debates: where do sources disagree and why?
4. Methodology overview: what research methods appear most frequently?
5. Key gaps: what questions does this literature leave unanswered?
6. Recency assessment: which findings might be outdated and why?
7. Top 3 implications for [YOUR FIELD OR USE CASE]

Cite sources inline using [Author, Year] format. 
Flag any claim that only a single source supports.

24. Competitive Landscape Analysis

code
Analyze the competitive landscape for [PRODUCT CATEGORY].

My product: [DESCRIPTION]
Competitors to analyze:
- [COMPETITOR 1]: [URL OR BRIEF DESCRIPTION]
- [COMPETITOR 2]: [URL OR BRIEF DESCRIPTION]
- [COMPETITOR 3]: [URL OR BRIEF DESCRIPTION]

Use web search to research current positioning, pricing, and messaging for each.

Deliver:
1. Feature comparison matrix (table — rows are features, columns are competitors + me)
2. Pricing comparison (table — tiers, prices, what's included)
3. Positioning analysis: the core claim each competitor leads with
4. Where they're weakest (based on what you can observe publicly)
5. Market gap: the customer segment or use case nobody is owning well
6. My best defensible positioning given this landscape
7. Three specific moves I should make in the next 60 days

Base every claim on what you actually find. If a competitor's page doesn't disclose
pricing, say so rather than estimating.

25. Market Sizing With Documented Assumptions

code
Estimate the market size for [MARKET OR PRODUCT CATEGORY].

Context: [WHAT YOU'RE BUILDING OR EVALUATING]
Geography: [GLOBAL / US / SPECIFIC REGION]
Time horizon: [CURRENT YEAR OR PROJECTION YEAR]

Use a bottom-up approach:
1. Define the target customer precisely (not "everyone who uses X")
2. Estimate the number of potential customers (show your math)
3. Estimate average spend per customer per year
4. Calculate TAM, SAM, and SOM with clear definitions of each
5. List every assumption you made and its basis
6. Show a sensitivity analysis: what happens to the estimate if your 
   biggest assumption is off by 50%?

Then: what's the one assumption in this model that most needs validation, 
and how would you validate it?
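The arithmetic the template asks for is simple enough to sanity-check yourself. A sketch with invented numbers: every figure below is a placeholder assumption, not market data.

```python
# Worked bottom-up sizing with placeholder assumptions.
customers = 40_000        # assumed: target companies in scope
spend_per_year = 6_000    # assumed: average spend per customer per year ($)

tam = customers * spend_per_year   # total addressable market
sam = tam * 0.30                   # assumed share reachable with current product/geo
som = sam * 0.05                   # assumed share realistically winnable near-term

print(f"TAM ${tam:,}  SAM ${sam:,.0f}  SOM ${som:,.0f}")

# Sensitivity: what if the customer-count assumption is off by 50%?
for factor in (0.5, 1.5):
    adjusted_som = customers * factor * spend_per_year * 0.30 * 0.05
    print(f"customers x{factor}: SOM ${adjusted_som:,.0f}")
```

Writing the model this way makes the prompt's step 5 trivial: every assumption is a named variable you can point at and challenge.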

26. Dataset Exploration and Hypothesis Generation

code
Use Code Interpreter to explore this dataset and generate hypotheses.

[UPLOAD OR PASTE DATA]

Exploration steps:
1. Profile the data: shape, types, null rates, value distributions for each column
2. Identify the 3 most interesting patterns or anomalies
3. Test two specific correlations: [VARIABLE A] vs [VARIABLE B], [VARIABLE C] vs [VARIABLE D]
4. Generate a visualization for each finding (chart type that best shows the pattern)
5. Based on what you find, propose 3 testable hypotheses about what's driving the patterns

For each hypothesis, specify:
- The hypothesis in "If X, then Y" format
- What data you'd need to test it
- What analysis would confirm or refute it

Output a final summary: the dataset in 3 sentences, the most actionable finding, 
and the hypothesis with the highest expected value to pursue.

27. Source Verification and Claim Audit

code
I'm going to give you a piece of content that makes factual claims. 
Audit each claim for accuracy and verifiability.

Content:
[PASTE ARTICLE, REPORT, OR DOCUMENT]

For each factual claim:
1. Quote the exact claim
2. Classify it: VERIFIABLE (can be checked), OPINION (subjective), 
   CITATION NEEDED (asserted without evidence)
3. For VERIFIABLE claims: use web search to find a credible source that confirms 
   or contradicts the claim
4. For contradicted claims: describe what the evidence actually shows
5. Rate each claim: SUPPORTED / UNSUPPORTED / PARTIALLY SUPPORTED / OUTDATED

Final summary:
- Total claims audited
- Count by classification
- The 3 claims with the weakest support
- Overall reliability assessment of the document

28. Structured Comparison Matrix

code
Build a structured comparison of [THINGS TO COMPARE — products, approaches, 
frameworks, candidates, vendors].

Items to compare:
1. [ITEM 1]
2. [ITEM 2]
3. [ITEM 3]
[ADD MORE AS NEEDED]

Evaluation criteria:
- [CRITERION 1 — e.g., "ease of implementation"]
- [CRITERION 2 — e.g., "total cost of ownership"]
- [CRITERION 3 — e.g., "community and support"]
- [CRITERION 4 — e.g., "fit for our specific use case: DESCRIBE IT"]

For each item × criterion combination:
- Rating: 1–5
- One sentence of evidence supporting the rating
- Source or basis for the claim

Final output:
1. The comparison table
2. A ranked recommendation for [YOUR CONTEXT]
3. The one criterion that would change the ranking if it were weighted higher

If you need information you don't have, use web search before scoring.

29. Hypothesis-Driven Analysis Plan

code
I have a business question and some data. Help me design a rigorous analysis.

Question: [WHAT YOU WANT TO KNOW — e.g., "Why did churn increase in Q1?"]
Data available: [DESCRIBE DATASETS — tables, time range, fields]
Current hypothesis: [YOUR BEST GUESS AT THE ANSWER]
Stakeholder: [WHO NEEDS THE ANSWER AND WHAT THEY'LL DO WITH IT]

Design an analysis plan that includes:
1. The null hypothesis and what would disprove it
2. 3–5 analyses to run (each with: what to measure, how to visualize it, 
   what result would support vs. refute the hypothesis)
3. Potential confounds to control for
4. Data quality checks to run first
5. The decision this analysis should drive and the threshold for acting

If using Code Interpreter: write the Python outline (function names + docstrings, 
no implementation) showing the full analysis pipeline structure.

Productivity Prompts (30–36)

30. Meeting Notes to Structured Action Items

code
Convert these meeting notes into a structured action item document.

Meeting notes:
[PASTE RAW NOTES, TRANSCRIPT, OR VOICE-TO-TEXT OUTPUT — messy is fine]

Meeting type: [e.g., sprint planning, customer call, leadership sync]
Participants by role: [LIST ROLES, NOT NAMES — e.g., "PM, 2 engineers, designer"]

Extract:
1. Decisions made (each as one sentence, with enough context to understand it cold)
2. Action items — format: [ROLE] will [specific action] by [deadline]
   If owner or deadline is unclear from the notes, write [ASSIGN OWNER] or [SET DEADLINE]
3. Open questions not resolved in the meeting
4. Key context for someone who wasn't in the room (3 sentences max)
5. Suggested agenda item for the follow-up meeting

Do not infer owners or deadlines that weren't stated. Flag gaps explicitly.

31. Weekly Review and Planning Session

code
I'm going to paste my raw notes, task list, and calendar from this week. 
Run a structured weekly review and set up next week.

[PASTE EVERYTHING — completed tasks, open tasks, calendar events, notes, 
messages you haven't responded to]

Process:
1. Wins: what actually moved the needle (not just what got checked off)
2. Misses: what didn't happen, and honest reason why — not charitable spin
3. Carry-forward: items to reschedule vs. items to drop permanently
4. Next week's top 3 priorities (the three things that matter most — not the longest list)
5. Energy drain: one thing I should stop doing or delegate
6. Calendar audit: which meetings should I decline or shorten next week and why

Format the output as a clean weekly review document I can save and reference.

32. 30-Day Learning Plan for a Specific Skill

code
Build a 30-day learning plan to help me get to [SKILL LEVEL] with [SKILL].

My current level: [BE HONEST — what you can do now]
Time budget: [HOURS PER DAY OR WEEK]
Learning style: [HOW YOU LEARN BEST — reading, building, video, spaced repetition]
Concrete goal: [WHAT YOU WANT TO BE ABLE TO DO ON DAY 30 — specific and testable]

Plan structure:
- Week 1: foundations (diagnosis of starting point, core concepts, first project)
- Week 2: core skills (the 20% that gives you 80% of capability)
- Week 3: applied practice (build something real with what you've learned)
- Week 4: edge cases and consolidation (fill gaps, prepare the Day 30 project)

For each week:
- 3–4 specific resources (name them — books, courses, docs, channels)
- Daily practice task (15–30 minutes, scaffolded on previous days)
- End-of-week milestone (something I can demonstrate or ship)

Day 30 project: a specific, scoped thing I build to prove the skill is real.

33. Decision Matrix for a High-Stakes Choice

code
Help me make this decision using a structured weighted scoring approach.

Decision: [WHAT YOU'RE DECIDING]
Options:
A. [OPTION A — describe it]
B. [OPTION B — describe it]
C. [OPTION C — if applicable]

Context: [RELEVANT BACKGROUND — what's driving this decision, who's affected]
Hard constraints: [THINGS THAT ELIMINATE OPTIONS AUTOMATICALLY]
What I value most: [RANK YOUR CRITERIA — e.g., "speed > cost > quality > risk"]

Steps:
1. Check hard constraints — eliminate any options that fail
2. Build a weighted scoring matrix: criteria × options, with weights from my ranking
3. Score each option on each criterion (1–5) with one sentence of evidence per cell
4. Calculate weighted totals
5. Identify the sensitivity: which criterion, if weighted differently, would change the winner?
6. Recommendation: the winning option, plus the conditions under which I should reconsider

What's the reversibility of each option? Note which decisions can be undone cheaply.
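Steps 2–4 of the template reduce to a small calculation you can verify by hand. A sketch: the criteria, weights, and scores below are placeholders standing in for whatever GPT-5 produces.

```python
# Weighted scoring matrix: rank-derived weights x 1-5 scores.
# All values are placeholders for illustration.

weights = {"speed": 4, "cost": 3, "quality": 2, "risk": 1}  # from the ranking

scores = {  # option -> criterion -> score (1-5)
    "A": {"speed": 5, "cost": 2, "quality": 4, "risk": 3},
    "B": {"speed": 3, "cost": 5, "quality": 3, "risk": 2},
}

totals = {
    option: sum(weights[c] * s for c, s in crit.items())
    for option, crit in scores.items()
}
winner = max(totals, key=totals.get)
print(totals, "->", winner)
```

Re-running with a different weight ordering is the sensitivity check in step 5: if swapping two weights flips the winner, the decision hinges on that ranking.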

34. Automation Discovery Audit

code
I'm going to describe my recurring work. Identify what should be automated 
and give me a prioritized plan.

My role: [JOB TITLE AND WHAT YOU ACTUALLY SPEND TIME ON]
Recurring tasks (describe each):
[BRAIN DUMP — list every repetitive thing you do, roughly how often, how long it takes]

Tools I already use: [SOFTWARE, SUBSCRIPTIONS, INTERNAL SYSTEMS]
Technical ability: [NON-TECHNICAL / COMFORTABLE WITH ZAPIER/NO-CODE / CAN WRITE SCRIPTS]
Budget: [FREE TOOLS ONLY / UP TO $X/MONTH / FLEXIBLE]

For each task worth automating, provide:
- Task name and current time cost per week
- Automation approach (tool or script)
- Estimated setup time
- Estimated weekly time saved
- Priority score (time saved × ease of setup)

Sort by priority. For the top 3, give step-by-step setup instructions specific 
to the tools I already use.

35. Calendar and Commitment Audit

code
I'm going to paste my calendar for the next two weeks. Help me audit it.

Calendar:
[PASTE CALENDAR EXPORT, SCREENSHOT DESCRIPTION, OR LIST OF EVENTS WITH DURATION]

My top 3 priorities right now: [LIST THEM]

For each event or meeting, classify it as:
- ESSENTIAL (directly moves a top-3 priority)
- USEFUL (valuable but not urgent)
- QUESTIONABLE (unclear value or someone else could attend)
- DECLINE (I shouldn't be here)

Then provide:
1. The 3 meetings I should decline or skip first and what to say
2. Meetings I should shorten (with suggested duration)
3. Blocks of time I should protect for deep work (based on what's available)
4. One commitment I've said yes to that I should renegotiate

Be direct. Don't suggest I keep things out of politeness.

36. Message and Email Triage System

I have a backlog of messages. Help me triage them efficiently.

Messages:
[PASTE EMAIL SUBJECTS AND FIRST LINES, OR SLACK MESSAGE SUMMARIES — 
 you don't need to include full message bodies unless relevant]

My role: [TITLE]
Current priorities: [WHAT YOU'RE FOCUSED ON THIS WEEK]

For each message, assign:
- ACTION NEEDED (requires response or decision — from me specifically)
- DELEGATE (someone else should handle this — suggest who by role)
- READ LATER (useful but not urgent — no reply needed now)
- ARCHIVE (can be ignored safely)

For ACTION NEEDED items:
- Draft a response under 50 words, or suggest the key point to make
- Note the deadline if there is one

Output as a triage table: message identifier | classification | action/response | deadline.

Business & Strategy Prompts (37–43)

37. Strategy Memo for a Key Decision

Write a strategy memo on [DECISION OR STRATEGIC QUESTION].

Context I'm providing:
[PASTE RELEVANT BACKGROUND — data, prior decisions, constraints, context docs]

Audience: [WHO READS THIS — e.g., "founding team, 5 people, technical background"]
Decision needed by: [DATE OR TIMEFRAME]
Options on the table: [LIST THEM]

Memo structure:
## Situation (what's true right now — facts, no spin)
## Complication (what's changed or what problem this creates)
## Options (2–3 alternatives with pros/cons and estimated outcomes)
## Recommendation (your recommended option and the 3 strongest reasons for it)
## Risks (what could go wrong with the recommendation, and mitigations)
## Decision needed (what you need the reader to decide or approve)

One page max. No section longer than 4 sentences. 
Write for someone who will read this once and make a call.

38. Pricing Analysis and Recommendation

Help me set pricing for [PRODUCT/SERVICE].

What it does: [DESCRIPTION]
Target customer: [WHO AND THEIR BUDGET RANGE]
Current price (if any): [AMOUNT OR "not yet priced"]
What competitors charge: [RANGE — or "unknown, please research"]
My cost per unit or per customer: [AMOUNT]
Value delivered: [QUANTIFY — time saved, revenue gained, cost avoided]

Analyze:
1. Cost-plus floor (minimum viable price)
2. Value-based ceiling (what the outcome is worth to the customer)
3. Competitive positioning options: premium, mid-market, challenger
4. Recommended tier structure (if applicable — what goes in each tier)
5. Willingness-to-pay test: one question to ask customers that would reveal 
   whether I'm underpriced or overpriced
6. When to raise prices: the signal I should look for

Show your math. Cite any competitor pricing from web search if relevant.
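The floor and ceiling in points 1–2 reduce to two lines of arithmetic. The numbers below are illustrative, and the 10–30% value-capture range is a common pricing heuristic, not a rule from this template — swap in your own cost, margin target, and value figures.

```python
# Cost-plus floor and value-based ceiling -- illustrative numbers only.
cost_per_customer = 40.0   # monthly cost to serve one customer
target_margin = 0.70       # 70% gross margin target

# Floor: the lowest price that still hits the margin target.
floor = cost_per_customer / (1 - target_margin)

value_delivered = 1000.0   # e.g., monthly revenue the customer gains
capture_rate = 0.20        # heuristic: charge ~10-30% of value delivered

# Ceiling: what the outcome is plausibly worth to the customer.
ceiling = value_delivered * capture_rate

# floor -> ~133.33, ceiling -> 200.0; viable pricing sits between the two
```

Anything below the floor loses money at the target margin; anything above the ceiling asks the customer to hand over more of the value than the heuristic supports.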

39. Job Spec That Attracts the Right Candidate

Write a job description for [ROLE TITLE] at [COMPANY TYPE].

What this person actually does (be honest — not aspirational):
[LIST THE REAL DAILY TASKS AND RESPONSIBILITIES]
What makes someone exceptional at this: [3–5 specific qualities or capabilities]
What this person will own (metrics or outcomes they're responsible for): [LIST]
What we won't consider essential: [NICE-TO-HAVES WE WON'T GATEKEEP ON]
Compensation: [RANGE OR "competitive" if you won't disclose]

Format:
1. Role summary (3 sentences — what it is, who it's for, why it matters)
2. What you'll do (bulleted, specific actions — not "responsibilities include")
3. What we're looking for (required vs. nice-to-have clearly separated)
4. What you won't do (what's out of scope — sets honest expectations)
5. Compensation and benefits
6. How to apply

Tone: direct and specific. Cut anything that sounds like it was written by an 
HR committee. The goal is to attract exactly the right person and screen out everyone else.

40. Quarterly OKR Review and Reset

Help me review last quarter's OKRs and set new ones for [NEXT QUARTER].

Last quarter's OKRs:
[PASTE YOUR OKRs — objectives + key results + final attainment scores]

What happened that wasn't in the plan: [SURPRISES — good and bad]
Strategic priority for next quarter: [ONE SENTENCE — what matters most]
Things we're explicitly deprioritizing: [WHAT'S OFF THE TABLE]

For the review:
1. Score each KR: hit, missed, or blocked (with honest reason)
2. Root cause for each miss (not excuses — what actually broke)
3. One thing to stop measuring because it doesn't drive real progress

For next quarter's OKRs:
- 3 Objectives (qualitative and meaningful — not "improve metrics")
- 3 Key Results per Objective (specific metric, current baseline, target)
- One health metric per Objective (guardrail to prevent gaming)
- One KR that's a stretch (60% confidence of hitting)

41. Customer Journey Map

Build a customer journey map for [PRODUCT/SERVICE].

Customer profile: [WHO THIS JOURNEY BELONGS TO — be specific]
Entry point: [HOW THEY FIRST ENCOUNTER YOUR PRODUCT]
End state: [WHAT SUCCESS LOOKS LIKE FOR THEM]

For each stage (Aware → Consider → Purchase → Onboard → Retain → Expand):
- What the customer is trying to do
- What they're thinking (their internal question or goal at this stage)
- What they're feeling (emotion — frustrated, hopeful, confused, etc.)
- Touchpoints (where they interact with you — channels, content, product)
- Pain points (where they get stuck or drop off)
- Your opportunity (what you could do better at this stage)

Format as a table with one row per stage and one column per dimension.
After the table, identify the two stages with the highest dropout risk 
and recommend one specific improvement for each.

42. Board or Investor Update

Write a [MONTHLY / QUARTERLY] board update for [COMPANY].

Reporting period: [DATE RANGE]
Audience: [BOARD COMPOSITION — e.g., "3 investors, 2 independent directors, all financially fluent"]

Data:
[PASTE YOUR NUMBERS — revenue, users, burn, key metrics, whatever you track]

Key narrative points to hit:
- [THING THAT WENT WELL]
- [THING THAT DIDN'T AND WHAT YOU'RE DOING ABOUT IT]
- [THE DECISION OR RESOURCE YOU NEED FROM THE BOARD]

Structure:
1. Headline metrics (table — metric | this period | last period | plan | variance)
2. Three things that happened (facts + so what)
3. Risks and mitigations (honest, not sanitized)
4. Focus for next period (what you're optimizing for and why)
5. Ask (specific — vote, intro, guidance, approval)

One page when printed. No paragraph should be longer than 4 sentences.
Write for people who track 12 companies and will read this in 4 minutes.

43. Post-Mortem Document

Write a post-mortem for [INCIDENT OR FAILED PROJECT].

What happened: [TIMELINE AND DESCRIPTION — as factual and specific as possible]
Impact: [WHO WAS AFFECTED, WHAT WAS LOST — quantify where possible]
Detection: [HOW AND WHEN DID ANYONE NOTICE]
Resolution: [WHAT WAS DONE TO FIX IT AND HOW LONG IT TOOK]

Post-mortem format:
## Summary (3 sentences — what happened, impact, how it was resolved)
## Timeline (chronological, each entry: time | what happened | who was involved)
## Root Cause Analysis (use the "5 Whys" method — go deep, not just one level)
## Contributing Factors (things that made this worse or more likely)
## What Went Well (honest — what prevented this from being worse)
## Action Items (each: specific action | owner role | deadline | success metric)
## What We're Not Changing (and why — avoids over-engineering the response)

Blameless tone throughout. The goal is to fix systems, not assign fault.

Creative Prompts (44–50)

44. Story Development From a Single Premise

Help me develop a story from this premise into a full narrative structure.

Premise: [ONE SENTENCE — the core idea or "what if"]
Genre: [GENRE AND ANY SUBGENRE]
Intended length: [SHORT STORY / NOVELLA / NOVEL / SCREENPLAY]
Tone: [DESCRIBE — e.g., "tense and psychological, minimal humor, grounded"]
Audience: [WHO THIS IS FOR]

Develop:
1. The core conflict (what's at stake and why the reader should care)
2. Protagonist: goal, wound, flaw, and the lie they believe at the start
3. Antagonist or opposing force: what they want and why they're formidable
4. Three-act structure outline:
   - Act 1: Setup through inciting incident
   - Act 2A: Rising action through midpoint
   - Act 2B: Darkest moment through climax
   - Act 3: Resolution
5. The theme (what the story is actually about, beneath the plot)
6. The first scene: write it in full (500 words) to establish voice and stakes

After the outline, ask me 3 questions that would most improve the story.

45. World-Building System With Internal Logic

Build a detailed world for [GENRE] fiction with consistent internal rules.

Core premise: [THE CENTRAL CONCEIT — what's fundamentally different about this world]
Setting: [TIME PERIOD, LOCATION TYPE, SCALE — city? continent? multiverse?]
Tone: [DARK / HOPEFUL / SATIRICAL / CLINICAL / MYTHIC / etc.]

Develop these systems in detail:
1. Geography: 3–5 locations with distinct character (describe each in 3 sentences)
2. Power structure: who controls what, through what mechanism, and who resents it
3. The rules: what is and isn't possible in this world (and the limits of what's possible)
4. History: 3 events that shaped the current state of the world
5. Factions: 3–4 groups with competing goals — describe each faction's logic, 
   not just their position
6. Daily life: what ordinary people eat, fear, celebrate, and avoid
7. Internal logic check: identify any contradictions in what I've set up

End with 3 questions that would deepen the world if I answered them.
Don't make things generic — every detail should feel specific to this world.

46. Character Design With Depth

Build a fully realized character for [ROLE IN STORY — protagonist, antagonist, 
supporting character].

Story context: [BRIEF PREMISE AND GENRE]
What this character needs to do in the story: [THEIR NARRATIVE FUNCTION]

Develop:
1. External profile (age, appearance, speech patterns — only what's character-revealing)
2. Core wound: the thing that happened that shaped everything since
3. Deepest desire: what they consciously want
4. Hidden need: what they actually need (often different from what they want)
5. Flaw: the thing that works against them and will be tested
6. Voice: write 5 lines of dialogue that could only come from this character
7. Relationship dynamic: how they behave with people who have power over them 
   vs. people who don't
8. Arc possibility: where they start, what tests them, where they could end up

One constraint: the character should be contradictory in a way that makes them human, 
not in a way that makes them inconsistent.

47. Constrained Brainstorming Session

Generate creative ideas for [CHALLENGE] with these constraints. 
The constraints are not obstacles — they're the brief.

Challenge: [WHAT YOU'RE SOLVING OR CREATING]
Hard constraints:
- Budget: [AMOUNT — treat this as fixed, not approximate]
- Timeline: [DEADLINE]
- Resources: [WHAT'S ACTUALLY AVAILABLE]
- Must-include: [NON-NEGOTIABLES]
- Must-avoid: [HARD NOs]

Generate three tiers:
1. Practical (5 ideas — proven approaches that work within constraints, low risk)
2. Creative (5 ideas — unconventional but feasible within constraints)
3. Reframe (3 ideas — that challenge whether the constraint should exist at all, 
   with a note on what it would take to remove it)

For each idea: one-sentence description, the key risk, and the single first step.
Do not pad the list — if you can't find 5 strong ones for a tier, give 3 good ones.
At the end, recommend your top 2 with specific reasoning.

48. Concept Generation for a Product or Campaign

Generate names and concepts for [WHAT YOU'RE NAMING — product, feature, campaign, brand].

Context:
- What it is: [DESCRIPTION]
- Target audience: [WHO]
- Feeling to evoke: [DESCRIBE — e.g., "reliable but not boring", "playful but credible"]
- What to avoid: [NAMES/VIBES/ASSOCIATIONS THAT ARE OFF-LIMITS]
- Competitive names to differentiate from: [LIST COMPETITORS' NAMES]

Generate 20 names. Organize them into 4 groups of 5:
1. Descriptive (names that explain what it does)
2. Evocative (names that evoke a feeling or world)
3. Abstract (invented words or repurposed words)
4. Bold (names that take a stance or make a claim)

For the top 3 overall, include:
- Rationale (why it works for this brief)
- Trademark risk flag (common English words vs. unusual — not legal advice)
- One tagline that pairs naturally with it

49. Meta-Prompt: Improve My Prompt

I want better results from GPT-5 for [TASK TYPE].

My current prompt:
[PASTE YOUR CURRENT PROMPT EXACTLY AS YOU USE IT]

What I'm getting:
[DESCRIBE THE OUTPUT AND WHAT'S WRONG — too long, wrong format, misses the point, etc.]

What I actually want:
[DESCRIBE THE IDEAL OUTPUT AS SPECIFICALLY AS POSSIBLE]

Diagnose and fix:
1. What's wrong with my current prompt (be specific — quote the weak parts)
2. Rewrite the prompt with specific improvements
3. For each change, explain the mechanism: why does this change produce better output?
4. Identify one GPT-5 capability my current prompt isn't using 
   (structured outputs, long context, tool calling) and show me how to leverage it
5. Give me a 5-item checklist I can apply to any prompt I write going forward

Return the improved prompt in a code block, ready to copy-paste.

50. Story-Within-a-Story Narrative Frame

Write the opening of a story that uses a nested narrative frame — 
a story told within a story.

Outer frame: [DESCRIBE THE FRAMING SITUATION — who is telling the story, 
to whom, and why now]
Inner story: [THE STORY BEING TOLD — premise and genre]
Connection: [HOW THE INNER STORY REFLECTS OR COMMENTS ON THE OUTER SITUATION]
Tone of outer frame: [HOW THE NARRATOR FEELS ABOUT THIS STORY]
Tone of inner story: [CAN DIFFER FROM OUTER FRAME]

Write:
- The outer frame opening: establish who is telling this, the setting, 
  and the reason the inner story is being told now (200–300 words)
- The transition line that shifts into the inner story
- The first scene of the inner story (400–500 words) — it should feel complete 
  enough to stand on its own but clearly connected to the frame

After writing, explain in 2 sentences how the inner story illuminates something 
the outer narrator can't say directly.

GPT-5 Power Tips

1. Frontload constraints in the system prompt. Tell GPT-5 who you are, who the output is for, and what's out of bounds before making your request. Context at the top changes every downstream word — context buried in a long prompt often gets deprioritized.

2. Use structured outputs and JSON schema for reliability. When you need predictable output structure — especially for downstream processing — specify the JSON schema explicitly rather than hoping for consistent formatting. GPT-5 honors this strictly, which eliminates post-processing surprises.
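As a concrete sketch, here is a schema for the triage table in prompt #36 and a local check on a parsed reply. The schema dict follows standard JSON Schema conventions; the exact API parameter that carries it (e.g., a `response_format` or structured-outputs field) varies by SDK version, so the call itself is left as a comment — check your client library's docs.

```python
# Structured-output sketch: a JSON Schema for one triaged message.
import json

triage_schema = {
    "type": "object",
    "properties": {
        "classification": {"type": "string",
                           "enum": ["ACTION NEEDED", "DELEGATE", "READ LATER", "ARCHIVE"]},
        "response_draft": {"type": "string"},
        "deadline": {"type": ["string", "null"]},
    },
    "required": ["classification", "response_draft", "deadline"],
}

# The schema would be passed alongside the prompt via your SDK's
# structured-outputs parameter (name varies by library/version).

# With strict schema adherence, the reply parses without defensive code:
raw_reply = '{"classification": "DELEGATE", "response_draft": "Forwarding to ops.", "deadline": null}'
reply = json.loads(raw_reply)
assert reply["classification"] in triage_schema["properties"]["classification"]["enum"]
```

The payoff is downstream: every message lands in one of four known buckets, so the triage table can be built by code instead of by reading prose.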

3. Chain tool calls in one prompt instead of many. GPT-5 can plan and execute multi-step tool sequences — browse, then analyze with Code Interpreter, then return a formatted report — in a single instruction. Splitting this into three prompts is slower and loses context between steps.
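The planner-executor shape of that tip can be illustrated with a toy pipeline: one instruction expands into an ordered plan, and each step consumes the previous step's output. The tool names and bodies below are stand-ins for illustration, not real APIs.

```python
# Toy planner-executor chain: search -> analyze -> format, one instruction.
def web_search(query):
    # Stand-in for a search tool; returns canned snippets.
    return ["Competitor A raised prices 10%", "Competitor B launched a free tier"]

def analyze(snippets):
    # Stand-in for Code Interpreter-style analysis of the search results.
    return {"price_moves": 1, "sample_size": len(snippets)}

def format_report(stats):
    # Stand-in for the final formatting step.
    return f"Analyzed {stats['sample_size']} findings: {stats['price_moves']} price move(s)."

plan = [web_search, analyze, format_report]  # the model's plan for one instruction

result = "competitor pricing news"           # initial input
for step in plan:
    result = step(result)                    # each tool consumes the previous output

# result -> "Analyzed 2 findings: 1 price move(s)."
```

Run as three separate prompts, the analysis step would never see the raw search results and the report step would never see the analysis — the single-instruction chain is what keeps the intermediate context intact.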

4. Leverage 1M context with retrieval-style framing. Paste the full document and ask GPT-5 to "find all instances of X in this doc" or "identify the three most relevant sections to question Y." Don't summarize your documents before sending them — send the full thing and let the model do the extraction.

5. Signal reasoning effort for hard tasks. For analysis, decisions, or anything where you want the model to slow down, add an explicit instruction like "think through this carefully before answering" or "consider at least three alternative interpretations before concluding." GPT-5 responds to these cues by allocating more effort to the response.

6. Trim verbosity explicitly. GPT-5 defaults to thorough over terse. Add "no preamble," "skip the explanation of what you're about to do," and "don't summarize at the end" to any prompt where you want clean output. These three phrases alone cut most of the noise.

Before

Analyze our competitors and tell me how we can position against them.

After

You are a market strategist. Use web search to research these three competitors: [A], [B], [C]. For each, find their current pricing page, main value proposition, and top customer complaint (from review sites). Return a JSON object with keys: competitors (array with name, pricing, positioning, weakness), market_gap (the segment none of them own), and top_move (one specific action I should take in the next 30 days). No preamble — return only the JSON.

Start Using GPT-5 at Its Full Capability

These 50 templates are built around what GPT-5 actually does differently from previous models: longer context, tighter format adherence, native tool orchestration, and structured output support. The gap between a prompt that works with GPT-4o and one that works well with GPT-5 is mostly about specificity and structure — and now you have both.

The AI prompt generator builds prompts like these automatically for any task — describe what you need and get a structured, GPT-5-ready template in seconds. For a categorized library of expert templates, the Template Builder has hundreds of pre-built starting points across every category. If you want to go deeper on prompting strategy across models, read the complete guide to prompting reasoning models in 2026, or compare prompting approaches across frontier models in advanced prompt engineering for Claude, GPT-5, and Gemini.
