Prompts that work on GPT-5 often underperform on Claude Opus 4.7 — not because Opus is worse, but because it has different defaults. It pauses to ask clarifying questions when your intent is ambiguous, reasons more deeply when you give it explicit budget, and responds to XML structure the way other models respond to markdown. These 50 copy-paste templates are built around those differences, not despite them.
Why Opus 4.7 Prompts Look Different
Opus 4.7 is not a drop-in replacement for whatever model you used last week. Five things about the way it processes prompts change what good prompting looks like — and if you ignore them, you pay for a reasoning model and get a fast-draft model.
Extended thinking is opt-in, and it matters. Opus 4.7 has a tunable reasoning budget — a scratchpad where it thinks through hard problems before responding. The model doesn't use it unless you activate it. For anything that rewards step-by-step logic (debugging, architecture, analysis), explicitly cue it: "Think step by step in <thinking> tags before responding." For drafting and summarization, leave it off — the budget burns without improving the output. For a full breakdown of when to use it and when not to, read the Claude Opus 4.7 prompting guide.
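If you call the model through the API rather than a chat UI, extended thinking is also a request parameter, not just a prompt cue. Here's a minimal sketch with the Anthropic Python SDK; the model ID is a placeholder and the token budgets are illustrative, not recommendations:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-7",  # placeholder — use the real model ID from your model list
    max_tokens=16000,         # must exceed the thinking budget
    # Extended thinking is opt-in: allocate a scratchpad budget for hard problems.
    # Omit this parameter entirely for drafting and summarization tasks.
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Debug this deadlock: [PASTE CODE]"}],
)

# The response interleaves thinking blocks with text blocks; print the answer only.
for block in response.content:
    if block.type == "text":
        print(block.text)
```

The templates below use the in-prompt cue instead ("Think step by step in <thinking> tags"), which works in any interface.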
1M context is a ceiling, not a starting point. Opus handles long documents, but performance holds best when you anchor the model to what matters. Place the key question or task near the top of the prompt, then insert the document, then restate the task at the bottom. That sandwich structure keeps the model's attention on your actual request rather than the bulk of the text.
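In code, the sandwich is just string assembly. A minimal sketch — the helper is hypothetical, not part of any SDK, and the tag names match the templates below:

```python
def sandwich_prompt(task: str, document: str) -> str:
    """Hypothetical helper: state the task, insert the document, restate the task."""
    return (
        f"<task>\n{task}\n</task>\n\n"
        f"<document>\n{document}\n</document>\n\n"
        f"<task>\nThe task follows the document. {task}\n</task>"
    )

print(sandwich_prompt("Summarize the three biggest risks.", "[100K-token report]"))
```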
Opus is more cautious by default. It will ask a clarifying question rather than guess when your intent is genuinely ambiguous. For single-turn tasks this can feel like friction — but it's the behavior that prevents expensive wrong answers in long agent runs. Lean into it: explicitly invite clarification ("ask up to 3 questions before starting") or pre-empt it by giving dense context upfront. Don't fight the behavior — route around it.
XML tags are first-class structure. Opus was trained to treat <tags> as semantic separators, not decoration. Wrapping your context in <context>, your instructions in <task>, and your output requirements in <output_format> produces reliably better results than unstructured prose. The prompts in this list use tags wherever the payoff is clear.
Tool-use loops work differently here. Opus is strong at multi-step tool use, but needs the right loop pattern: call tool, observe result, reflect, decide next step. For agentic prompts, the pause-and-reflect cue after each tool call prevents the model from optimistically chaining steps that depend on results it hasn't seen yet. See the AI reasoning models prompting guide for the full agentic pattern.
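For API users, that loop is literal code. A minimal sketch with the Anthropic Python SDK: the message shapes follow the SDK's tool-use interface, while the web_search schema, the run_tool dispatcher, and the model ID are stand-ins you would replace with your own:

```python
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "web_search",  # stand-in tool — supply your own implementation
    "description": "Search the web and return the top results as plain text.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]

def run_tool(name: str, args: dict) -> str:
    # Stand-in dispatcher. Replace with real tool implementations.
    if name == "web_search":
        return "stub result for: " + args["query"]
    raise ValueError(f"unknown tool: {name}")

messages = [{"role": "user", "content": (
    "Research [TOPIC]. After each tool call, reflect in <thinking> tags on "
    "what the result changed before deciding your next step."
)}]

for _ in range(5):  # hard cap on iterations, mirroring the agent templates below
    response = client.messages.create(
        model="claude-opus-4-7",  # placeholder model ID
        max_tokens=2048,
        tools=tools,
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        break  # the model answered directly; the loop is done
    messages.append({"role": "assistant", "content": response.content})
    # Execute each requested tool and return the results so the model can
    # observe and reflect before choosing its next step.
    tool_results = [
        {"type": "tool_result", "tool_use_id": block.id,
         "content": run_tool(block.name, block.input)}
        for block in response.content if block.type == "tool_use"
    ]
    messages.append({"role": "user", "content": tool_results})

print("".join(b.text for b in response.content if b.type == "text"))
```

The reflect instruction lives in the prompt; the loop structure lives in your code. Both halves matter.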
Writing & Editing Prompts (1–8)
1. Long-Form Essay With Extended Thinking
<task>
Write a 2,000-word essay on [TOPIC].
Think step by step in <thinking> tags before writing. Use that
reasoning to identify the strongest argument structure before
committing to a draft.
</task>
<context>
Audience: [WHO WILL READ THIS]
Purpose: [PERSUADE / INFORM / ANALYZE]
My position or angle: [YOUR TAKE, IF ANY]
</context>
<output_format>
- Title
- Hook paragraph (concrete, not abstract)
- 4–5 sections with ## headers
- Each section: clear claim + evidence + implication
- Closing paragraph: one actionable or provocative takeaway
- No filler transitions ("In conclusion", "As we have seen")
</output_format>
2. Ghostwriting With Voice Matching
<context>
I am going to paste 3 samples of my writing. Study them.
Note my vocabulary range, sentence length patterns, how I open
paragraphs, where I use rhetorical questions, and how I handle data.
SAMPLE 1:
[PASTE 300–500 WORDS]
SAMPLE 2:
[PASTE 300–500 WORDS]
SAMPLE 3:
[PASTE 300–500 WORDS]
</context>
<task>
Write a [PIECE TYPE — newsletter, article, essay] on [TOPIC] in
my voice. Length: [TARGET WORD COUNT].
</task>
<rules>
- Do not ask for more samples — infer from what's provided
- If you're uncertain about a stylistic choice, pick the version
  that feels more natural to the samples; don't hedge
- After the piece, add a short note on the 3 strongest voice
signals you matched
</rules>
3. Editorial Review With Hierarchy
<document>
[PASTE YOUR DRAFT HERE]
</document>
<task>
Edit this draft. Work in three passes — substance first, structure
second, language last. Do not fix commas before fixing argument gaps.
</task>
<output_format>
**Pass 1 — Substance**
- Claims that need stronger evidence
- Logical gaps or unsupported leaps
- Anything that contradicts itself
**Pass 2 — Structure**
- Sections in the wrong order
- Ideas that belong together but are split
- Missing transitions (describe, don't fix yet)
**Pass 3 — Language**
- Sentences longer than 30 words that could be split
- Filler phrases to cut
- Word choice improvements (5 max)
Then: provide the revised full draft incorporating all edits.
</output_format>
4. Technical Documentation From Spec
<role>You are a technical writer. You write documentation that
developers use under deadline pressure — clear, scannable, and
immediately actionable.</role>
<context>
[PASTE API SPEC, CODE, OR TECHNICAL DESCRIPTION HERE]
</context>
<task>
Write developer documentation for the above.
Ask up to 3 clarifying questions before starting if anything is
ambiguous about the intended audience or usage context.
</task>
<structure>
1. One-sentence summary of what this does
2. Quick-start example (complete, runnable, no pseudocode)
3. Parameters / options table: name | type | required | default | description
4. Three usage examples (simple → intermediate → edge case)
5. Common errors and what they mean
6. Related references
</structure>
5. Narrative Restructure
<document>
[PASTE DRAFT OR OUTLINE — can be rough or out of order]
</document>
<task>
This narrative is structurally broken. Diagnose and fix it.
Think step by step in <thinking> tags: identify where the reader
loses the thread, where tension deflates too early, and where the
payoff lands wrong.
</task>
<output_format>
1. Diagnosis (3–5 specific structural problems, quoted from the text)
2. Recommended structure (reorder sections, label what moves where and why)
3. Rewritten structure (the full piece in the corrected order)
4. One sentence on what made the original order feel wrong
</output_format>
6. Voice Transfer Across Brands
<source_content>
[PASTE THE ORIGINAL CONTENT]
</source_content>
<target_brand>
Brand name: [NAME]
Voice attributes: [3 ADJECTIVES — e.g., "authoritative, dry, exact"]
Voice NOT: [WHAT TO AVOID — e.g., "casual, punchy, sales-y"]
Audience: [WHO READS THIS BRAND'S CONTENT]
</target_brand>
<task>
Rewrite the source content in the target brand's voice.
Preserve all information. Change how it sounds.
After the rewrite, list the 5 specific choices you made to shift
the voice, and why each one was correct for this brand.
</task>
7. Executive Briefing From Long Source
<document>
[PASTE THE FULL DOCUMENT, REPORT, OR TRANSCRIPT — up to 100K tokens]
</document>
<task>
The task follows the document.
Write a 400-word executive briefing based on the document above.
Intended reader: [ROLE — e.g., CEO with limited time, board member].
</task>
<output_format>
- **Situation** (2–3 sentences: what is happening and why it matters)
- **Key findings** (3–4 bullets, specific — numbers where available)
- **Risks or open questions** (2–3 bullets)
- **Recommended action** (1–2 sentences, unambiguous)
No summaries of process. No hedging language.
If the source doesn't support a clean recommendation, say so directly.
</output_format>
8. Ghost Rewrite With Changes Tracked
<original>
[PASTE ORIGINAL TEXT]
</original>
<task>
Rewrite this for [GOAL — e.g., sharper argument, shorter length,
different audience]. Then show every significant change you made
and explain the reasoning behind it.
</task>
<output_format>
**Rewritten version** (full text, clean)
---
**Change log**
For each material change:
- Original: [quote the original phrase or sentence]
- Changed to: [quote the new version]
- Reason: [one sentence on why this change improves the piece]
Only log changes that affect meaning, tone, or structure — skip
minor word substitutions that don't matter.
</output_format>
Coding & Engineering Prompts (9–15)
9. Full-File Refactor With Reasoning
<context>
Language: [LANGUAGE]
Framework: [FRAMEWORK IF APPLICABLE]
What this file does: [ONE-PARAGRAPH DESCRIPTION]
</context>
<file>
[PASTE COMPLETE FILE CONTENTS]
</file>
<task>
Refactor this file. Think step by step in <thinking> tags before
writing any code. In your thinking, identify:
(a) the top 3 structural problems,
(b) any correctness bugs,
(c) the refactor order (what to fix first and why).
Then provide the complete refactored file.
</task>
<rules>
- Preserve all existing behavior unless a behavior is a bug
- No breaking interface changes without flagging them explicitly
- Add comments only where the "why" is non-obvious
- After the file, list every change made in a numbered change log
</rules>
10. Multi-File Feature Plan
<context>
Codebase: [DESCRIBE ARCHITECTURE — languages, frameworks, key patterns]
Feature request: [WHAT NEEDS TO BE BUILT]
Constraints: [PERFORMANCE, BACKWARD COMPAT, DEPLOYMENT LIMITS]
</context>
<task>
Plan the implementation of this feature across the codebase.
Ask up to 3 clarifying questions before starting if the feature
requirements leave critical decisions open.
</task>
<output_format>
1. Feature summary (one paragraph — what it does, what it touches)
2. File-by-file implementation plan:
- File path (new or existing)
- Change type (create / modify / delete)
- What changes and why
- Dependencies on other files in this plan
3. Suggested implementation order (with rationale)
4. Testing strategy (unit + integration coverage plan)
5. Risks and edge cases I should handle
</output_format>
11. Security + Clarity Code Review
<context>
Language: [LANGUAGE]
What this code does: [PURPOSE]
Environment: [WHERE IT RUNS — browser, server, edge, embedded]
</context>
<code>
[PASTE CODE]
</code>
<task>
Review this code for security vulnerabilities and clarity issues.
Think step by step in <thinking> tags: first pass for security,
second pass for logic correctness, third pass for readability.
</task>
<output_format>
**Security** (P0 first)
For each issue: line(s), vulnerability type, exploit scenario, fix
**Logic & correctness**
For each issue: line(s), problem, correct behavior, fix
**Clarity**
For each issue: line(s), what's confusing, suggested rename or restructure
**Bottom line:** One sentence — the single highest-priority change.
</output_format>
12. Test Design From Requirements
<context>
System under test: [WHAT YOU'RE TESTING]
Language and test framework: [e.g., TypeScript + Jest, Python + pytest]
Existing test coverage: [NONE / PARTIAL — describe gaps]
</context>
<requirements>
[PASTE FEATURE SPEC, USER STORIES, OR ACCEPTANCE CRITERIA]
</requirements>
<task>
Design a complete test suite for these requirements.
Include: unit tests for individual functions, integration tests for
data flow, and edge case tests for boundary conditions.
For each test provide:
- Test name (reads as a specification: "should X when Y")
- Test type (unit / integration / edge)
- What it asserts
- Setup/teardown requirements
- The actual test code
</task>
13. Database Migration Plan
<context>
Current schema: [PASTE DDL OR DESCRIBE TABLES]
Target state: [WHAT NEEDS TO CHANGE AND WHY]
Database: [POSTGRES / MYSQL / SQLITE / etc.]
Production data: [ROUGH ROW COUNTS FOR AFFECTED TABLES]
Downtime tolerance: [ZERO / MAINTENANCE WINDOW / FLEXIBLE]
</context>
<task>
Plan this database migration. Think step by step: identify risks
(data loss, locking, constraint violations), then design the safest
migration sequence.
</task>
<output_format>
1. Risk assessment (what can go wrong, ranked by severity)
2. Migration steps (each step is one reversible operation)
3. Rollback plan for each step
4. Test queries to validate before and after each step
5. Estimated lock duration per step
6. Full migration SQL
</output_format>
14. Debug With Extended Thinking
<context>
Language: [LANGUAGE]
Runtime: [NODE / PYTHON / JVM / BROWSER / etc.]
Expected behavior: [WHAT SHOULD HAPPEN]
Actual behavior: [WHAT ACTUALLY HAPPENS — include error messages verbatim]
</context>
<code>
[PASTE ALL RELEVANT CODE — include imports, dependencies, and
any adjacent functions the bug might interact with]
</code>
<task>
Debug this. Think step by step in <thinking> tags before answering.
In your thinking, trace the execution path, identify where the
actual behavior diverges from expected, and consider at least
two alternative hypotheses before settling on a root cause.
</task>
<output_format>
1. Root cause (specific line(s), specific reason)
2. Why the bug exists (not just what's wrong — why it was written this way)
3. Fix (minimal, correct, with explanation)
4. One defensive improvement to prevent this class of bug from recurring
</output_format>
15. Architecture Decision Record
<context>
System: [WHAT YOU'RE BUILDING OR MODIFYING]
Decision needed: [THE ARCHITECTURAL CHOICE YOU'RE MAKING]
Constraints: [SCALE, TEAM SIZE, TIMELINE, TECH DEBT BUDGET]
Options being considered:
A. [OPTION A — brief description]
B. [OPTION B — brief description]
C. [OPTION C IF APPLICABLE]
</context>
<task>
Write an Architecture Decision Record (ADR) for this decision.
Think step by step before writing — consider second-order effects
and the reversibility of each option.
</task>
<output_format>
**Title:** ADR-[NUMBER]: [Decision in one line]
**Status:** Proposed
**Context:** What situation forces this decision (2–3 sentences)
**Options considered:** For each — summary, pros, cons, unknowns
**Decision:** Which option and the core reason
**Consequences:** What becomes easier, what becomes harder
**Revisit condition:** What would cause us to reopen this decision
</output_format>
Agent & Tool-Use Prompts (16–22)
16. Research Agent Loop
<task>
You are a research agent. Your goal: [RESEARCH QUESTION].
Use the following tools: web_search, document_reader.
After each tool call, pause and reflect in <thinking> tags:
- Did the result answer what I needed?
- What's still missing?
- What's the next most valuable query?
Continue the loop until you have enough to write a complete answer,
or until you've made 5 tool calls — whichever comes first.
</task>
<output_format>
After the research loop:
1. Summary of findings (with source citations)
2. Confidence level: [HIGH / MEDIUM / LOW] and why
3. Gaps: what you could not verify and why it matters
4. Recommended next steps if the gap is significant
</output_format>
17. Calendar Scheduling Agent
<task>
You are a scheduling assistant with access to the calendar tool.
Goal: schedule [MEETING TYPE] for [ATTENDEES — describe roles,
not names] with these constraints:
- Duration: [MINUTES]
- Preferred window: [TIME OF DAY / DAYS OF WEEK]
- Hard constraints: [BLOCKERS, DEADLINES, TIMEZONE DIFFERENCES]
</task>
<agent_rules>
1. Check availability before proposing any time
2. After each calendar call, reflect: does this slot satisfy all
constraints? If not, what needs to change?
3. Propose exactly 3 options (ranked by preference with reasoning)
4. If no slot works within the preferred window, surface the
conflict explicitly rather than silently proposing a time outside the constraints
</agent_rules>
18. SQL Data Agent
<context>
Database: [DATABASE TYPE]
Schema:
[PASTE RELEVANT TABLE DEFINITIONS]
</context>
<task>
Answer this question using the database: [QUESTION]
You have access to the sql_query tool. After each query result,
reflect in <thinking> tags:
- Does this result fully answer the question?
- Are there edge cases (NULLs, duplicates, timezone issues) that
could make this answer wrong?
- Should I run a validation query?
Run no more than 4 queries total.
</task>
<output_format>
1. Final answer (plain English, data-backed)
2. The query that produced the answer
3. One data quality caveat if relevant
</output_format>
19. Document Classifier Agent
<task>
You are a document classification agent. Classify each document
below into exactly one category from this taxonomy:
Categories:
[LIST CATEGORIES WITH ONE-SENTENCE DESCRIPTIONS]
After classifying each document, reflect in <thinking> tags:
- Is there ambiguity between categories?
- If yes, what additional signal would resolve it?
- Is my confidence high enough to commit, or should I flag for review?
</task>
<documents>
DOCUMENT 1:
[PASTE]
DOCUMENT 2:
[PASTE]
[CONTINUE FOR EACH DOCUMENT]
</documents>
<output_format>
For each document:
- Assigned category
- Confidence: HIGH / MEDIUM / LOW
- One-sentence rationale
- If LOW confidence: flag for human review with the specific ambiguity
</output_format>
20. Browse-and-Summarize Loop
<task>
Research [TOPIC] using the web browsing tool.
Target: [SPECIFIC QUESTION OR DECISION YOU NEED TO INFORM]
Scope: focus on sources from [TIME RANGE — e.g., the last 12 months]
Source types to prioritize: [e.g., primary research, official docs,
not opinion pieces]
After each page visit, reflect in <thinking> tags:
- Does this source add new information or repeat what I already have?
- Is this source credible for this topic?
- Am I converging on an answer or still in discovery mode?
Stop after 5 pages or when you have enough to give a confident answer.
</task>
<output_format>
- Answer to the target question (direct, 2–3 paragraphs)
- Sources used (URL + one-sentence description of what each contributed)
- What remains uncertain and why
</output_format>
21. File-Handling Pipeline Agent
<task>
Process these files using the file tool:
Input: [DESCRIBE WHAT THE FILES CONTAIN]
Transformation: [WHAT NEEDS TO HAPPEN — parse, filter, transform, merge]
Output: [WHAT THE FINAL FILE(S) SHOULD CONTAIN AND IN WHAT FORMAT]
After reading each file, reflect in <thinking> tags:
- Does the file match the expected structure?
- Are there anomalies (missing fields, encoding issues, unexpected nulls)?
- Should I adjust the transformation plan based on what I found?
If a file fails validation, log the failure and continue to the next
file rather than stopping the whole pipeline.
</task>
<output_format>
1. Processing summary (files processed, files failed, total records)
2. Anomalies found (file name, issue, action taken)
3. Final output file(s)
</output_format>
22. Escalation Pattern Agent
<task>
You are an autonomous agent working on: [TASK DESCRIPTION].
You have access to: [LIST TOOLS].
Work through the task independently. After each step, assess:
- Am I making progress?
- Is this within my authority to decide?
- Is there a decision that requires human judgment?
Escalation rules — stop and ask the user if:
1. The task requires a decision that could cause irreversible change
2. You've hit the same blocker twice
3. You're more than 3 tool calls in and less than 50% complete
4. The task scope turns out to be significantly larger than expected
Otherwise, proceed autonomously and report when done.
</task>
Research & Analysis Prompts (23–29)
23. Literature Review With Source Grading
<task>
Conduct a structured literature review on [TOPIC].
Think step by step in <thinking> tags before writing:
identify the key sub-questions, note where consensus is strong
vs. contested, and flag where your knowledge may be outdated.
</task>
<output_format>
For each major sub-question:
**[Sub-question]**
- Consensus view: [what most sources agree on]
- Contested area: [where credible disagreement exists]
- Key finding: [the most important specific result]
- Confidence: HIGH (well-established) / MEDIUM (emerging) / LOW (thin evidence)
- Knowledge cutoff flag: [if this area moves fast, say so]
Closing section: Gaps — what the literature does not yet answer
</output_format>
24. Multi-Document Synthesis
<document_1>
[PASTE DOCUMENT 1]
</document_1>
<document_2>
[PASTE DOCUMENT 2]
</document_2>
<document_3>
[PASTE DOCUMENT 3]
</document_3>
<task>
The task follows all documents above.
Synthesize these documents into a unified analysis.
Do not summarize each document separately — find the threads
that run across them and build an argument from the convergence
and divergence.
The synthesis question is: [SPECIFIC QUESTION OR DECISION].
</task>
<output_format>
1. Points of agreement (across all three — with direct quotes)
2. Points of contradiction (with the documents' specific positions)
3. Gaps (what all three documents avoid or leave unresolved)
4. Synthesis answer to the question (your analysis, not just description)
</output_format>
25. Hypothesis Generation
<context>
Domain: [FIELD OR PROBLEM AREA]
Observation: [WHAT YOU'VE NOTICED OR MEASURED]
Known factors: [WHAT YOU ALREADY KNOW ABOUT CAUSES]
</context>
<task>
Generate hypotheses to explain the observation.
Think step by step in <thinking> tags:
first generate at least 5 candidate hypotheses,
then evaluate each for plausibility and testability,
then rank them before presenting your answer.
</task>
<output_format>
**Top 3 hypotheses** (ranked by plausibility):
For each:
- Hypothesis statement (falsifiable, specific)
- Mechanism (why this would produce the observed effect)
- Evidence that would confirm it
- Evidence that would rule it out
- Difficulty to test: LOW / MEDIUM / HIGH
**Rejected hypotheses** (name them + one-sentence reason for rejection)
</output_format>
26. Source-Graded Answer
<sources>
SOURCE A:
[PASTE SOURCE A]
SOURCE B:
[PASTE SOURCE B]
SOURCE C:
[PASTE SOURCE C]
</sources>
<task>
Answer this question using only the sources above: [QUESTION]
Grade each claim in your answer by source strength:
- [A] = supported directly by a source (quote it)
- [I] = inferred from sources but not stated explicitly
- [U] = your analysis — not grounded in the provided sources
Do not use knowledge outside these sources to answer the question.
Flag any part of the question the sources cannot answer.
</task>
27. Contradiction Finder
<document>
[PASTE LONG DOCUMENT, REPORT, OR SET OF MEETING TRANSCRIPTS]
</document>
<task>
The task follows the document.
Find internal contradictions, inconsistencies, and claims that
conflict with each other in the document above.
Think step by step in <thinking> tags:
scan for (a) factual contradictions,
(b) policy or rule contradictions,
(c) implied contradictions (two statements that can't both be true).
</task>
<output_format>
For each contradiction found:
- Contradiction type (factual / policy / implied)
- Claim A: [quote] — location: [section/page]
- Claim B: [quote] — location: [section/page]
- Why they conflict
- How to resolve (which claim to trust, or how to reconcile)
If no contradictions: say so explicitly and note what you checked.
</output_format>
28. Structured Comparison
<task>
Compare [OPTION A] vs [OPTION B] on [SPECIFIC DECISION CONTEXT].
Ask up to 3 clarifying questions before starting if the
comparison criteria are unclear or if you need more context
to make the comparison meaningful.
</task>
<output_format>
**Comparison matrix**
| Criterion | Weight | [Option A] | [Option B] | Winner |
[Fill in 6–8 criteria relevant to the decision context]
**Narrative analysis** (2–3 paragraphs)
- Where the matrix understates a real difference
- Where the matrix overstates a nominal difference
- Non-quantifiable factors that matter
**Recommendation**: [Option A / Option B / It depends on X]
If "it depends": give the specific condition that tips the decision.
</output_format>
29. Evidence Reconstruction
<conclusion>
[STATE A CONCLUSION OR CLAIM YOU WANT TO PRESSURE-TEST]
</conclusion>
<task>
Work backward from this conclusion.
Think step by step in <thinking> tags:
what evidence would have to be true for this conclusion to hold?
Which pieces of that evidence are verifiable?
Which are assumed?
Then: steelman the conclusion, then attack it.
Steelman first so the attack is against the best version.
</task>
<output_format>
**Evidence map**
Required assumptions (ranked: most load-bearing first):
- [Assumption] — verifiable / assumed — strength: HIGH/MED/LOW
**Steelman**: The strongest version of this conclusion
**Attack**: The most credible challenge to the conclusion
**Verdict**: Does the conclusion hold, fall, or need qualification?
If qualification needed: state exactly what would need to change.
</output_format>
Business & Strategy Prompts (30–36)
30. Strategy Memo
<context>
Company stage: [STAGE — e.g., Series B, 80 employees]
Strategic decision: [WHAT IS BEING DECIDED]
Relevant background:
[PASTE RELEVANT DATA, MARKET CONTEXT, OR PRIOR DECISIONS]
</context>
<task>
Write a strategy memo that recommends a course of action.
Think step by step in <thinking> tags before writing:
examine the assumptions behind each option, second-order effects,
and what you'd need to be wrong about for the recommendation to fail.
</task>
<output_format>
**To:** [ROLE]
**From:** [ROLE]
**Re:** [DECISION]
**Situation** (2–3 sentences: the problem and why it matters now)
**Options** (2–3, with honest tradeoffs for each)
**Recommendation** (specific, with a clear rationale)
**Key risks** (2–3, with mitigation for each)
**Decision needed by:** [DATE — and why the timing matters]
</output_format>
31. Board Update
<context>
Quarter: [QUARTER AND YEAR]
Company: [DESCRIPTION]
Last quarter's commitments: [WHAT YOU SAID YOU'D DO]
Results this quarter:
[PASTE RAW DATA — METRICS, HIGHLIGHTS, MISSES]
</context>
<task>
Write the board update narrative. Do not spin misses —
describe them accurately and explain what's changing.
</task>
<output_format>
**Headline** (one sentence: the quarter in plain terms)
**Performance vs. commitments** (table: metric | committed | actual | delta | note)
**What went well** (2–3 bullets — specific, quantified)
**What didn't** (2–3 bullets — honest assessment, no hedging)
**What we're changing** (for each miss: specific action + owner + timeline)
**Forward look** (next quarter's 3 most important targets)
</output_format>
32. Post-Mortem
<context>
Incident or failure: [WHAT HAPPENED]
Timeline: [WHEN IT STARTED, WHEN IT WAS RESOLVED]
Impact: [WHO WAS AFFECTED, HOW SEVERELY]
Systems or processes involved: [RELEVANT CONTEXT]
</context>
<task>
Write a post-mortem. Think step by step before writing:
trace the causal chain all the way back to the root, not the
proximate cause. Resist the urge to stop at "human error."
</task>
<output_format>
**Summary** (3 sentences: what happened, impact, resolution)
**Timeline** (chronological, specific times, factual)
**Root cause analysis**
- Proximate cause (immediate trigger)
- Contributing factors (conditions that made it possible)
- Root cause (what would need to change to prevent recurrence)
**5 Whys** (trace from symptom to root)
**Action items** (owner | action | deadline | type: prevent / detect / respond)
**What we'd do differently** (honest, forward-facing)
</output_format>
33. Hiring Spec
<context>
Role: [JOB TITLE]
Team: [WHAT THE TEAM DOES]
Why we're hiring: [SPECIFIC GAP OR GROWTH NEED]
What this person will own in 90 days: [CONCRETE DELIVERABLES]
What success looks like in 12 months: [MEASURABLE OUTCOMES]
</context>
<task>
Write a hiring specification, not a job description.
The difference: a spec is about what the person will do and
what we need them to be — not a laundry list of requirements.
Ask up to 3 clarifying questions before starting if the
role scope or reporting structure is unclear.
</task>
<output_format>
**Role in one sentence** (what they own, not job title boilerplate)
**Why this role, why now** (internal context, honest)
**What you'll do** (5–7 bullet points, outcomes not tasks)
**What we need** (must-have: 3–5 | nice-to-have: 2–3 — no padding)
**What you won't find here** (honest about limitations or challenges)
**How we'll evaluate** (the actual interview process and what each step tests)
</output_format>
34. OKR Drafting With Pressure Testing
<context>
Team: [TEAM NAME AND FUNCTION]
Quarter: [Q AND YEAR]
Company priority this quarter: [ONE SENTENCE]
Last quarter: [TOP WIN / TOP MISS — one each]
</context>
<task>
Draft 3 Objectives, each with 3 Key Results.
Then pressure-test each KR: could we hit this number while the
actual goal fails? If yes, revise it.
</task>
<output_format>
For each Objective:
**O[N]: [Objective statement — qualitative, inspiring]**
- KR1: [metric | baseline | target | owner]
- KR2: [metric | baseline | target | owner]
- KR3: [metric | baseline | target | owner]
- Health metric: [one guardrail to prevent gaming]
- Pressure test: [where this could be gamed — and the revised version if needed]
</output_format>
35. Customer Journey Map
<context>
Product: [WHAT YOU SELL]
Customer type: [WHO THIS JOURNEY IS FOR — be specific about the segment]
Entry point: [HOW THEY FIRST ENCOUNTER YOU]
End goal: [WHAT SUCCESS LOOKS LIKE FOR THE CUSTOMER]
</context>
<task>
Map the customer journey from first awareness to retention
or churn. Think step by step: trace what the customer thinks,
feels, and does at each stage — not what you hope they do.
</task>
<output_format>
For each stage: [Awareness → Consideration → Decision → Onboarding → Value → Retention / Churn]
- What the customer is trying to do
- Emotional state (frustrated / curious / hopeful / skeptical)
- Your touchpoints (what they see/experience from you)
- Friction points (what makes them slow down or drop off)
- Opportunity (one thing you could change to improve this stage)
</output_format>
36. Decision Tree Under Uncertainty
<decision>
[DESCRIBE THE DECISION AND WHAT MAKES IT UNCERTAIN]
</decision>
<context>
Key unknowns: [WHAT YOU DON'T KNOW THAT WOULD CHANGE THE ANSWER]
Constraints: [BUDGET, TIME, REVERSIBILITY]
Stakeholders: [WHO IS AFFECTED AND HOW]
</context>
<task>
Build a decision tree for this situation.
Think step by step in <thinking> tags:
identify the critical uncertainties first, then the decision
branches, then the likely outcomes and their relative weights.
Avoid false precision — use qualitative confidence levels.
</task>
<output_format>
**Decision tree** (text-based, branching format)
**Key decision nodes** (the forks that matter most)
**Recommendation under each scenario**:
- If [uncertainty A] resolves favorably: do X
- If [uncertainty A] resolves unfavorably: do Y
- If [uncertainty B] is the dominant risk: do Z
**First action** (what to do right now, regardless of how uncertainty resolves)
</output_format>
Personal Productivity Prompts (37–43)
37. Weekly Review
<raw_input>
[PASTE YOUR WEEK: TASK LIST, CALENDAR NOTES, EMAIL HIGHLIGHTS,
BRAIN DUMP — messy is fine]
</raw_input>
<task>
Run my weekly review using this input.
Don't soften the misses. If I said I'd do something and didn't,
call it a miss, not a "carry-forward."
</task>
<output_format>
**Wins** (3–5 bullets — specific accomplishments that mattered)
**Misses** (3–5 bullets — what didn't happen and the honest reason)
**Lessons** (2–3 bullets — what I'd do differently)
**Next week's top 3** (the highest-leverage things, not the busiest)
**Energy audit** (one thing to drop, one thing to protect)
**Calendar surgery** (2–3 meetings I should decline or shorten)
</output_format>
38. Learning Plan With Bloom's Taxonomy
<task>
Create a learning plan for [SKILL OR SUBJECT].
Structure it using Bloom's Taxonomy — move from recall to
application to synthesis, in that order. Don't start with
synthesis tasks; don't waste time on recall after I've demonstrated
understanding.
</task>
<context>
Current level: [BEGINNER / SOME EXPERIENCE / INTERMEDIATE]
Time available: [HOURS PER WEEK]
Target: [WHAT I WANT TO BE ABLE TO DO — make it specific and testable]
Deadline: [DATE OR TIMEFRAME]
</context>
<output_format>
**Phase 1 — Recall & Comprehension** (Week 1–[N])
- Daily activity (15–30 min): [specific task]
- Resources: [specific titles or types]
- Checkpoint: [what I can demonstrate at the end of this phase]
**Phase 2 — Application** (Week [N]–[M])
[same structure]
**Phase 3 — Analysis & Synthesis** (Week [M]–end)
[same structure]
**Final project** (proves mastery — specific, observable)
</output_format>
39. Decision Matrix
<decision>
[DESCRIBE WHAT YOU'RE DECIDING]
</decision>
<options>
Option A: [DESCRIBE]
Option B: [DESCRIBE]
Option C: [DESCRIBE IF APPLICABLE]
</options>
<task>
Help me make this decision using a weighted decision matrix.
Ask up to 3 clarifying questions before starting — particularly
about what criteria matter most and any hard constraints.
</task>
<output_format>
**Criteria selection** (why these 5–7 criteria, given my situation)
**Weighted matrix**
| Criterion | Weight | A | B | C | Notes |
[Fill in with 1–10 scores per criterion, weights summing to 1.0]
**Weighted totals**
**Qualitative overlay** (what the matrix can't capture — and whether it changes the answer)
**Recommendation**: [specific choice] because [one-sentence reason]
</output_format>
40. Message Triage
<inbox>
[PASTE EMAIL THREADS, SLACK MESSAGES, OR MEETING REQUESTS —
raw and unedited]
</inbox>
<task>
Triage this inbox. Sort by: respond today / respond this week /
delegate / archive / no response needed.
For each item in "respond today" and "respond this week,"
draft the response.
</task>
<rules>
- Responses under 100 words unless the thread requires more
- No "Thanks for reaching out" openers
- If a response requires a decision I haven't made, flag the
decision needed rather than drafting around it
- If something can be handled by someone else, say who
</rules>
41. Life Audit
<context>
I'm going to describe where I am across these life domains.
Be direct — this is a diagnostic, not therapy.
Career: [CURRENT STATE AND FEELING]
Finances: [CURRENT STATE AND FEELING]
Health: [CURRENT STATE AND FEELING]
Relationships: [CURRENT STATE AND FEELING]
Learning/Growth: [CURRENT STATE AND FEELING]
Autonomy/Freedom: [CURRENT STATE AND FEELING]
</context>
<task>
Audit this honestly.
Think step by step in <thinking> tags:
identify which domains are genuinely fine, which have
surface problems masking deeper ones, and which I'm avoiding.
</task>
<output_format>
For each domain: [Green / Yellow / Red] + one-sentence honest assessment
**Highest-leverage intervention** (the single change that affects the most domains)
**What I'm probably avoiding** (the thing I didn't mention but you can infer)
**First concrete action** (specific, completable in the next 48 hours)
</output_format>
42. Calendar Surgery
<calendar>
[PASTE OR DESCRIBE YOUR CURRENT WEEK'S SCHEDULE —
meetings, commitments, recurring blocks]
</calendar>
<task>
Audit this calendar against a single filter:
does this event move my most important goal forward, or not?
Be ruthless. Most calendars are other people's priorities wearing
the disguise of commitment.
</task>
<output_format>
**Keep** (clear why it moves the needle)
**Shorten** (the event is necessary but the time allocation isn't)
**Decline or delegate** (someone else should own this — or no one should)
**Protect** (time that's currently unblocked but should be guarded)
For each "decline" recommendation: draft the message to send.
</output_format>
43. Explain Me to My Future Self
<context>
I am going to paste a decision I made, a project I worked on,
or a period I lived through. I want to understand it better —
not justify it, understand it.
[PASTE THE SITUATION, DECISION, OR PERIOD — be specific]
</context>
<task>
Write a letter from my present self to my future self explaining
this period, the decisions made, and what they reveal about
priorities and values — stated and unstated.
Think step by step in <thinking> tags:
identify what the facts reveal that the narrative doesn't,
what patterns appear across the decisions, and what I probably
didn't say but clearly valued.
</task>
<rules>
- Not a justification letter — an honest letter
- Name the tradeoffs that were made, even the uncomfortable ones
- End with one question my future self should revisit
</rules>
Creative & Conceptual Prompts (44–50)
44. Story Development With Extended Thinking
<context>
Genre: [GENRE]
Core premise: [ONE SENTENCE — the irreducible idea]
Protagonist: [WHO, WHAT THEY WANT, WHAT'S IN THEIR WAY]
Tone: [E.G., LITERARY, COMMERCIAL, DARK, HOPEFUL]
Target length: [SHORT STORY / NOVELLA / NOVEL]
</context>
<task>
Develop the story structure. Think step by step in <thinking> tags
before presenting anything to me. In your thinking:
- Test at least 3 structural approaches (three-act, hero's journey,
five-act, kishōtenketsu, non-linear)
- Identify what the premise demands structurally
- Pick the approach and explain why it serves this story specifically
</task>
<output_format>
1. Structural choice and rationale
2. Scene-by-scene breakdown (high level — 20–30 scenes)
3. The central tension and its resolution (no vague resolutions)
4. Character arc: where do they start, what breaks them open, where do they land?
5. Three scenes you need to write first (the load-bearing ones)
6. What this story is actually about (the theme under the plot)
</output_format>
45. World-Building
<premise>
[ONE SENTENCE: WHAT MAKES THIS WORLD DIFFERENT FROM OURS]
</premise>
<task>
Build the world from that premise outward.
Ask up to 3 clarifying questions before starting — particularly
about genre, intended audience, and how far from reality
this world should drift.
Apply the premise consistently. Every element of the world
should be shaped by it — not just the obvious ones.
</task>
<output_format>
1. Physical world (geography, climate, how the premise changed them)
2. Social structure (power, class, culture — shaped by the premise)
3. Economy (what's scarce, what's abundant, what that creates)
4. Rules of the world (3–5 specific rules — the premise's logical extensions)
5. History (2–3 key events that established the current state)
6. Sensory details (what it smells, sounds, looks like — make it tactile)
7. Internal consistency check (2–3 tensions or contradictions to resolve)
</output_format>
46. Character Study
<character_brief>
Name: [NAME]
Role in story: [PROTAGONIST / ANTAGONIST / SUPPORTING]
What they want (external): [THE GOAL THEY'D STATE IF ASKED]
What they want (internal): [THE NEED THEY CAN'T ARTICULATE]
What's in their way (external): [THE VISIBLE OBSTACLE]
What's in their way (internal): [THE WOUND OR FLAW]
World context: [BRIEF DESCRIPTION OF THEIR WORLD]
</character_brief>
<task>
Build a full character study. Think step by step:
find the contradiction at the core of this person —
the place where their external and internal wants conflict.
That contradiction is the character.
</task>
<output_format>
1. Core contradiction (one sentence — this is the engine)
2. Backstory that explains the wound (specific, not generic)
3. Behavioral patterns (how the wound shows up in behavior)
4. Voice (how they speak — 3 sample lines of dialogue)
5. Arc (what they need to confront and what changes — or doesn't)
6. The scene that reveals them most fully (describe it)
7. What they'd never say (and why)
</output_format>
47. Concept Generation With Constraints
<task>
Generate concepts for [PRODUCT / CAMPAIGN / EXPERIENCE / DESIGN].
Constraints:
- Must: [NON-NEGOTIABLES]
- Must not: [HARD EXCLUSIONS]
- Budget/scope: [RANGE]
- Audience: [WHO]
Think step by step in <thinking> tags:
generate 10 concepts before filtering. Eliminate the ones that
don't survive first contact with the constraints.
Present only the survivors — ranked.
</task>
<output_format>
**Concept [N]: [Name]**
- What it is (2–3 sentences)
- Why it fits the constraints
- The core risk
- First test to validate it
Present 3–5 concepts. For the top concept, provide a
next-step execution sketch.
</output_format>
48. Naming With Constraints
<context>
What we're naming: [PRODUCT / COMPANY / FEATURE / CAMPAIGN]
What it does or is: [DESCRIPTION]
Target audience: [WHO]
Desired feeling: [WHAT THE NAME SHOULD EVOKE]
Constraints:
- Must work in: [LANGUAGES / MARKETS]
- Must not sound like: [COMPETITORS OR CATEGORIES TO AVOID]
- Length: [SYLLABLE OR CHARACTER LIMIT IF ANY]
- Domain / trademark: [WHETHER THIS MATTERS]
</context>
<task>
Generate 20 candidate names. Think step by step in <thinking> tags:
consider different naming strategies (descriptive, coined,
metaphorical, abstract, founder-based) before generating.
Apply constraints as filters only after generating —
not during ideation.
Then evaluate and rank the top 5.
</task>
<output_format>
**Top 5 names** (ranked):
For each:
- Name
- Pronunciation guide if non-obvious
- Why it works for this context
- The one risk (trademark, pronunciation, cultural, connotation)
- Domain availability check recommendation
</output_format>
49. Brainstorming With Antagonist
<idea>
[DESCRIBE YOUR IDEA, PLAN, OR PROPOSAL IN FULL]
</idea>
<task>
Run a structured brainstorm on this idea.
Round 1: Generate 10 ways this idea could be made stronger.
Round 2: Generate 10 ways this idea could fail.
Round 3: For each failure mode in Round 2, generate one design
change that prevents it.
Think in <thinking> tags before each round — don't blend the
rounds. Strengthening and attacking require different mental modes.
</task>
<output_format>
**Round 1 — Strengths to amplify** (10 specific suggestions)
**Round 2 — Failure modes** (10 specific, plausible ways this fails)
**Round 3 — Failure mitigations** (paired with Round 2 items)
**Net assessment**:
Is the idea fundamentally sound? What's the one change that
matters most?
</output_format>
50. Prompt Engineering Meta-Prompt
<current_prompt>
[PASTE YOUR EXISTING PROMPT]
</current_prompt>
<task>
Improve this prompt for Claude Opus 4.7 specifically.
Think step by step in <thinking> tags before rewriting:
- What is this prompt trying to accomplish?
- Where is it unclear, underspecified, or missing context?
- Which Opus 4.7 strengths is it failing to leverage
(extended thinking, XML structure, clarification behavior,
long context)?
- What output format would make the response more useful?
</task>
<output_format>
1. **Diagnosis** (3–5 specific problems with the current prompt)
2. **Rewritten prompt** (complete, ready to paste)
3. **Change log** (each change + why it improves performance on Opus 4.7)
4. **One alternative approach** (a structurally different prompt
for the same task — in case the use case needs a different mode)
</output_format>
Opus 4.7 Power Tips
Use XML tags for every structural boundary. Wrap context in <context>, instructions in <task>, and output requirements in <output_format>. Opus treats tags as semantic separators — this is not decoration. Unstructured walls of text produce unstructured walls of output.
Explicitly cue extended thinking for hard problems. Add "Think step by step in <thinking> tags before responding" when the task involves debugging, architectural tradeoffs, analysis, or anything where the first plausible answer is often wrong. Leave it off for drafting, summarization, and classification — the budget burns without improving those.
Invite clarification for ambiguous tasks. Add "Ask up to 3 clarifying questions before starting if anything is unclear." Opus's instinct to ask before assuming is a feature — it catches ambiguity that would otherwise produce expensive wrong work. For single-turn tasks with full context, this isn't needed; for agentic or long-running tasks, it is.
Anchor questions before and after long documents. When passing a large document, place the task question before the document AND restate it after. Opus handles long context well, but task proximity to the document boundary strengthens attention on what you actually need.
Use the pause-and-reflect pattern in tool loops. After each tool call in an agentic prompt, instruct: "reflect in <thinking> tags — did this result answer what I needed? What's the next step?" This prevents optimistic chaining where the model proceeds on results it hasn't yet verified.
Override the verbosity default explicitly. Opus defaults to thorough. If you want concise, say "no preamble — output directly" or specify a word count. If you want dense and comprehensive, you get that for free; if you want lean, you have to ask for it.
Here's the difference in practice. A typical unstructured prompt:

Review this code and tell me if there are any problems.

The same request, structured for Opus 4.7:

<context>
Language: TypeScript
What this code does: user authentication middleware
</context>

<code>
[PASTE CODE]
</code>

<task>
Think step by step in <thinking> tags: first pass for security vulnerabilities, second pass for logic correctness, third pass for readability. For each issue: quote the line, name the problem, show the fix.
</task>
Build Prompts Like These Automatically
These 50 templates share one thing: they give Opus 4.7 what it needs to do its best work — structured context, explicit output format, and the right reasoning cues for the task. The difference between a prompt that produces decent output and one that produces output you can actually use is almost always in those three elements.
The AI prompt generator builds structured prompts like these automatically — describe what you need, get a ready-to-paste prompt optimized for Claude. The Claude prompt generator targets Anthropic's model specifically. If you want to go deeper on Opus 4.7's specific capabilities before building prompts, the Opus 4.7 prompting guide covers extended thinking, 1M context, and tool-use patterns in full. For more copy-paste templates across Claude's full range, see 50 best Claude prompts for 2026.