
Claude 4 Prompting Guide: Adaptive Thinking, Extended Context, and Best Practices

Master Claude prompting with practical techniques for system prompts, XML formatting, extended thinking, and long-context workflows.

SurePrompts Team
April 13, 2026
17 min read

TL;DR

Claude rewards structured prompts with XML tags, detailed system instructions, and explicit formatting requests. This guide covers every technique that matters.

You paste a 50-page document into Claude, ask a question, and get an answer that actually references page 37. You give it a twelve-part instruction and every single constraint is honored. That is Claude at its best — but only if you know how to prompt it.

Claude is built differently from other large language models. Its architecture prioritizes instruction following, honest reasoning, and careful handling of long inputs. This guide covers the specific techniques that unlock those strengths — not generic prompting advice, but Claude-specific strategies tested across real workflows.

What Makes Claude Distinct

Before diving into the techniques, it helps to understand Claude's design philosophy, because it changes how you approach every prompt.

Instruction Fidelity

Claude is unusually good at following detailed, multi-constraint instructions. Where other models might drop requirement #8 from a ten-part prompt, Claude tends to honor all of them. This has a practical implication: investing time in a detailed prompt pays off more with Claude than with models that approximate your intent.

If you tell Claude to "write exactly 5 bullet points, each under 20 words, using only active voice, with no jargon," you will typically get exactly that. This precision makes Claude particularly effective for tasks where the output format matters — legal documents, technical specifications, structured data extraction.

Long-Context Handling

Claude supports a large context window — capable of processing the equivalent of hundreds of pages of text in a single conversation. This is not just a specification number. Claude actively uses information from throughout the context window, which means you can paste entire codebases, research papers, or book manuscripts and ask questions that reference specific sections.

The practical impact: you do not need to summarize or chunk your documents before giving them to Claude. Paste the full source material and let Claude work with it directly.

Reasoning and Analysis

Claude's reasoning capabilities are strong across both everyday logic and technical domains. It handles multi-step analysis, identifies edge cases, and can hold multiple perspectives simultaneously. For coding tasks specifically, Claude consistently produces well-structured, idiomatic code with appropriate error handling.

Honesty About Uncertainty

Claude is more likely than other models to say "I'm not sure about this" or "I don't have enough information to answer that definitively." This is a feature, not a limitation. It makes Claude more trustworthy for research and analysis. When you need Claude to commit to a position despite uncertainty, you can prompt it explicitly: "Give me your best assessment even if you're not fully certain."

System Prompts: Your Most Powerful Tool

System prompts (also called system instructions) set persistent context for an entire conversation. They are the single highest-leverage element in Claude prompting.

What Goes in a System Prompt

A good system prompt defines three things:

  • Identity and expertise — Who Claude should be for this conversation
  • Behavioral rules — What to always do and never do
  • Output expectations — Format, tone, length defaults

Here is an example for a code review assistant:

code
You are a senior software engineer conducting code reviews. Your priorities:

1. Security vulnerabilities (flag immediately)
2. Logic errors and edge cases
3. Code readability and maintainability
4. Performance (only when it matters)

Rules:
- Be direct. No filler phrases like "Great question!" or "That's a good approach!"
- When you find an issue, show the fix, not just the problem
- If code is fine, say so briefly. Don't manufacture feedback
- Use the same programming language as the code being reviewed
- Flag assumptions you're making about the codebase

Default response format: bullet points grouped by severity (critical → minor).

System Prompt Best Practices

Be specific about what to avoid. Claude's defaults are generally helpful, but if there are patterns you do not want — like excessive hedging, emoji usage, or overly formal language — state that in the system prompt. Claude respects "don't" instructions well.

Set the output format once. Rather than specifying format in every message, put it in the system prompt: "Always respond with a brief summary first, followed by detailed analysis. Use markdown headers for sections longer than 3 paragraphs."

Include examples of ideal output. If you want a specific format or tone, showing one example in the system prompt is more effective than describing it abstractly.

Keep it under 500 words. System prompts that are too long can dilute the most important instructions. Put the critical rules first — Claude gives more weight to instructions that appear early.
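If you generate system prompts from configuration rather than writing them by hand, the three components above can be assembled programmatically, with the critical rules placed first. This is a minimal Python sketch; the function name and layout are our own convention, not part of any Anthropic API:

```python
# Illustrative sketch: compose a system prompt from the three components
# described above (identity, behavioral rules, output expectations).
# Rules are emitted before the format line so the most important
# instructions appear early.

def build_system_prompt(identity: str, rules: list[str], output_format: str) -> str:
    """Compose a system prompt with the critical rules first."""
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        f"{identity}\n\n"
        f"Rules:\n{rule_lines}\n\n"
        f"Default response format: {output_format}"
    )

prompt = build_system_prompt(
    identity="You are a senior software engineer conducting code reviews.",
    rules=["Be direct. No filler phrases.", "Show the fix, not just the problem."],
    output_format="bullet points grouped by severity (critical -> minor)",
)
```

The resulting string is what you pass as the system prompt; nothing about the helper itself is visible to the model.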

XML Tag Formatting

Claude was trained to handle structured markup, and XML tags are one of the most effective ways to organize complex prompts. Tags create clear boundaries between different parts of your input, reducing ambiguity.

When to Use XML Tags

Use XML tags when your prompt contains:

  • Separate sections that serve different purposes (context vs. instructions vs. examples)
  • Source material that should be treated as data, not as instructions
  • Multiple examples that need clear boundaries
  • Complex instructions where you want Claude to distinguish meta-instructions from content

Basic Tag Usage

xml
<context>
You are helping a marketing team at a B2B SaaS company that sells
project management software. The target audience is team leads
at companies with 50-200 employees. Current pain point: teams
are using 3-4 different tools and want consolidation.
</context>

<task>
Write three email subject lines for a product launch announcement.
Each should be under 60 characters and emphasize the "one tool
replaces many" angle.
</task>

<constraints>
- No exclamation marks
- No "revolutionary" or "game-changing" — use concrete language
- Include the product name "FlowDesk" in at least two of three
</constraints>
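If you build prompts in code, a small helper keeps the tag structure consistent. A minimal sketch (the helper is illustrative; Claude only sees the resulting plain text):

```python
# Illustrative helper for building XML-tagged prompt sections.
# Attributes are optional keyword arguments, e.g. title="Q1 Report".

def tag(name: str, content: str, **attrs: str) -> str:
    """Wrap content in an XML-style tag, optionally with attributes."""
    attr_str = "".join(f' {k}="{v}"' for k, v in attrs.items())
    return f"<{name}{attr_str}>\n{content}\n</{name}>"

prompt = "\n\n".join([
    tag("context", "B2B SaaS company selling project management software."),
    tag("task", "Write three email subject lines for a product launch."),
    tag("constraints", "- No exclamation marks"),
])
```

The same helper covers the document-analysis pattern later in this guide, since `tag("document", text, title="Q1 Report")` produces the attributed form.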

Nesting Tags for Complex Prompts

For prompts with multiple examples or multi-part instructions, nesting tags keeps everything organized:

xml
<examples>
  <example type="good">
    Subject: FlowDesk replaces your 4 project tools with 1
    Why it works: Specific number, concrete benefit, product name included
  </example>
  <example type="bad">
    Subject: Introducing the Revolutionary New Way to Manage Projects!!!
    Why it fails: Vague, hyperbolic, no product name, excessive punctuation
  </example>
</examples>

<task>
Using the examples above as guidance, write three subject lines
that follow the "good" pattern and avoid the "bad" pattern.
</task>

Tags for Document Analysis

When pasting long documents for Claude to analyze, wrap them in tags so Claude knows where the source material ends and your questions begin:

xml
<document>
[paste your 30-page report here]
</document>

<questions>
1. What are the three strongest arguments in this report?
2. Which claims lack sufficient supporting evidence?
3. Summarize the methodology section in 100 words.
</questions>

This is especially important with long inputs. Without tags, Claude might not be sure where the document ends and where your instructions begin, particularly if the document itself contains directive language.

Extended Thinking

Claude has extended thinking capabilities — the ability to reason through complex problems step by step before producing a final answer. This is particularly useful for tasks that require multi-step logic, code debugging, mathematical reasoning, or weighing tradeoffs.

When Extended Thinking Helps

  • Complex code debugging: Finding logic errors across multiple interacting functions
  • Multi-step analysis: Working through a business problem with many variables
  • Mathematical reasoning: Problems requiring multiple calculation steps
  • Decision frameworks: Evaluating options against weighted criteria
  • Edge case identification: Systematically thinking through what could go wrong

How to Prompt for Better Reasoning

You do not always need to explicitly request extended thinking. For complex problems, Claude will often engage deeper reasoning automatically. However, you can encourage more thorough analysis:

code
Analyze this database schema for potential performance issues
at scale (10M+ rows). Think through each table and relationship
carefully before giving your assessment. Consider:
- Query patterns that would be slow
- Missing indexes
- N+1 query risks
- Denormalization opportunities

Phrases like "think through carefully" or "consider each X before answering" signal that you want thorough reasoning, not a quick surface-level answer.


When to Skip Deep Reasoning

Not every task benefits from extended reasoning. Simple factual lookups, straightforward formatting tasks, or creative writing prompts do not need it. Asking Claude to "think step by step" about writing a haiku wastes tokens without improving the output.

Use extended thinking for: Code review, debugging, analysis, planning, complex reasoning.

Skip it for: Simple Q&A, creative writing, formatting, translations.
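If you route tasks through Claude programmatically, the use/skip split above is simple to encode. A hedged sketch: the category names come from the lists above, everything else is our own convention:

```python
# Illustrative routing based on the use/skip lists above.
# Task categories mirror the article; the cue text and function
# names are our own, not a Claude API feature.

DEEP_REASONING_TASKS = {
    "code review", "debugging", "analysis", "planning", "complex reasoning",
}

def prepare_prompt(task_type: str, prompt: str) -> str:
    """Prepend a careful-reasoning cue only for task types that benefit."""
    if task_type.lower() in DEEP_REASONING_TASKS:
        return "Think through each part carefully before answering.\n\n" + prompt
    return prompt
```

Simple tasks pass through unchanged, so you are not spending extra tokens on a haiku.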

Long-Context Strategies

Claude's large context window is one of its biggest advantages, but using it effectively requires some technique.

Pasting Full Documents

When working with long documents, place the document first and your questions after:

xml
<document title="Q1 2026 Financial Report">
[full document text]
</document>

Now answer the following questions based only on the document above:
1. What was the year-over-year revenue change?
2. Which business unit underperformed relative to projections?
3. Are there any inconsistencies between the executive summary
   and the detailed financials?

Putting the document first gives Claude the full context before encountering your questions. Putting questions first forces Claude to hold them in memory while processing a long document, which is less effective.

Multi-Document Analysis

Claude can compare multiple documents in a single conversation:

xml
<document_a title="Company A - Annual Report 2025">
[text]
</document_a>

<document_b title="Company B - Annual Report 2025">
[text]
</document_b>

<task>
Compare the growth strategies of Company A and Company B.
Identify three areas where their approaches differ significantly
and assess which strategy is better positioned for a market downturn.
Use specific numbers from the reports.
</task>

Codebase Analysis

For code tasks, paste relevant files with clear labels:

xml
<file path="src/auth/middleware.ts">
[code]
</file>

<file path="src/auth/session.ts">
[code]
</file>

<file path="src/api/users.ts">
[code]
</file>

<task>
Review these files for security vulnerabilities. Focus on:
- Authentication bypass possibilities
- Session handling weaknesses
- Input validation gaps
Pay special attention to how the middleware interacts with
the session module.
</task>

The file path labels help Claude understand the relationship between files and reference them specifically in its response.
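When assembling such a prompt from a real project, a small helper can wrap each file in the labeled tags shown above. A minimal Python sketch (the function and its signature are illustrative):

```python
# Illustrative helper: build a code-review prompt from a mapping of
# file paths to file contents, using the <file path="..."> convention
# shown above, followed by the task description.

def build_review_prompt(files: dict[str, str], task: str) -> str:
    blocks = [
        f'<file path="{path}">\n{code}\n</file>'
        for path, code in files.items()
    ]
    return "\n\n".join(blocks) + f"\n\n<task>\n{task}\n</task>"

prompt = build_review_prompt(
    {"src/auth/middleware.ts": "...", "src/auth/session.ts": "..."},
    "Review these files for security vulnerabilities.",
)
```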

Prompting Claude for Code

Claude is particularly strong at coding tasks. Here are the techniques that produce the best results.

Be Specific About Requirements

Vague coding prompts produce vague code. Compare these:

Vague:

code
Write a function to handle user authentication.

Specific:

code
Write a TypeScript function called authenticateUser that:
- Takes email (string) and password (string) as parameters
- Returns Promise<{ success: boolean; user?: User; error?: string }>
- Checks email format before querying the database
- Uses bcrypt for password comparison
- Returns specific error messages: "Invalid email format",
  "User not found", "Incorrect password"
- Includes JSDoc comments
- Handles database connection errors gracefully

Use early returns. Follow the existing codebase pattern of
returning result objects rather than throwing exceptions.

Provide Codebase Context

When asking Claude to write code that fits into an existing project, give it context about conventions:

code
Our codebase conventions:
- TypeScript strict mode, no `any` types
- Functional style: prefer immutable data, pure functions
- Error handling: Result objects, not exceptions
- Naming: camelCase for functions, PascalCase for types
- Testing: Jest with describe/it blocks

Given these conventions, write a service that...

Ask for Tests Alongside Code

Claude produces better code when you ask it to write tests too:

code
Write the function and include unit tests. The tests should
cover: happy path, invalid input, edge cases (empty string,
null), and error conditions. Use Jest syntax.

The act of thinking about test cases often leads Claude to handle edge cases in the implementation that it might otherwise miss.

Debugging With Claude

When debugging, provide the full error context:

xml
<error>
TypeError: Cannot read properties of undefined (reading 'map')
    at UserList (src/components/UserList.tsx:24:31)
    at renderWithHooks (node_modules/react-dom/...)
</error>

<code file="src/components/UserList.tsx">
[paste the file]
</code>

<context>
This component worked before I added the filtering feature
in the useEffect on line 18. The users data comes from a
React Query hook that fetches from /api/users.
</context>

What is causing this error and what is the correct fix?

The more context you give — error messages, stack traces, relevant code, what changed recently — the more accurate the diagnosis.

Structured Output

Claude reliably produces structured output when you specify the format clearly.

JSON Output

code
Extract the following information from this job posting and
return it as a JSON object with these exact keys:
{
  "title": string,
  "company": string,
  "location": string,
  "salary_range": { "min": number, "max": number, "currency": string } | null,
  "required_skills": string[],
  "experience_years": number | null,
  "remote_policy": "remote" | "hybrid" | "onsite" | "unknown"
}

If a field is not mentioned in the posting, use null.
Do not include any text outside the JSON object.

Providing the exact schema — including types and null handling — produces consistently parseable output.
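On the consuming side, it still pays to parse defensively. A minimal Python sketch, assuming the model occasionally wraps its reply in a markdown fence despite the instruction; the helper and key set mirror the schema above but are otherwise our own:

```python
# Illustrative defensive parsing of a structured-output reply.
# Strips a stray ```json fence if present, then validates that every
# requested key is actually in the object.
import json

EXPECTED_KEYS = {
    "title", "company", "location", "salary_range",
    "required_skills", "experience_years", "remote_policy",
}

def parse_job_posting(reply: str) -> dict:
    text = reply.strip()
    if text.startswith("```"):
        # Drop the opening fence (possibly "```json") and the closing fence.
        text = text.split("```")[1]
        if text.startswith("json"):
            text = text[len("json"):]
    data = json.loads(text)
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model omitted keys: {sorted(missing)}")
    return data
```

Raising on missing keys turns a silently incomplete extraction into an error you can retry.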

Markdown Tables

code
Compare these three frameworks in a markdown table with columns:
| Framework | Learning Curve | Performance | Ecosystem | Best For |

Rate learning curve as Easy/Medium/Hard.
Rate performance as a relative comparison (fastest/middle/slowest).
Keep each cell under 10 words.

Consistent List Formats

code
For each item, use this exact format:

**[Name]** — [One-sentence description]
- Pros: [2-3 bullet points]
- Cons: [2-3 bullet points]
- Best for: [One specific use case]

Do not deviate from this format.

When to Use Claude vs. Other Models

Claude is not the best tool for every task. Here is honest guidance on when Claude excels and when another model might serve you better.

Claude Excels At

  • Long document analysis: Paste a 100-page document and ask questions about specific sections. Claude's context handling is strong here.
  • Complex instruction following: Multi-part prompts with many constraints. Claude honors details other models drop.
  • Code generation and review: Particularly for TypeScript, Python, and Rust. Claude produces well-structured, idiomatic code.
  • Research and analysis: When you need honest assessment rather than confident-sounding guesses.
  • Maintaining tone consistency: Long-form content that needs to stay in the same voice throughout.

Consider Other Models When

  • You need real-time information: Claude does not browse the web during conversations. If your task requires current data, consider models with search integration.
  • You want image generation: Claude analyzes images but does not generate them. Use a model with integrated image generation for visual tasks.
  • You need deep Google ecosystem integration: For tasks involving Gmail, Google Docs, or Google Search grounding, Gemini has native advantages.
  • You want a more conversational, casual tone by default: ChatGPT's default personality is lighter and more conversational. Claude defaults to thorough and precise.

Matching Task to Model

| Task Type | Claude Strength | Notes |
| --- | --- | --- |
| Code review | Very strong | Handles full file context well |
| Creative writing | Strong | Nuanced, but can be verbose |
| Data extraction | Very strong | Reliable structured output |
| Quick Q&A | Moderate | Other models are faster for simple queries |
| Multimodal analysis | Moderate | Can analyze images, but not its primary strength |
| Real-time research | Weak | No web browsing during conversations |

Practical Prompt Templates

Here are ready-to-use templates for common Claude workflows.

Document Analysis Template

xml
<document>
[paste document]
</document>

<task>
Analyze this document and provide:
1. A 3-sentence executive summary
2. The 3 strongest arguments or findings
3. Any gaps, weaknesses, or unsupported claims
4. 5 questions a skeptical reader would ask
</task>

<format>
Use markdown headers for each section.
Be specific — reference page numbers or exact quotes.
</format>

Code Review Template

xml
<code language="typescript" file="[filename]">
[paste code]
</code>

<review_instructions>
Review this code for:
1. Security vulnerabilities
2. Logic errors or edge cases
3. Readability and maintainability
4. Performance issues (only if significant)

For each issue found:
- Quote the problematic line
- Explain the risk
- Provide the corrected code

If the code is solid, say so briefly. Do not manufacture issues.
</review_instructions>

Writing Assistant Template

xml
<context>
Audience: [who]
Purpose: [what this content achieves]
Brand voice: [describe or paste example]
Length: [word count or page count]
</context>

<task>
[Specific writing assignment]
</task>

<constraints>
- [What to avoid]
- [Required elements]
- [Style rules]
</constraints>

Common Mistakes When Prompting Claude

Being Too Vague With Smart Models

Because Claude is good at inferring intent, people sometimes give it less instruction than they should. Claude will produce something reasonable from a vague prompt — but "reasonable" is not "exactly what you needed." Treat Claude's capability as a reason to be more specific, not less.

Ignoring the System Prompt

Many users put everything in the conversation and leave the system prompt empty. This forces you to repeat context in every new conversation. Move your persistent rules, identity, and format preferences into the system prompt. It makes every subsequent message more efficient.

Not Using the Full Context Window

If you have a 30-page document, paste the whole thing. Do not try to summarize it yourself first — you will lose nuance that Claude would have caught. The context window exists to handle large inputs; use it.

Over-Prompting Simple Tasks

Not every question needs a CRAFT framework prompt with XML tags and constraints. "What is the capital of France?" does not need a system prompt, role assignment, and output format specification. Match your prompt complexity to the task complexity.

Accepting the First Response

Claude's first response is good, but iterating makes it better. Tell Claude specifically what to change: "Make section 2 more concise," "Add a counterargument to point 3," "The tone is too formal — make it sound like a blog post, not a white paper." Claude handles revision instructions well because of its strong instruction-following capability.

Building a Claude Workflow

The most effective Claude users build repeatable workflows rather than starting from scratch each time.

  • Create system prompts for your common tasks — code review, writing, analysis, brainstorming. Switch between them as needed.
  • Save your best prompts — When a prompt produces great output, save the template. Tools like SurePrompts' Template Builder let you organize and reuse prompts across sessions.
  • Build prompt chains for complex tasks — Break large projects into sequential prompts where each builds on the previous output. This produces better results than one massive prompt.
  • Use Claude's Projects feature — Upload reference documents, style guides, or codebases as persistent context that you do not have to re-paste every time.
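A prompt chain like the one described above takes only a few lines of glue code. A minimal sketch in which `call_model` is injected so the loop is client-agnostic; all names here are illustrative rather than any particular SDK:

```python
# Illustrative prompt chain: each step's output is substituted into the
# next step's template via a {previous} placeholder. `call_model` is any
# function that takes a prompt string and returns the model's reply.
from typing import Callable

def run_chain(steps: list[str], call_model: Callable[[str], str]) -> str:
    """Run templates in order; the first step may ignore {previous}."""
    previous = ""
    for template in steps:
        previous = call_model(template.format(previous=previous))
    return previous

# Usage with hypothetical step templates:
steps = [
    "Outline a blog post about prompt chaining.{previous}",
    "Expand this outline into a full draft:\n{previous}",
    "Edit this draft for concision and tone:\n{previous}",
]
```

Because each step sees only the previous output, every prompt stays small and focused, which is the point of chaining over one massive prompt.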

For more Claude-specific prompt examples, check out our collection of best Claude prompts for 2026. If you are comparing Claude to other models, our model comparison guide breaks down how each model handles prompts differently.

FAQ

How many tokens can Claude handle in a single conversation?

Claude supports a large context window capable of handling the equivalent of hundreds of pages of text. The exact limit depends on which Claude model you are using — newer models tend to support larger contexts. In practice, you can paste entire research papers, codebases, or long documents without needing to chunk or summarize them first.

Do XML tags actually improve Claude's responses?

Yes, for complex prompts. XML tags help Claude distinguish between source material and instructions, separate examples from tasks, and understand the structure of multi-part requests. For simple questions, they add unnecessary complexity. The rule of thumb: if your prompt has more than two distinct sections, XML tags will improve clarity and response quality.

Can I use the same prompts for Claude and ChatGPT?

Core prompting principles — specificity, context, role assignment — transfer across models. However, Claude-specific techniques like XML tag formatting and certain instruction-following patterns work better on Claude than other models. If you are optimizing for Claude specifically, use the techniques in this guide. If you need cross-model prompts, stick to the universal principles covered in our prompt writing guide.
