You type a prompt. The AI gives you something vaguely useful but not what you actually wanted. You rephrase. Try again. Still not right. Ten minutes later, you're writing the thing yourself anyway.
Sound familiar? The problem isn't the AI. It's the prompt.
Learning how to write AI prompts effectively is the single highest-leverage skill you can build in 2026. Whether you use ChatGPT, Claude, or Gemini, the difference between a mediocre response and a genuinely useful one comes down to how you ask.
This guide breaks down exactly what makes prompts work, gives you a repeatable framework, and shows you 10 real before/after transformations so you can see the difference in practice.
Why Most Prompts Fail
There's a gap between what you type and what you actually mean. You know what you want — the AI doesn't.
When you type "write me a blog post about marketing," your brain fills in dozens of implicit details: the audience, the tone, the length, the purpose, what angle to take, what to include. The AI gets none of that context. So it guesses. And generic input produces generic output.
Most prompts fail for three reasons:
- They're too vague. "Help me with an email" could mean anything from a cold sales outreach to a resignation letter.
- They skip context. The AI doesn't know your audience, your brand voice, your constraints, or your goals.
- They don't specify the output format. Without direction, the AI defaults to whatever pattern it sees most often in its training data — usually a generic essay format.
The fix isn't writing longer prompts. It's writing structured prompts that close the gap between what you mean and what you type.
The Anatomy of a Great Prompt
Every effective AI prompt contains up to five components. You don't always need all five, but the more you include, the better your results.
1. Role
Tell the AI who to be. This primes it to draw on specific expertise, vocabulary, and perspective.
Without role: "Explain quantum computing."
With role: "You are a physics professor who specializes in making complex topics accessible to high school students. Explain quantum computing."
The role doesn't just change the tone — it changes the depth, the examples chosen, and the structure of the response.
2. Context
Give the AI the background information it needs. What's the situation? Who's the audience? What's already been tried?
Without context: "Write a welcome email."
With context: "Write a welcome email for new subscribers to a B2B SaaS newsletter about project management. Our audience is team leads at mid-size companies. Our tone is professional but conversational."
3. Task
Be specific about what you want done. "Write" is vague. "Write a 300-word product description highlighting three key benefits" is actionable.
Vague task: "Help me with my resume."
Specific task: "Rewrite the experience section of my resume to emphasize leadership skills and quantifiable results. I'm applying for a VP of Engineering role."
4. Constraints
Set boundaries: word count, format, things to avoid, specific requirements.
Without constraints: "Write social media posts."
With constraints: "Write 5 LinkedIn posts, each under 200 words. Use a hook in the first line. No hashtags. End each with a question to drive engagement."
5. Output Format
Tell the AI exactly how to structure the response. Table? Bullet points? JSON? Step-by-step numbered list?
Without format: "Compare these project management tools."
With format: "Compare Asana, Monday.com, and Notion in a table with columns for pricing, best use case, key limitation, and team size fit."
The CRAFT Framework
Remembering five components in the middle of a busy workday isn't practical. That's why we created the CRAFT framework — a memorable system that covers all the essentials:
- C — Context: What's the situation? Who's the audience? What background does the AI need?
- R — Role: Who should the AI be? What expertise should it draw on?
- A — Action: What specific task needs to be completed?
- F — Format: How should the output be structured?
- T — Tone: What voice and style should the response use?
CRAFT in Practice
Here's a CRAFT prompt for a real task:
Context: I run a small e-commerce business selling handmade candles. We just launched a new fall collection with three scents: Cinnamon Harvest, Autumn Rain, and Fireside Evening. Our customers are women aged 25-45 who value artisanal products.
Role: You are an experienced e-commerce copywriter who specializes in luxury artisan goods.
Action: Write product descriptions for each of the three candles in our fall collection.
Format: For each candle, write a headline (under 10 words), a 2-sentence emotional hook, 3 bullet points of product details, and a closing CTA.
Tone: Warm, sensory, and inviting — like describing the candle experience to a friend. Not overly salesy.
The response from that prompt will be dramatically better than "write descriptions for my candles."
You don't need to label each section with the CRAFT letters. Once the framework is internalized, you'll naturally include these elements. The labels are training wheels — useful for learning, unnecessary once it's muscle memory.
CRAFT for Different Task Types
The framework adapts to any domain. Here's how the emphasis shifts:
For analytical tasks (data analysis, research, comparisons), lean heavily on Context and Format. The AI needs to know what kind of analysis you want and how to present it. Role matters less — "data analyst" works for most cases.
For creative tasks (writing, brainstorming, marketing copy), Tone and Role carry the most weight. The AI's creative output is shaped primarily by whose perspective it takes and what voice it uses. Be specific about personality, not just adjectives.
For technical tasks (code, documentation, debugging), Action and Context dominate. The AI needs precise specifications — language, framework, constraints, edge cases. Format helps ensure you get code in the right structure with the right comments.
For communication tasks (emails, presentations, proposals), all five components matter roughly equally. The AI needs to know who you are (Role), who you're talking to (Context), what you're saying (Action), how it should look (Format), and how it should sound (Tone).
Quick-Start CRAFT Templates
If you're new to CRAFT, start with these fill-in-the-blank templates:
For writing tasks:
Context: I need [content type] for [audience]. [Any relevant background].
Role: You are [expert type] with experience in [relevant domain].
Action: Write [specific deliverable] that [key requirements].
Format: [Structure — bullets, paragraphs, table, etc.]. [Length constraint].
Tone: [2-3 adjectives describing the voice]. [One thing to avoid].
For analysis tasks:
Context: I have [data/information] about [subject]. [What decision this supports].
Role: You are [analyst type] presenting to [audience].
Action: Analyze [what] and identify [specific outputs — trends, risks, recommendations].
Format: [How to structure — executive brief, detailed report, comparison table].
Tone: [Objective/direct/accessible]. [Level of technical depth].
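If you keep fill-in-the-blank templates like these around, filling them programmatically takes only a few lines. Here's a minimal sketch (the function name and bracket convention are illustrative assumptions, not part of any particular tool):

```typescript
/**
 * Fills [placeholders] in a prompt template from a map of values.
 * Placeholders with no matching value are left untouched so you
 * can spot what still needs filling in.
 */
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\[([^\]]+)\]/g, (match, key) => values[key] ?? match);
}

const prompt = fillTemplate(
  "Context: I need [content type] for [audience]. Action: Write [specific deliverable].",
  { "content type": "a welcome email", audience: "new newsletter subscribers" }
);
// [specific deliverable] stays in the output, flagging the missing value
```

The same pattern works with any templating syntax you prefer; the point is that a saved template plus a small values map turns a good one-off prompt into a reusable one.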
Tip
Don't want to build CRAFT prompts manually every time? SurePrompts' AI Prompt Generator structures your input using these principles automatically. Describe what you need in plain English and get a well-structured prompt back.
10 Before/After Examples
Theory is fine. Seeing the transformation is better. Here are 10 real prompts across different use cases, each showing the generic version and the improved version.
Example 1: Email Writing
Before:
Write a follow-up email after a sales call.
After:
You are a senior sales rep at a B2B SaaS company. Write a follow-up email to a VP of Operations I spoke with yesterday about our workflow automation tool. During the call, she mentioned her team wastes 10+ hours/week on manual data entry. She was interested but concerned about onboarding time. Write a concise follow-up (under 150 words) that: references her specific pain point, addresses the onboarding concern with a specific stat (our average onboarding is 5 business days), and suggests a 15-minute demo as the next step. Tone: professional, confident, not pushy.
Example 2: Blog Post
Before:
Write a blog post about productivity.
After:
You are a productivity coach who writes for busy professionals — not productivity enthusiasts. Write a 1,200-word blog post titled "5 Productivity Systems That Actually Work for People Who Hate Productivity Systems." Target audience: mid-career professionals who've tried and abandoned multiple systems. For each system, explain: what it is (2 sentences), who it works best for, how to start in under 5 minutes, and one common mistake. Tone: practical, slightly irreverent, zero fluff. Use short paragraphs and subheadings.
Example 3: Data Analysis
Before:
Analyze this sales data.
After:
You are a data analyst presenting findings to a non-technical executive team. I'll paste our Q1 2026 sales data below. Analyze it and provide: 1) Top 3 trends with supporting numbers, 2) One concern or anomaly that needs investigation, 3) Two actionable recommendations. Format as an executive brief — bullet points, no jargon, each point in 1-2 sentences. Lead with the most important finding.
Example 4: Code Generation
Before:
Write a function to validate emails.
After:
Write a TypeScript function called validateEmail that takes a string input and returns an object with { isValid: boolean, error?: string }. Requirements: check for @ symbol and domain, reject emails with spaces, allow subdomains (user@mail.company.com), reject consecutive dots, and return specific error messages for each failure case. Include JSDoc comments and 5 unit test cases covering edge cases. Follow the existing codebase pattern of using early returns.
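For reference, here's one plausible shape of the function such a prompt might produce. This is a sketch for illustration, not the model's actual output, and the exact error messages and regex are assumptions:

```typescript
interface ValidationResult {
  isValid: boolean;
  error?: string;
}

/**
 * Validates an email address, returning a specific error message
 * for each failure case. Uses early returns throughout.
 */
function validateEmail(input: string): ValidationResult {
  // Reject emails containing whitespace
  if (/\s/.test(input)) {
    return { isValid: false, error: "Email must not contain spaces" };
  }
  // Require exactly one @ with a non-empty local part
  const parts = input.split("@");
  if (parts.length !== 2 || parts[0] === "") {
    return { isValid: false, error: "Email must have a local part and one @ symbol" };
  }
  // Reject consecutive dots anywhere in the address
  if (input.includes("..")) {
    return { isValid: false, error: "Email must not contain consecutive dots" };
  }
  // Domain must have at least one dot-separated label; subdomains allowed
  if (!/^[^.\s@]+(\.[^.\s@]+)+$/.test(parts[1])) {
    return { isValid: false, error: "Email must have a valid domain" };
  }
  return { isValid: true };
}
```

Notice how each requirement in the prompt maps to a specific check and a specific error message. That traceability is exactly what the detailed prompt buys you: you can verify the output against the spec line by line.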
Example 5: Creative Writing
Before:
Write a short story about a dog.
After:
You are a literary fiction writer with a talent for emotional subtlety. Write a 500-word short story from the perspective of an aging golden retriever whose family just brought home a new puppy. The story should convey the mix of jealousy, confusion, and eventual acceptance without ever naming those emotions directly — show them through the dog's observations and behaviors. Use sensory details. End on a quietly hopeful note. No anthropomorphizing the dog's thoughts beyond what a dog might actually notice.
Example 6: Marketing Copy
Before:
Write an ad for my fitness app.
After:
You are a direct-response copywriter. Write a Facebook ad (primary text + headline + description) for a fitness app targeting busy parents aged 30-45. Key differentiator: workouts are 15 minutes or less and require no equipment. Social proof: 50,000+ users, 4.8-star rating. The primary text should be 3-4 short lines with a hook that calls out the "no time to exercise" objection. Headline under 8 words. Include one emoji maximum.
Example 7: Meeting Preparation
Before:
Help me prepare for a meeting.
After:
I'm a product manager meeting with my engineering lead tomorrow to discuss why our Q1 feature release is 3 weeks behind schedule. I suspect the delay is partly due to unclear requirements (my responsibility) and partly scope creep from stakeholder requests. Help me prepare: 1) An honest assessment of what went wrong (3-4 bullet points), 2) Three specific questions to ask the engineering lead to understand their perspective, 3) A proposed plan to get back on track that doesn't just say "work harder." Keep it realistic — no corporate platitudes.
Example 8: Research Summary
Before:
Tell me about climate change solutions.
After:
You are a science journalist writing for a well-educated general audience. Summarize the current state of climate change mitigation technology as of 2026. Cover these categories: renewable energy, carbon capture, transportation, and agriculture. For each: the most promising development, its current scale vs. what's needed, and the biggest barrier to wider adoption. Cite specific companies or projects where possible. Format as 4 sections with subheadings, 150-200 words each. Prioritize accuracy over optimism.
Example 9: Learning and Explanation
Before:
Explain machine learning.
After:
Explain machine learning to me. I have a business background — I understand statistics basics (mean, median, correlation) but have never coded. I need to understand ML well enough to make informed decisions about whether to invest in an ML-powered feature for our product. Cover: what ML actually does (skip the textbook definition), 3 types of ML with a business example for each, what ML needs to work well (data requirements), and common ways ML projects fail. Use analogies from business, not math. If something is genuinely complex, say so rather than oversimplifying.
Example 10: Process Documentation
Before:
Write documentation for our onboarding process.
After:
You are a technical writer creating internal documentation. Write an onboarding guide for new customer support agents at a SaaS company. The guide should cover: Day 1 (system access + tool setup), Days 2-3 (product training — our tool is a project management platform), Days 4-5 (shadowing senior agents), and Week 2 (handling tickets with supervision). For each phase, include: goals, specific tasks with checkboxes, who to contact if stuck, and a "you're ready to move on when..." checklist. Format as a clean document with clear headings. Tone: welcoming but practical.
Common Mistakes That Kill Your Results
Even with the CRAFT framework, there are patterns that consistently produce poor output. Here's what to avoid.
Mistake 1: Being Vague About the Audience
"Write a blog post about investing" produces completely different content depending on whether the audience is college students, retirees, or hedge fund managers. Always specify who the content is for.
The fix: Add one sentence about the reader. "The audience is first-time investors in their 20s who know nothing about the stock market" transforms the output entirely.
Mistake 2: Asking for Too Many Things at Once
A prompt that asks the AI to "write a marketing strategy, create social media posts, draft email campaigns, and build a content calendar" will produce shallow work across all four. Break complex requests into focused prompts.
The fix: One prompt, one deliverable. Chain multiple prompts for multi-part projects. The output quality improvement is dramatic.
Mistake 3: Not Specifying What to Avoid
Sometimes the best prompts include what not to do. "Don't use jargon." "No bullet points." "Avoid clichés like 'in today's fast-paced world.'" These constraints prevent the AI's default patterns from taking over.
The fix: After writing your prompt, add 2-3 "do not" instructions. Think about what the AI is most likely to default to, and explicitly prevent it.
Mistake 4: Forgetting to Set Length
Without a word count or length constraint, the AI guesses. Sometimes it writes 100 words when you needed 1,000. Sometimes it writes 2,000 when you needed a paragraph. Be explicit.
The fix: Always include a word count, paragraph count, or page count. "Approximately 500 words" or "keep it under 3 paragraphs" — either works.
Mistake 5: Accepting the First Output
AI prompting is iterative. The first response is a starting point. Tell the AI what to fix: "Make this more concise," "The tone is too formal — make it conversational," "Expand the section on pricing." The best results come from 2-3 rounds of refinement.
The fix: Treat the first output as a draft. Spend 30 seconds identifying the biggest gap, then ask for a specific improvement. Two rounds of this produces significantly better results than one "perfect" prompt.
Mistake 6: Not Providing Examples
When you want a specific style or format, show the AI an example. "Here's an email I wrote that I like the tone of — write the new email in this same style" is more effective than describing the style in abstract terms.
The fix: Keep a folder of writing samples you like — your own or others'. Paste them into prompts when style matters. Two good examples are worth a paragraph of style descriptions.
Mistake 7: Using AI Like a Search Engine
The mistake is treating AI as an answer machine rather than a thinking partner. Instead of "what's the best marketing strategy?", try "here's my current marketing approach — what am I missing? What would you change and why?"
The fix: Give the AI context about your situation, then ask for analysis, not just information. The shift from "tell me about X" to "here's my situation — help me think through X" is transformative.
For a deeper dive into prompt mistakes and how to fix them, check out our guide to common prompt mistakes.
Model-Specific Tips
The same prompt can produce different results on different models. Here's what to know about the big three.
ChatGPT (GPT-4o)
ChatGPT responds well to structured prompts and handles creative tasks with natural flair. It tends to be verbose by default — add word count constraints or say "be concise." Custom Instructions and memory features let you set persistent context so you don't repeat yourself in every conversation.
ChatGPT-specific techniques:
- Use the memory feature to store recurring context (your role, brand voice, audience)
- Custom GPTs let you create specialized assistants for specific tasks
- DALL-E integration means image generation prompts work inline
- Data analysis capabilities handle CSV and Excel files directly
For advanced ChatGPT techniques, see our guide to using ChatGPT like a pro.
Claude
Claude excels at following nuanced instructions and handling long documents. It's particularly strong at maintaining consistent tone across long outputs and being honest about uncertainty. Use XML tags to separate sections of your prompt — Claude was trained to handle structured markup effectively.
Claude-specific techniques:
- Wrap different parts of your prompt in XML tags: <context>, <task>, <constraints>
- Claude's 200K context window handles entire books, codebases, or research papers
- The Projects feature lets you upload persistent reference documents
- Claude tends to be more cautious — add "give your best assessment even if uncertain" for confident responses
Gemini
Gemini's strength is multimodal understanding — it handles images, video, and code alongside text. It integrates tightly with Google's ecosystem, so prompts involving search, Google Workspace, or data from Google tools tend to perform well. It can sometimes be more conservative in creative tasks compared to ChatGPT or Claude.
Gemini-specific techniques:
- Upload images and ask questions about them directly
- Gemini integrates with Google Search for current information
- It handles code generation well, especially for Google Cloud and Firebase
- Use Gemini for tasks that involve your Google Workspace data
Universal Principles
Regardless of model, these always help:
- More context beats less context
- Specific beats vague
- Examples beat descriptions
- Constraints beat open-ended requests
- Iteration beats single-shot prompting
Testing and Iterating on Your Prompts
Writing a prompt is step one. Testing it is where real improvement happens.
The 3-Run Test
Run your prompt three times and compare the outputs. If all three are consistently good, your prompt is well-structured. If they vary wildly, you need more constraints — the AI is making too many decisions on its own.
The "Would I Use This?" Test
After the AI generates output, ask yourself: would I actually use this? Not "is this technically correct" but "would I send this email / publish this post / share this analysis?" If not, identify what's missing and add it to the prompt.
Version Your Prompts
When you find prompts that work well, save them. Create a personal prompt library organized by task type. This sounds obvious, but most people write one-off prompts and lose them.
Tools like SurePrompts' Template Builder let you save, organize, and iterate on prompts across sessions — building up a library of templates customized to your specific needs.
The Feedback Loop
The fastest way to improve is to tell the AI what was wrong with its output:
That response was too long and too formal. Rewrite it in half the words and make it sound like I'm talking to a colleague, not presenting at a conference.
Each correction teaches you what your prompts need. After a few iterations, you'll start including those instructions upfront.
Putting It All Together
Writing AI prompts well is a skill, and like any skill, it develops with practice. Start with the CRAFT framework until it becomes automatic. Study the before/after examples to internalize what "specific enough" actually looks like.
If you want to skip the learning curve and generate well-structured prompts instantly, SurePrompts' AI Prompt Generator applies these principles automatically. Describe what you need in plain English, and it builds a detailed, structured prompt for you. You can also explore 320+ ready-made templates in the Template Builder for specific use cases.
The core principle is simple: close the gap between what you mean and what you type. Every detail you add to a prompt is one less thing the AI has to guess about. And AI is better at following instructions than it is at reading your mind.
Beyond the Basics: Advanced Prompt Techniques
Once the fundamentals are solid, these techniques take your prompts to the next level.
Meta-Prompting
Ask the AI to help you write a better prompt before you use it:
I want to write a blog post about remote work productivity for team leaders. Before you write anything, suggest 5 ways I could improve this prompt to get a significantly better result.
This teaches you what information the AI finds useful — and each suggestion becomes part of your prompting intuition.
Negative Prompting
Specify what you don't want. This is surprisingly powerful because AI models have strong defaults that need explicit overriding:
Write the email, but:
- No "I hope this email finds you well"
- No exclamation marks
- No passive voice
- No sentences longer than 20 words
- Don't start with "Dear [name]" — use a first-name greeting
Chain Prompting
Break a complex task into a series of connected prompts where each builds on the previous output:
- "Generate 10 headline options for a blog post about X"
- "Take headline #3 and create a detailed outline for a 2,000-word post"
- "Write the introduction based on this outline. Make the hook a personal anecdote."
- "Now write section 2, maintaining the tone established in the intro."
Each prompt is focused, which produces better results than asking for everything at once.
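In code, chain prompting is just a loop that carries each step's output forward into the next prompt. Here's a minimal sketch; the askModel function is a stand-in for whatever chat API you use, and its name and the prompt-joining format are assumptions, not a real SDK:

```typescript
type AskFn = (prompt: string) => string;

/**
 * Runs a sequence of focused prompts, feeding each step the
 * previous output so the model builds on its own earlier work.
 */
function runChain(steps: string[], askModel: AskFn): string {
  let previous = "";
  for (const step of steps) {
    // Attach the prior output so each prompt stays small and focused
    const prompt = previous ? `${step}\n\nPrevious output:\n${previous}` : step;
    previous = askModel(prompt);
  }
  return previous; // the final step's output
}
```

In practice, askModel would wrap a call to your provider's chat endpoint. The design point is the structure: each prompt sees only the last result rather than an ever-growing conversation, which keeps every step as focused as the examples above.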
Constraint Stacking
Layer constraints one at a time in follow-up messages rather than front-loading them all:
First message: "Write a product description for a smartwatch."
Second message: "Good. Now make it under 100 words."
Third message: "Now adjust the tone for a luxury audience — think Apple, not Amazon."
Fourth message: "Replace any technical jargon with benefits-focused language."
This lets you evaluate each constraint's impact individually and produces more controlled results than a single complex prompt.
For even more frameworks and techniques, see our guide to AI prompt frameworks.
FAQ
What is the best format for writing AI prompts?
The best format includes five components: role (who the AI should be), context (background information), task (what to do), constraints (boundaries like word count), and output format (how to structure the response). You don't always need all five, but including more components consistently produces better results. The CRAFT framework — Context, Role, Action, Format, Tone — is an easy way to remember the essentials.
How long should an AI prompt be?
There's no ideal length — the right length depends on the complexity of the task. A simple question might need one sentence. A complex content brief might need 200 words. The goal isn't length for its own sake; it's providing enough specific detail that the AI doesn't have to guess. If your output isn't good enough, the prompt probably needs more context, not more words.
Do the same prompts work on ChatGPT, Claude, and Gemini?
The core principles — specificity, context, role assignment, format instructions — work across all models. However, each model has strengths. ChatGPT handles creative tasks well, Claude excels at long-form instruction following, and Gemini is strong with multimodal inputs. If you're optimizing for a specific model, learn its strengths and adjust accordingly.
How do I get AI to match my writing style?
The most reliable method is providing an example. Paste a paragraph or two of your existing writing and say "Match this tone and style." You can also describe specific attributes: "Use short sentences. Conversational tone. Occasional dry humor. No jargon." Combining both approaches — example plus description — works best.
What's the difference between a prompt and a system prompt?
A regular prompt is what you type in the chat. A system prompt (or custom instruction) is persistent context that applies to every message in a conversation. System prompts are ideal for setting a consistent role, tone, or set of rules that you don't want to repeat every time. For more on system prompts, see our guide to system prompts and custom instructions.
Can AI help me write better prompts?
Yes — and this is one of the most underused techniques. You can ask the AI to improve your prompt before executing it: "I want to ask you to write a blog post. Before you write it, suggest how I could improve this prompt to get better results." This meta-prompting approach helps you learn what information the AI finds useful. Or use SurePrompts' Prompt Generator to automate the process entirely.