Campaign copy is the output marketing teams use AI for most often, and the output they are most often disappointed in. Someone types "write Facebook ad copy for our product launch," the model returns three breezy headlines with exclamation marks, and the copy goes in the doc as a placeholder nobody wants to ship. The failure is not the model. Campaign copy does not exist in the abstract — it lives inside channels, briefs, and a voice the brand has already committed to.
Good AI campaign copy prompts do three things the generic version skips. They name the channel explicitly — a paid search ad is not a landing page headline is not the fourth email in a nurture sequence. They feed the creative brief in as input rather than asking the model to guess. And they ask for three to five variants at different angles, because campaign copy is a testing surface, not a single best answer.
This guide sits in the marketing track of our prompt engineering for business teams guide and pairs with AI brief writing prompts upstream and AI competitor analysis prompts for the positioning work that feeds the voice guidelines.
Why Generic "Write Ad Copy" Prompts Fail
Three things break at once when you ask a model to write ad copy without structure.
The first is channel blindness. A paid search ad has a 30-character headline and exists beside competitors on a results page. A homepage hero works on a blank canvas and is read in a scan. A cold email subject line has to survive a one-second inbox sort. A welcome email body has space to tell a short story. When the prompt says "write ad copy," the model picks a generic middle and produces something that is no format's best answer.
The second is brief blindness. The creative brief — the upstream document that names audience, objective, insight, promise, and mandatories — is where campaign copy gets its specificity. Without the brief in the prompt, the model substitutes category-generic phrasing ("transform your workflow," "unlock your potential") that could describe any vendor in the category, because it does.
The third is the one-best-answer assumption. Campaign copy is not prose; it is a set of hypotheses the market will test. The useful deliverable is three to five variants built against different angles — benefit-led, pain-led, social-proof, curiosity, price-led — so the team has something to test. Asking for "the best headline" gets you one polished sentence that may be the third-best angle poorly optimized.
Channel-Aware Prompting
The first discipline is naming the channel — not "an ad," but "a Google Search ad headline under 30 characters" or "a LinkedIn sponsored post first line under 150 characters that survives the feed cut." Channels have shape, length, and reading context; prompts that do not encode them produce shape-agnostic copy.
| Channel | Shape constraint | Reading context | What wins |
|---|---|---|---|
| Paid search ad | 30-char headline, 90-char description | Beside competitors on a results page | Explicit benefit, keyword match, clear CTA |
| Paid social ad | 125-char primary text (visible), 40-char headline | Scroll-speed feed | Pattern interrupt in line one, benefit visible pre-cut |
| Landing page hero | 5-10 word headline, 15-25 word sub | Post-click, looking for promise match | Mirrors ad promise verbatim; names the outcome |
| Landing page body | Scannable sections, 1-2 sentence paragraphs | Skimming for proof | Benefits over features, proof near claims |
| Cold email subject | 30-50 chars | One-second inbox sort | Specific, low-hype, curiosity or named-benefit |
| Nurture email body | 100-250 words | Half-attention read on phone | Story + single CTA, voice consistent with brand |
| Push / SMS | 40-120 chars | Interrupt context | One verb, one benefit, one link |
The table is a starting point — channels evolve, limits change, what wins in your category might differ. The point is that every channel has these three axes and the prompt has to name them. A channel-aware prompt sounds like: "Write a Google Search ad with a 30-character headline and 90-character description. The reader is on a results page comparing our product to three competitors. The CTA is a free trial sign-up." That frame produces usable drafts. "Write an ad" does not.
A useful diagnostic: if you can copy-paste the prompt between channels and nothing feels wrong, the prompt is underspecified.
Feeding the Creative Brief
The brief is the second half of the specificity problem. A creative brief names the audience (not "small business owners" — "early-stage SaaS founders who just raised a seed round and are hiring their first marketer"), the insight (the specific belief or tension the campaign addresses), the promise (the one-sentence claim), the proof (reasons to believe the promise), the mandatories (legal disclaimers, product names as written), and the forbidden words.
Either that document already exists, or it should. Good AI campaign copy prompts paste it in verbatim. The template:
ROLE:
You are a senior copywriter producing variants for a campaign. You
work from the supplied brief. You do not invent audience details,
proof points, or claims beyond what the brief supports.
CONTEXT:
Creative brief (use verbatim — do not paraphrase the brief into
generic phrasing):
---
[paste brief: audience, insight, promise, proof, mandatories,
forbidden words]
---
Channel and constraints:
- Channel: [specific channel — e.g., Meta paid social, carousel first slide]
- Format: [character/word limits]
- Placement context: [what surrounds this copy in the user's view]
- CTA: [specific call to action]
Brand voice notes:
[1-2 paragraphs on voice — pace, formality, humor, forbidden
phrasing, any example lines the brand has run before]
TASK:
Produce 5 variants. Each variant must:
1. Hit the character/word limit for the channel.
2. Use the promise from the brief as the core claim (phrased
differently in each variant, but not drifting to a new claim).
3. Include only proof points named in the brief.
4. Honor mandatories and avoid forbidden words.
5. Take a distinct angle — label the angle at the start of each
variant (benefit-led, pain-led, social-proof, curiosity, price-led,
or another angle you justify).
FORMAT:
Numbered list. For each variant:
- Angle: [label]
- Copy: [the actual copy]
- Note: [one line on what the variant optimizes for]
ACCEPTANCE:
- No variant exceeds the channel limit.
- No claim appears that is not supported by the brief.
- No forbidden word appears.
- The five angles are distinct from each other — not five versions
of the same angle with different verbs.
Three things make this prompt work where "write me some ad copy" fails. The brief is pasted in, so the model reasons against specific inputs rather than category priors. The channel and constraints are named, so the output fits the format. The variants clause forces different angles.
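The acceptance criteria in that template are mechanical enough to check in code before a human reviews the variants. A minimal sketch in Python, assuming the variants arrive as labeled records (the function name and field names here are illustrative, not part of any tool):

```python
# Pre-review check for AI-generated ad variants against the
# acceptance criteria: channel limit, forbidden words, distinct angles.
# The "angle"/"copy" dict shape is an assumption, not a real schema.

def check_variants(variants, char_limit, forbidden_words):
    """Return a list of (variant index, problem) tuples; empty means pass."""
    problems = []
    seen_angles = set()
    for i, v in enumerate(variants):
        copy, angle = v["copy"], v["angle"].lower()
        if len(copy) > char_limit:
            problems.append((i, f"over limit: {len(copy)} > {char_limit} chars"))
        for word in forbidden_words:
            if word.lower() in copy.lower():
                problems.append((i, f"forbidden word: {word!r}"))
        if angle in seen_angles:
            problems.append((i, f"duplicate angle: {angle!r}"))
        seen_angles.add(angle)
    return problems

variants = [
    {"angle": "benefit-led", "copy": "Briefs in; campaigns out in 10 minutes."},
    {"angle": "pain-led", "copy": "Unlock agency-quality output today"},
]
print(check_variants(variants, char_limit=125,
                     forbidden_words=["unlock", "transform", "empower"]))
# → [(1, "forbidden word: 'unlock'")]
```

This does not replace the human read against the brief (claim drift is a judgment call), but it catches the purely mechanical failures before anyone spends attention on them.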
Variants, Not One Answer
The variants discipline deserves its own beat because teams under-use it. The temptation, especially with a model that produces confident polished prose, is to accept the first output and move on. The prompt above resists that by forcing distinct labeled angles — a benefit-led variant, a pain-led variant, a social-proof variant, a curiosity variant, a price-led variant.
Two or three of the five will usually feel wrong for the brand. Good. The team picks the two that feel right and runs them as a test; the loser gets retired; the winner becomes the control for the next round. Without variants, there is no test; without a test, there is no learning.
A lighter version works for rapid iteration: give the model the winning variant and ask for three new challengers at angles the current control does not use. Over time the control improves because the challenger set keeps exploring the space.
Voice and Tone in the Prompt
Brand voice is the part most prompts skip and most readers notice. A model left to its own defaults lands in a polite, energetic, over-exclamatory register that is recognizably AI. Brands that sound like anything in particular sound unlike that default.
The way to keep voice in the output is to put voice into the prompt — not "write in our brand voice" (meaningless to the model) but three concrete inputs:
- Voice description in plain words. "Dry, direct, low on superlatives. Uses specific numbers where possible. Never says 'unlock,' 'transform,' 'empower.' Short sentences. Avoids rhetorical questions."
- Three to five example lines the brand has shipped. Real lines from real campaigns. The model pattern-matches against them more reliably than against description alone.
- A forbidden list. Five to ten words and phrases the brand has disowned. Short list, but it kills the worst of the default register.
Keep a reusable voice block as part of your prompt template library and paste it into every campaign prompt. Treat it as a document that evolves: when a campaign retires a phrase, add it to forbidden; when a headline lands well, add it to examples.
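One way to make that shared artifact enforceable is to keep the voice block as structured data and render it into every campaign prompt, so each team member interpolates the same text. A sketch, with illustrative content borrowed from the examples above (swap in your brand's real description, shipped lines, and forbidden list):

```python
# A voice block kept as data and rendered into every campaign prompt.
# The content is illustrative placeholder material, not a real brand.

VOICE_BLOCK = {
    "description": "Dry, direct, low on superlatives. Specific numbers "
                   "where possible. Short sentences. No rhetorical questions.",
    "examples": [
        "Briefs in; campaigns out. Your weekend stays yours.",
    ],
    "forbidden": ["unlock", "transform", "empower", "revolutionize"],
}

def render_voice_block(block):
    """Render the shared voice artifact as a prompt section."""
    lines = ["Brand voice notes:", block["description"], "", "Example lines:"]
    lines += [f"- {ex}" for ex in block["examples"]]
    lines.append("")
    lines.append("Forbidden words: " + ", ".join(block["forbidden"]))
    return "\n".join(lines)

print(render_voice_block(VOICE_BLOCK))
```

Storing it as data also makes the quarterly update concrete: retiring a phrase is an append to `forbidden`, and a headline that lands well is an append to `examples`.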
Testing Prompts
Two prompts close the loop on campaign copy. The first generates A/B variants — covered above. The second reviews performance after the fact and extracts lessons.
ROLE:
You are a performance copywriter reviewing live campaign results.
You work from the supplied results data and the original brief.
You do not invent external benchmarks or metrics.
CONTEXT:
Original brief:
---
[paste brief]
---
Variants that ran:
---
[paste each variant with its angle label and actual copy]
---
Performance data:
---
[paste impressions, CTR, conversion rate, or whatever metric the
team uses, per variant]
---
TASK:
Produce a short post-campaign review with:
1. Which angle won and by how much.
2. A hypothesis about why the winning angle worked for this audience,
referencing the brief.
3. Which angle lost and a hypothesis about why.
4. Three new angles to test next round that the current set did
not cover.
FORMAT:
Markdown. Four short sections. No metrics invented — if a number
is not in the supplied data, write "not measured in this round."
This prompt is short because it does not need to be long. Forcing the hypothesis to reference the brief keeps the review from stopping at surface-level pattern-matching ("people clicked the one with 'free' in it"). Banning invented metrics shuts the door on half-remembered industry benchmarks creeping in as though they were measured.
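The review prompt's first task (which angle won, and by how much) is also worth computing yourself before pasting the data in, since models paraphrase arithmetic unreliably. A sketch, assuming per-variant impression and click counts (the field names are illustrative; adapt them to whatever your ad platform exports):

```python
# Rank variants by click-through rate before handing results to the
# review prompt, so the "which angle won" numbers are computed, not
# model-estimated. Field names are assumptions, not a platform schema.

def rank_by_ctr(results):
    """Sort variants by click-through rate, best first."""
    ranked = []
    for r in results:
        ctr = r["clicks"] / r["impressions"] if r["impressions"] else 0.0
        ranked.append({**r, "ctr": ctr})
    return sorted(ranked, key=lambda r: r["ctr"], reverse=True)

results = [
    {"angle": "benefit-led", "impressions": 12000, "clicks": 180},
    {"angle": "pain-led", "impressions": 11800, "clicks": 248},
    {"angle": "social-proof", "impressions": 12100, "clicks": 145},
]
ranked = rank_by_ctr(results)
winner, runner_up = ranked[0], ranked[1]
lift = (winner["ctr"] - runner_up["ctr"]) / runner_up["ctr"]
print(f"{winner['angle']} won with {winner['ctr']:.2%} CTR "
      f"({lift:.0%} over {runner_up['angle']})")
```

Paste the computed ranking into the prompt's performance-data section; the model then spends its effort on the hypothesis, which is the part it is actually good at.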
Example Paid-Ad Prompt
A worked example for a hypothetical B2B launch. The specifics below are illustrative, not a real product or campaign.
ROLE:
Senior performance copywriter producing Meta paid social variants
from a supplied brief. Work only from the brief; do not invent
audience details or proof points.
CONTEXT:
Brief:
---
Audience: solo marketing leads at 10-50-person B2B SaaS companies,
owning content and demand with no team.
Insight: They distrust marketing tools that promise "agency-quality
output" because they have tried several and each took more setup
than it saved.
Promise: [illustrative product name] ships a first-draft campaign in
under 10 minutes from a one-paragraph brief.
Proof: Ten-minute benchmark measured internally across 50 briefs;
campaign outputs include ad variants, landing page draft, and
three emails.
Mandatories: Product name written as [illustrative product name].
Forbidden: "unlock," "transform," "empower," "revolutionize," any
exclamation points.
---
Channel: Meta feed, single-image ad.
Format: Primary text max 125 chars (first 125 visible pre-cut),
headline max 40 chars, description max 30 chars.
Placement: Mobile feed, surrounded by other B2B sponsored posts.
CTA: "Start free" (button, not copy).
Voice: Dry, direct, specific numbers where possible. Short sentences.
Example line from prior campaign: "Briefs in; campaigns out. Your
weekend stays yours."
TASK:
Produce 5 variants with distinct angles, labeled. Each hits all
character limits and honors the forbidden list.
FORMAT:
Numbered list. For each variant: Angle, Primary Text, Headline,
Description, and one-line optimization note.
ACCEPTANCE:
- All character limits respected.
- No forbidden word appears.
- No claim appears that is not in the brief's Promise or Proof.
- Five distinct angles.
Output from a prompt like this reads tighter than what a generic ad-copy prompt produces, because every constraint is named and every claim traces back to the brief. A team can look at five labeled angles, pick two to test, kill the other three, and ship.
Example Email-Sequence Prompt
A shorter example for a three-email welcome sequence.
ROLE:
Lifecycle copywriter producing a 3-email welcome sequence from a
supplied brief. Work only from the brief.
CONTEXT:
Brief: [paste audience, insight, promise, proof, mandatories, forbidden]
Voice block: [paste]
Sequence goal: Move the signup from account-created to first value
moment (their first generated campaign) within 72 hours.
TASK:
Write three emails:
1. Email 1 (send: on signup) — welcome + one small first action.
2. Email 2 (send: +24h if action not taken) — remove the objection
the brief's Insight names, with one proof point.
3. Email 3 (send: +48h) — short, a single CTA to the first action,
no new claims.
FORMAT:
For each email: Subject (max 50 chars), Preheader (max 90 chars),
Body (100-180 words), CTA line.
ACCEPTANCE:
- Each email has exactly one primary CTA.
- No email introduces a claim outside the brief.
- Voice block honored in all three.
- Email 3 is shorter than Emails 1 and 2.
Narrative sequences shift channel-awareness from "length" to "pacing." Email 3 is explicitly shorter because third-touch emails at the length of first-touch emails read as repetitive — the constraint encodes a pacing rule the model would otherwise miss. For longer sequences the structure extends; the key is labeling each email's job rather than leaving the model to infer.
Common Anti-Patterns
Most campaign-copy failures map to one of these.
- Prompting for "copy" without naming the channel. Produces format-agnostic output that fits nowhere. Fix: name the channel, limits, and reading context.
- Skipping the brief. Produces category-generic copy that could belong to any vendor. Fix: paste the brief verbatim.
- Asking for "the best headline." No testing surface. Fix: ask for three to five variants at distinct, labeled angles.
- Leaving voice undefined. Defaults to the model's generic marketing register. Fix: a reusable voice block with description, examples, and forbidden words.
- Letting claims drift past the brief. The model embellishes promises; unreviewed copy ships with claims the brief does not support. Fix: forbid it in the acceptance block; human-read against the brief before ship.
- Skipping post-campaign review. Next quarter's prompt starts from the same priors. Fix: run the performance-review prompt every round and feed its angles into the next variant set.
For adjacent marketing outputs, pair this guide with AI brief writing prompts, AI competitor analysis prompts, and AI proposal writing prompts.
FAQ
Can I use the same prompt across channels if I just change the character limit?
Not usefully. Character limit is only one of three channel axes — reading context and placement matter as much. A 30-character headline on a results page is different work from a 30-character push notification. Keep one prompt template per channel family and change the inputs, not the template.
How many variants should I actually ask for?
Three is the floor; five is the ceiling. Below three there is no testing surface. Above five the angles collapse into each other and the last variants are minor rewrites. Five labeled, distinct angles is the sweet spot.
What if my brand does not have a documented voice?
Build one before writing more prompts. A usable voice block is a paragraph of description, three to five example lines from shipped work, and a forbidden list — an afternoon of work, not a quarter.
Should I let AI write the brief too?
Sometimes. Brief-writing prompts work well when the inputs exist — discovery notes, audience research, competitive landscape — and the model is structuring them. They fail the same way campaign copy fails when handed a blank page. See AI brief writing prompts for the pattern.
How do I keep voice consistent when multiple team members write prompts?
Make the voice block a shared artifact — a page in the brand wiki or a pinned doc — and require every campaign prompt to paste it in. Review and update quarterly.
Campaign copy is channel-shaped, brief-shaped, and voice-shaped. Prompts that encode all three produce drafts that feel like the brand and fit the surface. Prompts that ask for one best answer get one answer; prompts that ask for five labeled variants get a testing surface, and the testing surface is where campaign learning actually happens. The generic "write some ad copy" prompt is not broken — it is underspecified.