A sales rep types "write me a proposal for Acme Logistics based on our discovery call" into the nearest AI assistant the night before a deadline and gets back three pages of confident prose that could have been written for any company. Executive summary, "proposed solution," bullets about partnership. The rep edits the company name, tweaks a number, sends it. The buyer reads one paragraph, tags it as templated, and forgets it.
Proposals are where AI promises the most time savings and delivers the least, because the task looks deceptively familiar. A proposal looks like a long document, so writers reach for a long-document prompt. What it actually is: a compressed decision brief for a specific buyer, built from specific discovery notes, with a specific price against a specific scope. Feed the model those specifics and it writes a proposal the buyer can evaluate. Skip them and it writes filler.
This guide sits in the sales track of our prompt engineering for business teams guide and pairs with AI discovery call prompts, AI campaign copy prompts, and AI pipeline forecasting prompts.
Why "Write Me a Proposal" Prompts Fail
The failure is not that the model writes badly. It writes fluently, which is the problem. Given "write a proposal for a mid-market logistics company to use our ops platform," the model averages across every proposal-shaped document in training data and produces the midpoint: a structure that looks correct, a value proposition that could apply to any vendor, pricing framed as "investment," a close invoking "partnership." The output is a proposal the way a stock photo is a photo — technically yes, but nothing in it is about this buyer.
A proposal carries five inputs a generic prompt cannot supply: the scope the buyer actually asked about; the buyer's own words for the problem; the pricing structure and what it anchors to; the SOW — what is delivered, by when, and what "done" looks like; and the objections the buyer will raise privately, preempted so the internal champion can use the document. Miss any one and the proposal reads as templated.
The rest of this guide is five prompts, one per input, plus a synthesis prompt.
Scoping Prompts: Translating Discovery Into Scope
The first failure is scope drift. The buyer mentioned four pains; the proposal addresses all four with equal weight — or worse, mentions capabilities the buyer never asked about because the vendor is proud of them. Good scoping is subtractive. The prompt reads the discovery notes, pulls out what the buyer signaled, and produces a scope statement the buyer will recognize as their own problem.
| Input from discovery | What the scoping prompt extracts | What it deliberately excludes |
|---|---|---|
| Raw call notes | Pains the buyer stated in their own words, ranked by emphasis | Pains the rep wishes the buyer had mentioned |
| Buyer's stated priorities | The 2-3 outcomes that would define a successful engagement | Outcomes the vendor's product is best at but the buyer did not value |
| Organizational context | Who signs off, who runs the pilot, who reports on results | Org-chart names the buyer did not actually say |
| Existing systems mentioned | Touchpoints the proposed scope will integrate with | Systems the vendor would like to replace but the buyer did not raise |
The instinct during proposal-writing is expansion — cover more, because more looks like more value. The opposite is true: a scope the buyer recognizes as theirs is more persuasive than a scope that covers everything the vendor could deliver. The prompt enforces that discipline by reading the discovery notes as the only source.
ROLE:
You are a sales engineer converting discovery call notes into a
proposal scope statement. You work only from the notes supplied.
You do not add scope the buyer did not raise.
CONTEXT:
Discovery call notes (raw):
---
[paste call notes or structured post-call synthesis]
---
Our product's capabilities (for reference only — do not include
any capability the buyer did not raise):
[paste the capability list]
TASK:
Produce a scope statement with these sections:
1. Problems in scope (ranked by how heavily the buyer emphasized
them, with a direct quote from the notes for each).
2. Outcomes in scope (2-3 outcomes the engagement will drive,
each tied to a pain in section 1).
3. Integration touchpoints (only systems the buyer mentioned).
4. Explicitly out of scope (things the buyer ruled out or did
not raise; list them so the SOW does not drift later).
FORMAT:
Markdown. Every item in sections 1-3 ends with a [source: quote]
tag pointing to the line in the notes.
ACCEPTANCE:
- No pain appears that is not quoted from the notes.
- Section 4 is non-empty — every scope has exclusions.
- No capability from the reference list appears unless a buyer
quote justifies it.
The "source quote" acceptance clause is load-bearing. It tethers every element of scope to something the buyer actually said — exactly the discipline a rep with unlimited patience would apply.
Value-Prop Translation: The Buyer's Language, Not Yours
The second failure is vocabulary. Vendors write in internal product language — "unified orchestration layer," "AI-powered workflow automation," "enterprise-grade governance" — because that is the product team's vocabulary. Buyers do not use those words. Reading a proposal in vendor vocabulary forces the buyer to translate every sentence before they can evaluate it, and translation costs attention the vendor does not get back. The fix is to rewrite the value proposition in the buyer's language, using terms from their call notes, industry, and role.
This is a good task for AI because it is a bounded translation job: the source vocabulary, the target vocabulary, and the content are all supplied, and the model only has to map between them.
ROLE:
You are a sales writer translating an internal value proposition
into the buyer's language. You use only vocabulary present in the
discovery notes or standard in the buyer's industry, not internal
product vocabulary.
CONTEXT:
Internal value prop (how the product team describes the product):
[paste 1-2 paragraphs]
Discovery notes (for buyer vocabulary):
[paste]
Buyer role and industry:
[role] at [industry], [company size]
TASK:
Produce a rewritten value prop that:
1. Uses the buyer's own terms for the problem (pulled from the
discovery notes).
2. Replaces internal product jargon with the nearest plain-
language equivalent.
3. Names outcomes in the units the buyer's role is measured by
(e.g., hours saved per week, not "efficiency gains").
4. Ends with a two-sentence statement the internal champion
could paste into an internal Slack message without editing.
FORMAT:
Markdown. Two versions:
- Long form (3-4 sentences for the proposal body).
- Short form (2 sentences for the internal-champion paste).
ACCEPTANCE:
- No internal product jargon remains (list of jargon terms to
remove: [paste list]).
- Every claim about outcomes uses a unit the buyer's role cares
about.
- The short form reads like one person explaining to a peer, not
a marketing tagline.
The "champion paste" framing does real work. Buyers forward parts of a proposal internally before they respond; the rep does not see that forwarding but lives with its consequences. A line the champion can paste without editing lowers the friction of that forward. If they would not paste it, the value prop is still in vendor vocabulary. For the library-management structure behind reusing this across a team, see the prompt template pattern.
Pricing Presentation: Anchoring, Comparison, Tier Framing
Pricing is where AI-written proposals commit the biggest unforced errors. The model's default is a single price in "investment" language with no anchor. That leaves the buyer to anchor it themselves, which usually means comparing to whichever price they saw last — often a cheaper competitor. Good pricing anchors deliberately: to the cost of the status quo, to the cost of an in-house build, and to the outcomes the buyer's own notes said they wanted.
A pricing prompt needs three moves. Anchor to real costs the buyer mentioned. Present tiers with a clear recommendation, because buyers asked to choose among three options pick the middle by default, and a prompt that omits a recommendation leaves the middle undefended. Make the price specific to this SOW, not a list-price reference.
ROLE:
You are a sales writer presenting pricing in a proposal. You
anchor price against real costs the buyer mentioned and frame
tiers with a clear recommendation.
CONTEXT:
Scope statement (from earlier prompt):
[paste]
Pricing inputs (vendor-supplied, internal):
- Recommended tier: [tier name, price, what's included]
- Lower tier: [tier name, price, what's included]
- Higher tier: [tier name, price, what's included]
Anchors from the buyer's discovery notes:
- Status-quo cost: [quote from notes, e.g., "we lose ~15 hours
a week to manual reconciliation"]
- In-house alternative cost (if mentioned): [quote]
- Budget signal (if given): [quote]
TASK:
Produce a pricing section with:
1. A short anchor paragraph (3-4 sentences) framing the cost
against the status-quo cost quoted from the notes.
2. A tier table (three rows) with: tier name, price, what's
included, who it's for.
3. A recommendation paragraph (2-3 sentences) stating which
tier fits the scope and why, referencing the scope statement.
4. A one-line note on payment terms if the buyer raised any
constraint in notes.
FORMAT:
Markdown table for tiers. Anchor and recommendation as short
prose paragraphs.
ACCEPTANCE:
- The anchor paragraph uses a direct quote from the buyer's notes,
not an invented figure.
- The recommendation explicitly names one tier and justifies it
against the scope — not "it depends on your needs."
- No "investment" euphemism — say "price" and then the number.
The "say price and then the number" rule earns its place. Soft pricing language reads as a seller who does not believe their own price.
SOW Structure: Deliverables, Timeline, Acceptance
The Statement of Work is where AI-written proposals lose the buyer's procurement and legal teams. Unchecked, the model writes an SOW that sounds professional and commits to nothing: "collaborative engagement," "ongoing optimization," "quarterly business reviews as needed." Procurement flags that immediately — a vague SOW is a scope dispute waiting to happen.
Procurement looks for three things: deliverables (what physically changes hands), timeline (dates tied to deliverables), and acceptance criteria (what "done" looks like for each). An SOW that skips any of the three is dead on arrival.
ROLE:
You are a sales engineer writing an SOW section of a proposal.
You produce concrete deliverables, dated milestones, and
measurable acceptance criteria. You do not use vague phrases
like "ongoing optimization" or "as needed."
CONTEXT:
Scope statement (from earlier prompt):
[paste]
Engagement shape (duration, team size, cadence):
[paste, e.g., "12-week engagement, 2 implementers from our
side, weekly sync, kickoff April 29"]
Known constraints:
[paste any deadlines, blackouts, dependencies the buyer raised]
TASK:
Produce an SOW with:
1. Deliverables table: name, description (one sentence), due
week, owner, acceptance criterion (one measurable line).
2. Milestones section: 3-5 named milestones with dates.
3. Dependencies section: what the buyer must provide for each
milestone (access, data, sign-off).
4. Out-of-scope section: explicitly echoed from the scope
statement.
FORMAT:
Markdown. Deliverables as a table.
ACCEPTANCE:
- No deliverable is vague ("optimization," "support," and
"alignment" used alone are failures).
- Every deliverable has an acceptance criterion with a
measurable signal.
- Dependencies explicitly list buyer-side obligations; the SOW
does not read as if only the vendor has commitments.
The "buyer-side obligations" clause is the one most often skipped. Vendors list only their own deliverables, creating a document where the vendor is responsible for everything and the buyer is responsible for paying. If the engagement slips because the buyer did not provide access or data, the SOW has no language to reference. Naming dependencies up front protects both sides.
Objection Preempt: Addressing Concerns In-Document
The proposal is read without the rep in the room. Whatever objections the buyer has, they will raise them privately, and the document has to survive that conversation without the rep there to reframe.
The trick is not to list objections and refute them — that reads as defensive. The trick is to weave likely objections into the natural sections of the proposal, as if answering them is part of explaining the scope. "Why this scope and not a bigger one" goes into scope. "Why not build this in-house" goes into value prop. "Why this tier" goes into pricing.
ROLE:
You are a sales reviewer preempting likely buyer objections
inside a draft proposal. You rewrite the relevant sections to
address objections naturally; you do not add an "objections"
section or a FAQ.
CONTEXT:
Draft proposal (scope + value prop + pricing + SOW):
[paste]
Discovery notes:
[paste]
Segment objection library (from prior deals in this segment):
[paste 3-5 common objections]
TASK:
1. Identify the top 3 objections this buyer is likely to raise
internally, based on the discovery notes and segment library.
2. For each, name where in the proposal it should be addressed
(scope / value prop / pricing / SOW).
3. Rewrite the relevant sentences to address the objection
implicitly — no "you may wonder," no "some buyers ask."
4. Output a diff-style summary: which sentences changed and
why.
FORMAT:
Markdown. First section: the 3 objections and where each is
addressed. Second section: the rewritten sentences, one block
per changed passage. Third section: the summary.
ACCEPTANCE:
- No explicit "objection" framing remains in the proposal text.
- Every objection is addressed in the section most relevant to
it, not in a catch-all.
- The rewrites do not introduce new claims — they reframe
existing ones.
The "no new claims" acceptance clause matters. The easiest way for a model to "address" an objection is to invent a proof point that was not in the source material. That is the hallucination risk of this pattern, and the clause closes it off.
A Good Proposal Prompt In Full
The five prompts compose. In practice, the rep runs scoping, value-prop translation, pricing, SOW, then objection preempt, and either stitches the outputs together or passes them into a synthesis prompt. The synthesis prompt is the shortest, because the work is already done.
ROLE:
You are a proposal editor assembling a finished proposal from
pre-built sections. You preserve the section content; you only
adjust connective tissue (transitions, headings, cover page,
summary).
CONTEXT:
Buyer: [name, role, company, industry]
Deal stage: [e.g., "post-discovery, pre-procurement"]
Sections (in final order):
1. Scope statement:
[paste]
2. Value proposition:
[paste long form]
3. Pricing section:
[paste]
4. SOW:
[paste]
Optional: objection-preempt rewrites (already applied above).
TASK:
Produce a finished proposal:
1. Cover with buyer name, date, and a one-line framing ("A
proposal for [buyer] on [problem]").
2. One-paragraph executive summary (4-5 sentences; uses the
short form of the value prop).
3. The four sections in order, with H2 headings.
4. A one-paragraph close naming the next step (specific — not
"we look forward to your feedback").
FORMAT:
Markdown, suitable for export to a styled PDF.
ACCEPTANCE:
- No content is added beyond what the supplied sections contain,
except the cover, summary, and close.
- The executive summary does not repeat the value-prop paragraph
verbatim; it compresses.
- The close names a specific next step with a date or a named
artifact (signed SOW, kickoff call, etc.).
That is the pattern. No one prompt writes the proposal. Six prompts, each doing one narrow job, produce a document that reads as written for this buyer.
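Teams that run the chain more than occasionally usually script it. A minimal sketch of the data flow, assuming a generic `complete(prompt)` callable wrapping whatever model API the team uses; the `*_PROMPT` constants stand in for the full templates above, saved with named fields:

```python
# Stand-ins for the templates above, saved with named {fields}.
SCOPING_PROMPT    = "<scoping template: {notes}, {capabilities}>"
VALUE_PROP_PROMPT = "<value-prop template: {internal_value_prop}, {notes}, {buyer}>"
PRICING_PROMPT    = "<pricing template: {scope}, {tiers}, {anchors}>"
SOW_PROMPT        = "<SOW template: {scope}, {engagement}, {constraints}>"
OBJECTION_PROMPT  = "<objection template: {draft}, {notes}, {objection_library}>"
SYNTHESIS_PROMPT  = "<synthesis template: {buyer}, {sections}>"

def build_proposal(complete, inputs: dict) -> str:
    """Run the six prompts in order, feeding each output forward.
    `inputs` holds the rep-curated material (notes, capabilities,
    tiers, anchors, engagement shape, objection library, buyer info);
    `complete` sends a prompt to a model and returns its text."""
    scope = complete(SCOPING_PROMPT.format(**inputs))
    value_prop = complete(VALUE_PROP_PROMPT.format(**inputs))
    pricing = complete(PRICING_PROMPT.format(scope=scope, **inputs))
    sow = complete(SOW_PROMPT.format(scope=scope, **inputs))
    draft = "\n\n".join([scope, value_prop, pricing, sow])
    revised = complete(OBJECTION_PROMPT.format(draft=draft, **inputs))
    return complete(SYNTHESIS_PROMPT.format(sections=revised, **inputs))
```

The shape matters more than the client: scope feeds three downstream prompts, their outputs feed the objection pass, and synthesis only ever sees finished sections.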
Common Anti-Patterns
- One "write the proposal" prompt. Averages every proposal in training data; produces generic structure and vendor vocabulary. Fix: split into scope / value prop / pricing / SOW / objection preempt / synthesis.
- Pasting the capability catalog into the scope prompt without guardrails. Model will happily include every capability, drifting scope. Fix: explicit "do not include anything the buyer did not raise."
- Letting the model invent pricing anchors. The model will invent a "typical cost of manual reconciliation" figure if not given one. Fix: pass a direct quote from discovery notes as the anchor; forbid invented figures.
- Vague SOW language ("ongoing optimization," "as needed"). Fails procurement review. Fix: acceptance criterion for every deliverable; dated milestones; named dependencies.
- Adding an "Objections" or "FAQ" section. Flags the vendor as defensive and introduces objections the buyer had not thought of. Fix: preempt inside the relevant sections; no standalone objection block.
- "Investment" instead of "price." Triggers buyer distrust of soft pricing language. Fix: name the number.
For adjacent sales prompts, pair this guide with AI discovery call prompts, AI campaign copy prompts, and AI pipeline forecasting prompts.
FAQ
How long should an AI-assisted proposal take?
Running the six prompts takes roughly thirty to forty-five minutes if the discovery notes are clean. The time is spent curating inputs — pulling quotes, confirming tiers, checking the capability list — not waiting on the model. If it is taking longer, the discovery notes are the bottleneck.
Should I share the AI-generated proposal verbatim?
Not on the first pass. Read it as a draft, confirm that every quote from discovery notes is correctly attributed, and rewrite any sentence that does not sound like your own voice. The prompts get structure and specificity right; the editing pass is for voice and tone.
What about cold proposals with no discovery notes?
Skip scoping and run value-prop translation from the buyer's public materials — website, recent statements, industry norms. The quality is lower than a post-discovery proposal, and that is honest: a proposal without discovery is a long-form cold pitch, not a real proposal.
How do I keep pricing anchors honest?
Only pass anchors that came from the buyer's notes or from industry data the buyer would recognize. If no status-quo cost was mentioned, ask the buyer before writing rather than letting the prompt invent one. A proposal anchored against a made-up figure loses credibility on first read.
Can these prompts become a team library?
Yes — that is the point. Saving each prompt as a prompt template with named input fields turns the pattern into a team capability: new reps run the same prompts with their own inputs, senior reps tune the templates quarterly. See prompt engineering for business teams for the library-management pattern.
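A minimal sketch of what a saved template with named input fields can look like, assuming templates live as plain text with `{field}` placeholders; the file names are hypothetical:

```python
# A saved scoping template with named fields. New reps fill the
# fields; the template body is what senior reps tune quarterly.
SCOPING_TEMPLATE = """\
ROLE:
You are a sales engineer converting discovery call notes into a
proposal scope statement. You work only from the notes supplied.
CONTEXT:
Discovery call notes (raw):
---
{notes}
---
Our product's capabilities (reference only):
{capabilities}
"""

with open("acme_discovery.md") as f:       # hypothetical notes file
    notes = f.read()
with open("capability_list.md") as f:      # hypothetical reference file
    capabilities = f.read()

prompt = SCOPING_TEMPLATE.format(notes=notes, capabilities=capabilities)
```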
Proposals are a specificity problem disguised as a writing problem. Split the specificity into its component inputs — scope, vocabulary, pricing anchor, SOW structure, likely objections — feed each to a narrow prompt, and the model does the composition. The result is a document the buyer can evaluate, and a rep who does not have to stare at a blank page the night before the deadline.