
AI Discovery Call Prompts (2026)

Prompt patterns for sales discovery — pre-call research, question generation, objection handling, and post-call synthesis.

SurePrompts Team
April 20, 2026
15 min read

TL;DR

Discovery calls have four stages — research, question design, objection handling, post-call synthesis. Each gets its own prompt with the right inputs. "Help with this call" is not a prompt.

Discovery calls are the part of sales where AI looks useful from the outside and tends to disappoint on the inside. A rep types "help me prepare for a discovery call with an ops director at a mid-market logistics company," the model returns a generic list of questions and a reminder to "listen actively," and the rep closes the tab and opens the prospect's website anyway. The model did not fail; the prompt was pointed at the wrong shape of problem. A discovery call is not one task. It is four, and each has its own inputs, output, and prompt.

Good AI discovery call prompts treat the call as a pipeline. Pre-call research takes public inputs and produces a compact prospect brief. Question generation takes the brief plus the rep's hypothesis and produces five to seven questions the rep could not easily have written themselves. Objection handling takes the likely objections for the segment and produces response patterns to rehearse. Post-call synthesis takes the call notes and produces next steps, risks, and a fit score. Four prompts. Four outputs. One call.
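The handoff between stages can be made concrete. A minimal sketch of the pipeline's data flow, using only the standard library — the `DiscoveryCall` dataclass and `build_stage2_input` helper are illustrative names we made up, not part of any real tool:

```python
from dataclasses import dataclass


@dataclass
class DiscoveryCall:
    """Artifacts the four prompts produce, in pipeline order."""
    brief: str = ""      # stage 1: pre-call research
    questions: str = ""  # stage 2: question generation
    patterns: str = ""   # stage 3: objection response patterns
    note: str = ""       # stage 4: post-call synthesis


def build_stage2_input(call: DiscoveryCall, hypothesis: str, icp_notes: str) -> str:
    """Stage 2 consumes the stage 1 brief plus the rep's hypothesis;
    refusing to run without a brief enforces the pipeline order."""
    if not call.brief:
        raise ValueError("run pre-call research before question generation")
    return (
        f"Prospect brief:\n{call.brief}\n\n"
        f"Rep's working hypothesis: {hypothesis}\n\n"
        f"Ideal customer profile notes:\n{icp_notes}"
    )
```

The point of the guard clause is the pipeline itself: stage 2 without a stage 1 brief degrades to the generic questions the intro warns about.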

This guide fits in the sales track of our prompt engineering for business teams guide and pairs with AI competitor analysis prompts, AI proposal writing prompts, and AI pipeline forecasting prompts.

The Four Stages of Discovery

A discovery call is a research-then-synthesis loop compressed into a scheduled thirty to forty-five minutes. The rep walks in with a hypothesis about the prospect, tests it against what the prospect says, updates the hypothesis, and walks out with a sharper view of whether there is a deal and what it looks like. AI helps at four specific points in that loop.

| Stage | Timing | Input | Output | Prompt shape |
| --- | --- | --- | --- | --- |
| Pre-call research | 15-30 min before | Company site, LinkedIn, recent news, any CRM notes | Compact prospect brief | Role-bounded summarizer over pasted inputs |
| Question generation | 10 min before | Prospect brief + rep's hypothesis + ideal-customer-profile notes | 5-7 discovery questions | Generator constrained by brief, labeled by intent |
| Objection handling | Pre-call rehearsal | Likely objections for segment + proof points the rep can cite | Response patterns per objection | Pattern generator with a "do not overpromise" rule |
| Post-call synthesis | 15 min after | Raw call notes (rep's or transcript) + brief from stage 1 | Next steps, risks, fit score, CRM paste | Summarizer with explicit fields |

The table is the through-line for the rest of this guide. Keep it in mind: every prompt in this post maps to one row, and the prompts stop working the moment you try to collapse two rows into one "help me with this discovery call" prompt.

Stage 1: Pre-Call Research

Pre-call research is the lowest-risk place to use AI in sales and the easiest to do badly. The risk is low because the inputs are public and the output never reaches the prospect. The failure mode is that reps paste a company name and ask "tell me about this company," which gets a training-data summary — stale, partial, and confident.

The right pattern is to treat the model as a summarizer over inputs the rep provides, not as a database. The rep skims the company site, LinkedIn, recent press, comparison pages, and any funding news. The rep pastes those inputs into the prompt. The model compresses them into a one-page brief.

```
ROLE:
  You are a sales research analyst producing a pre-call prospect brief.
  Work only from the inputs I paste below. Do not add facts from your
  training data; if something is missing, say "not in the supplied
  inputs" rather than guessing.

CONTEXT:
  Prospect:
    Name: [first last]
    Title: [title]
    Company: [company]
  Inputs (pasted from public sources; I will mark each with a source
  label):
    [SOURCE: Company about page]
    ---
    [paste verbatim]
    ---
    [SOURCE: Prospect LinkedIn profile summary + last 3 roles]
    ---
    [paste]
    ---
    [SOURCE: Recent news or blog posts, last 6 months]
    ---
    [paste 2-4 items]
    ---
    [SOURCE: Existing CRM notes, if any]
    ---
    [paste]
    ---

TASK:
  Produce a one-page brief with these sections:
    1. Company at a glance (3 bullets: what they sell, who they sell to,
       size indicators in the inputs).
    2. Likely current priorities (3 bullets, each tagged with the source
       that supports it).
    3. Prospect specifically (3 bullets: their role scope, tenure,
       anything notable in their LinkedIn activity).
    4. Open questions the inputs do not answer (3-5 bullets).
    5. Two hypotheses about the pain that might bring them to us.

FORMAT:
  Markdown. Each bullet under "priorities" and "prospect" ends with a
  [source: label] tag referencing where in the inputs it came from.

ACCEPTANCE:
  - No fact appears without a source tag in sections 1-3.
  - Section 4 is not empty — every brief has gaps.
  - Hypotheses are labeled as hypotheses, not claims.
```

Three things make this prompt work. It binds the model to the inputs, so the output stays current to what the rep pasted. It forces source tagging, so any claim is traceable back to a line in a profile or a post — useful if a claim gets contradicted on the call. And it reserves a section for gaps, reminding the rep what they still do not know. The brief is not for the prospect to see; it is a rep's cheat sheet, read once before the call and set aside.
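The source-tag acceptance rule is mechanical enough to check before the rep reads the brief. A small sketch, assuming the `[source: label]` tag format the prompt's FORMAT section specifies and that bullets start with `-`; the function name is ours:

```python
import re

# Matches a trailing "[source: <label>]" tag, as required by the
# research prompt's FORMAT section.
SOURCE_TAG = re.compile(r"\[source:\s*[^\]]+\]\s*$")


def untagged_bullets(section_text: str) -> list[str]:
    """Return brief bullets that are missing a trailing source tag."""
    bullets = [ln.strip() for ln in section_text.splitlines()
               if ln.strip().startswith("-")]
    return [b for b in bullets if not SOURCE_TAG.search(b)]
```

An empty return means every claim in the section is traceable; anything else is a bullet to delete or re-source before the call.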

Stage 2: Question Generation

A rep who walks in with seven generic questions ("What are your biggest challenges?" "What does success look like?") runs the call as an interview. A rep who walks in with five questions built from a specific brief and hypothesis runs the call as a conversation, because the questions earn attention by showing the rep has already done some thinking.

Question generation is where AI most often saves a rep real time, if the prompt is set up right. The key is feeding the model the brief from stage 1 plus the rep's hypothesis about the deal, and asking for questions that test the hypothesis — not questions in general.

```
ROLE:
  You are a sales coach helping a rep design discovery questions. You
  build questions that test a specific hypothesis against a specific
  prospect, using the supplied brief.

CONTEXT:
  Prospect brief (from pre-call research):
    [paste the one-page brief from stage 1]

  Rep's working hypothesis (one sentence):
    "[e.g., 'They are hitting scaling pain on their current logistics
    ops stack because their team tripled last year and the tools did not.']"

  Ideal customer profile notes:
    [paste the 3-5 bullets on what an ideal-fit customer looks like]

TASK:
  Produce 5-7 discovery questions. Each question must:
    1. Be open-ended (cannot be answered yes/no).
    2. Reference something specific to this prospect — their role, a
       priority in the brief, or a fact from the inputs.
    3. Be labeled with its intent: one of [context, pain, impact,
       decision-process, competition, timing].
    4. Include a short note on what a useful answer would tell you and
       what the next follow-up would be.

FORMAT:
  Numbered list. For each question:
    - Intent: [label]
    - Question: [the question, 1-2 sentences]
    - Why this: [one line on what it tests about the hypothesis]
    - Follow-up if they answer: [one line on the next probe]

ACCEPTANCE:
  - No question is generic ("what are your challenges" is a failure).
  - Every question ties to something concrete in the brief or the
    hypothesis.
  - The intent labels cover at least 4 different categories across the
    set — you are not asking five pain questions.
```

A useful diagnostic: if you could ask the same question of a completely different prospect without changing a word, the question is too generic. The intent labels also fight a common rep failure — running out the clock on pain discovery and never asking about decision process or timing. For the underlying pattern that makes the model take on the coach persona cleanly, see role prompting.
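The last acceptance clause — at least four intent categories across the set — is also easy to verify mechanically. A sketch that assumes the output uses the `Intent: [label]` lines the FORMAT section asks for; the helper names are illustrative:

```python
import re

# The six intent labels the question-generation prompt allows.
INTENTS = {"context", "pain", "impact", "decision-process",
           "competition", "timing"}


def intent_labels(questions_md: str) -> set[str]:
    """Collect the distinct intent labels used across the question set."""
    found = re.findall(r"Intent:\s*\[?([a-z-]+)\]?", questions_md)
    return set(found) & INTENTS


def coverage_ok(questions_md: str, minimum: int = 4) -> bool:
    """True when the set spans at least `minimum` distinct categories."""
    return len(intent_labels(questions_md)) >= minimum
```

A failing check is usually the "five pain questions" failure the acceptance clause names: regenerate with an explicit reminder to cover decision process and timing.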

Stage 3: Objection Handling

Objection handling is where AI usefulness drops off a cliff if the prompt lets the model roleplay as the rep. The model will, if asked, write a confident-sounding response that overpromises on a capability, underprices a feature gap, or invents a customer story to prove a point. None of that can be said on the call. Objection handling prompts have to be scoped as preparation, not as scripts.

The right framing: the prompt generates a response pattern the rep can rehearse — the shape of a good answer — not the exact words. The rep says the words.

```
ROLE:
  You are a sales coach helping a rep prepare for likely objections.
  You do not produce scripts to be read verbatim. You produce response
  patterns — the shape of a good answer — that the rep can put in
  their own words.

CONTEXT:
  Prospect segment:
    [e.g., mid-market ops director at logistics SaaS, 200-500 employees]
  Our product's positioning:
    [one paragraph — what we do, who we do it for]
  Proof points the rep can cite (only these — do not add others):
    - [proof point 1, with source]
    - [proof point 2, with source]
    - [proof point 3, with source]
  Known capability gaps the rep must acknowledge if asked:
    - [gap 1]
    - [gap 2]

TASK:
  Produce response patterns for 5 objections common in this segment.
  For each:
    1. The objection in the prospect's likely words.
    2. The underlying concern the objection is pointing at (it is
       rarely the surface wording).
    3. A response pattern in 3 steps: acknowledge, reframe, probe.
    4. Which proof point to cite if needed (from the list above — no
       others).
    5. If the objection relates to a known gap, acknowledge the gap
       honestly rather than deflecting.

FORMAT:
  Markdown, one objection per section.

ACCEPTANCE:
  - No invented proof points or customer stories.
  - Where the product has a known gap, the pattern acknowledges it.
  - The response pattern ends with a probe (a question back to the
    prospect), not a monologue.
```

Two details do most of the work. The "no invented proof points" clause closes off the model's instinct to embellish. The "response pattern ends with a probe" keeps the rep from using objection handling as an excuse to lecture — a good answer to an objection is a question that keeps the conversation going. Run this prompt quarterly against the current objection list, proof points, and gap list, and you get a living objection library that reps read before calls and new reps read during onboarding.
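The "ends with a probe" clause can be spot-checked the same way before the library ships to reps. A sketch that assumes one objection per `## ` markdown section, per the FORMAT above; the function name is ours:

```python
def sections_missing_probe(objection_md: str) -> list[str]:
    """Flag objection sections whose last line is not a question back
    to the prospect."""
    missing = []
    # Split the markdown into sections, one objection per "## " header.
    for block in ("\n" + objection_md).split("\n## "):
        lines = [ln.strip() for ln in block.splitlines() if ln.strip()]
        if not lines:
            continue
        title, *body = lines
        if body and not body[-1].endswith("?"):
            missing.append(title)
    return missing
```

It is a crude heuristic — a rhetorical question would pass — but it catches the common failure, a pattern that trails off into a monologue.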

Stage 4: Post-Call Synthesis

The post-call note is the deliverable nobody wants to write and nobody reads if it is bad. Most reps either over-write (a transcript dump) or under-write ("great call, send contract"), and the CRM becomes useless for forecasting. The pattern: the rep pastes their call notes or transcript. The model produces a structured note with the fields the CRM wants. No embellishment, no "next steps" the prospect did not agree to.

```
ROLE:
  You are a sales operations analyst synthesizing a call note from
  raw inputs. You work only from the notes supplied. You do not invent
  commitments, next steps, or statements the prospect did not make.

CONTEXT:
  Pre-call brief:
    [paste brief from stage 1]
  Raw notes or transcript:
    ---
    [paste]
    ---

TASK:
  Produce a structured post-call note with these fields:
    1. Summary (3-4 sentences, facts only).
    2. Stated pain (bullets, direct quotes where possible).
    3. Decision process (who, timeline, any approval steps mentioned).
    4. Objections raised and how the rep responded.
    5. Agreed next steps (only things both parties actually agreed to).
    6. Risks / open questions (things that could kill the deal that
       were not resolved on the call).
    7. Fit score 1-5 with one-line justification.

FORMAT:
  Markdown with those seven section headers, then a one-paragraph
  CRM-paste version at the bottom.

ACCEPTANCE:
  - Every "stated pain" bullet is supported by a line in the notes.
  - "Agreed next steps" contains only steps the prospect explicitly
    confirmed — not steps the rep hopes they will take.
  - The CRM-paste paragraph is under 120 words.
```

The "explicitly confirmed" clause is the line that matters. Without it, models write aspirational next steps ("send the questionnaire by Friday, demo with the CTO next week, pilot in 30 days") that the prospect never agreed to, and the forecast built on those steps is fiction. Reps who adopt this prompt notice their forecast calls get shorter — the manager stops asking "but did they actually commit to that?" because the note already answered.
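The structural half of the acceptance criteria — all seven sections present, CRM paragraph under 120 words — is checkable before the note hits the CRM. A sketch; the section names mirror the TASK fields in the prompt, and the function name is ours:

```python
# The seven fields the post-call synthesis prompt requires.
REQUIRED_SECTIONS = [
    "Summary", "Stated pain", "Decision process", "Objections",
    "Agreed next steps", "Risks", "Fit score",
]


def note_problems(note_md: str, crm_paragraph: str) -> list[str]:
    """Return a list of acceptance failures; empty means the note passes."""
    problems = [f"missing section: {s}" for s in REQUIRED_SECTIONS
                if s.lower() not in note_md.lower()]
    if len(crm_paragraph.split()) >= 120:
        problems.append("CRM-paste paragraph is not under 120 words")
    return problems
```

The judgment half — whether a next step was explicitly confirmed — still needs a human read; no string check can catch an aspirational step that reads like a commitment.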

Avoiding the Creep

There is a boundary worth naming. AI helps the rep prepare, understand, and synthesize. AI does not talk to the prospect. The temptation — especially as voice and live-transcription tools get better — is to drift from "help me prepare" into "help me sell," where the model feeds real-time talking points into the rep's ear or, worse, writes messages the prospect reads.

The creep fails on two axes. Prospects can tell. A rep running live AI suggestions in-ear hesitates, over-explains, and sounds like a language model in a trench coat — the cadence is off, the phrasing too neat. And regulators, in several jurisdictions, are beginning to require disclosure when AI participates in a call. Running a live-AI setup without disclosure is a compliance risk that closes no deals worth the exposure.

The durable answer: use AI to prepare harder than the rep could alone, and then let the rep have the conversation. The four prompts above do that — they compress research and synthesis work that used to happen in the thirty minutes before and after the call, and they leave the call itself to the human.

Common Anti-Patterns

  • One "help me with this call" prompt. Collapses four different tasks into one vague ask; the model averages the output toward generic. Fix: separate prompt per stage.
  • Asking the model to "tell me about this company" without pasted inputs. Gets stale training-data summaries. Fix: paste current public inputs, bind the model to them, forbid outside facts.
  • Generating questions without a hypothesis. Produces interview-style questions that signal the rep has not thought about the prospect. Fix: state the hypothesis in the prompt; ask for questions that test it.
  • Letting the objection-handling prompt write scripts. Scripts sound scripted; reps read them flat. Fix: ask for response patterns, not scripts, and keep the words the rep's own.
  • Post-call notes with aspirational next steps. Creates a fictional pipeline. Fix: acceptance clause that requires explicit confirmation before a step counts.
  • Running AI live in the call. Compliance exposure and a worse conversation. Fix: keep AI to the preparation and synthesis stages.

For adjacent sales outputs, pair this guide with AI proposal writing prompts, AI competitor analysis prompts, and AI pipeline forecasting prompts.

FAQ

How long should a pre-call research brief take?

Fifteen minutes of input gathering plus two minutes of prompt output. If it is taking longer, the rep is reading the model's output instead of the prospect's actual site. The brief is a compression of inputs the rep has already skimmed — not a substitute for the skim.

Can one prompt combine research and question generation?

You can, but the output degrades. Combined prompts produce generic-sounding questions because the model is doing two jobs under one set of instructions. Splitting them takes thirty extra seconds and the question quality climbs visibly. Keep them separate.

What if the prospect asks a question the rep's notes do not cover?

That is what stage 4 is for. The post-call note's "open questions" section captures the moment, and the rep follows up by email with the answer — or a short call if the question reveals something bigger. Discovery is rarely resolved in one call; the prompts assume a sequence.

How do I keep the objection library current?

Run the stage 3 prompt quarterly with the updated proof points and gap list, and compare the output to the prior version. Objections change slowly, but proof points and gaps change every release. If the library still references a capability that shipped to GA two quarters ago, it is stale — regenerate.

Do these prompts work if the rep uses a different AI assistant per week?

Yes, with a caveat. The prompts are written to be model-agnostic — the structure (role, context, task, format, acceptance) works across Claude, GPT, and Gemini. Tone of output will vary between models, and reps may find one model writes better briefs while another writes tighter questions. Pick per stage, not per week.
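The model-agnostic structure mentioned above is just five labeled sections joined in a fixed order, which is why it ports across assistants. A sketch of the assembly, with an illustrative function name:

```python
def assemble_prompt(role: str, context: str, task: str,
                    fmt: str, acceptance: str) -> str:
    """Join the five sections in the fixed order every prompt in this
    guide uses; the result is a plain string any chat model accepts."""
    sections = [("ROLE", role), ("CONTEXT", context), ("TASK", task),
                ("FORMAT", fmt), ("ACCEPTANCE", acceptance)]
    return "\n\n".join(f"{name}:\n  {body}" for name, body in sections)
```

Keeping the stage-specific content in the section bodies, and the scaffolding in one function, is what makes "pick per stage, not per week" cheap: swapping models changes nothing about the prompt.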

Discovery is not a single task. Separating it into research, question design, objection handling, and synthesis gets you four tight prompts instead of one loose one, and four useful outputs instead of a generic wash. The prep gets sharper. The call gets more honest. The forecast gets less fictional. None of that requires AI in the call itself — which is the right place for the line.

Build prompts like these in seconds

Use the Template Builder to customize 350+ expert templates with real-time preview, then export for any AI model.

Open Template Builder