
AI Competitor Analysis Prompts (2026)

Prompt patterns for AI-powered competitor analysis — source-gathering, feature matrices, positioning statements. Built to avoid the hallucinated-feature trap.

SurePrompts Team
April 20, 2026
14 min read

TL;DR

AI competitor analysis breaks when you ask the model to "analyze my competitors" cold — it hallucinates features. Good prompts supply real inputs (URLs, docs, user-collected facts), then use AI for comparison and synthesis.

Competitor analysis is the task most likely to embarrass a team that trusts AI. Ask a model to "analyze our three biggest competitors" and it will cheerfully produce a page of confident claims — pricing tiers the competitor does not offer, features that were sunset two years ago, integrations that never shipped. The output reads like analysis. It is closer to fiction.

The fix is not a cleverer prompt. The fix is changing what the model is being asked to do. Good AI competitor analysis prompts are not about discovery — they are about structuring inputs a human has already collected and using the model for the comparison and synthesis work that comes after. Three patterns carry most of the load: source-gathering (plan what to collect), feature matrix (synthesize what you collected), and positioning statement (translate the matrix into strategy).

This guide sits under the marketing section of our prompt engineering for business teams guide. It pairs with our AI brief writing prompts, which sit upstream of the campaign work this analysis feeds.

Why Generic Competitor-Analysis Prompts Fail

The failure mode is consistent. Given "analyze the top three competitors to [your product]," a model will list three plausible competitors, then produce confident bullet points about their pricing, features, positioning, and target market. Some content will be broadly correct — vendor category, general approach. A meaningful fraction will be wrong. Nothing in the output flags which is which.

Three things go wrong at once:

  • Features get invented. The model knows what a "competitor in this category" usually offers, and it lists those features as though verified for the specific competitor named. It is pattern-matching against a genre, not reporting on a company.
  • Pricing gets hallucinated. Pricing pages change frequently; training data is stale. The model fills the gap with numbers that sound reasonable for the segment, delivered in the same tone as accurate facts.
  • Positioning gets flattened. Real positioning lives in hero copy and category pages. Without those inputs, the model substitutes generic phrases ("AI-powered," "all-in-one") that could describe any vendor.

None of this is fixable with a better system prompt. The model cannot report on facts it does not have access to. The structural fix is to stop asking the model to recall competitor facts and start feeding it the facts directly. That changes the job from research to synthesis: humans collect — hero copy, pricing page snapshot, feature list, review excerpt — and the AI compares, structures, and writes up what is in front of it.

Pattern 1: Source-Gathering

Before any comparison happens, someone has to decide what to collect and from where. Most teams do this ad hoc and end up with inconsistent inputs — a screenshot from one competitor's pricing page, a vague recollection of another's, nothing at all from the third. A source-gathering prompt produces a research plan: what artifacts to collect, for which competitors, from which surfaces.

code
ROLE:
  You are a competitive research lead preparing a collection plan for
  an analyst. You do not produce findings. You produce a checklist of
  sources to gather before analysis begins.

CONTEXT:
  - Our product: [1-2 sentences on what we do and who we serve]
  - Competitors to cover: [list 3-5 named competitors]
  - Analysis goal: [e.g., inform Q3 positioning refresh, price review,
    sales battlecards]
  - Time budget for collection: [e.g., half a day]

TASK:
  Produce a collection checklist with:
    1. For each competitor, list the specific public surfaces to collect
       from (homepage, pricing page, specific product pages, changelog,
       docs, careers page, third-party review sites, analyst coverage).
    2. For each surface, name the artifact to save (screenshot, pasted
       hero copy, pricing table as text, feature list, specific review
       quotes with dates).
    3. Flag any surfaces unlikely to be public (enterprise pricing,
       internal docs) so the analyst knows to deprioritize them.
    4. Suggest 2-3 questions the analyst should try to answer for each
       competitor by end of collection.

FORMAT:
  Markdown. One H2 per competitor. Under each, a numbered list of
  surfaces with the artifact type in brackets.

ACCEPTANCE:
  - Every surface named is a specific URL pattern or page type, not
    a vague category.
  - No findings, claims, or conclusions about any competitor appear
    in the output — only collection instructions.
  - The plan fits inside the stated time budget.

The critical constraint is the second acceptance criterion — the model is explicitly forbidden from producing findings. That separates collection-planning from analysis. The model's strength is knowing what surfaces exist and what to look for; its weakness is being confident about what it will find. The prompt scopes to the first and rules out the second. The output becomes a ticket the analyst works through; when it is done, every competitor has a consistent set of artifacts ready for pattern two.
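
An illustrative excerpt of what a good checklist looks like for one competitor. The vendor name is made up, and matches the worked example later in this guide:

code
## FlowForge (illustrative)

1. Homepage hero section [pasted hero copy]
2. /pricing [pricing table as text; screenshot if tiers are toggled]
3. /product or /features [feature list as text]
4. /changelog, last 90 days [entry titles with dates]
5. G2 reviews, 10 most recent [2-3 quotes with dates]
6. Enterprise pricing [unlikely to be public; deprioritize]

Questions to answer by end of collection: Is SSO gated behind the
Enterprise tier? Has the integrations count changed since last quarter?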

Pattern 2: Feature Matrix

The feature matrix prompt takes collected artifacts and produces a comparison table. This is the step where the structural problem from the generic prompt gets solved: the model is no longer asked what features a competitor has; it is handed the feature list and asked to normalize it.

code
ROLE:
  You are a product analyst synthesizing a feature matrix from supplied
  source material. You do not use prior knowledge about any vendor
  listed. Every cell in the matrix is traceable to a quote or snippet
  in the supplied sources.

CONTEXT:
  Source material (use only these — do not add information from prior
  knowledge about any vendor):

  ---
  OUR PRODUCT:
  [paste our current feature list, hero copy, pricing page]

  COMPETITOR A — [name]:
  [paste hero copy, feature list, pricing page text, 2-3 review quotes]

  COMPETITOR B — [name]:
  [paste same artifacts]

  COMPETITOR C — [name]:
  [paste same artifacts]
  ---

TASK:
  Produce a feature comparison matrix covering:
    1. Core capabilities (what each product does)
    2. Pricing tiers (with numbers only where they appear in sources)
    3. Target customer segment (as described in each hero/about copy)
    4. Integrations (only those named in supplied material)
    5. One-line positioning summary per vendor (paraphrased from
       their own hero copy, not generated fresh)

FORMAT:
  Markdown table. Rows are comparison dimensions. Columns are vendors,
  starting with our product.

ACCEPTANCE:
  - Every claim is supported by a specific phrase in the supplied
    sources. Where support is absent, the cell reads "Not stated in
    sources" — never a guess.
  - Pricing cells contain only numbers that appear verbatim in the
    supplied pricing pages. If a tier is hidden behind "contact sales,"
    the cell reads "Contact sales" rather than an estimate.
  - Positioning summaries paraphrase the vendor's own words. No generic
    category phrases ("AI-powered," "all-in-one") unless they appear
    in the source.
  - No feature or integration is listed for a vendor unless it is
    named in the supplied material.

The "Not stated in sources" escape hatch is what makes this work. Without it, the model fills empty cells with plausible guesses to make the matrix feel complete. With it, the matrix has honest holes — and those holes become the next collection tickets. A variant swaps features for customer jobs-to-be-done as rows, with each cell answering "does this vendor address this job, and how, according to the supplied sources." Same shape, more strategically useful when features do not map one-to-one across vendors.

Pattern 3: Positioning Statement

Once the matrix is stable, the last step translates it into a positioning brief — a short document that names where you are strong, where competitors are strong, and the wedge you will push. This is the prompt where synthesis is the point, and it is still constrained to the collected material plus the matrix.

code
ROLE:
  You are a senior positioning strategist translating a competitive
  feature matrix into a short positioning brief. You work only from
  the supplied matrix and source quotes. You do not introduce external
  claims about any vendor.

CONTEXT:
  - Our product: [1-2 sentences]
  - Strategic goal: [e.g., refresh homepage positioning, build sales
    battlecard, inform Q3 brief]
  - Feature matrix (from pattern 2):
    ---
    [paste matrix]
    ---
  - Supporting quotes bank:
    ---
    [paste the specific hero/pricing/review quotes that appeared in
    pattern 2 sources]
    ---

TASK:
  Write a positioning brief with five short sections:
    1. Where we are uniquely strong — capabilities the matrix shows us
       doing that no competitor does, with the matrix cells cited.
    2. Where we are at parity — capabilities every vendor offers, so
       we cannot differentiate on them.
    3. Where a competitor is ahead — capabilities the matrix shows a
       named competitor doing that we do not, cited.
    4. Positioning wedge — a one-sentence claim about what we uniquely
       offer, framed against the parity row. Must be defensible from
       the matrix alone.
    5. Proof points — 3-5 quotes from the supporting quotes bank that
       support the wedge.

FORMAT:
  Markdown. Five H2 sections. 300-500 words total. Each section must
  cite at least one matrix cell or quote by reference.

ACCEPTANCE:
  - Every claim in the brief is supported by a cited matrix cell or
    a direct quote from the bank.
  - The wedge sentence is not a category phrase — it names a specific
    capability or posture.
  - The "competitor is ahead" section is not skipped or softened.
  - If the matrix does not support a confident wedge, the section
    reads "Wedge not defensible from current evidence — need stronger
    differentiator or more source material."

That last acceptance criterion is the ceiling on over-claiming. A matrix with thin differentiation produces a brief that honestly says so — more useful than one that papers over the gap with adjectives. Teams that treat the "not defensible" output as a signal to rework the product or the positioning get more from the pattern than teams that read it as an AI failure. Downstream, a defensible wedge becomes the anchor for channel copy — see AI campaign copy prompts — and for the sales conversations that follow, covered in AI discovery call prompts.

Grounding the Analysis

Across all three patterns the same rule appears: the model uses only supplied sources. That rule is the whole point of the approach and worth naming explicitly, because the temptation to relax it is constant. Three guardrails make grounding hold up:

  • Paste sources verbatim. Summarizing source material before feeding it in reintroduces hallucination risk — the summary becomes a bottleneck that may have invented details. Paste hero copy, pricing tables, and feature lists as they appear.
  • Require citations in every cell. A feature-matrix prompt without a citation requirement produces a matrix that feels complete and is partially made up. With the requirement, every cell either points to a source phrase or reads "Not stated in sources."
  • Re-read the output against the sources. The final check is a human reading the matrix row by row with the source material open. Ten minutes of that catches the model's remaining embellishments. Not optional — grounding reduces hallucination, it does not eliminate it. Part of the citation check can be automated; see the sketch after this list.
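
The automated part is a crude pre-filter, not a replacement for the human read-through. A minimal Python sketch, assuming the matrix is a Markdown table and every citation is wrapped in double quotes — both assumptions for illustration, not requirements of the pattern:

code
import re

def audit_matrix(matrix_md: str, sources: str) -> list[str]:
    """Pre-flag matrix cells whose quoted phrases never appear
    verbatim in the supplied sources."""
    flagged = []
    for row in matrix_md.splitlines():
        if not row.strip().startswith("|"):
            continue  # skip prose and non-table lines
        for cell in row.strip().strip("|").split("|"):
            for quote in re.findall(r'"([^"]+)"', cell):
                if quote != "Not stated in sources" and quote not in sources:
                    flagged.append(f'no verbatim match: "{quote}"')
    return flagged

It only catches citations that fail an exact string match; paraphrased or subtly altered quotes still need the human pass.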

A competitor analysis is only as fresh as its sources. Pricing pages change, feature lists expand. A matrix from three months ago is a starting point for collection, not a finished artifact. Teams that treat the matrix as a living document — re-collected quarterly, compared against the previous snapshot — get better positioning work than teams that ship a one-time deep dive and call it done.
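
Treating the matrix as a living document is easier when each quarter's snapshot is a dated file. A minimal sketch of the comparison step, assuming Markdown snapshots (the file names are hypothetical):

code
import difflib
from pathlib import Path

def diff_snapshots(prev_path: str, curr_path: str) -> str:
    """Unified diff of two matrix snapshots, so the reviewer sees
    exactly which cells changed since the last collection pass."""
    prev = Path(prev_path).read_text().splitlines(keepends=True)
    curr = Path(curr_path).read_text().splitlines(keepends=True)
    return "".join(difflib.unified_diff(
        prev, curr, fromfile=prev_path, tofile=curr_path))

# Hypothetical snapshot files:
# print(diff_snapshots("matrix_2026q1.md", "matrix_2026q2.md"))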

The underlying technique is a plain prompt template with strong context constraints — role, task, format, and acceptance are generic; the heavy lifting is in the source-material block and the "use only these" instruction.
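
If you run the patterns more than once, the assembly step is worth scripting so the "use only these" framing never gets dropped. A minimal sketch — the record type and function names are made up for illustration:

code
from dataclasses import dataclass

@dataclass
class SourcePack:
    """Verbatim artifacts a human collected for one vendor."""
    name: str
    hero_copy: str
    pricing_text: str
    feature_list: str
    review_quotes: str = ""

def build_matrix_prompt(ours: SourcePack, competitors: list[SourcePack]) -> str:
    """Assemble the pattern-2 prompt so its only vendor facts
    are the pasted sources."""
    blocks = [f"OUR PRODUCT:\n{ours.hero_copy}\n{ours.pricing_text}\n"
              f"{ours.feature_list}"]
    for i, c in enumerate(competitors):
        blocks.append(
            f"COMPETITOR {chr(ord('A') + i)} — {c.name}:\n{c.hero_copy}\n"
            f"{c.pricing_text}\n{c.feature_list}\n{c.review_quotes}")
    return (
        "ROLE: You are a product analyst synthesizing a feature matrix\n"
        "from supplied source material. You do not use prior knowledge\n"
        "about any vendor listed.\n\n"
        "CONTEXT: Source material (use only these):\n---\n"
        + "\n\n".join(blocks)
        + '\n---\n\nTASK: Produce a feature comparison matrix. Every cell\n'
        'is a quote/number from the sources or reads "Not stated in sources."'
    )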

Example Feature-Matrix Prompt

A worked example, with illustrative placeholders. The bracketed inputs below are hypothetical for a B2B workflow tool — not real competitor names or claims.

code
ROLE:
  You are a product analyst synthesizing a feature matrix from supplied
  source material. You use no prior knowledge about any vendor. Every
  claim is traceable to the source block.

CONTEXT:
  Source material (use only these):

  ---
  OUR PRODUCT — [illustrative: internal workflow builder]:
  Hero: "Automation your ops team ships without engineering."
  Pricing: Starter $29/user/mo, Team $79/user/mo, Enterprise contact.
  Features: drag-drop builder, 40+ native integrations, audit log,
  SSO, approvals, template library.

  COMPETITOR A — [illustrative name: "FlowForge"]:
  Hero: "Build automations in minutes, not sprints."
  Pricing: Free tier, Pro $99/user/mo, Enterprise contact.
  Features listed on product page: visual builder, 80+ integrations,
  version history, SAML SSO, role-based access.
  Review quote (G2, Jan 2026): "Love the integrations; approvals
  workflow is weak."

  COMPETITOR B — [illustrative name: "OpsRoute"]:
  Hero: "The approval-first workflow platform."
  Pricing page: "Pricing available on request."
  Features: approval chains, audit trail, SSO, 20 integrations,
  compliance templates.
  Review quote (G2, Dec 2025): "Best approvals in the category. Fewer
  integrations than competitors."
  ---

TASK:
  Produce a feature comparison matrix covering capabilities, pricing,
  target segment, integrations, and positioning summary. Cite source
  phrases for every non-empty cell.

FORMAT:
  Markdown table. Rows are dimensions, columns are vendors starting
  with our product.

ACCEPTANCE:
  - Every cell is either a direct quote/number from the source block
    or reads "Not stated in sources."
  - Positioning summary paraphrases the vendor's own hero copy.
  - No feature appears that is not named in the source block.

A correct output flags OpsRoute's pricing as "Not stated in sources" rather than inventing a number, and notes FlowForge's approvals as reported weak in one review without extending the judgment beyond what the quote supports. That is a grounded matrix. An ungrounded one would have invented OpsRoute's pricing and given FlowForge a confident score on approvals.
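
Abridged, the grounded matrix for that source block would look something like this (rows trimmed to the interesting ones):

code
| Dimension    | Our product                | FlowForge                  | OpsRoute                   |
|--------------|----------------------------|----------------------------|----------------------------|
| Pricing      | $29 / $79 per user/mo;     | Free tier; Pro $99/user/mo;| Not stated in sources      |
|              | Enterprise contact         | Enterprise contact         | ("available on request")   |
| Integrations | "40+ native integrations"  | "80+ integrations"         | "20 integrations"          |
| Approvals    | "approvals"                | Not stated on product page;| "approval chains"; review: |
|              |                            | review: "approvals         | "Best approvals in the     |
|              |                            | workflow is weak"          | category"                  |
| Positioning  | No-code automation for     | Speed: minutes, not        | Approval-first workflows   |
|              | ops teams                  | sprints                    |                            |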

Common Anti-Patterns

Most failures reduce to relaxing the grounding rule.

  • "Analyze my competitors" with no sources. The base failure. Fix: never run this prompt. Always gate on source collection first.
  • Summarizing sources before pasting. The summary becomes a second hallucination surface. Fix: paste hero copy, pricing pages, and feature lists verbatim.
  • Letting the model fill "Not stated" cells. Empty cells get quietly filled with plausible guesses unless the prompt bans it. Fix: require "Not stated in sources" as the default for missing evidence.
  • Asking for "competitors we should watch." Invites the model to recall — and invent — vendors. Fix: maintain the competitor list yourself; use AI only to synthesize against a fixed list.
  • Using the matrix as the final artifact. The matrix is a midpoint, not a deliverable. Fix: always run pattern three — or a human equivalent — over a finished matrix.
  • Shipping without a human audit. Even with grounding, a final human pass against sources is required. Fix: budget 10-15 minutes per competitor before any brief or battlecard ships from the matrix.

FAQ

Can AI do competitor analysis without me collecting sources first?

Not reliably. The model does not have current, vendor-specific facts in a way you can trust, and it does not distinguish between knowing and guessing. The pattern that works: humans collect current sources, the model synthesizes. Skipping collection is where the hallucination stories come from.

What about tools that connect AI to live web data — do they fix the problem?

They reduce it. Retrieval-augmented setups that pull live pages into context give the model real material to work from, closer to the grounded pattern here. The risk shifts from "inventing facts" to "citing stale or wrong retrieved pages" — smaller but not zero. Human audit of the final artifact is still required.
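
A do-it-yourself version of that retrieval grounding, for teams without such a tool: fetch the live page yourself, stamp it with the retrieval date, and paste the result into the source block. A minimal sketch — the URL is a placeholder:

code
import datetime
import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

def fetch_source(url: str) -> str:
    """Pull a live page into the source block, stamped with the
    retrieval date so staleness is visible at audit time."""
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    stamp = datetime.date.today().isoformat()
    return f"[retrieved {stamp} from {url}]\n{text}"

# Placeholder URL for illustration:
# print(fetch_source("https://example.com/pricing")[:500])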

How often should we refresh a competitor feature matrix?

Quarterly is a reasonable default; monthly if the space is moving fast or if the matrix drives sales collateral that goes stale quickly. The collection plan from pattern one is designed to be re-run on a schedule.

Does this approach work for private companies with little public material?

Partially. For private companies, the pasted-in material often comes from review sites, win/loss intel, or analyst coverage rather than the vendor's own pages. The grounding rule holds either way: the model works from what you supply, and "Not stated in sources" cells mark the gaps.

Competitor analysis is the domain where AI fabrication is most expensive — a bad brief leaks into pricing, sales enablement, and positioning for quarters. The fix is to stop asking the model for discovery and start using it for synthesis against collected inputs. Source-gathering plans what humans need to find, feature matrices structure what they found, positioning statements translate the matrix into strategy. Each step bans the model from inventing. The output is narrower than "analyze my competitors" promised — and unlike that output, it is defensible.
