
5 Prompt Patterns for Data Analysis That Actually Work

Copy-paste these 5 prompt patterns to get useful data analysis from AI. Covers trend spotting, anomaly detection, comparisons, forecasting, and executive summaries.

SurePrompts Team
April 13, 2026
14 min read

TL;DR

Five ready-to-use prompt patterns that turn raw data into clear trends, anomalies, comparisons, forecasts, and executive summaries.

Data analysis is one of the strongest use cases for AI — and one of the easiest to get wrong. The difference between "here's a vague summary of your numbers" and "here are three actionable insights with supporting evidence" comes down entirely to how you structure the prompt.

The challenge with data analysis prompts is specificity. Most people paste in a spreadsheet and say "analyze this." The AI has no idea what you're looking for, so it gives you everything and nothing at the same time. These five patterns solve that by telling the AI exactly what kind of analysis you need and how to present it.

Each pattern below is copy-pasteable. Swap in your own data and context, and you'll get results you can actually use.

One important note before you start: these patterns work best when you describe your data before pasting it. Raw numbers without context produce shallow analysis. A sentence like "this is monthly revenue by product line for our B2B SaaS company" changes the quality of the output dramatically.

Another consideration: if your data has more than a few hundred rows, summarize or sample it before pasting. AI tools have context limits, and truncated data produces truncated insights. For large datasets, paste a representative sample and state the total size so the AI can reason about the full picture.
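The sampling advice above can be done in a few lines before you build the prompt. A minimal sketch using only the standard library; the `k=200` cutoff and the row format are judgment calls, not hard rules:

```python
import random

def sample_for_prompt(rows, k=200, seed=42):
    """Return (sample, total) for pasting into a prompt: a representative
    random sample of at most k rows, plus the total row count so the prompt
    can state the full dataset size. k=200 is an assumption, not a rule."""
    total = len(rows)
    random.seed(seed)  # fixed seed so reruns paste the same sample
    sample = list(rows) if total <= k else random.sample(rows, k)
    return sample, total
```

Paste the sample, then add a line like "this is 200 rows sampled from 14,000 total" so the AI reasons about the full picture rather than just the excerpt.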

Pattern 1: The Trend Spotter

This pattern extracts directional insights from time-series or sequential data. It forces the AI to identify what's changing, by how much, and why it might matter.

```
You are a data analyst preparing insights for a non-technical stakeholder.

Here is [describe your data — e.g., "monthly revenue data for the past 12 months by product line"]:

[Paste your data here]

Analyze this data and identify:
1. The 3 most significant trends (include direction, magnitude, and timeframe)
2. For each trend, one possible explanation based on the data
3. For each trend, one follow-up question we should investigate

Format as a numbered list. Each trend should be 2-3 sentences max. Lead with the most impactful finding. Use plain language — no statistical jargon.
```

Why it works: The numbered structure prevents the AI from rambling. Asking for explanations and follow-up questions forces deeper analysis instead of surface-level observations. Specifying "non-technical stakeholder" controls the vocabulary.

Example output snippet:

1. Revenue from Product B grew 34% between March and June, then plateaued. The growth aligns with the Q2 marketing campaign launch. Follow-up: Did the campaign drive new customers or increase order frequency among existing ones?

2. Product A shows a steady 5% month-over-month decline since January. This could indicate market saturation or competitive pressure in the mid-tier segment. Follow-up: Have we lost specific customers to competitors, or is average order value shrinking?

3. Product C had a single-month spike of 52% in April, then returned to baseline. This coincides with the spring promotion, but the lack of sustained lift suggests the promotion attracted deal-seekers rather than long-term customers. Follow-up: What was the retention rate of customers acquired during the April promotion?

When to adapt this pattern: If your data is more granular (daily instead of monthly), add "identify both long-term trends and short-term anomalies" to the prompt. If you're presenting to a technical audience, remove the "no statistical jargon" instruction and ask for confidence levels.
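If you run the Trend Spotter on a recurring schedule, it helps to fill the template programmatically instead of hand-editing it each week. A minimal sketch; the `(label, value)` row format and the description string are placeholders for your own data:

```python
def build_trend_prompt(description, rows):
    """Fill the Trend Spotter template. 'rows' is a list of (label, value)
    pairs, e.g. ("Jan", 120); the line format here is an assumption."""
    data_block = "\n".join(f"{label}: {value}" for label, value in rows)
    return (
        "You are a data analyst preparing insights for a non-technical stakeholder.\n\n"
        f"Here is {description}:\n\n"
        f"{data_block}\n\n"
        "Analyze this data and identify:\n"
        "1. The 3 most significant trends (include direction, magnitude, and timeframe)\n"
        "2. For each trend, one possible explanation based on the data\n"
        "3. For each trend, one follow-up question we should investigate\n\n"
        "Format as a numbered list. Each trend should be 2-3 sentences max. "
        "Lead with the most impactful finding. Use plain language, no statistical jargon."
    )
```

The same approach works for every pattern in this post: keep the fixed instructions as a template and inject only the description and data.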

Pattern 2: The Anomaly Detector

This pattern finds things that don't fit the expected pattern. It's useful for quality checks, fraud detection, and finding data entry errors before they become problems.

```
You are a senior data analyst conducting a data quality and anomaly review.

Here is [describe your data — e.g., "daily transaction data for Q1 2026"]:

[Paste your data here]

Review this data for anomalies. Specifically:
1. Values that are statistical outliers (significantly above or below the norm for that category)
2. Patterns that break from established trends
3. Missing data points or suspicious gaps
4. Any values that seem like potential data entry errors

For each anomaly found:
- Describe what's unusual
- Rate severity: HIGH (could affect decisions), MEDIUM (worth investigating), LOW (minor)
- Suggest one way to verify whether it's a real anomaly or an error

Present as a table with columns: Anomaly | Severity | Verification Step
```

Why it works: The severity rating prioritizes the analyst's time. The verification step turns a passive observation into an actionable next step. The four categories ensure the AI checks for different types of anomalies rather than just looking at outliers.

Example output snippet:

| Anomaly | Severity | Verification Step |
|---------|----------|-------------------|
| March 15: Transaction volume dropped 87% compared to the surrounding days | HIGH | Check if there was a system outage or holiday in that region |
| Customer #4421: Average order value is 12x the category mean ($2,340 vs $195 avg) | MEDIUM | Cross-reference with order history to confirm these are legitimate purchases |
| Week 8-9: Revenue shows a perfect linear increase of exactly $5,000/day | LOW | Verify data source — perfectly linear patterns can indicate placeholder or test data |

When to adapt this pattern: For financial data, add "check for values that could indicate rounding errors or currency conversion issues." For time-series data, add "flag any gaps in the expected cadence (missing days, weeks, or months)." For datasets with multiple dimensions, add "check for anomalies both within individual categories and across the aggregate totals."
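Before asking the AI for anomalies, you can pre-screen numeric columns yourself so you know what to expect in its answer. A minimal sketch with the standard library; the 3-sigma threshold is a common convention, assumed here rather than taken from the pattern above:

```python
from statistics import mean, stdev

def flag_outliers(values, threshold=3.0):
    """Return (index, value) pairs more than `threshold` standard
    deviations from the mean. threshold=3.0 is an assumption; tighten
    it for small or noisy datasets."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:  # all values identical, nothing can be an outlier
        return []
    return [(i, v) for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]
```

If your own screen and the AI's table disagree, that disagreement is itself worth a verification step.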

Pattern 3: The Comparison Framework

This pattern structures side-by-side analysis when you need to compare segments, time periods, products, or any two groups against each other.

```
You are a business analyst preparing a comparison report.

I need to compare [Group A] vs [Group B] using the following data:

[Paste your data here]

Structure the comparison as follows:
1. **Summary verdict**: One sentence on which group performs better overall and by what margin
2. **Where [Group A] wins**: 2-3 metrics where Group A outperforms, with specific numbers
3. **Where [Group B] wins**: 2-3 metrics where Group B outperforms, with specific numbers
4. **Surprising finding**: One insight that isn't immediately obvious from a surface-level look
5. **Recommendation**: Based on this data, one specific action to take

Keep each section to 2-3 sentences. Use percentages for comparisons, not just raw numbers.
```

Why it works: The "surprising finding" requirement pushes the AI past obvious observations. Starting with a verdict gives the reader the answer first. The recommendation bridges analysis to action, which is where most data summaries fall short.

Example output snippet:

Summary verdict: Region B outperforms Region A by 22% in total revenue, but Region A's profit margin per transaction is 15 percentage points higher.


Where Region A wins:

- Profit margin per transaction: 28% vs 13% (Region A more than doubles Region B)

- Customer lifetime value: $340 vs $215, driven by higher repeat purchase rates

- Support cost per customer: $12 vs $31 (Region A customers submit fewer tickets)


Where Region B wins:

- Total transaction volume: 14,200 vs 8,900 per month

- New customer acquisition: 2,100 new customers in Q1 vs 890 for Region A


Surprising finding: Region A's customer retention rate is 41% higher despite lower marketing spend — suggesting organic word-of-mouth may be stronger in that market.


Recommendation: Investigate what's driving Region A's retention and apply those practices to Region B. The combination of Region B's volume with Region A's margins would represent a significant revenue opportunity.

When to adapt this pattern: When comparing more than two groups, replace the "where A wins / where B wins" structure with a ranked table format. Add "highlight any metric where the gap is narrowing or widening over time" for time-aware comparisons.
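The prompt asks for "percentages, not just raw numbers," and you can compute those gaps yourself to sanity-check what the AI reports. A minimal sketch; the metric names and values are illustrative, not figures from the example above:

```python
def compare_groups(a, b):
    """Percentage gap of group a relative to group b for every metric
    present in both dicts. Positive means a leads; metric names are
    whatever your dataset uses."""
    gaps = {}
    for metric in a.keys() & b.keys():  # only metrics both groups report
        if b[metric]:  # skip division by zero
            gaps[metric] = round((a[metric] - b[metric]) / b[metric] * 100, 1)
    return gaps
```

Feeding these precomputed gaps into the prompt alongside the raw data also keeps the AI from making arithmetic mistakes on the comparisons.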

Pattern 4: The Forecast Builder

This pattern asks the AI to project forward based on historical data. It's not a statistical model — it's a structured way to think through what the data suggests about the future.

```
You are a strategic analyst preparing a forward-looking brief.

Here is [describe your historical data — e.g., "18 months of user growth data"]:

[Paste your data here]

Based on this data, provide:
1. **Base case projection**: If current trends continue unchanged for the next [timeframe], what should we expect? Include specific numbers.
2. **Upside scenario**: What would need to happen for results to exceed the base case by 20%+? Name 1-2 realistic drivers.
3. **Downside risk**: What's the most likely thing that could underperform the base case? How would we see it coming?
4. **Key assumption**: What is the single biggest assumption baked into the base case projection?
5. **Leading indicator**: What metric should we watch weekly to know if we're tracking above or below the base case?

Format as labeled sections. Be specific with numbers where the data supports it. Where the data doesn't support a specific number, say so explicitly.
```

Why it works: The three scenarios (base, upside, downside) prevent single-point forecasting, which is almost always wrong. The "key assumption" question forces the AI to be honest about uncertainty. The leading indicator gives you something to act on immediately.

Example output snippet:

Base case projection: At the current 8% month-over-month growth rate, we should reach approximately 45,000 monthly active users by September 2026. Monthly recurring revenue would reach approximately $112K at current ARPU.


Upside scenario: If the referral program improvements proposed for Q2 increase viral coefficient from 0.3 to 0.5, growth could accelerate to 12% month-over-month, reaching 58,000 MAU by September. The primary driver would be reducing friction in the referral onboarding flow.


Downside risk: The most likely headwind is increased competition in the mid-market segment. Two competitors launched free tiers in Q4 2025. If this compresses our organic signup rate by 30%, growth slows to 5% month-over-month and we reach only 38,000 MAU. Early warning sign: watch weekly organic signup conversion rate — a drop below 4.2% (current: 5.1%) signals competitive pressure.


Key assumption: This assumes the current organic acquisition channel (referrals accounting for 62% of new signups) continues at its present rate. Any change to the referral program would invalidate this projection.


Leading indicator: Track the ratio of new referral signups to active users weekly. This currently sits at 0.08. A sustained decline below 0.06 means the base case projection is too optimistic.
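The arithmetic behind a base/upside/downside projection is just compound growth, and it's worth computing yourself so you can check the AI's numbers. A minimal sketch; the starting value and rates below are illustrative, not the figures from the example output:

```python
def project(current, monthly_rate, months):
    """Compound a month-over-month growth rate forward from a starting
    value. All inputs here are hypothetical examples."""
    return round(current * (1 + monthly_rate) ** months)

# Three scenarios, mirroring the base/upside/downside structure above
scenarios = {
    "base": project(25_000, 0.08, 8),      # current trend continues
    "upside": project(25_000, 0.12, 8),    # growth accelerates
    "downside": project(25_000, 0.05, 8),  # competitive pressure bites
}
```

If the AI's base case doesn't roughly match this compounding math on the growth rate it states, ask it to show its calculation.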

Pattern 5: The Executive Summary

This pattern condenses a large dataset into a brief that a busy decision-maker can read in under two minutes. It's the pattern to use when someone asks "what does the data say?" and you need a clear answer.

```
You are a senior analyst briefing the executive team. They have 2 minutes to read this.

Here is [describe your data]:

[Paste your data here]

Write an executive summary that covers:
1. **Bottom line** (1 sentence): What is the single most important takeaway?
2. **Three key metrics**: The numbers that matter most, with context (is each good, bad, or neutral compared to our target/last period?)
3. **One concern**: The most important risk or negative signal in the data
4. **One opportunity**: The most promising positive signal
5. **Recommended next step**: One specific action, not a vague suggestion

Total length: under 200 words. Use bullet points. No filler phrases like "it's worth noting" or "interestingly." Every sentence should carry information.
```

Why it works: The 200-word constraint forces ruthless prioritization. The "no filler phrases" instruction prevents the AI's tendency to pad with transitional language. The structure mirrors how executives actually want to receive information: conclusion first, evidence second, action third.

Example output snippet:

Bottom line: Q1 revenue beat target by 11%, driven entirely by enterprise accounts — SMB revenue declined for the second consecutive quarter.


Three key metrics:

- Total revenue: $2.4M (11% above $2.16M target)

- Enterprise ARR: $1.8M (+28% QoQ) — strong

- SMB churn: 8.2% (up from 5.1% last quarter) — concerning


One concern: The SMB segment is subsidizing enterprise growth. If SMB churn continues at this rate, we lose $480K ARR by Q3, which offsets 60% of projected enterprise gains.


One opportunity: Enterprise deal size increased 18% in Q1 without a price change — customers are self-expanding to higher tiers. A proactive upsell campaign could accelerate this trend.


Recommended next step: Schedule a cross-functional meeting this week to decide: do we invest in retaining SMB customers, or deliberately shift focus to enterprise? The data supports either strategy, but we can't effectively do both.

When to adapt this pattern: For board-level presentations, change the word limit to under 100 words and cut it to two key metrics. For operational teams, expand to 300 words and add an "actions in progress" section.
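AI tools don't always respect hard constraints like the 200-word limit, so a quick mechanical check on the output catches violations before the summary reaches an executive. A minimal sketch; the filler list is just the two phrases banned in the prompt above:

```python
FILLER = ("it's worth noting", "interestingly")

def check_summary(text, word_limit=200):
    """Return a list of constraint violations in an AI-written summary:
    over the word limit, or containing a banned filler phrase."""
    problems = []
    word_count = len(text.split())
    if word_count > word_limit:
        problems.append(f"over limit: {word_count} words")
    lowered = text.lower()
    for phrase in FILLER:
        if phrase in lowered:
            problems.append(f"contains filler: {phrase!r}")
    return problems
```

If the check fails, reply with "rewrite to meet the original constraints" rather than editing by hand; the AI usually complies on the second pass.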

Quick Tips for Data Analysis Prompts

  • Always describe your data before pasting it. Tell the AI what the columns mean, what time period it covers, and what business context matters. Don't make it guess. "Monthly revenue data for a B2B SaaS company, January through December 2025, broken down by product line and customer segment" is ideal.
  • Specify the audience. "For a technical data team" produces very different analysis than "for the CEO." The audience determines the depth, vocabulary, and format.
  • Ask for specific numbers. "Revenue increased" is useless. "Revenue increased 23% from $1.2M to $1.48M" is actionable. Tell the AI to include magnitudes, percentages, and absolute values.
  • Request caveats. Add "flag any conclusions where the data is insufficient to be confident" to prevent the AI from overstating weak signals. This is especially important for small sample sizes.
  • One analysis type per prompt. Don't ask for trends, anomalies, and forecasts in the same prompt. Each deserves focused attention. Run them sequentially and you'll get deeper insights.
  • Tell the AI what you already know. If you already know that Q2 revenue spiked because of a specific campaign, say so. This lets the AI focus on findings you don't already have, rather than restating the obvious.
  • Include your targets or benchmarks. "Revenue was $2.4M" doesn't tell the AI if that's good or bad. "Revenue was $2.4M against a $2.16M target" gives it the context to provide a meaningful assessment.

When to Use Templates vs. Write From Scratch

Use these patterns when:

  • You're doing a type of analysis you repeat regularly (weekly reports, monthly reviews, quarterly comparisons)
  • You need consistent formatting across multiple analyses
  • You're handing off the prompt to a team member who's less experienced with AI

Write from scratch when:

  • The analysis is genuinely novel and doesn't fit any standard pattern
  • You need to incorporate very specific domain knowledge that changes the analysis framework
  • You're in an exploratory phase and don't yet know what questions to ask — in that case, start with "What are the most interesting things in this data?" and refine from there

Combining Patterns for Deeper Analysis

These patterns work individually, but they're even more powerful in sequence. A typical workflow:

  • Start with the Trend Spotter to understand what's happening in the data
  • Run the Anomaly Detector to find anything unexpected or problematic
  • Use the Comparison Framework to dig into the most interesting segments
  • Apply the Forecast Builder to project what happens next based on your findings
  • Finish with the Executive Summary to distill everything for stakeholders

Each prompt builds on the insights from the previous one. Reference earlier findings explicitly: "Based on the trends identified in my previous analysis, now build a forecast that accounts for the seasonal pattern in Product B."

This sequential approach produces analysis that's layered and contextualized — significantly better than running a single "analyze everything" prompt.

If you find yourself modifying these patterns heavily every time, SurePrompts' Template Builder lets you save customized versions so you're not starting from scratch each session.

Build prompts like these in seconds

Use the Template Builder to customize 350+ expert templates with real-time preview, then export for any AI model.
