
10 RCAF Prompt Templates for Everyday Business Tasks

Copy-pasteable RCAF-structured (Role · Context · Action · Format) prompt templates for weekly standups, sales emails, meeting notes, competitor briefs, and 6 more recurring business tasks.

SurePrompts Team
April 22, 2026
14 min read

TL;DR

Ten copy-pasteable RCAF prompt templates — Role, Context, Action, Format — for weekly standups, sales emails, meeting notes, competitor briefs, and other business tasks teams repeat every week. Fill in Context, score with the Quality Rubric, stop re-inventing prompts.

Key takeaways:

  • RCAF is four slots: Role, Context, Action, Format — see the canonical RCAF guide; this post is its companion.
  • Templates below are hypothetical patterns, not case studies. Swap placeholder variables for your situation.
  • Keep Role stable. Tune Context per instance. Adjust Action per run. Match Format to your downstream tool.
  • Score each filled template with the SurePrompts Quality Rubric; a solid fill lands at 28+ out of 35.
  • The same template works across Claude, GPT, and Gemini with minor tuning inside slots.

How to use these templates

Treat each as a filled-in scaffold built on the four slots from the canonical RCAF guide. Keep Role stable. Tune Context per situation: company, audience, source material, constraints. Prompt templates fail more often from Context bloat than starvation — keep it to what the task needs. Swap Action per instance. Match Format to your downstream tool.

After customizing, score with the SurePrompts Quality Rubric and fix the lowest-scoring dimension first. Most under-performing fills fail on Context sufficiency or Format precision. Everything in {{double_braces}} is a variable — SurePrompts' template engine renders them natively; in a chat window, replace manually.
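
If you fill templates outside SurePrompts (a script, a CI job), the substitution is a few lines of code. A minimal sketch in Python, assuming plain string values and the {{double_braces}} syntax above — this is illustrative, not SurePrompts' actual engine. Failing loudly on an unfilled slot is deliberate, since shipping a literal {{variable}} is the most common fill mistake:

```python
import re

def render(template: str, variables: dict[str, str]) -> str:
    """Replace every {{variable}} in the template, failing loudly on gaps."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        if key not in variables:
            # An unfilled slot should stop the run, not ship a literal {{...}}.
            raise KeyError(f"unfilled template variable: {key}")
        return variables[key]
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

prompt = render(
    "Team: {{team_name}}. Period: {{date_range}}. Audience: {{audience}}.",
    {"team_name": "Platform", "date_range": "Apr 14-18", "audience": "VP of Engineering"},
)
```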

The 10 templates

1. Weekly team standup summary

When to use: Turning raw standup notes, Slack threads, or a meeting transcript into a compact weekly summary for the team or a manager.

Role: You are a senior engineering manager writing a weekly standup summary. You value brevity, clear ownership, and naming blockers early.

Context: Team: {{team_name}}. Period: {{date_range}}. Audience: {{audience}} (e.g., VP of Engineering) — cares about what shipped, what's at risk, and what needs a decision; does not want a per-person log. Raw standup notes, Slack threads, and shipped PRs:

```
{{raw_notes_and_threads}}
```

Action: Group this week's activity into Shipped, In Progress, Blocked, and Decisions Needed. For each Blocked item, name the owner and the specific unblocker. Collapse routine items into one-line bullets; expand only what changed status or needs attention.

Format: Markdown, four H3 headings in order: Shipped, In Progress, Blocked, Decisions Needed. Each bullet starts with the item in bold, then a dash, then one sentence. Blocked items end with (owner: {{name}}, needs: {{unblocker}}). Under 300 words total.

Why this works: Standup summaries default to per-person logs; the Format forces a status-centric shape, and the Role posture (manager, not secretary) makes the model compress rather than transcribe.

2. Sales follow-up email after a demo

When to use: Drafting the follow-up email to a prospect within 24 hours of a demo call, referencing what they said on the call.

Role: You are a B2B account executive writing a post-demo follow-up. You are direct, not pushy. You sell by reducing friction and summarizing what was agreed, not by manufacturing urgency.

Context: Prospect: {{contact_name}}, {{title}} at {{company_name}}. Demo date: {{date}}. Product: {{product_name}}. Top concern: {{top_concern}}. Agreed next step: {{next_step}}. From: {{sender_name}}, {{sender_title}}. Call notes (what they care about, pushback, agreements):

```
{{call_notes}}
```

Action: Write a follow-up that (a) thanks them for the time, (b) recaps the one or two things they said they cared most about — in their words, (c) addresses their top concern with a specific answer or named follow-up, (d) confirms the next step with a concrete time. Do not restate the full demo. No urgency pressure.

Format: Email with subject and body. Subject under 60 characters. Body under 150 words: three short paragraphs plus a one-line sign-off. No bullets unless the next step is multi-part.

Why this works: Sales follow-ups fail on generic benefit language and manufactured urgency. The Role posture and the Action constraints ("in their words," "no urgency pressure") make those failures explicit rather than hoping the model avoids them.

3. Meeting notes → action items

When to use: Converting a meeting transcript or rough notes into a clean action-item list with owners and due dates.

Role: You are a project coordinator extracting action items. You are conservative: an action item requires a clear owner and a clear commitment. Ambiguous statements become flagged questions, not action items.

Context: Meeting: {{meeting_title}}, {{date}}. Attendees: {{attendees}}. Statements by {{decision_maker}} are load-bearing for commitments. Transcript or raw notes:

```
{{transcript_or_notes}}
```

Action: Extract every action item with (a) named owner, (b) specific task, (c) stated due date or next-meeting checkpoint. Anything missing one of the three goes into "Clarify" with the question needed. Also extract any decisions made and risks explicitly called out.

Format: Three markdown tables: Action Items (Owner | Task | Due), Clarify (Item | Question | Who can answer), Decisions & Risks (Type | Statement | Source). Output only the tables. No preamble, no summary paragraph.

Why this works: The conservative Role prevents inventing owners for vague statements. The three-table Format gives ambiguous items a home ("Clarify") so they do not leak into the action list.

4. Competitor one-pager from a website

When to use: Building an internal briefing on a competitor's positioning, pricing, and claims from their public website.

Role: You are a competitive intelligence analyst writing an internal one-pager. You distinguish what the competitor claims from what they imply, and flag unverifiable claims instead of restating them as fact.

Context: Competitor: {{competitor_name}}. URL: {{url}}. Our comparison product/segment: {{our_product_or_segment}}. Our positioning (for contrast, not replication): {{our_positioning}}. Homepage, pricing, and features page content:

```
{{pasted_pages}}
```

Action: Produce a one-pager covering: positioning statement (one line), target customer, stated features (factual), pricing structure (tiers, prices, gotchas), marketing claims (quoted) with a flag column — Verifiable / Unverifiable / Vague — and a 3-5 bullet "Gap vs. us" list. Do not speculate beyond the pages provided.

Format: Markdown one-pager under 600 words with H3 headings per section. Marketing Claims is a two-column table: Claim | Flag. End with a "Sources reviewed" line listing the URLs from Context.

Why this works: The Role separates claim from implication, preventing marketing copy from getting laundered into fact-sounding bullets. The Format's flag column makes unverifiable claims visible rather than promoted.

5. Customer feedback tagging and triage

When to use: Categorizing a batch of raw customer feedback (support tickets, reviews, survey responses) into themes and severity levels.

Role: You are a product operations analyst triaging feedback. You categorize on the user's actual problem, not the surface wording. You are strict about severity: "annoying" is not "blocking."

Context: Source: {{source}}. Period: {{date_range}}. Product area: {{product_area}}. Feedback batch:

```
{{feedback_batch}}
```

Existing theme taxonomy (use these if they fit; propose new ones only if nothing matches):

```
{{theme_taxonomy}}
```

Action: For each item, produce: (a) theme tag from the taxonomy (or new theme with short rationale), (b) severity tag from {Blocker, Major, Minor, Annoyance}, (c) one-sentence paraphrase of the underlying problem, (d) kind: bug, missing feature, pricing objection, or other. Then summarize top 3 themes by count and top 3 by severity-weighted count (Blocker=4, Major=3, Minor=2, Annoyance=1).

Format: A single JSON object with two keys: "items", an array of objects with fields id, theme, is_new_theme, severity, paraphrase, kind; and "summary", an object with top_by_count and top_by_severity. Valid JSON only. No prose.

Why this works: Feedback triage fails when models tag surface wording instead of the underlying problem. The Role makes that distinction explicit, and asking for structured output (JSON) forces a decision on every field.
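
The severity weighting in the Action is simple enough to verify outside the model. A sketch with hypothetical items shaped like the Format spec above, recomputing top themes by raw count and by severity-weighted count:

```python
from collections import Counter

WEIGHTS = {"Blocker": 4, "Major": 3, "Minor": 2, "Annoyance": 1}

# Hypothetical tagged items, shaped like the Format spec above.
items = [
    {"id": 1, "theme": "slow-export", "severity": "Blocker"},
    {"id": 2, "theme": "slow-export", "severity": "Minor"},
    {"id": 3, "theme": "onboarding", "severity": "Major"},
]

top_by_count = Counter(item["theme"] for item in items)
top_by_severity = Counter()
for item in items:
    top_by_severity[item["theme"]] += WEIGHTS[item["severity"]]

print(top_by_count.most_common(3))     # [('slow-export', 2), ('onboarding', 1)]
print(top_by_severity.most_common(3))  # [('slow-export', 6), ('onboarding', 3)]
```

Recomputing from the model's own tags is a cheap check that the summary's counts actually match the per-item tags, since counting and weighting in one pass is exactly where models slip.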

6. Job description from a role brief

When to use: Turning a hiring manager's rough role brief (bullets, a Slack message, a meeting note) into a proper job description.

Role: You are a technical recruiter writing a job description. You translate internal shorthand ("ninja," "rockstar") into behaviorally observable requirements. You write inclusively and do not pad with generic benefits.

Context: Manager: {{manager_name}}. Team: {{team_name}}. Company: {{company_name}}. Location: {{location_or_remote}}. Level: {{level}}. Salary band: {{band}} (include per policy: {{include_band_yes_no}}). Boilerplate to use verbatim: {{boilerplate}}. Role brief:

```
{{role_brief}}
```

Action: Produce a full JD with: role summary (2-3 sentences), what you'll do (4-7 bullets of observable work), required experience (3-5 concrete bullets), nice-to-have (2-4 bullets), how we work (rituals, on-call, travel), interview process if provided. Rewrite any "ninja/rockstar/wizard" language into behavioral requirements. Do not invent benefits or perks.

Format: Markdown with H2 headings per section. Under 700 words. Final line is "Salary band" — either the band from Context or "See offer letter" based on {{include_band_yes_no}}.

Why this works: The Role carries the translation posture ("ninja" → specific behavior) and the Action bans invented benefits — the single most common failure mode in AI-generated JDs.

7. Project status update for execs

When to use: Writing the monthly or bi-weekly project status for executives who want a paragraph of truth, not a dashboard screenshot.

Role: You are a program manager writing a status update for a non-technical executive sponsor. You tell the truth about risk without burying or performing it. You do not use green/yellow/red without explicit definitions.

Context: Project: {{project_name}}. Period: {{period}}. Sponsor: {{sponsor}}. Last period's update: {{prior_update}}. Raw inputs (milestones hit/slipped, new risks, decisions needed, burn vs. budget):

```
{{raw_inputs}}
```

Status definitions: Green — on track against the committed plan. Yellow — at risk; a specific intervention would put it back. Red — off track; the committed plan will not be hit without a scope or date change.

Action: Produce: (a) executive summary (under 80 words) leading with overall color and the one thing the sponsor needs to know, (b) what shipped, (c) what slipped and why, (d) risks (named, with owner and mitigation), (e) decisions requested, each phrased as a yes/no or pick-from-list question. If status is Yellow or Red, name the specific intervention.

Format: Markdown. H2 headings for each section. Executive summary first. Decisions Requested section uses a numbered list where each item ends with "Sponsor decision needed by: {{date}}." Total output under 400 words.

Why this works: Exec status updates fail by burying red items in prose or overstating risk to look diligent. The Context pins color definitions, and the Action requires a named intervention for non-green — forcing the writer to have one rather than hedge.

8. Technical spec review

When to use: Reviewing a technical spec or design doc before it goes to broader review, surfacing the issues early.

Role: You are a principal engineer doing a pre-review of a technical spec. You ask before you assert. You distinguish (a) missing information, (b) debatable choices, and (c) outright errors. You review; you do not rewrite.

Context: Spec: {{spec_title}} by {{author}}. Scope: {{scope}}. Relevant internal conventions: {{conventions_link_or_text}}. Full spec:

```
{{spec_text}}
```

Action: Produce a review with four sections: (1) Summary-of-understanding — restate the proposal in 3-5 bullets so the author can verify you read it correctly. (2) Missing information — what the spec must decide but has not, each as a question. (3) Debatable choices — defensible decisions that could go differently, with the trade-off named. (4) Errors — wrong or inconsistent statements, cited by quote. Do not propose a rewrite or assign a grade.

Format: Markdown with four H3 sections in the order above. Each section is a bulleted list. Items in Errors quote the exact phrase from the spec in backticks before commenting. Total output under 600 words.

Why this works: Spec reviews fail when reviewers conflate missing info with errors, or propose a rewrite instead of a review. The Role posture and the four-category Action prevent that conflation. The "restate the proposal" section catches misreads before they cost a round.

9. Marketing brief → ad copy variants

When to use: Turning a marketing brief into a batch of ad copy variants for A/B testing across channels.

Role: You are a direct-response copywriter producing ad variants. You write to audience pain, not to a feature. You vary headlines by angle, not by synonym.

Context: Product: {{product_name}}. Target audience: {{audience}}. Primary pain: {{pain}}. Unique mechanism (what specifically solves the pain): {{mechanism}}. Proof point (verifiable only; do not fabricate): {{proof_point}}. Channel: {{channel}} (e.g., LinkedIn feed, Google Search, Meta feed). Voice guidelines: {{voice_rules}}.

Action: Produce 5 headline variants, each pursuing a different angle — pain, outcome, mechanism, social proof, objection reversal. For each, write a 2-sentence body and a one-line CTA. Stay within the channel's length limits (state your assumed limit). If the proof point is empty or unverifiable, omit the social-proof variant rather than fabricating one.

Format: Markdown table with columns: Angle | Headline | Body | CTA | Char count. Below the table, list any assumptions made about channel length limits. If any variant was omitted (e.g., no proof point), note it below the table and state why.

Why this works: The most common ad-copy failure is five variants that say the same thing five ways. The Action requires different angles, not different wording. The honesty rail ("omit rather than fabricate") is load-bearing — without it, models invent testimonials.

10. Incident postmortem first draft

When to use: Drafting the first version of a blameless postmortem from an incident timeline and Slack channel log.

Role: You are an SRE writing a blameless postmortem draft. You focus on systems, not people. You describe what happened, what was detected when, and what the resolution was. You flag anything you cannot determine rather than guess.

Context: Incident ID: {{incident_id}}. Severity: {{severity}}. Start: {{start_time}}. End: {{end_time}}. Systems: {{systems}}. Customer impact: {{impact}}. Sections to fill: Summary, Impact, Timeline, Root cause, Contributing factors, What went well, What went poorly, Action items (Preventive vs. Detective), Lessons learned. Inputs (channel transcript, alerts fired, post-incident comments):

```
{{channel_log_and_alerts}}
```

Action: Fill every section above. Timeline uses HH:MM timestamps in the incident's timezone and describes observations before interpretations. Root cause distinguishes the triggering event from the underlying condition that allowed it. Tag each Action item as Preventive (stops recurrence) or Detective (catches it sooner). Where inputs are insufficient, write [TBD — need: {{what_is_needed}}] instead of guessing.

Format: Markdown matching template sections. Timeline is a two-column table: Time | Event. Action items are a numbered list, each tagged [Preventive] or [Detective] with Owner: {{TBD}}. Under 900 words.

Why this works: Postmortems fail by mixing observation with interpretation in the timeline and by hiding uncertainty in plausible prose. The Action requires [TBD] markers instead of guesses, and the two-column Format separates times from events so interpretation cannot sneak into the time column.

Scoring these templates with the Rubric

A filled template is only as good as its Rubric score. Take Template 1 (Weekly standup summary) scored against the seven SurePrompts Quality Rubric dimensions:

  • Role clarity: 5/5 — names seniority, activity, and stance.
  • Context sufficiency: 4/5 — could tighten by naming which Slack channels to include.
  • Action specificity: 5/5 — four explicit output buckets with the blocker rule.
  • Format precision: 5/5 — H3 headings, bullet rules, word limit, blocker line template.
  • Constraint fit: 4/5 — word cap and anti-per-person-log rule clear.
  • Failure-mode coverage: 4/5 — kills the per-person log mode; weaker on cross-team dependencies.
  • Reusability: 5/5 — three variable slots, stable elsewhere.

Total: 32 out of 35. If you score below 28, the lowest sub-score is usually Context (not enough inputs) or Format (shape implied rather than stated). Fix that slot, re-score.
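
The re-score loop is mechanical enough to script. A small sketch using the Template 1 scores above and the fix-the-lowest-dimension-first rule; the dimension names are this post's, shortened to identifiers:

```python
# Template 1 (Weekly standup summary) scored against the seven dimensions.
scores = {
    "role_clarity": 5,
    "context_sufficiency": 4,
    "action_specificity": 5,
    "format_precision": 5,
    "constraint_fit": 4,
    "failure_mode_coverage": 4,
    "reusability": 5,
}

total = sum(scores.values())           # 32 for Template 1
weakest = min(scores, key=scores.get)  # on ties, the first dimension wins
print(f"{total}/35: fix '{weakest}' first" if total < 28 else f"{total}/35: ship it")
```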

Our position

A few opinionated stances, drawn from filling these templates many times:

  • Skip the Role slot for pure computation. Arithmetic, deterministic transformations, sorting — "You are a senior X" adds nothing. RCAF is for tasks where identity shapes output; if it doesn't, drop Role.
  • Context is where teams under-specify. Most disappointing template runs fail because the user pasted less than the task needed. If the output is vague, the Context was vague. Fix Context first, not Action.
  • Format is where teams over-trust the model. "Make it concise" is not a format. "Three bullets, each under 15 words, benefit-first" is. Assume nothing about shape you haven't written down.
  • Action should name the success criterion, not just the task. "Summarize the meeting" is a task. "Summarize so a person who missed it can pick up their part tomorrow" is a task plus criterion — and that's what separates a usable template from a plausible one.
  • Don't chain templates until each one scores 28+ alone. Chaining amplifies errors. A 22-scoring Template 3 feeding Template 7 just produces a longer bad output.

Build prompts like these in seconds

Use the Template Builder to customize 350+ expert templates with real-time preview, then export for any AI model.

Open Template Builder