
35 AI Prompts for UX Designers: Research, Wireframes, and Usability Testing (2026)

Copy-paste AI prompts for UX designers. User research, personas, information architecture, wireframe copy, usability testing, and design systems — tested and ready.

SurePrompts Team
March 27, 2026
29 min read


AI won't replace UX designers. But UX designers who use AI will replace those who don't. These 35 prompts handle the scaffolding — research plans, persona drafts, test scripts, accessibility audits — so you can focus on the design decisions that actually require human judgment. Or skip the manual work and use our AI prompt generator to create custom UX prompts instantly.

40% of a UX designer's week goes to documentation, not design. AI can cut that to under 15%.


How these prompts work: Each prompt uses bracketed placeholders like [PRODUCT NAME] that you fill in with your specifics. The more context you provide, the better the output. Copy the full code block, replace the brackets, and paste into ChatGPT, Claude, or Gemini.

AI as a UX Research Assistant (Not a Replacement)

Let's be direct: AI cannot talk to users. It cannot observe someone struggling with your checkout flow. It cannot detect the pause before a confused click. What it CAN do is help you structure research plans, synthesize notes faster, generate persona hypotheses to validate, and draft test scripts so you spend more time in sessions and less time in Google Docs. Use these prompts as starting points — then pressure-test every output against real user data.

User Research Prompts

1. User Research Plan

code
You are a senior UX researcher planning a research study.

Create a user research plan for [PRODUCT/FEATURE — e.g., a new checkout flow for a direct-to-consumer furniture brand].

Research objectives:
- [OBJECTIVE 1 — e.g., understand why 43% of users abandon cart at the shipping step]
- [OBJECTIVE 2 — e.g., identify which delivery options users expect and are willing to pay for]

Provide:
- Research questions (5-7, prioritized by impact on design decisions)
- Methodology recommendation with rationale (usability testing, contextual inquiry, diary study, survey — explain why this method for this question)
- Participant criteria (sample size, demographics, behavioral screener questions)
- Recruitment strategy and screener questionnaire
- Session structure with timing
- Discussion guide or task scenarios (detailed enough to hand to a junior researcher)
- Analysis framework (affinity mapping, thematic analysis, etc.)
- Deliverables and how findings will be shared with the team
- Timeline and resource requirements

Target: [NUMBER] participants over [TIMEFRAME]. Budget: [AMOUNT or "limited — mostly guerrilla research"].

2. Interview Discussion Guide

code
Write a discussion guide for [NUMBER]-minute user interviews about [TOPIC — e.g., how freelancers manage invoicing and payments].

PRODUCT: [PRODUCT NAME]
RESEARCH GOAL: [WHAT YOU NEED TO LEARN]
PARTICIPANT PROFILE: [WHO — e.g., freelance designers and developers billing $5K-$50K/month]
INTERVIEW FORMAT: [REMOTE / IN-PERSON / CONTEXTUAL]

Include:
- Opening script (rapport building, consent, recording permission — exact words)
- Warm-up questions (2-3, about their general workflow)
- Core questions (8-10, moving from broad context to specific pain points)
- Probing follow-ups for each core question ("Can you walk me through the last time you...?")
- Activity or artifact review section (if applicable — "Show me your current invoice template")
- Closing questions (ideal future state, anything we didn't ask, referral request)
- Time allocation per section

Write questions that are:
- Open-ended (no leading questions)
- Behavioral ("Tell me about the last time..." not "Do you like...")
- Specific enough to get stories, not opinions

3. Research Synthesis Framework

code
Help me synthesize findings from [NUMBER] user interviews about [TOPIC].

PRODUCT: [PRODUCT NAME]
RAW DATA: I have [NOTES / TRANSCRIPTS / RECORDINGS] from [NUMBER] participants.

PARTICIPANT SUMMARY:
- P1: [ROLE, KEY QUOTE OR BEHAVIOR]
- P2: [ROLE, KEY QUOTE OR BEHAVIOR]
- P3: [ROLE, KEY QUOTE OR BEHAVIOR]
[ADD MORE]

Create a synthesis framework that includes:
1. Affinity diagram categories — suggest initial groupings based on the participant data above
2. Pattern identification template (what to look for across participants)
3. Insight statement format: "We observed [BEHAVIOR] among [SEGMENT] because [MOTIVATION], which means [DESIGN IMPLICATION]"
4. Evidence strength rating (1 participant mentioned it vs. 8 out of 10 demonstrated it)
5. Findings prioritization matrix (frequency × impact on design decisions)
6. Stakeholder-ready summary template (1 page, with key findings, supporting evidence, and recommended next steps)

Output the framework, then show me an example filled in with the participant data I provided.
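The frequency × impact matrix in step 5 is easy to sanity-check yourself before asking the AI to build it. Here's a minimal Python sketch; the findings, counts, and impact weights are invented for illustration, not from any real study:

```python
# Sketch of the frequency-x-impact prioritization from step 5.
# Findings, participant counts, and impact weights are illustrative.

def priority_score(frequency, impact, total_participants):
    """Share of participants who showed the behavior times design impact (1-5)."""
    return round((frequency / total_participants) * impact, 2)

findings = [
    # (finding, participants who demonstrated it, impact on design decisions 1-5)
    ("Recreates project structure manually every week", 8, 5),
    ("Confused by export location", 4, 3),
    ("Wants dark mode", 2, 1),
]

ranked = sorted(
    ((name, priority_score(freq, impact, 10)) for name, freq, impact in findings),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{score:>5}  {name}")
```

A finding 8 of 10 participants demonstrated, with high design impact, scores 4.0 and rises to the top; a one-off feature request sinks. The same logic works in a spreadsheet if you'd rather not script it.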

4. Competitive UX Audit

code
Create a competitive UX audit framework for [PRODUCT CATEGORY — e.g., project management tools for creative teams].

COMPETITORS TO ANALYZE:
1. [COMPETITOR 1 — e.g., Asana]
2. [COMPETITOR 2 — e.g., Monday.com]
3. [COMPETITOR 3 — e.g., Notion]
4. [COMPETITOR 4 — e.g., Linear]

KEY FLOWS TO COMPARE:
1. [FLOW 1 — e.g., onboarding and first project creation]
2. [FLOW 2 — e.g., task assignment and status updates]
3. [FLOW 3 — e.g., reporting and dashboard customization]

For each competitor and flow, create evaluation criteria for:
- Task completion steps (count and complexity)
- Cognitive load indicators (decisions required, information density)
- Error prevention and recovery
- Accessibility (keyboard navigation, screen reader support, color contrast)
- Mobile responsiveness
- Onboarding and progressive disclosure
- Delight factors and differentiators

Output as a comparison matrix I can fill in during the audit, plus a template for documenting screenshots and annotations.

5. Survey Design for Quantitative Research

code
Design a survey to quantify findings from qualitative research.

PRODUCT: [PRODUCT NAME]
QUALITATIVE FINDINGS TO VALIDATE:
1. [FINDING 1 — e.g., most users prefer keyboard shortcuts over menu navigation]
2. [FINDING 2 — e.g., onboarding feels too long — users want to "just start"]
3. [FINDING 3 — e.g., the export feature is hard to find]

TARGET RESPONDENTS: [WHO AND HOW MANY — e.g., 200+ active users who've been on the platform 30+ days]
DISTRIBUTION: [IN-APP / EMAIL / SOCIAL / PANEL]

Create a survey with:
- Screener questions (2-3, to filter for relevant respondents)
- 12-15 questions max (mix of Likert scale, multiple choice, ranking, and 1-2 open-ended)
- Question order that avoids priming bias
- Response options that are balanced and mutually exclusive
- At least one attention-check question
- Estimated completion time (target: under 5 minutes)

For each question, note which qualitative finding it validates and what design decision it informs.

6. Research Repository Entry Template

code
Create a research repository entry template for our UX team.

TEAM SIZE: [NUMBER OF RESEARCHERS/DESIGNERS]
TOOLS: [WHERE THE REPO LIVES — Notion, Confluence, Dovetail, Google Sheets]
RESEARCH TYPES WE DO: [USABILITY TESTS, INTERVIEWS, SURVEYS, ANALYTICS REVIEWS]

Design a template that includes:
- Study metadata (date, researcher, method, participants)
- Research questions and objectives
- Key findings (structured format: finding, evidence, confidence level)
- Participant quotes (tagged by theme)
- Design recommendations (with priority and effort estimate)
- Artifacts (links to recordings, notes, decks)
- Tags/categories for searchability
- Status tracking (in progress → analysis → findings shared → implemented)

Also create:
- A tagging taxonomy for our research themes
- Guidelines for when to create a new entry vs. append to existing
- A quarterly review template for identifying research gaps

Persona Creation Prompts

7. Research-Based Persona

code
Create a user persona based on research data.

PRODUCT: [PRODUCT NAME]
RESEARCH SOURCE: [INTERVIEWS / SURVEYS / ANALYTICS / ALL]
SEGMENT: [THE USER SEGMENT THIS PERSONA REPRESENTS — e.g., "power users who create 10+ projects per month"]

DATA POINTS:
- Demographics: [AGE RANGE, ROLE, INDUSTRY, COMPANY SIZE from research]
- Behaviors: [OBSERVED PATTERNS — e.g., "uses keyboard shortcuts exclusively, creates templates for everything"]
- Pain points: [QUOTED OR OBSERVED — e.g., "P3: 'I spend 30 minutes every Monday recreating the same project structure'"]
- Goals: [WHAT THEY'RE TRYING TO ACHIEVE]
- Tools: [WHAT ELSE THEY USE AND WHY]

Create a persona that:
- Has a realistic name, role, and context (not "Marketing Mary")
- Includes a narrative scenario (a day in their life using our product)
- Lists goals as Jobs To Be Done format ("When I [situation], I want to [motivation], so I can [outcome]")
- Separates observed behaviors from inferred motivations
- Includes "this persona is NOT" section (who they shouldn't be confused with)
- Has a design principles section: "When designing for [PERSONA], always... / never..."

Mark which attributes are research-backed vs. hypothesized.

8. Proto-Persona for Lean Projects

code
Create a proto-persona for a product in early development.

PRODUCT CONCEPT: [WHAT YOU'RE BUILDING]
TARGET USERS: [WHO YOU THINK THEY ARE]
ASSUMPTIONS: [WHAT YOU BELIEVE BUT HAVEN'T VALIDATED]
AVAILABLE DATA: [ANALYTICS, COMPETITOR REVIEWS, SOCIAL MEDIA, SUPPORT TICKETS — whatever you have]

Build a proto-persona that:
- Clearly labels every attribute as ASSUMPTION or EVIDENCE-BASED
- Includes a "riskiest assumptions" section (what you need to validate first)
- Suggests 3 specific research activities to validate or invalidate this persona
- Is formatted as a one-page card suitable for printing and posting on a wall
- Includes a "revision log" section for updating as you learn more

Don't make up fake research. Be honest about what's assumed.

9. Persona Comparison Matrix

code
Create a comparison matrix for [NUMBER] user personas.

PRODUCT: [PRODUCT NAME]
PERSONAS:
1. [PERSONA 1 NAME AND SEGMENT — e.g., "Priya, Enterprise Project Manager"]
2. [PERSONA 2 NAME AND SEGMENT — e.g., "Jake, Solo Freelancer"]
3. [PERSONA 3 NAME AND SEGMENT — e.g., "Maria, Agency Team Lead"]

Compare across:
- Primary goals and Jobs To Be Done
- Technical proficiency
- Feature usage patterns (heavy/moderate/light for each feature area)
- Willingness to pay and price sensitivity
- Onboarding needs
- Support channel preferences
- Key pain points and where they overlap/diverge

Output:
1. Comparison table (personas as columns, attributes as rows)
2. Prioritization recommendation (which persona to design for first and why)
3. Design tension map (where persona needs conflict and how to resolve)

10. Persona Validation Checklist

code
Create a validation checklist for an existing persona.

PERSONA: [NAME AND DESCRIPTION]
LAST UPDATED: [DATE]
BASED ON: [ORIGINAL RESEARCH SOURCE AND SAMPLE SIZE]

Generate a checklist for validating this persona that includes:
- Signals the persona is still accurate (metrics, behaviors, support tickets)
- Signals the persona needs updating (market changes, product changes, new segments)
- 5 specific questions to ask in the next user interview to validate or invalidate
- Analytics queries to check against real user behavior
- Stakeholder interview questions ("Do you still encounter this persona?")
- Recommended validation cadence (quarterly, bi-annually, etc.)
- Template for documenting what changed and why

11. Empathy Map

code
Create an empathy map for [PERSONA NAME] using [PRODUCT NAME].

CONTEXT: [SPECIFIC SCENARIO — e.g., "first time setting up a team workspace"]

Map across four quadrants:
- SAYS: Direct quotes from research (or realistic quotes based on the scenario)
- THINKS: Internal thoughts they wouldn't say aloud (concerns, comparisons, self-talk)
- DOES: Observable behaviors and actions
- FEELS: Emotional states and their triggers

Also include:
- PAINS: Specific frustrations in this scenario
- GAINS: What would make this scenario feel great

Format as a visual-ready template. Mark items derived from research vs. hypothesized.

Information Architecture Prompts

12. Card Sort Analysis

code
Analyze card sort results and recommend an information architecture.

PRODUCT: [PRODUCT NAME]
CARD SORT TYPE: [OPEN / CLOSED / HYBRID]
NUMBER OF PARTICIPANTS: [NUMBER]
CARDS (FEATURES/PAGES): [LIST ALL ITEMS SORTED — e.g., Dashboard, Settings, Team Members, Billing, Projects, Templates, Reports, Integrations, Notifications, Help Center]

RESULTS SUMMARY:
- [GROUPING PATTERN 1 — e.g., "80% grouped Templates, Projects, and Reports together"]
- [GROUPING PATTERN 2 — e.g., "Team Members was split — 50% put it with Settings, 40% made it a top-level item"]
- [OUTLIERS — e.g., "Integrations had no consensus — distributed across 4 different groups"]

Analyze and provide:
1. Similarity matrix (which items users consistently group together)
2. Recommended navigation structure (primary nav, secondary nav, utility nav)
3. Confidence level for each grouping decision
4. Items needing further research (low agreement)
5. Label recommendations (most intuitive category names from the sort)
6. Alternative structures to A/B test
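The similarity matrix in step 1 is just a co-occurrence count: for every pair of cards, how many participants put them in the same group. A minimal sketch, with made-up participant data:

```python
# Co-occurrence (similarity) counts from open card sort data.
# Participant groupings below are invented for illustration.
from collections import Counter
from itertools import combinations

sorts = [  # one dict per participant: card -> group label they chose
    {"Projects": "Work", "Templates": "Work", "Reports": "Work", "Billing": "Admin"},
    {"Projects": "Stuff", "Templates": "Stuff", "Reports": "Data", "Billing": "Data"},
    {"Projects": "Main", "Templates": "Main", "Reports": "Main", "Billing": "Account"},
]

pair_counts = Counter()
for sort in sorts:
    for a, b in combinations(sorted(sort), 2):
        if sort[a] == sort[b]:  # the two cards landed in the same group
            pair_counts[(a, b)] += 1

n = len(sorts)
for (a, b), count in pair_counts.most_common():
    print(f"{a} + {b}: {count}/{n} participants grouped together ({count / n:.0%})")
```

Pairs grouped together by most participants are your strongest navigation candidates; pairs with no consensus (like "Integrations" in the example results above) are the ones flagged for further research in step 4.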

13. Site Map Review

code
Review and improve this site map for [PRODUCT NAME].

CURRENT STRUCTURE:
[PASTE YOUR CURRENT SITE MAP OR NAVIGATION HIERARCHY]

USER GOALS (PRIORITIZED):
1. [TOP USER GOAL]
2. [SECOND GOAL]
3. [THIRD GOAL]

KNOWN ISSUES:
- [ISSUE — e.g., "users can't find the billing page"]
- [ISSUE — e.g., "settings is too deep — 4 clicks to change notification preferences"]

Analyze and recommend:
- Depth vs. breadth tradeoffs (is the hierarchy too deep or too flat?)
- Cross-linking opportunities for common task flows
- Items that should be promoted or demoted in the hierarchy
- Naming improvements for ambiguous labels
- Progressive disclosure strategy for complex sections
- Revised site map with rationale for each change

14. Navigation Naming Test

code
Help me design a navigation naming test for [PRODUCT NAME].

CURRENT NAVIGATION LABELS: [LIST THEM — e.g., Dashboard, Workspace, Library, Analytics, Settings]
ALTERNATIVE LABELS TO TEST: [IF ANY — e.g., "Library" vs "Resources" vs "Assets"]
USER TASKS: [WHAT USERS NEED TO FIND — e.g., "change your password", "create a new project", "see last month's usage"]

Create:
1. A tree test script with 8-10 tasks (ordered from easy to hard)
2. For each task:
   - Task prompt (plain language, no label hints)
   - Expected correct path
   - Success criteria (first click correct, completed within 3 clicks, etc.)
3. Analysis plan:
   - How to calculate directness (first-click accuracy)
   - How to identify "problem" labels (high confusion, wrong paths)
   - Decision criteria for when to rename vs. restructure
4. Recommended tools for running the test (e.g., Optimal Workshop's Treejack, UXtweak, or similar)
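The "directness" calculation in the analysis plan reduces to two per-task rates: first-click accuracy and completion within your click budget. A minimal sketch, with invented session records:

```python
# Per-task directness metrics for a tree test.
# The result records below are invented for illustration.

results = [
    # (task, first_click_correct, clicks_to_complete or None if gave up)
    ("change your password", True, 2),
    ("change your password", False, 5),
    ("create a new project", True, 1),
    ("create a new project", True, 3),
]

def task_metrics(task, max_clicks=3):
    rows = [r for r in results if r[0] == task]
    first_click = sum(r[1] for r in rows) / len(rows)
    success = sum(1 for r in rows if r[2] is not None and r[2] <= max_clicks) / len(rows)
    return first_click, success

fc, ok = task_metrics("change your password")
print(f"first-click accuracy: {fc:.0%}, completed within 3 clicks: {ok:.0%}")
```

A task with high completion but low first-click accuracy usually points at a label problem (users recover after a wrong guess); low completion across the board points at structure.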

15. Content Inventory Audit

code
Create a content audit framework for [PRODUCT NAME / WEBSITE].

SCOPE: [SECTION OR ENTIRE SITE]
TOTAL ESTIMATED PAGES/SCREENS: [NUMBER]
CONTENT TYPES: [e.g., help articles, product pages, blog posts, landing pages]
KNOWN PROBLEMS: [e.g., "duplicate content across help center and blog", "outdated screenshots"]

Build an audit spreadsheet template with columns for:
- URL / screen name
- Content type
- Owner / last editor
- Last updated date
- Traffic / usage data
- Quality score (1-5) with defined criteria
- Action recommendation (keep as-is / update / consolidate / remove)
- Priority (high / medium / low)
- Dependencies (what links to this content)

Include:
- Instructions for conducting the audit efficiently
- Red flags to watch for (outdated dates, broken links, inconsistent terminology)
- Governance recommendations post-audit

Wireframe & UX Copy Prompts

16. Wireframe Annotation Guide

code
Write detailed annotations for a wireframe of [SCREEN/FLOW — e.g., the onboarding wizard for a SaaS project management tool].

SCREEN PURPOSE: [WHAT THIS SCREEN DOES]
USER STATE: [FIRST-TIME USER / RETURNING USER / SPECIFIC CONTEXT]
KEY ACTIONS: [PRIMARY AND SECONDARY ACTIONS ON THIS SCREEN]
CONTENT BLOCKS: [LIST THE SECTIONS — e.g., progress indicator, welcome message, form fields, CTA, skip option]

For each element, provide:
- Behavior spec (what happens on click, hover, focus, error)
- Content requirements (character limits, dynamic vs. static, personalization)
- Responsive notes (how it adapts to mobile/tablet)
- Accessibility requirements (ARIA labels, keyboard behavior, screen reader announcements)
- Edge cases (what if the user's name is 47 characters? What if they have no profile photo?)
- Developer handoff notes (any non-obvious implementation details)

Format as numbered annotations matching wireframe element labels.

17. Empty State Copy

code
Write empty state copy for [PRODUCT NAME].

EMPTY STATES NEEDED:
1. [SCREEN 1 — e.g., Projects page before creating first project]
2. [SCREEN 2 — e.g., Notifications center with no notifications]
3. [SCREEN 3 — e.g., Search results with no matches]
4. [SCREEN 4 — e.g., Team members page for a solo user]
5. [SCREEN 5 — e.g., Analytics dashboard with insufficient data]
6. [SCREEN 6 — e.g., Recently deleted items when empty]

For each empty state, write:
- Headline (5-8 words, action-oriented or encouraging)
- Body text (1-2 sentences, explains the state and what to do next)
- CTA button label (verb + noun)
- Illustration suggestion (brief concept description for the designer)
- Tone calibration: [PRODUCT VOICE — e.g., "professional but warm, occasionally playful"]

Rules:
- Never blame the user ("You haven't created anything yet")
- Always include a clear next action
- Match the emotional weight of the moment (search with no results = frustrating; new account = exciting)

18. Microcopy Audit and Rewrite

code
Audit and rewrite the microcopy for [FLOW — e.g., the checkout process].

CURRENT COPY:
- [LABEL/MESSAGE 1 — e.g., Form field: "Enter your electronic mail address"]
- [LABEL/MESSAGE 2 — e.g., Error: "Error: Invalid input in field 3"]
- [LABEL/MESSAGE 3 — e.g., Button: "Submit"]
- [LABEL/MESSAGE 4 — e.g., Confirmation: "Your request has been successfully processed"]
[ADD MORE]

PRODUCT VOICE: [DESCRIBE — e.g., "Clear, concise, conversational. Like a helpful coworker, not a legal document."]
AUDIENCE: [WHO — e.g., "Small business owners, not technical, often on mobile"]

For each piece of copy:
1. Issue diagnosis (what's wrong — tone, clarity, verbosity, jargon, anxiety)
2. Rewrite (with rationale)
3. Character count comparison (before vs. after)
4. Accessibility consideration (does it work with screen readers? Is the meaning clear out of context?)

Also flag any missing microcopy (helper text, loading states, confirmation messages, or error states you'd expect to see but didn't).

19. Error Message System

code
Design a comprehensive error message system for [PRODUCT NAME].

ERROR CATEGORIES:
1. Form validation errors (required fields, format issues, character limits)
2. Authentication errors (wrong password, expired session, account locked)
3. Permission errors (unauthorized access, plan limits)
4. System errors (server down, timeout, maintenance)
5. Connectivity errors (offline, slow connection)
6. Data errors (404, deleted content, conflicting edits)

For each category, provide:
- Pattern template (reusable structure for this error type)
- 3 specific example messages
- Tone guidelines (validation = helpful, system error = apologetic, permission = clear)
- Technical details policy (when to show error codes, when to hide them)
- Recovery action (what the user should do, with a CTA)
- Escalation path (when to show "Contact support")

Voice: [PRODUCT VOICE DESCRIPTION]

Design principles:
- Never blame the user
- Always provide a next step
- Use plain language, not HTTP codes
- Be specific about what went wrong

20. Onboarding Flow Copy

code
Write all copy for an onboarding flow.

PRODUCT: [PRODUCT NAME]
USER JUST DID: [SIGNED UP / ACCEPTED INVITE / STARTED TRIAL]
ONBOARDING STEPS:
1. [STEP 1 — e.g., Set up profile (name, role, company)]
2. [STEP 2 — e.g., Choose a template or start from scratch]
3. [STEP 3 — e.g., Invite team members]
4. [STEP 4 — e.g., Connect integrations]
5. [STEP 5 — e.g., First key action (create first project)]

For each step, write:
- Screen headline
- Supporting description (1-2 sentences)
- Form labels and helper text (if applicable)
- CTA button text
- Skip/later option text
- Progress indicator text

Also write:
- Welcome message (first thing they see after signup)
- Completion celebration (what they see after finishing onboarding)
- "Resume onboarding" prompt (for users who skipped steps)
- Email triggered if they don't complete onboarding within 24 hours

Voice: [PRODUCT VOICE]. Keep total onboarding under [TARGET TIME — e.g., 3 minutes].

21. Feature Announcement Copy

code
Write UX copy for a new feature announcement.

PRODUCT: [PRODUCT NAME]
FEATURE: [FEATURE NAME AND DESCRIPTION]
USER BENEFIT: [WHAT IT LETS THEM DO THAT THEY COULDN'T BEFORE]
AVAILABILITY: [ALL USERS / PAID ONLY / BETA / ROLLING OUT]
COMPLEXITY: [SIMPLE ENOUGH TO DISCOVER / NEEDS WALKTHROUGH]

Write copy for:
1. In-app tooltip or banner (under 30 words)
2. Feature discovery modal (headline, 2-3 bullet benefits, CTA, dismiss option)
3. Guided walkthrough steps (if applicable — 3-5 steps with screen descriptions)
4. Changelog entry (50-100 words)
5. "What's new" section (for settings or help center)

Rules:
- Lead with the user benefit, not the feature name
- Don't interrupt — announce in context, at the relevant moment
- Include a way to dismiss and find it later
- Avoid "NEW!" badges on everything

Usability Testing Prompts

22. Usability Test Plan

code
Create a usability test plan for [PRODUCT/FEATURE].

WHAT WE'RE TESTING: [SPECIFIC FLOW — e.g., the redesigned dashboard and reporting experience]
PROTOTYPE FIDELITY: [PAPER / LOW-FI / HIGH-FI / LIVE PRODUCT]
HYPOTHESES:
1. [HYPOTHESIS — e.g., "Users will find the new filter system faster than the current dropdown approach"]
2. [HYPOTHESIS — e.g., "The new data visualization will reduce time-to-insight by 50%"]

Provide:
- Test objectives (what we want to learn — not "is it usable" but specific questions)
- Participant criteria (behavioral, not just demographic)
- Screener questionnaire (5-7 questions to filter the right participants)
- Number of participants with rationale
- Test environment setup (tools, recording, think-aloud protocol)
- Task list (6-8 tasks, from simple to complex, with success criteria for each)
- Metrics to capture (task completion rate, time on task, errors, SUS score, satisfaction rating)
- Observer guide (what to watch for, how to take notes)
- Post-test interview questions (5-7)
- Analysis and reporting plan

23. Task Scenarios for Usability Testing

code
Write realistic task scenarios for usability testing [PRODUCT NAME].

PRODUCT: [NAME AND BRIEF DESCRIPTION]
FLOW BEING TESTED: [WHICH PART OF THE PRODUCT]
PARTICIPANT PROFILE: [WHO THEY ARE]

Write [NUMBER] task scenarios that:
- Set context without revealing the UI path ("You need to check last month's sales" not "Click on Reports, then select Sales Report")
- Feel realistic (based on actual user goals)
- Vary in difficulty (mix of simple, moderate, and complex)
- Can be completed independently (no task depends on another)
- Have clear, measurable success criteria

For each task:
- Scenario text (what to read to the participant)
- Success criteria (specific — "user finds the correct page within 2 minutes")
- Optimal path (for comparison — how many clicks/steps in the intended flow)
- Things to watch for (common confusion points, hesitations, workarounds)
- Follow-up probe question (after task completion or failure)

Include a warm-up task that builds confidence.

24. Think-Aloud Protocol Script

code
Write a complete think-aloud protocol script for a usability test.

PRODUCT: [NAME]
SESSION LENGTH: [MINUTES]
FORMAT: [MODERATED REMOTE / MODERATED IN-PERSON / UNMODERATED]
TASKS: [NUMBER OF TASKS]

Write the full script including:

Pre-session:
- Welcome and introduction (put them at ease)
- Consent and recording permission language
- Think-aloud instructions (with practice example)
- "We're testing the product, not you" speech

During session:
- Task introduction template
- Neutral probing prompts when they go silent ("What are you thinking right now?", "What do you expect to happen?")
- Prompts for when they're stuck (graduated: wait → prompt → hint → move on)
- Transition language between tasks

Post-session:
- Debrief questions (overall impression, difficulty ranking, comparison to current tools)
- SUS questionnaire introduction
- Thank you and incentive delivery

Observer instructions:
- Note-taking template
- When to intervene (never vs. rarely vs. they're about to throw the laptop)
- Time-keeping signals

25. Usability Test Results Report

code
Help me structure a usability test results report.

PRODUCT: [NAME]
WHAT WAS TESTED: [FEATURE/FLOW]
PARTICIPANTS: [NUMBER, SEGMENT]
TEST DATE: [DATE]
METHOD: [MODERATED/UNMODERATED, REMOTE/IN-PERSON]

RESULTS (I'll provide, help me structure):
- Task completion rates: [LIST PER TASK]
- Average time on task: [LIST PER TASK]
- SUS score: [SCORE]
- Critical issues observed: [LIST]
- Minor issues observed: [LIST]
- Positive findings: [LIST]
- Participant quotes: [KEY QUOTES]

Create a report template with:
1. Executive summary (1 paragraph — what we tested, what we found, what to do)
2. Methodology section (concise)
3. Findings by task (success rate, time, issues, quotes, severity rating)
4. Cross-cutting themes
5. Severity-prioritized issue list with recommended fixes
6. Positive findings (what's working — don't lose these in a redesign)
7. Recommendations with effort/impact matrix
8. Appendix (participant details, raw data, session recordings)

Format for stakeholders who will skim the first page and only read details if concerned.
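If you report a SUS score, it helps to compute it yourself rather than trusting a tool's black box. The standard scoring: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 for a 0-100 scale. The sample responses here are invented:

```python
# Standard SUS (System Usability Scale) scoring.
# The sample responses are invented for illustration.

def sus_score(responses):
    """responses: 10 Likert answers (1-5), item 1 first. Returns 0-100."""
    assert len(responses) == 10, "SUS has exactly 10 items"
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # odd-numbered items are positively worded
        for i, r in enumerate(responses)
    )
    return total * 2.5

participant = [4, 2, 5, 1, 4, 2, 5, 1, 4, 2]
print(sus_score(participant))  # 85.0
```

Note that a single participant's SUS score is noisy; report the mean across participants, and treat it as a benchmark over time rather than an absolute verdict.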

26. Remote Usability Test Setup Guide

code
Create a setup guide for running remote unmoderated usability tests.

PRODUCT: [NAME]
TOOL: [MAZE / USERTESTING / LOOKBACK / LYSSNA / OTHER]
NUMBER OF TASKS: [NUMBER]
TARGET PARTICIPANTS: [NUMBER]
BUDGET: [AMOUNT OR "RECRUITING FROM OUR OWN USER BASE"]

Include:
- Tool configuration checklist
- Test flow structure (welcome → screener → tasks → questions → thank you)
- Instructions text for participants (clear, friendly, no jargon)
- Tips for writing effective unmoderated task prompts (they can't ask clarifying questions)
- Quality criteria for filtering usable vs. unusable responses
- Sample size recommendations for statistical confidence
- Analysis workflow (how to process results efficiently)
- Reporting template for unmoderated results
- Common pitfalls and how to avoid them (low completion rates, unclear tasks, biased questions)
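For the sample-size question above, a rough margin of error on a task completion rate gives you a feel for how much confidence your participant count buys. A sketch using the normal (Wald) approximation; the numbers are illustrative, and for small samples or extreme rates an adjusted interval is more accurate:

```python
# Rough 95% margin of error for a task completion rate,
# using a normal (Wald) approximation. Numbers are illustrative.
import math

def completion_margin(successes, n, z=1.96):  # z = 1.96 for ~95% confidence
    p = successes / n
    return p, z * math.sqrt(p * (1 - p) / n)

p, moe = completion_margin(34, 40)
print(f"completion rate {p:.0%} +/- {moe:.0%} at 95% confidence")
```

With 40 participants, an 85% observed rate carries a margin of roughly ±11 points, which is why unmoderated tests need far more participants than moderated ones before you can trust small differences between designs.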

Accessibility Audit Prompts

27. WCAG Compliance Checklist

code
Create a WCAG 2.2 accessibility audit checklist customized for [PRODUCT TYPE — e.g., a SaaS dashboard with data tables, charts, and interactive forms].

TARGET CONFORMANCE: [LEVEL A / AA / AAA]
KEY COMPONENTS: [LIST MAIN UI ELEMENTS — e.g., data tables, modal dialogs, dropdown menus, date pickers, drag-and-drop lists, charts]

For each component, list:
- Applicable WCAG 2.2 success criteria (with numbers — e.g., 1.4.3 Contrast Minimum)
- How to test (manual check, automated tool, assistive technology)
- Common failures specific to this component type
- Pass/fail criteria with examples
- Recommended fix patterns

Also include:
- Testing tools checklist (axe, WAVE, VoiceOver, NVDA, keyboard-only)
- Browser/device combinations to test
- Priority order for fixing issues (legal risk, user impact, effort)
- Template for documenting findings with screenshots

Organize by POUR principles (Perceivable, Operable, Understandable, Robust).
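Contrast checks (criterion 1.4.3) are one of the few audit items you can verify with arithmetic rather than judgment. This sketch implements the WCAG relative-luminance formula; AA requires 4.5:1 for normal text and 3:1 for large text:

```python
# WCAG contrast ratio from sRGB colors (success criterion 1.4.3:
# 4.5:1 for normal text, 3:1 for large text, at level AA).

def channel(c):
    c /= 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio((0, 0, 0), (255, 255, 255))  # black on white
print(f"{ratio:.1f}:1, passes AA for normal text: {ratio >= 4.5}")
```

Black on white yields 21:1, the maximum. Tools like axe and WAVE run this same math across a whole page, but a quick script is handy for checking candidate brand colors before they reach a mockup.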

28. Accessibility Annotation for Designs

code
Write accessibility annotations for a design mockup.

SCREEN: [SCREEN DESCRIPTION — e.g., a settings page with toggles, dropdown selects, a save button, and a tab navigation]
COMPONENTS ON SCREEN: [LIST EACH COMPONENT]

For each interactive component, annotate:
- Semantic HTML element (what this should be in code — not a <div>)
- ARIA role, state, and properties (if semantic HTML isn't sufficient)
- Keyboard interaction pattern (Tab order, Enter/Space behavior, Escape to dismiss, arrow keys)
- Focus management (where focus goes after actions like submit, close, delete)
- Screen reader announcement (what VoiceOver/NVDA should say)
- Color contrast requirements (text size → minimum ratio)
- Touch target size (44x44px recommended for mobile; WCAG 2.2 AA minimum is 24x24px)
- Error announcement strategy (how errors are communicated to assistive tech)
- Alternative text requirements (for icons, images, charts)

Output as a numbered list matching design element labels. Include a focus order diagram.

29. Accessible Component Specification

code
Write an accessible component specification for [COMPONENT — e.g., a custom date picker].

DESIGN REQUIREMENTS: [VISUAL/FUNCTIONAL DESCRIPTION]
EXISTING PATTERN: [IS THERE A WAI-ARIA PATTERN? — e.g., "ARIA APG Dialog Date Picker pattern"]

Include:
- Recommended WAI-ARIA design pattern to follow
- Full keyboard interaction specification
- ARIA attributes for every state (expanded, selected, disabled, invalid, required)
- Screen reader behavior for each interaction (open picker → "Date picker dialog opened, March 2026")
- Focus trap behavior (within modals/popups)
- Edge cases (empty state, invalid date, date range selection, min/max dates)
- Mobile and touch interaction
- Reduced motion behavior
- High contrast mode considerations
- Test script (exact steps for QA to verify with VoiceOver and NVDA)

30. Accessibility Testing Script

code
Write a manual accessibility testing script for [FLOW — e.g., the complete signup and onboarding flow].

PRODUCT: [NAME]
SCREENS IN FLOW: [LIST SCREENS IN ORDER]
ASSISTIVE TECHNOLOGIES TO TEST: [VOICEOVER, NVDA, JAWS, KEYBOARD-ONLY — pick 2-3]

For each screen, write test cases covering:
- Keyboard-only navigation (can every action be completed without a mouse?)
- Screen reader flow (does the reading order make sense? Are all elements announced?)
- Focus management (is focus visible? Does it go where expected after actions?)
- Error handling (are errors announced? Can the user find and fix them?)
- Dynamic content (are loading states, success messages, and updates announced?)
- Zoom/magnification (does the layout work at 200% and 400% zoom?)

For each test case, provide:
- Steps to reproduce (exact — "Press Tab 3 times, you should land on the Email field")
- Expected behavior
- Pass/fail criteria
- How to log the issue if it fails (severity, WCAG criterion, screenshot)

Design System Documentation Prompts

31. Component Documentation Template

code
Write documentation for a design system component.

COMPONENT: [NAME — e.g., Button]
VARIANTS: [LIST — e.g., Primary, Secondary, Ghost, Danger, Icon-only]
SIZES: [LIST — e.g., Small, Medium, Large]
STATES: [LIST — e.g., Default, Hover, Active, Focus, Disabled, Loading]
USAGE CONTEXT: [WHERE IT'S USED — e.g., forms, toolbars, modals, empty states]

Document:
1. Component overview (when and why to use this component)
2. Anatomy diagram description (label each part — icon slot, label, container, focus ring)
3. Variant guidelines (when to use each variant — with DO/DON'T examples)
4. Sizing guidelines (which size for which context)
5. Content guidelines (label text rules, character limits, icon pairing)
6. Behavior specs (click, hover, focus, disabled states)
7. Accessibility requirements (role, keyboard, ARIA states, color contrast per variant)
8. Responsive behavior
9. DO/DON'T examples (5 each, with explanations)
10. Related components (when to use this vs. Link vs. IconButton vs. MenuItem)
11. Code usage examples (basic, with icon, loading state, full-width)
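
For item 11, the AI's output is easier to evaluate if you know what shape you expect back. Here is a sketch of a Button API, assuming a React-style props object — the prop names, defaults, and class-name scheme are hypothetical, not from any specific design system.

```typescript
// Hypothetical Button API for illustration only.
type Variant = "primary" | "secondary" | "ghost" | "danger";
type Size = "small" | "medium" | "large";

interface ButtonProps {
  variant?: Variant;  // defaults to "primary"
  size?: Size;        // defaults to "medium"
  loading?: boolean;
  fullWidth?: boolean;
}

// Derive the class list that the behavior specs (item 6) would describe.
function buttonClasses({
  variant = "primary",
  size = "medium",
  loading = false,
  fullWidth = false,
}: ButtonProps): string {
  const classes = [`btn-${variant}`, `btn-${size}`];
  if (loading) classes.push("btn-loading");
  if (fullWidth) classes.push("btn-full");
  return classes.join(" ");
}

console.log(buttonClasses({})); // "btn-primary btn-medium"
console.log(buttonClasses({ variant: "danger", loading: true }));
// "btn-danger btn-medium btn-loading"
```

Note how variants, sizes, and states from the prompt map directly onto props — that one-to-one mapping is what makes component docs and code stay in sync.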

32. Design Token Documentation

code
Document the design token system for [DESIGN SYSTEM NAME].

TOKEN CATEGORIES: [COLORS, TYPOGRAPHY, SPACING, BORDER RADIUS, SHADOWS, Z-INDEX, BREAKPOINTS, MOTION]

For each category, document:
- Token naming convention (explain the pattern — e.g., color.background.primary)
- Complete token list with values
- Semantic mapping (why this token exists — what design decision it represents)
- Usage guidelines (when to use which token)
- Alias/reference relationships (token A references token B)
- Platform-specific output (CSS custom properties, iOS, Android)
- Dark mode / theme mapping

Also include:
- How to request a new token
- When to use a token vs. a one-off value
- Migration guide from hardcoded values to tokens
- Versioning and deprecation policy
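
The alias/reference relationships above are the part most often under-documented, so here is a minimal sketch of how alias resolution and CSS custom property output could work. The token names, values, and `{token.name}` alias syntax are assumptions for illustration — match them to your actual token format.

```typescript
// Illustrative token set following the color.background.primary naming
// convention; values and aliases are made up.
const tokens: Record<string, string> = {
  "color.brand.500": "#3B5BDB",
  "color.background.primary": "{color.brand.500}", // alias: references another token
  "color.text.inverse": "#FFFFFF",
};

// Resolve {token.name} aliases to concrete values, guarding against cycles.
function resolve(name: string, seen: Set<string> = new Set()): string {
  if (seen.has(name)) throw new Error(`Circular alias: ${name}`);
  seen.add(name);
  const value = tokens[name];
  if (value === undefined) throw new Error(`Unknown token: ${name}`);
  const match = value.match(/^\{(.+)\}$/);
  return match ? resolve(match[1], seen) : value;
}

// Emit CSS custom properties — one of the platform outputs listed above.
function toCss(): string {
  return Object.keys(tokens)
    .map((n) => `--${n.replace(/\./g, "-")}: ${resolve(n)};`)
    .join("\n");
}

console.log(resolve("color.background.primary")); // "#3B5BDB"
```

Documenting the resolution rule explicitly (aliases resolve recursively, cycles are errors) saves contributors from guessing when a theme override doesn't behave as expected.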

33. Design System Contribution Guidelines

code
Write contribution guidelines for [DESIGN SYSTEM NAME].

TEAM SIZE: [NUMBER OF CONTRIBUTORS]
TOOLS: [FIGMA / STORYBOOK / ZeroHeight / etc.]
TECH STACK: [REACT / VUE / WEB COMPONENTS / etc.]
CURRENT COMPONENT COUNT: [NUMBER]
MATURITY: [EARLY / GROWING / MATURE]

Create guidelines covering:
1. When to propose a new component vs. extend an existing one
2. Component proposal template (problem statement, usage evidence, competitive examples)
3. Design review checklist (accessibility, responsiveness, theming, edge cases)
4. Code review checklist (API consistency, testing, documentation, bundle size)
5. Figma file conventions (naming, layer structure, variant organization)
6. Component lifecycle (proposal → design → development → review → published)
7. Versioning and breaking change policy
8. Communication channels and decision-making process
9. Quality gates (what must be true before a component can be published)

34. Pattern Library Entry

code
Document a UX pattern for the pattern library.

PATTERN: [NAME — e.g., Inline Editing, Bulk Selection, Infinite Scroll, Drag and Drop Reordering]
PRODUCT: [PRODUCT NAME]
FREQUENCY OF USE: [HOW OFTEN USERS ENCOUNTER THIS]

Document:
1. Problem statement (what user need this pattern solves)
2. When to use this pattern (and when NOT to — suggest alternatives)
3. Interaction flow (step by step, from trigger to completion)
4. Required components (which design system components make up this pattern)
5. States and transitions (idle → active → saving → success → error)
6. Content guidelines (labels, confirmations, error messages for this pattern)
7. Accessibility requirements (keyboard flow, screen reader announcements, focus management)
8. Responsive behavior (how the pattern adapts across breakpoints)
9. Performance considerations (lazy loading, optimistic updates, debouncing)
10. Implementation examples (links to production usage)
11. Known limitations and edge cases
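
The states-and-transitions item (5) is easiest to review as an explicit table. Here is a sketch of the idle → active → saving → success/error flow for an inline-editing pattern — the state and event names are hypothetical, chosen only to illustrate the structure.

```typescript
// Hypothetical state machine for an inline-editing pattern.
type EditState = "idle" | "active" | "saving" | "success" | "error";
type EditEvent = "focus" | "submit" | "resolve" | "reject" | "reset";

const transitions: Record<EditState, Partial<Record<EditEvent, EditState>>> = {
  idle:    { focus: "active" },
  active:  { submit: "saving", reset: "idle" },
  saving:  { resolve: "success", reject: "error" },
  success: { reset: "idle" },
  error:   { submit: "saving", reset: "idle" }, // allow retry after failure
};

function next(state: EditState, event: EditEvent): EditState {
  return transitions[state][event] ?? state; // ignore invalid events
}

let s: EditState = "idle";
s = next(s, "focus");  // "active"
s = next(s, "submit"); // "saving"
s = next(s, "reject"); // "error"
console.log(s); // "error"
```

Writing the pattern's transitions as a table like this also surfaces the edge cases item 11 asks for — any state/event pair missing from the table is a decision you haven't made yet.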

35. Design System Health Report

code
Help me create a quarterly design system health report.

DESIGN SYSTEM: [NAME]
QUARTER: [Q# YEAR]
COMPONENT COUNT: [TOTAL]
ADOPTION: [% OF PRODUCT USING DESIGN SYSTEM COMPONENTS vs. CUSTOM]

Create a report template covering:
- Adoption metrics (coverage %, new adoptions, detachments)
- Component usage ranking (most/least used)
- New components shipped this quarter
- Deprecations and migrations
- Bug reports and fixes
- Accessibility audit results
- Performance metrics (bundle size trends)
- Designer/developer satisfaction (survey results)
- Contribution stats (proposals received, approved, rejected with reasons)
- Top requests and feature gaps
- Goals for next quarter

Include benchmarks:
- What "good" looks like for adoption rate at our stage
- Industry comparison for component coverage
- Target metric improvements

Format as a deck-friendly structure with data callouts and talking points.
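
To keep the adoption metric comparable quarter over quarter, pin down the formula before the first report. A common definition is design-system instances over all component instances; the sketch below uses that definition with made-up numbers, purely for illustration.

```typescript
// Adoption rate = design-system instances / total instances, as a percent.
// Numbers are illustrative, not benchmarks.
interface UsageSnapshot {
  dsInstances: number;     // instances of design-system components
  customInstances: number; // one-off or detached components
}

function adoptionRate({ dsInstances, customInstances }: UsageSnapshot): number {
  const total = dsInstances + customInstances;
  if (total === 0) return 0;
  return Math.round((dsInstances / total) * 1000) / 10; // one decimal place
}

console.log(adoptionRate({ dsInstances: 1820, customInstances: 430 })); // 80.9
```

Whatever definition you choose, state it in the report itself — "adoption" measured by instances, screens, or teams gives very different numbers.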

How to Get the Most From These Prompts

1. Fill in every bracket with real product context — "B2B invoicing tool for freelancers" beats "our app"
2. Feed outputs back into your process — use the research plan (#1) to run real sessions, then feed real data back into the persona prompt (#7)
3. Never ship AI copy without user testing — the microcopy prompts give you a strong draft, but real users find the gaps
4. Chain prompts across the design lifecycle — research synthesis (#3) → persona (#7) → empathy map (#11) → task scenarios (#23)
5. Save your best prompts as templates — SurePrompts Builder lets you create reusable prompts with custom fields for your team

Before: Spending 4 hours writing a usability test plan from scratch for each study

After: Generating a structured test plan in 5 minutes, then spending 30 minutes customizing it with real research context

Tip

Start with the prompts closest to your current project phase. If you're in discovery, begin with the research prompts (#1-6). If you're already designing, jump to wireframe copy (#16-21). If you're validating, use the usability testing section (#22-26). For developer-facing prompts, check our curated templates.

Ready to Level Up Your Prompts?

Stop struggling with AI outputs. Use SurePrompts to create professional, optimized prompts in under 60 seconds.

Try AI Prompt Generator