AI can help with your practice. But where's the line? A practical guide for therapists on using AI without compromising your ethics or your clients.
The Question You're Not Asking Out Loud
You've thought about using AI for your practice. Maybe you've tried it.
Then the guilt hits.
"Should I be using AI for this?"
"What if I'm outsourcing something I should be doing myself?"
"Is this ethical?"
You're not alone in asking these questions. Every therapist I've talked to feels this tension.
AI could save you hours. But at what cost?
Let me be direct: There are right ways and wrong ways to use AI in your practice.
This isn't about whether you should use it. It's about how.
Where the Line Is
Let's start with what's not up for debate.
Never Use AI For:
Clinical decision-making. Period.
Don't ask AI to diagnose. Don't ask it to create treatment plans for specific clients. Don't use it to make clinical judgments.
That's your job. Your training. Your license. Your responsibility.
Processing client information. Don't put client details into AI.
Not names. Not identifiable information. Not session notes that could identify someone.
Even if you think it's anonymized. Even if you're "just asking generally."
The risk isn't worth it. HIPAA violations are career-ending.
Replacing your clinical judgment. AI doesn't understand nuance. It doesn't read body language. It doesn't have clinical intuition.
If you're using AI to tell you what to do with a client, stop.
Where AI Can Help:
Administrative tasks. The paperwork that eats your life but doesn't require clinical judgment.
Psychoeducation materials. Resources for clients about general mental health topics.
Professional communication. Emails to referral sources, insurance companies, administrative staff.
Practice development. Marketing, website copy, workshop outlines.
Continuing education support. Understanding new research, exploring theoretical concepts.
The line is clear: AI for administrative and educational tasks. Human judgment for clinical work.
The Five Tasks That Are Ethically Safe
These are the areas where AI can help without crossing ethical boundaries.
Task 1: Writing Psychoeducation Handouts
The situation: You explain the same concept to multiple clients. Anxiety cycles. Cognitive distortions. Boundary-setting. You want a handout they can take home.
Why AI is appropriate: This is general education, not individualized clinical content.
The prompt:
Create a client-friendly handout about [mental health concept] for adults in therapy.
Tone: Warm, hopeful, normalizing. Not clinical jargon.
Reading level: 8th grade
Length: One page (about 300 words)
Include:
- What this is in simple terms
- Why it's common (normalize it)
- How it shows up in daily life (specific examples)
- 2-3 small steps someone can take
- Reminder that working with a therapist helps
Avoid:
- Anything that sounds like a diagnosis
- Medical terminology without explanation
- Promises of quick fixes
- Anything that could be misinterpreted as clinical advice for a specific person
This creates general educational content. It's not tailored to a specific client. It's not diagnostic. It's not treatment.
It's the same information you'd give in a psychoeducation group or on your website.
The boundary: You review it. You adjust it. You decide if it's appropriate for a specific client. AI doesn't make that call.
Task 2: Drafting Insurance Authorization Letters
The situation: You need to request continued authorization for treatment. The insurance company wants specific language.
Why AI is appropriate: This is administrative documentation, not clinical notes.
The prompt:
Write a letter requesting continued authorization for outpatient therapy sessions.
Do NOT include any specific client information. I will add that separately.
Include:
- Professional request for [number] additional sessions
- General statement about treatment progress without specifics
- Standard language about medical necessity
- Reference to continued treatment plan
- Professional closing
Tone: Professional, formal, concise. Insurance-appropriate language.
Leave brackets [like this] for where I'll add specific client information and clinical details.
The boundary: AI creates the template. You fill in the clinical specifics. You're not sharing protected health information with AI.
Task 3: Creating Workshop Outlines
The situation: You're presenting a workshop on mindfulness for anxiety, boundaries in relationships, or parenting strategies. You need an outline.
Why AI is appropriate: This is public-facing education, not individual treatment.
The prompt:
Create an outline for a 90-minute public workshop on [topic] for [audience].
Workshop goals:
- [what participants should understand]
- [what participants should be able to do]
- [how they should feel leaving]
Include:
- Opening activity (10 min) - something engaging that introduces the topic
- Key concepts (30 min) - 3-4 main ideas, each explained simply
- Interactive exercise (20 min) - something participants can practice
- Q&A preparation (15 min) - likely questions and how to address them
- Closing (15 min) - summary and takeaways
For each section, give me talking points and time estimates.
Tone: Accessible, not academic. Educational, not therapeutic.
The boundary: You're creating educational content for the public, not doing therapy. Same as writing a blog post or giving a community talk.
Task 4: Writing Professional Emails
The situation: You need to email a referral source, respond to a consultation request, or communicate with professional colleagues.
Why AI is appropriate: This is professional communication, not protected health information.
The prompt:
Write a professional email [about specific situation].
Context:
- I am a [your credential] in [your location/setting]
- This email is to [recipient type]
- Purpose: [what you need]
Tone: Professional but warm. Collegial.
Include:
- Brief introduction if we haven't corresponded before
- Clear purpose of email
- Specific request or information
- Professional closing
Keep it under 150 words. Shorter emails get faster responses.
Example situations:
- Thanking a referral source
- Requesting consultation with a colleague
- Responding to a workshop inquiry
- Following up on a professional networking connection
The boundary: No client information. This is business communication.
Task 5: Developing Marketing Content
The situation: You need website copy, blog posts, or social media content about your practice.
Why AI is appropriate: This is public-facing marketing, not clinical content.
The prompt:
Write [website copy / blog post / social media caption] for a therapy practice.
Practice information:
- Specialization: [your areas]
- Client population: [who you serve]
- Approach: [your therapeutic orientation]
- Location: [where you practice]
Key message: [what you want potential clients to know]
Tone: Warm and professional. Approachable but competent. Hope without overpromising.
Include:
- What it's like to work with you
- Who you work best with
- How to get started
- [specific elements for the content type]
Avoid:
- Guarantees of outcomes
- Clinical jargon
- Anything that sounds like diagnosis or treatment
- Anything that could be misconstrued as medical advice
The boundary: You're marketing your practice, not providing clinical services. Same as hiring a marketing consultant.
The Questions You're Still Asking
"What about session notes?"
Don't. Write them yourself.
Even if you're just "cleaning up" your rough notes with AI. Even if you're "making them more professional."
Those notes are clinical documentation. They're your professional opinion. They're protected health information.
They're yours to write.
"Can I ask AI about theoretical approaches?"
Yes. For your own learning.
"Explain the difference between CBT and ACT approaches to anxiety" is fine. That's education.
"What CBT intervention should I use with this client who presents with..." is not. That's asking for clinical judgment.
"What if I anonymize client information?"
Still risky.
You might think details are anonymous. But combinations of demographic information, presenting problems, and history can be identifying.
And honestly? If you need AI to help you think through a clinical situation, what you actually need is supervision or consultation with another therapist.
That's not a failure. That's good practice.
"Can I use AI to understand research papers?"
Yes. This is continuing education.
Copy the abstract, ask AI to explain it in plain language. Ask about methodology. Ask how findings might apply to different populations.
Just remember: AI isn't a substitute for critical thinking about research. It's a tool to help you understand it faster.
The Real Ethical Question
Here's what you're really asking: "Am I taking shortcuts that hurt my clients?"
That's the right question.
And the answer is: It depends on what you're using AI for.
Using AI to write insurance letters faster? That's efficiency. More time for clients.
Using AI to draft marketing copy? That's delegation. Same as hiring someone to design your website.
Using AI to create psychoeducation materials? That's leveraging technology. Same as using a therapy app to assign homework.
Using AI to decide what intervention to use? That's abdicating responsibility. That's the problem.
The question isn't "Is AI ethical in therapy?" It's "Am I using AI in ways that maintain my professional responsibilities?"
What to Tell Clients
You don't need to announce you use AI for administrative tasks. Just like you don't announce you use spell-check.
But if a client asks? Be honest.
"I use AI tools to help with things like creating educational handouts and managing administrative work. I never share any information about our sessions with any AI tool, and all my clinical decisions are based on my training and judgment."
Done. Transparent. Boundaried.
The Prompts to Avoid
Some prompts sound harmless but aren't. Watch for these:
"Based on these symptoms, what's the likely diagnosis?"
That's asking AI to diagnose. Hard no.
"What should I do when a client says [specific thing]?"
Clinical decision. Your job, not AI's.
"Help me write a treatment plan for [client description]."
Treatment planning is clinical work. You can't outsource it.
"What questions should I ask to assess for [condition]?"
Assessment is clinical judgment. AI can't replace your training.
If the prompt involves a specific client or clinical decision-making, don't use it.
When You're Not Sure
Ask yourself three questions:
- Would I be comfortable if my licensing board saw this?
If you'd need to explain or defend it, probably don't do it.
- Am I sharing any information that could identify a client?
If yes, stop.
- Am I using AI to make a decision that requires clinical judgment?
If yes, stop.
When in doubt, talk to a colleague. Get supervision. Consult your ethics guidelines.
Don't ask AI to tell you if it's ethical to use AI. That's recursive nonsense.
The Burnout Problem
Here's the real reason this matters.
Therapists are burning out. You're drowning in paperwork. You're exhausted from insurance companies. You're spending hours on administrative tasks that have nothing to do with helping people.
If AI can give you some of that time back? That's not unethical. That's sustainable practice.
Burning out and leaving the field helps no one.
Using AI for appropriate tasks so you can stay in the work? That's ethical too.
Your Action Plan
If you're going to use AI in your practice, do it right:
This week:
- Identify one administrative task AI could help with
- Create a prompt for it (using the templates above)
- Test it with non-client-specific information
- Review the output carefully
- Decide if it's useful
This month:
- Review your state licensing board's stance on AI (if they have one)
- Talk to a colleague about their approach
- Update your informed consent if needed
- Establish your own clear boundaries for AI use
Ongoing:
- Never get comfortable. Always question whether each use is appropriate.
- Stay updated on ethics guidelines as they evolve
- When in doubt, err on the side of caution
The Bottom Line
You can use AI ethically in your practice.
But you have to be intentional. You have to maintain boundaries. You have to stay in your role as the clinician.
AI is a tool. Like any tool, it can be used well or poorly.
The fact that you're reading this—that you're thinking about the ethics—means you're probably going to use it well.
Trust your clinical judgment.
That's something AI will never replace.