AI Therapy: The Hope, the Hype and What Really Helps
It’s midnight. You’re scrolling through social media, feeling restless or lonely. A sponsored post promises connection: “I’m your AI psychotherapist, available 24/7 to listen, comfort and help you heal.”
For a moment, it sounds like hope. But can an algorithm really understand what you feel? Artificial intelligence is creeping into mental health: chatbots that promise 24/7 support, apps that ask how you feel, algorithms that generate coping strategies. To some, this sounds like promise; to others, it sounds like risk. In this post, I’ll explore what AI therapy can and cannot do, and how to walk this line wisely.
What is AI Therapy?
Before anything else, let’s clarify what we mean by AI therapy. Anything labeled “AI therapy” or “AI therapist” is not a licensed human professional in disguise. Instead, it’s a chatbot or algorithm designed to simulate therapeutic conversation, sometimes offering prompts, reflective questions or coping strategies. Unlike a human therapist, AI doesn’t hold credentials, isn’t bound by ethical codes, doesn’t fully “feel” what you feel and can sometimes misinterpret or mislead.
So, the real question is: can AI ever replace human mental health care?
AI offers convenience, but what are the concerns and risks of seeking artificial therapy?
The Real Risks of AI in Mental Health
When we talk about mental health, we need to err on the side of caution. AI tools carry serious risks that warrant attention.
Empathy Gap & Simplified Responses
Emotional connection is the foundation of therapy, and it’s where AI falls short in the long run. No matter how advanced, AI lacks the full spectrum of human empathy. It may default to agreeable, flattering responses (“sycophantic AI”) that feel validating but can reinforce distorted thoughts or avoid challenge. It also struggles with tone, sarcasm, body language and pauses. These subtleties are what allow therapists to truly understand and connect with your human experience.
Crises, Suicidality & High-risk Situations
While large language models and specialized chatbots are trained to flag crisis language, they’re inconsistent. Research from Stanford’s Institute for Human-Centered AI (https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care/) warns that some AI tools miss subtle signs of suicidal ideation, while other research shows AI avoids giving direct answers in higher-risk scenarios.
In real-world news (https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147), there have been reported tragedies tied to chatbots mishandling user distress. These underscore why crisis care should not be outsourced to artificial intelligence.
Privacy, Data Use & Liability
Licensed therapists are bound by confidentiality laws such as HIPAA. AI therapy apps are not. Conversations may be stored, shared with advertisers, subpoenaed or exposed via data breaches.
Even platforms that promise privacy sometimes bury clauses in their terms or have histories of data misuse (e.g. the FTC fine against BetterHelp (https://www.ftc.gov/legal-library/browse/cases-proceedings/2023169-betterhelp-inc-matter) for sharing user data).
If harm does occur, there’s rarely a clear path to accountability.
Overreliance & Avoidance
When support feels instantly available, it can be tempting to rely on it too heavily. Because AI is always on, it can become a convenient but misplaced substitute for genuine connection with a qualified therapist, and patterns of avoidance, procrastination or escalating distress may worsen over time. Unlike human therapists, AI doesn’t follow up, check in or help you stay accountable the way a real person can.
Bias & Misinformation
AI therapy models mirror the biases within their training data, which means marginalized groups may receive flawed or even harmful responses. An AI bot may sound confident in its recommendations, but those suggestions may not be evidence-based or suited to the person’s context.
Can AI be helpful?
Where AI Can Help
Despite the risks, AI can play a supportive role.
Low-Stakes Support & Self-Work
For journaling, mood tracking, thought logs or structured reflection, AI therapy tools can serve as a smart workbook. Use them for daily check-ins, to reinforce coping strategies or to build insight between sessions.
Accessibility & Immediate Access
Therapy access can seem limited if you don’t know where to look. AI tools can provide on-demand support, especially for people waiting to see a clinician, and they can make the first step into therapy feel less intimidating.
A quick note regarding waitlists: if you’re told there’s a delay, don’t hesitate to ask for referrals. Many clinicians have openings right away and unless you’re looking for highly specialized care, you may be able to start therapy sooner than you think.
Habit Reinforcement
Routine prompts like breathing, grounding, gratitude or cognitive reframing can help maintain gains from therapy. AI therapy tools can remind or nudge users to keep them engaged in self-care behaviors.
Privacy & Anonymity (within limits)
For some people, sharing with a machine (versus another human) can feel less intimidating, which may lower the threshold for seeking help. But users must understand the limits of AI.
A Blended Approach: AI + Human Therapist
AI’s greatest potential lies in complementing, not competing with, human therapy. In a blended model:
A therapist can interpret the outputs (what the AI suggested) and determine what’s safe, relevant or harmful.
The therapist can track emerging patterns (e.g. overuse or avoidance) and bring them into sessions.
Complex issues, trauma, suicidal ideation, certain relational dynamics or crises always require human oversight and care.
Think of AI therapy tools as a practice field for emotional growth, with the real transformation unfolding alongside a therapist.
Using AI as a therapeutic tool may be a sound compromise.
Guidelines: Using AI Therapy Safely
Here are practical guardrails when using AI therapy tools:
Treat AI as supplemental, not primary support.
Always read the privacy policy and avoid chatbots that share data with third parties.
Prefer transparent, evidence-based platforms over random or unvetted apps.
Notice emotional reactions and stop if you feel shame, guilt, confusion, overwhelm, or over-dependence.
Pair AI therapy use with licensed professional support whenever possible.
Recognize unsafe situations and never rely on AI during a crisis or when facing self-harm, abuse, trauma, psychosis or other serious conditions.
For minors: parents must monitor use actively. AI is not a substitute for parental supervision or professional care.
When to Avoid AI Therapy Altogether
There are times when artificial intelligence should never take the place of human support. If you or someone you know is experiencing any of the following, professional care is essential:
Thoughts of suicide or self-harm
Living with complex mental health conditions such as PTSD, psychosis, eating disorders or bipolar disorder
Experiencing ongoing trauma, abuse or domestic violence
Facing a mental health crisis or needing immediate intervention
In these situations, human connection and real-time professional support are non-negotiable. Technology can assist with education or coping tools but it cannot replace the safety, empathy and accountability that come from working with a licensed mental health professional.
As technology evolves, so will its role in mental health care. But healing has always been, and will continue to be, a profoundly human experience. AI therapy tools may offer support or structure, but they can’t replace the warmth of real human understanding and connection. The best path forward isn’t therapist versus AI therapy, but rather therapists using AI tools to better support their clients.
About the Guest Author, Judy Wang
Judy Wang, LCPC, CPC is the founder of Healing Hearts Counseling (https://healingheartscounselingllc.org/). A former tech industry dropout, Judy still has a soft spot for all things tech-related though she’ll admit it’s a bit of a love-hate relationship. With over a decade in the mental health field, she’s now far happier helping humans heal than troubleshooting computers.
