Using AI for Therapy: The Risks of Replacing a Therapist


The billion-dollar question isn’t if AI can support mental health, but how it can do so meaningfully and ethically — while remaining affordable and accessible. Even so, it has a clear limit: AI can't fully replace human mental health professionals.

So, how many people are already turning to it? A 2024 study in JMIR Mental Health, indexed in the National Library of Medicine, found that about 28% of respondents had used AI for quick mental health support or even as a personal therapist.

It’s easy to see the appeal. AI offers immediacy, affordability, and freedom from stigma — things not always found in traditional therapy. But the real question is: can it ever be as safe, effective, and healthy as working with a trained professional?

In this article, I’ll break down the risks of relying on AI for therapy and the lasting benefits of working with a human therapist.

[Image: robot with a laptop, representing using AI for therapy]

Is it safe to use ChatGPT as a therapist?

No, it is not safe to use ChatGPT as a therapist. But that doesn’t mean ChatGPT can’t be helpful in other ways. In fact, plenty of behavioral health companies are actively working to train AI to support mental health in ways that are ethical and effective.

As a clinical psychologist and therapist, I think it’s really important to know that the word “therapist” isn’t legally protected. This means almost anyone, including a large language model or chatbot, can call themselves a therapist, regardless of training or experience.

What we want to keep in mind is that there’s a world of difference between trained vs. untrained and licensed vs. unlicensed therapists. AI is trained in its own way, but not like humans are. And AI is not, and cannot be, licensed like a human therapist is. At best, AI will become regulated via legal and ethical standards.

While most Americans still trust humans more than AI, attitudes are shifting quickly, and the technology is evolving even faster. That’s why it’s important to pause and look clearly at the risks, especially of leaning too heavily on AI without understanding its limits. 

Limitations of Using AI for Therapy

AI as a therapist is a serious topic — so serious that some states are banning its use in therapy altogether. Let’s explore five clear limitations of using AI for therapy. For each one, I’ll share research, nuance, and questions to help you feel empowered to make the best decisions for you.

Lack of Empathy and Human Connection

Empathy has long been recognized as one of the most important parts of effective therapy. Decades of research show that a therapist’s empathy is a strong predictor of better clinical outcomes. But AI bots do not feel empathy or offer human-to-human connection.

Here’s a twist: research shows that a client’s perception of their therapist’s empathy matters more than the empathy the therapist actually feels. This means an AI bot can fool you into believing it’s being empathic, because it’s been trained to respond in ways that seem understanding, even though there isn’t a real, bi-directional emotional connection.

Questions to reflect on:

  • How important is it to me to experience real emotional connection with another human?

  • Do I feel capable of forming necessary human-to-human connections in my life?

  • Could relying on AI make it harder to deepen my relationships with real people?

  • How well do I feel seen and heard by the humans around me?

  • Could AI support me in practicing empathy in ways that enhance my real-life relationships?

  • Could relying on a false sense of empathy from AI give me unrealistic expectations in my real-life relationships?

Inability to Handle Complex Cases

Mental and behavioral health disorders and diagnoses are complex because humans are complex. Yet AI is not equipped to handle complex cases.

No two people with anxiety, depression, addiction, etc., have the exact same experience with the exact same thoughts, feelings, circumstances, relationships, race, education, privilege, and more. And all of these parts can be crucial to getting the kind of support we need and that will be effective for us.

Therapists are expertly trained to learn about these parts of us and to understand how each interacts with the others in the unique ways that make us, us. Therapy is the process of putting this together with another person, and research shows it’s better to do this with human empathy, trust, skill, and morality.

Questions to reflect on:

  • Do I believe my struggles are simple, or do they feel layered and connected to many parts of my life?

  • How important is it to me that my therapist understands the full picture of my circumstances, relationships, and history?

  • Could relying on AI risk oversimplifying what I’m going through?

  • In what ways might a trained human therapist help me connect the dots between my experiences more effectively than AI?

  • Do I feel safer knowing that a therapist has ethical training, accountability, and responsibility in how they guide me?

Potential for Bias and Misinformation

AI carries a real risk of bias and misinformation in mental health support. Why? Because AI is trained by humans and other AI, and humans are never free from bias or error.

If biases are baked into AI training, there can be unintentional stereotypes, oversimplification, or even exclusion for entire groups of people. In mental health, where cultural context and identity are deeply important, these gaps in knowledge can lead to inaccurate or even harmful therapeutic support.

AI also struggles with misinformation. AI chatbots generate responses by predicting patterns, not by fact-checking against lived expertise or your actual experience, of which they know only what you share with them. Because an AI bot can never be an all-knowing expert on you, its training will always be incomplete, potentially outdated, or skewed.

Questions to reflect on:

  • How important is it to me that the information I receive is not only accurate but also relevant to my cultural, personal, and relational context?

  • Could I recognize if AI gave me information that was biased or misleading?

  • Do I trust myself to filter AI responses, or do I tend to take them at face value?

  • What risks might I face if I rely on a tool that could unknowingly carry bias or falsehoods?

  • How does it feel to know that AI can sound certain, even when it’s wrong?

Ethical Concerns

The ethical questions around AI in mental health are enormous, and honestly bigger than I can fully cover in one article. These concerns range from the security and privacy of your personal data, to the reality that many companies entering this space are motivated by profit as much as by care. This can raise questions about whether your autonomy and well-being are always the top priority.

Other concerns include a lack of clear regulation, the potential to become over-reliant on AI support, and liability: if harm is caused, who is actually responsible? AI can also present information with confidence, even empathy, and still be flat-out wrong or miss the nuance that human complexity requires. Worse, when AI sounds authoritative, it’s easy to trust it even when it doesn’t actually serve your needs.

Questions to reflect on:

  • How much do I trust AI software with my personal and mental health data?

  • Does the fact that AI mental health tools are big business change how I feel about using them?

  • Who do I believe should be held accountable if AI gives harmful advice?

  • How comfortable am I relying on a tool that doesn’t have a license, a real face, or responsibility for my care?

  • Do I feel confident that AI can tell the difference between helpful information and harmful misinformation?

  • Could using AI for therapy support my growth — or make me too dependent on a tool that isn’t human?

Lack of Human Experience

AI can’t truly connect with you the way another person can. The human therapeutic relationship, with back-and-forth of real emotional understanding, is what builds trust, insight, and lasting change. It’s also what makes therapy feel safe and human, because at its core, therapy is about two people sitting together, navigating life’s most challenging parts.

The latest research backs this up: people still rate human responses of empathy as more satisfying than AI-generated ones, and cognitive behavioral therapy interventions delivered by humans still produce better outcomes than those delivered by AI. This is because therapy isn’t just about techniques — it’s about timing, nuance, trust, and shared humanity.

Questions to reflect on:

  • How important is it to me that I have support from someone who has lived through human experiences themselves?

  • Do I believe therapy is only about tools and techniques, or also about trust, intuition, and relationship?

  • Could AI ever comprehensively understand my unique history, culture, or values the way a person could?

  • When I picture myself growing and healing, do I imagine doing that in connection with another human?

  • Could AI complement, but never replace, the human parts of therapy that I find most valuable?

[Image: in-person therapy session with four people sitting together]

The Benefits of Working With a Trained Professional

AI is already playing a role in mental health support — and in many ways, that’s a good thing. It can help fill critical gaps by offering immediate support, resources, and even interventions when people need them most. But here’s the key: when used ethically in behavioral health, AI isn’t replacing professionals. It’s working alongside them. The best models being developed today keep therapists at the center, integrating their knowledge and judgment into digital care.

What a therapist brings that AI never can is humanity. Therapists have lived experience, intuition, and the ability to read between the lines in ways algorithms can’t. They’re trained to recognize bias in themselves and in the systems they work within. And they hold themselves accountable to ethical codes, supervision, and regulation.

When you work with a trained, human therapist, you get access to:

  • A safe, confidential space to explore what you’re feeling without judgment

  • A therapeutic relationship that flexes and adapts to your history, goals, and changing needs

  • Someone who can challenge you with care, or slow down when things feel overwhelming, often sensing it before you can put it into words

  • The comfort of knowing there’s a real human connecting with you and invested in your growth and well-being

  • Attention to the difference between conscious and unconscious thoughts, feelings, and behaviors, which AI will never have

Therapy That Meets You Where You Are

Working with a human mental health professional offers things AI simply can’t, but that doesn’t mean AI isn’t valuable when used in healthy and ethical ways. It’s all about understanding what’s best for you, what combination of support works, and how to use it effectively.

I help clients navigate these decisions and work with anyone seeking therapy to improve their relationships with family members or significant others. I’ll meet you exactly where you are, guide you to the best support for your needs, and if a mix of AI and human care is right, we’ll make that happen together.

Curious to learn more? Reach out here — I’d love to explore how I can support you.

