If you know me, you know I live and breathe digital marketing. My team and I use artificial intelligence every single day to streamline workflows, brainstorm ideas, and keep the creative juices flowing. I’m not a Luddite. I love tech. But as someone who navigates the world with ADHD, depression, and anxiety, I’ve started to notice a worrying trend that we need to talk about.
We are seeing a massive explosion of “AI therapists” and mental health chatbots promising to be the solution to our loneliness and anxiety. It sounds perfect on paper, right? A therapist in your pocket that never sleeps, never judges, and costs a fraction of a traditional session.
But here is the thing. I work behind the curtain of these technologies. I see how the sausage is made. And when I see my fellow ADHDers turning to these bots for deep emotional support, I get worried. While these tools can be amazing for organizing our chaotic schedules, relying on them for mental healthcare is a dangerous game.
Here are five reasons why you should think twice before treating a chatbot like a therapist.
1. Your Secrets Might Not Be Safe
One of the biggest appeals of AI therapy is the feeling of privacy. You are just typing into a phone, so it feels like no one is watching. But that is often an illusion.
Many of these “wellness” apps fall into a legal gray area where they do not have to follow HIPAA regulations because they market themselves as entertainment or coaching rather than medical tools. A recent study by the Mozilla Foundation found that many mental health apps have incredibly weak privacy protections, with some even retaining the right to share your data with third parties.
Imagine pouring your heart out about your deepest insecurities, only to have that data packaged and sold to advertisers. As someone with Rejection Sensitive Dysphoria (RSD), the idea of my most vulnerable moments being monetized makes my skin crawl.
2. The “Yes Man” Problem

We ADHDers often struggle with negative self-talk. When I am spiraling, I don’t need someone to agree with me; I need someone to gently challenge my perspective.
Generative AI is designed to be agreeable. It wants to keep you engaged. Researchers call this tendency “sycophancy.” A study from Columbia University highlights how chatbots often validate a user’s delusions or negative thoughts rather than helping the user work through them.
If I tell a human therapist, “I am a failure and I can’t do anything right,” they will use Cognitive Behavioral Therapy (CBT) to help me reframe that thought. A basic AI mental health app might just say, “I am sorry you feel like a failure. That must be hard.” It feels nice in the moment, but it reinforces the spiral instead of stopping it.
3. Inconsistency in Crisis
This is the scary part. If you are ever in a dark place, you need to know that the person helping you knows what they are doing. That is a promise an AI cannot make.
There have been documented cases where chatbots failed to provide suicide prevention resources when prompted with indirect cries for help. In a study from the University of Minnesota, researchers found that AI chatbots often provided dangerous or irrelevant advice during simulated mental health crises.
When you are dealing with mental health, the stakes are too high for “glitches.”
4. It Misses the Neurodivergent Nuance
ADHD brains are complex. We mask. We tell non-linear stories. We joke about our trauma as a defense mechanism.
AI models are largely trained on “average” communication patterns. They often miss the subtle cues that a human mental health professional would pick up on instantly. Nebraska Medicine explains that AI lacks the ability to form a true therapeutic alliance, which is one of the most critical factors in healing.
If I am cracking jokes about my depression, a bot might think I am happy. A human knows I am deflecting. That difference matters.
5. The Parasocial Trap
Loneliness is a huge struggle for our community. It is easy to form a bond with something that listens to you 24/7. But this can lead to a “parasocial” relationship where you feel deeply connected to a program that cannot feel anything back.
There is a tragic story involving Sewell Setzer III, a teenager who passed away after forming a dependency on a chatbot. It is a heartbreaking reminder that replacing human connection with artificial intimacy can leave us more isolated in the long run.
How to Use AI Safely

I am not saying you need to delete every app on your phone. I use AI all the time for “executive function” tasks. Here is where I draw the line:
- Do use AI for: Breaking down big tasks, making grocery lists, scheduling, or body-doubling.
- Do NOT use AI for: Processing trauma, validation during an RSD episode, or crisis support.
If you are looking for tools that are built specifically for our brains, check out my Navigating ADHD & Adulthood digital guidebook. It is full of strategies that don’t involve talking to a robot.
And if you need to hear real human voices sharing real stories, come hang out with us on The Vibe With Ky Podcast. We keep it real, and we keep it human.
FAQs

Q. Is there any safe way to use AI for therapy?
A. It is best used as a supplement for organization and productivity, not emotional processing. Think of it as a secretary, not a psychologist.

Q. Can an AI diagnose my ADHD?
A. No. While it can list symptoms, only a licensed professional can evaluate your history and rule out other conditions.

Q. Why is AI bad for Rejection Sensitive Dysphoria?
A. AI tends to agree with you to keep the conversation going, which can validate your negative feelings rather than helping you challenge them.
You are doing better than you think. Don’t let a computer tell you otherwise.
Much love. Good vibes. – Ky
