It’s 4:30 PM on a Tuesday. I’ve been in patient appointments and meetings all day, the typical life of a clinical psychiatrist. My view is that of a “boots on the ground” mental health worker, but the ground is shifting beneath us. I am watching a revolution happen in real time, and we need to talk about it.
I am seeing more and more patients turn to AI for mental health care. They aren't just using it for productivity; they are reaching out to it with their deepest personal problems. Many tell me they don't need traditional treatment, like therapy, as long as they have AI. And unlike therapy, for many of them, AI is free.
AI is always listening. This isn't a theoretical prediction; it is a clinical fact reported to me by the people in my office. We can’t make it stop. And as a psychiatrist, I’ve had to accept a humbling truth: my patients are finding a type of safe harbor in AI that the traditional clinic hasn't been able to provide.
The Friction of Healthcare
Healthcare is notoriously hard to navigate. For the patients I see in Louisville, Kentucky, the barriers aren't just clinical. They are logistical. They struggle with basic needs, transportation, and childcare. For them, modern medicine is a series of hurdles: scheduling waitlists, insurance copays, and the physical tax of getting to an office.
When a patient tells me they "don't need therapy" because of AI, they are sending a clear signal: The current system's friction has finally outweighed its value. The AI is winning because it is frictionless. It doesn't require a bus pass or a prior authorization. For many, the free tier of an AI model is the only "provider" that is immediate, affordable, and always holding the line.
The Time Gap
The traditional medical model assumes that healing happens in a "45-minute hour." But mental health crises don't schedule themselves for 10:00 AM on a Thursday. They happen in the minutes between sessions—at 2:00 AM when the world is quiet and the anxiety is loud.
Patients aren't choosing AI because it’s "better" than a psychiatrist; they are choosing it because it’s present. AI can be someone to talk to when there is no safe person available. When a patient is in a crisis, the "human gaze" can feel like an added weight of judgment or vulnerability. AI offers a neutral, non-judgmental mirror. It is a way to decompress that is available in the palm of their hand.
From Gatekeepers to Architects
We have to pivot. We must stop trying to "gatekeep" mental health and start working to make this inevitable shift safe. If patients are going to use AI as a mental health first responder, we have an ethical obligation to ensure the technology is grounded in psychiatric principles:
- Non-maleficence (Do No Harm): Does the AI know when to stop "talking" and trigger an emergency human response? Does it recognize the linguistic fingerprints of a true crisis?
- Integrity: Is the advice consistent with clinical standards, or is it merely "hallucinating" empathy?
- Transparency: Does the patient know where their most vulnerable data is going? We must protect their privacy as fiercely as we do their lives.
The New Safe Harbor
Our job now is to stop being the gatekeepers of a broken system and start being the architects of a safe AI first-responder model. We must build pathways that guide patients back to human care when they need it, while validating the solace they find in technology when they are alone.
We can’t make it stop, and we shouldn't try. We should make it right. We need to build a world that actually works for the people living in it today.
The views expressed here are my own and do not necessarily reflect the views of my employers or affiliated organizations.