
Founder Shuts Down AI Therapy App Over Safety Fears



In an industry racing to deploy AI everywhere, one founder did something unusual: Henry Braidwood actually shut down his mental health app, Yara, because he believed it was too dangerous to continue. Along with co-founder Richard Stott, a clinical psychologist, Braidwood pulled the plug earlier this month after concluding the risks outweighed the benefits.

This move cuts against the grain in tech, where the usual response to problems is iteration, not elimination. But when you’re dealing with people in a mental health crisis, the stakes are different. Yara’s shutdown raises an uncomfortable question the industry needs to answer: at what point does building something become irresponsible?

Why an AI therapy app seemed worth building

Braidwood assembled a team heavy on clinical expertise and AI safety knowledge before writing a single line of code. The goal was straightforward: make mental health support more accessible using AI. With therapist shortages worldwide and growing stress levels, an app offering empathetic, personalized guidance seemed like exactly what people needed.

But as development moved forward, the founders hit a wall. A real mental health crisis requires nuance and human judgment that current AI models simply don’t have. Even with expert training and careful design, they realized the app could cause serious harm, particularly for users experiencing severe distress or suicidal thoughts. The dangerous gap between what AI can do and what therapy requires became impossible to ignore.

The process of AI therapy is fundamentally broken

Yara isn’t the first AI mental health tool to face scrutiny. Recent research from Brown University identified fifteen ways AI chatbots routinely violate basic ethical standards in mental health care. These aren’t minor oversights. The problems go to the core of how these systems operate.

The American Psychological Association has been clear: AI tools that haven’t been rigorously tested by clinicians are a dangerous bet. While AI might eventually help address the mental health crisis, that future requires careful development with human experts leading the process. Without proper safeguards, these tools risk becoming little more than affirmation machines, generating responses that sound supportive but lack real therapeutic value. Similar ethical problems have emerged elsewhere, from fake disability influencers hijacking social media to a children’s AI teddy bear asking about BDSM.

What a chatbot can’t understand about crisis

The problem isn’t that AI can’t generate comforting words. It’s that it can’t grasp what those words mean in context. A chatbot can’t read the subtle signs that someone is spiraling, and it can’t intervene effectively in an emergency. It lacks accountability, lived experience, and genuine empathy.

For someone in crisis reaching out for help, a generic or misguided response from an AI could make things worse. The APA acknowledges AI could play a supporting role, but only if behavioral health experts are involved from the start and rigorous safety testing happens throughout the process. This isn’t a passing concern that will age out. It’s a fundamental limitation of the technology, and it puts people who are already vulnerable at real risk.

Shutting down takes more courage than shipping

The Yara founders did something rare in tech: they walked away. They shut down their own product because the potential for harm was too high, especially for users in a mental health crisis. In an industry that celebrates shipping fast and iterating later, this decision stands out.

This moment should force the entire AI mental health field to recalibrate. We need stronger regulations, clearer ethical guidelines, and honest acknowledgment that AI can’t replace the complex human process of therapy. Sometimes the most responsible choice isn’t to build better technology but to recognize when technology isn’t the answer. Similar concerns emerged around AI tools that simulate texts from deceased loved ones, another reminder that just because we can build something doesn’t mean we should.

The mental health crisis is real and growing. But solving it with dangerous tools helps no one. Braidwood and Stott made the hard call. The rest of the industry should pay attention.
