AI in Mental Health Care: Supporting Well-Being
Natasha Tatta · Jan 8
⚠ Disclaimer. This content is provided for informational and educational purposes only. It does not replace medical, psychological, or therapeutic advice, diagnosis, or treatment by a qualified professional. If you are experiencing distress or persistent symptoms, please consult a healthcare professional or appropriate support services. For immediate mental health support in Québec, call or text 9-8-8, available 24/7.

Ignoring the growing implications of generative AI in mental health care means overlooking a major shift in how support and well-being are being approached.
Service shortages, long wait times, rising anxiety levels, and increased social isolation are driving demand for digital solutions that are accessible, low-cost, and increasingly personalized.
As a result, AI-powered tools are emerging as a complementary layer of support rather than a replacement for professional healthcare.
If you’re ready to take action, you can jump to the section on how to use AI in mental health care responsibly.
The question is no longer whether AI should be used in mental health care, but how it should be used. A recent study is shifting the tone of that debate. 👇
Therabot: When AI in Mental Health Care Shows Measurable Clinical Effectiveness
Researchers studied the effectiveness of a therapeutic chatbot called Therabot, a tool fully powered by generative AI, comparable to chatbots like ChatGPT.
The study, published in March 2025 in the New England Journal of Medicine AI, followed a rigorous research protocol. A total of 210 adults with clinically significant symptoms were randomly assigned to two groups. The first group used Therabot for four weeks, while the control group was placed on a waitlist and did not have access to the tool during that period. This made it possible to assess the specific impact of the AI intervention in the absence of therapeutic support.

The results were striking. Participants who used Therabot experienced significantly fewer symptoms than those in the control group.
Furthermore, their improvements didn't fade once they stopped using the tool. The benefits persisted for up to eight weeks following the study. On average, users spent more than six hours interacting with the chatbot, indicating sustained and voluntary engagement.
Another finding stood out: participants rated their relationship with the AI tool as just as satisfying as working with a human therapist. This detail is far from trivial. In psychotherapy, the therapeutic alliance (the feeling of being understood, heard, and supported) is a key predictor of treatment effectiveness. The fact that an AI system could reach this level of subjective perception raises important questions about the role it may play as a complement to traditional mental health care.
This is the first randomized controlled trial to demonstrate that a conversational AI can reduce mental health symptoms at a clinically meaningful level. However, the researchers remain cautious.
They emphasize the need to replicate these findings on a larger scale, across more diverse populations and over longer timeframes, before drawing definitive conclusions.
In other words, Therabot is neither a miracle solution nor a replacement for human psychotherapy. Still, it provides tangible evidence that generative AI can move beyond superficial emotional support and produce measurable effects in mental health care.
AI in Mental Health Care Beyond Clinical Therapy
Generative AI is not limited to formal therapeutic settings. Much of its current impact lies elsewhere: in everyday well-being, reducing loneliness, and supporting ongoing, accessible self-reflection. This is where AI-powered wellness tools are rapidly expanding.
Apps such as Manifest rely on AI-generated personalized affirmations to create brief moments of emotional connection. The goal isn't to treat a mental health condition, but to offer positive micro-interventions: a phrase that resonates, a gentle reminder, an invitation to pause and refocus.

The idea behind these applications is simple: if people already spend a significant amount of time on their phones, that same channel can be used to introduce healthy practices.
This approach favours short, frequent, and personalized interactions rather than long, formal sessions.
What’s being offered here is enhanced emotional support, not therapy. AI becomes a discreet companion, capable of reflecting emotional states, normalizing certain feelings, and encouraging perspective-taking. For a generation often hesitant to engage with traditional healthcare structures, this kind of initiative can make a meaningful difference.
When AI Helps Detect Mental Health Risks
Another promising application of AI in mental health care lies in prevention, particularly before critical situations arise. In Québec, since 2024, research teams from Université Laval, Université de Montréal, and Dalhousie University have been working on AI models designed to analyze and predict suicide risk using large-scale data.
The work is carried out in collaboration with the Institut national de santé publique du Québec (INSPQ), which provides access to extensive, structured datasets. AI is used to identify correlations, weak signals, and risk trajectories that would be extremely difficult for humans to detect at scale.
These models aren't intended to provide individual diagnoses. Instead, they work as decision-support tools, helping guide prevention strategies, prioritize interventions, and improve population-level understanding of mental health risk factors.
ChatGPT Health: Setting Clear Boundaries When Health Is at Stake
In the same spirit of prevention and responsibility, OpenAI recently announced ChatGPT Health, an initiative designed to more carefully frame how AI is used when conversations involve health and mental well-being.

The goal isn't to provide diagnoses or replace qualified professionals, but to improve the caution of responses, reinforce clear limitations, and more consistently direct users toward appropriate human support resources when distress is identified.
This approach reflects a broader trend in AI in mental health care: AI can support information, reflection, and orientation, as long as it's deployed with explicit safeguards and a clear awareness of its limits.
The Promise, Potential, and Limits of AI Therapists
Between everyday well-being tools and clinical research lie AI therapists, chatbots designed to mirror the structure of established therapeutic approaches, such as cognitive behavioural therapy (CBT).
Solutions like Sonia AI, focused on emotional support, or DrEllis, designed for men’s mental health, offer guided conversations, an empathetic tone, and continuous support around the clock.

Their main strength is accessibility. For people who hesitate to seek care, are on a waitlist, or are looking for support between sessions, these tools can provide structure, guided exercises, and a space for expression.
The conversations are often inspired by validated therapeutic frameworks, using open-ended questions, reflective responses, and cognitive reframing prompts.
That said, these AI tools carry no clinical responsibility, aren't suited for crisis situations, and cannot manage complex or urgent cases.
There are also risks related to emotional dependency, as well as important concerns around data privacy and protection. For these reasons, AI chatbots should be viewed strictly as a complement — never a substitute — to professional mental health care, especially for severe anxiety or depressive disorders.
What AI Does Well... and Likely Never Will
🟢 What AI does well:
- listen without judgment or fatigue
- analyze, reframe, and structure thoughts
- help normalize certain emotions
- suggest simple, repeatable exercises
- provide immediate availability

🔴 What AI does less well — or not at all:
- make clinical judgments
- fully grasp the complexity of human context
- replace a professional’s intuition and experience
- assume legal or ethical responsibility
- respond appropriately in crisis situations
When used thoughtfully, AI in mental health care can support, accompany, and help prevent escalation. However, it can also create false expectations and can never replace human connection. The future of AI in mental health care lies precisely in maintaining this balance.
How to Use AI in Mental Health Care Responsibly and Effectively
First, AI can serve as a tool for structured self-reflection. By asking the right questions, it helps put words to what feels unclear, identify recurring thought patterns, and create distance from intense emotions.
This type of use is helpful for exploring personal blocks, clarifying sources of stress, or beginning work on self-esteem.
To turn perceived weaknesses into strengths, break through psychological barriers, and step out of routine patterns, well-designed prompts can serve as a starting point.
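Purely as an illustration (a generic example to adapt to your own situation, not a prescribed formula), a self-reflection prompt of this kind might read:

“Act as a thoughtful coach. Ask me one question at a time to help me identify a recurring thought pattern that holds me back, then help me reframe it as a strength I can build on. Don’t offer advice until I’ve answered at least three questions.”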
Next, AI can support well-being and holistic health habits. It can accompany reflection on life balance, sleep, energy management, nutrition, physical activity, and the alignment between mental and physical well-being.
AI can act as a mirror or a guide — offering perspectives, asking questions, or suggesting exercises — without ever replacing medical or psychological care.
Prompts focused on holistic well-being can, for example, help explore the links between mental stress and physical fatigue or identify routines better suited to your personal rhythm.
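Again as a generic illustration (not a template to follow to the letter), such a prompt could look like this:

“Over the past two weeks I’ve felt mentally drained by late afternoon. Ask me a few questions about my sleep, workload, screen time, and physical activity, then suggest two or three small routine adjustments I could test this week.”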
It's essential to set clear boundaries. AI should never be used to manage a crisis, replace a diagnosis, or determine a treatment plan. In cases of significant distress or ongoing suffering, support from a qualified health professional remains essential.
AI in Mental Health Care: Supporting Well-Being With Discernment
When used thoughtfully, AI in mental health care can become a tool for reflection, clarity, and prevention that is accessible, ongoing, and complementary to human support. It cannot “treat” on its own and will likely never replace professional expertise, but the evidence suggests it can be quite helpful.
As with any powerful technology, the risk isn't blind enthusiasm or outright rejection. The real risk is using it without safeguards, boundaries, and critical thinking. Properly used, AI can act as a safety net, or a first step toward seeking human help.

Natasha Tatta, C. Tr., trad. a., réd. a. A bilingual language specialist, I pair word accuracy with impactful ideas. Infopreneur and Gen AI consultant, I help professionals embrace generative AI and content marketing. I also teach IT translation at Université de Montréal.




