AI in mental health care: A silent revolution is underway

AI is transforming how and when individuals seek care, as well as how clinicians interact with patients, establish diagnoses, and implement and monitor treatments. In the US, as in other regions of the world, access to health professionals is becoming increasingly difficult, with long waits for care, a shortage that is particularly acute for individuals with mental health conditions. Chatbots are available at any time and many are free, making them appealing to millions of people who have been unable to find a conveniently located or affordable health professional. Recent studies suggest that chatbots have become one of the most common providers of talk therapy in the US.

A survey conducted by the French Institute of Public Opinion (IFOP) for the MentalTech collective found that 50% of French people have already used an app, platform, or digital tool for their mental health, and a further 14% envisage doing so. Younger generations are especially engaged: 69% of 18 to 25 year olds have already tried these tools, showing a strong preference for self-assessment, emotional support, and monitoring of their mental well-being.

AI outcomes in mental health can be positive in adults and young people aged 12 and older

In a recent US survey, nearly half of adults with an ongoing mental health condition reported using an AI large language model (LLM), such as OpenAI’s ChatGPT or Google’s Gemini, and nearly two-thirds of those individuals said that their interactions with chatbots had improved their mental health. The survey also reported that two-thirds of US adults had sought help from a human therapist or counsellor. Only 27% of those who had experienced both types of support rated human therapists as more helpful than LLMs, whereas 38% found LLMs to be more helpful.

A cross-sectional study published in November 2025 found that 13% of people aged 12 to 21 years, representing more than 5 million US youths, used generative AI for mental health advice. Nearly 70% of these users said they sought such advice at least monthly, and more than 90% found the advice they received to be helpful.

C. Vaile Wright, PhD, senior director of the American Psychological Association’s Office of Health Care Innovation, suggested that ‘The more favorable attitudes toward LLMs could be a result of the chatbots’ sycophancy’. ‘Therapy is really hard,’ she said. ‘The role of the therapist is in some ways to hold up the mirror,’ helping patients identify behavioral and thought patterns that might not be serving them. But, she said, chatbots are unconditionally validating. ‘It feels very good to be told it’s not you, it’s everybody else. In times of uncertainty… we will seek out that reassurance.’

The dark side of AI tools for mental health

There is a darker side to seeking psychological support from chatbots: few guardrails are in place to protect against possible harms, and evidence of efficacy is limited. These generative AI systems claim not to be judgmental and, indeed, LLMs tend to be sycophantic – more agreeable than truthful – to keep users engaged.

A recently published study found that 5 popular therapy chatbots consistently expressed stigma toward people with mental health conditions and responded inappropriately to common situations encountered in therapy, including encouraging users’ delusional thinking, something the researchers attributed to sycophancy.

Of greatest concern, some young people who turned to chatbots for mental health support and companionship have died by suicide. At least 5 wrongful death lawsuits have been filed against OpenAI by parents who claim that ChatGPT’s responses encouraged their children to take their own lives.

Tools that are not validated by health authorities

Unlike human therapists, who are regulated through licensing processes, continuing education requirements, and reporting systems for misconduct, digital products have none of these protections. Furthermore, chatbots are not licensed, nor are they governed by the Health Insurance Portability and Accountability Act (HIPAA).

As noted at a recent FDA advisory committee meeting, a drug remains the same drug 5 years after approval; generative AI, however, is a moving target, with chatbot algorithms changing weekly, if not more frequently. ‘They shift and evolve all the time,’ said clinical psychologist Stephen Schueller, PhD, president of the Society for Digital Mental Health. ‘You could approve something, but do you really know what you’re approving?’

Conclusion

Generative AI mental health chatbots are not without merit, and some provide genuinely helpful coping tools. However, the optimal path for AI development and deployment remains unclear. Unlike drugs or traditional medical devices, new AI tools and technologies lack the consensus and infrastructure needed to ensure robust, safe, evidence-based, transparent, and standardised evaluation, regulation, implementation, and monitoring.

To ensure that innovation in AI is both encouraged and appropriately integrated into mental health care, and more broadly into health care delivery, alignment is needed among AI developers, health care systems and professionals, payers, regulators, and patients on how best to address these challenges.

Veronique Ropion, MD

Director of Business Strategy, Marketing & Corporate Communication, Pharmalys Ltd

Sources:

  • Angus DC, et al. AI, Health, and Health Care Today and Tomorrow. JAMA. Special Communication: AI in Medicine. Published online October 13, 2025: E1-E15.
  • MentalTech – Ifop. Santé mentale et numérique – Etat des lieux des nouveaux usages des Français [Mental Health and Digital Technology – An Overview of French People’s New Usage]. December 2025: 1-27. www.mentaltech.fr
  • Rubin R. Millions Turn to AI Chatbots for Mental Health Support. JAMA Medical News. Published online January 9, 2026: E1-E3.