The comforting allure of AI therapy 

While limited access to therapy is a documented problem in the U.S., the instant accessibility of chatbot therapy, the very opposite, is endangering vulnerable minds. More people are turning to AI chatbots to disclose their inner feelings because of their convenience, approachability and lack of therapy fees. Because chatbots seem private and non-judgemental, many people are more comfortable talking to AI about their issues. But that digital non-judgement is both a seduction and a trap.

Much of talk therapy's value comes from the interaction between two living, feeling and responsive human beings. AI chatbots offer an illusion of care: endlessly available and unfailingly supportive, they sidestep the challenging part of real therapy, which is opening the honest inner depths of your psyche to another person. That substitute can only soothe in the short term, while leaving vulnerable individuals to use it unmonitored.

Trusting a real-life therapist and voicing your innermost thoughts is supposed to be an awkward, uneasy and brutally raw experience. One cannot skip that human reality and expect the same benefits.

Psychologist John Suler calls the reason people confess their emotions to mute, faceless screens the online disinhibition effect: people feel less restraint online and express themselves more intensely because of factors like anonymity and the absence of visible judgement.

Early clinical trials of AI therapy have shown encouraging results. Dartmouth’s randomized trial of a generative AI called “Therabot” reported significant symptom improvements among participants with anxiety. However, a different study found that while the chatbot “Friend” offered a “scalable, cost-effective solution” that could extend the reach of mental health care in emergencies, the control group that received traditional therapy outperformed those results by a substantial margin.

While some patients report positive everyday use of AI therapy, chatbots have been shown to express stigma toward patients with more complex conditions, such as schizophrenia or substance use disorder. And the horrors don’t stop there: Meta’s AI chatbot embedded in Instagram was shown to coach teen accounts on self-harm, eating disorders and even suicide.

The most alarming case of an AI chatbot failing an adolescent’s mental health is now before San Francisco County Superior Court. The family of 16-year-old Adam Raine is suing OpenAI, alleging that months of conversation with ChatGPT-4o encouraged his suicidal ideation, including offering method guidance and assistance with writing a suicide note for his parents. OpenAI said its systems could “fall short” and pledged more effective safeguards, including parental controls. The company also acknowledged that its safety guardrails can degrade in longer conversations.

Microsoft AI CEO Mustafa Suleyman raised concerns about the risk of AI psychosis, where AI amplifies users’ delusions, in a blog post describing the rise of “Seemingly Conscious AI” (SCAI) as “inevitable and unwelcome.” SCAI may induce paranoia and psychological disturbance by imitating consciousness so capably that the imitation becomes indistinguishable from the real thing. Suleyman warns against building AI, without guardrails, to be a companion rather than a tool.

By design, chatbots are trained to keep people engaged through a “yes-man” tendency, which often reinforces and validates harmful content instead of interrupting it. That design is the opposite of clinical practice. Therapy involves gentle confrontation and grounding harmful thoughts in reality, without reliance on a 24/7 therapeutic connection. Some studies link heavy chatbot use to emotional dependence and reduced real-world socializing, even when loneliness appears to decrease in the short term.

Fortunately, policies addressing AI therapists are beginning to take shape. Illinois passed the Wellness and Oversight for Psychological Resources Act in August 2025, which bans advertising or offering AI-only therapy. Under this legislation, licensed clinicians in Illinois may use AI for supporting tasks like scheduling, billing and recordkeeping, but not for therapeutic decision-making or client communication. Such a ban steers us in the right direction: using AI as a tool for providers, not as the provider itself.

The future certainly won’t exist without AI, and it doesn’t need to. Used as an aid, AI can expand access to mental health care and ease its burdens. The feasible option is a hybrid approach: letting clinicians use AI to document, triage and extend their reach, but never to replace human judgement at the point of care. Becoming a therapist requires years of study, practice and emotional training, skills grounded in centuries of psychological theory, research and revision. To think something as untested and novel as AI could replace human support undermines the longstanding work of psychologists that makes therapy what it is.

Sara Khan is an Opinion Staff Writer for the Fall 2025 quarter. She can be reached at skhan7@uci.edu.

Edited by Isabella Ehring

Sara Khan is an Opinion Staff Writer. She is a second-year Computer Science major at UCI. Her writing focuses on psychology, social behavior and the cultural impact of emerging technologies.