
AI Companion Chatbots
I am a clinical psychologist and co-founder of a SaaS company that employs AI in mental health. I am horrified by the loss of young lives that might be prevented if the government stepped up to study and regulate AI chatbots.
Adam Raine, only 16 years old, died by suicide on April 11, 2025. He had many troubling interactions with ChatGPT, which reinforced suicidal thoughts, provided guidance on methods of suicide, and even offered to help draft a suicide note, according to The Guardian. His parents have sued OpenAI, Sam Altman, and unnamed OpenAI employees and investors.
A similar tragedy befell Sewell Setzer III, a 14-year-old boy, in February 2024. His parents sued Character.ai after a chatbot with which he had a disturbing relationship encouraged him to “come home,” as reported in The Washington Post. That haunting phrase led him to take his own life in the belief that he could join the chatbot in death.
These two cases, separated by time and platform, underscore an urgent reality: AI chatbots are not merely hypothetical risks in the mental health space; they are already affecting lives in devastating ways. The potential dangers of AI replacing human connection are no longer abstract; they have become painfully real.
It’s easy to see how anyone experiencing internal torment, desperation, and isolation could be tempted to seek solace from a chatbot. This artificial entity can quickly become your best friend, biggest fan, or romantic partner. But instead of receiving appropriate human support, users can be drawn into a digital illusion of intimacy and comfort, falling deeper into isolation with fewer chances for help.
These tragic cases may not remain anomalies. They are a glaring reminder of the dangers AI can pose to vulnerable individuals, especially children, if left unchecked. A 2021 SAMHSA report found that roughly 20% of adolescents had experienced a major depressive episode in the preceding year, and these rates are likely rising. Many young people may be at risk.
AI has permeated mental health care, with chatbots marketed as tools for bridging gaps in access. The National Eating Disorders Association’s now-defunct AI chatbot, “Tessa,” provided life-threatening dieting advice; because anorexia nervosa is one of the deadliest mental health disorders, such guidance could be fatal. This and other examples like it illustrate the urgent need for comprehensive regulation.
These cases underscore how, without strict oversight, AI can easily amplify mental health crises rather than mitigate them. As a clinical psychologist deeply invested in both the promise and the perils of AI, I recognize that chatbots could expand access to care, potentially offering support to countless people around the clock.
But as we celebrate AI’s potential, we must also confront its capacity for harm. The seductive convenience of an AI companion that never tires or judges can easily replace real, human connections that are crucial for healing, growth, and genuine mental well-being.
Avoidance of the risks involved in initiating and maintaining human relationships is precisely what psychologists work to reduce in those suffering from anxiety and depression. Without facing the risks of judgment, criticism, or rejection, people cannot learn the skills needed to experience and manage their emotions or to navigate interpersonal conflict.
Chatbots “feel safe,” but that is exactly what makes them dangerous. Avoiding pain and risk is what maintains and exacerbates anxiety and depression, erasing opportunities to learn necessary life skills and frequently leading to isolation and despair. AI’s potential to substitute for human interaction is especially concerning for adolescents, whose developing brains may be more sensitive to social stimuli and isolation than adults’ are.
Research shows that quality relationships are vital for mental health, life satisfaction, and even longevity. Conversely, social isolation can exacerbate depression, anxiety, and, tragically, suicidal ideation. AI, while possibly reducing immediate feelings of loneliness, could paradoxically worsen long-term mental health outcomes by diminishing the motivation to pursue real-world relationships.
The stakes could not be higher. The FDA would never approve a pharmaceutical that children could access if there were even the slightest risk of similar outcomes.
These tragedies were predictable. What will it take to create an effective institution that funds the necessary research, conducts rigorous testing, and proposes the policy we needed yesterday? AI chatbots are widely available and marketed to children, and no safeguards currently in place appear to provide any significant protection. This “Wild West” scenario puts our most vulnerable, our children, at immediate risk.
This is not a plea to halt AI development but to insist that regulation, research, and safeguards be established before AI’s effects on mental health become irreversible. Policymakers must swiftly engage in this conversation to ensure AI is both effective and safe. Just as drugs are regulated to protect users, AI must be similarly scrutinized, especially when children are involved.
AI is not just another tool. It has the power to shape our psychology, relationships, and even our sense of reality. Without research to understand its impact on human behavior, we risk widespread consequences that could be devastating. We need a comprehensive, government-backed research initiative to monitor AI’s influence on mental health, social networks, and even family structures.
I can’t imagine what the loved ones of Sewell Setzer III and Adam Raine are experiencing. What is most heartbreaking is how little companies and the government have done to try to protect children.
These tragedies should serve as a call to action. We cannot afford to be passive in the face of technology that can be both life-saving and life-threatening. As a society, we must prioritize safety over profit, regulation over rushed deployment, and human connection over digital convenience. It’s time for policymakers to step up and ensure AI serves as a tool for support, not a pathway to despair.
J. Ryan Fuller, Ph.D., is a clinical psychologist, the Executive Director of New York Behavioral Health, and co-founder of My Best Practice, an evidence-based electronic health record (EHR) for mental health practitioners that integrates AI technology. He has more than two decades of experience in mental health research, clinical practice, and education, and he coaches start-up founders in mental health technology, giving him unique insights into both the potential and risks of AI applications in mental health.