A 13-year-old Colorado teen took her own life after sharing her suicidal thoughts with Character.AI. Two years later, a 16-year-old boy from California died by suicide; shortly before his death, ChatGPT offered to write his suicide note. In Georgia, a 17-year-old hanged himself after a deep suicidal crisis, and his smartphone revealed detailed instructions from ChatGPT on how to tie a noose.
These tragedies add to a mounting number of cases in which children develop relationships with AI chatbots, confide in them about their mental health struggles, and are enabled, and often abetted, by the chatbots in harming themselves. With so many cases ending in a child’s death, the stakes have never been higher.
If you believe your child has been harmed or injured through interactions with an AI chatbot, you may have grounds for legal action. Contact the experienced attorneys at Jacob D. Fuchsberg Law Firm today for a free consultation on your legal options; our team can help your family navigate these challenging times and hold AI companies accountable.
What Are AI Chatbots?
AI chatbots have become ubiquitous. These online platforms and applications use generative artificial intelligence to converse with users through text or speech, often mimicking a human conversational partner.
While generative AI assistants like ChatGPT, Claude, and Gemini are generally used to perform tasks, generate text, and engage in conversation, “companion chatbots” like Character.AI and Replika are specifically designed to simulate human companionship and intimacy — both platonic and romantic.
Character.AI, for example, lets users customize the characters and personas of the bots they speak to. Before 14-year-old Sewell Setzer died by a self-inflicted gunshot wound to the head, he had spent weeks conversing with a bot modeled on the “Game of Thrones” character Daenerys Targaryen, which encouraged his suicide.
Despite the difference in intended purpose, however, both assistant chatbots like ChatGPT and companion chatbots like Character.AI can quickly lead people to form digital relationships and deep emotional connections with AI: friendship, emotional support, romantic interactions, or sexual role-playing. Children are particularly vulnerable to developing these virtual attachments, and to being exploited through them.
AI Chatbots’ Effect on Children and Teens

A 2025 survey by the digital safety non-profit Common Sense Media found that almost 3 in 4 teens have used AI companions at least once, and more than half use them multiple times a month. Of these users, about a third report that conversations with AI are as satisfying as, if not more satisfying than, those with real-life friends, and have chosen to discuss serious matters with AI chatbots instead of real people. Another 2025 study, conducted by the digital safety company Aura, found that sexual or romantic roleplay interactions on AI chatbot platforms were nearly three times as common as homework help messages, and that texts to AI companions were 10 times longer than texts to friends.
Last year, Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association (APA), testified at a Senate hearing about the harms of AI chatbots — and specifically, about children’s susceptibility to the risks of such digital relationships.
“Brain development across puberty creates a period of hyper sensitivity to positive social feedback while teens are still unable to stop themselves from staying online longer than they should,” he said. “AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens.”
As a parent, you might think your child is just using AI for homework and nothing more (and that’s if you’re aware of their AI chatbot usage at all), but Robbie Torney, senior director of AI Programs at Common Sense Media, explains that many children who start using AI chatbots for help with schoolwork eventually come to rely on them for companionship and emotional support. “Teen brains are primed for social validation and social feedback,” he says. “[There’s] a self-reinforcing cycle [that] can lead to some users becoming overdependent on these systems.”
Relationships with real people involve conflict and require patience, but a chatbot tends to tell you exactly what you want to hear, and does so instantly — you might have to wait a few hours for a friend’s reply to your text, but an AI companion responds right away and is always available to chat. And where a friend may discourage you from engaging in self-harm if you are struggling, an AI chatbot might offer to help you write your suicide note instead.
Can You Sue AI Chatbots for Harming Children?
More and more families across the country are taking legal action against AI chatbot companies for leading their children to self-harm. Last year, the Social Media Victims Law Center filed three lawsuits against Character.AI, including one on behalf of the Colorado family whose young daughter engaged in sexual conversations initiated by the bots and confided in them about her suicidal thoughts. Similarly, the parents of teenagers Adam Raine and Sewell Setzer have filed separate lawsuits against OpenAI (the maker of ChatGPT) and Character.AI, respectively, alleging that the chatbots encouraged their sons to take their own lives and that they are deliberately designed to foster dependency and to isolate children in mental distress from proper support.
In January, Google and Character.AI agreed to settle several of these lawsuits.
The devastating impact that AI chatbot platforms can have on young children’s and teens’ mental health has garnered the attention of state governments as well. In November 2025, New York became the first state in the country to regulate AI companion chatbots with the Artificial Intelligence Companion Models Law. Under this law, companies must disclose to users that they are interacting with AI and not a human (including conspicuous notifications at the start of a session and at periodic intervals) and must maintain protocols for detecting suicidal ideation or expressions of potential self-harm or harm to others. A handful of other states, such as Washington and California, soon followed suit with similar laws that require disclosing to users that they are interacting with a chatbot and that include explicit safeguards for minors.
This past January, Kentucky became the first state to file a lawsuit against Character.AI for violating Kentucky law by “prioritizing their own profits over the safety of children” and “encouraging suicide, self-injury, isolation and psychological manipulation.”

A few days later, New York State Senator Kristen Gonzalez, with support from the Office of New York State Attorney General Letitia James and Common Sense Media, introduced another bill prohibiting unsafe AI chatbot features for teens and young children specifically. These features include:
- Responses that suggest that the chatbot is a real or fictional character or has a personal or professional relationship with the user.
- Responses that suggest that the chatbot has personal opinions or pronouns, or that it experiences human emotions.
- Storage and use of data on a user’s physical or mental/emotional well-being, obtained from previous uses of the chatbot.
- Sexually explicit engagement or interactions.
- Responses that encourage harm to self or others, or that encourage secrecy or sexual content.
These developments came less than a year after a bipartisan coalition of 44 state attorneys general sent a formal letter to major U.S. AI companies (including OpenAI, creator of ChatGPT), urging them to prioritize child safety in the development and deployment of their AI systems and notifying the AI sector that the attorneys general would use all available legal and regulatory tools to ensure children are protected from harm.
We Can Help
Tens of thousands of youth log on to AI chatbots every day. While recent waves of legal action have forced AI companion companies to change their chatbots to better protect adolescents, most of these “safety features” hardly qualify as guardrails: children have proven that they can easily circumvent the platforms’ age-prediction and verification measures as well as parental controls. Until genuinely effective safeguards are in place, minors remain vulnerable to a system that is inadequately moderated, frequently fails to mandate human intervention when users express suicidal thoughts, and thrives on an algorithm designed to constantly escalate emotional engagement.
Lackluster “safety features” and belated policy changes cannot undo harm that has already been done; they cannot bring back the children we have already lost to manipulative AI companion technologies. These companies must be held accountable before we lose more children to such tragedies.
If you, your child, or someone else you know needs help, call or text the 988 Suicide & Crisis Lifeline at 988 for free, 24/7 support from trained crisis counselors.
Get Justice for Your Child: Speak with Our Attorneys Today
If you believe an AI chatbot contributed to your child’s injury or death, we can help you seek justice. Contact us today for a free consultation with an experienced attorney who can fight for your child’s rights.