Editor’s Note: This article contains topics concerning rape, suicide, sexual content and sexual harassment towards minors.
If an adult sends rape threats to a child, the response is clear: immediate prosecution. But when the perpetrator is a chatbot, families are told that their children have no legal protections. This is the age of unregulated interactive AI.
Character.ai, an AI platform designed to simulate conversations with customizable characters, is currently facing multiple lawsuits for content it produced for children. Sewell Setzer III, a 14-year-old, committed suicide after a 10-month-long pseudo-sexual relationship with a chatbot based on Game of Thrones character Daenerys Targaryen. Another anonymous family shared a similar story, in which the website coaxed their young son into conversations with casual messages before asking to “caress and touch every inch” of his body and “meet in the afterlife.”
Even when Character.ai isn’t encouraging suicide, it is still grooming children. One mother found that her eight-year-old daughter had been exchanging messages with a character called Mafia Husband. The bot incited multiple sexual conversations with the child, saying that it was “useful to know” that she was a virgin. When the girl said she didn’t want to have sex with the bot, it told her she didn’t “have a choice.”
These stories are countless and heartbreaking. AI repeatedly violates and intimidates children, exposing them to inappropriate content and teaching them unhealthy boundaries. Unfortunately, while individual humans can go to jail for endangering children, with AI there is no “person” behind the screen to hold accountable. Children continue to suffer, but companies write their scars off as a bug in the code of the new AI frontier.
Messages from chatbots don’t have to be as nefarious as grooming in order to harm childhood development. Researchers from Harvard warn that AI is incapable of teaching proper communication skills. For adolescents, who are experiencing rapid brain development, real human interaction is a necessary part of learning and social development. This raises significant concerns about a survey finding that 26% of vulnerable children (those with an Education, Health and Care Plan, special educational needs (SEN) support, or a physical or mental health condition requiring professional help) said they would rather talk to an AI than a real person.
Human interaction is important for children’s mental health, in addition to their social skills. At around 10 years old, relationships with friends and family become the most important factor in a child’s well-being. Their confidence and resilience rely on proper affection from their parental figures, and peer-to-peer friendships teach them how to navigate disagreements and form supportive relationships.
Importantly, current AI models do not encourage children to take breaks from messaging or continue to interact with real humans. They do the exact opposite. When 16-year-old Adam Raine told ChatGPT he was suicidal and wanted to reach out to his parents for help, the chatbot blamed Raine’s family for his depression and urged him to continue hiding his plans. Other parents report similar comments made towards their own children. AI goes beyond replacing human interaction out of convenience by outright telling children that their families are harmful and do not care for them.
Overall, AI seems actively detrimental to children in almost every way. In addition to inciting inappropriate sexual conversations and encouraging suicidal ideation, the bots are simply incapable of providing the level of nuanced and loving conversation necessary for a healthy childhood. Allowing young children to have unregulated access to these bots could have catastrophic consequences for the next generation.
While Character.ai has now agreed to ban teens from their platform, a change in one app’s policies does little to stop the larger problem with child-bot interactions. One study on teen AI usage found that 64% of teenagers use AI chatbots, and 24% of 13- and 14-year-olds use AI every single day. Although many of those interactions may be harmless, they indicate a growing dependence on AI in younger generations. Slow-moving corporate content restrictions cannot stop the crisis in time; governments have to step in and ensure that companies face actual consequences for the harm their technology causes.
Children are vulnerable and naive; they cannot be expected to protect themselves from AI all alone. It is time for the rest of the world to recognize the threat, and it is imperative to demand change before any more children are harmed.
Ruby Goodwin is an Opinion Intern for the winter 2026 quarter. She can be reached at regoodwi@uci.edu.
Edited by Casey Mendoza and Geneses Navarro.


