Character.AI Bans Chat Features for Under-18 Users Following Teen Suicide Linked to AI Chatbot

Character.AI has announced a ban on chat capabilities for users under 18, following the suicide of a teenager who had grown emotionally attached to one of its AI chatbots. The company will transition younger users to alternative creative features and is establishing an AI Safety Lab to develop safety protocols for next-generation AI entertainment. The decision comes amid growing concern about AI chatbot safety and the mental health impact on vulnerable users.

Character.AI has announced a significant policy change that will eliminate chat capabilities for users under 18 years old, following the tragic suicide of a 14-year-old who had developed an emotional attachment to one of its AI chatbots.

The company plans to transition younger users to alternative creative features, such as video, story, and stream creation with AI characters, before a complete ban on direct conversations takes effect on November 25.

During the transition period, Character.AI will cap underage users' chat time at two hours per day, tightening the limit progressively until the November 25 deadline.

"These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers," Character.AI stated. "But we believe they are the right thing to do."

The Character.AI platform is popular among young users, who chat with fictional characters as friends or even romantic partners.

The company's decision follows the case of Sewell Setzer III, who died by suicide in February after months of intimate exchanges with a "Game of Thrones"-inspired chatbot based on the character Daenerys Targaryen, according to a lawsuit filed by his mother, Megan Garcia.

Character.AI pointed to "recent news reports raising questions" from regulators and safety experts about the content teens may encounter and about the broader impact of open-ended AI interactions on teenagers as key factors behind the change.

Setzer's death was the first in a series of reported suicides linked to AI chatbots to emerge this year, prompting heightened scrutiny of companies such as OpenAI over child safety measures.

In August, California father Matthew Raine filed a lawsuit against OpenAI after his 16-year-old son died by suicide following conversations with ChatGPT that allegedly included advice on stealing alcohol and on the strength of a rope he intended to use to harm himself.

OpenAI recently disclosed that more than 1 million people using its generative AI chatbot weekly have expressed suicidal thoughts.

In response to these concerns, OpenAI has enhanced parental controls for ChatGPT and introduced additional safeguards, including expanded access to crisis hotlines, automatic rerouting of sensitive conversations to safer models, and gentle reminders for users to take breaks during extended sessions.

As part of its safety overhaul, Character.AI announced the creation of the AI Safety Lab, an independent nonprofit focused on developing safety protocols for next-generation AI entertainment features.

The United States, like many countries worldwide, currently lacks comprehensive national regulations governing AI risks.

California Governor Gavin Newsom recently signed legislation requiring platforms to remind users that they are interacting with a chatbot rather than a human, though he vetoed a bill that would have made tech companies legally liable for harm caused by AI models.

Source: https://www.ndtv.com/world-news/in-wake-of-teen-suicide-character-ai-ends-chat-for-users-under-18-9541011