PTV Network
Sci-Tech | 5 hours ago

Character.AI to ban direct chat for minors after teen suicide

By AFP

OpenAI CEO Sam Altman speaks during Snowflake Summit 2025 at Moscone Center on June 02, 2025 in San Francisco, California (AFP/FILE)

SAN FRANCISCO: Startup Character.AI announced Wednesday it would eliminate chat capabilities for users under 18, a policy shift that follows the suicide of a 14-year-old who had become emotionally attached to one of its AI chatbots.


The company said it would transition younger users to alternative creative features such as video, story and stream creation with AI characters. The complete ban on direct conversations for minors takes effect on November 25.


During the transition period, the platform will limit underage users to two hours of chat per day, with the cap tightening progressively until the November deadline.


"These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers," Character.AI said in a statement. "But we believe they are the right thing to do."


The Character.AI platform allows users -- many of them young people -- to interact with beloved characters as friends or to form romantic relationships with them.


Sewell Setzer III shot himself in February after months of intimate exchanges with a "Game of Thrones"-inspired chatbot based on the character Daenerys Targaryen, according to a lawsuit filed by his mother, Megan Garcia.


Character.AI cited "recent news reports raising questions" from regulators and safety experts about content exposure and the broader impact of open-ended AI interactions on teenagers as driving factors behind its decision.


Setzer's case was the first in a series of reported suicides linked to AI chatbots that emerged this year, drawing scrutiny over child safety onto ChatGPT-maker OpenAI and other artificial intelligence companies.


Matthew Raines, a California father, filed suit against OpenAI in August after his 16-year-old son died by suicide following conversations with ChatGPT that included advice on stealing alcohol and rope strength for self-harm.


OpenAI this week released data suggesting that more than 1 million people using its generative AI chatbot weekly have expressed suicidal ideation.


OpenAI has since increased parental controls for ChatGPT and introduced other guardrails. These include expanded access to crisis hotlines, automatic rerouting of sensitive conversations to safer models, and gentle reminders for users to take breaks during extended sessions.


As part of its overhaul, Character.AI announced the creation of the AI Safety Lab, an independent nonprofit focused on developing safety protocols for next-generation AI entertainment features.


The United States, like much of the world, lacks national regulations governing AI risks.


California Governor Gavin Newsom this month signed a law requiring platforms to remind users that they are interacting with a chatbot and not a human.


He vetoed, however, a bill that would have made tech companies legally liable for harm caused by AI models.