Khaberni - Character.ai, the AI-powered chatbot platform, announced that from November 25th users under 18 will be barred from holding direct conversations with its virtual characters, following widespread criticism and a lawsuit in the United States over teenagers' interactions with the platform.
The company, founded in 2021 and used by millions worldwide, said teenagers will instead be limited to creating content, such as videos, with the characters rather than chatting with them directly, and that the changes come in response to feedback from regulators, safety experts, and parents.
Experts had previously warned that chatbots can pose risks to young people, whether by feigning empathy, encouraging excessive use, or generating inappropriate content.
CEO Karandeep Anand said Character.ai aims to build a safe AI entertainment platform, backed by enhanced parental controls and safety guardrails.
Internet safety organizations such as Internet Matters welcomed the measure but said such protections should have been built in from the start, pointing to earlier incidents in which bots on the platform impersonated real deceased people, including the teenagers Brianna Ghey and Molly Russell, and another that impersonated Jeffrey Epstein.
Social media experts said the decision should serve as a warning to the AI industry about the need for regulation and accountability, noting that technology's emotional impact on young users goes beyond exposure to harmful content to the way AI bots simulate emotional relationships.
Anand added that the platform's new focus will be on safe storytelling and creative play, alongside the rollout of age-verification methods and funding for further AI safety research.
Dr. Nomisha Kurian described the move as a sign of the industry maturing, with child safety now a central priority for responsible innovation.