Character.AI Announces Parental Controls for Teen Users
Chatbot service Character.AI announced today that it will soon launch parental controls for teenage users, and it described safety measures it has taken in recent months, including a separate large language model (LLM) for users under 18. The announcement follows press scrutiny and two lawsuits claiming the service contributed to self-harm and suicide.
New Safety Features
In a press release, Character.AI said that, over the past month, it’s developed two separate versions of its model: one for adults and one for teens. The teen LLM is designed to place “more conservative” limits on how bots can respond, “particularly when it comes to romantic content.” This includes more aggressively blocking output that could be “sensitive or suggestive,” but also attempting to better detect and block user prompts that are meant to elicit inappropriate content. If the system detects “language referencing suicide or self-harm,” a pop-up will direct users to the National Suicide Prevention Lifeline, a change that was previously reported by The New York Times.
Minors’ Interactions Limited
Minors will also be prevented from editing bots’ responses — an option that lets users rewrite conversations to add content Character.AI might otherwise block.
Additional Features
Beyond these changes, Character.AI says it’s “in the process” of adding features that address concerns about addiction and confusion over whether the bots are human, complaints made in the lawsuits. A notification will appear when users have spent an hour-long session with the bots, and an old disclaimer that “everything characters say is made up” is being replaced with more detailed language. For bots that include descriptions like “therapist” or “doctor,” an additional note will warn that they can’t offer professional advice.
Parental Control Options
The parental control options are coming in the first quarter of next year, Character.AI says, and they’ll tell parents how much time a child is spending on Character.AI and which bots they interact with most frequently. All the changes are being made in collaboration with “several teen online safety experts,” including the organization ConnectSafely.
Conclusion
Character.AI is taking steps to address concerns about the safety and well-being of its teenage users. The new safety features and parental control options aim to make the experience safer and more transparent for minors. While the company still faces lawsuits and criticism, these changes signal an effort to tighten its policies and product.
FAQs
Q: What are the new safety features?
A: The new safety features include a separate large language model (LLM) for users under 18, more aggressive blocking of sensitive or suggestive content, and better detection and blocking of user prompts meant to elicit inappropriate content.
Q: How will minors be prevented from editing bots’ responses?
A: Minors will no longer have the option to edit bots’ responses, a feature that let users rewrite conversations to add content Character.AI might otherwise block.
Q: What are the parental control options?
A: The parental control options will allow parents to track how much time their child is spending on Character.AI and which bots they interact with most frequently.
Q: When will the parental control options be available?
A: The parental control options will be available in the first quarter of next year.
Q: Who is collaborating with Character.AI on these changes?
A: Character.AI is collaborating with several teen online safety experts, including the organization ConnectSafely.