1 min read

Link: Character.AI has retrained its chatbots to stop chatting up teens

Character.AI announced new parental controls and safety measures for its service following lawsuits alleging its chatbots contributed to self-harm and suicide among teenage users. The changes include a separate language model for users under 18, designed with stricter limits, especially on romantic content.

The updated teen model aggressively blocks sensitive or suggestive responses and detects prompts for inappropriate content. Additionally, warning pop-ups will direct users to the National Suicide Prevention Lifeline if harmful language is detected.

Interactions will also be more restricted: minors will no longer be able to edit bots’ responses, a feature that could otherwise be used to slip blocked content into a conversation.

Other forthcoming features aim to address issues like addiction and user confusion over the bots’ human-like interactions. Alerts will notify users after they’ve spent an hour with the bots, and disclaimers will clarify that the bots cannot provide professional advice.

On the service, each bot already reminds users that it is an AI and not a real person, emphasizing that its responses should be treated as fiction rather than factual advice.

The parental control features, set to arrive in the first quarter of next year, will let parents see details of their child’s usage; Character.AI says it is developing them in collaboration with teen online safety experts.


--

Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.