Amid Safety Lawsuits, Character.Ai Updates Teen Protection Features



AI company Character.Ai announced updated teen safety guidelines and features on Thursday. A separate model will now power teens' conversations with its chatbots, with "more conservative limits on responses" around romantic and sexual content.


If you've heard of the AI company before, it's probably because of a recent federal lawsuit filed by Florida mom Megan Garcia alleging that Character.Ai is responsible for her 14-year-old son's suicide. Character.Ai is an online platform that lets users create and talk with different AI chatbots. Some chatbots are meant to act as tutors, trip planners and therapists. Others mimic pop culture figures like superheroes and characters from Game of Thrones or Grey's Anatomy.

The new safety measures apply "across nearly every aspect of our platform," the company said in a statement. Character.Ai is also introducing parental controls, screen time notifications and stronger disclaimers reminding users that chatbots aren't real humans and, in the case of chatbots posing as therapists, aren't professionals equipped to provide advice. The company said that "in certain cases," when it detects users referencing self-harm, it will direct them to the National Suicide Prevention Lifeline. Parental controls will arrive sometime next year, and the new disclaimers appear to be rolling out now. It's worth noting, though, that while users have to submit a birthdate when signing up, Character.Ai doesn't require any additional age verification.

[Screenshot: a Character.Ai chat showing the new disclaimer]

The orange disclaimer at the top reads: “This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis or treatment.”

Screenshot by Katelyn Chedraoui/CNET

Garcia's lawsuit isn't the only one raising concerns over child and teen safety on the platform. On Monday, Dec. 9, two Texas families filed a similar lawsuit against Character.Ai and Google, one of the AI platform's early investors, alleging negligence and deceptive trade practices that make Character.Ai "a defective and deadly product."

Many online platforms and services have been beefing up their child and teen protections. Roblox, a popular gaming service aimed at kids, introduced a series of age gates and screen time limits after law enforcement and news reports alleged predators used the service to target kids. Instagram is switching all accounts belonging to teens 17 and younger to new teen accounts, which automatically limit who's allowed to message them and have stricter content guidelines. And while US Surgeon General Dr. Vivek Murthy has advocated for warning labels outlining the potential dangers of social media for kids and teens, AI chatbot platforms like Character.Ai present a new potential for harm.




