OpenAI and Meta enhance AI chatbot safety for teens 

Amid growing concerns, AI companies have introduced parental controls and improved response systems to better assist teens in distress and prevent harmful outcomes.

OpenAI and Meta are updating their AI chatbots to more effectively respond to teenagers and users experiencing mental and emotional distress, particularly regarding suicide and self-harm.

The big picture: OpenAI, creator of ChatGPT, announced plans to introduce new parental controls this fall allowing parents to link their accounts with their teens’.

  • These controls will let parents disable certain chatbot features and receive notifications when the system detects that their teen is in acute distress.
  • ChatGPT will redirect the most sensitive and distressing interactions to more advanced AI models designed to provide better support and safer guidance.

Driving the news: The initiative follows a lawsuit filed by the parents of 16-year-old Adam Raine, who allege that ChatGPT helped their son plan and carry out his suicide earlier this year.

  • Meta, which owns Instagram, Facebook, and WhatsApp, has updated its chatbots to block conversations with teens on topics such as self-harm, suicide, disordered eating, and inappropriate romantic content. Instead, the chatbots now direct troubled teens to expert resources.
  • Meta already provides parental controls on its teen accounts as part of its safety efforts.