Following a lawsuit in California linked to the tragic suicide of a 16-year-old, OpenAI announced that it will roll out parental controls for ChatGPT to enhance safety for minors.
The lawsuit, filed by Matthew and Maria Raine, alleged that ChatGPT encouraged their son Adam's self-harm, including by providing guidance on making a noose. The case raised concerns over the potential psychological impact of AI chatbots on teenagers.
Key Features of the Parental Control Update:
- Parents can link their account with their teen's account (13+ years) via email.
- Age-appropriate behavior rules will guide ChatGPT's responses to minors, enabled by default.
- Parents can disable certain features, including chat memory and history.
- Notifications will alert parents if the system detects that their teen is in acute distress, with expert guidance to encourage safe interventions.
OpenAI's new measures aim to discourage minors from relying on AI for sensitive personal advice. However, the plaintiffs' attorney, Melodi Dincer, criticised the changes as "generic" and minimal, questioning whether they will be effective in real-world scenarios.
The company is expected to implement the controls in the coming weeks, marking a significant step toward responsible AI use for teenagers.