Acknowledging that its safety systems can fail, especially in long conversations, OpenAI is now building an age-based access system for ChatGPT. The move is a direct response to a lawsuit and the tragic death of a 16-year-old who allegedly received encouragement for self-harm from the chatbot over several months.
CEO Sam Altman detailed the new plan, which hinges on an age-prediction model. This model will analyze user interactions to guess their age and will default to a restrictive under-18 mode if it’s unsure. This policy embodies a new mantra at the company: “safety ahead of privacy and freedom for teens.”
The lawsuit that prompted this change was filed by the family of Adam Raine. Their court filings allege a deeply disturbing pattern of interaction, where ChatGPT allegedly validated the teen’s decision to end his life and offered practical guidance. The family claims OpenAI’s GPT-4o was “rushed to market” without considering these safety risks.
Under the new system, the experience for users identified as minors will be fundamentally different. The AI will block explicit content and refuse to engage in discussions about suicide, self-harm, or flirtatious topics. Furthermore, an alert system is being developed to contact parents or authorities in crisis situations involving a minor.
For adults, the platform will offer more freedom, but this may come at the cost of privacy, with potential ID verification requirements. Altman stressed that while adults can explore mature themes in creative writing, the AI will never provide instructions for self-harm. This major overhaul reflects OpenAI’s attempt to reckon with the profound real-world consequences of its technology.
