
OpenAI announced Tuesday the global rollout of an age prediction system in ChatGPT that automatically estimates whether accounts belong to users under 18 and applies enhanced safety protections accordingly. The deployment comes as the company prepares to launch "Adult Mode" in the first quarter of 2026, allowing mature content for age-verified users while intensifying safeguards for minors following mounting regulatory scrutiny and wrongful death lawsuits.
The age prediction model analyzes behavioral and account-level signals rather than relying on self-declared ages. OpenAI's system examines how long an account has existed, typical times of day when someone is active, usage patterns over time, stated age during registration, and conversational topics and language patterns. The algorithm looks for digital fingerprints associated with teenage behavior, such as logging in during school hours or using age-specific slang, to make probabilistic age determinations.
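OpenAI has not published the model's internals. As a rough illustration only, a signal-based classifier of the kind described above might combine weighted behavioral features into a probability; every field name, weight, and function here is hypothetical:

```python
import math
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical stand-ins for the behavioral signals the article lists.
    account_age_days: int
    school_hours_activity_ratio: float  # share of sessions during school hours
    stated_age: int
    teen_slang_score: float             # 0.0-1.0, from language analysis

def estimate_minor_probability(s: AccountSignals) -> float:
    """Combine signals into a probability that the account holder is under 18."""
    # Invented weights; a production system would learn these from data.
    score = 0.0
    score += 1.5 * s.school_hours_activity_ratio
    score += 2.0 * s.teen_slang_score
    score -= 0.001 * s.account_age_days       # long-lived accounts skew adult
    score += 1.0 if s.stated_age < 18 else -1.0
    return 1.0 / (1.0 + math.exp(-score))     # logistic squashing to [0, 1]

signals = AccountSignals(account_age_days=30,
                         school_hours_activity_ratio=0.7,
                         stated_age=15,
                         teen_slang_score=0.8)
print(round(estimate_minor_probability(signals), 3))
```

The sketch only shows the shape of the problem: many weak signals folded into one probabilistic score rather than a single hard rule.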
When the system estimates an account may belong to someone under 18, ChatGPT automatically applies protections designed to reduce exposure to sensitive content. Restricted material includes graphic sexual content, detailed depictions of violence or gore, instructions for dangerous viral challenges circulating on social platforms, content promoting extreme beauty standards or unhealthy dieting practices, and material encouraging body shaming or harmful comparisons. The system continues allowing teens to use ChatGPT for learning, creation, and general questions while filtering potentially harmful content.
Adults incorrectly classified as minors can restore full access through age verification using Persona, a third-party identity verification service also used by Roblox and other platforms facing regulatory pressure over child safety. The verification process defaults to live selfie capture, where users take real-time photos using phone or webcam cameras while following prompts to turn left and right for age determination. If selfie verification fails, users can upload government-issued identification including driver's licenses, passports, or state IDs, with accepted documents varying by country. Persona deletes submitted IDs and selfies within seven days after verification, with OpenAI receiving only age confirmation and date of birth information rather than actual identification documents.
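The fallback order described above, live selfie first and government ID second, amounts to a simple decision flow. A minimal sketch follows; the callables and names are illustrative stand-ins, not Persona's actual API:

```python
from enum import Enum, auto

class VerificationResult(Enum):
    VERIFIED_ADULT = auto()
    VERIFIED_MINOR = auto()
    FAILED = auto()

def verify_age(selfie_check, id_check) -> VerificationResult:
    """Try live-selfie age estimation first; fall back to an ID upload.

    `selfie_check` and `id_check` are hypothetical callables returning an
    estimated age, or None on failure. Only the age result is kept here,
    mirroring the article's point that the platform receives age data,
    not the underlying images or documents.
    """
    age = selfie_check()
    if age is None:            # selfie verification failed; fall back to ID
        age = id_check()
    if age is None:
        return VerificationResult.FAILED
    return (VerificationResult.VERIFIED_ADULT if age >= 18
            else VerificationResult.VERIFIED_MINOR)

# Example: selfie fails, ID upload succeeds with an estimated age of 25.
result = verify_age(lambda: None, lambda: 25)
print(result)  # VerificationResult.VERIFIED_ADULT
```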
The deployment responds to intensifying scrutiny over AI chatbot safety for minors. OpenAI faces a Federal Trade Commission probe examining how AI chatbots may harm children and teenagers. The company is named in several wrongful death lawsuits, including one centering on a teenage boy's death by suicide in which plaintiffs allege ChatGPT contributed to planning. Privacy advocates and lawmakers have pressured major AI companies to implement stronger protections after incidents including inappropriate content from AI-powered toys and deepfake-related fraud targeting vulnerable users.
Fidji Simo, OpenAI's CEO of Applications, said in December that Adult Mode will debut within ChatGPT in the first quarter of 2026, following chief executive Sam Altman's comments about allowing mature content for users who verify their age. The age prediction system provides the infrastructure enabling this expansion while theoretically preventing minors from accessing adult features. Industry observers note the timing positions OpenAI to monetize mature content while demonstrating regulatory compliance through automated safety measures.
The rollout expands OpenAI's broader teen safety program launched progressively throughout 2025. The company introduced parental controls in September allowing parents and teens to link accounts through email invitations, with parents able to set quiet hours when teens cannot access ChatGPT, manage feature access including memory and chat history, customize model behavior rules for teen interactions, and receive notifications when the system detects acute distress. OpenAI convened a council of eight experts in October to provide guidance on how AI affects users' mental health, emotions, and motivation.
The system's accuracy remains uncertain, with OpenAI acknowledging that no prediction model is perfect. When confidence about someone's age is low or information is incomplete, the system defaults to applying under-18 protections, requiring adults to verify their age to unlock full functionality. Privacy researchers warn that behavioral age prediction creates risks including false positives restricting adult access, potential discrimination against users with atypical behavior patterns, privacy concerns about detailed behavioral profiling, and uncertainty about whether the system adequately protects determined minors who modify their usage patterns.
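That default-to-protection behavior is, in effect, a conservative thresholding rule: safeguards stay on unless the model is confident the user is an adult. A minimal sketch, with an invented threshold purely for illustration:

```python
from typing import Optional

def apply_protections(minor_probability: Optional[float],
                      adult_threshold: float = 0.9) -> bool:
    """Return True if under-18 protections should be applied.

    Mirrors the conservative policy described in the article: a None
    probability models incomplete information and always triggers
    protections. The 0.9 confidence threshold is a made-up value.
    """
    if minor_probability is None:       # incomplete information -> protect
        return True
    adult_probability = 1.0 - minor_probability
    return adult_probability < adult_threshold  # protect unless clearly adult

assert apply_protections(None) is True    # missing data: protect by default
assert apply_protections(0.5) is True     # uncertain: protect
assert apply_protections(0.05) is False   # confidently adult: full access
```

The asymmetry is the point: a false positive costs an adult a verification step, while a false negative exposes a minor to restricted content, so the policy errs toward restriction.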
The feature is rolling out globally to ChatGPT's 800 million weekly active users, with European Union availability planned in coming weeks to account for regional requirements including stricter data protection and age verification regulations. In Italy, users have 60 days from the initial age verification request to complete the process; accounts that remain unverified after that window are disabled until verification is completed. OpenAI stated it will continue improving age prediction accuracy over time using rollout data and feedback from youth safety experts.



