OpenAI Will Add Parental Controls to ChatGPT This Month

OpenAI plans to add parental controls to ChatGPT and route more sensitive conversations to its more advanced models, seemingly in response to recent cases where the AI allegedly encouraged suicide or other harm. 

The company has been working on safeguards behind the scenes, but “wants to proactively preview our plans for the next 120 days, so you won’t need to wait for launches to see where we’re headed,” according to a Sept. 2 blog post. 

Parental controls will let parents access their teen’s account and disable certain features

Parental controls will appear as an option within ChatGPT in September. These will allow parents to:

  • Link their account to their teen’s account.
  • Select model behavior rules designed for teens (which are on by default). 
  • Disable features like memory and chat history.
  • Receive alerts if the system flags “acute distress” on their teen’s account. 

OpenAI said it consulted with experts to ensure these controls maintain trust between parents and teens. ChatGPT is not recommended for users under 13.

“Safety features like these not only protect vulnerable users but also set an important precedent for responsible AI governance,” said Alon Yamin, co-founder and CEO of AI-based content verification service Copyleaks, in an email to TechRepublic. 

“Transparency, control, and accountability must become baseline expectations, not afterthoughts. If companies want AI to be widely adopted at scale, whether in classrooms, workplaces, or homes, they must demonstrate that safeguarding human well-being is just as important as technological advancement.”

Difficult conversations may automatically prompt ChatGPT to select a reasoning model 

OpenAI is also exploring modifications to how ChatGPT selects the model to use when addressing mental health crises or emotionally complex topics. 

Reasoning models, such as GPT‑5-thinking and o3, are designed to spend more time processing user messages, applying additional context before generating a response. Because of this, OpenAI said some conversations involving “sensitive” topics or signs of “acute distress” will “soon” be automatically routed to these more deliberative models.

OpenAI’s decisions around health and safety will be guided by an Expert Council on Well-Being and AI, as well as the Global Physician Network, the company said. The council consists of professionals in youth development, mental health, and human-computer interaction who provide guidance on how AI can improve people’s lives. Future versions of parental controls will be developed in part based on this group’s feedback. 

Meanwhile, the Global Physician Network consists of 250 physicians in 60 countries. OpenAI said 90 of these experts from 30 countries have already contributed research to help shape how its models behave in conversations about mental health. OpenAI has also added specialists in eating disorders, substance use, adolescent health, and other areas. 

In other Big Tech news, Meta recently removed accounts that had been spamming Facebook with AI-generated images depicting the Holocaust.
