ChatGPT will verify your age soon, in an attempt to protect teen users




ZDNET’s key takeaways

  • OpenAI CEO Sam Altman discussed teen safety, freedom, and privacy.
  • Teens will get added safety measures, at the cost of some freedom.
  • Adults will retain the freedom to use the chatbot as they want. 

Chatbot users are increasingly confiding in tools like ChatGPT for sensitive or personal matters, including mental health issues. The consequences can be devastating: A teen boy took his own life after spending hours chatting with ChatGPT about suicide. Amidst a resulting lawsuit and other mounting pressures, OpenAI has reevaluated its chatbot’s safeguards with changes that aim to protect teens, but will impact all users.

Also: How people actually use ChatGPT vs Claude – and what the differences tell us

In a blog post published on Tuesday, OpenAI CEO Sam Altman explored the complexity of implementing a teen safety framework while adhering to his company’s principles, which conflict when freedom, privacy, and teen safety are weighed as separate goals. The result is a new age-prediction model that attempts to determine a user’s age to ensure teens have a safer experience, even if it means adult users may have to prove they’re over 18, too. 

New teen protections, including age prediction

Altman said OpenAI is prioritizing safety over privacy and freedom with added measures specifically for teens, primarily by building an age-prediction system that estimates a user’s age based on how they use ChatGPT. The chatbot is intended for users 13 years or older, so the first step of verification will be identifying which users are between 13 and 18 years old. 

Also: OpenAI has new agentic coding partner for you now: GPT-5-Codex

It’s unclear when the company will roll out age verification, but it is currently in the works. 

When in doubt, ChatGPT will default users to the under-18 experience and, in some cases and countries, will ask for an ID. Altman acknowledges that this is a privacy compromise for adults, but sees it as “a worthy tradeoff.” 

In the teen version, ChatGPT will be trained not to talk flirtatiously, even if asked, or to engage in discussion of suicide in any setting. If an underage user shows signs of suicidal ideation, OpenAI will attempt to contact the user’s parents and, if they can’t be reached, the authorities. The company announced these policy changes following an April incident in which a teenage boy took his own life after spending hours chatting with ChatGPT about suicide. 

For many, an ideal chatbot experience would maximize assistance and minimize objections while keeping personal information private and refusing harmful queries in the interest of safety. However, as Altman’s blog explored, many of these goals are at odds with one another, which makes balancing them especially challenging when it comes to teen usage. The new policies are an attempt to prevent similar tragedies among underage users, even if they curtail some experiences for everyone else. 

AI chatbots, privacy, and safety issues

People are increasingly using AI chatbots to talk through private or sensitive matters, as they can act like unbiased confidants who can help you make sense of difficult topics, such as a medical diagnosis or legal issue, or just provide a listening ear. In July, Altman said that the same privacy protections that apply to a doctor or a lawyer should apply to a conversation with AI. The company is advocating for said protections with policymakers. 

Also: FTC scrutinizes OpenAI, Meta, and others on AI companion safety for kids

In the meantime, Altman wrote in the blog that OpenAI is developing “advanced security features” meant to keep users’ information private even from OpenAI employees. Of course, there would still be some exceptions, such as when the automated systems identify serious misuse, including a threat to someone’s life or an intent to harm others, which would require human review. 

Altogether, these measures follow a larger trend of users turning to ChatGPT for mental health concerns or as a stand-in therapist. While ChatGPT and other generative AI chatbots can be good conversationalists, they are not meant to replace a medical professional, nor are they qualified to act as one. A Stanford University study even found that AI therapists can misread crises, provide inappropriate responses, and reinforce biases. 

At the same time, the Federal Trade Commission (FTC) is investigating AI companions marketed toward young users and their potential dangers. 

Maintaining user freedom

Altman also explored how to maintain freedom, emphasizing that the company wants users to be able to use its AI tools however they want, within “very broad bounds of safety.” As a result, the company has continued to expand what adult users are allowed to do. For example, Altman says that while the chatbot isn’t built to be flirty, users who want it to be can enable that trait.

Also: Anthropic says Claude helps emotionally support users – we’re not convinced

In more drastic cases, even though the chatbot shouldn’t default to giving instructions on how to commit suicide, if an adult user wants ChatGPT to depict a suicide to help write a fictional story, the model should help with the request, according to Altman. 

“Treat our adult users like adults,” he wrote.


