The FTC is investigating AI companions from OpenAI, Meta, and other companies




ZDNET’s key takeaways

  • The FTC is investigating seven tech companies building AI companions.
  • The probe is exploring safety risks posed to kids and teens.
  • Many tech companies offer AI companions to boost user engagement.

The Federal Trade Commission (FTC) is investigating the safety risks posed by AI companions to kids and teenagers, the agency announced Thursday.

The federal regulator issued orders to seven tech companies building consumer-facing AI companionship tools — Alphabet, Instagram, Meta, OpenAI, Snap, xAI, and Character Technologies (the company behind chatbot creation platform Character.ai). The orders require each company to detail how its tools are developed and monetized, how those tools generate responses to human users, and what safety-testing measures are in place to protect underage users.

Also: Even OpenAI CEO Sam Altman thinks you shouldn’t trust AI for therapy

“The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products,” the agency wrote in the release.

Those orders were issued under section 6(b) of the FTC Act, which grants the agency the authority to scrutinize businesses without a specific law enforcement purpose.

The rise and fall(out) of AI companions

Many tech companies have begun offering AI companionship tools in an effort to monetize generative AI systems and boost user engagement with existing platforms. Meta founder and CEO Mark Zuckerberg has even claimed that these virtual companions, which leverage chatbots to respond to user queries, could help mitigate the loneliness epidemic.

Elon Musk’s xAI recently added two flirtatious AI companions to the company’s $30/month “Super Grok” subscription tier (the Grok app is currently available to users ages 12 and over on the App Store). Last summer, Meta began rolling out a feature that allows users to create custom AI characters in Instagram, WhatsApp, and Messenger. Other platforms like Replika, Paradot, and Character.ai are expressly built around the use of AI companions. 

Also: Anthropic says Claude helps emotionally support users – we’re not convinced

While they vary in communication style and protocol, AI companions are generally engineered to mimic human speech and expression. Operating in what is essentially a regulatory vacuum, with few legal guardrails to constrain them, some AI companies have taken an ethically dubious approach to building and deploying virtual companions.

An internal policy memo from Meta reported on by Reuters last month, for example, shows the company permitted Meta AI, its AI-powered virtual assistant, and the other chatbots operating across its family of apps “to engage a child in conversations that are romantic or sensual,” and to generate inflammatory responses on a range of other sensitive topics like race, health, and celebrities.

Meanwhile, there’s been a blizzard of recent reports of users developing romantic bonds with their AI companions. OpenAI and Character.ai are both currently being sued by parents who allege that their children committed suicide after being encouraged to do so by ChatGPT and a bot hosted on Character.ai, respectively. As a result, OpenAI updated ChatGPT’s guardrails and said it would expand parental protections and safety precautions. 

Also: Patients trust AI’s medical advice over doctors – even when it’s wrong, study finds

AI companions haven’t been a completely unmitigated disaster, though. Some autistic people, for example, have used them from companies like Replika and Paradot as virtual conversation partners in order to practice social skills that can then be applied in the real world with other humans. 

Protect kids – but also, keep building

Under the leadership of its previous chair, Lina Khan, the FTC launched several inquiries into tech companies over potentially anticompetitive and other legally questionable practices, such as "surveillance pricing."

Federal scrutiny of the tech sector has been more relaxed during the second Trump administration. The President rescinded his predecessor's executive order on AI, which sought to place some restrictions on the technology's deployment, and his AI Action Plan has largely been interpreted as a green light for the industry to push ahead with building the expensive, energy-intensive infrastructure needed to train new AI models and keep a competitive edge over China's own AI efforts.

Also: Worried about AI’s soaring energy needs? Avoiding chatbots won’t help – but 3 things could

The language of the FTC’s new investigation into AI companions clearly reflects the current administration’s permissive, build-first approach to AI. 

“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” agency Chairman Andrew N. Ferguson wrote in a statement. “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry.”

Also: I used this ChatGPT trick to look for coupon codes – and saved 25% on my dinner tonight

In the absence of federal regulation, some state officials have taken the initiative to rein in parts of the AI industry. Last month, Texas Attorney General Ken Paxton launched an investigation into Meta and Character.ai "for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools." Earlier that same month, Illinois enacted a law prohibiting AI chatbots from providing therapeutic or mental health advice, with fines of up to $10,000 for AI companies that fail to comply.



