Anthropic says Claude helps emotionally support users – we’re not convinced


Amid a loneliness epidemic and structural barriers to mental health care, more and more people are turning to AI chatbots for everything from career coaching to romance. Anthropic's latest study indicates its chatbot, Claude, is handling that well, but some experts aren't convinced. 

Also: You shouldn’t trust AI for therapy – here’s why

On Thursday, Anthropic published new research on its Claude chatbot’s emotional intelligence (EQ) capabilities — what the company calls affective use, or conversations “where people engage directly with Claude in dynamic, personal exchanges motivated by emotional or psychological needs such as seeking interpersonal advice, coaching, psychotherapy/counseling, companionship, or sexual/romantic roleplay,” the company explained. 

While Claude is designed primarily for tasks like code generation and problem solving rather than emotional support, the research acknowledges that this type of use is happening anyway and is worth investigating given the risks. The company also noted that studying this kind of use is relevant to its focus on safety. 

The main findings 

Anthropic analyzed about 4.5 million conversations from both Free and Pro Claude accounts, ultimately settling on 131,484 that fit the affective use criteria. Using Clio, its privacy-preserving analysis tool, Anthropic stripped the conversations of personally identifiable information (PII). 

The study revealed that only 2.9% of Claude interactions were classified as affective conversations, which the company says mirrors previous findings from OpenAI. Examples of “AI-human companionship” and roleplay made up an even smaller share of the dataset, together accounting for under 0.5% of conversations. Within that 2.9%, conversations about interpersonal issues were most common, followed by coaching and psychotherapy. 


Usage patterns show that some people consult Claude to develop mental health skills, which suggests mental health professionals may be using it as a resource, while others work through personal challenges like anxiety and workplace stress. 

The study also found that users seek out Claude for help with “practical, emotional, and existential concerns,” including career development, relationship issues, loneliness, and “existence, consciousness, and meaning.” Most of the time (90%), Claude does not push back against the user in these types of conversations “except to protect well-being,” the study notes, such as when a user asks for information on extreme weight loss or self-harm. 

Also: AI is relieving therapists from burnout. Here’s how it’s changing mental health

The study did not examine whether the AI reinforced delusions or extreme usage patterns; Anthropic noted that those questions are worthy of separate studies.

Most notable, however, is Anthropic's finding that people “express increasing positivity over the course of conversations” with Claude, meaning user sentiment improved as conversations with the chatbot went on. “We cannot claim these shifts represent lasting emotional benefits — our analysis captures only expressed language in single conversations, not emotional states,” Anthropic stated. “But the absence of clear negative spirals is reassuring.” 

Within these criteria, that’s perhaps measurable. But there is growing concern — and disagreement — across medical and research communities about the deeper impacts of these chatbots in therapeutic contexts. 

Conflicting perspectives

As Anthropic itself acknowledged, there are downsides to AI chatbots' incessant need to please, which is what they are trained to do as assistants. Chatbots can be deeply sycophantic (OpenAI recently rolled back a model update over this very issue), agreeing with users in ways that can dangerously reinforce harmful beliefs and behaviors. 

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Earlier this month, researchers at Stanford released a study detailing several reasons why using AI chatbots as therapists can be dangerous. The study found that, in addition to perpetuating delusions (likely due to sycophancy), AI models can carry stigmas toward certain mental health conditions and respond inappropriately to users. Several of the chatbots studied failed to recognize suicidal ideation in conversation and offered simulated users dangerous information. 

The chatbots in that study, which did not include Anthropic's models, may have fewer guardrails, and the companies behind them may lack the safety infrastructure Anthropic appears committed to. Still, some are skeptical of the Anthropic study itself. 

“I have reservations of the medium of their engagement,” said Jared Moore, one of the Stanford researchers, citing how “light on technical details” the post is. He believes some of the “yes or no” prompts Anthropic used were too broad to fully determine how Claude reacts to certain queries. 

“These are only very high-level reasons why a model might ‘push back’ against a user,” he said, pointing out that what therapists do — push back against a client’s delusional thinking and intrusive thoughts — is a “much more granular” response in comparison. 

Also: Anthropic has a plan to combat AI-triggered job losses predicted by its CEO

“Similarly, the concerns that have lately appeared about sycophancy seem to be of this more granular type,” he added. “The issues I found in my paper were that the ‘content filters’ — for this really seems to be the subject of the Claude push-backs, as opposed to something deeper — are not sufficient to catch a variety of the very contextual queries users might make in mental health contexts.”

Moore also questioned the context around when Claude refused users. “We can’t see in what kinds of context such pushback occurs. Perhaps Claude only pushes back against users at the start of a conversation, but can be led to entertain a variety of ‘disallowed’ [as per Anthropic’s guidelines] behaviors through extended conversations with users,” he said, suggesting users could “warm up” Claude to break its rules. 

That 2.9% figure, Moore pointed out, likely doesn’t include API calls from companies building their own bots on top of Claude, meaning Anthropic’s findings may not generalize to other use cases. 

“Each of these claims, while reasonable, may not hold up to scrutiny — it’s just hard to know without being able to independently analyze the data,” he concluded. 

The future of AI and therapy 

Claude’s impact aside, the tech and healthcare industries remain divided over AI’s role in therapy. While Moore’s research urged caution, in March, Dartmouth released initial trial results for its “Therabot,” an AI-powered therapy chatbot fine-tuned on conversation data, which showed “significant improvements in participants’ symptoms.” 

Online, users also anecdotally report positive outcomes from using chatbots this way. At the same time, the American Psychological Association has called on the FTC to regulate chatbots, citing concerns that mirror Moore’s research. 


Beyond therapy, Anthropic acknowledges there are other pitfalls in pairing persuasive natural language technology with emotional intelligence. “We also want to avoid situations where AIs, whether through their training or through the business incentives of their creators, exploit users’ emotions to increase engagement or revenue at the expense of human well-being,” Anthropic noted in the blog. 



Original Source: zdnet
