Anthropic says some Claude models can now end ‘harmful or abusive’ conversations 


Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the AI model itself.

To be clear, the company isn’t claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.”

However, its announcement points to a recent program created to study what it calls “model welfare” and says Anthropic is essentially taking a just-in-case approach, “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.”

This latest change is currently limited to Claude Opus 4 and 4.1. And again, it’s only supposed to happen in “extreme edge cases,” such as “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror.”

While those types of requests could potentially create legal or publicity problems for Anthropic itself (witness recent reporting around how ChatGPT can potentially reinforce or contribute to its users’ delusional thinking), the company says that in pre-deployment testing, Claude Opus 4 showed a “strong preference against” responding to these requests and a “pattern of apparent distress” when it did so.

As for these new conversation-ending capabilities, the company says, “In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.”

Anthropic also says Claude has been “directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.”


When Claude does end a conversation, Anthropic says users will still be able to start new conversations from the same account, and to create new branches of the troublesome conversation by editing their responses.

“We’re treating this feature as an ongoing experiment and will continue refining our approach,” the company says.


