x.com/robbensinger/status/2042690394035753226
1 correction found
You won't even find it tucked away in a press release somewhere.
This absolute claim is incorrect. Before April 2026, Anthropic had already published official news/announcement pages explicitly warning about catastrophic AI risks and threats to society.
Full reasoning
Anthropic had publicly posted multiple official news/announcement items warning about severe AI dangers well before this X post was published on April 7, 2026.
Examples:
- In Anthropic's Responsible Scaling Policy (Anthropic announcement, September 19, 2023), the company warns that increasingly capable AI systems "will also present increasingly severe risks" and says its policy focuses on "catastrophic risks" where an AI model could cause "large scale devastation", whether through misuse (such as creating bioweapons) or through autonomous models acting destructively.
- In Dario Amodei's prepared remarks from the AI Safety Summit on Anthropic's Responsible Scaling Policy (Anthropic policy/news page, November 1, 2023), the remarks warn that AI systems may create "very serious" threats in the near future and describe ASL-4 as including autonomous AI systems that "escape human control and pose a significant threat to society."
- In Announcing our updated Responsible Scaling Policy (Anthropic announcement, October 15, 2024), Anthropic says its RSP is meant to mitigate "potential catastrophic risks from frontier AI systems."
That does not settle the separate, more subjective question of whether Anthropic's warnings were sufficiently prominent or candid. But the specific claim that such warnings cannot be found even in an official press-release-style Anthropic publication is contradicted by Anthropic's own published announcements and policy pages.
3 sources
- Anthropic's Responsible Scaling Policy
As AI models become more capable, we believe that they will create major economic and social value, but will also present increasingly severe risks. Our RSP focuses on catastrophic risks - those where an AI model directly causes large scale devastation.
- Dario Amodei's prepared remarks from the AI Safety Summit on Anthropic's Responsible Scaling Policy
ASL-4 represents an escalation of the catastrophic misuse risks from ASL-3, and also adds a new risk: concerns about autonomous AI systems that escape human control and pose a significant threat to society.
- Announcing our updated Responsible Scaling Policy
Today we are publishing a significant update to our Responsible Scaling Policy (RSP), the risk governance framework we use to mitigate potential catastrophic risks from frontier AI systems.