All corrections
X April 16, 2026 at 10:56 PM

x.com/robbensinger/status/2042690394035753226

1 correction found

1
Claim
You won't even find it tucked away in a press release somewhere.
Correction

This absolute claim is incorrect. Before this post was published in April 2026, Anthropic had already published official news and announcement pages that explicitly warn about catastrophic AI risks and threats to society.

Full reasoning

Anthropic had publicly posted multiple official news/announcement items warning of severe AI dangers well before this X post was published on April 7, 2026.

Examples:

  • In Anthropic's Responsible Scaling Policy (Anthropic announcement, September 19, 2023), the company says increasingly capable AI systems "will also present increasingly severe risks" and that the policy focuses on "catastrophic risks" where an AI model could cause "large scale devastation," including misuse to create bioweapons or autonomous models acting destructively.
  • In Dario Amodei's prepared remarks from the AI Safety Summit on Anthropic's Responsible Scaling Policy (Anthropic policy/news page, November 1, 2023), Anthropic says AI systems may create "very serious" threats in the near future and describes ASL-4 as including autonomous systems that "escape human control and pose a significant threat to society."
  • In Announcing our updated Responsible Scaling Policy (Anthropic announcement, October 15, 2024), Anthropic says its RSP is meant to mitigate "potential catastrophic risks from frontier AI systems."

That does not settle the separate, more subjective question of whether Anthropic's warnings were sufficiently prominent or candid. But the specific claim that such warnings cannot be found even in an official press-release-style Anthropic publication is contradicted by Anthropic's own published announcements and policy pages.

3 sources
Model: OPENAI_GPT_5 Prompt: v1.16.0