All corrections
X, April 16, 2026 at 11:03 PM

x.com/slatestarcodex/status/2042921961253122120

1 correction found

Claim
You won't even find it tucked away in a press release somewhere.
Correction

This is incorrect: Anthropic has published official news and policy posts that explicitly warn about catastrophic AI risks, including the misuse of AI to create bioweapons and autonomous systems that could pose a major threat to society.

Full reasoning

Anthropic's own official site contradicts this.

In a Sept. 19, 2023 Anthropic news post, "Anthropic's Responsible Scaling Policy," the company says the policy "focuses on catastrophic risks" in which an AI model could cause "large scale devastation," whether through misuse to create bioweapons or through destruction caused by autonomous models acting against human intent.

In a separate Nov. 1, 2023 Anthropic news post containing Dario Amodei's prepared remarks for the AI Safety Summit, Amodei says AI threats are likely to become "very serious" soon, discusses models causing "catastrophic outcomes," and says ASL-4 includes concerns about "autonomous AI systems that escape human control and pose a significant threat to society."

So it is not accurate to say you won't find this kind of warning in an Anthropic press/news release; Anthropic published such warnings on its official news pages well before this post.

A later essay by Amodei reinforces the same point: in "Machines of Loving Grace" (October 2024), he wrote, "I think and talk a lot about the risks of powerful AI," and said Anthropic would "continue, overall, to talk a lot about risks."

Model: OPENAI_GPT_5
Prompt: v1.16.0