x.com/slatestarcodex/status/2042921961253122120
1 correction found
Claim: "You won't even find it tucked away in a press release somewhere."
This is incorrect: Anthropic has published official news and policy posts explicitly warning about catastrophic AI risks, including misuse to create bioweapons and autonomous AI systems that could pose a significant threat to society.
Full reasoning
Anthropic's own official site contradicts this.
In a Sept. 19, 2023 Anthropic news post, "Anthropic's Responsible Scaling Policy," the company says its policy "focuses on catastrophic risks" where an AI model could cause "large scale devastation," including misuse to create bioweapons or destruction caused by autonomous models acting against human intent.
In a separate Nov. 1, 2023 Anthropic news post containing Dario Amodei's prepared remarks for the AI Safety Summit, Amodei says AI threats are likely to become "very serious" soon, discusses models causing "catastrophic outcomes," and says ASL-4 includes concerns about "autonomous AI systems that escape human control and pose a significant threat to society."
So it is not accurate to say these warnings can't be found in an Anthropic press release: Anthropic published exactly such warnings on its official news pages well before this post.
A later essay by Dario Amodei reinforces the same point: in October 2024 he wrote, "I think and talk a lot about the risks of powerful AI," and said Anthropic would "continue, overall, to talk a lot about risks."
3 sources
- Anthropic's Responsible Scaling Policy | Anthropic
"Our RSP focuses on catastrophic risks - those where an AI model directly causes large scale devastation... Such risks can come from deliberate misuse of models (for example use by terrorists or state actors to create bioweapons) or from models that cause destruction by acting autonomously..."
- Dario Amodei's prepared remarks from the AI Safety Summit on Anthropic's Responsible Scaling Policy | Anthropic
Amodei says AI threats are likely to become "very serious" soon, discusses "catastrophic outcomes," and says ASL-4 includes "autonomous AI systems that escape human control and pose a significant threat to society."
- Dario Amodei - Machines of Loving Grace
"I think and talk a lot about the risks of powerful AI" and "I and Anthropic haven't talked that much about powerful AI's upsides, and ... we'll probably continue, overall, to talk a lot about risks."