All corrections
1
Claim
but the companies aren't even pretending chemical weapons count.
Correction

This overstates the situation. Both Anthropic and OpenAI explicitly include chemical weapons in their published safety frameworks and mitigation language, although Anthropic says it prioritizes biological risks over chemical ones.

Full reasoning

Official company materials contradict the claim that AI companies are not even "pretending chemical weapons count."

  • Anthropic says its ASL-3 safeguards include classifiers for content related to chemical, biological, radiological, and nuclear (CBRN) weapons. Its transparency hub also explicitly defines CBRN as including chemical weapons. Anthropic does say it prioritizes biological risks and does not currently run specific internal chemical-risk evaluations, but that is different from saying chemical weapons do not count at all.
  • OpenAI says it treats GPT-5-thinking as High capability in the Biological and Chemical domain under its Preparedness Framework.

So the specific statement that companies are not even pretending chemical weapons count is contradicted by their own published safety documents: both companies explicitly name chemical risks, and OpenAI in particular evaluates a combined Biological and Chemical risk domain.

3 sources
  • Introducing Claude Sonnet 4.5 | Anthropic

    These safeguards include filters called classifiers that aim to detect potentially dangerous inputs and outputs—in particular those related to chemical, biological, radiological, and nuclear (CBRN) weapons.

  • Anthropic's Transparency Hub | Anthropic

    CBRN stands for Chemical, Biological, Radiological, and Nuclear weapons ... We primarily focus on biological risks with the largest consequences, such as enabling pandemics.

  • GPT-5 System Card | OpenAI

    Similarly to ChatGPT agent, we have decided to treat gpt-5-thinking as High capability in the Biological and Chemical domain under our Preparedness Framework.

Model: OPENAI_GPT_5 Prompt: v1.16.0