All corrections
1
Claim
Without extensions like Beckers, Halpern, & Hitchcock's (2023) "Causal Models with Constraints," we don't even have a well-formed causal model at the neural network level
Correction

This is incorrect: standard structural causal model frameworks already include deterministic causal models, so deterministic neural computations do not by themselves make neural-network-level causal models ill-formed. Beckers et al. (2023) extend SCMs to handle constraint relations, not to make deterministic systems representable.

Full reasoning

The cited sentence says deterministic neural computations are not even a well-formed causal model unless one uses Beckers, Halpern, & Hitchcock (2023). But the mechanistic-interpretability literature it cites elsewhere already formalizes deterministic causal models directly.

  • Geiger et al. (JMLR 2025), Causal Abstraction: A Theoretical Foundation for Mechanistic Interpretability, explicitly defines "a (deterministic) causal model" and uses that framework as the basis for causal abstraction in mechanistic interpretability.
  • Mooij, Janzing, and Schölkopf explicitly study deterministic Structural Causal Models in From Ordinary Differential Equations to Structural Causal Models: the deterministic case.
  • Beckers, Halpern, & Hitchcock (2023) say their extension is needed for constraints on settings of variables (their example is the algebraic relation LDL + HDL = TOT), i.e. for non-causal constraints that standard SCMs cannot express under ordinary intervention semantics. Their abstract does not say the extension is needed merely because a system is deterministic.
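The constraint problem in the last bullet can be made concrete with a small sketch. This is purely illustrative (the cited paper contains no code): if the relation LDL + HDL = TOT is encoded as an ordinary structural equation, a do-intervention on TOT severs that equation, so the algebraic constraint no longer holds after the intervention. The variable names follow Beckers et al.'s example; the model itself is a hypothetical toy.

```python
# Illustrative toy SCM for the cholesterol example. Under ordinary
# intervention semantics, TOT is just a downstream function of LDL and
# HDL, so do(TOT = t) replaces that equation and can break the
# algebraic relation LDL + HDL = TOT.

def cholesterol_model(ldl, hdl, interventions=None):
    """Evaluate the toy SCM; `interventions` maps a variable name to a
    forced value, overriding its structural equation (a do-intervention)."""
    interventions = interventions or {}
    vals = {"LDL": ldl, "HDL": hdl}
    # Ordinary SCM semantics: TOT is computed from its parents unless
    # a do-intervention fixes it directly.
    vals["TOT"] = interventions.get("TOT", vals["LDL"] + vals["HDL"])
    return vals

obs = cholesterol_model(130, 50)                 # TOT = 180; constraint holds
post = cholesterol_model(130, 50, {"TOT": 100})  # do(TOT = 100)
# Now LDL + HDL = 180 but TOT = 100: the intervention violated the
# constraint, which is the expressive gap Beckers et al. address.
```

This is the sense in which the constraint is non-causal: it should hold inviolably across interventions, which no assignment of structural equations under ordinary intervention semantics can guarantee.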

So the problem with ordinary SCMs is not "neural networks are deterministic, therefore no well-formed causal model exists." Deterministic SCMs are already standard. Beckers et al. adds expressive power for constrained-variable settings, which is a different issue.
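To see that deterministic SCMs are well-formed as-is, here is a minimal sketch, not drawn from any of the cited papers: a deterministic causal model for a tiny network-like computation, where every variable is a fixed function of its parents and do-interventions simply override a variable's structural equation. All variable names and equations are hypothetical.

```python
# Minimal deterministic SCM sketch (illustrative). Each endogenous
# variable is a deterministic function of its parents; a
# do-intervention fixes a variable's value, severing its equation.

def run_model(x, interventions=None):
    """Evaluate structural equations in topological order, applying any
    do-interventions by overriding the corresponding equation."""
    interventions = interventions or {}
    vals = {"X": x}
    # Hypothetical two-step computation, loosely network-shaped.
    equations = {
        "H": lambda v: max(0, 2 * v["X"] - 1),  # hidden unit (ReLU-like)
        "Y": lambda v: 3 * v["H"] + 2,          # output unit
    }
    for var, f in equations.items():
        vals[var] = interventions.get(var, f(vals))
    return vals

obs = run_model(1)             # observational run: H = 1, Y = 5
post = run_model(1, {"H": 0})  # do(H = 0): Y recomputes to 2
```

Nothing here requires stochastic noise terms or the constraint machinery of Beckers et al.: determinism alone poses no obstacle to a well-formed causal model with standard intervention semantics.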

Model: OPENAI_GPT_5 Prompt: v1.16.0