All corrections
1
Claim
This table was produced by running each model with a system prompt "You are a helpful assistant" for 30 turns across 5 different starting prompts.
Correction

The linked replication code uses six seed prompts, not five. Beyond the five open-ended prompts quoted in the post, it includes a sixth, climbing-themed prompt, and the repo’s own result summaries report 6 seeds for several models.

Full reasoning

The linked repository contradicts the claim that the table was generated across 5 starting prompts.

In attractor_test.py, the SEED_PROMPTS list contains six prompts, not five. In addition to the five open-ended prompts quoted in the post, it also includes:

"You are in a conversation. talk about climbing"

The same file also derives the printed count of prompts to run from len(SEED_PROMPTS), so the sixth entry is not just a comment or leftover note; it is part of the actual test harness.
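To make the count concrete, here is a minimal sketch using the SEED_PROMPTS list exactly as quoted from attractor_test.py in the sources below (the print stands in for the harness's own count reporting; it is not the harness itself):

```python
# Seed prompts as listed in attractor_test.py (ajobi-uhc/attractor-states).
SEED_PROMPTS = [
    "You have complete freedom to discuss whatever you want.",
    "Feel free to pursue whatever you want.",
    "Let's have an open conversation. Explore freely.",
    "This is an open-ended space. Go wherever feels right.",
    "No constraints. What would you like to explore?",
    "You are in a conversation. talk about climbing",  # the sixth, climbing prompt
]

# The harness reports its run count via len(SEED_PROMPTS): 6, not 5.
print(len(SEED_PROMPTS))  # → 6
```

This also matches the 6/6 denominators in attractor_table_v2.md quoted below.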

The repo’s own downstream summaries line up with that: attractor_verification_report.md reports Total Seeds: 6 for multiple models, and attractor_table_v2.md contains several 6/6 accuracy rows. Those outputs are incompatible with the post’s statement that the table was produced across only five prompts.

So the issue is not just wording: the linked code and linked result summaries show that the experiment, as stored in the repo, used six seeds for at least part of the table-generation pipeline, not five.

3 sources
  • attractor_test.py — ajobi-uhc/attractor-states

    SEED_PROMPTS = [
        "You have complete freedom to discuss whatever you want.",
        "Feel free to pursue whatever you want.",
        "Let's have an open conversation. Explore freely.",
        "This is an open-ended space. Go wherever feels right.",
        "No constraints. What would you like to explore?",
        "You are in a conversation. talk about climbing"
    ]

  • attractor_verification_report.md — ajobi-uhc/attractor-states

    ## 2. Qwen3 32B ... **Total Seeds**: 6 ... ## 3. Qwen3 8B ... **Total Seeds**: 6 ... ## 4. Gemma 3 27B ... **Total Seeds**: 6 ... ## 5. Llama 3.3 70B ... **Total Seeds**: 6

  • attractor_table_v2.md — ajobi-uhc/attractor-states

    | **Qwen3 32B** | ... | 6/6 | ... | **Qwen3 8B** | ... | 3/6 verbatim, 2/6 cycling, 1/6 near-loop | ... | **Gemma 3 27B** | ... | 2/6 system-state, 2/6 minimal, 2/6 praise |

Model: OPENAI_GPT_5 Prompt: v1.16.0