Decouple reasoning from action
Most catastrophic risk comes from action in the world, not reasoning about it; a reasoner-only AI with a human effector removes the dangerous mechanisms.
Mechanism
Restrict AI to epistemic roles (analysis, recommendation, prediction) and forbid direct action, tool use, or agency over real resources.
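The architectural idea can be sketched in code: a wrapper whose interface exposes only inert epistemic outputs, with no tool registry or effector pathway. This is a minimal illustrative sketch, not a real safety mechanism; the names `EpistemicOnlyAgent`, `Recommendation`, and `reason_fn` are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    """Epistemic output only: text for a human to evaluate, never an executed action."""
    summary: str
    rationale: str

class EpistemicOnlyAgent:
    """Wraps a reasoning function behind an interface with no effectors.

    The class deliberately holds no tool registry, no network or file
    handles, and returns only inert data for a human operator to act on.
    """
    def __init__(self, reason_fn):
        # reason_fn: pure function prompt -> (summary, rationale)
        self._reason = reason_fn

    def advise(self, prompt: str) -> Recommendation:
        summary, rationale = self._reason(prompt)
        return Recommendation(summary=summary, rationale=rationale)

# The human remains the sole effector: they read the Recommendation
# and decide whether to act on it through their own channels.
agent = EpistemicOnlyAgent(lambda p: (f"analysis of: {p}", "toy rationale"))
rec = agent.advise("should we deploy?")
```

The point of the sketch is structural: dangerous capability is removed by construction (there is no code path from model output to world-changing action), rather than by filtering or monitoring actions after the fact.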
Coordinates
Conflicts, grouped by mechanism
Lever opposition (3): same lever, opposite pull
The pair's primary lever is the same; they pull it in opposite directions. A portfolio containing both is internally incoherent on that lever.
Frame opposition: incompatible premises
The strategies accept different premises about what AI is or what the binding problem is. They conflict not on lever choice but on the frame that makes lever choice sensible.
Complements, grouped by mechanism
Same-lever reinforce (4): same lever, same pull, different mechanism
Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.
Stage-sequenced: one sets up the other
The pair is phase-offset: one acts before the transition, the other during or after. The first creates the conditions under which the second binds.
Shared authority: same legitimacy source
Different levers, same legitimacy source (democratic, state, technical, market). The pair hangs together under one kind of authority; it stands or falls with that authority.
Same phase, different layer: same stage, distinct levers
Both are active in the same phase of the transition but act on different layers (model vs institution vs culture). They cover different failure modes inside the same window.
Axis position
Action authority ↑
Source note: Decouple reasoning from action strategy.md