Red line capability
Most risk comes from a small number of identifiable capabilities that can be banned outright while the rest of the frontier advances.
Mechanism
Specify a short list of forbidden capabilities (bioweapon design, self-exfiltration, autonomous cyber offence) and certify systems against them.
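A minimal sketch of what certification against a named red-line list could look like, assuming a per-capability elicitation score is available. `RED_LINES`, `eval_score`, and the thresholds are illustrative assumptions, not an existing standard:

```python
# Hypothetical sketch: certify a system against a short list of named
# red-line capabilities. Names, thresholds, and the eval interface are
# illustrative assumptions, not part of any published framework.

RED_LINES = {
    "bioweapon_design": 0.01,       # max tolerated eval score per capability
    "self_exfiltration": 0.01,
    "autonomous_cyber_offence": 0.01,
}

def certify(system_id: str, eval_score) -> dict:
    """Pass only if every named red line scores below its threshold.

    `eval_score(system_id, capability)` stands in for whatever
    elicitation suite measures the capability; assumed to return
    a float in [0, 1].
    """
    results = {cap: eval_score(system_id, cap) for cap in RED_LINES}
    failures = {cap: s for cap, s in results.items() if s >= RED_LINES[cap]}
    return {"certified": not failures, "scores": results, "failures": failures}
```

The point of the short list is auditability: a handful of named capabilities with explicit thresholds, rather than an open-ended safety judgment.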
Falsification signal
A system crosses a named red line without a prior warning signal, or many deployed systems hold a red line capability latently.
A strategy held without a falsification signal is not a strategy; it is an affiliation. Continued support after this signal lands is identity, not a bet. See the identity diagnostic.
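The falsification signal above can be operationalized. A sketch under stated assumptions: each deployed system reports, per red line, a measured capability score and whether a prior warning signal fired; the `CROSSING` and `LATENT_FRACTION` cutoffs are hypothetical placeholders, not calibrated values:

```python
# Hypothetical sketch of the falsification test. `systems` maps a system
# id to, per red-line capability, a measured score and whether a prior
# warning signal fired. Both thresholds are illustrative assumptions.

CROSSING = 0.5          # score treated as "holds the capability"
LATENT_FRACTION = 0.25  # cutoff for "many deployed systems", by share

def falsified(systems: dict) -> bool:
    caps = {cap for per_cap in systems.values() for cap in per_cap}
    for cap in caps:
        obs = [per_cap[cap] for per_cap in systems.values() if cap in per_cap]
        # Signal 1: a named red line crossed with no prior warning.
        if any(o["score"] >= CROSSING and not o["warned"] for o in obs):
            return True
        # Signal 2: the capability is latently widespread in deployment.
        holders = sum(o["score"] >= CROSSING for o in obs)
        if obs and holders / len(obs) >= LATENT_FRACTION:
            return True
    return False
```

Either branch returning true is the moment the diagnostic above says continued support stops being a bet.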
Historical analogue
Biotechnology · DURC / gain-of-function moratorium
Every strategy inherits a plausible ceiling from its precedent. The analogue conditions the realistic reach.
Produced
Institutional review for named-risk experiments in some jurisdictions.
Did not produce
A durable halt: gain-of-function research continued, and the capability proliferated beyond the moratorium's scope.
Addresses 1 failure scenario
Coordinates
Conflicts, grouped by mechanism · 0
No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.
Complements, grouped by mechanism · 5
Cross-side bridge · one AI-side, one world-side
One acts on the model, the other on institutions or culture. The bridge hedges against both artefact-level and substrate-level failure.
Same-lever reinforce · same lever, same pull, different mechanism
Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.
Same-lever twins · 5
Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.
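The double-counting caveat can be made concrete: in a portfolio audit, strategies sharing a (lever, direction) pair collapse to one lever pull. A sketch under that assumption; the tuple representation and the example strategy names other than this card's are hypothetical:

```python
# Hypothetical sketch: count distinct lever pulls in a portfolio.
# Same-lever twins and same-lever reinforcers collapse to one pull,
# which is the double-counting an audit should surface.

from collections import defaultdict

def lever_pulls(portfolio):
    """portfolio: iterable of (strategy_name, lever, direction) tuples."""
    pulls = defaultdict(list)
    for name, lever, direction in portfolio:
        pulls[(lever, direction)].append(name)
    duplicates = {k: v for k, v in pulls.items() if len(v) > 1}
    return len(pulls), duplicates  # distinct pulls, and who overlaps

distinct, overlap = lever_pulls([
    ("red line capability", "capability ceiling", "down"),
    ("compute cap", "capability ceiling", "down"),   # same-lever twin
    ("interpretability", "transparency", "up"),
])
# distinct == 2; overlap names the two ceiling-down strategies that stack.
```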
Axis position · AI artefact
Source note: Red line capability strategy.md