The concrete test for whether a strategy is held as a bet: name the signal that would update against it. Each entry below pairs a strategy's core claim with its falsifying signal; advocates who cannot name such a signal are holding the position as identity.
Acceleration
Speed ↑ · speed/timing · market
Speed of capability itself is the safety lever; alignment improves with compute, defence compounds with offence, and wealth funds resilience.
A visible harm large enough that policy overrides the growth coalition (2008 financial crisis analogue).
AI skeptic
Time horizon • · frame rejection · n/a
Current scaling approaches hit a wall before producing transformative AI, so strategy selection is premature because the forecast capability will not arrive.
Each capability threshold the skeptic named as unreachable is reached, though the position often survives via recalibration rather than abandonment.
Alignment first
Control mechanism ↑ · AI artefact · consent
Technical alignment is solvable before critical capability thresholds close, and aligned systems compose safely into aligned populations.
Interpretability and oversight methods stop scaling with model capability; stronger models prove less, not more, inspectable.
Antitrust primacy
Concentration ↓ · institutional · state coercion
Power concentration is the binding constraint and is visible under current competition law; preventing decisive advantage preserves the option space every strategy depends on.
Breakups reconcentrate within one to two years.
Arms control treaty
Institutional capacity ↑ · institutional · treaty
Sovereigns accept binding constraints they negotiate directly faster than those delegated to agencies; the historical base rate for durable restraint is treaty-based.
Signatories cannot enforce domestically (the BWC pattern).
Bureaucratic slowdown
Speed ↓ · institutional · friction
Time itself is safety, and procedural gates produce time in ways substantive regulation cannot, while also generating audit trails.
Procedural burden implemented and routinely evaded (pre-1970s environmental impact analogue).
Catastrophe response capacity
Response capacity ↑ · non-preventive · state coercion
Prevention will fail some fraction of the time; the variable that determines catastrophic outcome is not whether incidents occur but how they are contained.
A major incident where capacity exists but cannot scale (2008 financial response analogue).
Closed weights mandate
Information flow ↓ · AI artefact · state coercion
Open weights are irrecoverable once released and any proliferated model becomes misuse infrastructure; state classification is the only reliable non-proliferation regime.
Classified capability appears unclassified within two years.
Compute governance
Speed ↓ · market/economic · state coercion
The compute supply chain is a stable chokepoint and state coordination on licensing, export controls, and reporting thresholds can govern capability indirectly.
The capability-per-FLOP curve steepens faster than chip export controls tighten (toy model below).
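A minimal numerical sketch of this falsifier; every parameter (base ceiling, efficiency doubling time, annual control cut) is an illustrative assumption, not an estimate:

```python
# Toy model of the compute-governance race: export controls shrink the
# accessible FLOP ceiling each year while algorithmic efficiency raises
# capability per FLOP. All parameters are illustrative assumptions.

def effective_capability(years: int,
                         base_flop: float = 1e26,              # assumed ceiling today
                         efficiency_doubling_yrs: float = 1.0, # assumed doubling time
                         annual_control_cut: float = 0.20):    # assumed yearly FLOP cut
    flop_ceiling = base_flop * (1 - annual_control_cut) ** years
    capability_per_flop = 2 ** (years / efficiency_doubling_yrs)
    return flop_ceiling * capability_per_flop

for t in range(6):
    print(t, f"{effective_capability(t):.2e}")
# Net factor here is 0.8 * 2 = 1.6x per year: capability rises despite
# tightening controls, which is exactly the falsifier firing.
```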
Consumer refusal
Culture ↑ · population/culture · market
Demand shapes supply; if enough users refuse AI that fails safety criteria, labs will compete on those criteria.
Major lab scandals produce no measurable user migration, which is the current pattern.
Cooperative AI
Cooperation substrate ↑ · AI artefact · consent
The binding constraint is the equilibrium dynamics of many AI systems, not individual alignment; commitment and verification tech make cooperative equilibria reachable (toy game sketched below).
AI systems defect in deployments where commitment technology exists and cooperation was available.
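A toy game illustrating the claim: in a one-shot prisoner's dilemma with textbook payoffs, an assumed enforceable penalty for breaking a verified commitment flips the best response from defection to cooperation. The penalty value is hypothetical.

```python
# One-shot prisoner's dilemma: without commitment tech, defection (D)
# dominates; with a verified commitment, an assumed enforceable penalty
# makes cooperation (C) the best response. Payoffs are the textbook values.

PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
PENALTY = 4  # assumed cost of breaking a verified commitment

def payoff(me: str, other: str, committed: bool) -> int:
    p = PAYOFFS[(me, other)]
    return p - PENALTY if committed and me == "D" else p

def best_response(other: str, committed: bool) -> str:
    return max("CD", key=lambda m: payoff(m, other, committed))

for committed in (False, True):
    moves = {o: best_response(o, committed) for o in "CD"}
    print(f"committed={committed}: best response vs C/D = {moves}")
# committed=False -> always D (defection dominates the one-shot game);
# committed=True  -> always C (the cooperative equilibrium is reachable).
```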
Counter-AI AI
Control mechanism ↑ · AI artefact · consent
AI attacks happen at speeds humans cannot observe; defence must itself run on AI, with guardian AI systems continuously evaluating adversary AI.
The best guardian system is fooled by a model one generation newer.
Coup prevention first
Concentration ↓ · institutional · state coercion
One actor using AI to seize durable decision authority beyond democratic reversibility is the terminal failure; every other decision routes through whether it reduces that risk.
An undetected coup crosses the threshold; the detection regime that would catch it does not currently exist.
Criminal liability
Institutional capacity ↑ · legal/individual · state coercion
Civil liability is shareholder-absorbed; criminal exposure for named individuals reorients corporate safety practice where civil fines do not.
Clear criminal conduct is identified with no prosecution (2008 Wall Street analogue).
Data governance first
Substrate ↓ · market/economic · state coercion
Capability rises with data before it rises with compute; data sits upstream of training and already has legal apparatus attached: copyright, privacy, contract (scaling-law sketch below).
Frontier capability is reached with synthetic data.
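A sketch using the Chinchilla-style parametric loss of Hoffmann et al. (2022), L(N, D) = E + A/N^alpha + B/D^beta: the data term survives even with unbounded parameters, so a cap on lawful training data floors achievable loss regardless of compute. The constants are the published fits, used here only illustratively.

```python
# Chinchilla-style loss: the data term B/D**BETA survives even as N -> inf,
# so a data cap bounds capability regardless of compute spent. Constants are
# the Hoffmann et al. (2022) fits, used purely for illustration.

E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**ALPHA + B / n_tokens**BETA

for d in (1e11, 1e12, 1e13):  # hypothetical lawful-data ceilings, in tokens
    print(f"D={d:.0e}: loss floor as N->inf = {E + B / d**BETA:.3f}")
# The synthetic-data falsifier corresponds to D becoming unbounded again.
```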
Democratic mandate
Legitimacy ↑ · population/culture · consent
Existing legislative bodies are too captured and remote for load-bearing AI decisions; direct democratic legitimation produces answers captured legislatures cannot override.
Binding AI referenda are functionally ignored within three years.
Differential technology development
Scope • · AI artefact · consent
Offence-defence balance is adjustable; defensive and verification applications can compound faster than offensive ones if deliberately funded.
The offence-defence classification cannot be operationalised in any funded programme within five years.
Embodiment requirement
Scope ↓ · AI artefact · state coercion
The dangerous properties of frontier AI (unbounded replication, parallelism, speed, reach) are artefacts of disembodiment; physical presence caps action rate regardless of inference rate (throughput sketch below).
A catastrophe caused by embodiment-exempt AI.
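A throughput sketch of the embodiment bound, with illustrative rates: realized action rate is the minimum of inference rate and actuator rate, so scaling impact requires physical replication.

```python
# The embodiment bound: an embodied system's realized action rate is
# capped by its actuators however fast inference runs. Rates are
# illustrative assumptions.

def action_rate(inference_hz: float, actuator_hz: float, bodies: int = 1) -> float:
    return bodies * min(inference_hz, actuator_hz)

print(action_rate(1e6, 10))        # 10.0 — inference speed is irrelevant
print(action_rate(1e6, 10, 100))   # 1000.0 — scaling needs more bodies
# A disembodied system has no actuator term at all, which is the risk
# artefact this strategy targets.
```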
Energy choke point
Speed ↓ · market/economic · state coercion
Frontier AI is energy-limited; grid regulators, interconnection queues, and tariff structures bind training pace without new AI-specific authority.
Efficiency gains outpace regulatory tightening.
Governance first
Institutional capacity ↑ · institutional · state coercion
Institutional capacity is the binding constraint; without it no technical success prevents misuse, capture, or concentration.
Enacted regulations cover less than 20% of frontier compute by a named target date, or institutional capture moves faster than capacity building.
Human augmentation race
Substrate ↑ · population/culture · market
All oversight schemes degrade to rubber-stamping as the AI-human capability gap widens; enhancing humans is the only durable fix.
Default failure case: capability gap widens faster than augmentation narrows it. Falsification requires a discontinuous enhancement result.
Information integrity first
Information flow ↑ · population/culture · state coercion
Coordination requires shared epistemics; if synthetic content collapses factual substrate, no other lever can function because no proposal can be evaluated.
Continued political coordination under synthetic saturation, or provenance infrastructure operationalised as surveillance.
Insurance mandate
Institutional capacity ↑ · market/economic · state coercion
Markets update faster than regulators and have skin in the game; mandatory catastrophic coverage makes reinsurance the de facto safety regulator (pricing sketch below).
A large AI loss triggers insurer exit rather than tighter safety requirements.
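A back-of-envelope pricing sketch of the mechanism, with all dollar figures and probabilities as illustrative assumptions: any safety control cheaper than its premium reduction gets adopted without a regulator mandating it.

```python
# Why mandatory catastrophic coverage turns the reinsurer into a safety
# regulator: premiums price incident probability, so controls cheaper than
# their premium saving get adopted. All numbers are illustrative assumptions.

def annual_premium(p_incident: float, loss: float, load: float = 1.5) -> float:
    """Actuarial premium: expected loss times an expense/risk loading."""
    return p_incident * loss * load

insured_loss = 50e9   # assumed insured catastrophic loss, $
p_base = 1e-3         # assumed annual incident probability, unmitigated
p_mitigated = 2e-4    # assumed probability with a required safety control
control_cost = 10e6   # assumed annual cost of the control, $

saving = annual_premium(p_base, insured_loss) - annual_premium(p_mitigated, insured_loss)
print(f"premium saving ${saving:,.0f}/yr vs control cost ${control_cost:,.0f}/yr")
# Saving (~$60M/yr here) exceeds the control's cost, so the lab adopts it
# without any regulator mandating the control directly.
```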
International AI agency
Institutional capacity ↑ · institutional · treaty
AI risk is inherently cross-border so national regulation is leaky by construction, and only a dedicated international body with inspection rights can bind the risk surface.
No agency with inspection authority is negotiated and operational within the next several years.
Interpretability first
Control mechanism ↑ · AI artefact · consent
Mechanistic understanding is a precondition for reliable oversight; behavioural evaluation without interpretability cannot rule out deceptive alignment.
Leading labs cannot produce mechanistic explanations of their own frontier models within two to three years of release.
Mass literacy
Substrate ↑ · population/culture · consent
Governance, democratic oversight, and consumer behaviour all fail in AI because citizens cannot evaluate the domain; population-scale literacy conditions every other lever.
High measured literacy produces no behavioural change on consumer and voting choices by 2030.
Military primacy
Concentration ↑ · institutional · unilateral force
Strategic competition between states dominates AI development; the state with the most capable AI is best positioned to secure safety and impose constraints on others.
Catastrophic outcome under a race dynamic that the strategy predicted would be stable.
Multipolarity
Concentration ↓ · institutional · treaty
Stable equilibrium among several roughly equal AI powers is safer than single dominance or uncoordinated chaos, producing restraint through mutual capability awareness.
Any actor achieves decisive advantage others cannot match within a planning cycle.
Narrow AI preservation
Scope ↓ · AI artefact · state coercion
Capability is not the problem; generality is. Narrow AI captures economic value with bounded scope while general systems drive the risk.
Narrow compositions cannot match general system economic returns.
Open source maximalism
Information flow ↑ · institutional · market
Concentration risk dominates misuse risk; open weights are the only mechanism that prevents a safety coup by a closed lab with captured regulators.
An openly released model produces a verified harm in a domain where defender access does not bound the risk.
Pause
Speed ↓ · speed/timing · consent
Time is the binding constraint: alignment and governance can catch up if frontier training halts above some capability threshold.
Major states or frontier labs publicly defect from a declared pause, or verification tech cannot distinguish a real pause from a declared one (one verification heuristic sketched below).
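One verification heuristic, sketched under stated assumptions: metered datacenter energy bounds the FLOPs that could have been executed in an audit window, which can be checked against a declared threshold. Both the efficiency constant and the threshold are assumptions.

```python
# Pause verification by energy accounting: metered energy implies an upper
# bound on FLOPs executed during the audit window. The efficiency figure
# and the threshold are illustrative assumptions.

FLOP_PER_JOULE = 1e11   # assumed system-level accelerator efficiency
PAUSE_THRESHOLD = 1e26  # assumed training-FLOP red line

def implied_flop_ceiling(energy_mwh: float) -> float:
    return energy_mwh * 3.6e9 * FLOP_PER_JOULE  # MWh -> J -> FLOP

for mwh in (50_000, 5_000_000):
    ceiling = implied_flop_ceiling(mwh)
    flag = "consistent with pause" if ceiling < PAUSE_THRESHOLD else "audit"
    print(f"{mwh:>9,} MWh -> <= {ceiling:.1e} FLOP ({flag})")
```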
Plural AI ethic
Value diversity ↑ · AI artefact · consent
Value lock-in is the dominant long-term risk and arrives through convergence of AI values; diversity at the AI layer preserves optionality for humanity to revise values.
Measured value convergence across frontier models within three years.
Public AI
Concentration ↑ · institutional · state coercion
Private ownership of frontier AI concentrates decision authority incompatibly with the technology's distributional stakes; public ownership aligns incentives with broad welfare.
Concentration risks appear inside the public entity, or the public entity lags the private frontier into irrelevance.
Red line capability
Scope ↓ · AI artefact · state coercion
Most risk comes from a small number of identifiable capabilities that can be banned outright while the rest of the frontier advances.
A system crosses a named red line without a prior warning signal, or many deployed systems hold a red line capability latently.
Regulated utility
Institutional capacity ↑ · market/economic · state coercion
Frontier AI has natural monopoly characteristics (scale, network effects, capital intensity); rate-of-return regulation removes the profit incentive for speed racing (revenue sketch below).
Utility regulation produces no safety investment above voluntary baseline.
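A minimal sketch of the rate-of-return logic with illustrative numbers: allowed revenue is cost of service plus a fixed return on the rate base, so profit is pinned to invested capital rather than to winning a capability race.

```python
# Rate-of-return logic: allowed revenue is cost of service plus a fixed
# return on the rate base, so profit scales with invested capital rather
# than with being first to a capability. Numbers are illustrative.

def allowed_revenue(operating_cost: float,
                    rate_base: float,
                    allowed_return: float = 0.08) -> float:
    return operating_cost + allowed_return * rate_base

# Racing ahead (more opex burned on speed) does not raise profit:
profit_slow = allowed_revenue(2e9, 10e9) - 2e9
profit_fast = allowed_revenue(3e9, 10e9) - 3e9
print(profit_slow == profit_fast)  # True: return is pinned to the rate base
```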
Religious and moral authority
Legitimacy ↑ · population/culture · consent
The legitimacy deficit of AI governance is at root a moral deficit that technical authorities cannot fill; established religious and ethical traditions can.
Formal religious positions move no measurable outcome.
Resilience first
Resilience ↑ · non-preventive · consent
Brittleness of the surrounding world, not AI capability itself, is the binding constraint; a resilient world absorbs failures and recovers.
Core infrastructure degradation rates exceed hardening rates for three consecutive years, particularly in verification cost and democratic trust.
Sabotage
Speed ↓ · speed/timing · unilateral force
Governance has not produced meaningful constraint, and direct action against hostile labs has a non-zero historical base rate of producing slowdown.
No credible actor attempts the path; strategy was correctly assessed as non-viable.
Small model first
Scope ↓ · AI artefact · market
Safety risk rises with scale via emergent capability, opacity, and energy footprint; a small-model research culture produces easier-to-interpret systems.
A widening scale gap through 2027.
Sovereign wealth
Concentration ↓ · market/economic · state coercion
Concentration of AI surplus, not AI capability, is the binding failure mode; broad ownership dissolves the political instability driving other strategies.
Captured surplus leaves political concentration unchanged (Alaska PFD analogue).
Voluntary restraint
Institutional capacity • · institutional · consent
Labs know more about what safety requires than regulators, and self-binding commitments capture that expertise without legislative lag.
Visible weakening of responsible scaling policy (RSP) text under capability pressure, combined with no meaningful penalty.
Whistleblower primacy
Information flow ↑ · legal/individual · state coercion
External evaluation sees only what labs release; insider disclosure is the only route to ground truth on capability trajectory, safety culture, and incidents.
A major safety incident is known internally and goes undisclosed even under the new protection regime.