AGI Strategies

Institutional capacity · institutional

International AI agency

AI risk is inherently cross-border, so national regulation is leaky by construction; only a dedicated international body with inspection rights can bind the risk surface.

Mechanism

Build an IAEA-for-AI with monitoring, inspection, and sanction authority across member states.

If it succeeds: what binds next

The agency exists. Its decisions become the binding layer. The UN Security Council analogue suggests it replicates the geopolitical tensions it was meant to bridge.

A strategy that produces a worse next problem than the one it solved has not done durable work.

Falsification signal

No agency with inspection authority is negotiated and operational within the next several years.

A strategy held without a falsification signal is not strategy; it is affiliation. Continued support after this signal lands is identity, not bet. See the identity diagnostic.

Historical analogue

Nuclear · IAEA

Every strategy inherits a plausible ceiling from its precedent. The analogue conditions the realistic reach.

Produced

Controlled primary materials, inspection regime for some signatories.

Did not produce

Could not prevent state proliferation; enforcement bounded by major-power consent.

Addresses 2 failure scenarios


Load-bearing commitments

Worldview positions this strategy quietly assumes. If the claim fails empirically or philosophically, the strategy loses its target or its premise.

Coordination

Coordination is tractable at sufficient scale with a legitimate agency.

Fails if: the agency replicates existing geopolitical tensions, becoming a venue for the conflict rather than a solution.

Coordinates

Acts on: institutional
Coercion: treaty
Actor in control: humans
Time horizon: pre-transition
Legitimacy source: state

Conflicts, grouped by mechanism

0

No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.

Complements, grouped by mechanism

5

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Governance first · Arms control treaty

Same phase, different layer

same stage, distinct levers

Both are active in the same phase of the transition but act on different layers (model vs institution vs culture). They cover different failure modes inside the same window.

Compute governance

Shared authority

same legitimacy source

Different levers, same legitimacy source (democratic, state, technical, market). The pair hangs together under one kind of authority; it stands or falls with that authority.

Antitrust primacy

Cross-side bridge

one AI-side, one world-side

One acts on the model, the other on institutions or culture. The bridge hedges against both artefact-level and substrate-level failure.

Red line capability

Same-lever twins

7

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.

Academic firewalling · AI worker collective action · Criminal liability · Insurance mandate · Liability driven safety · Regulated utility · Scientific accumulation

Axis position

What the strategy acts on: Institutional
Coercion level: Treaty
Actor in control: Humans as principals
Time horizon: Pre-transition
Legitimacy source: State

Source note: International AI agency strategy.md