AGI Strategies

Scope · institutional

Sunset clause

The default direction of AI governance is toward permanent permission; every new capability becomes an entitlement. Reversing the default concentrates deliberative attention on re-authorisation, which is where it matters.

Mechanism

All AI capability authorisations default to expiry. Training approvals and capability thresholds sunset after a specified period unless affirmatively re-authorised. Temporary authorisation replaces permanent permission.
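The default-expiry mechanism can be sketched as a minimal data model. This is an illustrative sketch only; the class and field names (`Authorisation`, `reauthorise`, a one-year term) are hypothetical, not drawn from any actual regime. The point it encodes: an authorisation carries a sunset by construction, and renewal is a fresh affirmative act rather than an extension of the old grant.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Authorisation:
    """Hypothetical model: every grant expires unless re-authorised."""
    capability: str
    granted: date
    term: timedelta = timedelta(days=365)  # sunset period; no permanent option exists

    def active(self, today: date) -> bool:
        # The default is expiry: past the term, the grant lapses on its own.
        return today < self.granted + self.term

    def reauthorise(self, today: date) -> "Authorisation":
        # Renewal creates a new grant dated today, forcing a fresh decision
        # rather than silently extending the old one.
        return Authorisation(self.capability, granted=today, term=self.term)

auth = Authorisation("frontier-training-run", granted=date(2025, 1, 1))
print(auth.active(date(2025, 6, 1)))   # True: within the term
print(auth.active(date(2026, 6, 1)))   # False: lapsed unless affirmatively renewed
```

Note the design choice the sketch makes visible: there is no code path that yields a grant without a `term`, which is exactly the reversal of default the prose describes.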

If it succeeds: what binds next

With every authorisation defaulting to expiry, the re-authorisation calendar becomes the binding process. If renewals routinise, the strategy degrades into the permission regime it was meant to replace.

A strategy that produces a worse next problem than the one it solved has not done durable work.

Load-bearing commitments

Worldview positions this strategy quietly assumes. If the claim fails empirically or philosophically, the strategy loses its target or its premise.

Authority

The default toward permission is reversible by institutional design; re-authorisation can stay contested rather than routinised.

Fails if: renewals become automatic, leaving the sunset as procedural theatre.

Coordinates

Primary lever: Scope (Restrict)
Acts on: institutional
Coercion: state coercion
Actor in control: humans
Time horizon: horizon neutral
Legitimacy source: state

Conflicts, grouped by mechanism (0)

No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.

Complements, grouped by mechanism (4)

Adjacent bet

different levers, loosely coupled

Different levers, different directions of action. They reinforce only via the general principle that covering more bets dominates covering fewer.

Gradualism

Shared authority

same legitimacy source

Different levers, same legitimacy source (democratic, state, technical, market). The pair hangs together under one kind of authority; it stands or falls with that authority.

Bureaucratic slowdown

Cross-side bridge

one AI-side, one world-side

One acts on the model, the other on institutions or culture. The bridge hedges against both artefact-level and substrate-level failure.

Governance first

Same-side diversification

same side, different lever

Both act on the same side (AI or world) but pull distinct levers. They cover several failure modes on that side while leaving the other side uncovered.

Capability ceiling

Same-lever twins (1)

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.

Test ground

Axis position

What the strategy acts on: Institutional
Coercion level: State coercion
Actor in control: Humans as principals
Time horizon: Horizon-neutral
Legitimacy source: State

Source note: Sunset clause strategy.md