AI-side lever
Scope
Which kinds of AI capability are allowed at all.
Conflict surface
This lever has strategies pulling in both directions. Any portfolio that includes one strategy from each direction is in tension; the combination cannot honestly be pursued simultaneously.
↑ Restrict
Sunset clause
The default direction of AI governance is toward permanent permission: every new capability becomes an entitlement. Reversing the default, so that permission lapses unless renewed, concentrates deliberative attention on re-authorisation, which is where it matters.
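A toy sketch of the default-deny mechanic, with hypothetical names throughout; renewal is the only path to continued permission, so deliberation is forced at each expiry:

    from datetime import date

    class Authorisation:
        """Toy model of a sunset clause: permission lapses unless renewed."""
        def __init__(self, capability: str, expires: date):
            self.capability = capability
            self.expires = expires

        def renew(self, new_expiry: date) -> None:
            # Renewal is an explicit act of re-authorisation.
            self.expires = new_expiry

        def is_permitted(self, today: date) -> bool:
            # Fail closed: past the sunset date, the default is denial.
            return today <= self.expires

    permit = Authorisation("frontier deployment", expires=date(2026, 1, 1))
    assert not permit.is_permitted(date(2026, 6, 1))  # lapsed, so denied by default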
Test ground
Empirical data on AI impacts requires deployment somewhere; concentrating deployment in a defined testbed produces that data without generalising the risk. Testbed consent also confers a legitimacy that uncontrolled deployment lacks.
↓ Permit
Abandon superintelligence
The risk from superintelligence is unbounded while the value foregone is bounded, and permanent global coordination against the technology is sufficiently achievable.
Capability ceiling
Some capability level captures most of the economic value while avoiding most of the risk, is identifiable before it is crossed, and can be verifiably enforced.
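Compute thresholds are the usual proxy for such a ceiling (regimes like the EU AI Act already key obligations to a training-compute threshold, around 10^25 FLOP). A minimal sketch of a pre-run gate, using the common estimate that training compute ≈ 6 × parameters × tokens FLOPs; the threshold value and function names here are illustrative assumptions, not a recommendation:

    CEILING_FLOP = 1e25  # illustrative threshold

    def estimated_training_flop(n_params: float, n_tokens: float) -> float:
        # Standard approximation: ~6 FLOPs per parameter per training token.
        return 6.0 * n_params * n_tokens

    def run_is_permitted(n_params: float, n_tokens: float) -> bool:
        # Verifiable before the run starts: both inputs are known in advance.
        return estimated_training_flop(n_params, n_tokens) < CEILING_FLOP

    # e.g. a 70e9-parameter model on 15e12 tokens ≈ 6.3e24 FLOP, under this ceiling
    assert run_is_permitted(70e9, 15e12)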
Embodiment requirement
The dangerous properties of frontier AI (unbounded replication, parallelism, speed, reach) are artefacts of disembodiment; physical presence caps the action rate regardless of the inference rate.
Narrow AI preservation
Capability is not the problem; generality is. Narrow AI captures economic value within a bounded scope, while general-purpose systems drive the risk.
Rate limited AI
Most AI-caused catastrophe requires speed; slow AI, even if arbitrarily capable, is supervisable, and rate limits are easier to enforce than capability limits.
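Part of the enforceability claim is that a rate limit lives at the serving layer, where the mechanics are standard. A minimal token-bucket sketch, with hypothetical names, gating each model action; capability is untouched, only the action rate is capped:

    import time

    class TokenBucket:
        """Classic token-bucket limiter: capacity caps bursts, rate caps throughput."""
        def __init__(self, rate_per_sec: float, capacity: float):
            self.rate = rate_per_sec
            self.capacity = capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self, cost: float = 1.0) -> bool:
            now = time.monotonic()
            # Refill in proportion to elapsed time, never beyond capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False  # action denied until the bucket refills

    limiter = TokenBucket(rate_per_sec=2.0, capacity=10.0)
    if limiter.allow():
        pass  # forward the action to the (hypothetical) inference endpoint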
Red line capability
Most risk comes from a small number of identifiable capabilities that can be banned outright while the rest of the frontier advances.
Small model first
Safety risk rises with scale via emergent capability, opacity, and energy footprint; a small-model research culture produces easier-to-interpret systems.