AGI Strategies

Time horizon · speed / timing

Gradualism

Harms from lower-capability AI are informative about harms from higher-capability AI, and deployment feedback outperforms fast scaling.

Mechanism

Deploy AI widely at each capability level before advancing to the next, using observed effects to decide whether to scale.

Load-bearing commitments

Worldview positions this strategy quietly assumes. If a claim fails empirically or philosophically, the strategy loses its target or its premise.

Time

Incremental evidence accumulates faster than risk.

Fails if: failures are abrupt rather than gradual, so incremental evidence lags the threat.

Coordinates

Acts on: speed / timing
Coercion: market
Actor in control: humans
Time horizon: during transition
Legitimacy source: market

Conflicts, grouped by mechanism (2)

Frame opposition

incompatible premises

The strategies accept different premises about what AI is or what the binding problem is. They conflict not on lever choice but on the frame that makes lever choice sensible.

Pause · Abandon superintelligence

Complements, grouped by mechanism (4)

Stage-sequenced

one sets up the other

The pair is phase-offset: one acts before the transition, the other during or after. The first creates the conditions under which the second binds.

Alignment first · Governance first · Differential technology development

Adjacent bet

different levers, loosely coupled

Different levers, different directions of action. They reinforce only via the general principle that covering more bets dominates covering fewer.

Catastrophe response capacity

Same-lever twins (2)

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.

AI skeptic (twin) · Default drift (twin)

Axis position

What the strategy acts on: Speed / timing
Coercion level: Market
Actor in control: Humans as principals
Time horizon: During transition
Legitimacy source: Market

Source note: Gradualism as strategy.md