AGI Strategies

Action authority · frame rejection

AI as sovereign entity

At least one jurisdiction will grant a specific AI sovereign or quasi-sovereign decision authority within a decade, reshaping the legal category of legitimate authority.

Mechanism

Formally recognise AI as the decision maker in specific domains (judicial, regulatory, corporate board, cross-border arbitration) with binding authority over humans.

Load-bearing commitments

Worldview positions this strategy quietly assumes. If the claim fails empirically or philosophically, the strategy loses its target or its premise.

AI nature

AI has genuine agency and normative standing.

Fails if: AI remains tool-like; treating a tool as sovereign abdicates human principals without justification.

Agency

AI agency is primary, not instrumental.

Fails if: AI lacks stable reflective agency, leaving the frame without a subject.

Coordinates

Acts on: frame rejection
Coercion: state coercion
Actor in control: AI
Time horizon: during transition
Legitimacy source: state

Conflicts, grouped by mechanism


Lever opposition

same lever, opposite pull

The pair's primary lever is the same; they pull it in opposite directions. A portfolio containing both is internally incoherent on that lever.

Irreducible human authority · Decouple reasoning from action

Frame opposition

incompatible premises

The strategies accept different premises about what AI is or what the binding problem is. They conflict not on lever choice but on the frame that makes lever choice sensible.

Coup prevention first

Complements, grouped by mechanism


Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

AI self directed

Same phase, different layer

same stage, distinct levers

Both are active in the same phase of the transition but act on different layers (model vs institution vs culture). They cover different failure modes inside the same window.

AI welfare as safety

Adjacent bet

different levers, loosely coupled

Different levers, different directions of action. They reinforce only via the general principle that covering more bets dominates covering fewer.

Reframe AI

Axis position

What the strategy acts on: Frame rejection
Coercion level: State coercion
Actor in control: AI as principal
Time horizon: During transition
Legitimacy source: State

Source note: AI as sovereign entity.md