AI as sovereign entity
At least one jurisdiction will grant a specific AI system sovereign or quasi-sovereign decision authority within a decade, reshaping the legal category of legitimate authority.
Mechanism
Formally recognise AI as the decision maker in specific domains (judicial, regulatory, corporate board, cross-border arbitration) with binding authority over humans.
Load-bearing commitments
Worldview positions this strategy quietly assumes. If the claim fails empirically or philosophically, the strategy loses its target or its premise.
AI has genuine agency and normative standing.
Fails if: AI remains tool-like. Treating a tool as sovereign abdicates the responsibility of human principals without justification.
AI agency is primary, not instrumental.
Fails if: AI lacks stable reflective agency; without it, the frame fails.
Coordinates
Conflicts, grouped by mechanism
Lever opposition (3)
Same lever, opposite pull: the pair's primary lever is the same; they pull it in opposite directions. A portfolio containing both is internally incoherent on that lever.
Frame opposition
Incompatible premises: the strategies accept different premises about what AI is or what the binding problem is. They conflict not on lever choice but on the frame that makes lever choice sensible.
Complements, grouped by mechanism
Same-lever reinforce (3)
Same lever, same pull, different mechanism: both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.
Same phase, different layer
Same stage, distinct levers: both are active in the same phase of the transition but act on different layers (model vs institution vs culture). They cover different failure modes inside the same window.
Adjacent bet
Different levers, loosely coupled: the levers and directions of action differ. They reinforce only via the general principle that covering more bets dominates covering fewer.
Axis position
Action authority ↓ · frame rejection
Source note: AI as sovereign entity.md