AGI Strategies

axes

Five axes of variation.

Every strategy sits at a position on each of five axes. The axes cross-cut the lever frame: two strategies can share a primary lever yet differ on coercion or actor, and two strategies on different levers can share nearly every axis value.

A clustered distribution on an axis means the field is concentrated on one kind of bet; a dispersed distribution means real strategic variety. Compare a dense axis (like actor in control) to a spread axis (like coercion).

What the strategy acts on

8 values

The AI-artefact vs world-side partition. Strategies that act on the AI artefact itself are a minority of the named space (17 of 76).

AI artefact

17 · 22%

Acts on the model, its training, its capabilities, or its scope.

AI containment, AI for safety, Alignment first, Capability ceiling, Closed weights mandate, Cooperative AI, Counter AI AI, Decouple reasoning from action, Differential technology development, Embodiment requirement, Interpretability first, Narrow AI preservation, Plural AI ethic, Rate limited AI, Red line capability, Safe by construction AI, Small model first

Institutional

22 · 29%

Acts on governance, agencies, antitrust, treaties.

Academic firewalling, AI worker collective action, Antitrust primacy, Arms control treaty, Bureaucratic slowdown, Centralised AI project, Constitutional AI (governance), Coordination infrastructure, Coup prevention first, Distributed builders, Governance first, International AI agency, Military primacy, Multipolarity, Mutual dependency, Open source maximalism, Public AI, Research community norms, Scientific accumulation, Sunset clause, Test ground, Voluntary restraint

Market / economic

7 · 9%

Acts on liability, insurance, compute, energy, data supply.

Compute governance, Data governance first, Energy choke point, Insurance mandate, Liability driven safety, Regulated utility, Sovereign wealth

Population / culture

8 · 11%

Acts on literacy, information integrity, legitimacy, framing.

Consumer refusal, Democratic mandate, Human augmentation race, Information integrity first, Legitimacy first, Mass literacy, Religious and moral authority, Ubuntu relational AI

Legal / individual

3 · 4%

Acts on specific actors, prosecution, whistleblowing, authority reservation.

Criminal liability, Irreducible human authority, Whistleblower primacy

Non-preventive

6 · 8%

Does not act before harm; builds resilience, exit, or response.

Catastrophe response capacity, Default drift, Hedge via exit, Long reflection, Portfolio hedge, Resilience first

Speed / timing

6 · 8%

Structures when and how capability arrives.

Abandon superintelligence, Acceleration, Gradualism, Pause, Race to aligned superintelligence, Sabotage

Frame rejection

7 · 9%

Rejects the alignment / control framing outright.

AI as sovereign entity, AI self directed, AI skeptic, AI welfare as safety, Confucian role ethics, Dharma conformity, Reframe AI
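The per-value counts and shares used throughout this note (the "n · p%" figures) can be reproduced from the axis data. A minimal Python sketch, using the eight counts of this first axis; the dict layout and whole-percent rounding convention are the only assumptions:

```python
# Per-value counts for the "what the strategy acts on" axis, as listed in this note.
counts = {
    "AI artefact": 17,
    "Institutional": 22,
    "Market / economic": 7,
    "Population / culture": 8,
    "Legal / individual": 3,
    "Non-preventive": 6,
    "Speed / timing": 6,
    "Frame rejection": 7,
}

total = sum(counts.values())  # 76 named strategies

# Shares round to whole percents, matching the note's "n · p%" convention.
for value, n in counts.items():
    print(f"{value}: {n} · {round(100 * n / total)}%")
```

The same computation applies to each of the five axes; rounding to whole percents is why some rows round up (8 of 76 prints as 11%).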

Coercion level

7 values

Orthogonal to lever choice. The same lever can be pulled by consent, treaty, law, friction, or force.

Consent

27 · 36%
Academic firewalling, AI for safety, AI welfare as safety, Alignment first, Confucian role ethics, Cooperative AI, Coordination infrastructure, Counter AI AI, Democratic mandate, Dharma conformity, Differential technology development, Hedge via exit, Interpretability first, Legitimacy first, Long reflection, Mass literacy, Pause, Plural AI ethic, Portfolio hedge, Reframe AI, Religious and moral authority, Research community norms, Resilience first, Safe by construction AI, Scientific accumulation, Ubuntu relational AI, Voluntary restraint

Treaty

4 · 5%
Abandon superintelligence, Arms control treaty, International AI agency, Multipolarity

State coercion

29 · 38%
AI as sovereign entity, Antitrust primacy, Capability ceiling, Catastrophe response capacity, Centralised AI project, Closed weights mandate, Compute governance, Constitutional AI (governance), Coup prevention first, Criminal liability, Data governance first, Decouple reasoning from action, Embodiment requirement, Energy choke point, Governance first, Information integrity first, Insurance mandate, Irreducible human authority, Liability driven safety, Narrow AI preservation, Public AI, Race to aligned superintelligence, Rate limited AI, Red line capability, Regulated utility, Sovereign wealth, Sunset clause, Test ground, Whistleblower primacy

Market

7 · 9%
Acceleration, Consumer refusal, Distributed builders, Gradualism, Human augmentation race, Open source maximalism, Small model first

Friction

4 · 5%
AI containment, AI worker collective action, Bureaucratic slowdown, Mutual dependency

Unilateral force

2 · 3%
Military primacy, Sabotage

Not applicable

3 · 4%
AI self directed, AI skeptic, Default drift

Actor in control

4 values

Who or what holds the steering role.

Humans as principals

68 · 89%
Abandon superintelligence, Academic firewalling, Acceleration, AI containment, AI for safety, AI worker collective action, Alignment first, Antitrust primacy, Arms control treaty, Bureaucratic slowdown, Capability ceiling, Catastrophe response capacity, Centralised AI project, Closed weights mandate, Compute governance, Confucian role ethics, Constitutional AI (governance), Consumer refusal, Coordination infrastructure, Coup prevention first, Criminal liability, Data governance first, Decouple reasoning from action, Democratic mandate, Dharma conformity, Differential technology development, Distributed builders, Embodiment requirement, Energy choke point, Governance first, Gradualism, Hedge via exit, Human augmentation race, Information integrity first, Insurance mandate, International AI agency, Interpretability first, Irreducible human authority, Legitimacy first, Liability driven safety, Long reflection, Mass literacy, Military primacy, Multipolarity, Mutual dependency, Narrow AI preservation, Open source maximalism, Pause, Portfolio hedge, Public AI, Race to aligned superintelligence, Rate limited AI, Red line capability, Reframe AI, Regulated utility, Religious and moral authority, Research community norms, Resilience first, Sabotage, Safe by construction AI, Scientific accumulation, Small model first, Sovereign wealth, Sunset clause, Test ground, Ubuntu relational AI, Voluntary restraint, Whistleblower primacy

AI as principal

2 · 3%
AI as sovereign entity, AI self directed

Multi-AI equilibrium

4 · 5%
AI welfare as safety, Cooperative AI, Counter AI AI, Plural AI ethic

No principal (drift)

2 · 3%
AI skeptic, Default drift

Time horizon

4 values

When in the transition the strategy binds.

Pre-transition

33 · 43%
Academic firewalling, AI worker collective action, Alignment first, Arms control treaty, Bureaucratic slowdown, Capability ceiling, Closed weights mandate, Compute governance, Coordination infrastructure, Coup prevention first, Data governance first, Decouple reasoning from action, Democratic mandate, Differential technology development, Embodiment requirement, Energy choke point, Governance first, Information integrity first, International AI agency, Interpretability first, Legitimacy first, Mass literacy, Mutual dependency, Narrow AI preservation, Pause, Public AI, Red line capability, Research community norms, Sabotage, Safe by construction AI, Scientific accumulation, Small model first, Voluntary restraint

During transition

17 · 22%
Acceleration, AI as sovereign entity, AI containment, AI for safety, AI self directed, AI welfare as safety, Centralised AI project, Constitutional AI (governance), Cooperative AI, Counter AI AI, Gradualism, Human augmentation race, Military primacy, Multipolarity, Plural AI ethic, Race to aligned superintelligence, Test ground

Post-transition

1 · 1%
Long reflection

Horizon-neutral

25 · 33%
Abandon superintelligence, AI skeptic, Antitrust primacy, Catastrophe response capacity, Confucian role ethics, Consumer refusal, Criminal liability, Default drift, Dharma conformity, Distributed builders, Hedge via exit, Insurance mandate, Irreducible human authority, Liability driven safety, Open source maximalism, Portfolio hedge, Rate limited AI, Reframe AI, Regulated utility, Religious and moral authority, Resilience first, Sovereign wealth, Sunset clause, Ubuntu relational AI, Whistleblower primacy

Legitimacy source

8 values

Where the strategy derives its authority to act.

Technical

18 · 24%
AI containment, AI for safety, AI welfare as safety, Alignment first, Cooperative AI, Coordination infrastructure, Counter AI AI, Decouple reasoning from action, Differential technology development, Human augmentation race, Interpretability first, Mutual dependency, Narrow AI preservation, Rate limited AI, Reframe AI, Safe by construction AI, Scientific accumulation, Small model first

State

26 · 34%
AI as sovereign entity, Antitrust primacy, Arms control treaty, Bureaucratic slowdown, Capability ceiling, Catastrophe response capacity, Centralised AI project, Closed weights mandate, Compute governance, Criminal liability, Data governance first, Embodiment requirement, Energy choke point, Governance first, Information integrity first, International AI agency, Liability driven safety, Military primacy, Multipolarity, Race to aligned superintelligence, Red line capability, Regulated utility, Resilience first, Sunset clause, Test ground, Whistleblower primacy

Democratic

11 · 14%
Abandon superintelligence, Constitutional AI (governance), Coup prevention first, Democratic mandate, Irreducible human authority, Legitimacy first, Long reflection, Mass literacy, Pause, Public AI, Sovereign wealth

Market

6 · 8%
Acceleration, Consumer refusal, Distributed builders, Gradualism, Insurance mandate, Open source maximalism

Self

4 · 5%
AI self directed, AI skeptic, Default drift, Voluntary restraint

Religious

3 · 4%
Confucian role ethics, Dharma conformity, Religious and moral authority

Extra-institutional

5 · 7%
Academic firewalling, AI worker collective action, Research community norms, Sabotage, Ubuntu relational AI

Mixed

3 · 4%
Hedge via exitPlural AI ethicPortfolio hedge

density map

Where the field has explored, and where it has not.

Each cell is one lever crossed with one time horizon. A thick cell means many strategies make that kind of bet at that stage of the transition. An empty cell is either an unexplored region or a structural no-go.

Of the 60 cells, 4 are thick (4+ strategies) and 27 are empty. Marginal returns to new strategy invention are higher in the empty cells than in the crowded ones.

lever \ horizon: pre · during · post · neutral (· marks an empty cell)

Speed · row total 7
  pre: 5 (thick)
  during: 2 (sparse): Acceleration, Race to aligned superintelligence
  post: ·
  neutral: ·

Concentration · row total 8
  pre: 2 (sparse): Coup prevention first, Public AI
  during: 3: Centralised AI project, Military primacy, Multipolarity
  post: ·
  neutral: 3: Antitrust primacy, Distributed builders, Sovereign wealth

Control mechanism · row total 9
  pre: 3: Alignment first, Interpretability first, Safe by construction AI
  during: 3: AI containment, AI for safety, Counter AI AI
  post: ·
  neutral: 3: Confucian role ethics, Dharma conformity, Reframe AI

Institutional capacity · row total 11
  pre: 7 (thick)
  during: ·
  post: ·
  neutral: 4 (thick)

Resilience · row total 3
  pre: ·
  during: ·
  post: ·
  neutral: 3: Hedge via exit, Portfolio hedge, Resilience first

Scope · row total 10
  pre: 6 (thick)
  during: 1 (sparse): Test ground
  post: ·
  neutral: 3: Abandon superintelligence, Rate limited AI, Sunset clause

Action authority · row total 4
  pre: 1 (sparse): Decouple reasoning from action
  during: 2 (sparse): AI as sovereign entity, AI self directed
  post: ·
  neutral: 1 (sparse): Irreducible human authority

Information flow · row total 4
  pre: 2 (sparse): Closed weights mandate, Information integrity first
  during: ·
  post: ·
  neutral: 2 (sparse): Open source maximalism, Whistleblower primacy

Cooperation substrate · row total 4
  pre: 2 (sparse): Coordination infrastructure, Mutual dependency
  during: 2 (sparse): AI welfare as safety, Cooperative AI
  post: ·
  neutral: ·

Time horizon · row total 4
  pre: ·
  during: 1 (sparse): Gradualism
  post: 1 (sparse): Long reflection
  neutral: 2 (sparse): AI skeptic, Default drift

Substrate · row total 3
  pre: 2 (sparse): Data governance first, Mass literacy
  during: 1 (sparse): Human augmentation race
  post: ·
  neutral: ·

Value diversity · row total 1
  pre: ·
  during: 1 (sparse): Plural AI ethic
  post: ·
  neutral: ·

Response capacity · row total 1
  pre: ·
  during: ·
  post: ·
  neutral: 1 (sparse): Catastrophe response capacity

Legitimacy · row total 4
  pre: 2 (sparse): Democratic mandate, Legitimacy first
  during: 1 (sparse): Constitutional AI (governance)
  post: ·
  neutral: 1 (sparse): Religious and moral authority

Culture · row total 3
  pre: 1 (sparse): Research community norms
  during: ·
  post: ·
  neutral: 2 (sparse): Consumer refusal, Ubuntu relational AI

column totals: pre 33 · during 17 · post 1 · neutral 25 · total 76
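The headline figures quoted above the table (4 thick cells, 27 empty cells of 60) and the column totals can be rechecked from the per-cell counts. A minimal sketch, with counts transcribed from this density map; the variable names are illustrative:

```python
# Lever-by-horizon strategy counts [pre, during, post, neutral],
# transcribed from this note's density map.
cells = {
    "Speed":                  [5, 2, 0, 0],
    "Concentration":          [2, 3, 0, 3],
    "Control mechanism":      [3, 3, 0, 3],
    "Institutional capacity": [7, 0, 0, 4],
    "Resilience":             [0, 0, 0, 3],
    "Scope":                  [6, 1, 0, 3],
    "Action authority":       [1, 2, 0, 1],
    "Information flow":       [2, 0, 0, 2],
    "Cooperation substrate":  [2, 2, 0, 0],
    "Time horizon":           [0, 1, 1, 2],
    "Substrate":              [2, 1, 0, 0],
    "Value diversity":        [0, 1, 0, 0],
    "Response capacity":      [0, 0, 0, 1],
    "Legitimacy":             [2, 1, 0, 1],
    "Culture":                [1, 0, 0, 2],
}

flat = [n for row in cells.values() for n in row]        # 60 cells
empty = sum(1 for n in flat if n == 0)                    # 27 empty cells
thick = sum(1 for n in flat if n >= 4)                    # 4 thick cells (4+ strategies)
col_totals = [sum(col) for col in zip(*cells.values())]   # [33, 17, 1, 25]

print(f"empty={empty} thick={thick} columns={col_totals} total={sum(flat)}")
```

Thick is taken here as 4+ strategies per cell, matching the note's own definition; an empty cell is a zero count.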

Read down the horizon columns: the post-transition column is nearly empty; the field has little to say about the world after AI succeeds or fails. The pre-transition column is where most strategy effort concentrates. Read across the lever rows: the field is thick on institutional capacity, scope, and control mechanism; thin on value diversity, response capacity, and culture.

Dimensions beyond these five remain under debate. Seven-dimension, ten-lever, and axis-only frames all give partial views of the same space (see vault notes on frame unification).

An empty cell here, like a thin axis value elsewhere (say, coercion = unilateral force, with only two strategies), either points to a blind spot in the named portfolio or to an empirical no-go region. The survey catalogues; it does not judge which.