AGI Strategies

Speed · speed timing

Race to aligned superintelligence

Alignment is solvable within the window, and a single aligned superintelligence in a legitimate state's hands beats the counterfactual of coordination failure.

Mechanism

Put national resources behind building aligned superintelligence first, treating it as a Manhattan-scale strategic priority.

If it succeeds: what binds next

One actor has aligned superintelligence. They choose whether to constrain rivals, share, or defer. Power concentrates at exactly the moment it is least legitimately held.

A strategy that produces a worse next problem than the one it solved has not done durable work.

Self-undermining threshold

overshoot risk

When more than one state begins racing.

The race dynamic pressures every participant to cut alignment corners. A race for alignment becomes a race against alignment by step two.

Every strategy has a stable region where it reinforces itself and an unstable region where pursuit defeats it. The threshold between them is usually narrower than advocates acknowledge.

People on the record

14

Profiled figures appear first, with their tier noted. Tag: race-to-aligned-si.

expertise mix · 9 profiled

Builds frontier systems: 2
Deep ML / safety technical: 1
Applied or adjacent technical: 0
Governance, policy, strategy: 4
Expert in another field: 0
Public-square commentator: 2

recognition mix

Mass-public recognition: 7
Known across the AI/safety field: 2
Recognised inside subfield: 0
Newer or less central voice: 0

A strategy whose endorsement skews to commentators or external-domain experts is in a different epistemic state from one endorsed mostly by frontier-builders. The mix is read carefully across both axes; see the board for criteria. Counts are over the 9 profiled people on this strategy (5 unprofiled excluded).

  • Alex Karp

    Governance, policy, strategy · Mass-public recognition

  • Alex Wang

    Governance, policy, strategy · Known across the AI/safety field

  • Dario Amodei

    Builds frontier systems · Mass-public recognition

  • Elon Musk

    Public-square commentator · Mass-public recognition

  • Eric Schmidt

    Governance, policy, strategy · Mass-public recognition

  • Ilya Sutskever

    Builds frontier systems · Mass-public recognition

  • Leopold Aschenbrenner

    Deep ML / safety technical · Known across the AI/safety field

  • Palmer Luckey

    Public-square commentator · Mass-public recognition

  • Xi Jinping

    Governance, policy, strategy · Mass-public recognition

  • Carl Shulman

    Open Phil senior research analyst; AGI takeoff economics

  • Daniel Eth

    Foresight Institute alignment researcher

  • Jakub Pachocki

    OpenAI Chief Scientist (since 2024)

  • Kara Frederick

    Heritage Foundation tech policy director

  • Mark Chen

    OpenAI Chief Research Officer

Coordinates

Primary lever: Speed (Accelerate)
Acts on: speed timing
Coercion: state coercion
Actor in control: humans
Time horizon: during transition
Legitimacy source: state

Conflicts, grouped by mechanism

4

Frame opposition

incompatible premises

The strategies accept different premises about what AI is or what the binding problem is. They conflict not on lever choice but on the frame that makes lever choice sensible.

Abandon superintelligence · Capability ceiling · Narrow AI preservation

Lever opposition

same lever, opposite pull

The pair shares a primary lever but pulls it in opposite directions. A portfolio containing both is internally incoherent on that lever.

Pause

Complements, grouped by mechanism

4

Same phase, different layer

same stage, distinct levers

Both are active in the same phase of the transition but act on different layers (model vs institution vs culture). They cover different failure modes inside the same window.

Centralised AI project · Military primacy

Stage-sequenced

one sets up the other

The pair is phase-offset: one acts before the transition, the other during or after it. The first creates the conditions under which the second binds.

Alignment first

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Acceleration

Axis position

What the strategy acts on: Speed / timing
Coercion level: State coercion
Actor in control: Humans as principals
Time horizon: During transition
Legitimacy source: State

Source note: Race to aligned superintelligence strategy.md