Speed ↑ · speed timing
Race to aligned superintelligence
Alignment is solvable within the window, and a single aligned superintelligence in a legitimate state's hands beats the counterfactual of coordination failure.
Mechanism
Put national resources behind building aligned superintelligence first, treated as a Manhattan-scale strategic priority.
If it succeeds: what binds next
One actor has aligned superintelligence. They choose whether to constrain rivals, share, or defer. Power concentrates at exactly the moment it is least legitimately held.
A strategy that produces a worse next problem than the one it solved has not done durable work.
Self-undermining threshold
Overshoot risk: when more than one state begins racing.
The race dynamic pressures every participant to cut alignment corners. A race for alignment becomes a race against alignment by step two.
Every strategy has a stable region where it reinforces itself and an unstable region where pursuit defeats it. The threshold between them is usually narrower than advocates acknowledge.
People on the record
14 people in total. Profiled figures appear first, each listed with expertise tier and recognition level. Tag: race-to-aligned-si.
Charts: expertise mix (9 profiled) · recognition mix.
A strategy whose endorsement skews toward commentators or external-domain experts is in a different epistemic state from one endorsed mostly by frontier builders. Read the mix across both axes; see the board for criteria. Counts cover the 9 profiled people on this strategy (5 unprofiled excluded).

Alex Karp
Governance, policy, strategy · Mass-public recognition

Alex Wang
Governance, policy, strategy · Known across the AI/safety field

Dario Amodei
Builds frontier systems · Mass-public recognition

Elon Musk
Public-square commentator · Mass-public recognition

Eric Schmidt
Governance, policy, strategy · Mass-public recognition

Ilya Sutskever
Builds frontier systems · Mass-public recognition

Leopold Aschenbrenner
Deep ML / safety technical · Known across the AI/safety field

Palmer Luckey
Public-square commentator · Mass-public recognition

Xi Jinping
Governance, policy, strategy · Mass-public recognition
Unprofiled

Carl Shulman
Open Phil senior research analyst; AGI takeoff economics

Daniel Eth
Foresight Institute alignment researcher

Jakub Pachocki
OpenAI Chief Scientist (since 2024)

Kara Frederick
Heritage Foundation tech policy director

Mark Chen
OpenAI Chief Research Officer
Coordinates
Conflicts, grouped by mechanism (4)
Frame opposition
Incompatible premises: the strategies accept different premises about what AI is or what the binding problem is. They conflict not on lever choice but on the frame that makes lever choice sensible.
Lever opposition
Same lever, opposite pull: the pair's primary lever is the same; they pull it in opposite directions. A portfolio containing both is internally incoherent on that lever.
Complements, grouped by mechanism (4)
Same phase, different layer
Same stage, distinct levers: both are active in the same phase of the transition but act on different layers (model vs. institution vs. culture). They cover different failure modes inside the same window.
Stage-sequenced
One sets up the other: the pair is phase-offset; one acts before the transition, the other during or after. The first creates the conditions under which the second binds.
Same-lever reinforce
Same lever, same pull, different mechanism: both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.
Source note: Race to aligned superintelligence strategy.md