AGI Strategies

overview

A map of AI safety strategies.

Each strategy is a bet about which failure mode binds: which one actually gates a good outcome. The survey catalogues 76 named bets, the 15 levers they pull, and which combinations compose or conflict.

Two strategies conflict only when they pull the same lever in opposite directions, which is rare. Most pairs compose. Most public proposals combine three or four levers without stating which bet is load-bearing; the portfolio audit exposes that concealed concentration.
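The conflict rule is mechanical enough to state as code. A minimal sketch, assuming each strategy carries its chart position as one lever plus a signed direction; the names (`Strategy`, `Direction`, `conflicts`) are illustrative, not the survey's actual schema:

```ts
// Illustrative encoding of a strategy's chart position.
type Direction = -1 | 0 | 1; // pulls down · neutral or frame-rejecting · pulls up

interface Strategy {
  id: string;
  lever: string;        // the lever this strategy pulls (its chart row)
  direction: Direction; // which way it pulls
}

// Two strategies conflict only when they pull the same lever in
// opposite directions; every other pairing composes.
function conflicts(a: Strategy, b: Strategy): boolean {
  return a.lever === b.lever && a.direction * b.direction < 0;
}
```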

the field, every strategy, plotted

76 strategies · 15 levers

[Chart: each of the 76 strategies plotted as one dot on its lever's row. Rows and dot counts: Speed 7 · Concentration 8 · Control mechanism 9 · Institutional capacity 11 · Resilience 3 · Scope 10 · Action authority 4 · Information flow 4 · Cooperation substrate 4 · Time horizon 4 · Substrate 3 · Value diversity 1 · Response capacity 1 · Legitimacy 4 · Culture 3. Legend: pulls down (negative direction) · neutral or frame-rejecting · pulls up (positive direction).]

Each dot is one strategy. Rows are levers. A lever with dots on both sides is a real conflict surface; any portfolio with strategies from both sides contradicts itself on that lever. Lonely dots name under-explored positions. Click any dot to open the strategy.
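The "dots on both sides" test is the same data read per lever. A sketch, reusing the illustrative `Strategy` type above; the survey counts six such levers:

```ts
// A lever is a conflict surface when strategies pull it in both directions.
function conflictSurfaces(strategies: Strategy[]): string[] {
  const pull = new Map<string, { up: boolean; down: boolean }>();
  for (const s of strategies) {
    const e = pull.get(s.lever) ?? { up: false, down: false };
    if (s.direction > 0) e.up = true;
    if (s.direction < 0) e.down = true;
    pull.set(s.lever, e);
  }
  // Keep only levers seen pulled both up and down.
  return [...pull].filter(([, e]) => e.up && e.down).map(([lever]) => lever);
}
```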

  • Strategies catalogued: 76 · each a bet about what binds
  • Levers they pull: 15 of 15 distinct types
  • Conflict pairs: 51 · across 6 levers with real two-sided pull
  • World-side strategies: 33% · act on institutions, culture, substrate, not the model
  • Total unordered pairs: 2,850 · most compose; few actually conflict
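The headline numbers are checkable from the same data. A sketch reusing `conflicts` from above: 76 strategies give 76 × 75 / 2 = 2,850 unordered pairs, of which the survey counts 51 as conflicts:

```ts
// Every unordered pair of n strategies: n choose 2.
const unorderedPairs = (n: number): number => (n * (n - 1)) / 2;
console.log(unorderedPairs(76)); // 2850

// Brute-force count of conflicting pairs.
function conflictPairCount(strategies: Strategy[]): number {
  let count = 0;
  for (let i = 0; i < strategies.length; i++)
    for (let j = i + 1; j < strategies.length; j++)
      if (conflicts(strategies[i], strategies[j])) count++;
  return count; // the survey reports 51
}
```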

What's here.

Seven ways to enter the survey. Start where the question is yours.

a walking tour

If you want one path through the survey.

  1. Start with a failure mode you actually fear; pick one at scenarios. See which strategies are catalogued as responsive.
  2. Open the top candidate. Read the bet, the mechanism, and what binds next if it succeeds. Does its successor problem scare you more than the original?
  3. Check the falsification signal and the self-undermining threshold. Would the advocate community update if the signal landed? Where does pursuit overshoot into the unstable region?
  4. Walk the complements by mechanism. Cross-side bridges reduce lever concentration; stage-sequenced pairs extend time coverage.
  5. Return here and load the portfolio you are building into the audit (sketched after this list). See which levers it misses and which strategies double-count.
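A sketch of what step 5's audit computes, under the same illustrative `Strategy` type: coverage gaps are levers no strategy in the portfolio touches, and double-counting is two or more strategies stacking the same lever in the same direction:

```ts
// Audit a portfolio: which levers it misses, and which strategies
// double-count by pulling the same lever the same way.
function auditPortfolio(portfolio: Strategy[], allLevers: string[]) {
  const covered = new Set(portfolio.map((s) => s.lever));
  const missed = allLevers.filter((l) => !covered.has(l));

  // Group strategy ids by lever + direction; groups of 2+ add
  // concentration, not coverage.
  const groups = new Map<string, string[]>();
  for (const s of portfolio) {
    const key = `${s.lever}:${s.direction}`;
    groups.set(key, [...(groups.get(key) ?? []), s.id]);
  }
  const doubleCounted = [...groups.values()].filter((g) => g.length > 1);

  return { missed, doubleCounted };
}
```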

Where the consensus lives.

the board →

For each strategy with at least four profiled endorsers, who actually holds it. A strategy held mostly by frontier-builders is in a different epistemic state from one held mostly by commentators. Counts are over the 9 strategies that meet the bar.
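The buckets below follow mechanical thresholds. A sketch of the rule, assuming each endorser carries one primary category tag; the category names, the `shareOf` helper, and the precedence of checks are ours, not the survey's:

```ts
// Illustrative endorser categories; labels approximate the survey's.
type EndorserCategory =
  | "frontier-builder"
  | "deep-technical"
  | "policy-meta"
  | "commentator"
  | "external-domain";

// Fraction of endorsers whose category is in the given set.
function shareOf(endorsers: EndorserCategory[], cats: EndorserCategory[]): number {
  if (endorsers.length === 0) return 0;
  return endorsers.filter((e) => cats.includes(e)).length / endorsers.length;
}

// Thresholds as stated in each bucket's header below; the order of
// checks is an assumption.
function bucket(endorsers: EndorserCategory[]): string {
  if (shareOf(endorsers, ["frontier-builder", "deep-technical"]) >= 0.6)
    return "builder-heavy";
  if (shareOf(endorsers, ["policy-meta"]) >= 0.4) return "policy-heavy";
  if (shareOf(endorsers, ["commentator", "external-domain"]) >= 0.5)
    return "commentary-heavy";
  return "unbucketed";
}
```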

Builder-heavy

Endorsement is ≥60% frontier-builder + deep-technical

  • AI skeptic

    Deep ML / safety technical · 57% of 35

    • Yann LeCun
    • Gary Marcus
    • Timnit Gebru
    • Andrew Ng
    • Ted Chiang
    • Naomi Klein
    • +29
  • Alignment first

    Deep ML / safety technical · 62% of 29

    • Stuart Russell
    • Nick Bostrom
    • Norbert Wiener
    • Claude Shannon
    • Alan Turing
    • John McCarthy
    • +23
  • Open source maximalism

    Builds frontier systems · 38% of 8

    • Yann LeCun
    • Andrew Ng
    • Mark Zuckerberg
    • Tim Berners-Lee
    • Emad Mostaque
    • Jeremy Howard
    • +2

Policy-heavy

Endorsement is ≥40% policy / meta

  • Governance first

    Governance, policy, strategy · 53% of 53

    • Yoshua Bengio
    • Demis Hassabis
    • Sam Altman
    • Gary Marcus
    • Mustafa Suleyman
    • Timnit Gebru
    • +47
  • Race to aligned superintelligence

    Governance, policy, strategy · 44% of 9

    • Dario Amodei
    • Ilya Sutskever
    • Elon Musk
    • Eric Schmidt
    • Alex Karp
    • Palmer Luckey
    • +3
  • Acceleration

    Governance, policy, strategy · 43% of 7

    • Marc Andreessen
    • JD Vance
    • Donald Trump
    • David Sacks
    • Vivek Ramaswamy
    • Richard S. Sutton
    • +1
  • Antitrust primacy

    Governance, policy, strategy · 100% of 4

    • Cory Doctorow
    • Lina Khan
    • Tim O'Reilly
    • Meredith Whittaker

Commentary-heavy

Endorsement is ≥50% commentator + external-domain

  • Pause

    Public-square commentator · 30% of 20

    • Geoffrey Hinton
    • Eliezer Yudkowsky
    • Elon Musk
    • Tristan Harris
    • Max Tegmark
    • Emmett Shear
    • +14
  • AI welfare as safety

    Expert in another field · 100% of 8

    • Peter Singer
    • Daniel Dennett
    • Thomas Nagel
    • David Chalmers
    • Christof Koch
    • Patricia Churchland
    • +2

Each lever is a kind of action a strategy takes. Strategies pulling the same lever either reinforce (same direction) or conflict (opposite directions).

Speed

7 · mixed

How fast frontier capability advances.

Concentration

8 · mixed

How many actors build frontier AI.

Control mechanism

9 · ai side

How AI is kept predictable.

Institutional capacity

11 · world side

Whether state and cross-state institutions can steer the outcome.

Resilience

3 · world side

How much the world tolerates AI failure.

Scope

10 · ai side

Which kinds of AI capability are allowed at all.

Action authority

4

Who (or what) makes binding decisions.

Information flow

4

What gets disclosed, verified, or hidden.

Cooperation substrate

4

Whether safety runs on AI-AI, human-AI, or human-only coordination.

Time horizon

4 · mixed

Whether safety planning looks at current systems, short-term agents, or post-ASI.

Substrate

3 · world side

Upstream physical inputs (compute, energy, data) or downstream substrates (information integrity, literacy).

Value diversity

1 · ai side

Pluralism across AI systems' values.

Response capacity

1 · world side

Ability to recover after AI-driven harms.

Legitimacy

4 · world side

Democratic, religious, or civic authority for any AI path.

Culture

3 · world side

Population competence, norms, and demand shaping.
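For reference, the taxonomy above as data. An illustrative encoding; `side` is left out for the three levers whose tag the section does not state:

```ts
type Side = "ai" | "world" | "mixed";

interface Lever {
  name: string;
  strategies: number; // dots on that lever's chart row
  side?: Side;        // omitted where the section does not say
}

const LEVERS: Lever[] = [
  { name: "Speed", strategies: 7, side: "mixed" },
  { name: "Concentration", strategies: 8, side: "mixed" },
  { name: "Control mechanism", strategies: 9, side: "ai" },
  { name: "Institutional capacity", strategies: 11, side: "world" },
  { name: "Resilience", strategies: 3, side: "world" },
  { name: "Scope", strategies: 10, side: "ai" },
  { name: "Action authority", strategies: 4 },
  { name: "Information flow", strategies: 4 },
  { name: "Cooperation substrate", strategies: 4 },
  { name: "Time horizon", strategies: 4, side: "mixed" },
  { name: "Substrate", strategies: 3, side: "world" },
  { name: "Value diversity", strategies: 1, side: "ai" },
  { name: "Response capacity", strategies: 1, side: "world" },
  { name: "Legitimacy", strategies: 4, side: "world" },
  { name: "Culture", strategies: 3, side: "world" },
];

// The per-lever counts sum to the catalogue's 76 strategies.
console.assert(LEVERS.reduce((n, l) => n + l.strategies, 0) === 76);
```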