AGI Strategies

Scope · speed / timing

Abandon superintelligence

The risk from superintelligence is unbounded while the value foregone is bounded, and permanent global coordination against the technology is feasible enough to be worth attempting.

Mechanism

A civilizational commitment to permanently forgo AI above some capability threshold, analogous to human-cloning moratoria or the Biological Weapons Convention (BWC).

If it succeeds: what binds next

The civilizational moratorium holds indefinitely. The binding problem is enforcement across generations, as the value of defection grows with the accumulated forgone capability.

A strategy that produces a worse next problem than the one it solved has not done durable work.

People on the record · 6

Profiled figures appear first, with their tiers; each entry links to the person and their full quote record. Tag: abandon-superintelligence.

expertise mix · 4 profiled

Builds frontier systems: 0
Deep ML / safety technical: 3
Applied or adjacent technical: 0
Governance, policy, strategy: 0
Expert in another field: 1
Public-square commentator: 0

recognition mix

Mass-public recognition: 1
Known across the AI/safety field: 3
Recognised inside subfield: 0
Newer or less central voice: 0

A strategy whose endorsement skews toward commentators or external-domain experts is in a different epistemic state from one endorsed mostly by frontier-builders. Read the mix across both axes; see the board for criteria. Counts cover the 4 profiled people on this strategy (2 unprofiled excluded).

  • Bill Joy
    Deep ML / safety technical · Mass-public recognition

  • Richard S. Sutton
    Deep ML / safety technical · Known across the AI/safety field

  • Roman Yampolskiy
    Deep ML / safety technical · Known across the AI/safety field

  • Samuel Butler
    Expert in another field · Known across the AI/safety field

  • Avi Loeb
    Harvard astrophysicist; Galileo Project director

  • Hans Moravec
    Robotics pioneer; 'Mind Children' (1948–)

Coordinates

Primary lever: Scope (Permit)
Acts on: speed / timing
Coercion: treaty
Actor in control: humans
Time horizon: horizon-neutral
Legitimacy source: democratic

Conflicts, grouped by mechanism · 3

Frame opposition

incompatible premises

The strategies accept different premises about what AI is or what the binding problem is. They conflict not on lever choice but on the frame that makes lever choice sensible.

Race to aligned superintelligence · Acceleration · AI for safety

Complements, grouped by mechanism · 4

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Capability ceiling · Narrow AI preservation

Shared authority

same legitimacy source

Different levers, same legitimacy source (democratic, state, technical, market). The pair hangs together under one kind of authority; it stands or falls with that authority.

Pause

Same-side diversification

same side, different lever

Both act on the same side (AI or world) but pull distinct levers. They cover several failure modes on that side while leaving the other side uncovered.

Safe by construction AI

Same-lever twins · 4

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.

Embodiment requirement · Rate limited AI · Red line capability · Small model first

Axis position

What the strategy acts on: Speed / timing
Coercion level: Treaty
Actor in control: Humans as principals
Time horizon: Horizon-neutral
Legitimacy source: Democratic

Source note: Abandon superintelligence strategy.md