AGI Strategies

compare

Two strategies, side by side.

Pick any two strategies. See who endorses each, the tier mix of endorsers, the p(doom) distribution, and which people endorse both. Useful for asking whether these strategies are actually opposed, or whether the disagreement lives among builders, in policy, or in the public square.

Stance defaults to live engagement: endorses, mixed, conditional, or evolved-toward. These are people who treat, or have treated, the strategy as a live bet of theirs. Opposers are listed separately.
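As a minimal sketch, the stance filter described above could look like this. The record schema and the stance strings are assumptions for illustration, not the site's actual data model:

```python
# Hypothetical person records; the real schema is an assumption.
people = [
    {"name": "A", "stance": "endorses"},
    {"name": "B", "stance": "opposes"},
    {"name": "C", "stance": "conditional"},
]

# Stances that count as "live engagement" per the description above.
LIVE_STANCES = {"endorses", "mixed", "conditional", "evolved-toward"}

# Endorsers are anyone with a live stance; opposers are kept separate.
endorsers = [p for p in people if p["stance"] in LIVE_STANCES]
opposers = [p for p in people if p["stance"] == "opposes"]
```

The point of the set is that "endorser" is deliberately broad (four stances), while opposition is a single, separately listed stance.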

Abandon superintelligence

4 endorsers · 2 oppose

Reject superintelligence as a goal entirely; narrow AI only

expertise mix

  • Builds frontier systems: 0
  • Deep ML / safety technical: 2
  • Applied or adjacent technical: 0
  • Governance, policy, strategy: 0
  • Expert in another field: 1
  • Public-square commentator: 0

recognition mix

  • Mass-public recognition: 1
  • Known across the AI/safety field: 2
  • Recognised inside subfield: 0
  • Newer or less central voice: 0

profiled: 3/4

mean p(doom): 100% (n=1)

quotes: 5

AI skeptic

81 endorsers · 2 oppose

AGI risk narratives overstated; real harms are mundane and current

expertise mix

  • Builds frontier systems: 1
  • Deep ML / safety technical: 20
  • Applied or adjacent technical: 1
  • Governance, policy, strategy: 2
  • Expert in another field: 10
  • Public-square commentator: 1

recognition mix

  • Mass-public recognition: 16
  • Known across the AI/safety field: 18
  • Recognised inside subfield: 1
  • Newer or less central voice: 0

profiled: 35/81

mean p(doom): 0% (n=1)

quotes: 97

where the disagreement lives

Tier shares within profiled endorsers. Positive shift means the tier is over-represented in Abandon superintelligence; negative means it's over-represented in AI skeptic.

Abandon superintelligence skews these tiers

  • Field-leading: +15pp
  • Deep technical: +10pp

AI skeptic skews these tiers

  • Household name: +12pp
mean p(doom)

Abandon superintelligence: 100% (n=1) vs AI skeptic: 0% (n=1) · Δ +100.0%
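The tier-share shift used above (share of each tier within profiled endorsers, differenced in percentage points) can be sketched as follows. The tier counts here are invented for illustration and do not reproduce the site's actual figures:

```python
# Hypothetical tier counts among *profiled* endorsers of each strategy.
abandon = {"Field-leading": 2, "Deep technical": 1, "Household name": 0}
skeptic = {"Field-leading": 10, "Deep technical": 8, "Household name": 17}

def tier_shares(counts):
    """Fraction of profiled endorsers in each tier."""
    total = sum(counts.values())
    return {tier: n / total for tier, n in counts.items()}

a, s = tier_shares(abandon), tier_shares(skeptic)

# Positive shift: tier over-represented in the first strategy,
# expressed in percentage points (pp).
shift_pp = {tier: round((a[tier] - s[tier]) * 100, 1) for tier in a}
```

Because shares are computed within each strategy's own profiled endorsers, the shift compares composition, not headcount, so a small strategy (4 endorsers) can be meaningfully set against a large one (81).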