AGI Strategies

compare

Two strategies, side by side.

Pick any two strategies. See who endorses each, the tier mix of endorsers, the p(doom) distribution, and which people endorse both. Useful for asking whether these strategies are actually opposed, and whether the disagreement lives with builders, in policy, or in the public square.

Stance defaults to live engagement: endorses, mixed, conditional, or evolved-toward. These are people who have treated the strategy as a live bet at some point. Opposers are listed separately.

Governance first

252 endorsers · 0 oppose

Lead with regulation, treaties, liability regimes

expertise mix

  • Builds frontier systems: 5
  • Deep ML / safety technical: 10
  • Applied or adjacent technical: 2
  • Governance, policy, strategy: 28
  • Expert in another field: 8
  • Public-square commentator: 0

recognition mix

  • Mass-public recognition: 28
  • Known across the AI/safety field: 18
  • Recognised inside subfield: 7
  • Newer or less central voice: 0

profiled: 53/252
mean p(doom): 35% (n=2)
quotes: 272

AI skeptic

81 endorsers · 2 oppose

AGI risk narratives overstated; real harms are mundane and current

expertise mix

  • Builds frontier systems: 1
  • Deep ML / safety technical: 20
  • Applied or adjacent technical: 1
  • Governance, policy, strategy: 2
  • Expert in another field: 10
  • Public-square commentator: 1

recognition mix

  • Mass-public recognition: 16
  • Known across the AI/safety field: 18
  • Recognised inside subfield: 1
  • Newer or less central voice: 0

profiled: 35/81
mean p(doom): 0% (n=1)
quotes: 97

where the disagreement lives

Tier shares within profiled endorsers. Positive shift means the tier is over-represented in Governance first; negative means it's over-represented in AI skeptic.

Governance first skews these tiers

  • Policy / meta: +47pp
  • Established: +10pp
  • Household name: +7pp

AI skeptic skews these tiers

  • Deep technical: +38pp
  • Field-leading: +17pp
  • External-domain expert: +13pp
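The shift figures above can be reproduced directly from the expertise mixes: each tier's share is its count divided by the profiled total for that strategy, and the shift is the difference in percentage points. A minimal sketch, using the counts shown above (the dict and function names are illustrative, not from the site's code):

```python
# Expertise-mix counts among profiled endorsers, taken from the tables above.
gov_first = {
    "Builds frontier systems": 5,
    "Deep ML / safety technical": 10,
    "Applied or adjacent technical": 2,
    "Governance, policy, strategy": 28,
    "Expert in another field": 8,
    "Public-square commentator": 0,
}
ai_skeptic = {
    "Builds frontier systems": 1,
    "Deep ML / safety technical": 20,
    "Applied or adjacent technical": 1,
    "Governance, policy, strategy": 2,
    "Expert in another field": 10,
    "Public-square commentator": 1,
}

def tier_shifts(a: dict, b: dict) -> dict:
    """Per-tier share of profiled endorsers, a minus b, in percentage points."""
    total_a, total_b = sum(a.values()), sum(b.values())
    return {
        tier: round(100 * a[tier] / total_a - 100 * b[tier] / total_b, 1)
        for tier in a
    }

shifts = tier_shifts(gov_first, ai_skeptic)
# Positive values are tiers over-represented in Governance first:
# "Governance, policy, strategy" comes out at roughly +47pp,
# "Deep ML / safety technical" at roughly -38pp (i.e. skews AI skeptic).
```

The rounded outputs match the headline bullets: the policy tier sits near +47pp toward Governance first, and the deep-technical tier near 38pp toward AI skeptic.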
mean p(doom)

Governance first: 35% (n=2) vs AI skeptic: 0% (n=1) · Δ: +35pp