AGI Strategies

compare

Two strategies, side by side.

Pick any two strategies. See who endorses each, the tier mix of endorsers, the p(doom) distribution, and which people endorse both. Useful for asking: are these strategies actually opposed, and does the disagreement live among builders, in policy, or in the public square?

Stance defaults to live engagement: endorses, mixed, conditional, or evolved-toward. These are people who have treated the strategy as a live bet of theirs at some point. Opposers are listed separately.

EA framing

3 endorsers · 0 oppose

Explicitly EA-grounded prioritisation of existential risk

expertise mix

Builds frontier systems: 0
Deep ML / safety technical: 0
Applied or adjacent technical: 0
Governance, policy, strategy: 0
Expert in another field: 0
Public-square commentator: 0

recognition mix

Mass-public recognition: 0
Known across the AI/safety field: 0
Recognised inside subfield: 0
Newer or less central voice: 0

profiled

0/3

mean p(doom)

n/a (0 profiled)

quotes

3

AI skeptic

81 endorsers · 2 oppose

AGI risk narratives overstated; real harms are mundane and current

expertise mix

Builds frontier systems: 1
Deep ML / safety technical: 20
Applied or adjacent technical: 1
Governance, policy, strategy: 2
Expert in another field: 10
Public-square commentator: 1

recognition mix

Mass-public recognition: 16
Known across the AI/safety field: 18
Recognised inside subfield: 1
Newer or less central voice: 0

profiled

35/81

mean p(doom)

0%

n=1

quotes

97

where the disagreement lives

Tier shares within profiled endorsers. A positive shift means the tier is over-represented in EA framing; a negative shift means it's over-represented in AI skeptic. Each list below shows the tiers skewed toward that strategy, with the size of the swing in percentage points.

EA framing skews these tiers

No tier swings more than 7pp.

AI skeptic skews these tiers

  • Deep technical: +57pp
  • Field-leading: +51pp
  • Household name: +46pp
  • External-domain expert: +29pp
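The tier-shift metric above can be sketched in a few lines. This is a minimal illustration, not the site's actual code: the tier names and counts below are hypothetical, and a strategy with no profiled endorsers is treated as having 0% share in every tier.

```python
def tier_shares(counts):
    """Convert raw tier counts into shares of profiled endorsers.

    Returns an empty dict when nobody is profiled (share treated as 0 later).
    """
    total = sum(counts.values())
    return {tier: n / total for tier, n in counts.items()} if total else {}


def tier_shifts(counts_a, counts_b):
    """Share in strategy A minus share in strategy B, in percentage points.

    Positive -> over-represented in A; negative -> over-represented in B.
    """
    shares_a, shares_b = tier_shares(counts_a), tier_shares(counts_b)
    tiers = set(shares_a) | set(shares_b)
    return {
        t: round(100 * (shares_a.get(t, 0) - shares_b.get(t, 0)), 1)
        for t in tiers
    }


# Hypothetical profiled-endorser counts per expertise tier.
ea = {"deep_technical": 2, "governance": 1}
skeptic = {"deep_technical": 20, "governance": 2, "external_domain": 10}

print(tier_shifts(ea, skeptic))
```

With these made-up counts, deep technical is roughly even between the two, governance skews toward the first strategy, and external-domain experts skew toward the second; a tier listed under one strategy's heading is simply a shift whose sign favours that strategy.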