compare
Two strategies, side by side.
Pick any two strategies. See who endorses each, the tier mix of endorsers, the p(doom) distribution, and which people endorse both. Useful for asking whether these strategies are actually opposed, or whether the disagreement lives among builders, in policy, or in the public square.
Stance defaults to live engagement: endorses, mixed, conditional, or evolved-toward. These are people who have treated the strategy as a live bet of theirs at some point. Opposers are listed separately.
Race to aligned SI
14 endorsers · 0 oppose
Build aligned superintelligence first, before adversaries
expertise mix
recognition mix
profiled
9/14
mean p(doom)
18%
n=3
quotes
20
AI skeptic
81 endorsers · 2 oppose
AGI risk narratives overstated; real harms are mundane and current
expertise mix
recognition mix
profiled
35/81
mean p(doom)
0%
n=1
quotes
97
where the disagreement lives
Tier shares within profiled endorsers. Positive shift means the tier is over-represented in Race to aligned SI; negative means it's over-represented in AI skeptic.
Race to aligned SI skews these tiers
- Policy / meta: +39pp
- Household name: +32pp
- Frontier builder: +19pp
- Commentator: +19pp
AI skeptic skews these tiers
- Deep technical: +46pp
- Field-leading: +29pp
- External-domain expert: +29pp
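The shift figures above are simple percentage-point differences between each strategy's tier shares among profiled endorsers. A minimal sketch of that computation, using hypothetical tier shares (the function name and the example numbers are illustrative, not the site's actual data pipeline):

```python
def tier_shifts(shares_a, shares_b):
    """Percentage-point shift per tier between two strategies.

    shares_a / shares_b map tier name -> fraction of profiled endorsers.
    Positive result: tier is over-represented in strategy A;
    negative: over-represented in strategy B.
    """
    tiers = set(shares_a) | set(shares_b)
    return {
        t: round((shares_a.get(t, 0.0) - shares_b.get(t, 0.0)) * 100)
        for t in tiers
    }

# Hypothetical shares chosen to reproduce two of the shifts shown above.
race = {"Policy / meta": 0.44, "Deep technical": 0.11}
skeptic = {"Policy / meta": 0.05, "Deep technical": 0.57}

shifts = tier_shifts(race, skeptic)
# shifts["Policy / meta"] → 39 (skews Race to aligned SI)
# shifts["Deep technical"] → -46 (skews AI skeptic)
```

A tier missing from one side is treated as a 0% share, so every tier present in either strategy still gets a shift value.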
endorse both (0)
No one in this slice yet.
Race to aligned SI only (14)