compare
Two strategies, side by side.
Pick any two strategies to see who endorses each, the tier mix of endorsers, the p(doom) distribution, and which people endorse both. Useful for asking whether these strategies are actually opposed, or whether the disagreement lives among builders, in policy, or in the public square.
Stance defaults to live engagement: endorses, mixed, conditional, or evolved-toward. These are people who currently treat the strategy as a live bet. Opposers are listed separately.
Alignment first
102 endorsers · 0 oppose
Solve technical alignment before capability thresholds close
expertise mix
recognition mix
profiled
29/102
mean p(doom)
35%
n=3
quotes
112
AI skeptic
81 endorsers · 2 oppose
AGI risk narratives overstated; real harms are mundane and current
expertise mix
recognition mix
profiled
35/81
mean p(doom)
0%
n=1
quotes
97
where the disagreement lives
Tier shares within profiled endorsers. Positive shift means the tier is over-represented in Alignment first; negative means it's over-represented in AI skeptic.
Alignment first skews these tiers
- Established +35pp
- Frontier builder +11pp
AI skeptic skews these tiers
- Household name +22pp
- External-domain expert +18pp
- Field-leading +13pp
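The tier-shift numbers above can be computed as the difference, in percentage points, between each tier's share of profiled endorsers in the two strategies. A minimal sketch, using made-up counts rather than the site's actual data:

```python
def tier_shifts(a: dict[str, int], b: dict[str, int]) -> dict[str, float]:
    """Percentage-point shift of each tier's share between two endorser groups.

    Positive means the tier is over-represented in group `a`;
    negative means it is over-represented in group `b`.
    """
    total_a, total_b = sum(a.values()), sum(b.values())
    tiers = set(a) | set(b)
    return {
        t: 100 * a.get(t, 0) / total_a - 100 * b.get(t, 0) / total_b
        for t in tiers
    }

# Hypothetical tier counts among profiled endorsers (illustrative only)
alignment_first = {"Established": 3, "Household name": 1}
ai_skeptic = {"Established": 1, "Household name": 3}
shifts = tier_shifts(alignment_first, ai_skeptic)
# "Established" is 75% of one group and 25% of the other: a +50pp shift
```

A positive value for a tier means that tier makes up a larger share of the first strategy's profiled endorsers than of the second's.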
endorse both (0)
No one in this slice yet.
Alignment first only (102)