compare
Two strategies, side by side. Pick any two strategies. See who endorses each, the tier mix of endorsers, the p(doom) distribution, and which people endorse both. Useful for asking: are these strategies actually opposed, or does this disagreement live in builders, in policy, or in the public square?
Stance defaults to live engagement: endorses, mixed, conditional, or evolved-toward. These are people who treat the strategy as a live bet of theirs. Opposers are listed separately.
strategy A: AGI risk narratives overstated; real harms are mundane and current
strategy B: AGI risk narratives overstated; real harms are mundane and current
Both pickers are set to the same strategy. The contrast view needs two different strategies to be useful.
AGI risk narratives overstated; real harms are mundane and current

expertise mix
Deep ML / safety technical: 20
Applied or adjacent technical: 1
Governance, policy, strategy: 2
Expert in another field: 10
Public-square commentator: 1

recognition mix
Mass-public recognition: 16
Known across the AI/safety field: 18
Recognised inside subfield: 1
Newer or less central voice: 0
AI skeptic only (0)
No one in this slice yet.