Strategy tags.
A tag is a handle for a recurring strategic claim. Tags are inductive: if the corpus does not carry an argument for a tag, the tag does not exist here yet. Tags with one or two adherents may merge or split as data grows.
strategy tags: 46
tags with endorsers: 41 (5 unused so far)
crowded (≥30 endorsers): 8
contested (≥25% of stances oppose): 1
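Two of the counters above are threshold-defined: crowded means ≥30 endorsers, and contested means ≥25% of a tag's stances oppose it. Below is a minimal sketch of that bookkeeping in Python; the record shape, field names, and method names are illustrative assumptions, not the project's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Tag:
    """One strategy tag as catalogued on this page (fields are illustrative)."""
    name: str
    description: str
    endorse: int = 0     # ↑ endorsing stances
    tentative: int = 0   # ~ tentative endorsements (counted within endorse here)
    oppose: int = 0      # ↓ opposing stances
    aliases: list[str] = field(default_factory=list)

    @property
    def stances(self) -> int:
        # headline count shown next to each tag name
        return self.endorse + self.oppose

    def is_crowded(self) -> bool:
        # counter definition above: ≥30 endorsers
        return self.endorse >= 30

    def is_contested(self) -> bool:
        # counter definition above: ≥25% of stances oppose
        return self.stances > 0 and self.oppose / self.stances >= 0.25


# The one contested tag in the current corpus, as listed further down:
abandon_si = Tag(
    name="Abandon superintelligence",
    description="Reject superintelligence as a goal entirely; narrow AI only",
    endorse=4,
    oppose=2,
)
assert abandon_si.is_contested() and not abandon_si.is_crowded()
```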
Most adhered to.
11 tags · ≥20 endorsers
The 11 strategies that show up most across the corpus. Read these first to map the discourse.

Governance first · 252
Lead with regulation, treaties, liability regimes
↑ 252 endorse · p̄ 35% (n=2)

Alignment first · 103
Solve technical alignment before capability thresholds close
↑ 103 endorse · ~ 1 tentative · p̄ 35% (n=3)

Techno-optimism · 96
Technology and markets solve risks faster than regulation creates them
↑ 96 endorse · ~ 49 tentative · p̄ 20% (n=1)

AI skeptic · 83
AGI risk narratives overstated; real harms are mundane and current
↑ 81 endorse · ↓ 2 oppose · p̄ 0% (n=1)

Existential primacy · 76
Extinction/disempowerment risk overrides ordinary cost-benefit
↑ 76 endorse · p̄ 28% (n=11)

Evals-driven · 46
Capability/risk evals gate deployment; evals are the load-bearing artefact
↑ 46 endorse · p̄ 80% (n=1)

Open source · 37
Release weights widely; transparency beats closed safety
also: open weights
↑ 37 endorse · p̄ 25% (n=2)

Near-term harms first · 36
Documented present harms outweigh speculative existential narratives
also: AI ethics
↑ 36 endorse

Acceleration · 29
Build faster; delay costs more than capability
also: e/acc, effective accelerationism
↑ 29 endorse · ~ 15 tentative · p̄ 5% (n=1)

Pause · 23
Halt frontier training until alignment catches up
also: moratorium, stop-ai
↑ 23 endorse · p̄ 50% (n=9)

AI welfare · 21
Model welfare/moral status is a primary consideration
↑ 21 endorse
Established positions.
13 tags · 5–19 endorsers

International treaty · 18
Arms-control-style treaty on frontier training or deployment
also: arms control
↑ 18 endorse

Interpretability bet · 15
Mechanistic interpretability is necessary and sufficient to know models are safe
↑ 14 endorse · ↓ 1 oppose

Antitrust primacy · 15
Break concentration via competition law
↑ 15 endorse · p̄ 15% (n=1)

Race to aligned SI · 14
Build aligned superintelligence first, before adversaries
↑ 14 endorse · p̄ 18% (n=3)

Compute governance · 12
Control FLOPs via export controls, licensing, reporting
↑ 12 endorse

Sovereign AI · 12
Nation states must build their own AI for sovereignty
↑ 12 endorse

Democratic mandate · 8
Decisions about AI must come through democratic processes
↑ 8 endorse

RSP-style commitments · 8
Responsible scaling policies; labs commit to capability-tied safety
↑ 8 endorse · p̄ 18% (n=1)

Cooperative AI · 6
Invest in AI-AI and AI-human cooperation capacities
↑ 6 endorse

Abandon superintelligence · 6
Reject superintelligence as a goal entirely; narrow AI only
↑ 4 endorse · ↓ 2 oppose · p̄ 100% (n=1) · CONTESTED

Open-endedness · 6
Build AI via open-ended self-generated curricula; safety must follow from the dynamics
also: autocurricula
↑ 6 endorse

Security mindset · 6
Treat safety as adversarial security; assume systems break under attack
↑ 6 endorse

Differential technology · 5
Preferentially develop protective technology over dangerous
↑ 5 endorse · p̄ 10% (n=1)
Emerging or niche.
17 tags · 1–4 endorsers
Strategies the corpus only weakly carries. Some are early signals; some will fold into a neighbour as data grows.
Distributed builders · 4
Keep many independent actors; concentration is the bigger risk
↑ 4 endorse

Public AI · 4
State-run or public-option AI as a check on private concentration
↑ 4 endorse

Centralised project · 3
Merge frontier development into one state-led project
also: CERN for AI, Manhattan Project for AI
↑ 3 endorse

Liability-driven safety · 3
Make labs financially liable for harms; markets handle the rest
↑ 3 endorse

Long reflection · 3
Use post-AGI stability for extended moral deliberation before locking in
↑ 3 endorse

Military primacy · 3
National security framing; AI as a strategic weapon
↑ 3 endorse

EA framing · 3
Explicitly EA-grounded prioritisation of existential risk
↑ 3 endorse

Closed weights · 2
Keep frontier weights closed; treat them as hazardous artefacts
↑ 2 endorse

Digital minds · 2
Mind-uploading or digital people as strategic horizon
↑ 2 endorse

Constitutional AI · 2
Principles-based training for value alignment
↑ 2 endorse

Scalable oversight · 2
Human or human+AI oversight scales past human expertise
↑ 2 endorse

Cyborg/merge · 2
Brain-computer interfaces; humans must merge to keep up
↑ 2 endorse

Agent foundations · 2
Reformulate decision theory and embedded agency before behavioural training can be trusted
also: embedded agency
↑ 2 endorse

Moral circle expansion · 2
Treat AGIs as people whose creation extends rather than threatens humanity
↑ 2 endorse

Narrow AI preservation · 1
Preserve narrow/task-specific AI; don't build general agents
↑ 1 endorse

Hardware killswitch · 1
On-chip verification and remote off-switches for frontier compute
↑ 1 endorse

Multi-agent equilibrium · 1
Many AIs checking each other is the safety mechanism
↑ 1 endorse
Catalogued, no endorsers yet.
5 tags
Tags carried by the framework but not yet matched to a named position in the corpus.
Resilience first · 0
Harden institutions and epistemic infrastructure against shocks

Capability ceiling · 0
Cap maximum capability of deployed systems

Conditional pause · 0
Pause at a capability trigger, not a date

AI for safety · 0
Use AI itself to solve alignment and safety research

World government · 0
Only a singleton authority can stably govern AI
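The four groupings above band tags by count: ≥20, 5–19, 1–4, and 0. A minimal sketch of that banding, reusing the illustrative Tag record from the earlier example; banding on the headline stance count is an assumption (the group labels say "endorsers", but where the two numbers differ, as with the contested tag listed above at 4 endorsements across 6 stances, the headline count matches the grouping).

```python
def band(tag: Tag) -> str:
    """Assign a tag to one of the four groupings used on this page."""
    n = tag.stances  # assumption: band on the headline count shown next to the name
    if n >= 20:
        return "Most adhered to"
    if n >= 5:
        return "Established positions"
    if n >= 1:
        return "Emerging or niche"
    return "Catalogued, no endorsers yet"
```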