strategy tag
Race to aligned SI.
Build aligned superintelligence first, before adversaries
stated endorsers
14
no opposers yet
profiled endorsers
9
248 on the board total
endorser mean p(doom)
18%
n=3 · median 18%
quotes by endorsers
15
just for this tag
principal voices
Highest-recognition profiled endorsers, with ties broken by quote count. Inclusion is not endorsement of the position; it's recognition of who the discourse turns to when the bet is debated.
Dario Amodei · Household name
Ilya Sutskever · Household name
Elon Musk · Household name
Eric Schmidt · Household name
Alex Karp · Household name
where the endorsers sit on the board
9 of 248 profiled · 4% of the board
| expertise ↓ · recognition → | Household name | Field-leading | Established | Emerging |
|---|---|---|---|---|
| Frontier builder | · | · | · | |
| Deep technical | · | · | · | |
| Applied technical | · | · | · | · |
| Policy / meta | · | · | | |
| External-domain expert | · | · | · | · |
| Commentator | · | · | · | |
Each face is one profiled person. Cell shade intensifies with endorser density. Faces with × are profiled opposers, same tier, opposite position. Empty cells mark tier combinations the field has not produced for this bet.
also held by these endorsers
What other strategies the same people endorse. Behavioural signal of compatibility, not a declared rule. A high share means the two positions are routinely held together.
Compare this list to the declared relations matrix. Where they differ, the data reveals a pairing the framework doesn't name yet; the global co-endorsement view ranks all pairs.
Tier mix counts only endorsers (endorses, mixed, conditional, evolved-toward).
expertise mix of endorsers · 9 profiled of 14
recognition mix of endorsers
vintage mix · n=9 of 9 profiled with era assigned
Vintage is the era when this person's AI worldview formed, from pioneer through post-ChatGPT. A bet held mostly by post-ChatGPT entrants is in a different epistemic state from one held by pre-deep-learning veterans.
People on the record.
14
Alex Karp
CEO of Palantir
Argues US AI supremacy is a national-security imperative; Palantir is positioned around this framing.
If we don't build the most powerful AI in the West, China will.

Alex Wang
Founder of Scale AI; data infrastructure for frontier models
Publicly frames US-China AI competition as the decisive strategic framing and advocates building Western frontier AI quickly.
We are in an AI war with China. We cannot afford to lose.
Carl Shulman
Open Phil senior research analyst; AGI takeoff economics
Argues a fast software-driven takeoff is plausible, that aligned AI labs racing ahead of unaligned ones is one of the load-bearing strategies, and that the economics of compute will dominate political reactions.
If you have AGI which can do most cognitive work, you very rapidly get superintelligence. The compounding from AI doing AI research is enormous and historically unprecedented.
Daniel Eth
Foresight Institute alignment researcher
Argues a race to aligned superintelligence is reluctantly the right framing; the alternatives (paralysis or a unilateral pause) play into the hands of less-safety-oriented developers.
Pausing unilaterally just hands the lead to actors with less interest in safety. The right strategy is to race carefully, with the strongest safety practices we can sustain.

Dario Amodei
CEO of Anthropic; 'Machines of Loving Grace' author
Runs a frontier lab on the stated theory that safety-focused actors must be at the frontier; publicly acknowledges the 'we are pushing what we fear' tension.
Powerful AI could appear as early as 2026.

Elon Musk
CEO of Tesla and xAI; co-founded OpenAI
Simultaneously advocates for pause and runs xAI; frames xAI as a 'maximally truth-seeking' safety-differentiated frontier lab.
xAI is building AI to understand the universe.

Eric Schmidt
Former Google CEO; AI national security advocate
Frames AI development as a national-security competition with China; advocates for government-industry partnership.
We are in a technology race with China, one the US must win.

Ilya Sutskever
OpenAI co-founder; now CEO of Safe Superintelligence Inc (SSI)
Founded SSI on the explicit thesis that building safe superintelligence is one technical problem to be solved in a single push, insulated from commercial product pressure.
We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.

Jakub Pachocki
OpenAI Chief Scientist (since 2024)
Argues OpenAI's mission of ensuring AGI benefits humanity requires being at the frontier; frames scaling as the path to superintelligence and safety as integral to that path.
We're confident in our ability to deliver on our mission, and Jakub will lead our research as we continue to push the frontier of AI.
Context: Sam Altman's announcement of Pachocki's elevation following Sutskever's departure.
Kara Frederick
Heritage Foundation tech policy director
Frames AI policy from a conservative national-security lens; argues US must out-compete China and limit Big Tech-state collusion.
AI is the central technological battle of our era, and the US is not winning it as decisively as we should be.

Leopold Aschenbrenner
Author of 'Situational Awareness'; former OpenAI Superalignment team
Argues liberal democracies must reach transformative AI first; advocates a government-led Manhattan-scale AGI project for strategic reasons.
AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from preschooler to smart high-schooler abilities in 4 years.
By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word.
Mark Chen
OpenAI Chief Research Officer
Argues OpenAI's mission requires being at the frontier of capabilities; oversees the research organization and emphasizes deployment-coupled safety practice.
We evaluate Codex on a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings.

Palmer Luckey
Founder of Anduril; defense AI builder
Argues Western AI-enabled defense capacity is essential; has publicly criticized AI-safety-focused hiring restrictions among frontier labs.
If we don't build AI weapons, our adversaries will, and we will lose.

Xi Jinping
President of China; AI as national strategic priority
China's national AI strategy frames AI as central to economic and military power and targets first-tier global AI leadership by 2030.
AI is a strategic technology that will lead a new round of scientific and technological revolution and industrial transformation.
Context: Quoted in China's New Generation AI Development Plan rollout speeches.