AGI Strategies

strategy tag

Race to aligned SI.

Build aligned superintelligence first, before adversaries

stated endorsers

14

no opposers yet

profiled endorsers

9

248 on the board total

endorser mean p(doom)

18%

n=3 · median 18%

quotes by endorsers

15

just for this tag

principal voices

Highest-recognition profiled endorsers, with ties broken by quote count. Inclusion is not endorsement of the position; it's recognition of who the discourse turns to when the bet is debated.

  • Dario Amodei

    Household name

  • Ilya Sutskever

    Household name

  • Elon Musk

    Household name

  • Eric Schmidt

    Household name

  • Alex Karp

    Household name

where the endorsers sit on the board

9 of 248 profiled · 4% of the board

expertise ↓ · recognition → · columns: Household name · Field-leading · Established · Emerging

Frontier builder · Household name
  • Dario Amodei
  • Ilya Sutskever
Deep technical · Field-leading
  • Leopold Aschenbrenner
Applied technical · (no profiled endorsers)
Policy / meta · Household name
  • Eric Schmidt
  • Alex Karp
  • Xi Jinping
Policy / meta · Field-leading
  • Alex Wang
External-domain expert · (no profiled endorsers)
Commentator · Household name
  • Elon Musk
  • Palmer Luckey

Each face is one profiled person. Cell shade intensifies with endorser density. Faces marked × are profiled opposers: same tier, opposite position. Empty cells mark tier combinations the field has not produced for this bet.

also held by these endorsers

What other strategies the same people endorse. Behavioural signal of compatibility, not a declared rule. A high share means the two positions are routinely held together.

Compare this list to the declared relations matrix. Where they differ, the data reveals a pairing the framework doesn't name yet; the global co-endorsement view ranks all pairs.

Tier mix counts only endorsers (endorses, mixed, conditional, evolved-toward).

expertise mix of endorsers · 9 profiled of 14

Builds frontier systems
2
Deep ML / safety technical
1
Applied or adjacent technical
0
Governance, policy, strategy
4
Expert in another field
0
Public-square commentator
2

recognition mix of endorsers

Mass-public recognition
7
Known across the AI/safety field
2
Recognised inside subfield
0
Newer or less central voice
0

vintage mix · n=9 of 9 profiled with era assigned

Pioneer
0
Symbolic era
0
Pre-deep-learning
0
Deep-learning rise
5
Scaling era
3
Post-ChatGPT
1

Vintage is the era when this person's AI worldview formed, pioneer through post-ChatGPT. A bet held mostly by post-ChatGPT entrants is in a different epistemic state from one held by pre-deep-learning veterans.

People on the record.

14
Alex Karp

CEO of Palantir

endorses

Argues US AI supremacy is a national-security imperative; Palantir is positioned around this framing.

If we don't build the most powerful AI in the West, China will.
articleAlex Karp on Palantir and defense AI· Palantir· 2024· loose paraphrase
Alex Wang

Founder of Scale AI; data infrastructure for frontier models

endorses

Publicly frames US-China AI competition as the decisive strategic framing and advocates building Western frontier AI quickly.

We are in an AI war with China. We cannot afford to lose.
testimonyAlex Wang testimony on AI and national security· US Senate Armed Services Committee· 2024· faithful paraphrase

Carl Shulman

Open Phil senior research analyst; AGI takeoff economics

endorses

Argues a fast software-driven takeoff is plausible, that aligned AI labs racing ahead of unaligned ones is one of the load-bearing strategies, and that the economics of compute will dominate political reactions.

If you have AGI which can do most cognitive work, you very rapidly get superintelligence. The compounding from AI doing AI research is enormous and historically unprecedented.
podcastCarl Shulman on AI takeoff and economic feedback loops· Dwarkesh Podcast· 2023-06· faithful paraphrase

Daniel Eth

Foresight Institute alignment researcher

endorses

Argues a race to aligned superintelligence is reluctantly the right framing; the alternative, paralysis or unilateral pause, plays into the hands of less-safety-oriented developers.

Pausing unilaterally just hands the lead to actors with less interest in safety. The right strategy is to race carefully, with the strongest safety practices we can sustain.
blogDaniel Eth, LessWrong· LessWrong· 2023· faithful paraphrase
Dario Amodei

CEO of Anthropic; 'Machines of Loving Grace' author

mixed

Runs a frontier lab on the stated theory that safety-focused actors must be at the frontier; publicly acknowledges the 'we are pushing what we fear' tension.

Powerful AI could appear as early as 2026.
blogMachines of Loving Grace· darioamodei.com· 2024-10-11· faithful paraphrase
Elon Musk

CEO of Tesla and xAI; co-founded OpenAI

endorses

Simultaneously advocates for pause and runs xAI; frames xAI as a 'maximally truth-seeking' safety-differentiated frontier lab.

xAI is building AI to understand the universe.
articlexAI launch announcement· xAI· 2023-07-12· faithful paraphrase
Eric Schmidt

Former Google CEO; AI national security advocate

endorses

Frames AI development as a national-security competition with China; advocates for government-industry partnership.

We are in a technology race with China, one the US must win.
paperNational Security Commission on Artificial Intelligence Final Report· NSCAI· 2021· faithful paraphrase
Ilya Sutskever

OpenAI co-founder; now CEO of Safe Superintelligence Inc (SSI)

endorses

Founded SSI on the explicit thesis that building safe superintelligence is one technical problem to be solved in a single push, insulated from commercial product pressure.

We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.
articleSafe Superintelligence Inc. launch announcement· Safe Superintelligence Inc· 2024-06-19· faithful paraphrase
Jakub Pachocki

OpenAI Chief Scientist (since 2024)

endorses

Argues OpenAI's mission of ensuring AGI benefits humanity requires being at the frontier; frames scaling as the path to superintelligence and safety as integral to that path.

We're confident in our ability to deliver on our mission, and Jakub will lead our research as we continue to push the frontier of AI.

Context: Sam Altman's announcement of Pachocki's elevation following Sutskever's departure.

articleOpenAI announces leadership changes· OpenAI· 2024-05-15· faithful paraphrase

Kara Frederick

Heritage Foundation tech policy director

endorses

Frames AI policy from a conservative national-security lens; argues US must out-compete China and limit Big Tech-state collusion.

AI is the central technological battle of our era, and the US is not winning it as decisively as we should be.
articleHeritage Foundation Tech Policy· Heritage Foundation· 2024· loose paraphrase
Leopold Aschenbrenner

Author of 'Situational Awareness'; former OpenAI Superalignment team

endorses

Argues liberal democracies must reach transformative AI first; advocates a government-led Manhattan-scale AGI project for strategic reasons.

AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from preschooler to smart high-schooler abilities in 4 years.
blogSituational Awareness: The Decade Ahead· For Our Posterity· 2024-06· faithful paraphrase
“By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word.”
blogSituational Awareness: The Decade Ahead· For Our Posterity· 2024-06· direct quote

Mark Chen

OpenAI Chief Research Officer

endorses

Argues OpenAI's mission requires being at the frontier of capabilities; oversees the research organization and emphasizes deployment-coupled safety practice.

“We evaluate Codex on a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings.”
paperEvaluating Large Language Models Trained on Code· arXiv / OpenAI· 2021-07· direct quote
Palmer Luckey

Founder of Anduril; defense AI builder

endorses

Argues Western AI-enabled defense capacity is essential; has publicly criticized AI-safety-focused hiring restrictions among frontier labs.

If we don't build AI weapons, our adversaries will, and we will lose.
articleAnduril Industries· Anduril· 2024· loose paraphrase
Xi Jinping

President of China; AI as national strategic priority

endorses

China's national AI strategy frames AI as central to economic and military power, and it targets first-tier global AI leadership by 2030.

AI is a strategic technology that will lead a new round of scientific and technological revolution and industrial transformation.

Context: Quoted in China's New Generation AI Development Plan rollout speeches.

articleNew Generation AI Development Plan· State Council of China· 2018· faithful paraphrase