AGI Strategies

strategy tag

Pause.

Halt frontier training until alignment catches up

also known as: moratorium, stop-ai

stated endorsers

23

no opposers yet

profiled endorsers

20

248 on the board total

endorser mean p(doom)

50%

n=9 · median 50%

quotes by endorsers

29

just for this tag

principal voices

Highest-recognition profiled endorsers, with ties broken by quote count. Inclusion is not an endorsement of the position; it marks who the discourse turns to when the bet is debated.

  • Geoffrey Hinton

    Household name

  • Eliezer Yudkowsky

    Household name

  • Elon Musk

    Household name

  • Tristan Harris

    Household name

  • Max Tegmark

    Household name

where the endorsers sit on the board

20 of 248 profiled · 8% of the board

expertise ↓ · recognition → Household name · Field-leading · Established · Emerging (within each row, names run from the highest-recognition column to the lowest)

  • Frontier builder: none
  • Deep technical: Geoffrey Hinton, Eliezer Yudkowsky, Connor Leahy, Nate Soares
  • Applied technical: Liron Shapira
  • Policy / meta: Tristan Harris, Jaan Tallinn, Aza Raskin
  • External-domain expert: Max Tegmark, Yuval Noah Harari, Anthony Aguirre, Geoffrey Miller
  • Commentator: Elon Musk, Emmett Shear, Steve Wozniak, Emad Mostaque, Liv Boeree, Zvi Mowshowitz

Each name is one profiled person. In the grid view, cell shade intensifies with endorser density, faces marked × are profiled opposers (same tier, opposite position), and empty cells mark tier combinations the field has not produced for this bet.

also held by these endorsers

What other strategies the same people endorse. Behavioural signal of compatibility, not a declared rule. A high share means the two positions are routinely held together.

Compare this list to the declared relations matrix. Where they differ, the data reveals a pairing the framework doesn't name yet; the global co-endorsement view ranks all pairs.

Tier mix counts only endorsers (endorses, mixed, conditional, evolved-toward).

expertise mix of endorsers · 20 profiled of 23

Builds frontier systems
0
Deep ML / safety technical
4
Applied or adjacent technical
1
Governance, policy, strategy
5
Expert in another field
4
Public-square commentator
6

recognition mix of endorsers

Mass-public recognition
8
Known across the AI/safety field
9
Recognised inside subfield
3
Newer or less central voice
0

vintage mix · n=20 of 20 profiled with era assigned

Pioneer
1
Symbolic era
1
Pre-deep-learning
5
Deep-learning rise
1
Scaling era
5
Post-ChatGPT
7

Vintage is the era when this person's AI worldview formed, from pioneer through post-ChatGPT. A bet held mostly by post-ChatGPT entrants is in a different epistemic state from one held by pre-deep-learning veterans.

People on the record.

23

Andrea Miotti

Founder of ControlAI; pause campaigner

endorses

Publicly campaigns for a prohibition on superintelligence development; drafts legislative proposals for licensing compute above 10^25 FLOP.

Training runs above 10^25 FLOP should require a license; license applications should detail capabilities, risk management, and safety protocols.
testimony · Written evidence submitted by Andrea Miotti and Steven Adler · UK Parliament · 2023-10 · faithful paraphrase

Anthony Aguirre

UC Santa Cruz physicist; FLI co-founder

endorses

Steers FLI's policy work; co-authored the Pause letter and has called for a conditional moratorium tied to capability thresholds.

We don't want to stop all AI, we want to stop the reckless training of giant, dangerous, unaligned systems.
blog · FLI AI safety policy · Future of Life Institute · 2023 · faithful paraphrase

Aza Raskin

Co-founder of the Center for Humane Technology; Earth Species Project

endorses

Argues the pace of AI deployment currently exceeds institutional capacity to absorb it.

When a new technology is released faster than the institutions that would wisely govern it, you get a governance crisis.
video · The A.I. Dilemma · Center for Humane Technology · 2023-03-09 · faithful paraphrase

Connor Leahy

CEO of Conjecture; EleutherAI co-founder turned AI safety hawk

endorses

Argues for 'a moratorium on frontier AI runs' implemented through a cap on compute, enforced internationally.

“If they just get more and more powerful, without getting more controllable, we are super, super fucked. And by 'we' I mean all of us.”
article · AI will leave us 'super fucked', says Conjecture's Connor Leahy · Sifted · 2023-04 · direct quote
If you build systems that are more capable than humans at manipulation, business, politics, science and everything else, and we do not control them, then the future belongs to them, not us.

Context: Commentary around the Bletchley Park AI Safety Summit.

tweet · CEO Connor Leahy attended the AI Safety Summit · Conjecture (official) · 2023-11 · faithful paraphrase

Daniel Kokotajlo

Former OpenAI governance team member; author of AI 2027 scenario

endorses

Publicly urged OpenAI to change course and has endorsed stronger regulatory constraints on frontier training.

I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence.

Context: Statement to the New York Times on why he resigned from OpenAI.

article · OpenAI Insiders Warn of 'Reckless' Race for Dominance · The New York Times · 2024-06-04 · faithful paraphrase

Eliezer Yudkowsky

Founder of MIRI; the original AI-extinction pessimist

endorses

Wants an unconditional moratorium on frontier training, enforced internationally, with explicit willingness to destroy rogue data centres by airstrike.

“The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”
article · Pausing AI Developments Isn't Enough. We Need to Shut it All Down · TIME · 2023-03-29 · direct quote
“Shut it all down. Shut down all the large GPU clusters. Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system.”
article · Pausing AI Developments Isn't Enough. We Need to Shut it All Down · TIME · 2023-03-29 · direct quote
I think that humanity is on track to be killed.

Context: Three-plus-hour interview on the Lex Fridman Podcast #368.

video · Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization · Lex Fridman Podcast · 2023-03-30 · faithful paraphrase

Elon Musk

CEO of Tesla and xAI; co-founded OpenAI

endorses

Signed the March 2023 Pause Giant AI Experiments open letter; has repeatedly called for regulatory oversight.

“With artificial intelligence we are summoning the demon.”

Context: MIT AeroAstro centennial symposium.

article · Elon Musk warns 'we are summoning the demon' with artificial intelligence · The Washington Post · 2014-10 · direct quote

Emad Mostaque

Former CEO of Stability AI; open-source frontier advocate

endorses

Signed the FLI Pause Giant AI Experiments letter.

I am a signatory of the Pause letter because I believe coordination is necessary.
article · Pause Giant AI Experiments signatories · Future of Life Institute · 2023-03-29 · faithful paraphrase

Emmett Shear

Former interim CEO of OpenAI; Twitch co-founder

mixed

Has advocated for slowing down frontier development; describes high but uncertain p(doom).

My p(doom) is somewhere between 5 and 50 percent. I genuinely don't know.
video · Emmett Shear on AI risk · YouTube · 2023-09 · faithful paraphrase

Fynn Heide

AI safety engineer; PauseAI Europe

endorses

Active organiser of PauseAI's street-level campaigns and public demonstrations.

Pause is the only policy response that scales with the risk.
article · PauseAI · PauseAI · 2024 · loose paraphrase

Geoffrey Hinton

Godfather of deep learning; left Google in 2023 to speak about AI risk

mixed

Has expressed sympathy for slowing development but stops short of endorsing a full moratorium; frames the risk as primarily about losing control and about bad-actor misuse.

If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us.

Context: CBS 60 Minutes interview with Scott Pelley, the most-watched mainstream coverage of Hinton's position.

video · Godfather of AI Geoffrey Hinton: The 60 Minutes Interview · CBS 60 Minutes · 2023-10-08 · faithful paraphrase
“It is hard to see how you can prevent the bad actors from using it for bad things.”

Context: Interview with the New York Times announcing his departure from Google so he could speak freely about AI dangers.

article · Geoffrey Hinton: AI pioneer quits Google to warn about the technology's 'dangers' · CNN Business · 2023-05-01 · direct quote
“I left so that I could talk about the dangers of AI without considering how this impacts Google.”
article · Deep learning pioneer Geoffrey Hinton quits Google · MIT Technology Review · 2023-05-01 · direct quote

Geoffrey Miller

UNM evolutionary psychologist; AGI pause advocate

endorses

Publicly advocates for a moratorium on advanced AI; characterises current AGI pursuit as 'reckless and dangerous and evil and stupid'.

Continued pursuit of AGI capabilities is reckless and dangerous and evil and stupid.
podcast · Top Professor Condemns AGI Development: 'It's Frankly Evil' · Doom Debates (Liron Shapira) · 2024 · faithful paraphrase

Holly Elmore

PauseAI US executive director

endorses

Argues that the only responsible policy given current uncertainty is a global pause on frontier-model training, enforced by treaty if necessary.

We are accelerating toward a technology nobody knows how to control. A pause is the minimum reasonable response while we figure that out.
article · PauseAI US · PauseAI · 2023 · faithful paraphrase

Jaan Tallinn

Skype co-founder; AI safety funder and advocate

endorses

Signed the 2023 FLI Pause Giant AI Experiments letter.

I am signing the pause letter.
article · Pause Giant AI Experiments: signatories · Future of Life Institute · 2023-03-22 · summary

Liron Shapira

Startup founder; Doom Debates podcast host

endorses

Publicly advocates for a pause or slowdown on frontier training.

My p(doom) is 50% and I think a pause is the only sensible policy.
video · Doom Debates · YouTube · 2023-11 · faithful paraphrase

Liv Boeree

Poker player; Win-Win podcast host

endorses

Frames the AI race as a textbook Moloch trap and calls for coordinated slowdowns.

The AI race is a textbook Moloch problem: individually rational actors produce a collectively catastrophic outcome.
video · Win-Win Podcast · YouTube · 2023 · faithful paraphrase

Max Tegmark

Physicist; co-founder and president of the Future of Life Institute

endorses

Public face of the Pause Giant AI Experiments letter calling for a six-month moratorium on systems more powerful than GPT-4.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.”

Context: Opening paragraph of the Pause Giant AI Experiments open letter; Tegmark's FLI published it.

article · Pause Giant AI Experiments: An Open Letter · Future of Life Institute · 2023-03-22 · direct quote

Nate Soares

President of MIRI; co-author of 'If Anyone Builds It, Everyone Dies'

endorses

Argues the only sane response to current AI development is an unconditional global halt until alignment is solved.

Whichever external behaviors we set for AIs during training, we will almost certainly fail to give them internal drives that remain aligned with human well-being outside the training environment.
book · If Anyone Builds It, Everyone Dies · Little, Brown and Company · 2025-09-16 · faithful paraphrase

Rob Bensinger

MIRI communications lead

endorses

Publicly supports MIRI's argument for an unconditional halt on frontier training.

If we can't solve alignment, we shouldn't build the systems we can't align.
blog · MIRI blog · MIRI · 2023 · loose paraphrase

Steve Wozniak

Apple co-founder; Pause letter signatory

endorses

Signed the Pause Giant AI Experiments letter; publicly explained his concern is primarily about misuse.

I'm not afraid of large language models themselves. I'm afraid of people using them for bad things.
article · Apple's Steve Wozniak warns A.I. could be used by 'evil people' after signing letter with Tesla's Elon Musk · Fortune · 2023-05-03 · faithful paraphrase

Tristan Harris

Co-founder of the Center for Humane Technology; 'The AI Dilemma'

endorses

Argues there is a gap between what CEOs say publicly and what AI-lab insiders say privately about risk; has called for slowing deployment to match governance capacity.

“No matter how high the skyscraper of benefits that AI assembles, if it can also be used to undermine the foundation of society upon which that skyscraper depends, it won't matter how many benefits there are.”
talk · Tristan Harris at the AI for Good Global Summit: The AI Dilemma · AI for Good · 2023 · direct quote
50% of AI researchers believe there's a 10% or greater chance humans go extinct from our inability to control AI.

Context: Slide quoted in The AI Dilemma presentation.

video · The A.I. Dilemma, March 9, 2023 · Center for Humane Technology · 2023-03-09 · faithful paraphrase

Yuval Noah Harari

Historian; author of Sapiens and Nexus

endorses

Signed the FLI Pause letter and has publicly called for a six-month moratorium on advanced AI development.

“AI has thereby hacked the operating system of our civilisation.”
article · Yuval Noah Harari argues that AI has hacked the operating system of human civilisation · The Economist · 2023-04-28 · direct quote

Zvi Mowshowitz

Don't Worry About The Vase; weekly AI newsletter

endorses

Public supporter of pause-style interventions; writes exhaustively on AI policy and industry dynamics.

“p(doom) 60%.”
tweet · p(doom) tweet thread · X/Twitter · 2023-11-28 · direct quote