AGI Strategies

person

Eliezer Yudkowsky


Co-founder of MIRI; the original AI-extinction pessimist

Research fellow who has spent two decades arguing that default paths to superintelligence kill everyone, and that the only sane response is an unconditional international halt to frontier training. His March 2023 TIME op-ed moved 'shut it down' from a fringe position into mainstream public debate.

Current: Research Fellow and co-founder, Machine Intelligence Research Institute (MIRI)

Profile

expertise

Deep technical

Sustained peer-reviewed contribution to ML, alignment, interpretability, or safety techniques. Could review a frontier paper.

Founded MIRI; originated or popularised much of the agent-foundations alignment vocabulary (orthogonality, instrumental convergence, mesa-optimisation framing). The Sequences and HPMOR are widely read foundational texts in the rationalist/safety community. Not a frontier ML researcher, but technically deep on alignment theory.

recognition

Household name

Name recognition outside the AI/CS community: featured by mainstream press, has a Wikipedia page in many languages, has published a bestseller, or holds a position the lay public knows.

TIME op-ed (March 2023) calling for an indefinite, internationally enforced halt to frontier training. 60 Minutes, NYT profiles. Name recognised well beyond the AI community.

vintage

Symbolic era

Career started in the GOFAI / expert-systems / early-rationalist period: Vinge's 1993 Singularity essay, MIRI founded in 2000, early Bostrom and Yudkowsky writing.

Founded the Singularity Institute (later MIRI) in 2000; wrote the Sequences 2006–2009. His framing predates deep learning; he engages it from a 2000s rationalist vantage point.

Hand-classified. See the board for the criteria and the full grid.

p(doom)

  • 95% (2023)

    Definition used: Probability that AI wipes out humanity; Yudkowsky has repeatedly said >95%, sometimes framed as 99%.

    PauseAI aggregated p(doom) list · PauseAI

Strategy positions

Pause · endorses

Halt frontier training until alignment catches up

Calls for an unconditional, internationally enforced moratorium on frontier training, and is explicitly willing to see rogue data centres destroyed by airstrike.

“The most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”
Article · Pausing AI Developments Isn't Enough. We Need to Shut it All Down · TIME · 2023-03-29 · direct quote
“Shut it all down. Shut down all the large GPU clusters. Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system.”
Article · Pausing AI Developments Isn't Enough. We Need to Shut it All Down · TIME · 2023-03-29 · direct quote
I think that humanity is on track to be killed.

Context: Three-plus-hour interview on the Lex Fridman Podcast #368.

Video · Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization · Lex Fridman Podcast · 2023-03-30 · faithful paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Eliezer Yudkowsky's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags. A minimal sketch of the computation appears after the list.

  • Andrea Miotti

    shared 1 · J=1.00

    Founder of ControlAI; pause campaigner

  • Anthony Aguirre


    shared 1 · J=1.00

    UC Santa Cruz physicist; FLI co-founder

  • Aza Raskin


    shared 1 · J=1.00

    Co-founder of the Center for Humane Technology; Earth Species Project

  • Daniel Kokotajlo

    shared 1 · J=1.00

    Former OpenAI governance team member; author of AI 2027 scenario

  • Emmett Shear


    shared 1 · J=1.00

    Former interim CEO of OpenAI; Twitch co-founder

  • Fynn Heide

    shared 1 · J=1.00

    AI safety engineer; PauseAI Europe
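
To make the J values above concrete, here is a minimal sketch of the overlap computation, assuming each person is reduced to a plain set of strategy-tag strings. The tag names and sets below are hypothetical illustrations, not the site's actual data or code.

    def jaccard(tags_a: set[str], tags_b: set[str]) -> float:
        """Jaccard overlap |A ∩ B| / |A ∪ B|; defined as 0.0 when both sets are empty."""
        if not tags_a and not tags_b:
            return 0.0
        return len(tags_a & tags_b) / len(tags_a | tags_b)

    # Hypothetical tag sets: one shared tag and no others gives 1/1 = 1.00,
    # which is how every neighbour above shows "shared 1 · J=1.00".
    yudkowsky = {"pause"}
    neighbour = {"pause"}
    print(f"shared {len(yudkowsky & neighbour)} · J={jaccard(yudkowsky, neighbour):.2f}")
    # shared 1 · J=1.00

Because the metric uses tag identity only, two people who both carry the pause tag score J=1.00 even if one endorses a pause and the other rejects it, which is why opposites can appear as neighbours.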

Record last updated 2026-04-24.