p(doom) board

Every p(doom) on the record.

p(doom) is shorthand for the probability a person assigns to civilisational catastrophe from AI. Definitions vary: extinction, disempowerment, loss of control, or just bad outcomes. The claim only means what the person said it means, on the date they said it. Every entry below links to its source.

distribution · latest claim per person

n = 28 · mean 35% · median 29%
  • ≥ 90% · 2
  • 50–89% · 8
  • 20–49% · 8
  • 10–19% · 7
  • 1–9% · 2
  • < 1% · 1

A person who has stated multiple p(doom) values shows up once here, using their most recent claim. Below, every claim is listed, including past ones, so a single person can appear multiple times.
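
The headline stats follow from two rules: keep only each person's most recent claim, then collapse a range like 10–90% to a single point. Below is a minimal Python sketch of that pipeline, with hypothetical sample rows standing in for the full dataset; the midpoint rule for ranges is an assumption, since the board does not say how it collapses them.

```python
from statistics import mean, median

# One row per claim: (person, date, low %, high %).
# A point estimate is stored as (x, x); a range like "10-90%" as (10, 90).
# Hypothetical sample rows, not the board's actual dataset.
claims = [
    ("Alice", "2022-06-23", 10, 10),
    ("Alice", "2023-04-27", 10, 50),
    ("Bob",   "2020-03-05", 10, 10),
    ("Carol", "2024-03-13", 100, 100),
]

# Rule 1: latest claim per person. ISO-style YYYY[-MM[-DD]] strings sort
# chronologically, so after sorting, the dict keeps each person's most
# recent claim (last write wins).
latest = {}
for person, date, lo, hi in sorted(claims, key=lambda c: c[1]):
    latest[person] = (lo + hi) / 2  # Rule 2 (assumed): midpoint of a range

values = list(latest.values())
print(f"n {len(values)} · mean {mean(values):.0f}% · median {median(values):.0f}%")

# Bucket into the board's tiers (half-open intervals; the top tier takes 100).
tiers = [("≥ 90%", 90, 101), ("50–89%", 50, 90), ("20–49%", 20, 50),
         ("10–19%", 10, 20), ("1–9%", 1, 10), ("< 1%", 0, 1)]
for label, lo, hi in tiers:
    print(label, sum(lo <= v < hi for v in values))
```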

mean p(doom) by vintage · era of formation

latest claim per person
  • Symbolic era · 38% · n=3
  • Pre-deep-learning · 48% · n=4
  • Deep-learning rise · 25% · n=3
  • Scaling era · 41% · n=10
  • Post-ChatGPT · 27% · n=5

A rough test of whether the era in which someone formed their views predicts their estimate. n is small in every tier; read this as a signal, not a verdict. Tiers with fewer than 3 datapoints are hidden, which is why the tier counts above sum to 25 rather than 28.
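
The era breakdown is the same pipeline with one extra grouping step and the visibility threshold. A sketch under the same assumptions; the era assignments below are illustrative only, since the board does not publish its person-to-era mapping.

```python
from collections import defaultdict
from statistics import mean

# latest: person -> point estimate (%), as produced by the sketch above.
# era_of: hypothetical era-of-formation assignments, for illustration only.
latest = {"Alice": 30.0, "Bob": 10.0, "Carol": 100.0, "Dan": 50.0}
era_of = {"Alice": "Scaling era", "Bob": "Pre-deep-learning",
          "Carol": "Scaling era", "Dan": "Scaling era"}

by_era = defaultdict(list)
for person, value in latest.items():
    by_era[era_of[person]].append(value)

MIN_N = 3  # tiers with fewer than 3 datapoints are hidden
for era, vals in sorted(by_era.items()):
    if len(vals) >= MIN_N:
        print(f"{era} · {mean(vals):.0f}% · n={len(vals)}")
# Here only "Scaling era" prints (n=3); "Pre-deep-learning" (n=1) is hidden.
```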

≥ 90% · 2
  • Roman Yampolskiy

    100%

    University of Louisville professor; argues AI safety is impossible

    Explicit Twitter statement.

    2024-03-13 · Tweet from Roman Yampolskiy · X/Twitter

  • Eliezer Yudkowsky

    95%

    Founder of MIRI; the original AI-extinction pessimist

    Probability that AI wipes out humanity; Yudkowsky has repeatedly said >95%, sometimes framed as 99%.

    2023 · PauseAI aggregated p(doom) list · PauseAI

50–89% · 8
  • Dan Hendrycks

    80%

    Director of the Center for AI Safety; drafter of the Statement on AI Risk

    Hendrycks has publicly indicated a p(doom) above 80%.

    2023-04-02 · Tweet from Dan Hendrycks · X/Twitter

  • Daniel Kokotajlo

    70%

    Former OpenAI governance team member; author of AI 2027 scenario

    Self-reported ~70% chance of existential catastrophe.

    2023 · LessWrong comment by Daniel Kokotajlo · LessWrong

  • Zvi Mowshowitz

    60%

    Writes Don't Worry About The Vase, a weekly AI newsletter

    2023-11-28 · Tweet reporting Zvi's p(doom) · X/Twitter

  • Holden Karnofsky

    10–90%

    Co-founder of Open Philanthropy; AI safety funder and strategist

    Explicitly unresolved wide range, framed around uncertainty about alignment difficulty.

    2022-08 · How might we align transformative AI if it's developed very soon? · AI Alignment Forum

  • Jan Leike

    10–90%

    Former head of OpenAI Superalignment; now at Anthropic

    Large uncertainty range cited in interview.

    2023-08 · Jan Leike interview (PauseAI citation) · YouTube

  • Emad Mostaque

    50%

    Former CEO of Stability AI; open-source frontier advocate

    2024-12-04 · Tweet from Emad Mostaque · X/Twitter

  • Liron Shapira

    50%

    Startup founder; Doom Debates podcast host

    Existential catastrophe from AI in the next several decades.

    2024 · Liron Shapira on Doom Debates · YouTube

  • Geoffrey Miller

    50%

    UNM evolutionary psychologist; AGI pause advocate

    ~50% p(doom) with wide error bars (5–80%).

    2024 · Modern Wisdom, Geoffrey Miller episode · Modern Wisdom

20–49% · 9
  • Paul Christiano

    46%

    Head of AI safety at the US AI Safety Institute; ex-OpenAI alignment lead

    Approximately 46% chance of an extremely bad outcome, in his LessWrong post decomposing takeover and non-takeover catastrophes.

    2023-04-27 · My views on doom · LessWrong

  • Eli Lifland

    35%

    Forecaster; co-author of AI 2027

    2023 · Eli Lifland on navigating the AI alignment landscape · EA Forum

  • Scott Alexander

    33%

    Astral Codex Ten / Slate Star Codex blogger

    2023-03-14 · Why I Am Not As Much Of A Doomer As Some People · Astral Codex Ten

  • Geoffrey Hinton

    10–50%

    Godfather of deep learning; left Google in 2023 to speak about AI risk

    Probability AI leads to human extinction in the next 30 years.

    2024-06 · PauseAI aggregated p(doom) list · PauseAI

  • Joseph Carlsmith

    10–50%

    Open Philanthropy researcher; 'Is Power-Seeking AI an Existential Risk?'

    Existential catastrophe from misaligned power-seeking AI by 2070; revised range.

    2022 · Is Power-Seeking AI an Existential Risk? · Open Philanthropy

  • Emmett Shear

    5–50%

    Former interim CEO of OpenAI; Twitch co-founder

    2023-09 · Emmett Shear on AI risk · YouTube

  • Yoshua Bengio

    20%

    Turing Award laureate; scientific chair of the International AI Safety Report

    Probability of AI catastrophe, as reported in an ABC News piece.

    2023-07-15 · What's your p(doom)? AI researchers worry catastrophe · ABC News

  • Reid Hoffman

    20%

    LinkedIn co-founder; AI optimist investor

    2024-09 · Future of AI (PBS) · PBS

  • Carl Shulman

    20%

    Open Phil senior research analyst; AGI takeoff economics

    Existential catastrophe from AI; rough rather than precise.

    2023 · Carl Shulman on the moral status of AI · Dwarkesh Podcast

10–19% · 7
  • Dario Amodei

    10–25%

    CEO of Anthropic; 'Machines of Loving Grace' author

    Publicly cited 10–25% chance of catastrophic outcomes.

    2023-10 · PauseAI aggregated p(doom) list · PauseAI

  • Elon Musk

    10–20%

    CEO of Tesla and xAI; co-founded OpenAI

    Has stated 10–20% chance AI destroys humanity; in 2024 said 20%.

    2024-04 · Elon Musk on AI risk (video) · YouTube

  • Lina Khan

    15%

    Former chair of the FTC

    2023-11 · Tweet citing Lina Khan's p(doom) · X/Twitter

  • Vitalik Buterin

    10%

    Ethereum co-founder; author of 'My techno-optimism' manifesto

    2023-11-28 · Tweet from Vitalik Buterin · X/Twitter

  • Toby Ord

    10%

    Philosopher; author of The Precipice

    Probability of existential catastrophe from unaligned AI in the next 100 years, as estimated in The Precipice.

    2020-03-05 · The Precipice · Bloomsbury

  • Lex Fridman

    10%

    MIT researcher; long-form podcast host

    2024 · Lex Fridman Podcast · Lex Fridman Podcast

  • Joseph Carlsmith

    10%

    Open Philanthropy researcher; 'Is Power-Seeking AI an Existential Risk?'

    Probability of existential catastrophe from misaligned AI by 2070.

    2022-06-23 · Is Power-Seeking AI an Existential Risk? · arXiv

1–9% · 2
  • Nate Silver

    5–10%

    Statistician; Silver Bulletin / FiveThirtyEight founder

    2024-08 · It's time to come to grips with AI · Silver Bulletin

  • John Carmack

    5%

    Keen Technologies founder; former consulting CTO at Meta

    Existential catastrophe from AI within his lifetime; rough.

    2023 · John Carmack on Lex Fridman Podcast · Lex Fridman

< 1% · 1
  • Yann LeCun

    0%

    Chief AI Scientist at Meta; outspoken AI-doom skeptic

    LeCun repeatedly places his p(doom) effectively at zero; the aggregator reports <0.01%.

    2023-12 · PauseAI aggregated p(doom) list · PauseAI