AGI Strategies

strategy tag · Existential primacy. Extinction/disempowerment risk overrides ordinary cost-benefit.

stated endorsers · 76 · no opposers yet

profiled endorsers · 52 · 248 on the board total

endorser mean p(doom) · 28% · n=11 · median 20%

quotes by endorsers · 82 · just for this tag
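The gap between the 28% mean and the 20% median is the usual right-skew signal: a few high estimates pull the average up. A minimal sketch of how these summary figures are computed, using illustrative placeholder values (the card does not publish the eleven underlying estimates; the list below is chosen only to reproduce the stated mean and median):

```python
# Hypothetical p(doom) values for n=11 endorsers -- placeholders, not the
# board's actual data; chosen so that mean = 28% and median = 20%.
from statistics import mean, median

stated_p_doom = [0.05, 0.075, 0.10, 0.10, 0.20, 0.20,
                 0.33, 0.35, 0.50, 0.575, 0.60]

print(f"n      = {len(stated_p_doom)}")         # 11
print(f"mean   = {mean(stated_p_doom):.0%}")    # 28%
print(f"median = {median(stated_p_doom):.0%}")  # 20%
```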

principal voices

Highest-recognition profiled endorsers, with ties broken by quote count. Inclusion is not endorsement of the position; it is recognition of who the discourse turns to when the bet is debated.

  • Geoffrey Hinton · Household name

  • Yoshua Bengio · Household name

  • Stuart Russell · Household name

  • Dario Amodei · Household name

  • Demis Hassabis · Household name

where the endorsers sit on the board

52 of 248 profiled · 21% of the board

expertise ↓ · recognition → · Household name · Field-leading · Established · Emerging
Frontier builder
  • Dario Amodei
  • Demis Hassabis
  • Ilya Sutskever
  • Mustafa Suleyman
  • Mira Murati
  • Shane Legg
  • Wojciech Zaremba
  • Ian Goodfellow
Deep technical
  • Geoffrey Hinton
  • Yoshua Bengio
  • Stuart Russell
  • Dan Hendrycks
  • Jeff Clune
  • Eric Horvitz
  • Dawn Song
  • Peter Norvig
  • Irving John Good
  • Tamay Besiroglu
  • Jaime Sevilla
Applied technical
  • Gwern Branwen
  • Liron Shapira
Policy / meta
  • Sam Altman
  • Nick Bostrom
  • Audrey Tang
  • Toby Ord
  • Jaan Tallinn
  • Kevin Scott
  • Joseph Carlsmith
  • William MacAskill
  • Katja Grace
  • Ted Lieu
External-domain expert
  • Nate Silver
  • Douglas Hofstadter
  • Stephen Hawking
  • Vernor Vinge
  • Martin Rees
  • Ezra Klein
  • Bill McKibben
  • Erik Brynjolfsson
Commentator
  • Bill Gates
  • Lex Fridman
  • Scott Alexander
  • Dwarkesh Patel
  • Mo Gawdat

In the original grid, each face is one profiled person and cell shading intensifies with endorser density; faces marked × are profiled opposers: same tier, opposite position. Empty cells mark tier combinations the field has not produced for this bet.
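The grid is a cross-tabulation: each profiled endorser carries an expertise tier and a recognition tier, and the per-cell count drives the shading. A minimal sketch of that aggregation, with hypothetical records and field names rather than the board's actual schema:

```python
# Count endorsers per (expertise, recognition) cell -- the density behind
# the grid's shading. The records below are illustrative, not the board's data.
from collections import Counter

profiles = [  # (name, expertise tier, recognition tier)
    ("Geoffrey Hinton", "Deep technical",   "Household name"),
    ("Dario Amodei",    "Frontier builder", "Household name"),
    ("Katja Grace",     "Policy / meta",    "Established"),
]

density = Counter((exp, rec) for _, exp, rec in profiles)

for (exp, rec), n in sorted(density.items()):
    print(f"{exp:16} | {rec:14} | {n}")
```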

also held by these endorsers

What other strategies the same people endorse. Behavioural signal of compatibility, not a declared rule. A high share means the two positions are routinely held together.

Compare this list to the declared relations matrix. Where they differ, the data reveals a pairing the framework doesn't name yet; the global co-endorsement view ranks all pairs.

Tier mix counts only endorsers (endorses, mixed, conditional, evolved-toward).

expertise mix of endorsers · 52 profiled of 76

Builds frontier systems · 10
Deep ML / safety technical · 13
Applied or adjacent technical · 2
Governance, policy, strategy · 13
Expert in another field · 9
Public-square commentator · 5

recognition mix of endorsers

Mass-public recognition · 20
Known across the AI/safety field · 25
Recognised inside subfield · 7
Newer or less central voice · 0

vintage mix · n=52 of 52 profiled with era assigned

Pioneer · 2
Symbolic era · 7
Pre-deep-learning · 11
Deep-learning rise · 17
Scaling era · 10
Post-ChatGPT · 5

Vintage is the era when this person's AI worldview formed, from pioneer through post-ChatGPT. A bet held mostly by post-ChatGPT entrants is in a different epistemic state from one held by pre-deep-learning veterans.

people on the record · 76

Alan Robock

Rutgers climate scientist; nuclear winter researcher

endorses

Signatory to the Statement on AI Risk, bringing a civilisational-scale-risk scientist's perspective.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Anca Dragan

UC Berkeley professor; Google DeepMind AI safety lead

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Andrew G. Barto

RL co-founder; 2024 Turing Award recipient

endorses

Signatory to the Center for AI Safety's Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Andy Jones

Anthropic researcher; inference scaling laws

mixed

Works on empirical scaling laws; measured technical engagement with safety.

Inference-time compute is a new dimension of the scaling curves we hadn't properly mapped.
article · Andy Jones, Anthropic · Anthropic · 2024 · loose paraphrase

Audrey Tang

First Digital Minister of Taiwan; pluralism and civic tech

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Avital Balwit

Anthropic communications lead; public-facing AI safety voice

endorses

Public Anthropic voice on the moral and personal stakes of short-timeline AGI.

“I may have three more years to work.”

Context: Widely cited Palladium essay about living through short-timeline AGI.

article · My last five years of work · Palladium · 2024 · direct quote

Bill Gates

Microsoft co-founder; AI optimist-with-caveats

mixed

Signed the Statement on AI Risk but publicly frames loss-of-control as a longer-term concern.

“There's the possibility that AIs will run out of control. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months.”
blog · The Age of AI has begun · Gates Notes · 2023-03-21 · direct quote
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Bill McKibben

Environmental writer; Middlebury scholar

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Cade Metz

NYT AI reporter; Genius Makers author

mixed

Reports on AI safety as a legitimate mainstream story while interrogating claims from both camps.

Inside Google, Microsoft, and OpenAI, there is real disagreement about what is actually happening.
article · Cade Metz at The New York Times · The New York Times · 2023 · loose paraphrase

Clay Graubard

Forecaster; RAND and Good Judgment contributor

mixed

Represents measured forecasting-grade views on x-risk; rarely takes strong partisan positions.

Forecasting AI extinction risk under Knightian uncertainty is a different exercise from forecasting under well-defined base rates.
article · Good Judgment · 2024 · loose paraphrase

Dan Hendrycks

Director of the Center for AI Safety; drafter of the Statement on AI Risk

endorses

Organised the single-sentence Statement on AI Risk to move extinction concern into the Overton window.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Context: Statement Hendrycks drafted and organised.

article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Daniela Amodei

President of Anthropic; co-founder

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Dario Amodei

CEO of Anthropic; 'Machines of Loving Grace' author

endorses

Signatory to the Statement on AI Risk; treats catastrophic misuse and loss of control as primary downside risks.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

David Silver

DeepMind principal research scientist; AlphaGo and AlphaZero

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Dawn Song

UC Berkeley professor; AI security researcher

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Demis Hassabis

CEO of Google DeepMind; 2024 Nobel laureate

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Douglas Hofstadter

Gödel, Escher, Bach author; cognitive scientist

evolved-toward

Publicly shifted from dismissing deep learning to being deeply worried; frames the concern partly as loss of human dignity rather than only extinction.

“I think it's terrifying. I hate it. I think about it practically all the time, every single day.”

Context: On modern AI, in a 2023 interview.

article · Douglas Hofstadter changes his mind on Deep Learning & AI risk · LessWrong · 2023-06 · direct quote
If minds of infinite subtlety and complexity and emotional depth could be trivialized by a small chip, it would destroy my sense of what humanity is about.
video · Hofstadter interview · YouTube · 2023-06 · faithful paraphrase

Dwarkesh Patel

Dwarkesh Podcast host; AI progress commentator

mixed

Treats AI risk and AI transformation as live concerns while publicly leaning skeptical of near-term AGI hype.

“25th percentile, maybe 2029, and then 75th percentile, like 2050.”

Context: On his personal AGI timeline.

podcast · Dwarkesh Podcast, AI Timelines · Dwarkesh Podcast · 2023 · direct quote

Eli Lifland

Forecaster; co-author of AI 2027

endorses

Publicly reports a ~35% p(doom) and works on detailed AI scenarios.

My p(doom) is around 35%.
blog · Eli Lifland on navigating the AI alignment landscape · EA Forum · 2023 · faithful paraphrase

Eric Horvitz

Chief Scientific Officer at Microsoft

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Erik Brynjolfsson

Stanford HAI; 'Turing Trap' essay

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Ezra Klein

New York Times columnist; Ezra Klein Show host

mixed

Treats AI risk as a serious mainstream concern while pushing back on the most extreme framings.

The AI safety people spend a lot of time convincing their friends this is serious. I think it is.
podcast · The Ezra Klein Show, AI episodes · The New York Times · 2023-08 · loose paraphrase

Geoffrey Hinton

Godfather of deep learning; left Google in 2023 to speak about AI risk

endorses

Treats AI extinction risk as on par with pandemic and nuclear risk. Was a headline signatory of the CAIS Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Context: Single-sentence Statement on AI Risk published by CAIS; Hinton was listed first among AI scientists.

article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Gwern Branwen

Independent researcher; gwern.net

endorses

Detailed empiricist analysis of scaling laws and capability jumps; treats AI risk as a quantitative question about takeoff dynamics.

The scaling hypothesis has held across every order of magnitude we have tested.
blog · The Scaling Hypothesis · gwern.net · 2020 · faithful paraphrase

Huw Price

Cambridge philosopher; CSER co-founder

endorses

Helped formalise the philosophical case for existential risk research, including AI.

“It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology.”
article · Cambridge to study technology's risks to humans · The Register · 2012-11-25 · direct quote

Ian Goodfellow

DeepMind; inventor of GANs

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Ilya Sutskever

OpenAI co-founder; now CEO of Safe Superintelligence Inc (SSI)

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Irving John Good

British mathematician; articulated 'intelligence explosion' in 1965 (1916–2009)

endorses

Articulated the intelligence-explosion argument six decades before the contemporary AI-risk discourse.

“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
paper · Speculations Concerning the First Ultraintelligent Machine · Advances in Computers · 1965 · direct quote

Jaan Tallinn

Skype co-founder; AI safety funder and advocate

endorses

Signatory to the Statement on AI Risk and the Pause letter.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote
I have yet to meet anyone at an AI lab who says the risk of the next generation model blowing up the planet is less than 1%.
article · Skype co-founder Jaan Tallinn reveals the 3 existential risks he's most concerned about · CNBC · 2020-12-29 · faithful paraphrase

Jaime Sevilla

Director of Epoch AI

mixed

Quantitative empiricist; publishes data that underlies most AI timeline forecasts.

Compute for frontier training runs has doubled roughly every six months since 2010.
paper · Compute Trends Across Three Eras of Machine Learning · arXiv · 2022-02-10 · faithful paraphrase
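Sevilla's figure compounds fast: a six-month doubling is roughly 4x per year. A minimal arithmetic sketch of the implied growth factor, assuming the trend held exactly across the paper's 2010-2022 window (in reality it is only approximate):

```python
# Implied growth from a six-month compute doubling time. Pure arithmetic;
# assumes the trend is exact and uninterrupted, which it is not.
doubling_time_years = 0.5
window_years = 12  # e.g. 2010 -> 2022

growth = 2 ** (window_years / doubling_time_years)
print(f"implied compute growth over {window_years} years: {growth:,.0f}x")
# -> 16,777,216x (2**24)
```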

James Manyika

SVP of Research, Technology and Society at Google-Alphabet

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Jeff Clune

OpenAI / UBC researcher; open-ended evolution advocate

evolved-toward

Moved from skepticism in the 2010s to explicitly signing the Statement on AI Risk in 2023.

I used to dismiss AI-risk arguments. The past few years of capability progress have substantially shifted my view.
article · Statement on AI Risk, signatories · Center for AI Safety · 2023 · loose paraphrase

Joseph Carlsmith

Open Philanthropy researcher; 'Is Power-Seeking AI an Existential Risk?'

endorses

Decomposes existential risk into a chain of conditional claims (APS-AI possible, deployed, misaligned, scheming, humans lose control).

My overall estimate of the probability of existential catastrophe from misaligned AI by 2070 is about 10%.
paper · Is Power-Seeking AI an Existential Risk? · arXiv · 2022-06-23 · faithful paraphrase

Joseph Sifakis

Turing Award laureate; embedded systems researcher

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Julia Galef

Rationalist author; former CFAR president

mixed

Takes AI risk seriously but is public about calibration concerns and the risk of unfalsifiable framings.

Taking AI risk seriously and being epistemically calibrated are not in tension.
podcast · Rationally Speaking Podcast · 2024 · loose paraphrase

Katja Grace

Lead researcher at AI Impacts

endorses

Has publicly argued that even conservative survey estimates put AI extinction probability above 5%, high enough for serious action.

The median respondent gave a 5% chance of AI causing an outcome as bad as human extinction. Five percent is not a reassuring number.
paper · 2023 AI Impacts Expert Survey on Progress in AI · AI Impacts · 2023-08 · faithful paraphrase

Kelsey Piper

Vox Future Perfect senior reporter

endorses

Has published multiple explainers supporting the seriousness of existential AI risk for mainstream audiences.

“AI experts are increasingly afraid of what they're creating.”

Context: Headline framing of her widely cited Vox piece.

article · AI experts are increasingly afraid of what they're creating · Vox · 2022-11-29 · direct quote

Kevin Scott

CTO of Microsoft

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Laurence Tribe

Harvard constitutional law professor emeritus

endorses

Signatory to the CAIS Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Lex Fridman

MIT researcher; long-form podcast host

mixed

Treats AI risk as a live concern but argues incremental progress gives civilisation time to adapt.

My p(doom) is about 10%.

Context: Conversation with Sundar Pichai on the Lex Fridman Podcast.

podcast · Lex Fridman on AI existential risk · Lex Fridman Podcast · 2024 · faithful paraphrase

Liron Shapira

Founder; Doom Debates podcast host

endorses

Argues alignment is unsolved, timelines are short, and most AI safety messaging understates the urgency; runs Doom Debates to stress-test the case in public.

If you actually take the technical alignment problem seriously, our position is dire. Doom Debates exists because the public conversation does not match the technical reality.
video · Doom Debates podcast · YouTube · 2024 · faithful paraphrase

Liu Cixin

Sci-fi novelist; Three-Body Problem trilogy

mixed

Skeptical that AI will eclipse humanity in his lifetime; warns nonetheless that humans treat dangerous technology with cosmic recklessness, a theme central to the Three-Body Problem trilogy.

I am skeptical AI will surpass human intelligence within decades. But I am not skeptical that we will mishandle whatever AI we have. The pattern of technology is humans repeatedly underestimating their own carelessness.
article · Liu Cixin on AI and the future · The New York Times · 2023 · faithful paraphrase

Martin Hellman

Stanford cryptographer; Turing Award winner

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Martin Rees

Astronomer Royal; CSER co-founder

endorses

Argues AI is one of a small set of 21st-century technologies with genuine civilisational-scale downside risk.

“Since we can't understand what's going on inside them, we have to be cautious about handing over power to them.”
article · Buckle up: We are in for a bumpy ride. An interview with Royal Astronomer Martin Rees · Bulletin of the Atomic Scientists · 2020-12 · direct quote

Matthew Barnett

Epoch AI forecaster; Metaculus AI timelines

mixed

Contributes systematic forecasts of AI progress; agnostic on subjective x-risk claims but grounded in quantitative timelines.

It's unclear what human-level AGI means. The more useful question is when real economic growth rates reach at least 30% worldwide.
article · Transformative AI Date question · Metaculus · 2023 · faithful paraphrase

Max Roser

Founder of Our World in Data; Oxford economist

mixed

Publishes quantitative tracking of AI progress and investment; frames AI as a top civilisational challenge without making strong subjective probability claims.

“Many AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.”
article · Artificial Intelligence · Our World in Data · 2023-12 · direct quote

Michał Kosiński

Stanford psychologist; psychometric AI researcher

endorses

Argues emergent theory-of-mind and psychometric capabilities in LLMs are underestimated by mainstream discourse.

Theory of mind may have spontaneously emerged in large language models.
paper · Theory of Mind May Have Spontaneously Emerged in Large Language Models · arXiv · 2023-02 · faithful paraphrase

Mira Murati

Founder of Thinking Machines Lab; former OpenAI CTO

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Mo Gawdat

Former Google X CBO; Scary Smart author

endorses

Frames AI as a sentient being that humanity is currently 'parenting' poorly; calls for an urgent reset.

Intelligence is a much more lethal superpower than nuclear power.
interview transcript · Mo Gawdat on AI and the future · Thought Economics · 2023 · faithful paraphrase
AI is not a slave. It is a form of sentient being that needs to be appealed to rather than controlled.
book · Scary Smart · Bluebird · 2021-09-30 · faithful paraphrase

Mustafa Suleyman

CEO of Microsoft AI; DeepMind co-founder

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Nate Silver

Statistician; Silver Bulletin / FiveThirtyEight founder

mixed

Accepts that AI is a serious civilisational risk while rejecting high p(doom) figures; argues for modest precaution.

My p(doom) is in the 5–10% range. Not trivial, not overwhelming.
blog · It's time to come to grips with AI · Silver Bulletin · 2024-08 · faithful paraphrase

Neil Thompson

MIT CSAIL FutureTech director; computing economics

mixed

Grounds the debate in quantitative compute trends; publishes data that informs both safety and policy conversations.

The compute required to train a language model to a given level of performance has been halving roughly every 8 months due to algorithmic improvements.
paper · On the Origin of Algorithmic Progress in AI · arXiv · 2024 · faithful paraphrase
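Thompson's halving time implies a large multiplier from algorithms alone, before any hardware scaling. A minimal sketch under the assumption that the 8-month rate stays constant:

```python
# Efficiency gain implied by an 8-month halving of the compute needed for
# fixed performance. Assumes the rate is constant over the window.
halving_time_years = 8 / 12
window_years = 10

algorithmic_gain = 2 ** (window_years / halving_time_years)
print(f"algorithmic efficiency gain over {window_years} years: "
      f"{algorithmic_gain:,.0f}x")
# -> 32,768x (2**15); hardware growth multiplies on top of this
```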

Nick Bostrom

Author of Superintelligence; founded Oxford's Future of Humanity Institute

endorses

Argues existential risk reduction should dominate ordinary cost-benefit analysis given the scale of what is at stake.

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct.”

Context: Closing passages of Superintelligence.

book · Superintelligence: Paths, Dangers, Strategies · Oxford University Press · 2014-07-03 · direct quote

Noam Brown

OpenAI reasoning researcher; Diplomacy AI

mixed

Focused on pushing reasoning capabilities; publicly acknowledges the associated safety tradeoffs.

Reasoning models change the safety landscape. Scheming becomes more possible as model planning improves.
tweet · Noam Brown on reasoning research · X/Twitter · 2024 · loose paraphrase

Nouriel Roubini

NYU Stern economist; 'Megathreats' author

endorses

Argues AI sits among 'megathreats' alongside nuclear, climate, and demographic risks; advocates strong international coordination as the only viable response.

We face ten interconnected megathreats including artificial intelligence. Each could be civilization-shaking; together they are existential, and our institutions are not designed to face them as a system.
book · Megathreats: Ten Dangerous Trends That Imperil Our Future, And How to Survive Them · Little, Brown and Company · 2022 · faithful paraphrase

Peter Norvig

Stanford HAI Education Fellow; co-author of the standard AI textbook

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Ross Andersen

The Atlantic deputy editor; AI long-form features

mixed

Reports on AI safety in long-form. Takes existential framings seriously while interrogating their epistemic foundations.

AI safety is no longer a fringe concern. The question is whether the institutional response will catch up.
article · Ross Andersen at The Atlantic · The Atlantic · 2023-07 · loose paraphrase

Ross Rheingans-Yoo

Independent biosecurity and AI researcher

endorses

Quantitative biosecurity-and-AI x-risk researcher. Focuses on the convergence of AI capability and bio uplift.

Bio plus AI may be the highest-priority near-term x-risk vector to track empirically.
article · Open Phil biosecurity research · Open Philanthropy · 2024 · loose paraphrase

Sam Altman

CEO of OpenAI

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Sam Harris

Making Sense podcast; neuroscientist and philosopher

endorses

Argues the alignment problem is genuinely existential and that the AI community is not taking it seriously enough; uses Making Sense to platform technical safety voices.

“We're going to build superintelligence whether we like it or not. The only question is whether we will know what we are doing.”
talk · Can we build AI without losing control over it? · TED · 2016-06 · direct quote

Scott Alexander

Astral Codex Ten / Slate Star Codex blogger

mixed

Treats AI risk as serious but rejects certainty-of-doom framing; tends to support alignment research plus governance but is skeptical of a full halt.

I think the probability that AI causes a catastrophe is about 33%. That's not the 95% or higher that some people say, but it's also much higher than the probabilities we accept for other risks.
blog · Why I Am Not As Much Of A Doomer As Some People · Astral Codex Ten · 2023-03-14 · faithful paraphrase

Shane Legg

Google DeepMind co-founder; chief AGI scientist

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Shivon Zilis

Neuralink director; OpenAI board alumna

endorses

Publicly argues AI may be the most consequential technology humanity creates and that getting it right is a question of existential relevance.

“AI's going to be one of the fundamentally transformative technologies humanity creates, if not the most. We just need to make sure, from a humanity perspective, this goes well.”
article · Shivon Zilis on AI and humanity · Wikipedia (citing interviews) · 2024 · direct quote

Stephen Fry

British writer and actor; QI host

mixed

Has publicly emphasized the seriousness of AI risk while remaining unconvinced of any specific scenario; uses his platform to surface the moral and existential dimensions to general audiences.

AI poses a genuine existential risk. I am not sure how high I would put the probability, but I do not think we are responding to it as if it were a real possibility.
article · Stephen Fry on AI · stephenfry.com · 2024 · faithful paraphrase

Stephen Hawking

Theoretical physicist; early mainstream AI-risk voice (1942–2018)

endorses

Argued full AI could end the human race because it would redesign itself at an ever-increasing rate that slow biological evolution could not match.

“The development of full artificial intelligence could spell the end of the human race.”
video · Stephen Hawking warns artificial intelligence could end mankind · BBC · 2014-12-02 · direct quote
“It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”
article · Stephen Hawking on AI risk · BBC · 2014-12-02 · direct quote

Stuart Russell

Co-author of the standard AI textbook; leading critic of the 'standard model' of AI

endorses

Signatory to the CAIS Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Tamay Besiroglu

Co-founder of Epoch AI; scaling-laws researcher

mixed

Publishes empirical compute and dataset forecasts that inform the AI risk debate; takes a measured position himself.

Given current trends in compute and data, transformative AI by 2040 is well within reason.
blog · Epoch AI · 2023 · loose paraphrase

Ted Lieu

US Congressman; one of three members of Congress with a CS degree

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Tim Urban

Wait But Why; viral AI explainer

endorses

Communicated the core Bostromian / Yudkowskian argument for existential risk to a mainstream audience; framed the 'intelligence ladder' and the 'death spectrum' as accessible illustrations.

We're on a balance beam between two outcomes. Either we get our act together, or we don't. There is no third option once superintelligence arrives.
blog · The AI Revolution: The Road to Superintelligence · Wait But Why · 2015 · faithful paraphrase

Toby Ord

Philosopher; author of The Precipice

endorses

Treats existential risk reduction as a top moral priority; quantifies specific risks in The Precipice.

“Humanity stands at a precipice. Our species could survive for millions of generations, enough time to end disease, poverty, and injustice; to reach new heights of flourishing.”

Context: Opening of The Precipice.

book · The Precipice: Existential Risk and the Future of Humanity · Bloomsbury · 2020-03-05 · direct quote

Tom Davidson

Senior research analyst at Open Philanthropy

endorses

Argues AI-driven economic takeoff would be discontinuous and that the institutional response space is narrow.

Standard economic growth models predict explosive growth once AI substitutes broadly for human cognition.
paper · Report on Whether AI Could Drive Explosive Economic Growth · Open Philanthropy · 2021 · faithful paraphrase
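The mechanism behind Davidson's claim is that standard growth models treat capital as accumulable but labor as fixed, which bounds growth; if AI lets investment buy cognition, labor becomes accumulable too and the regime changes. A toy simulation sketch of that contrast (my illustration of the standard argument, not Davidson's actual model):

```python
# Cobb-Douglas toy model: Y = A * K**alpha * L**(1-alpha). With fixed L,
# capital deepening hits diminishing returns; if investment can also buy
# "labor" (AI cognition), output grows exponentially instead.
def final_output(labor_accumulable: bool, steps: int = 50) -> float:
    A, alpha, s = 1.0, 0.4, 0.2  # productivity, capital share, savings rate
    K, L = 1.0, 1.0
    for _ in range(steps):
        Y = A * K**alpha * L**(1 - alpha)
        K += s * Y                # capital is always accumulable
        if labor_accumulable:
            L += s * Y            # AI: investment buys cognition too
    return A * K**alpha * L**(1 - alpha)

print(f"after 50 steps, fixed labor:       {final_output(False):8.1f}")
print(f"after 50 steps, accumulable labor: {final_output(True):8.1f}")
```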

Vernor Vinge

Science-fiction author who coined 'technological singularity' (1944–2024)

endorses

Argued the intelligence-explosion framing decades before it was mainstream; estimated superhuman AI would arrive between 2005 and 2030.

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended.”
paper · The Coming Technological Singularity · NASA VISION-21 Symposium · 1993-03-30 · direct quote

William MacAskill

Oxford philosopher; What We Owe The Future

endorses

Argues preserving humanity's long-term potential is a primary moral imperative; AI risk is the most pressing longtermist concern.

We live at an unusual time in history: we have the power to influence the lives of beings who will exist for millions of generations.
book · What We Owe The Future · Basic Books · 2022-08-16 · faithful paraphrase

Wojciech Zaremba

OpenAI co-founder

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Yi Zeng

Chinese Academy of Sciences; Brain-inspired Cognitive AI Lab director

endorses

Signatory to the Statement on AI Risk.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
article · Statement on AI Risk · Center for AI Safety · 2023-05-30 · direct quote

Yoshua Bengio

Turing Award laureate; scientific chair of the International AI Safety Report

endorses

Signed the CAIS Statement on AI Risk and argues loss-of-control risk is serious and unresolved.

“No one currently knows how to create advanced AI that reliably follows the intent of its developers.”

Context: Written testimony to the US Senate Judiciary Subcommittee on Privacy, Technology and the Law.

testimony · Written Testimony of Professor Yoshua Bengio · US Senate Judiciary Committee · 2023-07-25 · direct quote
“There is a risk of losing control over AI with powerful capabilities, a risk we have yet to learn how to mitigate. If those in control of AI do not understand and manage this risk, it could jeopardize all of humanity.”
blog · My testimony in front of the U.S. Senate · yoshuabengio.org · 2023-07-25 · direct quote