AGI Strategies

strategy tag

AI skeptic.

AGI risk narratives overstated; real harms are mundane and current

stated endorsers

81

2 oppose

profiled endorsers

35

248 on the board total

endorser mean p(doom)

0%

n=1 · median 0%

quotes by endorsers

88

just for this tag

principal voices

Highest-recognition profiled endorsers, ties broken by quote count. Inclusion is not endorsement of the position; it's recognition of who the discourse turns to when the bet is debated.

  • Yann LeCun

    Household name

  • Gary Marcus

    Household name

  • Timnit Gebru

    Household name

  • Andrew Ng

    Household name

  • Ted Chiang

    Household name

where the endorsers sit on the board

35 of 248 profiled · 14% of the board

expertise ↓ · recognition → · Household name · Field-leading · Established · Emerging
Frontier builder
  • Yann LeCun
Deep technical
  • Gary Marcus
  • Timnit Gebru
  • Andrew Ng
  • Jeff Hawkins
  • Donald Knuth
  • Alan Kay
  • Douglas Engelbart
  • Judea Pearl
  • Emily M. Bender
  • Rodney Brooks
  • François Chollet
  • Oren Etzioni
  • Melanie Mitchell
  • Sara Hooker
  • Thomas Dietterich
  • Yejin Choi
  • Joseph Weizenbaum
  • Doug Lenat
Applied technical
  • Cassie Kozyrkov
Policy / meta
  • Meredith Whittaker
  • Kate Crawford
External-domain expert
  • Steven Pinker
  • Noam Chomsky
  • Jaron Lanier
  • Ted Chiang
  • Naomi Klein
  • John Searle
  • Robin Hanson
  • Erik Brynjolfsson
  • Evgeny Morozov
  • Bryan Caplan
  • Stuart Ritchie
Commentator
  • Tony Fadell

Each face is one profiled person. Cell shade intensifies with endorser density. Faces with × are profiled opposers, same tier, opposite position. Empty cells mark tier combinations the field has not produced for this bet.

also held by these endorsers

What other strategies the same people endorse. Behavioural signal of compatibility, not a declared rule. A high share means the two positions are routinely held together.

Compare this list to the declared relations matrix. Where they differ, the data reveals a pairing the framework doesn't name yet; the global co-endorsement view ranks all pairs.

Tier mix counts only endorsers (endorses, mixed, conditional, evolved-toward). 2 people oppose this position; they are not in the bars below but appear in the list further down.

expertise mix of endorsers · 35 profiled of 81

Builds frontier systems
1
Deep ML / safety technical
20
Applied or adjacent technical
1
Governance, policy, strategy
2
Expert in another field
10
Public-square commentator
1

recognition mix of endorsers

Mass-public recognition
16
Known across the AI/safety field
18
Recognised inside subfield
1
Newer or less central voice
0

vintage mix · n=35 of 35 profiled with era assigned

Pioneer
6
Symbolic era
10
Pre-deep-learning
4
Deep-learning rise
8
Scaling era
2
Post-ChatGPT
5

Vintage is the era when this person's AI worldview formed, pioneer through post-ChatGPT. A bet held mostly by post-ChatGPT entrants is in a different epistemic state from one held by pre-deep-learning veterans.

people on the record

83
Abhijit Banerjee

MIT economist; 2019 Nobel laureate

mixed

Argues AI does not solve underlying development problems and can replicate them. Skeptical of AI-shortcut framings.

AI cannot substitute for institutions. The development problems that institutions address remain.
articleJ-PAL· Abdul Latif Jameel Poverty Action Lab· 2024· loose paraphrase
Ada Lovelace

First programmer; analytical engine theorist (1815–1852)

endorses

Anticipated the 'Lovelace objection' Turing later named and rejected: that machines can only do what we explicitly program them to do.

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”
§ paperNotes on the Analytical Engine· Scientific Memoirs· 1843· direct quote
Aidan Gomez

CEO of Cohere; 'Attention Is All You Need' co-author

mixed

Publicly critical of AI extinction-risk discourse; focuses on enterprise deployment and measured capability claims.

The extinction narrative has done real damage to the field by distracting from present harms and deployment reality.
articleAidan Gomez interviews· Cohere· 2023· loose paraphrase
Alan Kay

Object-oriented programming and personal computing pioneer

mixed

Argues today's AI is symptomatic of how the original computing-as-augmentation vision was lost; LLMs are statistical mimicry, not understanding.

The best way to predict the future is to invent it. We have not yet invented an AI worth predicting.
articleAlan Kay, Viewpoints Research Institute· Wikipedia· 2024· loose paraphrase
Alex Hanna

Director of Research at DAIR; Mystery AI Hype Theater 3000

endorses

Argues the AI hype cycle obscures labour and rights violations; rejects 'AGI' framings.

Hype is not a neutral description. It is a mode of governance.
blogDAIR blog· DAIR· 2023· loose paraphrase
Andrew Ng

Coursera co-founder; former Baidu Chief Scientist

endorses

Publicly rejects extinction-risk framings and warns that safety-first regulation risks cementing Big Tech oligopolies.

When I think about existential risks to humans of AI, I don't know how AI could cause us to go extinct. I don't see it.
videoAndrew Ng on AI existential risk· YouTube· 2023-06-09· faithful paraphrase

Andriy Burkov

ML engineer; 'The Hundred-Page Machine Learning Book' author

mixed

Argues current LLM capabilities are over-marketed; deployment reality is messier than benchmarks suggest.

Don't confuse a benchmark score with a deployed product. The gap is bigger than you think.
articleAndriy Burkov on LinkedIn· LinkedIn· 2024· loose paraphrase
Andy Clark

Sussex philosopher; extended mind theorist

mixed

Frames AI as cognitive extension rather than independent cognition; pushes back on both 'AI is conscious' and 'AI is just statistics' framings.

The mind extends into the world, into tools, into others, and now into AI. The boundary of cognition is not the skull.

Context: Core claim of his 1998 paper with David Chalmers, applied to AI in subsequent work.

§ paperThe Extended Mind· Analysis· 1998· faithful paraphrase
Arvind Narayanan

Princeton professor; AI Snake Oil co-author

endorses

Argues much AI marketing is snake oil; calls for rigorous evaluation of specific deployed systems, not capability hype.

Most AI systems are far less capable than they are marketed to be. The conversation should be about specific deployed systems, not general 'AI'.
bookAI Snake Oil· Princeton University Press· 2024-09-24· faithful paraphrase
Ben Recht

UC Berkeley professor; ML reproducibility critic

mixed

Argues much ML research has reproducibility issues; capability claims should be checked rigorously before policy is built on them.

If we cannot reproduce the result, we cannot build policy on it.
blogBen Recht, arg min blog· arg min· 2024· loose paraphrase

Beth Singler

Cambridge anthropologist; AI religion researcher

mixed

Documents how AI is increasingly framed in religious or spiritual terms; argues these framings shape policy in ways the policy community is not aware of.

Tech-savvy people are saying things about AI that, in any other context, would be classed as religious utterances.
articleBeth Singler, Cambridge· Cambridge· 2024· loose paraphrase
Bryan Caplan

GMU economist; AI bets partner

evolved-away

Originally a strong skeptic of LLMs passing his economics exams; lost the bet when GPT-4 scored an A on a 2023 exam, and has publicly updated toward taking LLM progress more seriously.

I lost my bet. GPT-4 got an A on my labor economics midterm. I am publicly updating.
blogI Lost My AI Bet· Bet On It· 2023-03-15· faithful paraphrase
Cal Newport

Georgetown CS; 'Deep Work' author

mixed

Argues current LLMs are useful but limited tools whose productivity gains have been oversold; warns the same workplace dynamics that produced burnout from email will recur with AI.

The reasonable response to AI in knowledge work is not to chase the latest hype cycle but to ask what kind of work makes sense in a world where these tools exist, and structure your day around that.
articleWhat Kind of Mind Does ChatGPT Have?· The New Yorker· 2024· faithful paraphrase
Cassie Kozyrkov

CEO of Data Scientific; former Google Chief Decision Scientist

mixed

Argues for skepticism of enterprise AI hype but supports responsible AI adoption with clear decision framing.

Most AI projects fail at the decision, not the model.
blogCassie Kozyrkov, Medium· Medium· 2023· faithful paraphrase
Charlie Warzel

The Atlantic staff writer; tech culture

mixed

Publicly skeptical of utopian and apocalyptic AI framings; focuses on present-day media ecosystem effects.

The AI boom feels less like a technological revolution and more like a cultural and political one, where the technology is the vehicle, not the driver.
articleCharlie Warzel, Galaxy Brain· The Atlantic· 2023· loose paraphrase
Daniel Faggella

Emerj founder; 'Worthy Successor' AGI philosopher

opposes

Rejects pause and alignment-first framings; argues AGI is inevitable and the question is about incentives of builders.

“Moralizing AGI governance and innovation, calling some 'bad' and others 'good', is disingenuous. All players are selfish.”
articleIntroducing The Trajectory· Emerj· 2024· direct quote
Donald Knuth

Computer science pioneer; The Art of Computer Programming

mixed

Foundational CS figure; prefers measured engagement with LLMs over both hype and panic. His 2023 ChatGPT-questions experiment was widely circulated.

The questions I sent to ChatGPT brought back results that ranged from outstanding to almost-correct to deeply wrong, all delivered with the same confidence.
articleDonald Knuth, ChatGPT questions experiment· Stanford CS· 2023-04· faithful paraphrase
Doug Lenat

Cycorp founder; symbolic AI pioneer (1950–2023)

endorses

Argued LLMs alone do not have the common-sense reasoning required for AGI; pure-LLM advocacy was overconfident.

LLMs do not understand. They do something else, which is impressive, but it is not understanding.
articleDoug Lenat on LLMs and Cyc· Cycorp· 2023· loose paraphrase
Douglas Engelbart

Pioneer of human-computer interaction (1925–2013)

mixed

Foundational reference for 'augmentation, not automation' framings of AI. Argued technology should make humans collectively smarter rather than replace them.

“By 'augmenting human intellect' we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.”
§ paperAugmenting Human Intellect: A Conceptual Framework· SRI International· 1962· direct quote
Ed Zitron

EZPR founder; 'Where's Your Ed At' newsletter

endorses

Argues frontier-lab valuations are detached from the actual revenue and capabilities of the products; treats most AGI/transformative-AI rhetoric as a financial-marketing strategy.

OpenAI is a money pit propped up by VC delusion. The product doesn't pay for the compute, the compute doesn't produce a product worth its cost, and the entire thing is held together by hype.
blogWhere's Your Ed At· Substack· 2024· faithful paraphrase
Emily M. Bender

Linguist; co-author of 'Stochastic Parrots'

endorses

Argues that LLMs do not understand language, that existential-risk framings are harmful marketing, and that real harms are current and tractable.

Large language models present dangers such as environmental and financial costs, inscrutability leading to unknown dangerous biases, and potential for deception. They cannot understand the concepts underlying what they learn.
§ paperOn the Dangers of Stochastic Parrots· FAccT 2021· 2021-03-01· faithful paraphrase
Eric Weinstein

Mathematician; ex-Thiel Capital MD

mixed

Argues AI hype has been used by incumbents to entrench existing power structures; warns that the technical achievements are real but the surrounding institutional response is dishonest.

What we have is a real technological achievement combined with a layer of institutional capture that should not be confused with the technology itself. The two have to be analyzed separately.
podcastThe Portal podcast· ericweinstein.org· 2024· faithful paraphrase
Erik Brynjolfsson

Stanford HAI; 'Turing Trap' essay

mixed

Frames the 'Turing Trap' as the economically urgent risk, not extinction but labour displacement and inequality.

We have fallen into the Turing Trap. Building AI to imitate humans concentrates power and displaces workers; building AI to augment humans does the opposite.
articleThe Turing Trap· Daedalus· 2022· faithful paraphrase
Evgeny Morozov

Belarusian scholar; 'solutionism' critic

endorses

Argues the mainstream AI narrative is a form of solutionism that benefits incumbents and obscures the political choices driving AI.

There is no such thing as 'AI'. There is only a set of political-economic choices about how data, labour, and capital are organised.
articleThe True Threat of Artificial Intelligence· The New York Times· 2023-06-30· faithful paraphrase
François Chollet

Creator of Keras; ARC benchmark author

mixed

Frames LLM-based AGI claims as overblown; argues the field needs tests like ARC-AGI that reward abstraction, not pattern matching.

LLMs are not the path to AGI. They are impressive pattern-matchers, but they do not generalise to novel problems.
articleARC Prize launch· ARC Prize· 2024-06· faithful paraphrase

Freddie deBoer

Cultural critic; AI skeptic

endorses

Argues AI capabilities are dramatically over-marketed and that the deployment realities are mundane.

Every AI demo is the best version of the product. Every AI deployment is the worst.
blogFreddie deBoer Substack· Substack· 2024· loose paraphrase
Gary Marcus

Cognitive scientist; LLM skeptic; regulation advocate

mixed

While advocating strong regulation, Marcus is publicly skeptical of LLM-only paths to AGI and of high p(doom) framings.

Current large language models are not intelligent; they are stochastic compression of text at best.
blogGary Marcus archive· Marcus on AI· 2023· loose paraphrase

Holly Jean Buck

Buffalo geographer; climate AI critic

mixed

Argues AI claims to solve climate are often technosolutionist; the policy work is harder than the AI hype suggests.

AI cannot solve climate. AI plus politics might.
articleHolly Jean Buck, Buffalo· University at Buffalo· 2024· loose paraphrase
Hubert Dreyfus

Berkeley phenomenologist; AI critic (1929–2017)

endorses

Argued AI must be embodied and embedded in skilful coping, not symbol manipulation. His critique anticipated key features of the embodied-cognition movement and recent skepticism of pure-LLM AGI.

We do not start out with explicit rules and then learn how to apply them. We learn by example, by skill, by being in the world.
bookWhat Computers Can't Do· MIT Press· 1972· faithful paraphrase
Janelle Shane

AI Weirdness; optics researcher and AI humorist

mixed

Argues AI failures, both funny and concerning, are pedagogically important; pushes back on hype while taking misuse risks seriously.

AIs are weird. They generalize from data in ways that humans don't, and the ways they fail tell us as much about them as the ways they succeed.
bookYou Look Like a Thing and I Love You· Voracious· 2019· faithful paraphrase
Jaron Lanier

Computer scientist; VR pioneer; AI skeptic

endorses

Argues the term 'AI' obscures that what we have are tools built from humans' labour and data; reframes safety as data dignity.

There is no AI. There is only a new form of social collaboration.
articleJaron Lanier, There Is No A.I.· The New Yorker· 2023-04-20· faithful paraphrase
Jeff Hawkins

Co-founder of Numenta; Thousand Brains theory author

endorses

Argues from a neuroscience-first viewpoint that current LLMs are not intelligent and that doom scenarios rest on anthropomorphism.

Intelligent machines will not have survival drives unless we give them those drives. The alignment-extinction framing projects evolution onto systems that didn't evolve.
bookA Thousand Brains: A New Theory of Intelligence· Basic Books· 2021-03-02· faithful paraphrase
John Searle

UC Berkeley philosopher; Chinese Room Argument

endorses

Argues syntax does not produce semantics. Foundational philosophical opposition to strong-AI claims, still cited against LLM-AGI framings.

Imagine a person who knows no Chinese sitting in a room with a rule book. Slips of paper with Chinese characters come in, the person uses the rule book to send appropriate slips back. The room passes the Turing test, but the person inside understands no Chinese.
§ paperMinds, Brains, and Programs· Behavioral and Brain Sciences· 1980· faithful paraphrase
Jonathan Haidt

NYU Stern professor; The Anxious Generation

endorses

Argues children should not have AI companions and that AI deepens the youth-mental-health crisis his Anxious Generation identifies.

“No children should be having a relationship with AI. If we give our kids AI companions that they can order around and will always flatter them, we are creating people who no one will want to employ or marry.”
article'No children should be having a relationship with AI,' says author of 'The Anxious Generation'· CNBC· 2025-09-26· direct quote

Jonathan Mugan

DeepGrammar founder; AI for children's media

mixed

Frames AI as lacking grounded understanding; argues practical deployment depends on scoping to domains where this limitation is managed.

AI systems do not understand the world the way we do. Deployments that assume they do will fail in specific, predictable ways.
articleDeepGrammar· DeepGrammar· 2023· loose paraphrase
Joseph Weizenbaum

ELIZA inventor; AI ethics pioneer (1923–2008)

endorses

Built one of the earliest chatbots and immediately warned against the 'powerful delusional thinking' AI could induce. Anticipated decades of subsequent debate.

“There are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them.”
bookComputer Power and Human Reason· W. H. Freeman· 1976· direct quote
Judea Pearl

UCLA professor; Bayesian networks and causality pioneer

mixed

Argues current deep-learning AI is stuck at the lowest rung of the 'Ladder of Causation', pure association, and cannot reach reasoning without explicit causal models.

Deep learning at present remains stuck at the bottom rung of the ladder of causation. It does observation, not intervention, and certainly not counterfactuals.
bookThe Book of Why· Basic Books· 2018· faithful paraphrase
Kate Crawford

Author of Atlas of AI; USC research professor

endorses

Argues AI is better understood as an extractive industry than as an autonomous agent; the interesting governance questions are about labour, data, and land.

“AI is made from vast amounts of natural resources, fuel, and human labor. It is a technology of extraction.”

Context: Opening framing of Atlas of AI.

bookAtlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence· Yale University Press· 2021-04-06· direct quote

Kyle Mahowald

UT Austin; LLMs as not-quite-thought experiments

mixed

Argues LLMs are excellent at the formal patterns of language but unevenly competent at the functional reasoning behind it; pushes back on conflating fluency with thinking.

We argue that LLMs are good at formal linguistic competence but inconsistent at functional linguistic competence: the latter requires more than next-token prediction.
§ paperDissociating language and thought in large language models· arXiv· 2023-01· faithful paraphrase
Kyunghyun Cho

NYU professor; Genentech; encoder-decoder pioneer

endorses

Argues current LLMs are useful tools but not paths to AGI; criticizes the framing that scale alone produces general intelligence and the secrecy practices of frontier labs.

I'm tired of the hype. We don't have AGI. We have very useful pattern matchers, and the social effect of pretending otherwise is corrosive.
articleAI doomers had a chokehold on this year's biggest AI conference· Business Insider· 2024· faithful paraphrase
Luc Julia

Renault Chief Scientist; Siri co-creator

endorses

Argues current AI is not intelligent; deployment hype outruns capability. Frames AI as a useful set of statistical tools rather than emerging mind.

“Artificial intelligence does not exist.”

Context: Title and core thesis of his French-language book L'Intelligence Artificielle n'existe pas.

bookL'Intelligence Artificielle n'existe pas· First Editions· 2019-01· direct quote
Marc Raibert

Boston Dynamics founder; AI Institute executive director

mixed

Skeptical of AGI timelines from a roboticist's perspective; argues physical-world generality is much further than language-only benchmarks suggest, and that the bottleneck is real-world data.

Robotics is hard. The gap between getting something to work in simulation and getting it to work in the real world has not narrowed nearly as much as the language-AI hype suggests.
articleMarc Raibert on building robots· AI Institute· 2023· faithful paraphrase
Margaret Boden

Sussex emerita; cognitive science of AI

mixed

Argues general intelligence requires forms of understanding (autonomy, embodiment, creativity) that current ML does not approach; warns against equating large model behaviour with the foundations of mind.

Behaviour can imitate understanding without instantiating it. Cognitive science exists precisely to keep that distinction in view.
bookAI: Its Nature and Future· Oxford University Press· 2016· faithful paraphrase
Marshall McLuhan

Media theorist; foundational AI-and-media reference

mixed

Foundational thinking on how communication technologies reshape what they carry. The 'AI as medium' framing draws heavily on McLuhan.

“We shape our tools and thereafter our tools shape us.”

Context: Often attributed to McLuhan via Understanding Media; the precise wording was John M. Culkin in 1967, but the framing is McLuhan's.

bookUnderstanding Media: The Extensions of Man· McGraw-Hill· 1964· direct quote

Matteo Wong

The Atlantic associate editor; AI critic

mixed

Frames AI for literary audiences; treats hype with skepticism while taking real capabilities seriously.

The most important thing AI does is reshape how we think about thinking. The economic effects come later.
articleMatteo Wong at The Atlantic· The Atlantic· 2024· loose paraphrase

Meghan O'Gieblyn

Essayist; 'God, Human, Animal, Machine'

mixed

Argues AGI discourse inherits and re-enacts religious frames, incarnation, eschatology, the soul, and that recognising those origins changes what we should make of the predictions on offer.

Most of the questions we ask about AI, what it knows, whether it has a soul, what we owe it, were first asked by theologians. We have not stopped being theological; we have only forgotten that we are.
bookGod, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning· Doubleday· 2021· faithful paraphrase
Melanie Mitchell

Santa Fe Institute professor; author of 'Artificial Intelligence: A Guide for Thinking Humans'

mixed

Argues intelligence requires abstraction, analogy, and embodied understanding that LLMs do not currently possess.

The real intelligence we want our machines to have (flexible, abstract, analogical reasoning) is far beyond current systems.
§ paperDebates on the Nature of Artificial General Intelligence· Science· 2023· faithful paraphrase
Meredith Whittaker

President of Signal; co-founder of the AI Now Institute

endorses

Frames the extinction narrative as a distraction from present corporate-power harms.

“My concerns are more about the people, institutions and incentives that are shaping AI than they are about the technology itself, or the idea that it could somehow become sentient or God-like.”
articleResearcher Meredith Whittaker says AI's biggest risk isn't 'consciousness', it's the corporations that control them· Fast Company· 2023· direct quote
Arguments about existential AI risk are implicitly arguing that we need to wait until the people who are most privileged now, who are not threatened currently, are in fact threatened before we consider a risk big enough to care about.
articleMeredith Whittaker interview: What A.I. risks we should really be worried about· Slate· 2023-05· faithful paraphrase

Michael I. Jordan

Berkeley ML pioneer; 'the AI we have is not the AI we imagined'

mixed

Argues the contemporary term 'AI' confuses many distinct technologies and that framing it as singular-Intelligence is misleading.

The AI we have is not the AI we imagined. And the rhetorical conflation of statistical pattern recognition with intelligence is harmful.
articleArtificial Intelligence, The Revolution Hasn't Happened Yet· Harvard Data Science Review· 2019· faithful paraphrase
Michael Wooldridge

Oxford computer science department head

mixed

Argues AI has had boom-and-bust cycles and that the current cycle is likely to over-promise on AGI.

“AI researchers have spent huge amounts of effort and money and repeatedly claimed to have made breakthroughs that bring the dream of intelligent machines within reach, only to have their claims exposed as hopelessly overoptimistic.”
bookThe Road to Conscious Machines (A Brief History of AI)· Pelican / Flatiron· 2020· direct quote

Mike Knoop

Co-founder ARC Prize; ex-Zapier

mixed

Argues frontier LLM benchmarks have been collapsing into 'memorization plus retrieval' and that ARC-AGI shows current systems are not on a smooth path to general intelligence.

Existing frontier models score under 50% on ARC-AGI puzzles that are easy for humans. The gap reveals what 'general intelligence' really demands beyond scale.
articleARC Prize 2024, Announcement· ARC Prize· 2024-06· faithful paraphrase
Moxie Marlinspike

Signal co-founder; cryptographer

mixed

Engineer-grade skepticism about AI trajectories; specifically worries about supply-chain risk from AI-assisted code.

If AI writes the code, who audits the AI?
blogMoxie Marlinspike, blog archive· moxie.org· 2023· loose paraphrase
Naomi Klein

Author of This Changes Everything; AI-and-climate critic

endorses

Argues AI is a tool of dispossession and despoliation, and that the 'hallucination' is not the model but the industry's promises.

In a reality of hyper-concentrated power and wealth, AI is much more likely to become a tool of further dispossession and despoliation.
articleAI machines aren't 'hallucinating'. But their makers are.· The Guardian· 2023-05-08· faithful paraphrase
A world of deep fakes, mimicry loops and worsening inequality is not an inevitability but a set of policy choices.
articleAI machines aren't 'hallucinating'. But their makers are.· Naomi Klein· 2023-05-08· faithful paraphrase

Nello Cristianini

Bath University; ML pioneer; 'Shortcut' author

mixed

Argues modern AI represents a 'shortcut' to behavior that mimics intelligence without recreating its mechanisms; understanding the difference is essential to anticipating risks and capabilities.

Modern AI is a shortcut. We did not solve intelligence; we found ways to produce useful behaviour without it. The risks of this shortcut differ from the risks of human-like minds.
bookThe Shortcut: Why Intelligent Machines Do Not Think Like Us· CRC Press· 2023· faithful paraphrase

Nicolas Perrin-Gilbert

Inria; embodied AI; co-founder of Genesys Robotics

mixed

Argues language-only training underestimates how much intelligence relies on physical embodiment; embodied robotics is a slower but more honest research path.

Disembodied LLMs can mimic many features of intelligence without acquiring the structural understanding that physical interaction with the world produces. We need both, but the latter is what we are skipping.
articleNicolas Perrin-Gilbert, Inria· Inria· 2024· faithful paraphrase
Noam Chomsky

Linguist; LLM skeptic

endorses

Publicly argues LLMs are sophisticated plagiarism engines rather than intelligences; dismisses near-term AGI.

ChatGPT is basically high-tech plagiarism and a way of avoiding learning.
articleNoam Chomsky: The False Promise of ChatGPT· The New York Times· 2023-03-08· faithful paraphrase
Oren Etzioni

Founding CEO of AI2; UW professor

mixed

Argues AI extinction risk is overstated while endorsing near-term regulation and deepfake-detection tools.

AI is a long way from being able to spontaneously form its own goals or acquire resources to pursue them.
blogOren Etzioni on AI risk· oren-etzioni.com· 2023· loose paraphrase
Paul Bloom

Yale and University of Toronto; cognitive science of AI moral status

mixed

Argues people are too quick to anthropomorphise AI; psychological research shows our moral intuitions about machines are systematically miscalibrated.

Our minds are designed to attribute agency and feeling to things that move with apparent purpose. AI systems exploit those inclinations far better than we appreciate.
articlePaul Bloom, Yale· Yale Psychology· 2023· faithful paraphrase
Pedro Domingos

UW emeritus; The Master Algorithm author

endorses

Publicly critical of existential-risk framings; argues the bigger risk is under-adoption and illiberal regulation.

AI's greatest risk is not having enough of it.
blogAI's Greatest Risk Is Not Having Enough of It· Medium· 2024· faithful paraphrase

Ramin Hasani

Liquid AI CEO; liquid neural networks pioneer

mixed

Skeptical that transformer-only scaling is the path to AGI; builds alternative architectures.

The default narrative that transformer scale-up leads to AGI is probably wrong. Architectural diversity matters.
articleLiquid AI· Liquid AI· 2024· loose paraphrase

Raphaël Millière

Macquarie University philosopher of cognitive science

mixed

Argues philosophical questions about LLM cognition cannot be settled by behavioural tests alone; defends careful operationalization of concepts like 'understanding' against both inflationary and deflationary readings.

It is tempting to declare LLMs either trivially intelligent or trivially mindless. Neither verdict survives careful philosophical analysis of what these systems actually do.
articleRaphaël Millière, homepage· raphaelmilliere.com· 2024· faithful paraphrase
Robin Hanson

GMU economist; Age of Em author

mixed

Argues against 'foom' scenarios; AI progress will be gradual and economically driven, favouring existing market and regulatory equilibria.

Foom scenarios are extremely unlikely. AI will progress by ordinary competitive dynamics.
blogRobin Hanson, Overcoming Bias· Overcoming Bias· 2023· loose paraphrase
Rodney Brooks

MIT professor emeritus; iRobot co-founder; AI skeptic

endorses

Argues AI extinction-risk debates are dominated by people who haven't built AI, and that LLMs cannot reason.

“LLMs do not reason, by any reasonable definition of reason.”
articleJust Calm Down About GPT-4 Already· IEEE Spectrum· 2023· direct quote
Ronen Eldan

Microsoft Research; 'TinyStories' author; mathematician

mixed

Argues much of LLM behaviour can be replicated with much smaller, narrower models when training data is carefully curated; rejects the idea that scale is necessary.

TinyStories shows that small models can produce coherent, grammatical, and creative text when trained on a constrained synthetic corpus. The dependency on scale is more about diversity of training distribution than fundamental capability.
§ paperTinyStories: How Small Can Language Models Be and Still Speak Coherent English?· arXiv / Microsoft Research· 2023-05· faithful paraphrase

Ross Douthat

NYT columnist; conservative AI commentator

mixed

Pushes religious and humanist framings of AI risk; concerned about AI's effect on meaning more than extinction.

The question is not whether AI is dangerous; it's whether we know what we want from it.
articleRoss Douthat at The New York Times· The New York Times· 2024· loose paraphrase

Sara Hooker

Former Cohere VP of Research; 'Hardware Lottery' author

mixed

Argues compute-threshold governance and scale-focused framings obscure the actual research drivers of capability.

Ideas in AI often succeed or fail based on whether they happen to fit existing hardware, rather than their inherent merit.
§ paperThe Hardware Lottery· arXiv· 2020-09· faithful paraphrase
Compute thresholds as a governance strategy have serious limitations and may miss the risks they were meant to catch.
§ paperOn the Limitations of Compute Thresholds as a Governance Strategy· arXiv· 2024-07-08· faithful paraphrase

Sean Carroll

Johns Hopkins / Santa Fe; physicist and Mindscape host

mixed

Argues current LLMs are not on a smooth path to general intelligence; engages seriously with x-risk arguments but views many specific scenarios as physically and economically implausible.

I'm willing to take seriously that AI is a really big deal. I'm not willing to grant that the specific paths to doom you imagine have anything like the probability you assign to them.
podcastMindscape Podcast: Solo AMA on AI risk· Mindscape· 2023· faithful paraphrase

Sherry Turkle

MIT social scientist; AI and loneliness researcher

endorses

Argues AI companions degrade human capacity for empathy and authentic connection, particularly in young people.

“AI is the greatest assault on empathy I have ever seen.”
articleUsing AI chatbots to ease loneliness· Harvard Gazette· 2024-03· direct quote

Steven Pinker

Harvard psychologist; AI-doom skeptic

endorses

Argues AI-extinction fears rest on implausible conflation of intelligence with domination and on sci-fi priors.

Intelligence is not the same as power. The doomsday scenarios conflate the two.
articlePinker on AI risk· Vox· 2023-05· faithful paraphrase

Stuart Ritchie

Psychologist and science journalist; AI-risk skeptic

mixed

Treats the existential risk literature sympathetically but pushes back on specific numerical claims.

I take AI risk seriously, but I'm not sure the quantitative arguments for a high p(doom) are as rigorous as they are presented to be.
blogStuart Ritchie on AI risk· Works in Progress· 2023· loose paraphrase

Subbarao Kambhampati

ASU professor; 'LLMs Can't Plan' advocate

endorses

Argues autoregressive LLMs cannot plan or reason in any formal sense; advocates LLM-Modulo frameworks where LLMs are combined with symbolic verifiers.

Auto-regressive LLMs cannot, by themselves, do planning or self-verification. They are approximate knowledge sources, not reasoners.
§ paperLLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks· arXiv· 2024-02-02· faithful paraphrase
When you obfuscate the names of actions and objects in planning problems, GPT-4's performance plummets.
videoSubbarao Kambhampati: Can LLMs Reason and Plan?· YouTube· 2024· faithful paraphrase

Sundar Sarukkai

Bangalore-based philosopher of science

mixed

Argues Western framings of AI cognition do not match Indian and other non-Western philosophical traditions; pushes for plurality in AI ethics.

AI ethics has been written in one philosophical tradition. Other traditions have things to say.
articleSundar Sarukkai, IISc· Indian Institute of Science· 2024· loose paraphrase

Ted Chiang

Science fiction writer; 2023 Time 100 AI honoree

endorses

Frames LLMs as lossy compression of human language and argues the interesting AI governance question is about corporate capture, not emergent agency.

“ChatGPT is a blurry JPEG of the web.”
articleChatGPT Is a Blurry JPEG of the Web· The New Yorker· 2023-02-09· direct quote
Applying A.I. to the real world is a form of economic outsourcing.
articleWill A.I. Become the New McKinsey?· The New Yorker· 2023-05-04· faithful paraphrase

Thomas Dietterich

Oregon State emeritus; AAAI past president

mixed

Argues mundane reliability failures, not superintelligence takeover, are the real AI risk.

“The biggest risk is that those algorithms may not always work. We need to be conscious of this risk and create systems that can still function safely even when the AI components commit errors.”
articleExpert: Artificial intelligence systems more apt to fail than to destroy· Oregon State University· 2015-03· direct quote

Timnit Gebru

Founder of DAIR; co-author of 'Stochastic Parrots'

endorses

Argues the AI extinction narrative diverts attention from immediate harms: labour exploitation, data extraction, and discriminatory deployment.

We urge the signatories of the FLI letter to be mindful of the hype surrounding the power of AI, and to focus on the actual harms that are being done.

Context: DAIR statement in response to the FLI Pause letter.

articleStatement on the 'AI pause' letter· DAIR· 2023-03-31· faithful paraphrase

Tony Fadell

iPod creator; Nest founder; AI hardware critic

endorses

Argues current LLMs are unreliable for high-stakes use; calls for specialised, transparent, government-supervised systems instead.

“Right now we're all adopting this thing and we don't know what problems it causes.”
articleTony Fadell takes a shot at Sam Altman at TechCrunch Disrupt· TechCrunch· 2024-10-29· direct quote

Trevor Hastie

Stanford statistics; ML pioneer

mixed

Argues classical statistical learning principles still constrain what deep learning can do reliably; warns that ignoring those constraints in deployed systems leads to predictable failures.

Most of the lessons of statistical learning still apply to neural networks: bias-variance trade-offs, regularization, distribution shift. Pretending these have been transcended is how we get unreliable systems in production.
bookThe Elements of Statistical Learning· Springer· 2017· faithful paraphrase

Tristan Greene

Tech journalist; AI deep dive coverage

mixed

Reports on AI from a science-skeptic angle; pushes back on capability hype with reproducibility questions.

Most AI breakthrough headlines wouldn't survive a rigorous reproduction.
articleTristan Greene archive· TNW· 2024· loose paraphrase

Yann LeCun

Chief AI Scientist at Meta; outspoken AI-doom skeptic

endorses

Holds that the AI-extinction narrative is unfounded; frames the debate as a values discussion about control by large labs, not a technical risk.

“You're going to have to pardon my French, but that's complete B.S.”

Context: Response when asked by the Wall Street Journal whether AI could become smart enough to pose a threat to humanity.

articleMeta's Yann LeCun says worries about AI's existential threat are 'complete B.S.'· TechCrunch· 2024-10-12· direct quote
Before we worry about controlling super-intelligent AI, we need to have the beginning of a hint of a design for a system smarter than a house cat.
articleMeta's AI Chief Yann LeCun on AGI, Open-Source, and AI Risk· TIME· 2023-10· faithful paraphrase

Yannic Kilcher

YouTuber; ML paper explainer; ex-DeepJudge

mixed

Argues much of LLM research is overfit to benchmarks and underexamined for fundamental novelty; explains both capability and safety papers with technical specificity for developer audiences.

When I read a new paper, the first question is always: is this real, or is the impressive number coming from something obvious about the evaluation? You'd be surprised how often it's the latter.
videoYannic Kilcher YouTube· YouTube· 2024· faithful paraphrase

3 more on the record. The page renders the first 80 alphabetically; the rest live in the full directory, filterable by this tag.