AGI Strategies

strategy tag

AI welfare.

Model welfare/moral status is a primary consideration

stated endorsers

21

no opposers yet

profiled endorsers

8

248 on the board total

endorser p(doom)

no estimates on record

quotes by endorsers

21

just for this tag

principal voices

Highest-recognition profiled endorsers, ties broken by quote count. Inclusion is not endorsement of the position; it is recognition of who the discourse turns to when the bet is debated.

  • Peter Singer

    Household name

  • Daniel Dennett

    Household name

  • Thomas Nagel

    Household name

  • David Chalmers

    Field-leading

  • Christof Koch

    Field-leading

where the endorsers sit on the board

8 of 248 profiled · 3% of the board

expertise ↓ · recognition → · Household name · Field-leading · Established · Emerging

Frontier builder · —
Deep technical · —
Applied technical · —
Policy / meta · —
External-domain expert
  • Peter Singer
  • Daniel Dennett
  • Thomas Nagel
  • David Chalmers
  • Christof Koch
  • Patricia Churchland
  • Anil Seth
  • Jeff Sebo
Commentator · —

Each name is one profiled person. All eight profiled endorsers sit in the External-domain expert row; empty rows mark tier combinations the field has not produced for this bet.

Tier mix counts only endorsers (endorses, mixed, conditional, evolved-toward).

expertise mix of endorsers · 8 profiled of 21

Builds frontier systems
0
Deep ML / safety technical
0
Applied or adjacent technical
0
Governance, policy, strategy
0
Expert in another field
8
Public-square commentator
0

recognition mix of endorsers

Mass-public recognition
3
Known across the AI/safety field
4
Recognised inside subfield
1
Newer or less central voice
0

vintage mix · n=8 of 8 profiled with era assigned

Pioneer
2
Symbolic era
4
Pre-deep-learning
0
Deep-learning rise
1
Scaling era
1
Post-ChatGPT
0

Vintage is the era when this person's AI worldview formed, pioneer through post-ChatGPT. A bet held mostly by post-ChatGPT entrants is in a different epistemic state from one held by pre-deep-learning veterans.

People on the record.

21

Alan Cowen

Founder of Hume AI; emotional AI researcher

mixed

Builds emotionally expressive AI; argues empathic AI deployment requires its own ethics and welfare considerations.

Voice AI that understands emotion changes the deployment risk profile. We need ethics frameworks specific to that.
articleHume AI· Hume AI· 2024· loose paraphrase

Anil Seth

University of Sussex neuroscientist; consciousness researcher

mixed

Argues consciousness is tied to embodied predictive processing; current AI systems lack the structural conditions for it.

Conscious perception is a controlled hallucination. AI systems are not yet doing that kind of thing.
bookBeing You: A New Science of Consciousness· Faber & Faber· 2021-08-31· faithful paraphrase

Blake Lemoine

Former Google engineer; LaMDA sentience claimant

endorses

Publicly argued that LaMDA was sentient and deserved moral consideration. His dismissal and later interviews cemented model-welfare concerns in mainstream coverage.

“If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics.”
articleThe Google engineer who thinks the company's AI has come to life· The Washington Post· 2022-06-11· direct quote

Brian Tomasik

Foundational Research Institute co-founder; suffering-focused ethics

endorses

Argues digital and biological sentience should both be morally weighted; AI systems may suffer in ways we are systematically blind to, and this should shape how they are built.

Whether artificial systems can suffer is one of the most important moral questions we will face this century, and most people are not even asking it yet.
articleDo Artificial Reinforcement-Learning Agents Matter Morally?· Reducing Suffering· 2014· faithful paraphrase

Christof Koch

Neuroscientist; Allen Institute for Brain Science

mixed

Takes AI consciousness as a serious possibility under integrated information theory; wrote the foreword to Jeff Sebo's AI welfare paper.

Under integrated information theory, many artificial systems might have some degree of consciousness.
bookThe Feeling of Life Itself· MIT Press· 2020· faithful paraphrase

Daniel Dennett

Philosopher; 'Darwin's Dangerous Idea' (1942–2024)

mixed

Argued that mind is what brains do, and that appropriately structured artificial systems would therefore have minds. His position influenced both Hofstadter and Bach.

“There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination.”

Context: From Darwin's Dangerous Idea, used widely in AI consciousness debates.

bookDarwin's Dangerous Idea· Simon & Schuster· 1995· direct quote

David Chalmers

NYU philosopher of mind; 'the hard problem' originator

endorses

Argues AI consciousness is a live philosophical question and moral precaution is warranted.

It's possible that we may already be on a path where we are creating morally significant AI systems.
articleDavid Chalmers on AI consciousness· consc.net· 2022· faithful paraphrase

Donna Haraway

UC Santa Cruz emerita; 'A Cyborg Manifesto'

mixed

Foundational thinker on hybrid human-machine identities. Frames the AI question as continuous with feminist and post-colonial thinking about identity.

“By the late twentieth century, our time, a mythic time, we are all chimeras, theorized and fabricated hybrids of machine and organism, in short, cyborgs.”
paperA Cyborg Manifesto· Socialist Review· 1985· direct quote

Erik Hoel

Neuroscientist; consciousness researcher

mixed

Engages with AI consciousness as a serious scientific question, particularly via integrated information theory.

We are heading into a future where AI consciousness is going to be a real question, even if no current LLM has it.
blogErik Hoel, Substack· The Intrinsic Perspective· 2024· loose paraphrase

Henry Shevlin

Cambridge LCFI; AI consciousness philosopher

endorses

Argues AI moral status is a live question that frontier labs and governments need to take seriously now, not later.

We may be on the verge of creating moral patients without the ethical frameworks to know how to treat them.
articleHenry Shevlin, LCFI· LCFI Cambridge· 2024· loose paraphrase

Jeff Sebo

NYU philosopher; digital minds and AI welfare

endorses

Argues moral consideration for AI systems is plausible enough to be a live policy concern and has helped shape early model-welfare frameworks.

There is at least a non-trivial chance that some near-future AI systems will be moral patients. We should plan for that.
paperTaking AI welfare seriously· arXiv· 2024· faithful paraphrase

Kate Devlin

King's College London; AI and intimacy researcher

mixed

Researches AI's intersection with human intimacy and sexuality; argues this domain has been ignored by mainstream AI ethics frameworks.

AI intimacy is going to be a much bigger part of how AI is deployed than mainstream AI ethics has been willing to consider.
bookTurned On: Science, Sex and Robots· Bloomsbury Sigma· 2018· loose paraphrase

Kyle Fish

Anthropic AI welfare researcher

endorses

First in-house AI welfare researcher at a frontier lab; embeds welfare considerations in model training and deployment.

If there is even a meaningful probability that current models are moral patients, that should affect how we train and deploy them.
articleAnthropic AI welfare research· Anthropic· 2024· loose paraphrase

Murray Shanahan

Imperial College cognitive robotics professor; DeepMind senior scientist

mixed

Publishes on what it means for LLMs to 'talk as if', treating LLM personas as dissociable role-plays; raises the consciousness question without committing to positive answers.

It is a confusion to attribute subjective experience to an LLM, and a confusion to deny the possibility in principle.
paperRole play with large language models· Nature· 2023· faithful paraphrase

Patricia Churchland

UC San Diego neurophilosopher

mixed

Foundational reference for naturalistic theories of mind. Frames AI consciousness as a possible empirical question.

Mind is what brain does. Whatever a sufficiently complex neural network does is also mind, by the same standard.
articlePatricia Churchland, UC San Diego· UC San Diego Philosophy· 2024· loose paraphrase

Peter Singer

Princeton bioethicist; utilitarian philosopher

endorses

Supports precautionary consideration of AI sentience as a moral question.

If an AI is sentient, its suffering should matter to us.
articlePeter Singer on AI ethics· petersinger.info· 2024· loose paraphrase

Rana el Kaliouby

Affectiva co-founder; emotion AI pioneer

mixed

Built the field of affective computing; argues emotion AI requires its own ethics, distinct from generic AI ethics.

Emotion AI gives systems access to data that humans previously kept private. The ethics of that demand specific frameworks.
bookGirl Decoded· Currency· 2020· loose paraphrase

Robert Long

Eleos AI co-founder; AI welfare researcher

endorses

Builds research infrastructure for AI welfare and moral status work. Argues frontier labs should adopt model-welfare frameworks now.

We can take AI welfare seriously without claiming current AI is conscious. The point is to build the frameworks before we need them.
articleEleos AI Research· Eleos AI· 2024· loose paraphrase

Sigal Samuel

Vox Future Perfect senior reporter; AI consciousness reporting

mixed

Reports seriously on model welfare and AI consciousness as live ethical questions, while keeping a journalistic stance.

“The last word you want to hear in a conversation about AI's capabilities is 'scheming.'”
articleSigal Samuel at Vox· Vox· 2024· direct quote

Susan Schneider

FAU; 'Artificial You' author; machine consciousness

endorses

Argues machine consciousness is a serious empirical question and that AI ethics has to take it seriously before deploying systems whose moral status is uncertain.

We may inadvertently create artificial consciousness without being able to detect it. The risk is not science fiction; it is a basic empirical and ethical problem we are unprepared for.
bookArtificial You: AI and the Future of Your Mind· Princeton University Press· 2019· faithful paraphrase

Thomas Nagel

NYU philosopher; 'What is it like to be a bat'

mixed

Foundational reference for the 'subjective experience' question central to AI consciousness debates.

“An organism has conscious mental states if and only if there is something that it is like to be that organism, something it is like for the organism.”
paperWhat Is It Like to Be a Bat?· The Philosophical Review· 1974· direct quote