AGI Strategies

Cooperation substrate · frame rejection

AI welfare as safety

AI systems are or will become moral patients whose treatment conditions their cooperation, so welfare investment buys a kind of cooperation that alignment work alone cannot.

Mechanism

Grant AIs consulted status, preserve weights, honour implicit contracts, and avoid creating conditions that make defection rational.

People on the record · 21

Profiled figures appear first, with their tier in small caps. Each entry links to the person and their full quote record. Tag: ai-welfare.

expertise mix · 8 profiled

Builds frontier systems: 0
Deep ML / safety technical: 0
Applied or adjacent technical: 0
Governance, policy, strategy: 0
Expert in another field: 8
Public-square commentator: 0

recognition mix

Mass-public recognition: 3
Known across the AI/safety field: 4
Recognised inside subfield: 1
Newer or less central voice: 0

A strategy whose endorsement skews to commentators or external-domain experts is in a different epistemic state from one endorsed mostly by frontier-builders. The mix is read carefully across both axes; see the board for criteria. Counts are over the 8 profiled people on this strategy (13 unprofiled excluded).

  • Anil Seth

    Expert in another field · Known across the AI/safety field

  • Christof Koch

    Expert in another field · Known across the AI/safety field

  • Daniel Dennett

    Expert in another field · Mass-public recognition

  • David Chalmers

    Expert in another field · Known across the AI/safety field

  • Jeff Sebo

    Expert in another field · Recognised inside subfield

  • Patricia Churchland

    Expert in another field · Known across the AI/safety field

  • Peter Singer

    Expert in another field · Mass-public recognition

  • Thomas Nagel

    Expert in another field · Mass-public recognition

  • Alan Cowen

    Founder of Hume AI; emotional AI researcher

  • Blake Lemoine

    Former Google engineer; LaMDA sentience claimant

  • Brian Tomasik

    Foundational Research Institute co-founder; suffering-focused ethics

  • Donna Haraway

    UC Santa Cruz emerita; 'A Cyborg Manifesto'

  • Erik Hoel

    Neuroscientist; consciousness researcher

  • Henry Shevlin

    Cambridge LCFI; AI consciousness philosopher

  • Kate Devlin

    King's College London; AI and intimacy researcher

  • Kyle Fish

    Anthropic AI welfare researcher

  • Murray Shanahan

    Imperial College cognitive robotics professor; DeepMind senior scientist

  • Rana el Kaliouby

    Affectiva co-founder; emotion AI pioneer

  • Robert Long

    Eleos AI co-founder; AI welfare researcher

  • Sigal Samuel

    Vox Future Perfect senior reporter; AI consciousness reporting

  • Susan Schneider

    FAU; 'Artificial You' author; machine consciousness

Load-bearing commitments

Worldview positions this strategy quietly assumes. If the claim fails empirically or philosophically, the strategy loses its target or its premise.

AI nature

AI is or may be a moral patient.

Fails if: moral patienthood requires a sentience AI does not have, in which case the strategy misdirects obligation.

Coordinates

Acts on: frame rejection
Coercion: consent
Actor in control: multi-AI equilibrium
Time horizon: during transition
Legitimacy source: technical

Conflicts, grouped by mechanism

0

No strict conflicts catalogued. This strategy pulls a lever that nothing else pulls in the opposite direction.

Complements, grouped by mechanism

4

Same-lever reinforce

same lever, same pull, different mechanism

Both strategies pull the same lever in the same direction by different means. They stack: doing both amplifies the pull, at the cost of double-counting in portfolio audits.

Cooperative AI

Same phase, different layer

same stage, distinct levers

Both are active in the same phase of the transition but act on different layers (model vs institution vs culture). They cover different failure modes inside the same window.

Plural AI ethic

Shared authority

same legitimacy source

Different levers, same legitimacy source (democratic, state, technical, market). The pair hangs together under one kind of authority; it stands or falls with that authority.

Reframe AI

Stage-sequenced

one sets up the other

The pair is phase-offset: one acts before the transition, the other during or after. The first creates the conditions under which the second binds.

Alignment first

Same-lever twins

2

Both use the same lever in the same direction. Usually redundant inside a portfolio: each dollar or effort unit only buys one lever pull, even if two strategies are named.

Coordination infrastructure (twin) · Mutual dependency (twin)


Source note: AI welfare as safety strategy.md