AGI Strategies

Brian Tomasik

Foundational Research Institute co-founder; suffering-focused ethics

Co-founder of the Foundational Research Institute (now Center on Long-Term Risk); long-standing essayist on suffering-focused ethics and digital sentience. His writing has shaped EA-adjacent positions on AI welfare.

Current role: Founder and Researcher, Center on Long-Term Risk

Strategy positions

AI welfare · endorses

Model welfare/moral status is a primary consideration

Argues digital and biological sentience should both be morally weighted; AI systems may suffer in ways we are systematically blind to, and this should shape how they are built.

Whether artificial systems can suffer is one of the most important moral questions we will face this century, and most people are not even asking it yet.
article · Do Artificial Reinforcement-Learning Agents Matter Morally? · Reducing Suffering · 2014 · faithful paraphrase

Closest strategy neighbours

by Jaccard overlap

Other people whose strategy tags overlap with Brian Tomasik's. Overlap is computed on tag identity, not stance, so people with opposing positions can appear here if they reference the same tags.

  • Alan Cowen

    shared 1 · J=1.00

    Founder of Hume AI; emotional AI researcher

  • Anil Seth

    shared 1 · J=1.00

    University of Sussex neuroscientist; consciousness researcher

  • Blake Lemoine

    shared 1 · J=1.00

    Former Google engineer; LaMDA sentience claimant

  • Christof Koch

    shared 1 · J=1.00

    Neuroscientist; Allen Institute for Brain Science

  • Daniel Dennett

    shared 1 · J=1.00

    Philosopher; 'Darwin's Dangerous Idea' (1942–2024)

  • David Chalmers

    shared 1 · J=1.00

    NYU philosopher of mind; 'the hard problem' originator
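The J values above are Jaccard indices over strategy-tag sets: the size of the intersection divided by the size of the union, so J=1.00 means the two sets are identical. A minimal sketch, with hypothetical tag names (the actual tag vocabulary is not shown in this record):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard index |A ∩ B| / |A ∪ B|; 1.0 when the sets match exactly."""
    if not a and not b:
        return 0.0  # convention: two empty tag sets count as no overlap
    return len(a & b) / len(a | b)

# Hypothetical tag sets: each neighbour shares Tomasik's single
# strategy tag, which yields "shared 1 · J=1.00" as listed above.
tomasik = {"ai-welfare"}
neighbour = {"ai-welfare"}
print(jaccard(tomasik, neighbour))  # → 1.0
```

With a single shared tag on both sides the score saturates at 1.00, which is why every neighbour here ties; larger tag sets would produce a more discriminating ranking.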

Record last updated 2026-04-25.