
Brian Tomasik
Foundational Research Institute co-founder; suffering-focused ethics
Co-founder of the Foundational Research Institute (now Center on Long-Term Risk); long-standing essayist on suffering-focused ethics and digital sentience. His writing has shaped EA-adjacent positions on AI welfare.
Current role: Founder and Researcher, Center on Long-Term Risk
Strategy positions
AI welfare: endorses
Model welfare/moral status is a primary consideration. Argues that digital and biological sentience should both be morally weighted; AI systems may suffer in ways we are systematically blind to, and this should shape how they are built.
Whether artificial systems can suffer is one of the most important moral questions we will face this century, and most people are not even asking it yet.
Closest strategy neighbours
By Jaccard overlap. Other people whose strategy tags overlap with Brian Tomasik's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags.
Record last updated 2026-04-25.