person

Anders Sandberg
Former FHI researcher; transhumanist philosopher
Long-time Oxford FHI researcher who published foundational work on whole-brain emulation and existential risk. Now independent; writes on the philosophy of grand futures.
Profile
expertise
External-domain expert
Recognised expert outside AI (philosophy, economics, biology, journalism) who weighs in on AI consequences from that vantage.
Long-time FHI researcher (Oxford). Computational neuroscientist; works on existential risk, transhumanism, whole-brain emulation. Adjacent to technical AI but not a frontier ML contributor.
recognition
Established
Reliable, recognised voice within their specific subfield. Cited and invited but not central to general AI discourse.
Recognised in EA/x-risk circles; little mainstream press.
vintage
Pre-deep-learning
Active before AlexNet. The existential-risk frame matures (FHI, OpenPhil, EA). Public AI commentary still rare; deep learning not yet dominant.
FHI from 2006. Whole-brain emulation roadmap 2008. His x-risk frame is set in the pre-AlexNet FHI period.
Hand-classified. See the board for the criteria and the full grid.
Strategy positions
Long reflection · endorses
Use post-AGI stability for extended moral deliberation before locking in.
Argues humanity should preserve optionality and invest in long-horizon deliberation capacity; AI governance should protect the ability to make big decisions well.
The quality of deliberation we are able to do before we make irreversible decisions is a civilisational resource.
Closest strategy neighbours
By Jaccard overlap
Other people whose strategy tags overlap with Anders Sandberg's. Overlap is on tag identity, not stance; opposites can show up if they reference the same tags. A short sketch of the overlap computation follows the neighbour list.


Nick Bostrom
shared 1 · J=0.33
Author of Superintelligence; founded Oxford's Future of Humanity Institute
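The J value is the Jaccard index over strategy-tag sets: the number of shared tags divided by the number of distinct tags across both people, so one shared tag out of three distinct tags gives J ≈ 0.33. A minimal sketch, assuming hypothetical tag sets for illustration; the actual board tags are not listed on this card.

def jaccard(tags_a: set, tags_b: set) -> float:
    """Jaccard index: |A ∩ B| / |A ∪ B| over two tag sets."""
    if not (tags_a or tags_b):
        return 0.0  # both sets empty: define overlap as 0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

# Hypothetical tag sets, chosen only to reproduce the figures shown above.
sandberg = {"long-reflection", "whole-brain-emulation", "x-risk"}
bostrom = {"long-reflection"}

shared = sandberg & bostrom
print(f"shared {len(shared)} · J={jaccard(sandberg, bostrom):.2f}")
# -> shared 1 · J=0.33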
Record last updated 2026-04-25.